Feb 20 06:37:23 localhost kernel: Linux version 5.14.0-284.11.1.el9_2.x86_64 (mockbuild@x86-vm-09.build.eng.bos.redhat.com) (gcc (GCC) 11.3.1 20221121 (Red Hat 11.3.1-4), GNU ld version 2.35.2-37.el9) #1 SMP PREEMPT_DYNAMIC Wed Apr 12 10:45:03 EDT 2023
Feb 20 06:37:23 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Feb 20 06:37:23 localhost kernel: Command line: BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64 root=UUID=a3dd82de-ffc6-4652-88b9-80e003b8f20a console=tty0 console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M
Feb 20 06:37:23 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 20 06:37:23 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 20 06:37:23 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 20 06:37:23 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Feb 20 06:37:23 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Feb 20 06:37:23 localhost kernel: signal: max sigframe size: 1776
Feb 20 06:37:23 localhost kernel: BIOS-provided physical RAM map:
Feb 20 06:37:23 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 20 06:37:23 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 20 06:37:23 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 20 06:37:23 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Feb 20 06:37:23 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Feb 20 06:37:23 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 20 06:37:23 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 20 06:37:23 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000043fffffff] usable
Feb 20 06:37:23 localhost kernel: NX (Execute Disable) protection: active
Feb 20 06:37:23 localhost kernel: SMBIOS 2.8 present.
Feb 20 06:37:23 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Feb 20 06:37:23 localhost kernel: Hypervisor detected: KVM
Feb 20 06:37:23 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 20 06:37:23 localhost kernel: kvm-clock: using sched offset of 2627942488 cycles
Feb 20 06:37:23 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 20 06:37:23 localhost kernel: tsc: Detected 2799.998 MHz processor
Feb 20 06:37:23 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 20 06:37:23 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 20 06:37:23 localhost kernel: last_pfn = 0x440000 max_arch_pfn = 0x400000000
Feb 20 06:37:23 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Feb 20 06:37:23 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Feb 20 06:37:23 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Feb 20 06:37:23 localhost kernel: Using GB pages for direct mapping
Feb 20 06:37:23 localhost kernel: RAMDISK: [mem 0x2eef4000-0x33771fff]
Feb 20 06:37:23 localhost kernel: ACPI: Early table checksum verification disabled
Feb 20 06:37:23 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Feb 20 06:37:23 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 20 06:37:23 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 20 06:37:23 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 20 06:37:23 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Feb 20 06:37:23 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 20 06:37:23 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 20 06:37:23 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Feb 20 06:37:23 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Feb 20 06:37:23 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Feb 20 06:37:23 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Feb 20 06:37:23 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Feb 20 06:37:23 localhost kernel: No NUMA configuration found
Feb 20 06:37:23 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000043fffffff]
Feb 20 06:37:23 localhost kernel: NODE_DATA(0) allocated [mem 0x43ffd5000-0x43fffffff]
Feb 20 06:37:23 localhost kernel: Reserving 256MB of memory at 2800MB for crashkernel (System RAM: 16383MB)
Feb 20 06:37:23 localhost kernel: Zone ranges:
Feb 20 06:37:23 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Feb 20 06:37:23 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Feb 20 06:37:23 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000043fffffff]
Feb 20 06:37:23 localhost kernel:   Device   empty
Feb 20 06:37:23 localhost kernel: Movable zone start for each node
Feb 20 06:37:23 localhost kernel: Early memory node ranges
Feb 20 06:37:23 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Feb 20 06:37:23 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Feb 20 06:37:23 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000043fffffff]
Feb 20 06:37:23 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000043fffffff]
Feb 20 06:37:23 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 20 06:37:23 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 20 06:37:23 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Feb 20 06:37:23 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Feb 20 06:37:23 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 20 06:37:23 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 20 06:37:23 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 20 06:37:23 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 20 06:37:23 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 20 06:37:23 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 20 06:37:23 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 20 06:37:23 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 20 06:37:23 localhost kernel: TSC deadline timer available
Feb 20 06:37:23 localhost kernel: smpboot: Allowing 8 CPUs, 0 hotplug CPUs
Feb 20 06:37:23 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Feb 20 06:37:23 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Feb 20 06:37:23 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Feb 20 06:37:23 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Feb 20 06:37:23 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Feb 20 06:37:23 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Feb 20 06:37:23 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Feb 20 06:37:23 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Feb 20 06:37:23 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Feb 20 06:37:23 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Feb 20 06:37:23 localhost kernel: Booting paravirtualized kernel on KVM
Feb 20 06:37:23 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 20 06:37:23 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Feb 20 06:37:23 localhost kernel: percpu: Embedded 55 pages/cpu s188416 r8192 d28672 u262144
Feb 20 06:37:23 localhost kernel: pcpu-alloc: s188416 r8192 d28672 u262144 alloc=1*2097152
Feb 20 06:37:23 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Feb 20 06:37:23 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Feb 20 06:37:23 localhost kernel: Fallback order for Node 0: 0 
Feb 20 06:37:23 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 4128475
Feb 20 06:37:23 localhost kernel: Policy zone: Normal
Feb 20 06:37:23 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64 root=UUID=a3dd82de-ffc6-4652-88b9-80e003b8f20a console=tty0 console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M
Feb 20 06:37:23 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64", will be passed to user space.
Feb 20 06:37:23 localhost kernel: Dentry cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Feb 20 06:37:23 localhost kernel: Inode-cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Feb 20 06:37:23 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 20 06:37:23 localhost kernel: software IO TLB: area num 8.
Feb 20 06:37:23 localhost kernel: Memory: 2873456K/16776676K available (14342K kernel code, 5536K rwdata, 10180K rodata, 2792K init, 7524K bss, 741260K reserved, 0K cma-reserved)
Feb 20 06:37:23 localhost kernel: random: get_random_u64 called from kmem_cache_open+0x1e/0x210 with crng_init=0
Feb 20 06:37:23 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Feb 20 06:37:23 localhost kernel: ftrace: allocating 44803 entries in 176 pages
Feb 20 06:37:23 localhost kernel: ftrace: allocated 176 pages with 3 groups
Feb 20 06:37:23 localhost kernel: Dynamic Preempt: voluntary
Feb 20 06:37:23 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 20 06:37:23 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Feb 20 06:37:23 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Feb 20 06:37:23 localhost kernel:         Rude variant of Tasks RCU enabled.
Feb 20 06:37:23 localhost kernel:         Tracing variant of Tasks RCU enabled.
Feb 20 06:37:23 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 20 06:37:23 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Feb 20 06:37:23 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Feb 20 06:37:23 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 20 06:37:23 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Feb 20 06:37:23 localhost kernel: random: crng init done (trusting CPU's manufacturer)
Feb 20 06:37:23 localhost kernel: Console: colour VGA+ 80x25
Feb 20 06:37:23 localhost kernel: printk: console [tty0] enabled
Feb 20 06:37:23 localhost kernel: printk: console [ttyS0] enabled
Feb 20 06:37:23 localhost kernel: ACPI: Core revision 20211217
Feb 20 06:37:23 localhost kernel: APIC: Switch to symmetric I/O mode setup
Feb 20 06:37:23 localhost kernel: x2apic enabled
Feb 20 06:37:23 localhost kernel: Switched APIC routing to physical x2apic.
Feb 20 06:37:23 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 20 06:37:23 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Feb 20 06:37:23 localhost kernel: pid_max: default: 32768 minimum: 301
Feb 20 06:37:23 localhost kernel: LSM: Security Framework initializing
Feb 20 06:37:23 localhost kernel: Yama: becoming mindful.
Feb 20 06:37:23 localhost kernel: SELinux:  Initializing.
Feb 20 06:37:23 localhost kernel: LSM support for eBPF active
Feb 20 06:37:23 localhost kernel: Mount-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 20 06:37:23 localhost kernel: Mountpoint-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 20 06:37:23 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 20 06:37:23 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb 20 06:37:23 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb 20 06:37:23 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 20 06:37:23 localhost kernel: Spectre V2 : Mitigation: Retpolines
Feb 20 06:37:23 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 20 06:37:23 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 20 06:37:23 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb 20 06:37:23 localhost kernel: RETBleed: Mitigation: untrained return thunk
Feb 20 06:37:23 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 20 06:37:23 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 20 06:37:23 localhost kernel: Freeing SMP alternatives memory: 36K
Feb 20 06:37:23 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb 20 06:37:23 localhost kernel: cblist_init_generic: Setting adjustable number of callback queues.
Feb 20 06:37:23 localhost kernel: cblist_init_generic: Setting shift to 3 and lim to 1.
Feb 20 06:37:23 localhost kernel: cblist_init_generic: Setting shift to 3 and lim to 1.
Feb 20 06:37:23 localhost kernel: cblist_init_generic: Setting shift to 3 and lim to 1.
Feb 20 06:37:23 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb 20 06:37:23 localhost kernel: ... version:                0
Feb 20 06:37:23 localhost kernel: ... bit width:              48
Feb 20 06:37:23 localhost kernel: ... generic registers:      6
Feb 20 06:37:23 localhost kernel: ... value mask:             0000ffffffffffff
Feb 20 06:37:23 localhost kernel: ... max period:             00007fffffffffff
Feb 20 06:37:23 localhost kernel: ... fixed-purpose events:   0
Feb 20 06:37:23 localhost kernel: ... event mask:             000000000000003f
Feb 20 06:37:23 localhost kernel: rcu: Hierarchical SRCU implementation.
Feb 20 06:37:23 localhost kernel: rcu:         Max phase no-delay instances is 400.
Feb 20 06:37:23 localhost kernel: smp: Bringing up secondary CPUs ...
Feb 20 06:37:23 localhost kernel: x86: Booting SMP configuration:
Feb 20 06:37:23 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Feb 20 06:37:23 localhost kernel: smp: Brought up 1 node, 8 CPUs
Feb 20 06:37:23 localhost kernel: smpboot: Max logical packages: 8
Feb 20 06:37:23 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Feb 20 06:37:23 localhost kernel: node 0 deferred pages initialised in 21ms
Feb 20 06:37:23 localhost kernel: devtmpfs: initialized
Feb 20 06:37:23 localhost kernel: x86/mm: Memory block size: 128MB
Feb 20 06:37:23 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 20 06:37:23 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Feb 20 06:37:23 localhost kernel: pinctrl core: initialized pinctrl subsystem
Feb 20 06:37:23 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 20 06:37:23 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL pool for atomic allocations
Feb 20 06:37:23 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 20 06:37:23 localhost kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 20 06:37:23 localhost kernel: audit: initializing netlink subsys (disabled)
Feb 20 06:37:23 localhost kernel: audit: type=2000 audit(1771569442.410:1): state=initialized audit_enabled=0 res=1
Feb 20 06:37:23 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Feb 20 06:37:23 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 20 06:37:23 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 20 06:37:23 localhost kernel: cpuidle: using governor menu
Feb 20 06:37:23 localhost kernel: HugeTLB: can optimize 4095 vmemmap pages for hugepages-1048576kB
Feb 20 06:37:23 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 20 06:37:23 localhost kernel: PCI: Using configuration type 1 for base access
Feb 20 06:37:23 localhost kernel: PCI: Using configuration type 1 for extended access
Feb 20 06:37:23 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 20 06:37:23 localhost kernel: HugeTLB: can optimize 7 vmemmap pages for hugepages-2048kB
Feb 20 06:37:23 localhost kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 20 06:37:23 localhost kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 20 06:37:23 localhost kernel: cryptd: max_cpu_qlen set to 1000
Feb 20 06:37:23 localhost kernel: ACPI: Added _OSI(Module Device)
Feb 20 06:37:23 localhost kernel: ACPI: Added _OSI(Processor Device)
Feb 20 06:37:23 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 20 06:37:23 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 20 06:37:23 localhost kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 20 06:37:23 localhost kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 20 06:37:23 localhost kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 20 06:37:23 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 20 06:37:23 localhost kernel: ACPI: Interpreter enabled
Feb 20 06:37:23 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Feb 20 06:37:23 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Feb 20 06:37:23 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 20 06:37:23 localhost kernel: PCI: Using E820 reservations for host bridge windows
Feb 20 06:37:23 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb 20 06:37:23 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 20 06:37:23 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Feb 20 06:37:23 localhost kernel: acpiphp: Slot [3] registered
Feb 20 06:37:23 localhost kernel: acpiphp: Slot [4] registered
Feb 20 06:37:23 localhost kernel: acpiphp: Slot [5] registered
Feb 20 06:37:23 localhost kernel: acpiphp: Slot [6] registered
Feb 20 06:37:23 localhost kernel: acpiphp: Slot [7] registered
Feb 20 06:37:23 localhost kernel: acpiphp: Slot [8] registered
Feb 20 06:37:23 localhost kernel: acpiphp: Slot [9] registered
Feb 20 06:37:23 localhost kernel: acpiphp: Slot [10] registered
Feb 20 06:37:23 localhost kernel: acpiphp: Slot [11] registered
Feb 20 06:37:23 localhost kernel: acpiphp: Slot [12] registered
Feb 20 06:37:23 localhost kernel: acpiphp: Slot [13] registered
Feb 20 06:37:23 localhost kernel: acpiphp: Slot [14] registered
Feb 20 06:37:23 localhost kernel: acpiphp: Slot [15] registered
Feb 20 06:37:23 localhost kernel: acpiphp: Slot [16] registered
Feb 20 06:37:23 localhost kernel: acpiphp: Slot [17] registered
Feb 20 06:37:23 localhost kernel: acpiphp: Slot [18] registered
Feb 20 06:37:23 localhost kernel: acpiphp: Slot [19] registered
Feb 20 06:37:23 localhost kernel: acpiphp: Slot [20] registered
Feb 20 06:37:23 localhost kernel: acpiphp: Slot [21] registered
Feb 20 06:37:23 localhost kernel: acpiphp: Slot [22] registered
Feb 20 06:37:23 localhost kernel: acpiphp: Slot [23] registered
Feb 20 06:37:23 localhost kernel: acpiphp: Slot [24] registered
Feb 20 06:37:23 localhost kernel: acpiphp: Slot [25] registered
Feb 20 06:37:23 localhost kernel: acpiphp: Slot [26] registered
Feb 20 06:37:23 localhost kernel: acpiphp: Slot [27] registered
Feb 20 06:37:23 localhost kernel: acpiphp: Slot [28] registered
Feb 20 06:37:23 localhost kernel: acpiphp: Slot [29] registered
Feb 20 06:37:23 localhost kernel: acpiphp: Slot [30] registered
Feb 20 06:37:23 localhost kernel: acpiphp: Slot [31] registered
Feb 20 06:37:23 localhost kernel: PCI host bridge to bus 0000:00
Feb 20 06:37:23 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Feb 20 06:37:23 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Feb 20 06:37:23 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 20 06:37:23 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Feb 20 06:37:23 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x440000000-0x4bfffffff window]
Feb 20 06:37:23 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 20 06:37:23 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 20 06:37:23 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 20 06:37:23 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Feb 20 06:37:23 localhost kernel: pci 0000:00:01.1: reg 0x20: [io  0xc140-0xc14f]
Feb 20 06:37:23 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io  0x01f0-0x01f7]
Feb 20 06:37:23 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io  0x03f6]
Feb 20 06:37:23 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io  0x0170-0x0177]
Feb 20 06:37:23 localhost kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io  0x0376]
Feb 20 06:37:23 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Feb 20 06:37:23 localhost kernel: pci 0000:00:01.2: reg 0x20: [io  0xc100-0xc11f]
Feb 20 06:37:23 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb 20 06:37:23 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Feb 20 06:37:23 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Feb 20 06:37:23 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Feb 20 06:37:23 localhost kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Feb 20 06:37:23 localhost kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Feb 20 06:37:23 localhost kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfeb90000-0xfeb90fff]
Feb 20 06:37:23 localhost kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfeb80000-0xfeb8ffff pref]
Feb 20 06:37:23 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 20 06:37:23 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Feb 20 06:37:23 localhost kernel: pci 0000:00:03.0: reg 0x10: [io  0xc080-0xc0bf]
Feb 20 06:37:23 localhost kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfeb91000-0xfeb91fff]
Feb 20 06:37:23 localhost kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Feb 20 06:37:23 localhost kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb00000-0xfeb7ffff pref]
Feb 20 06:37:23 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Feb 20 06:37:23 localhost kernel: pci 0000:00:04.0: reg 0x10: [io  0xc000-0xc07f]
Feb 20 06:37:23 localhost kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfeb92000-0xfeb92fff]
Feb 20 06:37:23 localhost kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Feb 20 06:37:23 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00
Feb 20 06:37:23 localhost kernel: pci 0000:00:05.0: reg 0x10: [io  0xc0c0-0xc0ff]
Feb 20 06:37:23 localhost kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Feb 20 06:37:23 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00
Feb 20 06:37:23 localhost kernel: pci 0000:00:06.0: reg 0x10: [io  0xc120-0xc13f]
Feb 20 06:37:23 localhost kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Feb 20 06:37:23 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 20 06:37:23 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 20 06:37:23 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 20 06:37:23 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 20 06:37:23 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 20 06:37:23 localhost kernel: iommu: Default domain type: Translated 
Feb 20 06:37:23 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode 
Feb 20 06:37:23 localhost kernel: SCSI subsystem initialized
Feb 20 06:37:23 localhost kernel: ACPI: bus type USB registered
Feb 20 06:37:23 localhost kernel: usbcore: registered new interface driver usbfs
Feb 20 06:37:23 localhost kernel: usbcore: registered new interface driver hub
Feb 20 06:37:23 localhost kernel: usbcore: registered new device driver usb
Feb 20 06:37:23 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 20 06:37:23 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Feb 20 06:37:23 localhost kernel: PTP clock support registered
Feb 20 06:37:23 localhost kernel: EDAC MC: Ver: 3.0.0
Feb 20 06:37:23 localhost kernel: NetLabel: Initializing
Feb 20 06:37:23 localhost kernel: NetLabel:  domain hash size = 128
Feb 20 06:37:23 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Feb 20 06:37:23 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Feb 20 06:37:23 localhost kernel: PCI: Using ACPI for IRQ routing
Feb 20 06:37:23 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 20 06:37:23 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 20 06:37:23 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Feb 20 06:37:23 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb 20 06:37:23 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb 20 06:37:23 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 20 06:37:23 localhost kernel: vgaarb: loaded
Feb 20 06:37:23 localhost kernel: clocksource: Switched to clocksource kvm-clock
Feb 20 06:37:23 localhost kernel: VFS: Disk quotas dquot_6.6.0
Feb 20 06:37:23 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 20 06:37:23 localhost kernel: pnp: PnP ACPI init
Feb 20 06:37:23 localhost kernel: pnp 00:03: [dma 2]
Feb 20 06:37:23 localhost kernel: pnp: PnP ACPI: found 5 devices
Feb 20 06:37:23 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 20 06:37:23 localhost kernel: NET: Registered PF_INET protocol family
Feb 20 06:37:23 localhost kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 20 06:37:23 localhost kernel: tcp_listen_portaddr_hash hash table entries: 8192 (order: 5, 131072 bytes, linear)
Feb 20 06:37:23 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 20 06:37:23 localhost kernel: TCP established hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 20 06:37:23 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Feb 20 06:37:23 localhost kernel: TCP: Hash tables configured (established 131072 bind 65536)
Feb 20 06:37:23 localhost kernel: MPTCP token hash table entries: 16384 (order: 6, 393216 bytes, linear)
Feb 20 06:37:23 localhost kernel: UDP hash table entries: 8192 (order: 6, 262144 bytes, linear)
Feb 20 06:37:23 localhost kernel: UDP-Lite hash table entries: 8192 (order: 6, 262144 bytes, linear)
Feb 20 06:37:23 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 20 06:37:23 localhost kernel: NET: Registered PF_XDP protocol family
Feb 20 06:37:23 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Feb 20 06:37:23 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Feb 20 06:37:23 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 20 06:37:23 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Feb 20 06:37:23 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x440000000-0x4bfffffff window]
Feb 20 06:37:23 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb 20 06:37:23 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 20 06:37:23 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 20 06:37:23 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x140 took 26851 usecs
Feb 20 06:37:23 localhost kernel: PCI: CLS 0 bytes, default 64
Feb 20 06:37:23 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 20 06:37:23 localhost kernel: Trying to unpack rootfs image as initramfs...
Feb 20 06:37:23 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Feb 20 06:37:23 localhost kernel: ACPI: bus type thunderbolt registered
Feb 20 06:37:23 localhost kernel: Initialise system trusted keyrings
Feb 20 06:37:23 localhost kernel: Key type blacklist registered
Feb 20 06:37:23 localhost kernel: workingset: timestamp_bits=36 max_order=22 bucket_order=0
Feb 20 06:37:23 localhost kernel: zbud: loaded
Feb 20 06:37:23 localhost kernel: integrity: Platform Keyring initialized
Feb 20 06:37:23 localhost kernel: NET: Registered PF_ALG protocol family
Feb 20 06:37:23 localhost kernel: xor: automatically using best checksumming function   avx       
Feb 20 06:37:23 localhost kernel: Key type asymmetric registered
Feb 20 06:37:23 localhost kernel: Asymmetric key parser 'x509' registered
Feb 20 06:37:23 localhost kernel: Running certificate verification selftests
Feb 20 06:37:23 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Feb 20 06:37:23 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Feb 20 06:37:23 localhost kernel: io scheduler mq-deadline registered
Feb 20 06:37:23 localhost kernel: io scheduler kyber registered
Feb 20 06:37:23 localhost kernel: io scheduler bfq registered
Feb 20 06:37:23 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Feb 20 06:37:23 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Feb 20 06:37:23 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Feb 20 06:37:23 localhost kernel: ACPI: button: Power Button [PWRF]
Feb 20 06:37:23 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Feb 20 06:37:23 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 20 06:37:23 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 20 06:37:23 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 20 06:37:23 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 20 06:37:23 localhost kernel: Non-volatile memory driver v1.3
Feb 20 06:37:23 localhost kernel: rdac: device handler registered
Feb 20 06:37:23 localhost kernel: hp_sw: device handler registered
Feb 20 06:37:23 localhost kernel: emc: device handler registered
Feb 20 06:37:23 localhost kernel: alua: device handler registered
Feb 20 06:37:23 localhost kernel: libphy: Fixed MDIO Bus: probed
Feb 20 06:37:23 localhost kernel: ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
Feb 20 06:37:23 localhost kernel: ehci-pci: EHCI PCI platform driver
Feb 20 06:37:23 localhost kernel: ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
Feb 20 06:37:23 localhost kernel: ohci-pci: OHCI PCI platform driver
Feb 20 06:37:23 localhost kernel: uhci_hcd: USB Universal Host Controller Interface driver
Feb 20 06:37:23 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Feb 20 06:37:23 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Feb 20 06:37:23 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Feb 20 06:37:23 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Feb 20 06:37:23 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Feb 20 06:37:23 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Feb 20 06:37:23 localhost kernel: usb usb1: Product: UHCI Host Controller
Feb 20 06:37:23 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-284.11.1.el9_2.x86_64 uhci_hcd
Feb 20 06:37:23 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Feb 20 06:37:23 localhost kernel: hub 1-0:1.0: USB hub found
Feb 20 06:37:23 localhost kernel: hub 1-0:1.0: 2 ports detected
Feb 20 06:37:23 localhost kernel: usbcore: registered new interface driver usbserial_generic
Feb 20 06:37:23 localhost kernel: usbserial: USB Serial support registered for generic
Feb 20 06:37:23 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 20 06:37:23 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 20 06:37:23 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 20 06:37:23 localhost kernel: mousedev: PS/2 mouse device common for all mice
Feb 20 06:37:23 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Feb 20 06:37:23 localhost kernel: rtc_cmos 00:04: registered as rtc0
Feb 20 06:37:23 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Feb 20 06:37:23 localhost kernel: rtc_cmos 00:04: setting system clock to 2026-02-20T06:37:22 UTC (1771569442)
Feb 20 06:37:23 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Feb 20 06:37:23 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Feb 20 06:37:23 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 20 06:37:23 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Feb 20 06:37:23 localhost kernel: usbcore: registered new interface driver usbhid
Feb 20 06:37:23 localhost kernel: usbhid: USB HID core driver
Feb 20 06:37:23 localhost kernel: drop_monitor: Initializing network drop monitor service
Feb 20 06:37:23 localhost kernel: Initializing XFRM netlink socket
Feb 20 06:37:23 localhost kernel: NET: Registered PF_INET6 protocol family
Feb 20 06:37:23 localhost kernel: Segment Routing with IPv6
Feb 20 06:37:23 localhost kernel: NET: Registered PF_PACKET protocol family
Feb 20 06:37:23 localhost kernel: mpls_gso: MPLS GSO support
Feb 20 06:37:23 localhost kernel: IPI shorthand broadcast: enabled
Feb 20 06:37:23 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Feb 20 06:37:23 localhost kernel: AES CTR mode by8 optimization enabled
Feb 20 06:37:23 localhost kernel: sched_clock: Marking stable (861403945, 177246373)->(1096324121, -57673803)
Feb 20 06:37:23 localhost kernel: registered taskstats version 1
Feb 20 06:37:23 localhost kernel: Loading compiled-in X.509 certificates
Feb 20 06:37:23 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kernel signing key: aaec4b640ef162b54684864066c7d4ffd428cd72'
Feb 20 06:37:23 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Feb 20 06:37:23 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Feb 20 06:37:23 localhost kernel: zswap: loaded using pool lzo/zbud
Feb 20 06:37:23 localhost kernel: page_owner is disabled
Feb 20 06:37:23 localhost kernel: Key type big_key registered
Feb 20 06:37:23 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Feb 20 06:37:23 localhost kernel: Freeing initrd memory: 74232K
Feb 20 06:37:23 localhost kernel: Key type encrypted registered
Feb 20 06:37:23 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 20 06:37:23 localhost kernel: Loading compiled-in module X.509 certificates
Feb 20 06:37:23 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kernel signing key: aaec4b640ef162b54684864066c7d4ffd428cd72'
Feb 20 06:37:23 localhost kernel: ima: Allocated hash algorithm: sha256
Feb 20 06:37:23 localhost kernel: ima: No architecture policies found
Feb 20 06:37:23 localhost kernel: evm: Initialising EVM extended attributes:
Feb 20 06:37:23 localhost kernel: evm: security.selinux
Feb 20 06:37:23 localhost kernel: evm: security.SMACK64 (disabled)
Feb 20 06:37:23 localhost kernel: evm: security.SMACK64EXEC (disabled)
Feb 20 06:37:23 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Feb 20 06:37:23 localhost kernel: evm: security.SMACK64MMAP (disabled)
Feb 20 06:37:23 localhost kernel: evm: security.apparmor (disabled)
Feb 20 06:37:23 localhost kernel: evm: security.ima
Feb 20 06:37:23 localhost kernel: evm: security.capability
Feb 20 06:37:23 localhost kernel: evm: HMAC attrs: 0x1
Feb 20 06:37:23 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Feb 20 06:37:23 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Feb 20 06:37:23 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Feb 20 06:37:23 localhost kernel: usb 1-1: Manufacturer: QEMU
Feb 20 06:37:23 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Feb 20 06:37:23 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Feb 20 06:37:23 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Feb 20 06:37:23 localhost kernel: Freeing unused decrypted memory: 2036K
Feb 20 06:37:23 localhost kernel: Freeing unused kernel image (initmem) memory: 2792K
Feb 20 06:37:23 localhost kernel: Write protecting the kernel read-only data: 26624k
Feb 20 06:37:23 localhost kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 20 06:37:23 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 60K
Feb 20 06:37:23 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Feb 20 06:37:23 localhost kernel: Run /init as init process
Feb 20 06:37:23 localhost kernel:   with arguments:
Feb 20 06:37:23 localhost kernel:     /init
Feb 20 06:37:23 localhost kernel:   with environment:
Feb 20 06:37:23 localhost kernel:     HOME=/
Feb 20 06:37:23 localhost kernel:     TERM=linux
Feb 20 06:37:23 localhost kernel:     BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64
Feb 20 06:37:23 localhost systemd[1]: systemd 252-13.el9_2 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 20 06:37:23 localhost systemd[1]: Detected virtualization kvm.
Feb 20 06:37:23 localhost systemd[1]: Detected architecture x86-64.
Feb 20 06:37:23 localhost systemd[1]: Running in initrd.
Feb 20 06:37:23 localhost systemd[1]: No hostname configured, using default hostname.
Feb 20 06:37:23 localhost systemd[1]: Hostname set to <localhost>.
Feb 20 06:37:23 localhost systemd[1]: Initializing machine ID from VM UUID.
Feb 20 06:37:23 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Feb 20 06:37:23 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Feb 20 06:37:23 localhost systemd[1]: Reached target Local Encrypted Volumes.
Feb 20 06:37:23 localhost systemd[1]: Reached target Initrd /usr File System.
Feb 20 06:37:23 localhost systemd[1]: Reached target Local File Systems.
Feb 20 06:37:23 localhost systemd[1]: Reached target Path Units.
Feb 20 06:37:23 localhost systemd[1]: Reached target Slice Units.
Feb 20 06:37:23 localhost systemd[1]: Reached target Swaps.
Feb 20 06:37:23 localhost systemd[1]: Reached target Timer Units.
Feb 20 06:37:23 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Feb 20 06:37:23 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Feb 20 06:37:23 localhost systemd[1]: Listening on Journal Socket.
Feb 20 06:37:23 localhost systemd[1]: Listening on udev Control Socket.
Feb 20 06:37:23 localhost systemd[1]: Listening on udev Kernel Socket.
Feb 20 06:37:23 localhost systemd[1]: Reached target Socket Units.
Feb 20 06:37:23 localhost systemd[1]: Starting Create List of Static Device Nodes...
Feb 20 06:37:23 localhost systemd[1]: Starting Journal Service...
Feb 20 06:37:23 localhost systemd[1]: Starting Load Kernel Modules...
Feb 20 06:37:23 localhost systemd[1]: Starting Create System Users...
Feb 20 06:37:23 localhost systemd[1]: Starting Setup Virtual Console...
Feb 20 06:37:23 localhost systemd[1]: Finished Create List of Static Device Nodes.
Feb 20 06:37:23 localhost systemd[1]: Finished Load Kernel Modules.
Feb 20 06:37:23 localhost systemd-journald[283]: Journal started
Feb 20 06:37:23 localhost systemd-journald[283]: Runtime Journal (/run/log/journal/a53ba2274db845edbb705a295cbaca1c) is 8.0M, max 314.7M, 306.7M free.
Feb 20 06:37:23 localhost systemd-modules-load[284]: Module 'msr' is built in
Feb 20 06:37:23 localhost systemd[1]: Started Journal Service.
Feb 20 06:37:23 localhost systemd[1]: Finished Setup Virtual Console.
Feb 20 06:37:23 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Feb 20 06:37:23 localhost systemd[1]: Starting dracut cmdline hook...
Feb 20 06:37:23 localhost systemd[1]: Starting Apply Kernel Variables...
Feb 20 06:37:23 localhost systemd-sysusers[285]: Creating group 'sgx' with GID 997.
Feb 20 06:37:23 localhost systemd-sysusers[285]: Creating group 'users' with GID 100.
Feb 20 06:37:23 localhost systemd-sysusers[285]: Creating group 'dbus' with GID 81.
Feb 20 06:37:23 localhost systemd-sysusers[285]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Feb 20 06:37:23 localhost systemd[1]: Finished Create System Users.
Feb 20 06:37:23 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Feb 20 06:37:23 localhost dracut-cmdline[288]: dracut-9.2 (Plow) dracut-057-21.git20230214.el9
Feb 20 06:37:23 localhost systemd[1]: Starting Create Volatile Files and Directories...
Feb 20 06:37:23 localhost dracut-cmdline[288]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,gpt3)/vmlinuz-5.14.0-284.11.1.el9_2.x86_64 root=UUID=a3dd82de-ffc6-4652-88b9-80e003b8f20a console=tty0 console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M
Feb 20 06:37:23 localhost systemd[1]: Finished Apply Kernel Variables.
Feb 20 06:37:23 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Feb 20 06:37:23 localhost systemd[1]: Finished Create Volatile Files and Directories.
Feb 20 06:37:23 localhost systemd[1]: Finished dracut cmdline hook.
Feb 20 06:37:23 localhost systemd[1]: Starting dracut pre-udev hook...
Feb 20 06:37:23 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 20 06:37:23 localhost kernel: device-mapper: uevent: version 1.0.3
Feb 20 06:37:23 localhost kernel: device-mapper: ioctl: 4.47.0-ioctl (2022-07-28) initialised: dm-devel@redhat.com
Feb 20 06:37:23 localhost kernel: RPC: Registered named UNIX socket transport module.
Feb 20 06:37:23 localhost kernel: RPC: Registered udp transport module.
Feb 20 06:37:23 localhost kernel: RPC: Registered tcp transport module.
Feb 20 06:37:23 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb 20 06:37:23 localhost rpc.statd[405]: Version 2.5.4 starting
Feb 20 06:37:23 localhost rpc.statd[405]: Initializing NSM state
Feb 20 06:37:23 localhost rpc.idmapd[410]: Setting log level to 0
Feb 20 06:37:23 localhost systemd[1]: Finished dracut pre-udev hook.
Feb 20 06:37:23 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Feb 20 06:37:23 localhost systemd-udevd[423]: Using default interface naming scheme 'rhel-9.0'.
Feb 20 06:37:23 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Feb 20 06:37:23 localhost systemd[1]: Starting dracut pre-trigger hook...
Feb 20 06:37:23 localhost systemd[1]: Finished dracut pre-trigger hook.
Feb 20 06:37:23 localhost systemd[1]: Starting Coldplug All udev Devices...
Feb 20 06:37:23 localhost systemd[1]: Finished Coldplug All udev Devices.
Feb 20 06:37:23 localhost systemd[1]: Reached target System Initialization.
Feb 20 06:37:23 localhost systemd[1]: Reached target Basic System.
Feb 20 06:37:23 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Feb 20 06:37:23 localhost systemd[1]: Reached target Network.
Feb 20 06:37:23 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Feb 20 06:37:23 localhost systemd[1]: Starting dracut initqueue hook...
Feb 20 06:37:23 localhost kernel: virtio_blk virtio2: [vda] 838860800 512-byte logical blocks (429 GB/400 GiB)
Feb 20 06:37:23 localhost kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 20 06:37:23 localhost kernel: GPT:20971519 != 838860799
Feb 20 06:37:23 localhost kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 20 06:37:23 localhost kernel: GPT:20971519 != 838860799
Feb 20 06:37:23 localhost kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 20 06:37:23 localhost kernel:  vda: vda1 vda2 vda3 vda4
Feb 20 06:37:23 localhost kernel: libata version 3.00 loaded.
Feb 20 06:37:23 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Feb 20 06:37:23 localhost kernel: scsi host0: ata_piix
Feb 20 06:37:23 localhost kernel: scsi host1: ata_piix
Feb 20 06:37:23 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14
Feb 20 06:37:23 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15
Feb 20 06:37:23 localhost systemd-udevd[449]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 06:37:23 localhost systemd[1]: Found device /dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a.
Feb 20 06:37:24 localhost systemd[1]: Reached target Initrd Root Device.
Feb 20 06:37:24 localhost kernel: ata1: found unknown device (class 0)
Feb 20 06:37:24 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Feb 20 06:37:24 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Feb 20 06:37:24 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Feb 20 06:37:24 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Feb 20 06:37:24 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 20 06:37:24 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Feb 20 06:37:24 localhost systemd[1]: Finished dracut initqueue hook.
Feb 20 06:37:24 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Feb 20 06:37:24 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Feb 20 06:37:24 localhost systemd[1]: Reached target Remote File Systems.
Feb 20 06:37:24 localhost systemd[1]: Starting dracut pre-mount hook...
Feb 20 06:37:24 localhost systemd[1]: Finished dracut pre-mount hook.
Feb 20 06:37:24 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a...
Feb 20 06:37:24 localhost systemd-fsck[511]: /usr/sbin/fsck.xfs: XFS file system.
Feb 20 06:37:24 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a.
Feb 20 06:37:24 localhost systemd[1]: Mounting /sysroot...
Feb 20 06:37:24 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Feb 20 06:37:24 localhost kernel: XFS (vda4): Mounting V5 Filesystem
Feb 20 06:37:24 localhost kernel: XFS (vda4): Ending clean mount
Feb 20 06:37:24 localhost systemd[1]: Mounted /sysroot.
Feb 20 06:37:24 localhost systemd[1]: Reached target Initrd Root File System.
Feb 20 06:37:24 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Feb 20 06:37:24 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 20 06:37:24 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Feb 20 06:37:24 localhost systemd[1]: Reached target Initrd File Systems.
Feb 20 06:37:24 localhost systemd[1]: Reached target Initrd Default Target.
Feb 20 06:37:24 localhost systemd[1]: Starting dracut mount hook...
Feb 20 06:37:24 localhost systemd[1]: Finished dracut mount hook.
Feb 20 06:37:24 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Feb 20 06:37:24 localhost rpc.idmapd[410]: exiting on signal 15
Feb 20 06:37:24 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Feb 20 06:37:24 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Feb 20 06:37:24 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Feb 20 06:37:24 localhost systemd[1]: Stopped target Network.
Feb 20 06:37:24 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Feb 20 06:37:24 localhost systemd[1]: Stopped target Timer Units.
Feb 20 06:37:24 localhost systemd[1]: dbus.socket: Deactivated successfully.
Feb 20 06:37:24 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Feb 20 06:37:24 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 20 06:37:24 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Feb 20 06:37:24 localhost systemd[1]: Stopped target Initrd Default Target.
Feb 20 06:37:24 localhost systemd[1]: Stopped target Basic System.
Feb 20 06:37:24 localhost systemd[1]: Stopped target Initrd Root Device.
Feb 20 06:37:24 localhost systemd[1]: Stopped target Initrd /usr File System.
Feb 20 06:37:24 localhost systemd[1]: Stopped target Path Units.
Feb 20 06:37:24 localhost systemd[1]: Stopped target Remote File Systems.
Feb 20 06:37:24 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Feb 20 06:37:24 localhost systemd[1]: Stopped target Slice Units.
Feb 20 06:37:24 localhost systemd[1]: Stopped target Socket Units.
Feb 20 06:37:24 localhost systemd[1]: Stopped target System Initialization.
Feb 20 06:37:24 localhost systemd[1]: Stopped target Local File Systems.
Feb 20 06:37:24 localhost systemd[1]: Stopped target Swaps.
Feb 20 06:37:24 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Feb 20 06:37:24 localhost systemd[1]: Stopped dracut mount hook.
Feb 20 06:37:24 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 20 06:37:24 localhost systemd[1]: Stopped dracut pre-mount hook.
Feb 20 06:37:24 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Feb 20 06:37:24 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 20 06:37:24 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Feb 20 06:37:24 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 20 06:37:24 localhost systemd[1]: Stopped dracut initqueue hook.
Feb 20 06:37:24 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 20 06:37:24 localhost systemd[1]: Stopped Apply Kernel Variables.
Feb 20 06:37:24 localhost systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 20 06:37:24 localhost systemd[1]: Stopped Load Kernel Modules.
Feb 20 06:37:24 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 20 06:37:24 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Feb 20 06:37:24 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 20 06:37:24 localhost systemd[1]: Stopped Coldplug All udev Devices.
Feb 20 06:37:24 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 20 06:37:24 localhost systemd[1]: Stopped dracut pre-trigger hook.
Feb 20 06:37:24 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Feb 20 06:37:24 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 20 06:37:24 localhost systemd[1]: Stopped Setup Virtual Console.
Feb 20 06:37:24 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Feb 20 06:37:24 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 20 06:37:24 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 20 06:37:24 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Feb 20 06:37:24 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 20 06:37:24 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Feb 20 06:37:24 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 20 06:37:24 localhost systemd[1]: Closed udev Control Socket.
Feb 20 06:37:24 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 20 06:37:24 localhost systemd[1]: Closed udev Kernel Socket.
Feb 20 06:37:24 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 20 06:37:24 localhost systemd[1]: Stopped dracut pre-udev hook.
Feb 20 06:37:25 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 20 06:37:25 localhost systemd[1]: Stopped dracut cmdline hook.
Feb 20 06:37:25 localhost systemd[1]: Starting Cleanup udev Database...
Feb 20 06:37:25 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 20 06:37:25 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Feb 20 06:37:25 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 20 06:37:25 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Feb 20 06:37:25 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Feb 20 06:37:25 localhost systemd[1]: Stopped Create System Users.
Feb 20 06:37:25 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 20 06:37:25 localhost systemd[1]: Finished Cleanup udev Database.
Feb 20 06:37:25 localhost systemd[1]: Reached target Switch Root.
Feb 20 06:37:25 localhost systemd[1]: Starting Switch Root...
Feb 20 06:37:25 localhost systemd[1]: Switching root.
Feb 20 06:37:25 localhost systemd-journald[283]: Journal stopped
Feb 20 06:37:26 localhost systemd-journald[283]: Received SIGTERM from PID 1 (systemd).
Feb 20 06:37:26 localhost kernel: audit: type=1404 audit(1771569445.186:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Feb 20 06:37:26 localhost kernel: SELinux:  policy capability network_peer_controls=1
Feb 20 06:37:26 localhost kernel: SELinux:  policy capability open_perms=1
Feb 20 06:37:26 localhost kernel: SELinux:  policy capability extended_socket_class=1
Feb 20 06:37:26 localhost kernel: SELinux:  policy capability always_check_network=0
Feb 20 06:37:26 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 20 06:37:26 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 20 06:37:26 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 20 06:37:26 localhost kernel: audit: type=1403 audit(1771569445.317:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 20 06:37:26 localhost systemd[1]: Successfully loaded SELinux policy in 134.368ms.
Feb 20 06:37:26 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.211ms.
Feb 20 06:37:26 localhost systemd[1]: systemd 252-13.el9_2 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 20 06:37:26 localhost systemd[1]: Detected virtualization kvm.
Feb 20 06:37:26 localhost systemd[1]: Detected architecture x86-64.
Feb 20 06:37:26 localhost systemd-rc-local-generator[582]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 06:37:26 localhost systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 06:37:26 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 20 06:37:26 localhost systemd[1]: Stopped Switch Root.
Feb 20 06:37:26 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 20 06:37:26 localhost systemd[1]: Created slice Slice /system/getty.
Feb 20 06:37:26 localhost systemd[1]: Created slice Slice /system/modprobe.
Feb 20 06:37:26 localhost systemd[1]: Created slice Slice /system/serial-getty.
Feb 20 06:37:26 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Feb 20 06:37:26 localhost systemd[1]: Created slice Slice /system/systemd-fsck.
Feb 20 06:37:26 localhost systemd[1]: Created slice User and Session Slice.
Feb 20 06:37:26 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Feb 20 06:37:26 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Feb 20 06:37:26 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Feb 20 06:37:26 localhost systemd[1]: Reached target Local Encrypted Volumes.
Feb 20 06:37:26 localhost systemd[1]: Stopped target Switch Root.
Feb 20 06:37:26 localhost systemd[1]: Stopped target Initrd File Systems.
Feb 20 06:37:26 localhost systemd[1]: Stopped target Initrd Root File System.
Feb 20 06:37:26 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Feb 20 06:37:26 localhost systemd[1]: Reached target Path Units.
Feb 20 06:37:26 localhost systemd[1]: Reached target rpc_pipefs.target.
Feb 20 06:37:26 localhost systemd[1]: Reached target Slice Units.
Feb 20 06:37:26 localhost systemd[1]: Reached target Swaps.
Feb 20 06:37:26 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Feb 20 06:37:26 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Feb 20 06:37:26 localhost systemd[1]: Reached target RPC Port Mapper.
Feb 20 06:37:26 localhost systemd[1]: Listening on Process Core Dump Socket.
Feb 20 06:37:26 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Feb 20 06:37:26 localhost systemd[1]: Listening on udev Control Socket.
Feb 20 06:37:26 localhost systemd[1]: Listening on udev Kernel Socket.
Feb 20 06:37:26 localhost systemd[1]: Mounting Huge Pages File System...
Feb 20 06:37:26 localhost systemd[1]: Mounting POSIX Message Queue File System...
Feb 20 06:37:26 localhost systemd[1]: Mounting Kernel Debug File System...
Feb 20 06:37:26 localhost systemd[1]: Mounting Kernel Trace File System...
Feb 20 06:37:26 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Feb 20 06:37:26 localhost systemd[1]: Starting Create List of Static Device Nodes...
Feb 20 06:37:26 localhost systemd[1]: Starting Load Kernel Module configfs...
Feb 20 06:37:26 localhost systemd[1]: Starting Load Kernel Module drm...
Feb 20 06:37:26 localhost systemd[1]: Starting Load Kernel Module fuse...
Feb 20 06:37:26 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Feb 20 06:37:26 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 20 06:37:26 localhost systemd[1]: Stopped File System Check on Root Device.
Feb 20 06:37:26 localhost systemd[1]: Stopped Journal Service.
Feb 20 06:37:26 localhost systemd[1]: Starting Journal Service...
Feb 20 06:37:26 localhost systemd[1]: Starting Load Kernel Modules...
Feb 20 06:37:26 localhost kernel: fuse: init (API version 7.36)
Feb 20 06:37:26 localhost systemd[1]: Starting Generate network units from Kernel command line...
Feb 20 06:37:26 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Feb 20 06:37:26 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 20 06:37:26 localhost systemd-journald[618]: Journal started
Feb 20 06:37:26 localhost systemd-journald[618]: Runtime Journal (/run/log/journal/01f46965e72fd8a157841feaa66c8d52) is 8.0M, max 314.7M, 306.7M free.
Feb 20 06:37:25 localhost systemd[1]: Queued start job for default target Multi-User System.
Feb 20 06:37:25 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 20 06:37:26 localhost systemd-modules-load[619]: Module 'msr' is built in
Feb 20 06:37:26 localhost systemd[1]: Starting Coldplug All udev Devices...
Feb 20 06:37:26 localhost systemd[1]: Started Journal Service.
Feb 20 06:37:26 localhost systemd[1]: Mounted Huge Pages File System.
Feb 20 06:37:26 localhost systemd[1]: Mounted POSIX Message Queue File System.
Feb 20 06:37:26 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Feb 20 06:37:26 localhost systemd[1]: Mounted Kernel Debug File System.
Feb 20 06:37:26 localhost systemd[1]: Mounted Kernel Trace File System.
Feb 20 06:37:26 localhost systemd[1]: Finished Create List of Static Device Nodes.
Feb 20 06:37:26 localhost kernel: ACPI: bus type drm_connector registered
Feb 20 06:37:26 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 20 06:37:26 localhost systemd[1]: Finished Load Kernel Module configfs.
Feb 20 06:37:26 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 20 06:37:26 localhost systemd[1]: Finished Load Kernel Module drm.
Feb 20 06:37:26 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 20 06:37:26 localhost systemd[1]: Finished Load Kernel Module fuse.
Feb 20 06:37:26 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Feb 20 06:37:26 localhost systemd[1]: Finished Load Kernel Modules.
Feb 20 06:37:26 localhost systemd[1]: Finished Generate network units from Kernel command line.
Feb 20 06:37:26 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Feb 20 06:37:26 localhost systemd[1]: Mounting FUSE Control File System...
Feb 20 06:37:26 localhost systemd[1]: Mounting Kernel Configuration File System...
Feb 20 06:37:26 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Feb 20 06:37:26 localhost systemd[1]: Starting Rebuild Hardware Database...
Feb 20 06:37:26 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Feb 20 06:37:26 localhost systemd[1]: Starting Load/Save Random Seed...
Feb 20 06:37:26 localhost systemd[1]: Starting Apply Kernel Variables...
Feb 20 06:37:26 localhost systemd[1]: Starting Create System Users...
Feb 20 06:37:26 localhost systemd-journald[618]: Runtime Journal (/run/log/journal/01f46965e72fd8a157841feaa66c8d52) is 8.0M, max 314.7M, 306.7M free.
Feb 20 06:37:26 localhost systemd-journald[618]: Received client request to flush runtime journal.
Feb 20 06:37:26 localhost systemd[1]: Mounted FUSE Control File System.
Feb 20 06:37:26 localhost systemd[1]: Mounted Kernel Configuration File System.
Feb 20 06:37:26 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Feb 20 06:37:26 localhost systemd[1]: Finished Load/Save Random Seed.
Feb 20 06:37:26 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Feb 20 06:37:26 localhost systemd[1]: Finished Coldplug All udev Devices.
Feb 20 06:37:26 localhost systemd[1]: Finished Apply Kernel Variables.
Feb 20 06:37:26 localhost systemd-sysusers[630]: Creating group 'sgx' with GID 989.
Feb 20 06:37:26 localhost systemd-sysusers[630]: Creating group 'systemd-oom' with GID 988.
Feb 20 06:37:26 localhost systemd-sysusers[630]: Creating user 'systemd-oom' (systemd Userspace OOM Killer) with UID 988 and GID 988.
Feb 20 06:37:26 localhost systemd[1]: Finished Create System Users.
Feb 20 06:37:26 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Feb 20 06:37:26 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Feb 20 06:37:26 localhost systemd[1]: Reached target Preparation for Local File Systems.
Feb 20 06:37:26 localhost systemd[1]: Set up automount EFI System Partition Automount.
Feb 20 06:37:26 localhost systemd[1]: Finished Rebuild Hardware Database.
Feb 20 06:37:26 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Feb 20 06:37:26 localhost systemd-udevd[635]: Using default interface naming scheme 'rhel-9.0'.
Feb 20 06:37:26 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Feb 20 06:37:26 localhost systemd[1]: Starting Load Kernel Module configfs...
Feb 20 06:37:26 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 20 06:37:26 localhost systemd[1]: Finished Load Kernel Module configfs.
Feb 20 06:37:26 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Feb 20 06:37:26 localhost systemd-udevd[639]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 06:37:26 localhost systemd[1]: Condition check resulted in /dev/disk/by-uuid/b141154b-6a70-437a-a97f-d160c9ba37eb being skipped.
Feb 20 06:37:26 localhost systemd[1]: Condition check resulted in /dev/disk/by-uuid/7B77-95E7 being skipped.
Feb 20 06:37:26 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/7B77-95E7...
Feb 20 06:37:26 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Feb 20 06:37:26 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Feb 20 06:37:26 localhost systemd-fsck[678]: fsck.fat 4.2 (2021-01-31)
Feb 20 06:37:26 localhost systemd-fsck[678]: /dev/vda2: 12 files, 1782/51145 clusters
Feb 20 06:37:26 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/7B77-95E7.
Feb 20 06:37:26 localhost kernel: SVM: TSC scaling supported
Feb 20 06:37:26 localhost kernel: kvm: Nested Virtualization enabled
Feb 20 06:37:26 localhost kernel: SVM: kvm: Nested Paging enabled
Feb 20 06:37:26 localhost kernel: SVM: LBR virtualization supported
Feb 20 06:37:26 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Feb 20 06:37:26 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Feb 20 06:37:26 localhost kernel: Console: switching to colour dummy device 80x25
Feb 20 06:37:26 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Feb 20 06:37:26 localhost kernel: [drm] features: -context_init
Feb 20 06:37:26 localhost kernel: [drm] number of scanouts: 1
Feb 20 06:37:26 localhost kernel: [drm] number of cap sets: 0
Feb 20 06:37:26 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 0 for virtio0 on minor 0
Feb 20 06:37:26 localhost kernel: virtio_gpu virtio0: [drm] drm_plane_enable_fb_damage_clips() not called
Feb 20 06:37:26 localhost kernel: Console: switching to colour frame buffer device 128x48
Feb 20 06:37:26 localhost kernel: virtio_gpu virtio0: [drm] fb0: virtio_gpudrmfb frame buffer device
Feb 20 06:37:27 localhost systemd[1]: Mounting /boot...
Feb 20 06:37:27 localhost kernel: XFS (vda3): Mounting V5 Filesystem
Feb 20 06:37:27 localhost kernel: XFS (vda3): Ending clean mount
Feb 20 06:37:27 localhost kernel: xfs filesystem being mounted at /boot supports timestamps until 2038 (0x7fffffff)
Feb 20 06:37:27 localhost systemd[1]: Mounted /boot.
Feb 20 06:37:27 localhost systemd[1]: Mounting /boot/efi...
Feb 20 06:37:27 localhost systemd[1]: Mounted /boot/efi.
Feb 20 06:37:27 localhost systemd[1]: Reached target Local File Systems.
Feb 20 06:37:27 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Feb 20 06:37:27 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Feb 20 06:37:27 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 20 06:37:27 localhost systemd[1]: Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 20 06:37:27 localhost systemd[1]: Starting Automatic Boot Loader Update...
Feb 20 06:37:27 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Feb 20 06:37:27 localhost systemd[1]: Starting Create Volatile Files and Directories...
Feb 20 06:37:27 localhost systemd[1]: efi.automount: Got automount request for /efi, triggered by 712 (bootctl)
Feb 20 06:37:27 localhost systemd[1]: Starting File System Check on /dev/vda2...
Feb 20 06:37:27 localhost systemd[1]: Finished File System Check on /dev/vda2.
Feb 20 06:37:27 localhost systemd[1]: Mounting EFI System Partition Automount...
Feb 20 06:37:27 localhost systemd[1]: Mounted EFI System Partition Automount.
Feb 20 06:37:27 localhost systemd[1]: Finished Automatic Boot Loader Update.
Feb 20 06:37:27 localhost systemd[1]: Finished Create Volatile Files and Directories.
Feb 20 06:37:27 localhost systemd[1]: Starting Security Auditing Service...
Feb 20 06:37:27 localhost systemd[1]: Starting RPC Bind...
Feb 20 06:37:27 localhost systemd[1]: Starting Rebuild Journal Catalog...
Feb 20 06:37:27 localhost auditd[725]: audit dispatcher initialized with q_depth=1200 and 1 active plugins
Feb 20 06:37:27 localhost auditd[725]: Init complete, auditd 3.0.7 listening for events (startup state enable)
Feb 20 06:37:27 localhost systemd[1]: Finished Rebuild Journal Catalog.
Feb 20 06:37:27 localhost systemd[1]: Started RPC Bind.
Feb 20 06:37:27 localhost augenrules[730]: /sbin/augenrules: No change
Feb 20 06:37:27 localhost augenrules[740]: No rules
Feb 20 06:37:27 localhost augenrules[740]: enabled 1
Feb 20 06:37:27 localhost augenrules[740]: failure 1
Feb 20 06:37:27 localhost augenrules[740]: pid 725
Feb 20 06:37:27 localhost augenrules[740]: rate_limit 0
Feb 20 06:37:27 localhost augenrules[740]: backlog_limit 8192
Feb 20 06:37:27 localhost augenrules[740]: lost 0
Feb 20 06:37:27 localhost augenrules[740]: backlog 1
Feb 20 06:37:27 localhost augenrules[740]: backlog_wait_time 60000
Feb 20 06:37:27 localhost augenrules[740]: backlog_wait_time_actual 0
Feb 20 06:37:27 localhost augenrules[740]: enabled 1
Feb 20 06:37:27 localhost augenrules[740]: failure 1
Feb 20 06:37:27 localhost augenrules[740]: pid 725
Feb 20 06:37:27 localhost augenrules[740]: rate_limit 0
Feb 20 06:37:27 localhost augenrules[740]: backlog_limit 8192
Feb 20 06:37:27 localhost augenrules[740]: lost 0
Feb 20 06:37:27 localhost augenrules[740]: backlog 0
Feb 20 06:37:27 localhost augenrules[740]: backlog_wait_time 60000
Feb 20 06:37:27 localhost augenrules[740]: backlog_wait_time_actual 0
Feb 20 06:37:27 localhost augenrules[740]: enabled 1
Feb 20 06:37:27 localhost augenrules[740]: failure 1
Feb 20 06:37:27 localhost augenrules[740]: pid 725
Feb 20 06:37:27 localhost augenrules[740]: rate_limit 0
Feb 20 06:37:27 localhost augenrules[740]: backlog_limit 8192
Feb 20 06:37:27 localhost augenrules[740]: lost 0
Feb 20 06:37:27 localhost augenrules[740]: backlog 0
Feb 20 06:37:27 localhost augenrules[740]: backlog_wait_time 60000
Feb 20 06:37:27 localhost augenrules[740]: backlog_wait_time_actual 0
Feb 20 06:37:27 localhost systemd[1]: Started Security Auditing Service.
Feb 20 06:37:27 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Feb 20 06:37:27 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Feb 20 06:37:27 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Feb 20 06:37:27 localhost systemd[1]: Starting Update is Completed...
Feb 20 06:37:27 localhost systemd[1]: Finished Update is Completed.
Feb 20 06:37:27 localhost systemd[1]: Reached target System Initialization.
Feb 20 06:37:27 localhost systemd[1]: Started dnf makecache --timer.
Feb 20 06:37:27 localhost systemd[1]: Started Daily rotation of log files.
Feb 20 06:37:27 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Feb 20 06:37:27 localhost systemd[1]: Reached target Timer Units.
Feb 20 06:37:27 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Feb 20 06:37:27 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Feb 20 06:37:27 localhost systemd[1]: Reached target Socket Units.
Feb 20 06:37:27 localhost systemd[1]: Starting Initial cloud-init job (pre-networking)...
Feb 20 06:37:27 localhost systemd[1]: Starting D-Bus System Message Bus...
Feb 20 06:37:27 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 20 06:37:27 localhost systemd[1]: Started D-Bus System Message Bus.
Feb 20 06:37:27 localhost systemd[1]: Reached target Basic System.
Feb 20 06:37:27 localhost dbus-broker-lau[750]: Ready
Feb 20 06:37:27 localhost systemd[1]: Starting NTP client/server...
Feb 20 06:37:27 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Feb 20 06:37:27 localhost systemd[1]: Started irqbalance daemon.
Feb 20 06:37:27 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Feb 20 06:37:27 localhost systemd[1]: Starting System Logging Service...
Feb 20 06:37:27 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 20 06:37:27 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 20 06:37:27 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 20 06:37:27 localhost systemd[1]: Reached target sshd-keygen.target.
Feb 20 06:37:27 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Feb 20 06:37:27 localhost systemd[1]: Reached target User and Group Name Lookups.
Feb 20 06:37:27 localhost systemd[1]: Starting User Login Management...
Feb 20 06:37:27 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Feb 20 06:37:27 localhost rsyslogd[758]: [origin software="rsyslogd" swVersion="8.2102.0-111.el9" x-pid="758" x-info="https://www.rsyslog.com"] start
Feb 20 06:37:27 localhost rsyslogd[758]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2102.0-111.el9 try https://www.rsyslog.com/e/2040 ]
Feb 20 06:37:27 localhost systemd[1]: Started System Logging Service.
Feb 20 06:37:27 localhost chronyd[765]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Feb 20 06:37:27 localhost chronyd[765]: Using right/UTC timezone to obtain leap second data
Feb 20 06:37:27 localhost chronyd[765]: Loaded seccomp filter (level 2)
Feb 20 06:37:27 localhost systemd[1]: Started NTP client/server.
Feb 20 06:37:27 localhost systemd-logind[759]: New seat seat0.
Feb 20 06:37:27 localhost systemd-logind[759]: Watching system buttons on /dev/input/event0 (Power Button)
Feb 20 06:37:27 localhost systemd-logind[759]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Feb 20 06:37:27 localhost systemd[1]: Started User Login Management.
Feb 20 06:37:27 localhost rsyslogd[758]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Feb 20 06:37:28 localhost cloud-init[769]: Cloud-init v. 22.1-9.el9 running 'init-local' at Fri, 20 Feb 2026 06:37:28 +0000. Up 6.43 seconds.
Feb 20 06:37:28 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Feb 20 06:37:28 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Feb 20 06:37:28 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpvjjtxb2_.mount: Deactivated successfully.
Feb 20 06:37:28 localhost systemd[1]: Starting Hostname Service...
Feb 20 06:37:28 localhost systemd[1]: Started Hostname Service.
Feb 20 06:37:28 np0005625203.novalocal systemd-hostnamed[783]: Hostname set to <np0005625203.novalocal> (static)
Feb 20 06:37:28 np0005625203.novalocal systemd[1]: Finished Initial cloud-init job (pre-networking).
Feb 20 06:37:28 np0005625203.novalocal systemd[1]: Reached target Preparation for Network.
Feb 20 06:37:28 np0005625203.novalocal systemd[1]: Starting Network Manager...
Feb 20 06:37:28 np0005625203.novalocal NetworkManager[788]: <info>  [1771569448.6996] NetworkManager (version 1.42.2-1.el9) is starting... (boot:1eccdeca-d6e6-4e77-a783-9cc9caeaa1c0)
Feb 20 06:37:28 np0005625203.novalocal NetworkManager[788]: <info>  [1771569448.7002] Read config: /etc/NetworkManager/NetworkManager.conf (run: 15-carrier-timeout.conf)
Feb 20 06:37:28 np0005625203.novalocal systemd[1]: Started Network Manager.
Feb 20 06:37:28 np0005625203.novalocal NetworkManager[788]: <info>  [1771569448.7040] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Feb 20 06:37:28 np0005625203.novalocal systemd[1]: Reached target Network.
Feb 20 06:37:28 np0005625203.novalocal systemd[1]: Starting Network Manager Wait Online...
Feb 20 06:37:28 np0005625203.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Feb 20 06:37:28 np0005625203.novalocal NetworkManager[788]: <info>  [1771569448.7144] manager[0x556ff0d7e020]: monitoring kernel firmware directory '/lib/firmware'.
Feb 20 06:37:28 np0005625203.novalocal systemd[1]: Starting Enable periodic update of entitlement certificates....
Feb 20 06:37:28 np0005625203.novalocal systemd[1]: Starting Dynamic System Tuning Daemon...
Feb 20 06:37:28 np0005625203.novalocal NetworkManager[788]: <info>  [1771569448.7214] hostname: hostname: using hostnamed
Feb 20 06:37:28 np0005625203.novalocal NetworkManager[788]: <info>  [1771569448.7215] hostname: static hostname changed from (none) to "np0005625203.novalocal"
Feb 20 06:37:28 np0005625203.novalocal systemd[1]: Started Enable periodic update of entitlement certificates..
Feb 20 06:37:28 np0005625203.novalocal NetworkManager[788]: <info>  [1771569448.7234] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Feb 20 06:37:28 np0005625203.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Feb 20 06:37:28 np0005625203.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Feb 20 06:37:28 np0005625203.novalocal systemd[1]: Reached target NFS client services.
Feb 20 06:37:28 np0005625203.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Feb 20 06:37:28 np0005625203.novalocal systemd[1]: Reached target Remote File Systems.
Feb 20 06:37:28 np0005625203.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 20 06:37:28 np0005625203.novalocal NetworkManager[788]: <info>  [1771569448.7382] manager[0x556ff0d7e020]: rfkill: Wi-Fi hardware radio set enabled
Feb 20 06:37:28 np0005625203.novalocal NetworkManager[788]: <info>  [1771569448.7385] manager[0x556ff0d7e020]: rfkill: WWAN hardware radio set enabled
Feb 20 06:37:28 np0005625203.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Feb 20 06:37:28 np0005625203.novalocal NetworkManager[788]: <info>  [1771569448.7466] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-device-plugin-team.so)
Feb 20 06:37:28 np0005625203.novalocal NetworkManager[788]: <info>  [1771569448.7466] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Feb 20 06:37:28 np0005625203.novalocal NetworkManager[788]: <info>  [1771569448.7474] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Feb 20 06:37:28 np0005625203.novalocal NetworkManager[788]: <info>  [1771569448.7475] manager: Networking is enabled by state file
Feb 20 06:37:28 np0005625203.novalocal NetworkManager[788]: <info>  [1771569448.7512] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Feb 20 06:37:28 np0005625203.novalocal NetworkManager[788]: <info>  [1771569448.7512] settings: Loaded settings plugin: keyfile (internal)
Feb 20 06:37:28 np0005625203.novalocal NetworkManager[788]: <info>  [1771569448.7538] dhcp: init: Using DHCP client 'internal'
Feb 20 06:37:28 np0005625203.novalocal NetworkManager[788]: <info>  [1771569448.7540] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Feb 20 06:37:28 np0005625203.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 20 06:37:28 np0005625203.novalocal NetworkManager[788]: <info>  [1771569448.7563] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external')
Feb 20 06:37:28 np0005625203.novalocal NetworkManager[788]: <info>  [1771569448.7576] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'external')
Feb 20 06:37:28 np0005625203.novalocal NetworkManager[788]: <info>  [1771569448.7589] device (lo): Activation: starting connection 'lo' (770919d7-cf31-4f24-b382-145ee7444fec)
Feb 20 06:37:28 np0005625203.novalocal NetworkManager[788]: <info>  [1771569448.7601] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Feb 20 06:37:28 np0005625203.novalocal NetworkManager[788]: <info>  [1771569448.7606] device (eth0): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external')
Feb 20 06:37:28 np0005625203.novalocal NetworkManager[788]: <info>  [1771569448.7648] device (lo): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'external')
Feb 20 06:37:28 np0005625203.novalocal NetworkManager[788]: <info>  [1771569448.7655] device (lo): state change: prepare -> config (reason 'none', sys-iface-state: 'external')
Feb 20 06:37:28 np0005625203.novalocal NetworkManager[788]: <info>  [1771569448.7657] device (lo): state change: config -> ip-config (reason 'none', sys-iface-state: 'external')
Feb 20 06:37:28 np0005625203.novalocal NetworkManager[788]: <info>  [1771569448.7659] device (eth0): carrier: link connected
Feb 20 06:37:28 np0005625203.novalocal NetworkManager[788]: <info>  [1771569448.7662] device (lo): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'external')
Feb 20 06:37:28 np0005625203.novalocal NetworkManager[788]: <info>  [1771569448.7679] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', sys-iface-state: 'managed')
Feb 20 06:37:28 np0005625203.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 20 06:37:28 np0005625203.novalocal NetworkManager[788]: <info>  [1771569448.7720] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Feb 20 06:37:28 np0005625203.novalocal NetworkManager[788]: <info>  [1771569448.7723] device (lo): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'external')
Feb 20 06:37:28 np0005625203.novalocal NetworkManager[788]: <info>  [1771569448.7726] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Feb 20 06:37:28 np0005625203.novalocal NetworkManager[788]: <info>  [1771569448.7726] device (lo): state change: secondaries -> activated (reason 'none', sys-iface-state: 'external')
Feb 20 06:37:28 np0005625203.novalocal NetworkManager[788]: <info>  [1771569448.7734] device (lo): Activation: successful, device activated.
Feb 20 06:37:28 np0005625203.novalocal NetworkManager[788]: <info>  [1771569448.7742] device (eth0): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Feb 20 06:37:28 np0005625203.novalocal NetworkManager[788]: <info>  [1771569448.7744] manager: NetworkManager state is now CONNECTING
Feb 20 06:37:28 np0005625203.novalocal NetworkManager[788]: <info>  [1771569448.7745] device (eth0): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Feb 20 06:37:28 np0005625203.novalocal NetworkManager[788]: <info>  [1771569448.7752] device (eth0): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Feb 20 06:37:28 np0005625203.novalocal NetworkManager[788]: <info>  [1771569448.7758] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb 20 06:37:28 np0005625203.novalocal NetworkManager[788]: <info>  [1771569448.8746] dhcp4 (eth0): state changed new lease, address=38.102.83.53
Feb 20 06:37:28 np0005625203.novalocal NetworkManager[788]: <info>  [1771569448.8758] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Feb 20 06:37:28 np0005625203.novalocal NetworkManager[788]: <info>  [1771569448.8793] device (eth0): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed')
Feb 20 06:37:28 np0005625203.novalocal NetworkManager[788]: <info>  [1771569448.8817] device (eth0): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'managed')
Feb 20 06:37:28 np0005625203.novalocal NetworkManager[788]: <info>  [1771569448.8820] device (eth0): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed')
Feb 20 06:37:28 np0005625203.novalocal NetworkManager[788]: <info>  [1771569448.8826] manager: NetworkManager state is now CONNECTED_SITE
Feb 20 06:37:28 np0005625203.novalocal NetworkManager[788]: <info>  [1771569448.8833] device (eth0): Activation: successful, device activated.
Feb 20 06:37:28 np0005625203.novalocal NetworkManager[788]: <info>  [1771569448.8841] manager: NetworkManager state is now CONNECTED_GLOBAL
Feb 20 06:37:28 np0005625203.novalocal NetworkManager[788]: <info>  [1771569448.8848] manager: startup complete
Feb 20 06:37:28 np0005625203.novalocal systemd[1]: Finished Network Manager Wait Online.
Feb 20 06:37:28 np0005625203.novalocal systemd[1]: Starting Initial cloud-init job (metadata service crawler)...
Feb 20 06:37:29 np0005625203.novalocal systemd[1]: Starting Authorization Manager...
Feb 20 06:37:29 np0005625203.novalocal systemd[1]: Started Dynamic System Tuning Daemon.
Feb 20 06:37:29 np0005625203.novalocal cloud-init[1032]: Cloud-init v. 22.1-9.el9 running 'init' at Fri, 20 Feb 2026 06:37:29 +0000. Up 7.41 seconds.
Feb 20 06:37:29 np0005625203.novalocal polkitd[1028]: Started polkitd version 0.117
Feb 20 06:37:29 np0005625203.novalocal cloud-init[1032]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Feb 20 06:37:29 np0005625203.novalocal cloud-init[1032]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Feb 20 06:37:29 np0005625203.novalocal cloud-init[1032]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Feb 20 06:37:29 np0005625203.novalocal cloud-init[1032]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Feb 20 06:37:29 np0005625203.novalocal cloud-init[1032]: ci-info: |  eth0  | True |         38.102.83.53         | 255.255.255.0 | global | fa:16:3e:b2:b9:37 |
Feb 20 06:37:29 np0005625203.novalocal cloud-init[1032]: ci-info: |  eth0  | True | fe80::f816:3eff:feb2:b937/64 |       .       |  link  | fa:16:3e:b2:b9:37 |
Feb 20 06:37:29 np0005625203.novalocal cloud-init[1032]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Feb 20 06:37:29 np0005625203.novalocal cloud-init[1032]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Feb 20 06:37:29 np0005625203.novalocal cloud-init[1032]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Feb 20 06:37:29 np0005625203.novalocal cloud-init[1032]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Feb 20 06:37:29 np0005625203.novalocal cloud-init[1032]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Feb 20 06:37:29 np0005625203.novalocal cloud-init[1032]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Feb 20 06:37:29 np0005625203.novalocal cloud-init[1032]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Feb 20 06:37:29 np0005625203.novalocal cloud-init[1032]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Feb 20 06:37:29 np0005625203.novalocal cloud-init[1032]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Feb 20 06:37:29 np0005625203.novalocal cloud-init[1032]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Feb 20 06:37:29 np0005625203.novalocal cloud-init[1032]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Feb 20 06:37:29 np0005625203.novalocal cloud-init[1032]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Feb 20 06:37:29 np0005625203.novalocal cloud-init[1032]: ci-info: +-------+-------------+---------+-----------+-------+
Feb 20 06:37:29 np0005625203.novalocal cloud-init[1032]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Feb 20 06:37:29 np0005625203.novalocal cloud-init[1032]: ci-info: +-------+-------------+---------+-----------+-------+
Feb 20 06:37:29 np0005625203.novalocal cloud-init[1032]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Feb 20 06:37:29 np0005625203.novalocal cloud-init[1032]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Feb 20 06:37:29 np0005625203.novalocal cloud-init[1032]: ci-info: +-------+-------------+---------+-----------+-------+
Feb 20 06:37:29 np0005625203.novalocal polkitd[1028]: Loading rules from directory /etc/polkit-1/rules.d
Feb 20 06:37:29 np0005625203.novalocal polkitd[1028]: Loading rules from directory /usr/share/polkit-1/rules.d
Feb 20 06:37:29 np0005625203.novalocal polkitd[1028]: Finished loading, compiling and executing 4 rules
Feb 20 06:37:29 np0005625203.novalocal systemd[1]: Started Authorization Manager.
Feb 20 06:37:29 np0005625203.novalocal polkitd[1028]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Feb 20 06:37:31 np0005625203.novalocal useradd[1118]: new group: name=cloud-user, GID=1001
Feb 20 06:37:31 np0005625203.novalocal useradd[1118]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Feb 20 06:37:31 np0005625203.novalocal useradd[1118]: add 'cloud-user' to group 'adm'
Feb 20 06:37:31 np0005625203.novalocal useradd[1118]: add 'cloud-user' to group 'systemd-journal'
Feb 20 06:37:31 np0005625203.novalocal useradd[1118]: add 'cloud-user' to shadow group 'adm'
Feb 20 06:37:31 np0005625203.novalocal useradd[1118]: add 'cloud-user' to shadow group 'systemd-journal'
Feb 20 06:37:31 np0005625203.novalocal cloud-init[1032]: Generating public/private rsa key pair.
Feb 20 06:37:31 np0005625203.novalocal cloud-init[1032]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Feb 20 06:37:31 np0005625203.novalocal cloud-init[1032]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Feb 20 06:37:31 np0005625203.novalocal cloud-init[1032]: The key fingerprint is:
Feb 20 06:37:31 np0005625203.novalocal cloud-init[1032]: SHA256:inTb0PkiKY54jveHUeMPxWYjIw15B5UfuSVZBxiHvDw root@np0005625203.novalocal
Feb 20 06:37:31 np0005625203.novalocal cloud-init[1032]: The key's randomart image is:
Feb 20 06:37:31 np0005625203.novalocal cloud-init[1032]: +---[RSA 3072]----+
Feb 20 06:37:31 np0005625203.novalocal cloud-init[1032]: |     ..o.o.B+..  |
Feb 20 06:37:31 np0005625203.novalocal cloud-init[1032]: |    o . o O...   |
Feb 20 06:37:31 np0005625203.novalocal cloud-init[1032]: |     + o o *     |
Feb 20 06:37:31 np0005625203.novalocal cloud-init[1032]: |    . *.*.E      |
Feb 20 06:37:31 np0005625203.novalocal cloud-init[1032]: |    .+oBS. .     |
Feb 20 06:37:31 np0005625203.novalocal cloud-init[1032]: |   ..oo* .       |
Feb 20 06:37:31 np0005625203.novalocal cloud-init[1032]: |    oo=oo .      |
Feb 20 06:37:31 np0005625203.novalocal cloud-init[1032]: | ooo......       |
Feb 20 06:37:31 np0005625203.novalocal cloud-init[1032]: |o+o.o.           |
Feb 20 06:37:31 np0005625203.novalocal cloud-init[1032]: +----[SHA256]-----+
Feb 20 06:37:31 np0005625203.novalocal cloud-init[1032]: Generating public/private ecdsa key pair.
Feb 20 06:37:31 np0005625203.novalocal cloud-init[1032]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Feb 20 06:37:31 np0005625203.novalocal cloud-init[1032]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Feb 20 06:37:31 np0005625203.novalocal cloud-init[1032]: The key fingerprint is:
Feb 20 06:37:31 np0005625203.novalocal cloud-init[1032]: SHA256:83tCJClMH/syg/ijIUmhyg2K2LsKUUuYUZ3P1kvpdmo root@np0005625203.novalocal
Feb 20 06:37:31 np0005625203.novalocal cloud-init[1032]: The key's randomart image is:
Feb 20 06:37:31 np0005625203.novalocal cloud-init[1032]: +---[ECDSA 256]---+
Feb 20 06:37:31 np0005625203.novalocal cloud-init[1032]: |.... .           |
Feb 20 06:37:31 np0005625203.novalocal cloud-init[1032]: | +  o . .        |
Feb 20 06:37:31 np0005625203.novalocal cloud-init[1032]: |o +  = o =       |
Feb 20 06:37:31 np0005625203.novalocal cloud-init[1032]: | + o  * O .      |
Feb 20 06:37:31 np0005625203.novalocal cloud-init[1032]: |o.o  o =S=       |
Feb 20 06:37:31 np0005625203.novalocal cloud-init[1032]: |*+o.. . Bo+      |
Feb 20 06:37:31 np0005625203.novalocal cloud-init[1032]: |*.+... . B.      |
Feb 20 06:37:31 np0005625203.novalocal cloud-init[1032]: |.  o .o E ...    |
Feb 20 06:37:31 np0005625203.novalocal cloud-init[1032]: |..o... o  .o     |
Feb 20 06:37:31 np0005625203.novalocal cloud-init[1032]: +----[SHA256]-----+
Feb 20 06:37:31 np0005625203.novalocal cloud-init[1032]: Generating public/private ed25519 key pair.
Feb 20 06:37:31 np0005625203.novalocal cloud-init[1032]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Feb 20 06:37:31 np0005625203.novalocal cloud-init[1032]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Feb 20 06:37:31 np0005625203.novalocal cloud-init[1032]: The key fingerprint is:
Feb 20 06:37:31 np0005625203.novalocal cloud-init[1032]: SHA256:Jyw6f2agrJjpye4AhpkNr8ET1C/ECwrdYvX6zi/AblQ root@np0005625203.novalocal
Feb 20 06:37:31 np0005625203.novalocal cloud-init[1032]: The key's randomart image is:
Feb 20 06:37:31 np0005625203.novalocal cloud-init[1032]: +--[ED25519 256]--+
Feb 20 06:37:31 np0005625203.novalocal cloud-init[1032]: | ooo.            |
Feb 20 06:37:31 np0005625203.novalocal cloud-init[1032]: |o.++..           |
Feb 20 06:37:31 np0005625203.novalocal cloud-init[1032]: |+oo.o .          |
Feb 20 06:37:31 np0005625203.novalocal cloud-init[1032]: |+*.o oE.         |
Feb 20 06:37:31 np0005625203.novalocal cloud-init[1032]: |*+o.o.. S .      |
Feb 20 06:37:31 np0005625203.novalocal cloud-init[1032]: |oo. +o.. o       |
Feb 20 06:37:31 np0005625203.novalocal cloud-init[1032]: |o  +oo..         |
Feb 20 06:37:31 np0005625203.novalocal cloud-init[1032]: |o+. ==. +        |
Feb 20 06:37:31 np0005625203.novalocal cloud-init[1032]: |B*.o  +*.        |
Feb 20 06:37:31 np0005625203.novalocal cloud-init[1032]: +----[SHA256]-----+
Feb 20 06:37:31 np0005625203.novalocal sm-notify[1131]: Version 2.5.4 starting
Feb 20 06:37:31 np0005625203.novalocal systemd[1]: Finished Initial cloud-init job (metadata service crawler).
Feb 20 06:37:31 np0005625203.novalocal sshd[1132]: Server listening on 0.0.0.0 port 22.
Feb 20 06:37:31 np0005625203.novalocal systemd[1]: Reached target Cloud-config availability.
Feb 20 06:37:31 np0005625203.novalocal sshd[1132]: Server listening on :: port 22.
Feb 20 06:37:31 np0005625203.novalocal systemd[1]: Reached target Network is Online.
Feb 20 06:37:31 np0005625203.novalocal sshd[1141]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 06:37:31 np0005625203.novalocal systemd[1]: Starting Apply the settings specified in cloud-config...
Feb 20 06:37:31 np0005625203.novalocal crond[1147]: (CRON) STARTUP (1.5.7)
Feb 20 06:37:31 np0005625203.novalocal sshd[1132]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 06:37:31 np0005625203.novalocal systemd[1]: Run Insights Client at boot was skipped because of an unmet condition check (ConditionPathExists=/etc/insights-client/.run_insights_client_next_boot).
Feb 20 06:37:31 np0005625203.novalocal crond[1147]: (CRON) INFO (Syslog will be used instead of sendmail.)
Feb 20 06:37:31 np0005625203.novalocal systemd[1]: Starting Crash recovery kernel arming...
Feb 20 06:37:31 np0005625203.novalocal crond[1147]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 31% if used.)
Feb 20 06:37:31 np0005625203.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Feb 20 06:37:31 np0005625203.novalocal crond[1147]: (CRON) INFO (running with inotify support)
Feb 20 06:37:31 np0005625203.novalocal systemd[1]: Starting OpenSSH server daemon...
Feb 20 06:37:31 np0005625203.novalocal sshd[1156]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 06:37:31 np0005625203.novalocal systemd[1]: Starting Permit User Sessions...
Feb 20 06:37:31 np0005625203.novalocal sshd[1156]: Unable to negotiate with 38.102.83.114 port 60132: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Feb 20 06:37:31 np0005625203.novalocal systemd[1]: Started Notify NFS peers of a restart.
Feb 20 06:37:31 np0005625203.novalocal sshd[1167]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 06:37:31 np0005625203.novalocal systemd[1]: Finished Permit User Sessions.
Feb 20 06:37:31 np0005625203.novalocal systemd[1]: Started OpenSSH server daemon.
Feb 20 06:37:31 np0005625203.novalocal systemd[1]: Started Command Scheduler.
Feb 20 06:37:31 np0005625203.novalocal systemd[1]: Started Getty on tty1.
Feb 20 06:37:31 np0005625203.novalocal systemd[1]: Started Serial Getty on ttyS0.
Feb 20 06:37:31 np0005625203.novalocal systemd[1]: Reached target Login Prompts.
Feb 20 06:37:31 np0005625203.novalocal systemd[1]: Reached target Multi-User System.
Feb 20 06:37:31 np0005625203.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Feb 20 06:37:31 np0005625203.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Feb 20 06:37:31 np0005625203.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Feb 20 06:37:31 np0005625203.novalocal sshd[1167]: Connection reset by 38.102.83.114 port 60140 [preauth]
Feb 20 06:37:32 np0005625203.novalocal sshd[1172]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 06:37:32 np0005625203.novalocal sshd[1172]: Unable to negotiate with 38.102.83.114 port 60144: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Feb 20 06:37:32 np0005625203.novalocal sshd[1177]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 06:37:32 np0005625203.novalocal sshd[1177]: Unable to negotiate with 38.102.83.114 port 60152: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Feb 20 06:37:32 np0005625203.novalocal sshd[1193]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 06:37:32 np0005625203.novalocal kdumpctl[1135]: kdump: No kdump initial ramdisk found.
Feb 20 06:37:32 np0005625203.novalocal kdumpctl[1135]: kdump: Rebuilding /boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img
Feb 20 06:37:32 np0005625203.novalocal sshd[1141]: Connection closed by 38.102.83.114 port 60126 [preauth]
Feb 20 06:37:32 np0005625203.novalocal sshd[1199]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 06:37:32 np0005625203.novalocal sshd[1199]: Connection reset by 38.102.83.114 port 60168 [preauth]
Feb 20 06:37:32 np0005625203.novalocal sshd[1206]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 06:37:32 np0005625203.novalocal sshd[1206]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 06:37:32 np0005625203.novalocal sshd[1228]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 06:37:32 np0005625203.novalocal sshd[1228]: Unable to negotiate with 38.102.83.114 port 60180: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Feb 20 06:37:32 np0005625203.novalocal cloud-init[1265]: Cloud-init v. 22.1-9.el9 running 'modules:config' at Fri, 20 Feb 2026 06:37:32 +0000. Up 10.39 seconds.
Feb 20 06:37:32 np0005625203.novalocal sshd[1193]: Connection closed by 38.102.83.114 port 60164 [preauth]
Feb 20 06:37:32 np0005625203.novalocal systemd[1]: Finished Apply the settings specified in cloud-config.
Feb 20 06:37:32 np0005625203.novalocal systemd[1]: Starting Execute cloud user/final scripts...
Feb 20 06:37:32 np0005625203.novalocal dracut[1434]: dracut-057-21.git20230214.el9
Feb 20 06:37:32 np0005625203.novalocal cloud-init[1452]: Cloud-init v. 22.1-9.el9 running 'modules:final' at Fri, 20 Feb 2026 06:37:32 +0000. Up 10.76 seconds.
Feb 20 06:37:32 np0005625203.novalocal dracut[1436]: Executing: /usr/bin/dracut --add kdumpbase --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  -o "plymouth resume ifcfg earlykdump" --mount "/dev/disk/by-uuid/a3dd82de-ffc6-4652-88b9-80e003b8f20a /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device -f /boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img 5.14.0-284.11.1.el9_2.x86_64
Feb 20 06:37:32 np0005625203.novalocal cloud-init[1483]: #############################################################
Feb 20 06:37:32 np0005625203.novalocal cloud-init[1487]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Feb 20 06:37:32 np0005625203.novalocal cloud-init[1496]: 256 SHA256:83tCJClMH/syg/ijIUmhyg2K2LsKUUuYUZ3P1kvpdmo root@np0005625203.novalocal (ECDSA)
Feb 20 06:37:32 np0005625203.novalocal cloud-init[1502]: 256 SHA256:Jyw6f2agrJjpye4AhpkNr8ET1C/ECwrdYvX6zi/AblQ root@np0005625203.novalocal (ED25519)
Feb 20 06:37:32 np0005625203.novalocal cloud-init[1507]: 3072 SHA256:inTb0PkiKY54jveHUeMPxWYjIw15B5UfuSVZBxiHvDw root@np0005625203.novalocal (RSA)
Feb 20 06:37:32 np0005625203.novalocal cloud-init[1510]: -----END SSH HOST KEY FINGERPRINTS-----
Feb 20 06:37:32 np0005625203.novalocal cloud-init[1512]: #############################################################
Feb 20 06:37:32 np0005625203.novalocal cloud-init[1452]: Cloud-init v. 22.1-9.el9 finished at Fri, 20 Feb 2026 06:37:32 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 10.99 seconds
Feb 20 06:37:32 np0005625203.novalocal dracut[1436]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Feb 20 06:37:32 np0005625203.novalocal dracut[1436]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Feb 20 06:37:32 np0005625203.novalocal dracut[1436]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Feb 20 06:37:32 np0005625203.novalocal dracut[1436]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Feb 20 06:37:32 np0005625203.novalocal dracut[1436]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Feb 20 06:37:32 np0005625203.novalocal dracut[1436]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Feb 20 06:37:32 np0005625203.novalocal dracut[1436]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Feb 20 06:37:32 np0005625203.novalocal systemd[1]: Reloading Network Manager...
Feb 20 06:37:32 np0005625203.novalocal dracut[1436]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Feb 20 06:37:32 np0005625203.novalocal NetworkManager[788]: <info>  [1771569452.8285] audit: op="reload" arg="0" pid=1598 uid=0 result="success"
Feb 20 06:37:32 np0005625203.novalocal NetworkManager[788]: <info>  [1771569452.8298] config: signal: SIGHUP (no changes from disk)
Feb 20 06:37:32 np0005625203.novalocal dracut[1436]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Feb 20 06:37:32 np0005625203.novalocal dracut[1436]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Feb 20 06:37:32 np0005625203.novalocal systemd[1]: Reloaded Network Manager.
Feb 20 06:37:32 np0005625203.novalocal systemd[1]: Finished Execute cloud user/final scripts.
Feb 20 06:37:32 np0005625203.novalocal systemd[1]: Reached target Cloud-init target.
Feb 20 06:37:32 np0005625203.novalocal dracut[1436]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Feb 20 06:37:32 np0005625203.novalocal dracut[1436]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Feb 20 06:37:32 np0005625203.novalocal dracut[1436]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Feb 20 06:37:32 np0005625203.novalocal dracut[1436]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Feb 20 06:37:32 np0005625203.novalocal dracut[1436]: dracut module 'ifcfg' will not be installed, because it's in the list to be omitted!
Feb 20 06:37:32 np0005625203.novalocal dracut[1436]: dracut module 'plymouth' will not be installed, because it's in the list to be omitted!
Feb 20 06:37:32 np0005625203.novalocal dracut[1436]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Feb 20 06:37:32 np0005625203.novalocal dracut[1436]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Feb 20 06:37:32 np0005625203.novalocal dracut[1436]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Feb 20 06:37:32 np0005625203.novalocal dracut[1436]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Feb 20 06:37:32 np0005625203.novalocal dracut[1436]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Feb 20 06:37:32 np0005625203.novalocal dracut[1436]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Feb 20 06:37:32 np0005625203.novalocal dracut[1436]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Feb 20 06:37:33 np0005625203.novalocal dracut[1436]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Feb 20 06:37:33 np0005625203.novalocal dracut[1436]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Feb 20 06:37:33 np0005625203.novalocal dracut[1436]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Feb 20 06:37:33 np0005625203.novalocal dracut[1436]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Feb 20 06:37:33 np0005625203.novalocal dracut[1436]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Feb 20 06:37:33 np0005625203.novalocal dracut[1436]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Feb 20 06:37:33 np0005625203.novalocal dracut[1436]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Feb 20 06:37:33 np0005625203.novalocal dracut[1436]: dracut module 'resume' will not be installed, because it's in the list to be omitted!
Feb 20 06:37:33 np0005625203.novalocal dracut[1436]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Feb 20 06:37:33 np0005625203.novalocal dracut[1436]: dracut module 'earlykdump' will not be installed, because it's in the list to be omitted!
Feb 20 06:37:33 np0005625203.novalocal dracut[1436]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Feb 20 06:37:33 np0005625203.novalocal dracut[1436]: memstrack is not available
Feb 20 06:37:33 np0005625203.novalocal dracut[1436]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Feb 20 06:37:33 np0005625203.novalocal dracut[1436]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Feb 20 06:37:33 np0005625203.novalocal dracut[1436]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Feb 20 06:37:33 np0005625203.novalocal dracut[1436]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Feb 20 06:37:33 np0005625203.novalocal dracut[1436]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Feb 20 06:37:33 np0005625203.novalocal dracut[1436]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Feb 20 06:37:33 np0005625203.novalocal dracut[1436]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Feb 20 06:37:33 np0005625203.novalocal dracut[1436]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Feb 20 06:37:33 np0005625203.novalocal dracut[1436]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Feb 20 06:37:33 np0005625203.novalocal dracut[1436]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Feb 20 06:37:33 np0005625203.novalocal dracut[1436]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Feb 20 06:37:33 np0005625203.novalocal dracut[1436]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Feb 20 06:37:33 np0005625203.novalocal dracut[1436]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Feb 20 06:37:33 np0005625203.novalocal dracut[1436]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Feb 20 06:37:33 np0005625203.novalocal dracut[1436]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Feb 20 06:37:33 np0005625203.novalocal dracut[1436]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Feb 20 06:37:33 np0005625203.novalocal dracut[1436]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Feb 20 06:37:33 np0005625203.novalocal dracut[1436]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Feb 20 06:37:33 np0005625203.novalocal dracut[1436]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Feb 20 06:37:33 np0005625203.novalocal dracut[1436]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Feb 20 06:37:33 np0005625203.novalocal dracut[1436]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Feb 20 06:37:33 np0005625203.novalocal dracut[1436]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Feb 20 06:37:33 np0005625203.novalocal dracut[1436]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Feb 20 06:37:33 np0005625203.novalocal dracut[1436]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Feb 20 06:37:33 np0005625203.novalocal dracut[1436]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Feb 20 06:37:33 np0005625203.novalocal dracut[1436]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Feb 20 06:37:33 np0005625203.novalocal dracut[1436]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Feb 20 06:37:33 np0005625203.novalocal dracut[1436]: memstrack is not available
Feb 20 06:37:33 np0005625203.novalocal dracut[1436]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Feb 20 06:37:33 np0005625203.novalocal dracut[1436]: *** Including module: systemd ***
Feb 20 06:37:33 np0005625203.novalocal chronyd[765]: Selected source 199.182.221.110 (2.rhel.pool.ntp.org)
Feb 20 06:37:33 np0005625203.novalocal chronyd[765]: System clock TAI offset set to 37 seconds
Feb 20 06:37:34 np0005625203.novalocal dracut[1436]: *** Including module: systemd-initrd ***
Feb 20 06:37:34 np0005625203.novalocal dracut[1436]: *** Including module: i18n ***
Feb 20 06:37:34 np0005625203.novalocal dracut[1436]: No KEYMAP configured.
Feb 20 06:37:34 np0005625203.novalocal dracut[1436]: *** Including module: drm ***
Feb 20 06:37:34 np0005625203.novalocal dracut[1436]: *** Including module: prefixdevname ***
Feb 20 06:37:34 np0005625203.novalocal dracut[1436]: *** Including module: kernel-modules ***
Feb 20 06:37:35 np0005625203.novalocal chronyd[765]: Selected source 149.56.19.163 (2.rhel.pool.ntp.org)
Feb 20 06:37:35 np0005625203.novalocal dracut[1436]: *** Including module: kernel-modules-extra ***
Feb 20 06:37:35 np0005625203.novalocal dracut[1436]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Feb 20 06:37:35 np0005625203.novalocal dracut[1436]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Feb 20 06:37:35 np0005625203.novalocal dracut[1436]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Feb 20 06:37:35 np0005625203.novalocal dracut[1436]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Feb 20 06:37:35 np0005625203.novalocal dracut[1436]: *** Including module: qemu ***
Feb 20 06:37:35 np0005625203.novalocal dracut[1436]: *** Including module: fstab-sys ***
Feb 20 06:37:35 np0005625203.novalocal dracut[1436]: *** Including module: rootfs-block ***
Feb 20 06:37:35 np0005625203.novalocal dracut[1436]: *** Including module: terminfo ***
Feb 20 06:37:35 np0005625203.novalocal dracut[1436]: *** Including module: udev-rules ***
Feb 20 06:37:36 np0005625203.novalocal dracut[1436]: Skipping udev rule: 91-permissions.rules
Feb 20 06:37:36 np0005625203.novalocal dracut[1436]: Skipping udev rule: 80-drivers-modprobe.rules
Feb 20 06:37:36 np0005625203.novalocal dracut[1436]: *** Including module: virtiofs ***
Feb 20 06:37:36 np0005625203.novalocal dracut[1436]: *** Including module: dracut-systemd ***
Feb 20 06:37:36 np0005625203.novalocal dracut[1436]: *** Including module: usrmount ***
Feb 20 06:37:36 np0005625203.novalocal dracut[1436]: *** Including module: base ***
Feb 20 06:37:36 np0005625203.novalocal dracut[1436]: *** Including module: fs-lib ***
Feb 20 06:37:36 np0005625203.novalocal dracut[1436]: *** Including module: kdumpbase ***
Feb 20 06:37:36 np0005625203.novalocal dracut[1436]: *** Including module: microcode_ctl-fw_dir_override ***
Feb 20 06:37:36 np0005625203.novalocal dracut[1436]:   microcode_ctl module: mangling fw_dir
Feb 20 06:37:36 np0005625203.novalocal dracut[1436]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Feb 20 06:37:36 np0005625203.novalocal dracut[1436]:     microcode_ctl: configuration "intel" is ignored
Feb 20 06:37:36 np0005625203.novalocal dracut[1436]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Feb 20 06:37:36 np0005625203.novalocal dracut[1436]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Feb 20 06:37:36 np0005625203.novalocal dracut[1436]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Feb 20 06:37:37 np0005625203.novalocal dracut[1436]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Feb 20 06:37:37 np0005625203.novalocal dracut[1436]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Feb 20 06:37:37 np0005625203.novalocal dracut[1436]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Feb 20 06:37:37 np0005625203.novalocal dracut[1436]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Feb 20 06:37:37 np0005625203.novalocal dracut[1436]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Feb 20 06:37:37 np0005625203.novalocal dracut[1436]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Feb 20 06:37:37 np0005625203.novalocal dracut[1436]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Feb 20 06:37:37 np0005625203.novalocal dracut[1436]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Feb 20 06:37:37 np0005625203.novalocal dracut[1436]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Feb 20 06:37:37 np0005625203.novalocal dracut[1436]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Feb 20 06:37:37 np0005625203.novalocal dracut[1436]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Feb 20 06:37:37 np0005625203.novalocal dracut[1436]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Feb 20 06:37:37 np0005625203.novalocal dracut[1436]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Feb 20 06:37:37 np0005625203.novalocal dracut[1436]:     microcode_ctl: final fw_dir: "/lib/firmware/updates/5.14.0-284.11.1.el9_2.x86_64 /lib/firmware/updates /lib/firmware/5.14.0-284.11.1.el9_2.x86_64 /lib/firmware"
Feb 20 06:37:37 np0005625203.novalocal dracut[1436]: *** Including module: shutdown ***
Feb 20 06:37:37 np0005625203.novalocal dracut[1436]: *** Including module: squash ***
Feb 20 06:37:37 np0005625203.novalocal dracut[1436]: *** Including modules done ***
Feb 20 06:37:37 np0005625203.novalocal dracut[1436]: *** Installing kernel module dependencies ***
Feb 20 06:37:38 np0005625203.novalocal dracut[1436]: *** Installing kernel module dependencies done ***
Feb 20 06:37:38 np0005625203.novalocal dracut[1436]: *** Resolving executable dependencies ***
Feb 20 06:37:38 np0005625203.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb 20 06:37:39 np0005625203.novalocal dracut[1436]: *** Resolving executable dependencies done ***
Feb 20 06:37:39 np0005625203.novalocal dracut[1436]: *** Hardlinking files ***
Feb 20 06:37:39 np0005625203.novalocal dracut[1436]: Mode:           real
Feb 20 06:37:39 np0005625203.novalocal dracut[1436]: Files:          1099
Feb 20 06:37:39 np0005625203.novalocal dracut[1436]: Linked:         3 files
Feb 20 06:37:39 np0005625203.novalocal dracut[1436]: Compared:       0 xattrs
Feb 20 06:37:39 np0005625203.novalocal dracut[1436]: Compared:       373 files
Feb 20 06:37:39 np0005625203.novalocal dracut[1436]: Saved:          61.04 KiB
Feb 20 06:37:39 np0005625203.novalocal dracut[1436]: Duration:       0.051418 seconds
Feb 20 06:37:39 np0005625203.novalocal dracut[1436]: *** Hardlinking files done ***
Feb 20 06:37:39 np0005625203.novalocal dracut[1436]: Could not find 'strip'. Not stripping the initramfs.
Feb 20 06:37:39 np0005625203.novalocal dracut[1436]: *** Generating early-microcode cpio image ***
Feb 20 06:37:39 np0005625203.novalocal dracut[1436]: *** Constructing AuthenticAMD.bin ***
Feb 20 06:37:39 np0005625203.novalocal dracut[1436]: *** Store current command line parameters ***
Feb 20 06:37:39 np0005625203.novalocal dracut[1436]: Stored kernel commandline:
Feb 20 06:37:39 np0005625203.novalocal dracut[1436]: No dracut internal kernel commandline stored in the initramfs
Feb 20 06:37:39 np0005625203.novalocal dracut[1436]: *** Install squash loader ***
Feb 20 06:37:40 np0005625203.novalocal dracut[1436]: *** Squashing the files inside the initramfs ***
Feb 20 06:37:41 np0005625203.novalocal dracut[1436]: *** Squashing the files inside the initramfs done ***
Feb 20 06:37:41 np0005625203.novalocal dracut[1436]: *** Creating image file '/boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img' ***
Feb 20 06:37:41 np0005625203.novalocal dracut[1436]: *** Creating initramfs image file '/boot/initramfs-5.14.0-284.11.1.el9_2.x86_64kdump.img' done ***
Feb 20 06:37:41 np0005625203.novalocal kdumpctl[1135]: kdump: kexec: loaded kdump kernel
Feb 20 06:37:41 np0005625203.novalocal kdumpctl[1135]: kdump: Starting kdump: [OK]
Feb 20 06:37:41 np0005625203.novalocal systemd[1]: Finished Crash recovery kernel arming.
Feb 20 06:37:41 np0005625203.novalocal systemd[1]: Startup finished in 1.365s (kernel) + 2.103s (initrd) + 16.809s (userspace) = 20.278s.
Feb 20 06:37:58 np0005625203.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 20 06:38:17 np0005625203.novalocal sshd[4175]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 06:38:17 np0005625203.novalocal sshd[4175]: Accepted publickey for zuul from 38.102.83.114 port 60390 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Feb 20 06:38:17 np0005625203.novalocal systemd[1]: Created slice User Slice of UID 1000.
Feb 20 06:38:17 np0005625203.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Feb 20 06:38:17 np0005625203.novalocal systemd-logind[759]: New session 1 of user zuul.
Feb 20 06:38:17 np0005625203.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Feb 20 06:38:17 np0005625203.novalocal systemd[1]: Starting User Manager for UID 1000...
Feb 20 06:38:17 np0005625203.novalocal systemd[4179]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by (uid=0)
Feb 20 06:38:17 np0005625203.novalocal systemd[4179]: Queued start job for default target Main User Target.
Feb 20 06:38:17 np0005625203.novalocal systemd[4179]: Created slice User Application Slice.
Feb 20 06:38:17 np0005625203.novalocal systemd[4179]: Started Mark boot as successful after the user session has run 2 minutes.
Feb 20 06:38:17 np0005625203.novalocal systemd[4179]: Started Daily Cleanup of User's Temporary Directories.
Feb 20 06:38:17 np0005625203.novalocal systemd[4179]: Reached target Paths.
Feb 20 06:38:17 np0005625203.novalocal systemd[4179]: Reached target Timers.
Feb 20 06:38:17 np0005625203.novalocal systemd[4179]: Starting D-Bus User Message Bus Socket...
Feb 20 06:38:17 np0005625203.novalocal systemd[4179]: Starting Create User's Volatile Files and Directories...
Feb 20 06:38:17 np0005625203.novalocal systemd[4179]: Finished Create User's Volatile Files and Directories.
Feb 20 06:38:17 np0005625203.novalocal systemd[4179]: Listening on D-Bus User Message Bus Socket.
Feb 20 06:38:17 np0005625203.novalocal systemd[4179]: Reached target Sockets.
Feb 20 06:38:17 np0005625203.novalocal systemd[4179]: Reached target Basic System.
Feb 20 06:38:17 np0005625203.novalocal systemd[4179]: Reached target Main User Target.
Feb 20 06:38:17 np0005625203.novalocal systemd[4179]: Startup finished in 130ms.
Feb 20 06:38:17 np0005625203.novalocal systemd[1]: Started User Manager for UID 1000.
Feb 20 06:38:17 np0005625203.novalocal systemd[1]: Started Session 1 of User zuul.
Feb 20 06:38:17 np0005625203.novalocal sshd[4175]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Feb 20 06:38:18 np0005625203.novalocal python3[4231]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 20 06:38:26 np0005625203.novalocal python3[4249]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 20 06:38:33 np0005625203.novalocal python3[4302]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 20 06:38:34 np0005625203.novalocal python3[4332]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Feb 20 06:38:37 np0005625203.novalocal python3[4348]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDMsWl2iDqidUqzeXIedhPUnMaIwstb6f7CXhkhrGMXj2w21P81bPYesLJBAmIee3BaCMGYsrUpWu7u92b6J2nM7sdG3Lo3YZqie0IUZG/NpfJ+OEJsLIQ5zFT7jGNI77bmiCBX4jibhRJuDVtuombA5bly/q1x7yqRZ5UuwmTahAz49MPmun69R00RpbzOjt8OgQQ6ZdjDclItely6T7y+tDL3UemjBYcKGNckzXB1nWJEjviTyIXiLLqJdcxWarPXzxewMUoyRiTPQU6uRQ1VhuBRFyiofrACRWtFHfLEuWMVqQAL1OtJD5P+KSuVKHkT/MAFSdo0OptGxMmFN5ZvbruOyab6MovPK/9sTaFrhzT6tE78w+fEIFe4kPvM2bw80sKzO9yrkvqs1LL9ToVf9r2TufxRZVCibTFSa9Cw0U9yILFdCrlCZ2GMFNTCFuM7vQDAKwrKndvATAxKrm+D8Xme2+SVCv3oKJtx0m4D6gc9/IMzFhC7Ibh6bJaSj2s= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 06:38:38 np0005625203.novalocal python3[4362]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 06:38:39 np0005625203.novalocal python3[4421]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 06:38:40 np0005625203.novalocal python3[4462]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771569519.408956-392-145026573871640/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=2f497d714b8e492188c98067e59a9994_id_rsa follow=False checksum=1ede725f5cdca64ff103c7e62f7bb7b42f0b9244 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 06:38:41 np0005625203.novalocal python3[4535]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 06:38:41 np0005625203.novalocal python3[4576]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771569521.175617-492-63477904534716/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=2f497d714b8e492188c98067e59a9994_id_rsa.pub follow=False checksum=d5896bb6dcd221ffe99ce3acccb68a5152af8369 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 06:38:43 np0005625203.novalocal python3[4604]: ansible-ping Invoked with data=pong
Feb 20 06:38:45 np0005625203.novalocal python3[4618]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 20 06:38:49 np0005625203.novalocal python3[4672]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Feb 20 06:38:51 np0005625203.novalocal python3[4694]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 06:38:51 np0005625203.novalocal python3[4708]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 06:38:52 np0005625203.novalocal python3[4722]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 06:38:53 np0005625203.novalocal python3[4736]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 06:38:53 np0005625203.novalocal python3[4750]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 06:38:53 np0005625203.novalocal python3[4764]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 06:38:56 np0005625203.novalocal sudo[4778]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aqajecqprrtknklkirpisytyphuxmzyt ; /usr/bin/python3
Feb 20 06:38:56 np0005625203.novalocal sudo[4778]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 06:38:56 np0005625203.novalocal python3[4780]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 06:38:56 np0005625203.novalocal sudo[4778]: pam_unix(sudo:session): session closed for user root
Feb 20 06:38:57 np0005625203.novalocal sudo[4826]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aygwadrkqylwtzmecwzcvfytbtjypdjn ; /usr/bin/python3
Feb 20 06:38:57 np0005625203.novalocal sudo[4826]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 06:38:58 np0005625203.novalocal python3[4828]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 06:38:58 np0005625203.novalocal sudo[4826]: pam_unix(sudo:session): session closed for user root
Feb 20 06:38:58 np0005625203.novalocal sudo[4869]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jfrphdschidpehzhoarpfjwydbrdmlzj ; /usr/bin/python3
Feb 20 06:38:58 np0005625203.novalocal sudo[4869]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 06:38:58 np0005625203.novalocal python3[4871]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1771569537.8353457-100-200278883922492/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 06:38:58 np0005625203.novalocal sudo[4869]: pam_unix(sudo:session): session closed for user root
Feb 20 06:39:05 np0005625203.novalocal python3[4899]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 06:39:06 np0005625203.novalocal python3[4913]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 06:39:06 np0005625203.novalocal python3[4927]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 06:39:06 np0005625203.novalocal python3[4941]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 06:39:07 np0005625203.novalocal python3[4955]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 06:39:07 np0005625203.novalocal python3[4969]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 06:39:07 np0005625203.novalocal python3[4983]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 06:39:07 np0005625203.novalocal python3[4997]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 06:39:08 np0005625203.novalocal python3[5011]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 06:39:08 np0005625203.novalocal python3[5025]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 06:39:08 np0005625203.novalocal python3[5039]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 06:39:09 np0005625203.novalocal python3[5053]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 06:39:09 np0005625203.novalocal python3[5067]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICWBreHW95Wz2Toz5YwCGQwFcUG8oFYkienDh9tntmDc ralfieri@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 06:39:09 np0005625203.novalocal python3[5081]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 06:39:09 np0005625203.novalocal python3[5095]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 06:39:10 np0005625203.novalocal python3[5109]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 06:39:10 np0005625203.novalocal python3[5123]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 06:39:10 np0005625203.novalocal python3[5137]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 06:39:10 np0005625203.novalocal python3[5151]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 06:39:11 np0005625203.novalocal python3[5165]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 06:39:11 np0005625203.novalocal python3[5179]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 06:39:11 np0005625203.novalocal python3[5193]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 06:39:12 np0005625203.novalocal python3[5207]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 06:39:12 np0005625203.novalocal python3[5221]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 06:39:12 np0005625203.novalocal python3[5235]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 06:39:12 np0005625203.novalocal python3[5249]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 06:39:13 np0005625203.novalocal sudo[5263]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yttbzvtrkugzhiofkefntabesgfsbzsz ; /usr/bin/python3
Feb 20 06:39:13 np0005625203.novalocal sudo[5263]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 06:39:13 np0005625203.novalocal python3[5265]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Feb 20 06:39:13 np0005625203.novalocal systemd[1]: Starting Time & Date Service...
Feb 20 06:39:13 np0005625203.novalocal systemd[1]: Started Time & Date Service.
Feb 20 06:39:13 np0005625203.novalocal systemd-timedated[5267]: Changed time zone to 'UTC' (UTC).
Feb 20 06:39:14 np0005625203.novalocal sudo[5263]: pam_unix(sudo:session): session closed for user root
Feb 20 06:39:14 np0005625203.novalocal sudo[5284]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-msbdtbfhuzuyjkirijmmhekbsdrctmhd ; /usr/bin/python3
Feb 20 06:39:14 np0005625203.novalocal sudo[5284]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 06:39:14 np0005625203.novalocal python3[5286]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 06:39:14 np0005625203.novalocal sudo[5284]: pam_unix(sudo:session): session closed for user root
Feb 20 06:39:15 np0005625203.novalocal python3[5332]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 06:39:16 np0005625203.novalocal python3[5373]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1771569555.6290832-495-156356872618505/source _original_basename=tmpfj9yc_gd follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 06:39:17 np0005625203.novalocal python3[5433]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 06:39:17 np0005625203.novalocal python3[5474]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1771569557.162269-586-24077753330232/source _original_basename=tmpo2rq3bx6 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 06:39:19 np0005625203.novalocal sudo[5534]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ffrigsazkgawiyrwwgbxrzpwfsretjns ; /usr/bin/python3
Feb 20 06:39:19 np0005625203.novalocal sudo[5534]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 06:39:19 np0005625203.novalocal python3[5536]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 06:39:19 np0005625203.novalocal sudo[5534]: pam_unix(sudo:session): session closed for user root
Feb 20 06:39:19 np0005625203.novalocal sudo[5577]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xqxnuzsypvfyawwrgjfxugpizhtgwwko ; /usr/bin/python3
Feb 20 06:39:19 np0005625203.novalocal sudo[5577]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 06:39:19 np0005625203.novalocal python3[5579]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1771569559.1722383-730-68332562810850/source _original_basename=tmpdkhi9ufg follow=False checksum=1cc2ea2b76967ada2d4710a35e138c3751da2100 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 06:39:19 np0005625203.novalocal sudo[5577]: pam_unix(sudo:session): session closed for user root
Feb 20 06:39:20 np0005625203.novalocal python3[5607]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 06:39:21 np0005625203.novalocal python3[5623]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 06:39:22 np0005625203.novalocal sudo[5671]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zatbpbvesffaxfhyistgshzbixfmcspm ; /usr/bin/python3
Feb 20 06:39:22 np0005625203.novalocal sudo[5671]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 06:39:22 np0005625203.novalocal python3[5673]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 06:39:22 np0005625203.novalocal sudo[5671]: pam_unix(sudo:session): session closed for user root
Feb 20 06:39:22 np0005625203.novalocal sudo[5714]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hpdkohjvuheyidhhrjluywsdmdwbusbr ; /usr/bin/python3
Feb 20 06:39:22 np0005625203.novalocal sudo[5714]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 06:39:22 np0005625203.novalocal python3[5716]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1771569562.1104398-857-218050887515134/source _original_basename=tmp_g0ilpvl follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 06:39:22 np0005625203.novalocal sudo[5714]: pam_unix(sudo:session): session closed for user root
Feb 20 06:39:23 np0005625203.novalocal sudo[5745]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-llbbbbtobjakyixaswjfgduohiuhcmqn ; /usr/bin/python3
Feb 20 06:39:23 np0005625203.novalocal sudo[5745]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 06:39:23 np0005625203.novalocal python3[5747]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ef9-e89a-ff2a-a63c-000000000023-1-overcloudnovacompute1 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 06:39:24 np0005625203.novalocal sudo[5745]: pam_unix(sudo:session): session closed for user root
Feb 20 06:39:25 np0005625203.novalocal python3[5765]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env
                                                       _uses_shell=True zuul_log_id=fa163ef9-e89a-ff2a-a63c-000000000024-1-overcloudnovacompute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Feb 20 06:39:26 np0005625203.novalocal python3[5783]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 06:39:44 np0005625203.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Feb 20 06:39:46 np0005625203.novalocal sudo[5800]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lxzlmoflaseetpthffkovjuukhgjzzzl ; /usr/bin/python3
Feb 20 06:39:46 np0005625203.novalocal sudo[5800]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 06:39:46 np0005625203.novalocal python3[5802]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 06:39:46 np0005625203.novalocal sudo[5800]: pam_unix(sudo:session): session closed for user root
Feb 20 06:40:46 np0005625203.novalocal sshd[4188]: Received disconnect from 38.102.83.114 port 60390:11: disconnected by user
Feb 20 06:40:46 np0005625203.novalocal sshd[4188]: Disconnected from user zuul 38.102.83.114 port 60390
Feb 20 06:40:46 np0005625203.novalocal sshd[4175]: pam_unix(sshd:session): session closed for user zuul
Feb 20 06:40:46 np0005625203.novalocal systemd-logind[759]: Session 1 logged out. Waiting for processes to exit.
Feb 20 06:40:49 np0005625203.novalocal systemd[4179]: Starting Mark boot as successful...
Feb 20 06:40:49 np0005625203.novalocal systemd[4179]: Finished Mark boot as successful.
Feb 20 06:41:28 np0005625203.novalocal systemd[1]: Unmounting EFI System Partition Automount...
Feb 20 06:41:28 np0005625203.novalocal systemd[1]: efi.mount: Deactivated successfully.
Feb 20 06:41:28 np0005625203.novalocal systemd[1]: Unmounted EFI System Partition Automount.
Feb 20 06:43:19 np0005625203.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000
Feb 20 06:43:19 np0005625203.novalocal kernel: pci 0000:00:07.0: reg 0x10: [io  0x0000-0x003f]
Feb 20 06:43:19 np0005625203.novalocal kernel: pci 0000:00:07.0: reg 0x14: [mem 0x00000000-0x00000fff]
Feb 20 06:43:19 np0005625203.novalocal kernel: pci 0000:00:07.0: reg 0x20: [mem 0x00000000-0x00003fff 64bit pref]
Feb 20 06:43:19 np0005625203.novalocal kernel: pci 0000:00:07.0: reg 0x30: [mem 0x00000000-0x0007ffff pref]
Feb 20 06:43:19 np0005625203.novalocal kernel: pci 0000:00:07.0: BAR 6: assigned [mem 0xc0000000-0xc007ffff pref]
Feb 20 06:43:19 np0005625203.novalocal kernel: pci 0000:00:07.0: BAR 4: assigned [mem 0x440000000-0x440003fff 64bit pref]
Feb 20 06:43:19 np0005625203.novalocal kernel: pci 0000:00:07.0: BAR 1: assigned [mem 0xc0080000-0xc0080fff]
Feb 20 06:43:19 np0005625203.novalocal kernel: pci 0000:00:07.0: BAR 0: assigned [io  0x1000-0x103f]
Feb 20 06:43:19 np0005625203.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Feb 20 06:43:19 np0005625203.novalocal NetworkManager[788]: <info>  [1771569799.0693] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Feb 20 06:43:19 np0005625203.novalocal systemd-udevd[5809]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 06:43:19 np0005625203.novalocal NetworkManager[788]: <info>  [1771569799.0822] device (eth1): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external')
Feb 20 06:43:19 np0005625203.novalocal NetworkManager[788]: <info>  [1771569799.0856] settings: (eth1): created default wired connection 'Wired connection 1'
Feb 20 06:43:19 np0005625203.novalocal systemd[4179]: Created slice User Background Tasks Slice.
Feb 20 06:43:19 np0005625203.novalocal NetworkManager[788]: <info>  [1771569799.0864] device (eth1): carrier: link connected
Feb 20 06:43:19 np0005625203.novalocal NetworkManager[788]: <info>  [1771569799.0867] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', sys-iface-state: 'managed')
Feb 20 06:43:19 np0005625203.novalocal NetworkManager[788]: <info>  [1771569799.0876] policy: auto-activating connection 'Wired connection 1' (2ba74377-63d0-3206-8fe0-ecc8ab2abf91)
Feb 20 06:43:19 np0005625203.novalocal NetworkManager[788]: <info>  [1771569799.0883] device (eth1): Activation: starting connection 'Wired connection 1' (2ba74377-63d0-3206-8fe0-ecc8ab2abf91)
Feb 20 06:43:19 np0005625203.novalocal NetworkManager[788]: <info>  [1771569799.0884] device (eth1): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'managed')
Feb 20 06:43:19 np0005625203.novalocal systemd[4179]: Starting Cleanup of User's Temporary Files and Directories...
Feb 20 06:43:19 np0005625203.novalocal NetworkManager[788]: <info>  [1771569799.0892] device (eth1): state change: prepare -> config (reason 'none', sys-iface-state: 'managed')
Feb 20 06:43:19 np0005625203.novalocal NetworkManager[788]: <info>  [1771569799.0899] device (eth1): state change: config -> ip-config (reason 'none', sys-iface-state: 'managed')
Feb 20 06:43:19 np0005625203.novalocal NetworkManager[788]: <info>  [1771569799.0905] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Feb 20 06:43:19 np0005625203.novalocal systemd[4179]: Finished Cleanup of User's Temporary Files and Directories.
Feb 20 06:43:19 np0005625203.novalocal sshd[5813]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 06:43:20 np0005625203.novalocal sshd[5813]: Accepted publickey for zuul from 38.102.83.114 port 36446 ssh2: RSA SHA256:uaxCfdsB9dLyL4znjsLCPDTT6O+6xZzjHRjczJFOA6k
Feb 20 06:43:20 np0005625203.novalocal systemd-logind[759]: New session 3 of user zuul.
Feb 20 06:43:20 np0005625203.novalocal systemd[1]: Started Session 3 of User zuul.
Feb 20 06:43:20 np0005625203.novalocal sshd[5813]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Feb 20 06:43:20 np0005625203.novalocal kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth1: link becomes ready
Feb 20 06:43:20 np0005625203.novalocal python3[5830]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ef9-e89a-fb18-e746-000000000408-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 06:43:33 np0005625203.novalocal sudo[5878]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bxmttbyocwygseglardznxfrcvaciehc ; OS_CLOUD=vexxhost /usr/bin/python3
Feb 20 06:43:33 np0005625203.novalocal sudo[5878]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 06:43:33 np0005625203.novalocal python3[5880]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 06:43:33 np0005625203.novalocal sudo[5878]: pam_unix(sudo:session): session closed for user root
Feb 20 06:43:33 np0005625203.novalocal sudo[5921]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-suszgomxpzrynrdoohvfibteyuwautfa ; OS_CLOUD=vexxhost /usr/bin/python3
Feb 20 06:43:33 np0005625203.novalocal sudo[5921]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 06:43:33 np0005625203.novalocal python3[5923]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771569813.1389546-486-72960172612674/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=1eb0c8019c85c43f83665a7bd8398bec863c8a06 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 06:43:33 np0005625203.novalocal sudo[5921]: pam_unix(sudo:session): session closed for user root
Feb 20 06:43:34 np0005625203.novalocal sudo[5951]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ycznaythzcbnbgtopqjxzohtaewgzclo ; OS_CLOUD=vexxhost /usr/bin/python3
Feb 20 06:43:34 np0005625203.novalocal sudo[5951]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 06:43:34 np0005625203.novalocal python3[5953]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 20 06:43:35 np0005625203.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Feb 20 06:43:35 np0005625203.novalocal systemd[1]: Stopped Network Manager Wait Online.
Feb 20 06:43:35 np0005625203.novalocal systemd[1]: Stopping Network Manager Wait Online...
Feb 20 06:43:35 np0005625203.novalocal systemd[1]: Stopping Network Manager...
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[788]: <info>  [1771569815.4373] caught SIGTERM, shutting down normally.
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[788]: <info>  [1771569815.4498] dhcp4 (eth0): canceled DHCP transaction
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[788]: <info>  [1771569815.4499] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[788]: <info>  [1771569815.4499] dhcp4 (eth0): state changed no lease
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[788]: <info>  [1771569815.4505] manager: NetworkManager state is now CONNECTING
Feb 20 06:43:35 np0005625203.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[788]: <info>  [1771569815.4628] dhcp4 (eth1): canceled DHCP transaction
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[788]: <info>  [1771569815.4629] dhcp4 (eth1): state changed no lease
Feb 20 06:43:35 np0005625203.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[788]: <info>  [1771569815.4728] exiting (success)
Feb 20 06:43:35 np0005625203.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Feb 20 06:43:35 np0005625203.novalocal systemd[1]: Stopped Network Manager.
Feb 20 06:43:35 np0005625203.novalocal systemd[1]: NetworkManager.service: Consumed 1.880s CPU time.
Feb 20 06:43:35 np0005625203.novalocal systemd[1]: Starting Network Manager...
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[5968]: <info>  [1771569815.5299] NetworkManager (version 1.42.2-1.el9) is starting... (after a restart, boot:1eccdeca-d6e6-4e77-a783-9cc9caeaa1c0)
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[5968]: <info>  [1771569815.5302] Read config: /etc/NetworkManager/NetworkManager.conf (run: 15-carrier-timeout.conf)
Feb 20 06:43:35 np0005625203.novalocal systemd[1]: Started Network Manager.
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[5968]: <info>  [1771569815.5324] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Feb 20 06:43:35 np0005625203.novalocal systemd[1]: Starting Network Manager Wait Online...
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[5968]: <info>  [1771569815.5372] manager[0x563a57c8b090]: monitoring kernel firmware directory '/lib/firmware'.
Feb 20 06:43:35 np0005625203.novalocal systemd[1]: Starting Hostname Service...
Feb 20 06:43:35 np0005625203.novalocal sudo[5951]: pam_unix(sudo:session): session closed for user root
Feb 20 06:43:35 np0005625203.novalocal systemd[1]: Started Hostname Service.
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[5968]: <info>  [1771569815.5993] hostname: hostname: using hostnamed
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[5968]: <info>  [1771569815.5993] hostname: static hostname changed from (none) to "np0005625203.novalocal"
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[5968]: <info>  [1771569815.5999] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[5968]: <info>  [1771569815.6005] manager[0x563a57c8b090]: rfkill: Wi-Fi hardware radio set enabled
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[5968]: <info>  [1771569815.6005] manager[0x563a57c8b090]: rfkill: WWAN hardware radio set enabled
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[5968]: <info>  [1771569815.6042] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-device-plugin-team.so)
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[5968]: <info>  [1771569815.6043] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[5968]: <info>  [1771569815.6043] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[5968]: <info>  [1771569815.6044] manager: Networking is enabled by state file
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[5968]: <info>  [1771569815.6051] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.42.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[5968]: <info>  [1771569815.6052] settings: Loaded settings plugin: keyfile (internal)
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[5968]: <info>  [1771569815.6091] dhcp: init: Using DHCP client 'internal'
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[5968]: <info>  [1771569815.6095] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[5968]: <info>  [1771569815.6102] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external')
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[5968]: <info>  [1771569815.6108] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'external')
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[5968]: <info>  [1771569815.6119] device (lo): Activation: starting connection 'lo' (770919d7-cf31-4f24-b382-145ee7444fec)
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[5968]: <info>  [1771569815.6127] device (eth0): carrier: link connected
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[5968]: <info>  [1771569815.6133] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[5968]: <info>  [1771569815.6140] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[5968]: <info>  [1771569815.6140] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'assume')
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[5968]: <info>  [1771569815.6149] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'assume')
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[5968]: <info>  [1771569815.6159] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[5968]: <info>  [1771569815.6166] device (eth1): carrier: link connected
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[5968]: <info>  [1771569815.6172] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[5968]: <info>  [1771569815.6179] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (2ba74377-63d0-3206-8fe0-ecc8ab2abf91) (indicated)
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[5968]: <info>  [1771569815.6179] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'assume')
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[5968]: <info>  [1771569815.6186] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', sys-iface-state: 'assume')
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[5968]: <info>  [1771569815.6196] device (eth1): Activation: starting connection 'Wired connection 1' (2ba74377-63d0-3206-8fe0-ecc8ab2abf91)
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[5968]: <info>  [1771569815.6223] device (lo): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'external')
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[5968]: <info>  [1771569815.6226] device (lo): state change: prepare -> config (reason 'none', sys-iface-state: 'external')
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[5968]: <info>  [1771569815.6230] device (lo): state change: config -> ip-config (reason 'none', sys-iface-state: 'external')
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[5968]: <info>  [1771569815.6235] device (eth0): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'assume')
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[5968]: <info>  [1771569815.6239] device (eth0): state change: prepare -> config (reason 'none', sys-iface-state: 'assume')
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[5968]: <info>  [1771569815.6242] device (eth1): state change: disconnected -> prepare (reason 'none', sys-iface-state: 'assume')
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[5968]: <info>  [1771569815.6245] device (eth1): state change: prepare -> config (reason 'none', sys-iface-state: 'assume')
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[5968]: <info>  [1771569815.6248] device (lo): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'external')
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[5968]: <info>  [1771569815.6256] device (eth0): state change: config -> ip-config (reason 'none', sys-iface-state: 'assume')
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[5968]: <info>  [1771569815.6260] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[5968]: <info>  [1771569815.6271] device (eth1): state change: config -> ip-config (reason 'none', sys-iface-state: 'assume')
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[5968]: <info>  [1771569815.6274] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[5968]: <info>  [1771569815.6314] device (lo): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'external')
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[5968]: <info>  [1771569815.6320] device (lo): state change: secondaries -> activated (reason 'none', sys-iface-state: 'external')
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[5968]: <info>  [1771569815.6327] device (lo): Activation: successful, device activated.
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[5968]: <info>  [1771569815.6334] dhcp4 (eth0): state changed new lease, address=38.102.83.53
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[5968]: <info>  [1771569815.6339] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[5968]: <info>  [1771569815.6443] device (eth0): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'assume')
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[5968]: <info>  [1771569815.6483] device (eth0): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'assume')
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[5968]: <info>  [1771569815.6485] device (eth0): state change: secondaries -> activated (reason 'none', sys-iface-state: 'assume')
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[5968]: <info>  [1771569815.6491] manager: NetworkManager state is now CONNECTED_SITE
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[5968]: <info>  [1771569815.6495] device (eth0): Activation: successful, device activated.
Feb 20 06:43:35 np0005625203.novalocal NetworkManager[5968]: <info>  [1771569815.6501] manager: NetworkManager state is now CONNECTED_GLOBAL
Feb 20 06:43:35 np0005625203.novalocal python3[6035]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ef9-e89a-fb18-e746-00000000012b-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 06:43:45 np0005625203.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb 20 06:44:05 np0005625203.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 20 06:44:20 np0005625203.novalocal NetworkManager[5968]: <info>  [1771569860.7069] device (eth1): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'assume')
Feb 20 06:44:20 np0005625203.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 20 06:44:20 np0005625203.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 20 06:44:20 np0005625203.novalocal NetworkManager[5968]: <info>  [1771569860.7290] device (eth1): state change: ip-check -> secondaries (reason 'none', sys-iface-state: 'assume')
Feb 20 06:44:20 np0005625203.novalocal NetworkManager[5968]: <info>  [1771569860.7294] device (eth1): state change: secondaries -> activated (reason 'none', sys-iface-state: 'assume')
Feb 20 06:44:20 np0005625203.novalocal NetworkManager[5968]: <info>  [1771569860.7303] device (eth1): Activation: successful, device activated.
Feb 20 06:44:20 np0005625203.novalocal NetworkManager[5968]: <info>  [1771569860.7311] manager: startup complete
Feb 20 06:44:20 np0005625203.novalocal systemd[1]: Finished Network Manager Wait Online.
Feb 20 06:44:30 np0005625203.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb 20 06:44:35 np0005625203.novalocal sshd[5816]: Received disconnect from 38.102.83.114 port 36446:11: disconnected by user
Feb 20 06:44:35 np0005625203.novalocal sshd[5816]: Disconnected from user zuul 38.102.83.114 port 36446
Feb 20 06:44:35 np0005625203.novalocal sshd[5813]: pam_unix(sshd:session): session closed for user zuul
Feb 20 06:44:36 np0005625203.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Feb 20 06:44:36 np0005625203.novalocal systemd[1]: session-3.scope: Consumed 1.466s CPU time.
Feb 20 06:44:36 np0005625203.novalocal systemd-logind[759]: Session 3 logged out. Waiting for processes to exit.
Feb 20 06:44:36 np0005625203.novalocal systemd-logind[759]: Removed session 3.
Feb 20 06:45:24 np0005625203.novalocal sshd[6054]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 06:45:24 np0005625203.novalocal sshd[6054]: Accepted publickey for zuul from 38.102.83.114 port 58442 ssh2: RSA SHA256:uaxCfdsB9dLyL4znjsLCPDTT6O+6xZzjHRjczJFOA6k
Feb 20 06:45:24 np0005625203.novalocal systemd-logind[759]: New session 4 of user zuul.
Feb 20 06:45:24 np0005625203.novalocal systemd[1]: Started Session 4 of User zuul.
Feb 20 06:45:24 np0005625203.novalocal sshd[6054]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Feb 20 06:45:24 np0005625203.novalocal sudo[6103]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uzwqtkxazkbqoavktncmqpwqhercjbpy ; OS_CLOUD=vexxhost /usr/bin/python3
Feb 20 06:45:24 np0005625203.novalocal sudo[6103]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 06:45:24 np0005625203.novalocal python3[6105]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 06:45:24 np0005625203.novalocal sudo[6103]: pam_unix(sudo:session): session closed for user root
Feb 20 06:45:24 np0005625203.novalocal sudo[6146]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cbrrnvnyqrvohttrxtjtgvimqsvhnykp ; OS_CLOUD=vexxhost /usr/bin/python3
Feb 20 06:45:24 np0005625203.novalocal sudo[6146]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 06:45:24 np0005625203.novalocal python3[6148]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771569924.264516-628-94158237831966/source _original_basename=tmp80viu0cx follow=False checksum=1adafc0c3cabf5458281c7d741082eddefa40194 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 06:45:24 np0005625203.novalocal sudo[6146]: pam_unix(sudo:session): session closed for user root
Feb 20 06:45:28 np0005625203.novalocal sshd[6054]: pam_unix(sshd:session): session closed for user zuul
Feb 20 06:45:28 np0005625203.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Feb 20 06:45:28 np0005625203.novalocal systemd-logind[759]: Session 4 logged out. Waiting for processes to exit.
Feb 20 06:45:28 np0005625203.novalocal systemd-logind[759]: Removed session 4.
Feb 20 06:47:51 np0005625203.novalocal sshd[6163]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 06:47:51 np0005625203.novalocal sshd[6163]: Received disconnect from 143.198.161.12 port 60452:11: Bye Bye [preauth]
Feb 20 06:47:51 np0005625203.novalocal sshd[6163]: Disconnected from authenticating user root 143.198.161.12 port 60452 [preauth]
Feb 20 06:48:04 np0005625203.novalocal sshd[6165]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 06:48:05 np0005625203.novalocal sshd[6165]: Invalid user titu from 144.91.127.158 port 39180
Feb 20 06:48:05 np0005625203.novalocal sshd[6165]: Received disconnect from 144.91.127.158 port 39180:11: Bye Bye [preauth]
Feb 20 06:48:05 np0005625203.novalocal sshd[6165]: Disconnected from invalid user titu 144.91.127.158 port 39180 [preauth]
Feb 20 06:50:04 np0005625203.novalocal sshd[6168]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 06:50:05 np0005625203.novalocal sshd[6168]: Invalid user n8n from 45.246.55.249 port 35796
Feb 20 06:50:05 np0005625203.novalocal sshd[6168]: Received disconnect from 45.246.55.249 port 35796:11: Bye Bye [preauth]
Feb 20 06:50:05 np0005625203.novalocal sshd[6168]: Disconnected from invalid user n8n 45.246.55.249 port 35796 [preauth]
Feb 20 06:50:17 np0005625203.novalocal sshd[6171]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 06:50:18 np0005625203.novalocal sshd[6171]: Invalid user titu from 172.203.58.203 port 41260
Feb 20 06:50:18 np0005625203.novalocal sshd[6171]: Received disconnect from 172.203.58.203 port 41260:11: Bye Bye [preauth]
Feb 20 06:50:18 np0005625203.novalocal sshd[6171]: Disconnected from invalid user titu 172.203.58.203 port 41260 [preauth]
Feb 20 06:50:32 np0005625203.novalocal sshd[6174]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 06:50:32 np0005625203.novalocal sshd[6174]: Received disconnect from 170.254.229.191 port 42444:11: Bye Bye [preauth]
Feb 20 06:50:32 np0005625203.novalocal sshd[6174]: Disconnected from authenticating user root 170.254.229.191 port 42444 [preauth]
Feb 20 06:50:38 np0005625203.novalocal sshd[6176]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 06:50:38 np0005625203.novalocal sshd[6176]: Invalid user deploy from 192.99.169.99 port 41218
Feb 20 06:50:38 np0005625203.novalocal sshd[6176]: Received disconnect from 192.99.169.99 port 41218:11: Bye Bye [preauth]
Feb 20 06:50:38 np0005625203.novalocal sshd[6176]: Disconnected from invalid user deploy 192.99.169.99 port 41218 [preauth]
Feb 20 06:51:22 np0005625203.novalocal sshd[6178]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 06:51:22 np0005625203.novalocal sshd[6179]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 06:51:22 np0005625203.novalocal sshd[6179]: error: kex_exchange_identification: read: Connection reset by peer
Feb 20 06:51:22 np0005625203.novalocal sshd[6179]: Connection reset by 176.120.22.52 port 43497
Feb 20 06:51:43 np0005625203.novalocal sshd[6180]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 06:51:44 np0005625203.novalocal sshd[6180]: Invalid user centos from 118.193.43.244 port 41584
Feb 20 06:51:44 np0005625203.novalocal sshd[6180]: Received disconnect from 118.193.43.244 port 41584:11: Bye Bye [preauth]
Feb 20 06:51:44 np0005625203.novalocal sshd[6180]: Disconnected from invalid user centos 118.193.43.244 port 41584 [preauth]
Feb 20 06:52:27 np0005625203.novalocal sshd[6184]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 06:52:27 np0005625203.novalocal systemd[1]: Starting Cleanup of Temporary Directories...
Feb 20 06:52:27 np0005625203.novalocal systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Feb 20 06:52:27 np0005625203.novalocal systemd[1]: Finished Cleanup of Temporary Directories.
Feb 20 06:52:27 np0005625203.novalocal systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Feb 20 06:52:27 np0005625203.novalocal sshd[6184]: Accepted publickey for zuul from 38.102.83.114 port 45404 ssh2: RSA SHA256:uaxCfdsB9dLyL4znjsLCPDTT6O+6xZzjHRjczJFOA6k
Feb 20 06:52:27 np0005625203.novalocal systemd-logind[759]: New session 5 of user zuul.
Feb 20 06:52:27 np0005625203.novalocal systemd[1]: Started Session 5 of User zuul.
Feb 20 06:52:27 np0005625203.novalocal sshd[6184]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Feb 20 06:52:27 np0005625203.novalocal sudo[6203]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cjttmwmsasqnnczwrbtiofybonqvlopg ; /usr/bin/python3
Feb 20 06:52:27 np0005625203.novalocal sudo[6203]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 06:52:27 np0005625203.novalocal python3[6205]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163ef9-e89a-12ca-cc53-00000000219f-1-overcloudnovacompute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 06:52:27 np0005625203.novalocal sudo[6203]: pam_unix(sudo:session): session closed for user root
Feb 20 06:52:29 np0005625203.novalocal sudo[6221]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kainxvqtnndplnzcteqccabrbjqsovzu ; /usr/bin/python3
Feb 20 06:52:29 np0005625203.novalocal sudo[6221]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 06:52:29 np0005625203.novalocal python3[6223]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 06:52:29 np0005625203.novalocal sudo[6221]: pam_unix(sudo:session): session closed for user root
Feb 20 06:52:29 np0005625203.novalocal sudo[6237]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vyvgcnxtndzkhehdfkzxgqwjxbqauyre ; /usr/bin/python3
Feb 20 06:52:29 np0005625203.novalocal sudo[6237]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 06:52:29 np0005625203.novalocal python3[6239]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 06:52:29 np0005625203.novalocal sudo[6237]: pam_unix(sudo:session): session closed for user root
Feb 20 06:52:29 np0005625203.novalocal sudo[6253]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hbagwnrvntaindypyzobpfuimkhybmkn ; /usr/bin/python3
Feb 20 06:52:29 np0005625203.novalocal sudo[6253]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 06:52:29 np0005625203.novalocal python3[6255]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 06:52:29 np0005625203.novalocal sudo[6253]: pam_unix(sudo:session): session closed for user root
Feb 20 06:52:29 np0005625203.novalocal sudo[6269]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vjhinzkmzipxubkrioppeartbyqqallu ; /usr/bin/python3
Feb 20 06:52:29 np0005625203.novalocal sudo[6269]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 06:52:29 np0005625203.novalocal python3[6271]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 06:52:29 np0005625203.novalocal sudo[6269]: pam_unix(sudo:session): session closed for user root
Feb 20 06:52:30 np0005625203.novalocal sudo[6285]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uwhzyajcgibrbkqsknemzzlytewzeorp ; /usr/bin/python3
Feb 20 06:52:30 np0005625203.novalocal sudo[6285]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 06:52:30 np0005625203.novalocal python3[6287]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 06:52:30 np0005625203.novalocal sudo[6285]: pam_unix(sudo:session): session closed for user root
Feb 20 06:52:31 np0005625203.novalocal sudo[6333]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kzaglfmeghwmzvhoetgtfsqvzpsclovn ; /usr/bin/python3
Feb 20 06:52:31 np0005625203.novalocal sudo[6333]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 06:52:32 np0005625203.novalocal python3[6335]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 06:52:32 np0005625203.novalocal sudo[6333]: pam_unix(sudo:session): session closed for user root
Feb 20 06:52:32 np0005625203.novalocal sudo[6376]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cahbpaekrwtjhakcmsvztweohdywgksq ; /usr/bin/python3
Feb 20 06:52:32 np0005625203.novalocal sudo[6376]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 06:52:32 np0005625203.novalocal python3[6378]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771570351.756487-663-63477327007304/source _original_basename=tmp1dlo9li0 follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 06:52:32 np0005625203.novalocal sudo[6376]: pam_unix(sudo:session): session closed for user root
Feb 20 06:52:33 np0005625203.novalocal sudo[6406]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-muunsovuepvawgksdhwzhprmzbgnqnio ; /usr/bin/python3
Feb 20 06:52:33 np0005625203.novalocal sudo[6406]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 06:52:34 np0005625203.novalocal python3[6408]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 20 06:52:34 np0005625203.novalocal systemd[1]: Reloading.
Feb 20 06:52:34 np0005625203.novalocal systemd-rc-local-generator[6427]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 06:52:34 np0005625203.novalocal systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 06:52:34 np0005625203.novalocal sudo[6406]: pam_unix(sudo:session): session closed for user root
Feb 20 06:52:35 np0005625203.novalocal sudo[6452]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wpipvdfzuykojzocdpyaphhpuakedpou ; /usr/bin/python3
Feb 20 06:52:35 np0005625203.novalocal sudo[6452]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 06:52:35 np0005625203.novalocal python3[6454]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Feb 20 06:52:35 np0005625203.novalocal sudo[6452]: pam_unix(sudo:session): session closed for user root
Feb 20 06:52:36 np0005625203.novalocal sudo[6468]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lhqmyjccbwuxaxclhrogieiqzcsrwrxr ; /usr/bin/python3
Feb 20 06:52:36 np0005625203.novalocal sudo[6468]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 06:52:36 np0005625203.novalocal python3[6470]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 06:52:36 np0005625203.novalocal sudo[6468]: pam_unix(sudo:session): session closed for user root
Feb 20 06:52:37 np0005625203.novalocal sudo[6486]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cyycnpiognruttkefatftyqznhyxcemf ; /usr/bin/python3
Feb 20 06:52:37 np0005625203.novalocal sudo[6486]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 06:52:37 np0005625203.novalocal python3[6488]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 06:52:37 np0005625203.novalocal sudo[6486]: pam_unix(sudo:session): session closed for user root
Feb 20 06:52:37 np0005625203.novalocal sudo[6504]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-scfyrvnnumngccqsierwdovkhohcensq ; /usr/bin/python3
Feb 20 06:52:37 np0005625203.novalocal sudo[6504]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 06:52:37 np0005625203.novalocal python3[6506]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 06:52:37 np0005625203.novalocal sudo[6504]: pam_unix(sudo:session): session closed for user root
Feb 20 06:52:37 np0005625203.novalocal sudo[6522]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bsowykhiihekvjvhtwrrpjslusgfsfnf ; /usr/bin/python3
Feb 20 06:52:37 np0005625203.novalocal sudo[6522]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 06:52:37 np0005625203.novalocal python3[6524]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 06:52:37 np0005625203.novalocal sudo[6522]: pam_unix(sudo:session): session closed for user root
Feb 20 06:52:48 np0005625203.novalocal python3[6542]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max; _uses_shell=True zuul_log_id=fa163ef9-e89a-12ca-cc53-0000000021a6-1-overcloudnovacompute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 06:52:49 np0005625203.novalocal sshd[6548]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 06:52:49 np0005625203.novalocal python3[6562]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 20 06:52:52 np0005625203.novalocal sshd[6184]: pam_unix(sshd:session): session closed for user zuul
Feb 20 06:52:52 np0005625203.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Feb 20 06:52:52 np0005625203.novalocal systemd[1]: session-5.scope: Consumed 3.934s CPU time.
Feb 20 06:52:52 np0005625203.novalocal systemd-logind[759]: Session 5 logged out. Waiting for processes to exit.
Feb 20 06:52:52 np0005625203.novalocal systemd-logind[759]: Removed session 5.
Feb 20 06:53:47 np0005625203.novalocal sshd[6571]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 06:53:47 np0005625203.novalocal sshd[6571]: Accepted publickey for zuul from 38.102.83.114 port 36064 ssh2: RSA SHA256:uaxCfdsB9dLyL4znjsLCPDTT6O+6xZzjHRjczJFOA6k
Feb 20 06:53:47 np0005625203.novalocal systemd-logind[759]: New session 6 of user zuul.
Feb 20 06:53:47 np0005625203.novalocal systemd[1]: Started Session 6 of User zuul.
Feb 20 06:53:47 np0005625203.novalocal sshd[6571]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Feb 20 06:53:48 np0005625203.novalocal sudo[6588]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gyrfdpmizkjtwzcwsabmgmyazitlcxbd ; /usr/bin/python3
Feb 20 06:53:48 np0005625203.novalocal sudo[6588]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 06:53:48 np0005625203.novalocal systemd[1]: Starting RHSM dbus service...
Feb 20 06:53:49 np0005625203.novalocal systemd[1]: Started RHSM dbus service.
Feb 20 06:53:49 np0005625203.novalocal rhsm-service[6595]:  INFO [subscription_manager.i18n:169] Could not import locale for C: [Errno 2] No translation file found for domain: 'rhsm'
Feb 20 06:53:49 np0005625203.novalocal rhsm-service[6595]:  INFO [subscription_manager.i18n:139] Could not import locale either for C_C: [Errno 2] No translation file found for domain: 'rhsm'
Feb 20 06:53:49 np0005625203.novalocal rhsm-service[6595]:  INFO [subscription_manager.i18n:169] Could not import locale for C: [Errno 2] No translation file found for domain: 'rhsm'
Feb 20 06:53:49 np0005625203.novalocal rhsm-service[6595]:  INFO [subscription_manager.i18n:139] Could not import locale either for C_C: [Errno 2] No translation file found for domain: 'rhsm'
Feb 20 06:53:50 np0005625203.novalocal rhsm-service[6595]:  INFO [subscription_manager.managerlib:90] Consumer created: np0005625203.novalocal (7c76e446-72cd-40ed-9df9-5912b758c267)
Feb 20 06:53:50 np0005625203.novalocal subscription-manager[6595]: Registered system with identity: 7c76e446-72cd-40ed-9df9-5912b758c267
Feb 20 06:53:51 np0005625203.novalocal rhsm-service[6595]:  INFO [subscription_manager.entcertlib:131] certs updated:
Feb 20 06:53:51 np0005625203.novalocal rhsm-service[6595]: Total updates: 1
Feb 20 06:53:51 np0005625203.novalocal rhsm-service[6595]: Found (local) serial# []
Feb 20 06:53:51 np0005625203.novalocal rhsm-service[6595]: Expected (UEP) serial# [5872573106437544831]
Feb 20 06:53:51 np0005625203.novalocal rhsm-service[6595]: Added (new)
Feb 20 06:53:51 np0005625203.novalocal rhsm-service[6595]:   [sn:5872573106437544831 ( Content Access,) @ /etc/pki/entitlement/5872573106437544831.pem]
Feb 20 06:53:51 np0005625203.novalocal rhsm-service[6595]: Deleted (rogue):
Feb 20 06:53:51 np0005625203.novalocal rhsm-service[6595]:   <NONE>
Feb 20 06:53:51 np0005625203.novalocal subscription-manager[6595]: Added subscription for 'Content Access' contract 'None'
Feb 20 06:53:51 np0005625203.novalocal subscription-manager[6595]: Added subscription for product ' Content Access'
Feb 20 06:53:52 np0005625203.novalocal rhsm-service[6595]:  INFO [subscription_manager.i18n:169] Could not import locale for C: [Errno 2] No translation file found for domain: 'rhsm'
Feb 20 06:53:52 np0005625203.novalocal rhsm-service[6595]:  INFO [subscription_manager.i18n:139] Could not import locale either for C_C: [Errno 2] No translation file found for domain: 'rhsm'
Feb 20 06:53:52 np0005625203.novalocal rhsm-service[6595]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 20 06:53:52 np0005625203.novalocal rhsm-service[6595]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 20 06:53:52 np0005625203.novalocal rhsm-service[6595]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 20 06:53:53 np0005625203.novalocal rhsm-service[6595]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 20 06:53:53 np0005625203.novalocal rhsm-service[6595]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 20 06:53:53 np0005625203.novalocal sudo[6588]: pam_unix(sudo:session): session closed for user root
Feb 20 06:54:00 np0005625203.novalocal python3[6686]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/redhat-release zuul_log_id=fa163ef9-e89a-d2eb-5884-00000000000d-1-overcloudnovacompute1 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 06:54:11 np0005625203.novalocal sudo[6703]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zxkdvrhmwkimszpsxsawnkwywkzdlmka ; /usr/bin/python3
Feb 20 06:54:11 np0005625203.novalocal sudo[6703]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 06:54:11 np0005625203.novalocal python3[6705]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb 20 06:54:42 np0005625203.novalocal setsebool[6780]: The virt_use_nfs policy boolean was changed to 1 by root
Feb 20 06:54:42 np0005625203.novalocal setsebool[6780]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Feb 20 06:54:49 np0005625203.novalocal sshd[6548]: fatal: Timeout before authentication for 132.248.44.87 port 47170
Feb 20 06:54:51 np0005625203.novalocal kernel: SELinux:  Converting 406 SID table entries...
Feb 20 06:54:51 np0005625203.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Feb 20 06:54:51 np0005625203.novalocal kernel: SELinux:  policy capability open_perms=1
Feb 20 06:54:51 np0005625203.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Feb 20 06:54:51 np0005625203.novalocal kernel: SELinux:  policy capability always_check_network=0
Feb 20 06:54:51 np0005625203.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 20 06:54:51 np0005625203.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 20 06:54:51 np0005625203.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 20 06:55:04 np0005625203.novalocal dbus-broker-launch[754]: avc:  op=load_policy lsm=selinux seqno=3 res=1
Feb 20 06:55:04 np0005625203.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 20 06:55:04 np0005625203.novalocal systemd[1]: Starting man-db-cache-update.service...
Feb 20 06:55:04 np0005625203.novalocal systemd[1]: Reloading.
Feb 20 06:55:04 np0005625203.novalocal systemd-rc-local-generator[7650]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 06:55:04 np0005625203.novalocal systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 06:55:04 np0005625203.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Feb 20 06:55:05 np0005625203.novalocal rhsm-service[6595]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 20 06:55:06 np0005625203.novalocal sudo[6703]: pam_unix(sudo:session): session closed for user root
Feb 20 06:55:13 np0005625203.novalocal systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 20 06:55:13 np0005625203.novalocal systemd[1]: Finished man-db-cache-update.service.
Feb 20 06:55:13 np0005625203.novalocal systemd[1]: man-db-cache-update.service: Consumed 11.100s CPU time.
Feb 20 06:55:13 np0005625203.novalocal systemd[1]: run-r236c945138de41ff9a80e99218d36978.service: Deactivated successfully.
Feb 20 06:55:31 np0005625203.novalocal sshd[18353]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 06:55:31 np0005625203.novalocal sshd[18353]: Invalid user httpd from 192.99.169.99 port 54576
Feb 20 06:55:31 np0005625203.novalocal sshd[18353]: Received disconnect from 192.99.169.99 port 54576:11: Bye Bye [preauth]
Feb 20 06:55:31 np0005625203.novalocal sshd[18353]: Disconnected from invalid user httpd 192.99.169.99 port 54576 [preauth]
Feb 20 06:55:58 np0005625203.novalocal sudo[18368]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hdbjheziqctljtwvwfbgrsicdftperia ; /usr/bin/python3
Feb 20 06:55:58 np0005625203.novalocal sudo[18368]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 06:55:58 np0005625203.novalocal podman[18371]: 2026-02-20 06:55:58.656293494 +0000 UTC m=+0.105920305 system refresh
Feb 20 06:55:59 np0005625203.novalocal sudo[18368]: pam_unix(sudo:session): session closed for user root
Feb 20 06:55:59 np0005625203.novalocal sshd[18400]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 06:55:59 np0005625203.novalocal sshd[18400]: Received disconnect from 172.203.58.203 port 44882:11: Bye Bye [preauth]
Feb 20 06:55:59 np0005625203.novalocal sshd[18400]: Disconnected from authenticating user root 172.203.58.203 port 44882 [preauth]
Feb 20 06:55:59 np0005625203.novalocal systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 20 06:55:59 np0005625203.novalocal systemd[4179]: Starting D-Bus User Message Bus...
Feb 20 06:55:59 np0005625203.novalocal dbus-broker-launch[18429]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Feb 20 06:55:59 np0005625203.novalocal dbus-broker-launch[18429]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Feb 20 06:55:59 np0005625203.novalocal systemd[4179]: Started D-Bus User Message Bus.
Feb 20 06:55:59 np0005625203.novalocal dbus-broker-lau[18429]: Ready
Feb 20 06:55:59 np0005625203.novalocal systemd[4179]: selinux: avc:  op=load_policy lsm=selinux seqno=3 res=1
Feb 20 06:55:59 np0005625203.novalocal systemd[4179]: Created slice Slice /user.
Feb 20 06:55:59 np0005625203.novalocal systemd[4179]: podman-18413.scope: unit configures an IP firewall, but not running as root.
Feb 20 06:55:59 np0005625203.novalocal systemd[4179]: (This warning is only shown for the first unit using IP firewalling.)
Feb 20 06:55:59 np0005625203.novalocal systemd[4179]: Started podman-18413.scope.
Feb 20 06:56:00 np0005625203.novalocal systemd[4179]: Started podman-pause-464ab13c.scope.
Feb 20 06:56:01 np0005625203.novalocal sshd[18434]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 06:56:02 np0005625203.novalocal sshd[18434]: Invalid user n8n from 45.246.55.249 port 52406
Feb 20 06:56:02 np0005625203.novalocal sshd[18434]: Received disconnect from 45.246.55.249 port 52406:11: Bye Bye [preauth]
Feb 20 06:56:02 np0005625203.novalocal sshd[18434]: Disconnected from invalid user n8n 45.246.55.249 port 52406 [preauth]
Feb 20 06:56:03 np0005625203.novalocal sshd[6571]: pam_unix(sshd:session): session closed for user zuul
Feb 20 06:56:03 np0005625203.novalocal systemd[1]: session-6.scope: Deactivated successfully.
Feb 20 06:56:03 np0005625203.novalocal systemd[1]: session-6.scope: Consumed 51.487s CPU time.
Feb 20 06:56:03 np0005625203.novalocal systemd-logind[759]: Session 6 logged out. Waiting for processes to exit.
Feb 20 06:56:03 np0005625203.novalocal systemd-logind[759]: Removed session 6.
Feb 20 06:56:09 np0005625203.novalocal sshd[18436]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 06:56:09 np0005625203.novalocal sshd[18436]: Invalid user sangoma from 144.91.127.158 port 60568
Feb 20 06:56:10 np0005625203.novalocal sshd[18436]: Received disconnect from 144.91.127.158 port 60568:11: Bye Bye [preauth]
Feb 20 06:56:10 np0005625203.novalocal sshd[18436]: Disconnected from invalid user sangoma 144.91.127.158 port 60568 [preauth]
Feb 20 06:56:18 np0005625203.novalocal sshd[18438]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 06:56:18 np0005625203.novalocal sshd[18439]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 06:56:18 np0005625203.novalocal sshd[18441]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 06:56:18 np0005625203.novalocal sshd[18440]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 06:56:18 np0005625203.novalocal sshd[18442]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 06:56:18 np0005625203.novalocal sshd[18439]: Unable to negotiate with 38.102.83.74 port 52224: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Feb 20 06:56:18 np0005625203.novalocal sshd[18440]: Connection closed by 38.102.83.74 port 52186 [preauth]
Feb 20 06:56:18 np0005625203.novalocal sshd[18441]: Connection closed by 38.102.83.74 port 52188 [preauth]
Feb 20 06:56:18 np0005625203.novalocal sshd[18442]: Unable to negotiate with 38.102.83.74 port 52196: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Feb 20 06:56:18 np0005625203.novalocal sshd[18438]: Unable to negotiate with 38.102.83.74 port 52212: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Feb 20 06:56:23 np0005625203.novalocal sshd[18448]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 06:56:23 np0005625203.novalocal sshd[18448]: Accepted publickey for zuul from 38.102.83.114 port 47530 ssh2: RSA SHA256:uaxCfdsB9dLyL4znjsLCPDTT6O+6xZzjHRjczJFOA6k
Feb 20 06:56:23 np0005625203.novalocal systemd-logind[759]: New session 7 of user zuul.
Feb 20 06:56:23 np0005625203.novalocal systemd[1]: Started Session 7 of User zuul.
Feb 20 06:56:23 np0005625203.novalocal sshd[18448]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Feb 20 06:56:23 np0005625203.novalocal python3[18465]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHF6ws6TTGIgpcynk+zfDmAiKAngdz4qTSYI5OZYL/Nj9dQsVH9D0sSlKxQpeRN7puQyuA81owKWTQGJzf43DRQ= zuul@np0005625196.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 06:56:24 np0005625203.novalocal sudo[18479]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yaknfelrgzvflrviijitahswgzgardal ; /usr/bin/python3
Feb 20 06:56:24 np0005625203.novalocal sudo[18479]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 06:56:24 np0005625203.novalocal python3[18481]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHF6ws6TTGIgpcynk+zfDmAiKAngdz4qTSYI5OZYL/Nj9dQsVH9D0sSlKxQpeRN7puQyuA81owKWTQGJzf43DRQ= zuul@np0005625196.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 06:56:24 np0005625203.novalocal sudo[18479]: pam_unix(sudo:session): session closed for user root
Feb 20 06:56:26 np0005625203.novalocal sshd[18448]: pam_unix(sshd:session): session closed for user zuul
Feb 20 06:56:26 np0005625203.novalocal systemd[1]: session-7.scope: Deactivated successfully.
Feb 20 06:56:26 np0005625203.novalocal systemd-logind[759]: Session 7 logged out. Waiting for processes to exit.
Feb 20 06:56:26 np0005625203.novalocal systemd-logind[759]: Removed session 7.
Feb 20 06:56:56 np0005625203.novalocal sshd[18482]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 06:56:57 np0005625203.novalocal sshd[18482]: Invalid user admin from 118.193.43.244 port 60890
Feb 20 06:56:57 np0005625203.novalocal sshd[18482]: Received disconnect from 118.193.43.244 port 60890:11: Bye Bye [preauth]
Feb 20 06:56:57 np0005625203.novalocal sshd[18482]: Disconnected from invalid user admin 118.193.43.244 port 60890 [preauth]
Feb 20 06:57:02 np0005625203.novalocal sshd[18484]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 06:57:03 np0005625203.novalocal sshd[18484]: Invalid user user1 from 170.254.229.191 port 50928
Feb 20 06:57:03 np0005625203.novalocal sshd[18484]: Received disconnect from 170.254.229.191 port 50928:11: Bye Bye [preauth]
Feb 20 06:57:03 np0005625203.novalocal sshd[18484]: Disconnected from invalid user user1 170.254.229.191 port 50928 [preauth]
Feb 20 06:57:45 np0005625203.novalocal sshd[18487]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 06:57:45 np0005625203.novalocal sshd[18487]: Accepted publickey for zuul from 38.102.83.114 port 45802 ssh2: RSA SHA256:uaxCfdsB9dLyL4znjsLCPDTT6O+6xZzjHRjczJFOA6k
Feb 20 06:57:45 np0005625203.novalocal systemd-logind[759]: New session 8 of user zuul.
Feb 20 06:57:45 np0005625203.novalocal systemd[1]: Started Session 8 of User zuul.
Feb 20 06:57:45 np0005625203.novalocal sshd[18487]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Feb 20 06:57:45 np0005625203.novalocal sudo[18504]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zwdritabmjcbbokolxuzwawsukairpyx ; /usr/bin/python3
Feb 20 06:57:45 np0005625203.novalocal sudo[18504]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 06:57:45 np0005625203.novalocal python3[18506]: ansible-authorized_key Invoked with user=root manage_dir=True key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDMsWl2iDqidUqzeXIedhPUnMaIwstb6f7CXhkhrGMXj2w21P81bPYesLJBAmIee3BaCMGYsrUpWu7u92b6J2nM7sdG3Lo3YZqie0IUZG/NpfJ+OEJsLIQ5zFT7jGNI77bmiCBX4jibhRJuDVtuombA5bly/q1x7yqRZ5UuwmTahAz49MPmun69R00RpbzOjt8OgQQ6ZdjDclItely6T7y+tDL3UemjBYcKGNckzXB1nWJEjviTyIXiLLqJdcxWarPXzxewMUoyRiTPQU6uRQ1VhuBRFyiofrACRWtFHfLEuWMVqQAL1OtJD5P+KSuVKHkT/MAFSdo0OptGxMmFN5ZvbruOyab6MovPK/9sTaFrhzT6tE78w+fEIFe4kPvM2bw80sKzO9yrkvqs1LL9ToVf9r2TufxRZVCibTFSa9Cw0U9yILFdCrlCZ2GMFNTCFuM7vQDAKwrKndvATAxKrm+D8Xme2+SVCv3oKJtx0m4D6gc9/IMzFhC7Ibh6bJaSj2s= zuul-build-sshkey state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 06:57:45 np0005625203.novalocal sudo[18504]: pam_unix(sudo:session): session closed for user root
Feb 20 06:57:46 np0005625203.novalocal sudo[18520]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ufxfpbazhderbgyzqywalmesbnyallrx ; /usr/bin/python3
Feb 20 06:57:46 np0005625203.novalocal sudo[18520]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 06:57:46 np0005625203.novalocal python3[18522]: ansible-user Invoked with name=root state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005625203.novalocal update_password=always uid=None group=None groups=None comment=None home=None shell=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Feb 20 06:57:46 np0005625203.novalocal sudo[18520]: pam_unix(sudo:session): session closed for user root
Feb 20 06:57:47 np0005625203.novalocal sudo[18570]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zkxadwufzvqhhpggehklszqwupypyqui ; /usr/bin/python3
Feb 20 06:57:47 np0005625203.novalocal sudo[18570]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 06:57:48 np0005625203.novalocal python3[18572]: ansible-ansible.legacy.stat Invoked with path=/root/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 06:57:48 np0005625203.novalocal sudo[18570]: pam_unix(sudo:session): session closed for user root
Feb 20 06:57:48 np0005625203.novalocal sudo[18613]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gecfpnrrwkpmxyycijlcrpnfwepfznmq ; /usr/bin/python3
Feb 20 06:57:48 np0005625203.novalocal sudo[18613]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 06:57:48 np0005625203.novalocal python3[18615]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771570667.791831-137-112569374942569/source dest=/root/.ssh/id_rsa mode=384 owner=root force=False _original_basename=2f497d714b8e492188c98067e59a9994_id_rsa follow=False checksum=1ede725f5cdca64ff103c7e62f7bb7b42f0b9244 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 06:57:48 np0005625203.novalocal sudo[18613]: pam_unix(sudo:session): session closed for user root
Feb 20 06:57:49 np0005625203.novalocal sudo[18675]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mlrjrpmecdmngdudkieqlkdtbgjmmpni ; /usr/bin/python3
Feb 20 06:57:49 np0005625203.novalocal sudo[18675]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 06:57:49 np0005625203.novalocal python3[18677]: ansible-ansible.legacy.stat Invoked with path=/root/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 06:57:49 np0005625203.novalocal sudo[18675]: pam_unix(sudo:session): session closed for user root
Feb 20 06:57:49 np0005625203.novalocal sudo[18718]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ndlactybtffwonfonuoxlpknbgeqevqx ; /usr/bin/python3
Feb 20 06:57:49 np0005625203.novalocal sudo[18718]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 06:57:50 np0005625203.novalocal python3[18720]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771570669.4366164-225-22825077244376/source dest=/root/.ssh/id_rsa.pub mode=420 owner=root force=False _original_basename=2f497d714b8e492188c98067e59a9994_id_rsa.pub follow=False checksum=d5896bb6dcd221ffe99ce3acccb68a5152af8369 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 06:57:50 np0005625203.novalocal sudo[18718]: pam_unix(sudo:session): session closed for user root
Feb 20 06:57:51 np0005625203.novalocal sudo[18748]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rnfxlrvhidfeyutclmjbbpjpwbfurhdm ; /usr/bin/python3
Feb 20 06:57:51 np0005625203.novalocal sudo[18748]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 06:57:52 np0005625203.novalocal python3[18750]: ansible-ansible.builtin.file Invoked with path=/etc/nodepool state=directory mode=0777 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 06:57:52 np0005625203.novalocal sudo[18748]: pam_unix(sudo:session): session closed for user root
Feb 20 06:57:53 np0005625203.novalocal python3[18796]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 06:57:53 np0005625203.novalocal python3[18812]: ansible-ansible.legacy.file Invoked with dest=/etc/nodepool/sub_nodes _original_basename=tmp67zkc753 recurse=False state=file path=/etc/nodepool/sub_nodes force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 06:57:54 np0005625203.novalocal python3[18872]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 06:57:54 np0005625203.novalocal python3[18888]: ansible-ansible.legacy.file Invoked with dest=/etc/nodepool/sub_nodes_private _original_basename=tmpv14kmn5r recurse=False state=file path=/etc/nodepool/sub_nodes_private force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 06:57:56 np0005625203.novalocal python3[18948]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 06:57:56 np0005625203.novalocal python3[18964]: ansible-ansible.legacy.file Invoked with dest=/etc/nodepool/node_private _original_basename=tmpwujj_dgn recurse=False state=file path=/etc/nodepool/node_private force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 06:57:57 np0005625203.novalocal sshd[18487]: pam_unix(sshd:session): session closed for user zuul
Feb 20 06:57:57 np0005625203.novalocal systemd[1]: session-8.scope: Deactivated successfully.
Feb 20 06:57:57 np0005625203.novalocal systemd[1]: session-8.scope: Consumed 3.750s CPU time.
Feb 20 06:57:57 np0005625203.novalocal systemd-logind[759]: Session 8 logged out. Waiting for processes to exit.
Feb 20 06:57:57 np0005625203.novalocal systemd-logind[759]: Removed session 8.
Feb 20 06:58:42 np0005625203.novalocal sshd[18981]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 06:58:42 np0005625203.novalocal sshd[18981]: Received disconnect from 192.99.169.99 port 47948:11: Bye Bye [preauth]
Feb 20 06:58:42 np0005625203.novalocal sshd[18981]: Disconnected from authenticating user root 192.99.169.99 port 47948 [preauth]
Feb 20 06:59:16 np0005625203.novalocal sshd[18983]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 06:59:16 np0005625203.novalocal sshd[18983]: Invalid user dixi from 172.203.58.203 port 48370
Feb 20 06:59:16 np0005625203.novalocal sshd[18983]: Received disconnect from 172.203.58.203 port 48370:11: Bye Bye [preauth]
Feb 20 06:59:16 np0005625203.novalocal sshd[18983]: Disconnected from invalid user dixi 172.203.58.203 port 48370 [preauth]
Feb 20 06:59:41 np0005625203.novalocal sshd[18985]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 06:59:42 np0005625203.novalocal sshd[18985]: Invalid user x from 144.91.127.158 port 56278
Feb 20 06:59:42 np0005625203.novalocal sshd[18985]: Received disconnect from 144.91.127.158 port 56278:11: Bye Bye [preauth]
Feb 20 06:59:42 np0005625203.novalocal sshd[18985]: Disconnected from invalid user x 144.91.127.158 port 56278 [preauth]
Feb 20 06:59:47 np0005625203.novalocal sshd[18987]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 06:59:48 np0005625203.novalocal sshd[18987]: Received disconnect from 45.246.55.249 port 59696:11: Bye Bye [preauth]
Feb 20 06:59:48 np0005625203.novalocal sshd[18987]: Disconnected from authenticating user root 45.246.55.249 port 59696 [preauth]
Feb 20 07:00:06 np0005625203.novalocal sshd[18989]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:00:06 np0005625203.novalocal sshd[18989]: Accepted publickey for zuul from 38.102.83.74 port 47288 ssh2: RSA SHA256:uaxCfdsB9dLyL4znjsLCPDTT6O+6xZzjHRjczJFOA6k
Feb 20 07:00:06 np0005625203.novalocal systemd-logind[759]: New session 9 of user zuul.
Feb 20 07:00:06 np0005625203.novalocal systemd[1]: Started Session 9 of User zuul.
Feb 20 07:00:06 np0005625203.novalocal sshd[18989]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Feb 20 07:00:06 np0005625203.novalocal python3[19035]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:00:37 np0005625203.novalocal sshd[19037]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:00:37 np0005625203.novalocal sshd[19037]: Received disconnect from 170.254.229.191 port 54194:11: Bye Bye [preauth]
Feb 20 07:00:37 np0005625203.novalocal sshd[19037]: Disconnected from authenticating user root 170.254.229.191 port 54194 [preauth]
Feb 20 07:00:53 np0005625203.novalocal sshd[19039]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:00:54 np0005625203.novalocal sshd[19039]: Invalid user admin from 118.193.43.244 port 58586
Feb 20 07:00:54 np0005625203.novalocal sshd[19039]: Received disconnect from 118.193.43.244 port 58586:11: Bye Bye [preauth]
Feb 20 07:00:54 np0005625203.novalocal sshd[19039]: Disconnected from invalid user admin 118.193.43.244 port 58586 [preauth]
Feb 20 07:01:01 np0005625203.novalocal CROND[19042]: (root) CMD (run-parts /etc/cron.hourly)
Feb 20 07:01:02 np0005625203.novalocal run-parts[19045]: (/etc/cron.hourly) starting 0anacron
Feb 20 07:01:02 np0005625203.novalocal anacron[19053]: Anacron started on 2026-02-20
Feb 20 07:01:02 np0005625203.novalocal anacron[19053]: Will run job `cron.daily' in 37 min.
Feb 20 07:01:02 np0005625203.novalocal anacron[19053]: Will run job `cron.weekly' in 57 min.
Feb 20 07:01:02 np0005625203.novalocal anacron[19053]: Will run job `cron.monthly' in 77 min.
Feb 20 07:01:02 np0005625203.novalocal anacron[19053]: Jobs will be executed sequentially
Feb 20 07:01:02 np0005625203.novalocal run-parts[19055]: (/etc/cron.hourly) finished 0anacron
Feb 20 07:01:02 np0005625203.novalocal CROND[19041]: (root) CMDEND (run-parts /etc/cron.hourly)
Feb 20 07:01:36 np0005625203.novalocal sshd[19056]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:01:36 np0005625203.novalocal sshd[19056]: Invalid user systemd from 192.99.169.99 port 39518
Feb 20 07:01:36 np0005625203.novalocal sshd[19056]: Received disconnect from 192.99.169.99 port 39518:11: Bye Bye [preauth]
Feb 20 07:01:36 np0005625203.novalocal sshd[19056]: Disconnected from invalid user systemd 192.99.169.99 port 39518 [preauth]
Feb 20 07:02:30 np0005625203.novalocal sshd[19058]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:02:30 np0005625203.novalocal sshd[19058]: Received disconnect from 172.203.58.203 port 41620:11: Bye Bye [preauth]
Feb 20 07:02:30 np0005625203.novalocal sshd[19058]: Disconnected from authenticating user root 172.203.58.203 port 41620 [preauth]
Feb 20 07:03:14 np0005625203.novalocal sshd[19061]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:03:14 np0005625203.novalocal sshd[19061]: Invalid user dixi from 144.91.127.158 port 43618
Feb 20 07:03:14 np0005625203.novalocal sshd[19061]: Received disconnect from 144.91.127.158 port 43618:11: Bye Bye [preauth]
Feb 20 07:03:14 np0005625203.novalocal sshd[19061]: Disconnected from invalid user dixi 144.91.127.158 port 43618 [preauth]
Feb 20 07:03:26 np0005625203.novalocal sshd[19063]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:03:26 np0005625203.novalocal sshd[19063]: Invalid user ubuntu from 45.246.55.249 port 54528
Feb 20 07:03:27 np0005625203.novalocal sshd[19063]: Received disconnect from 45.246.55.249 port 54528:11: Bye Bye [preauth]
Feb 20 07:03:27 np0005625203.novalocal sshd[19063]: Disconnected from invalid user ubuntu 45.246.55.249 port 54528 [preauth]
Feb 20 07:04:13 np0005625203.novalocal sshd[19065]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:04:14 np0005625203.novalocal sshd[19065]: Received disconnect from 170.254.229.191 port 42580:11: Bye Bye [preauth]
Feb 20 07:04:14 np0005625203.novalocal sshd[19065]: Disconnected from authenticating user root 170.254.229.191 port 42580 [preauth]
Feb 20 07:04:37 np0005625203.novalocal sshd[19067]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:04:37 np0005625203.novalocal sshd[19067]: Invalid user test from 192.99.169.99 port 51704
Feb 20 07:04:37 np0005625203.novalocal sshd[19067]: Received disconnect from 192.99.169.99 port 51704:11: Bye Bye [preauth]
Feb 20 07:04:37 np0005625203.novalocal sshd[19067]: Disconnected from invalid user test 192.99.169.99 port 51704 [preauth]
Feb 20 07:04:54 np0005625203.novalocal sshd[19069]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:04:55 np0005625203.novalocal sshd[19069]: Invalid user x from 118.193.43.244 port 57526
Feb 20 07:04:55 np0005625203.novalocal sshd[19069]: Received disconnect from 118.193.43.244 port 57526:11: Bye Bye [preauth]
Feb 20 07:04:55 np0005625203.novalocal sshd[19069]: Disconnected from invalid user x 118.193.43.244 port 57526 [preauth]
Feb 20 07:05:06 np0005625203.novalocal sshd[18992]: Received disconnect from 38.102.83.74 port 47288:11: disconnected by user
Feb 20 07:05:06 np0005625203.novalocal sshd[18992]: Disconnected from user zuul 38.102.83.74 port 47288
Feb 20 07:05:06 np0005625203.novalocal sshd[18989]: pam_unix(sshd:session): session closed for user zuul
Feb 20 07:05:06 np0005625203.novalocal systemd[1]: session-9.scope: Deactivated successfully.
Feb 20 07:05:06 np0005625203.novalocal systemd-logind[759]: Session 9 logged out. Waiting for processes to exit.
Feb 20 07:05:06 np0005625203.novalocal systemd-logind[759]: Removed session 9.
Feb 20 07:05:59 np0005625203.novalocal sshd[19073]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:06:00 np0005625203.novalocal sshd[19073]: Invalid user iksi from 172.203.58.203 port 58258
Feb 20 07:06:00 np0005625203.novalocal sshd[19073]: Received disconnect from 172.203.58.203 port 58258:11: Bye Bye [preauth]
Feb 20 07:06:00 np0005625203.novalocal sshd[19073]: Disconnected from invalid user iksi 172.203.58.203 port 58258 [preauth]
Feb 20 07:06:51 np0005625203.novalocal sshd[19075]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:06:52 np0005625203.novalocal sshd[19075]: Invalid user claude from 45.246.55.249 port 59324
Feb 20 07:06:52 np0005625203.novalocal sshd[19075]: Received disconnect from 45.246.55.249 port 59324:11: Bye Bye [preauth]
Feb 20 07:06:52 np0005625203.novalocal sshd[19075]: Disconnected from invalid user claude 45.246.55.249 port 59324 [preauth]
Feb 20 07:07:28 np0005625203.novalocal sshd[19077]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:07:28 np0005625203.novalocal sshd[19077]: Invalid user parth from 192.99.169.99 port 58016
Feb 20 07:07:29 np0005625203.novalocal sshd[19077]: Received disconnect from 192.99.169.99 port 58016:11: Bye Bye [preauth]
Feb 20 07:07:29 np0005625203.novalocal sshd[19077]: Disconnected from invalid user parth 192.99.169.99 port 58016 [preauth]
Feb 20 07:07:49 np0005625203.novalocal sshd[19079]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:07:50 np0005625203.novalocal sshd[19079]: Invalid user admin from 170.254.229.191 port 56832
Feb 20 07:07:50 np0005625203.novalocal sshd[19079]: Received disconnect from 170.254.229.191 port 56832:11: Bye Bye [preauth]
Feb 20 07:07:50 np0005625203.novalocal sshd[19079]: Disconnected from invalid user admin 170.254.229.191 port 56832 [preauth]
Feb 20 07:09:03 np0005625203.novalocal sshd[19082]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:09:04 np0005625203.novalocal sshd[19082]: Invalid user titu from 118.193.43.244 port 48306
Feb 20 07:09:04 np0005625203.novalocal sshd[19082]: Received disconnect from 118.193.43.244 port 48306:11: Bye Bye [preauth]
Feb 20 07:09:04 np0005625203.novalocal sshd[19082]: Disconnected from invalid user titu 118.193.43.244 port 48306 [preauth]
Feb 20 07:09:44 np0005625203.novalocal sshd[19084]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:09:44 np0005625203.novalocal sshd[19084]: error: kex_exchange_identification: Connection closed by remote host
Feb 20 07:09:44 np0005625203.novalocal sshd[19084]: Connection closed by 147.182.212.230 port 43442
Feb 20 07:10:08 np0005625203.novalocal sshd[19085]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:10:08 np0005625203.novalocal sshd[19085]: Invalid user n8n from 144.91.127.158 port 37778
Feb 20 07:10:08 np0005625203.novalocal sshd[19085]: Received disconnect from 144.91.127.158 port 37778:11: Bye Bye [preauth]
Feb 20 07:10:08 np0005625203.novalocal sshd[19085]: Disconnected from invalid user n8n 144.91.127.158 port 37778 [preauth]
Feb 20 07:10:16 np0005625203.novalocal sshd[19087]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:10:17 np0005625203.novalocal sshd[19087]: Invalid user appuser from 45.246.55.249 port 55000
Feb 20 07:10:17 np0005625203.novalocal sshd[19087]: Received disconnect from 45.246.55.249 port 55000:11: Bye Bye [preauth]
Feb 20 07:10:17 np0005625203.novalocal sshd[19087]: Disconnected from invalid user appuser 45.246.55.249 port 55000 [preauth]
Feb 20 07:10:30 np0005625203.novalocal sshd[19089]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:10:30 np0005625203.novalocal sshd[19089]: Invalid user nutanix from 192.99.169.99 port 38100
Feb 20 07:10:30 np0005625203.novalocal sshd[19089]: Received disconnect from 192.99.169.99 port 38100:11: Bye Bye [preauth]
Feb 20 07:10:30 np0005625203.novalocal sshd[19089]: Disconnected from invalid user nutanix 192.99.169.99 port 38100 [preauth]
Feb 20 07:11:22 np0005625203.novalocal sshd[19091]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:11:23 np0005625203.novalocal sshd[19094]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:11:23 np0005625203.novalocal sshd[19091]: Invalid user n8n from 170.254.229.191 port 60936
Feb 20 07:11:23 np0005625203.novalocal sshd[19094]: Accepted publickey for zuul from 38.102.83.114 port 55636 ssh2: RSA SHA256:uaxCfdsB9dLyL4znjsLCPDTT6O+6xZzjHRjczJFOA6k
Feb 20 07:11:23 np0005625203.novalocal systemd-logind[759]: New session 10 of user zuul.
Feb 20 07:11:23 np0005625203.novalocal systemd[1]: Started Session 10 of User zuul.
Feb 20 07:11:23 np0005625203.novalocal sshd[19094]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Feb 20 07:11:23 np0005625203.novalocal sshd[19091]: Received disconnect from 170.254.229.191 port 60936:11: Bye Bye [preauth]
Feb 20 07:11:23 np0005625203.novalocal sshd[19091]: Disconnected from invalid user n8n 170.254.229.191 port 60936 [preauth]
Feb 20 07:11:23 np0005625203.novalocal python3[19111]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/redhat-release zuul_log_id=fa163ef9-e89a-064b-165c-00000000000c-1-overcloudnovacompute1 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:11:25 np0005625203.novalocal sudo[19129]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cesyxivdupygztwlagacbuuybokrotuh ; /usr/bin/python3
Feb 20 07:11:25 np0005625203.novalocal sudo[19129]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:11:25 np0005625203.novalocal python3[19131]: ansible-ansible.legacy.command Invoked with _raw_params=yum clean all zuul_log_id=fa163ef9-e89a-064b-165c-00000000000d-1-overcloudnovacompute1 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:11:27 np0005625203.novalocal sudo[19129]: pam_unix(sudo:session): session closed for user root
Feb 20 07:11:30 np0005625203.novalocal sudo[19148]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xfeqlxnpurenddwskxghxxqtcuhkflwv ; /usr/bin/python3
Feb 20 07:11:30 np0005625203.novalocal sudo[19148]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:11:30 np0005625203.novalocal python3[19150]: ansible-community.general.rhsm_repository Invoked with name=['rhel-9-for-x86_64-baseos-eus-rpms'] state=enabled purge=False
Feb 20 07:11:33 np0005625203.novalocal rhsm-service[6595]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 20 07:11:33 np0005625203.novalocal rhsm-service[6595]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 20 07:11:36 np0005625203.novalocal sshd[19280]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:11:36 np0005625203.novalocal sshd[19280]: Invalid user titu from 143.198.161.12 port 50360
Feb 20 07:11:37 np0005625203.novalocal sshd[19280]: Received disconnect from 143.198.161.12 port 50360:11: Bye Bye [preauth]
Feb 20 07:11:37 np0005625203.novalocal sshd[19280]: Disconnected from invalid user titu 143.198.161.12 port 50360 [preauth]
Feb 20 07:11:59 np0005625203.novalocal sudo[19148]: pam_unix(sudo:session): session closed for user root
Feb 20 07:12:25 np0005625203.novalocal sshd[19295]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:12:25 np0005625203.novalocal sudo[19310]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oreozjjkiyirwyvfvvlanfrrfibbzodh ; /usr/bin/python3
Feb 20 07:12:25 np0005625203.novalocal sudo[19310]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:12:25 np0005625203.novalocal python3[19312]: ansible-community.general.rhsm_repository Invoked with name=['rhel-9-for-x86_64-appstream-eus-rpms'] state=enabled purge=False
Feb 20 07:12:26 np0005625203.novalocal sshd[19295]: Received disconnect from 118.193.43.244 port 48226:11: Bye Bye [preauth]
Feb 20 07:12:26 np0005625203.novalocal sshd[19295]: Disconnected from authenticating user root 118.193.43.244 port 48226 [preauth]
Feb 20 07:12:29 np0005625203.novalocal rhsm-service[6595]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 20 07:12:31 np0005625203.novalocal sudo[19310]: pam_unix(sudo:session): session closed for user root
Feb 20 07:12:36 np0005625203.novalocal sudo[19509]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qcawgbnrxsugqapyvjcobjxaacknbdde ; /usr/bin/python3
Feb 20 07:12:36 np0005625203.novalocal sudo[19509]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:12:36 np0005625203.novalocal python3[19511]: ansible-community.general.rhsm_repository Invoked with name=['rhel-9-for-x86_64-highavailability-eus-rpms'] state=enabled purge=False
Feb 20 07:12:39 np0005625203.novalocal rhsm-service[6595]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 20 07:12:39 np0005625203.novalocal rhsm-service[6595]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 20 07:12:44 np0005625203.novalocal rhsm-service[6595]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 20 07:12:51 np0005625203.novalocal sudo[19509]: pam_unix(sudo:session): session closed for user root
Feb 20 07:13:04 np0005625203.novalocal sudo[19843]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wbhhmfokahmoeijtsnuobcsdrruhxxhp ; /usr/bin/python3
Feb 20 07:13:04 np0005625203.novalocal sudo[19843]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:13:04 np0005625203.novalocal python3[19845]: ansible-community.general.rhsm_repository Invoked with name=['fast-datapath-for-rhel-9-x86_64-rpms'] state=enabled purge=False
Feb 20 07:13:04 np0005625203.novalocal sshd[19847]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:13:07 np0005625203.novalocal rhsm-service[6595]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 20 07:13:08 np0005625203.novalocal sshd[19847]: Invalid user 0 from 185.246.128.171 port 62966
Feb 20 07:13:11 np0005625203.novalocal sshd[19974]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:13:11 np0005625203.novalocal rhsm-service[6595]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 20 07:13:12 np0005625203.novalocal sshd[19974]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 07:13:14 np0005625203.novalocal sshd[19847]: Disconnecting invalid user 0 185.246.128.171 port 62966: Change of username or service not allowed: (0,ssh-connection) -> (delegate,ssh-connection) [preauth]
Feb 20 07:13:18 np0005625203.novalocal sudo[19843]: pam_unix(sudo:session): session closed for user root
Feb 20 07:13:19 np0005625203.novalocal sshd[20168]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:13:24 np0005625203.novalocal sshd[20168]: Invalid user delegate from 185.246.128.171 port 57589
Feb 20 07:13:25 np0005625203.novalocal sshd[20168]: Disconnecting invalid user delegate 185.246.128.171 port 57589: Change of username or service not allowed: (delegate,ssh-connection) -> (redhat,ssh-connection) [preauth]
Feb 20 07:13:29 np0005625203.novalocal sshd[20170]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:13:29 np0005625203.novalocal sshd[20171]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:13:29 np0005625203.novalocal sshd[20171]: Invalid user n8n from 192.99.169.99 port 41216
Feb 20 07:13:29 np0005625203.novalocal sshd[20171]: Received disconnect from 192.99.169.99 port 41216:11: Bye Bye [preauth]
Feb 20 07:13:29 np0005625203.novalocal sshd[20171]: Disconnected from invalid user n8n 192.99.169.99 port 41216 [preauth]
Feb 20 07:13:31 np0005625203.novalocal sshd[20170]: Invalid user redhat from 185.246.128.171 port 31059
Feb 20 07:13:32 np0005625203.novalocal sudo[20187]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nxgijmtmzxwknylpvqhswxqeksgjbniz ; /usr/bin/python3
Feb 20 07:13:32 np0005625203.novalocal sudo[20187]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:13:32 np0005625203.novalocal python3[20189]: ansible-community.general.rhsm_repository Invoked with name=['openstack-17.1-for-rhel-9-x86_64-rpms'] state=enabled purge=False
Feb 20 07:13:34 np0005625203.novalocal sshd[20170]: Disconnecting invalid user redhat 185.246.128.171 port 31059: Change of username or service not allowed: (redhat,ssh-connection) -> (wangqi,ssh-connection) [preauth]
Feb 20 07:13:35 np0005625203.novalocal sshd[20310]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:13:35 np0005625203.novalocal sshd[20310]: Invalid user n8n from 172.203.58.203 port 43668
Feb 20 07:13:35 np0005625203.novalocal sshd[20310]: Received disconnect from 172.203.58.203 port 43668:11: Bye Bye [preauth]
Feb 20 07:13:35 np0005625203.novalocal sshd[20310]: Disconnected from invalid user n8n 172.203.58.203 port 43668 [preauth]
Feb 20 07:13:36 np0005625203.novalocal rhsm-service[6595]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 20 07:13:36 np0005625203.novalocal sshd[20318]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:13:37 np0005625203.novalocal sshd[20318]: Invalid user iksi from 144.91.127.158 port 42482
Feb 20 07:13:37 np0005625203.novalocal sshd[20318]: Received disconnect from 144.91.127.158 port 42482:11: Bye Bye [preauth]
Feb 20 07:13:37 np0005625203.novalocal sshd[20318]: Disconnected from invalid user iksi 144.91.127.158 port 42482 [preauth]
Feb 20 07:13:39 np0005625203.novalocal sshd[20379]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:13:41 np0005625203.novalocal rhsm-service[6595]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 20 07:13:41 np0005625203.novalocal rhsm-service[6595]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 20 07:13:41 np0005625203.novalocal sshd[20379]: Invalid user wangqi from 185.246.128.171 port 9215
Feb 20 07:13:42 np0005625203.novalocal sshd[20379]: Disconnecting invalid user wangqi 185.246.128.171 port 9215: Change of username or service not allowed: (wangqi,ssh-connection) -> (telegram,ssh-connection) [preauth]
Feb 20 07:13:43 np0005625203.novalocal sshd[20449]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:13:46 np0005625203.novalocal sshd[20518]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:13:46 np0005625203.novalocal sshd[20449]: Invalid user telegram from 185.246.128.171 port 24066
Feb 20 07:13:47 np0005625203.novalocal sshd[20518]: Invalid user autologin from 45.246.55.249 port 53984
Feb 20 07:13:47 np0005625203.novalocal sshd[20518]: Received disconnect from 45.246.55.249 port 53984:11: Bye Bye [preauth]
Feb 20 07:13:47 np0005625203.novalocal sshd[20518]: Disconnected from invalid user autologin 45.246.55.249 port 53984 [preauth]
Feb 20 07:13:48 np0005625203.novalocal sudo[20187]: pam_unix(sudo:session): session closed for user root
Feb 20 07:13:48 np0005625203.novalocal sshd[20449]: Disconnecting invalid user telegram 185.246.128.171 port 24066: Change of username or service not allowed: (telegram,ssh-connection) -> (ts,ssh-connection) [preauth]
Feb 20 07:13:52 np0005625203.novalocal sshd[20520]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:13:55 np0005625203.novalocal sshd[20520]: Invalid user ts from 185.246.128.171 port 59943
Feb 20 07:13:55 np0005625203.novalocal sshd[20520]: Disconnecting invalid user ts 185.246.128.171 port 59943: Change of username or service not allowed: (ts,ssh-connection) -> (user1,ssh-connection) [preauth]
Feb 20 07:13:59 np0005625203.novalocal sshd[20522]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:14:00 np0005625203.novalocal sshd[20522]: Invalid user user1 from 185.246.128.171 port 24653
Feb 20 07:14:02 np0005625203.novalocal sudo[20538]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fjdkjctqoclagqclqxemxukhrbjomsch ; /usr/bin/python3
Feb 20 07:14:02 np0005625203.novalocal sudo[20538]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:14:02 np0005625203.novalocal python3[20540]: ansible-ansible.legacy.command Invoked with _raw_params=yum repolist --enabled _uses_shell=True zuul_log_id=fa163ef9-e89a-064b-165c-000000000013-1-overcloudnovacompute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:14:02 np0005625203.novalocal sshd[20522]: error: maximum authentication attempts exceeded for invalid user user1 from 185.246.128.171 port 24653 ssh2 [preauth]
Feb 20 07:14:02 np0005625203.novalocal sshd[20522]: Disconnecting invalid user user1 185.246.128.171 port 24653: Too many authentication failures [preauth]
Feb 20 07:14:04 np0005625203.novalocal sshd[20544]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:14:04 np0005625203.novalocal sudo[20538]: pam_unix(sudo:session): session closed for user root
Feb 20 07:14:07 np0005625203.novalocal sudo[20560]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kzbvbtwwviosbljrmfdawxixkjifetyq ; /usr/bin/python3
Feb 20 07:14:07 np0005625203.novalocal sudo[20560]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:14:07 np0005625203.novalocal python3[20562]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch', 'os-net-config', 'ansible-core'] state=present update_cache=True allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb 20 07:14:09 np0005625203.novalocal sshd[20544]: Invalid user user1 from 185.246.128.171 port 43711
Feb 20 07:14:19 np0005625203.novalocal groupadd[20648]: group added to /etc/group: name=unbound, GID=987
Feb 20 07:14:19 np0005625203.novalocal groupadd[20648]: group added to /etc/gshadow: name=unbound
Feb 20 07:14:19 np0005625203.novalocal groupadd[20648]: new group: name=unbound, GID=987
Feb 20 07:14:19 np0005625203.novalocal useradd[20655]: new user: name=unbound, UID=987, GID=987, home=/etc/unbound, shell=/sbin/nologin, from=none
Feb 20 07:14:19 np0005625203.novalocal systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Feb 20 07:14:21 np0005625203.novalocal sshd[20544]: error: maximum authentication attempts exceeded for invalid user user1 from 185.246.128.171 port 43711 ssh2 [preauth]
Feb 20 07:14:21 np0005625203.novalocal sshd[20544]: Disconnecting invalid user user1 185.246.128.171 port 43711: Too many authentication failures [preauth]
Feb 20 07:14:24 np0005625203.novalocal sshd[20674]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:14:27 np0005625203.novalocal sshd[20674]: Invalid user user1 from 185.246.128.171 port 58820
Feb 20 07:14:27 np0005625203.novalocal kernel: SELinux:  Converting 497 SID table entries...
Feb 20 07:14:27 np0005625203.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Feb 20 07:14:27 np0005625203.novalocal kernel: SELinux:  policy capability open_perms=1
Feb 20 07:14:27 np0005625203.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Feb 20 07:14:27 np0005625203.novalocal kernel: SELinux:  policy capability always_check_network=0
Feb 20 07:14:27 np0005625203.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 20 07:14:27 np0005625203.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 20 07:14:27 np0005625203.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 20 07:14:28 np0005625203.novalocal groupadd[20684]: group added to /etc/group: name=openvswitch, GID=986
Feb 20 07:14:28 np0005625203.novalocal groupadd[20684]: group added to /etc/gshadow: name=openvswitch
Feb 20 07:14:28 np0005625203.novalocal groupadd[20684]: new group: name=openvswitch, GID=986
Feb 20 07:14:28 np0005625203.novalocal useradd[20691]: new user: name=openvswitch, UID=986, GID=986, home=/, shell=/sbin/nologin, from=none
Feb 20 07:14:28 np0005625203.novalocal groupadd[20699]: group added to /etc/group: name=hugetlbfs, GID=985
Feb 20 07:14:28 np0005625203.novalocal groupadd[20699]: group added to /etc/gshadow: name=hugetlbfs
Feb 20 07:14:28 np0005625203.novalocal groupadd[20699]: new group: name=hugetlbfs, GID=985
Feb 20 07:14:28 np0005625203.novalocal usermod[20707]: add 'openvswitch' to group 'hugetlbfs'
Feb 20 07:14:28 np0005625203.novalocal usermod[20707]: add 'openvswitch' to shadow group 'hugetlbfs'
Feb 20 07:14:31 np0005625203.novalocal dbus-broker-launch[754]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Feb 20 07:14:31 np0005625203.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 20 07:14:31 np0005625203.novalocal systemd[1]: Starting man-db-cache-update.service...
Feb 20 07:14:31 np0005625203.novalocal systemd[1]: Reloading.
Feb 20 07:14:31 np0005625203.novalocal systemd-rc-local-generator[21227]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 07:14:31 np0005625203.novalocal systemd-sysv-generator[21230]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 07:14:31 np0005625203.novalocal systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 07:14:31 np0005625203.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Feb 20 07:14:32 np0005625203.novalocal systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 20 07:14:32 np0005625203.novalocal systemd[1]: Finished man-db-cache-update.service.
Feb 20 07:14:32 np0005625203.novalocal systemd[1]: run-r212f4e7862d4460298d53cb79d4ba1ad.service: Deactivated successfully.
Feb 20 07:14:32 np0005625203.novalocal rhsm-service[6595]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 20 07:14:32 np0005625203.novalocal sudo[20560]: pam_unix(sudo:session): session closed for user root
Feb 20 07:14:32 np0005625203.novalocal rhsm-service[6595]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 20 07:14:37 np0005625203.novalocal sshd[20674]: error: maximum authentication attempts exceeded for invalid user user1 from 185.246.128.171 port 58820 ssh2 [preauth]
Feb 20 07:14:37 np0005625203.novalocal sshd[20674]: Disconnecting invalid user user1 185.246.128.171 port 58820: Too many authentication failures [preauth]
Feb 20 07:14:37 np0005625203.novalocal sshd[21758]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:14:37 np0005625203.novalocal sshd[21758]: Invalid user centos from 170.254.229.191 port 36188
Feb 20 07:14:38 np0005625203.novalocal sshd[21758]: Received disconnect from 170.254.229.191 port 36188:11: Bye Bye [preauth]
Feb 20 07:14:38 np0005625203.novalocal sshd[21758]: Disconnected from invalid user centos 170.254.229.191 port 36188 [preauth]
Feb 20 07:14:40 np0005625203.novalocal sshd[21760]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:14:42 np0005625203.novalocal sshd[21760]: Invalid user user1 from 185.246.128.171 port 57368
Feb 20 07:14:43 np0005625203.novalocal sshd[21760]: Disconnecting invalid user user1 185.246.128.171 port 57368: Change of username or service not allowed: (user1,ssh-connection) -> (bbs,ssh-connection) [preauth]
Feb 20 07:14:44 np0005625203.novalocal sshd[21762]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:14:47 np0005625203.novalocal sshd[21762]: Invalid user bbs from 185.246.128.171 port 10234
Feb 20 07:14:47 np0005625203.novalocal sshd[21762]: Disconnecting invalid user bbs 185.246.128.171 port 10234: Change of username or service not allowed: (bbs,ssh-connection) -> (utente,ssh-connection) [preauth]
Feb 20 07:14:49 np0005625203.novalocal sshd[21764]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:14:51 np0005625203.novalocal sshd[21764]: Invalid user utente from 185.246.128.171 port 30887
Feb 20 07:14:52 np0005625203.novalocal sshd[21764]: Disconnecting invalid user utente 185.246.128.171 port 30887: Change of username or service not allowed: (utente,ssh-connection) -> (anthony,ssh-connection) [preauth]
Feb 20 07:14:54 np0005625203.novalocal sshd[21766]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:14:57 np0005625203.novalocal sshd[21766]: Invalid user anthony from 185.246.128.171 port 50314
Feb 20 07:14:58 np0005625203.novalocal sudo[21782]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gsyejmtqwvzlbraytaqxbwuagljpiqkl ; /usr/bin/python3
Feb 20 07:14:58 np0005625203.novalocal sudo[21782]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:14:58 np0005625203.novalocal sshd[21766]: Disconnecting invalid user anthony 185.246.128.171 port 50314: Change of username or service not allowed: (anthony,ssh-connection) -> (username1,ssh-connection) [preauth]
Feb 20 07:14:58 np0005625203.novalocal python3[21784]: ansible-ansible.legacy.command Invoked with _raw_params=ansible-galaxy collection install ansible.posix _uses_shell=True zuul_log_id=fa163ef9-e89a-064b-165c-000000000015-1-overcloudnovacompute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:14:58 np0005625203.novalocal sshd[21788]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:14:59 np0005625203.novalocal sshd[21788]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 07:15:02 np0005625203.novalocal sshd[21790]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:15:05 np0005625203.novalocal sshd[21790]: Invalid user username1 from 185.246.128.171 port 17316
Feb 20 07:15:07 np0005625203.novalocal sshd[21790]: Disconnecting invalid user username1 185.246.128.171 port 17316: Change of username or service not allowed: (username1,ssh-connection) -> (ftpusr,ssh-connection) [preauth]
Feb 20 07:15:10 np0005625203.novalocal sshd[21793]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:15:13 np0005625203.novalocal sshd[21793]: Invalid user ftpusr from 185.246.128.171 port 53091
Feb 20 07:15:14 np0005625203.novalocal sudo[21782]: pam_unix(sudo:session): session closed for user root
Feb 20 07:15:15 np0005625203.novalocal sshd[21793]: Disconnecting invalid user ftpusr 185.246.128.171 port 53091: Change of username or service not allowed: (ftpusr,ssh-connection) -> (hadoop,ssh-connection) [preauth]
Feb 20 07:15:19 np0005625203.novalocal sshd[21796]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:15:22 np0005625203.novalocal sshd[21796]: Invalid user hadoop from 185.246.128.171 port 21655
Feb 20 07:15:22 np0005625203.novalocal sshd[21796]: Disconnecting invalid user hadoop 185.246.128.171 port 21655: Change of username or service not allowed: (hadoop,ssh-connection) -> (oozie,ssh-connection) [preauth]
Feb 20 07:15:24 np0005625203.novalocal sshd[21798]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:15:27 np0005625203.novalocal sshd[21798]: Invalid user oozie from 185.246.128.171 port 41827
Feb 20 07:15:28 np0005625203.novalocal sshd[21798]: Disconnecting invalid user oozie 185.246.128.171 port 41827: Change of username or service not allowed: (oozie,ssh-connection) -> (monitoring,ssh-connection) [preauth]
Feb 20 07:15:30 np0005625203.novalocal sshd[21800]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:15:33 np0005625203.novalocal sshd[21800]: Invalid user monitoring from 185.246.128.171 port 2713
Feb 20 07:15:34 np0005625203.novalocal sshd[21800]: Disconnecting invalid user monitoring 185.246.128.171 port 2713: Change of username or service not allowed: (monitoring,ssh-connection) -> (default,ssh-connection) [preauth]
Feb 20 07:15:34 np0005625203.novalocal sudo[21815]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fyijdbihmykowxgsigvzddoandxnthbj ; /usr/bin/python3
Feb 20 07:15:34 np0005625203.novalocal sudo[21815]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:15:34 np0005625203.novalocal python3[21817]: ansible-ansible.builtin.file Invoked with path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:15:34 np0005625203.novalocal sudo[21815]: pam_unix(sudo:session): session closed for user root
Feb 20 07:15:35 np0005625203.novalocal sudo[21863]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-prvfmowujvacxdqrvpezxthqcfvfsrcu ; /usr/bin/python3
Feb 20 07:15:35 np0005625203.novalocal sudo[21863]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:15:35 np0005625203.novalocal python3[21865]: ansible-ansible.legacy.stat Invoked with path=/etc/os-net-config/tripleo_config.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:15:35 np0005625203.novalocal sudo[21863]: pam_unix(sudo:session): session closed for user root
Feb 20 07:15:35 np0005625203.novalocal sudo[21906]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dwhkpjwnappgrbrbagwltyjccbtzxgdf ; /usr/bin/python3
Feb 20 07:15:35 np0005625203.novalocal sudo[21906]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:15:35 np0005625203.novalocal python3[21908]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771571734.989325-293-147822632617429/source dest=/etc/os-net-config/tripleo_config.yaml mode=None follow=False _original_basename=overcloud_net_config.j2 checksum=9333f42ac4b9baf349a5c32f7bcba3335b5912e0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:15:35 np0005625203.novalocal sudo[21906]: pam_unix(sudo:session): session closed for user root
Feb 20 07:15:35 np0005625203.novalocal sshd[21909]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:15:36 np0005625203.novalocal sudo[21937]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qhksuypijthcvhhjkmknuvtgtecsuicu ; /usr/bin/python3
Feb 20 07:15:36 np0005625203.novalocal sudo[21937]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:15:37 np0005625203.novalocal python3[21939]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network  state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Feb 20 07:15:37 np0005625203.novalocal sudo[21937]: pam_unix(sudo:session): session closed for user root
Feb 20 07:15:37 np0005625203.novalocal systemd-journald[618]: Field hash table of /run/log/journal/01f46965e72fd8a157841feaa66c8d52/system.journal has a fill level at 89.2 (297 of 333 items), suggesting rotation.
Feb 20 07:15:37 np0005625203.novalocal systemd-journald[618]: /run/log/journal/01f46965e72fd8a157841feaa66c8d52/system.journal: Journal header limits reached or header out-of-date, rotating.
Feb 20 07:15:37 np0005625203.novalocal rsyslogd[758]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Feb 20 07:15:37 np0005625203.novalocal rsyslogd[758]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Feb 20 07:15:37 np0005625203.novalocal sudo[21959]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cqspshoowzbeyiriukoxoxgswyeesabs ; /usr/bin/python3
Feb 20 07:15:37 np0005625203.novalocal sudo[21959]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:15:37 np0005625203.novalocal python3[21961]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-20 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Feb 20 07:15:37 np0005625203.novalocal sudo[21959]: pam_unix(sudo:session): session closed for user root
Feb 20 07:15:37 np0005625203.novalocal sudo[21979]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rnizwrrgkvdqqnmghhismklnzneedgej ; /usr/bin/python3
Feb 20 07:15:37 np0005625203.novalocal sudo[21979]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:15:37 np0005625203.novalocal python3[21981]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-21 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Feb 20 07:15:37 np0005625203.novalocal sudo[21979]: pam_unix(sudo:session): session closed for user root
Feb 20 07:15:37 np0005625203.novalocal sudo[21999]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ckwqqdauvketbccfbdxmknzmdklhnbbl ; /usr/bin/python3
Feb 20 07:15:37 np0005625203.novalocal sudo[21999]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:15:37 np0005625203.novalocal python3[22001]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-22 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Feb 20 07:15:37 np0005625203.novalocal sudo[21999]: pam_unix(sudo:session): session closed for user root
Feb 20 07:15:38 np0005625203.novalocal sudo[22019]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-indphjqugnginvwqakmvrprefltgzybi ; /usr/bin/python3
Feb 20 07:15:38 np0005625203.novalocal sudo[22019]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:15:38 np0005625203.novalocal python3[22021]: ansible-community.general.nmcli Invoked with conn_name=ci-private-network-23 state=absent ignore_unsupported_suboptions=False autoconnect=True gw4_ignore_auto=False never_default4=False dns4_ignore_auto=False may_fail4=True gw6_ignore_auto=False dns6_ignore_auto=False mode=balance-rr stp=True priority=128 slavepriority=32 forwarddelay=15 hellotime=2 maxage=20 ageingtime=300 hairpin=False path_cost=100 runner=roundrobin master=None slave_type=None ifname=None type=None ip4=None gw4=None routes4=None routes4_extended=None route_metric4=None routing_rules4=None dns4=None dns4_search=None dns4_options=None method4=None dhcp_client_id=None ip6=None gw6=None dns6=None dns6_search=None dns6_options=None routes6=None routes6_extended=None route_metric6=None method6=None ip_privacy6=None addr_gen_mode6=None miimon=None downdelay=None updelay=None xmit_hash_policy=None arp_interval=None arp_ip_target=None primary=None mtu=None mac=None zone=None runner_hwaddr_policy=None runner_fast_rate=None vlanid=None vlandev=None flags=None ingress=None egress=None vxlan_id=None vxlan_local=None vxlan_remote=None ip_tunnel_dev=None ip_tunnel_local=None ip_tunnel_remote=None ip_tunnel_input_key=NOT_LOGGING_PARAMETER ip_tunnel_output_key=NOT_LOGGING_PARAMETER ssid=None wifi=None wifi_sec=NOT_LOGGING_PARAMETER gsm=None macvlan=None wireguard=None vpn=None transport_mode=None
Feb 20 07:15:38 np0005625203.novalocal sudo[22019]: pam_unix(sudo:session): session closed for user root
Feb 20 07:15:40 np0005625203.novalocal sudo[22039]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bissprnxkhzwqjnzvrkrtbtxjamytlrr ; /usr/bin/python3
Feb 20 07:15:40 np0005625203.novalocal sudo[22039]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:15:40 np0005625203.novalocal python3[22041]: ansible-ansible.builtin.systemd Invoked with name=network state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 20 07:15:40 np0005625203.novalocal systemd[1]: Starting LSB: Bring up/down networking...
Feb 20 07:15:40 np0005625203.novalocal sshd[21909]: Invalid user default from 185.246.128.171 port 23424
Feb 20 07:15:40 np0005625203.novalocal network[22044]: WARN      : [network] You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 20 07:15:40 np0005625203.novalocal network[22055]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 20 07:15:40 np0005625203.novalocal network[22044]: WARN      : [network] 'network-scripts' will be removed from distribution in near future.
Feb 20 07:15:40 np0005625203.novalocal network[22056]: 'network-scripts' will be removed from distribution in near future.
Feb 20 07:15:40 np0005625203.novalocal network[22044]: WARN      : [network] It is advised to switch to 'NetworkManager' instead for network management.
Feb 20 07:15:40 np0005625203.novalocal network[22057]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 20 07:15:40 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571740.7570] audit: op="connections-reload" pid=22085 uid=0 result="success"
Feb 20 07:15:40 np0005625203.novalocal network[22044]: Bringing up loopback interface:  [  OK  ]
Feb 20 07:15:40 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571740.9372] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth0" pid=22173 uid=0 result="success"
Feb 20 07:15:40 np0005625203.novalocal network[22044]: Bringing up interface eth0:  [  OK  ]
Feb 20 07:15:41 np0005625203.novalocal systemd[1]: Started LSB: Bring up/down networking.
Feb 20 07:15:41 np0005625203.novalocal sudo[22039]: pam_unix(sudo:session): session closed for user root
Feb 20 07:15:41 np0005625203.novalocal sudo[22212]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-orusakfkvmpuafditaxuzfhowkkwjoma ; /usr/bin/python3
Feb 20 07:15:41 np0005625203.novalocal sudo[22212]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:15:41 np0005625203.novalocal python3[22214]: ansible-ansible.builtin.systemd Invoked with name=openvswitch state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 20 07:15:42 np0005625203.novalocal systemd[1]: Starting Open vSwitch Database Unit...
Feb 20 07:15:42 np0005625203.novalocal chown[22218]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Feb 20 07:15:42 np0005625203.novalocal ovs-ctl[22223]: /etc/openvswitch/conf.db does not exist ... (warning).
Feb 20 07:15:42 np0005625203.novalocal ovs-ctl[22223]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Feb 20 07:15:42 np0005625203.novalocal ovs-ctl[22223]: Starting ovsdb-server [  OK  ]
Feb 20 07:15:42 np0005625203.novalocal ovs-vsctl[22273]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Feb 20 07:15:42 np0005625203.novalocal ovs-vsctl[22294]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.6-141.el9fdp "external-ids:system-id=\"1e4d60e6-0be0-4143-b488-1b391fbc71ef\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"rhel\"" "system-version=\"9.2\""
Feb 20 07:15:42 np0005625203.novalocal ovs-ctl[22223]: Configuring Open vSwitch system IDs [  OK  ]
Feb 20 07:15:42 np0005625203.novalocal ovs-ctl[22223]: Enabling remote OVSDB managers [  OK  ]
Feb 20 07:15:42 np0005625203.novalocal systemd[1]: Started Open vSwitch Database Unit.
Feb 20 07:15:42 np0005625203.novalocal ovs-vsctl[22300]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=np0005625203.novalocal
Feb 20 07:15:42 np0005625203.novalocal systemd[1]: Starting Open vSwitch Delete Transient Ports...
Feb 20 07:15:42 np0005625203.novalocal systemd[1]: Finished Open vSwitch Delete Transient Ports.
Feb 20 07:15:42 np0005625203.novalocal systemd[1]: Starting Open vSwitch Forwarding Unit...
Feb 20 07:15:42 np0005625203.novalocal kernel: openvswitch: Open vSwitch switching datapath
Feb 20 07:15:42 np0005625203.novalocal ovs-ctl[22344]: Inserting openvswitch module [  OK  ]
Feb 20 07:15:43 np0005625203.novalocal ovs-ctl[22313]: Starting ovs-vswitchd [  OK  ]
Feb 20 07:15:43 np0005625203.novalocal ovs-vsctl[22362]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=np0005625203.novalocal
Feb 20 07:15:43 np0005625203.novalocal ovs-ctl[22313]: Enabling remote OVSDB managers [  OK  ]
Feb 20 07:15:43 np0005625203.novalocal systemd[1]: Started Open vSwitch Forwarding Unit.
Feb 20 07:15:43 np0005625203.novalocal systemd[1]: Starting Open vSwitch...
Feb 20 07:15:43 np0005625203.novalocal systemd[1]: Finished Open vSwitch.
Feb 20 07:15:43 np0005625203.novalocal sudo[22212]: pam_unix(sudo:session): session closed for user root
Feb 20 07:15:44 np0005625203.novalocal sshd[22364]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:15:45 np0005625203.novalocal sshd[22364]: Invalid user iksi from 118.193.43.244 port 41716
Feb 20 07:15:45 np0005625203.novalocal sudo[22380]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ropnlgheckbmesjwvothzfntvuifxgsx ; /usr/bin/python3
Feb 20 07:15:45 np0005625203.novalocal sudo[22380]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:15:45 np0005625203.novalocal sshd[22364]: Received disconnect from 118.193.43.244 port 41716:11: Bye Bye [preauth]
Feb 20 07:15:45 np0005625203.novalocal sshd[22364]: Disconnected from invalid user iksi 118.193.43.244 port 41716 [preauth]
Feb 20 07:15:45 np0005625203.novalocal python3[22382]: ansible-ansible.legacy.command Invoked with _raw_params=os-net-config -c /etc/os-net-config/tripleo_config.yaml _uses_shell=True zuul_log_id=fa163ef9-e89a-064b-165c-00000000001a-1-overcloudnovacompute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:15:46 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571746.7427] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22578 uid=0 result="success"
Feb 20 07:15:46 np0005625203.novalocal ifup[22579]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Feb 20 07:15:46 np0005625203.novalocal ifup[22580]: 'network-scripts' will be removed from distribution in near future.
Feb 20 07:15:46 np0005625203.novalocal ifup[22581]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Feb 20 07:15:46 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571746.7691] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22587 uid=0 result="success"
Feb 20 07:15:46 np0005625203.novalocal ovs-vsctl[22589]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --may-exist add-br br-ex -- set bridge br-ex other-config:mac-table-size=50000 -- set bridge br-ex other-config:hwaddr=fa:16:3e:cd:74:b1 -- set bridge br-ex fail_mode=standalone -- del-controller br-ex
Feb 20 07:15:46 np0005625203.novalocal kernel: device ovs-system entered promiscuous mode
Feb 20 07:15:46 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571746.7957] manager: (ovs-system): new Generic device (/org/freedesktop/NetworkManager/Devices/4)
Feb 20 07:15:46 np0005625203.novalocal kernel: Timeout policy base is empty
Feb 20 07:15:46 np0005625203.novalocal kernel: Failed to associated timeout policy `ovs_test_tp'
Feb 20 07:15:46 np0005625203.novalocal systemd-udevd[22591]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 07:15:46 np0005625203.novalocal systemd-udevd[22606]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 07:15:46 np0005625203.novalocal kernel: device br-ex entered promiscuous mode
Feb 20 07:15:46 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571746.8460] manager: (br-ex): new Generic device (/org/freedesktop/NetworkManager/Devices/5)
Feb 20 07:15:46 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571746.8722] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22616 uid=0 result="success"
Feb 20 07:15:46 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571746.8950] device (br-ex): carrier: link connected
Feb 20 07:15:48 np0005625203.novalocal sshd[21909]: error: maximum authentication attempts exceeded for invalid user default from 185.246.128.171 port 23424 ssh2 [preauth]
Feb 20 07:15:48 np0005625203.novalocal sshd[21909]: Disconnecting invalid user default 185.246.128.171 port 23424: Too many authentication failures [preauth]
Feb 20 07:15:49 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571749.9526] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22645 uid=0 result="success"
Feb 20 07:15:50 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571750.0016] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22660 uid=0 result="success"
Feb 20 07:15:50 np0005625203.novalocal NET[22685]: /etc/sysconfig/network-scripts/ifup-post : updated /etc/resolv.conf
Feb 20 07:15:50 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571750.0988] device (eth1): state change: activated -> unmanaged (reason 'unmanaged', sys-iface-state: 'managed')
Feb 20 07:15:50 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571750.1108] dhcp4 (eth1): canceled DHCP transaction
Feb 20 07:15:50 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571750.1109] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Feb 20 07:15:50 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571750.1109] dhcp4 (eth1): state changed no lease
Feb 20 07:15:50 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571750.1152] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22694 uid=0 result="success"
Feb 20 07:15:50 np0005625203.novalocal ifup[22695]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Feb 20 07:15:50 np0005625203.novalocal ifup[22696]: 'network-scripts' will be removed from distribution in near future.
Feb 20 07:15:50 np0005625203.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 20 07:15:50 np0005625203.novalocal ifup[22698]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Feb 20 07:15:50 np0005625203.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 20 07:15:50 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571750.1521] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22711 uid=0 result="success"
Feb 20 07:15:50 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571750.1922] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22722 uid=0 result="success"
Feb 20 07:15:50 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571750.1983] device (eth1): carrier: link connected
Feb 20 07:15:50 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571750.2245] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22731 uid=0 result="success"
Feb 20 07:15:50 np0005625203.novalocal ipv6_wait_tentative[22743]: Waiting for interface eth1 IPv6 address(es) to leave the 'tentative' state
Feb 20 07:15:51 np0005625203.novalocal sshd[22745]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:15:51 np0005625203.novalocal ipv6_wait_tentative[22749]: Waiting for interface eth1 IPv6 address(es) to leave the 'tentative' state
Feb 20 07:15:52 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571752.2986] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-eth1" pid=22759 uid=0 result="success"
Feb 20 07:15:52 np0005625203.novalocal ovs-vsctl[22774]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex eth1 -- add-port br-ex eth1
Feb 20 07:15:52 np0005625203.novalocal kernel: device eth1 entered promiscuous mode
Feb 20 07:15:52 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571752.3736] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22782 uid=0 result="success"
Feb 20 07:15:52 np0005625203.novalocal ifup[22783]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Feb 20 07:15:52 np0005625203.novalocal ifup[22784]: 'network-scripts' will be removed from distribution in near future.
Feb 20 07:15:52 np0005625203.novalocal ifup[22785]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Feb 20 07:15:52 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571752.4044] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-br-ex" pid=22791 uid=0 result="success"
Feb 20 07:15:52 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571752.4472] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22801 uid=0 result="success"
Feb 20 07:15:52 np0005625203.novalocal ifup[22802]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Feb 20 07:15:52 np0005625203.novalocal ifup[22803]: 'network-scripts' will be removed from distribution in near future.
Feb 20 07:15:52 np0005625203.novalocal ifup[22804]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Feb 20 07:15:52 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571752.4791] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22810 uid=0 result="success"
Feb 20 07:15:52 np0005625203.novalocal ovs-vsctl[22813]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan44 -- add-port br-ex vlan44 tag=44 -- set Interface vlan44 type=internal
Feb 20 07:15:52 np0005625203.novalocal kernel: device vlan44 entered promiscuous mode
Feb 20 07:15:52 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571752.5190] manager: (vlan44): new Generic device (/org/freedesktop/NetworkManager/Devices/6)
Feb 20 07:15:52 np0005625203.novalocal systemd-udevd[22815]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 07:15:52 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571752.5494] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22824 uid=0 result="success"
Feb 20 07:15:52 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571752.5709] device (vlan44): carrier: link connected
Feb 20 07:15:52 np0005625203.novalocal sshd[22745]: Invalid user default from 185.246.128.171 port 19472
Feb 20 07:15:52 np0005625203.novalocal sshd[22745]: Disconnecting invalid user default 185.246.128.171 port 19472: Change of username or service not allowed: (default,ssh-connection) -> (redis,ssh-connection) [preauth]
Feb 20 07:15:54 np0005625203.novalocal sshd[22844]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:15:55 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571755.6213] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22855 uid=0 result="success"
Feb 20 07:15:55 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571755.6734] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=22870 uid=0 result="success"
Feb 20 07:15:55 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571755.7378] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22891 uid=0 result="success"
Feb 20 07:15:55 np0005625203.novalocal ifup[22892]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Feb 20 07:15:55 np0005625203.novalocal ifup[22893]: 'network-scripts' will be removed from distribution in near future.
Feb 20 07:15:55 np0005625203.novalocal ifup[22894]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Feb 20 07:15:55 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571755.7682] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22900 uid=0 result="success"
Feb 20 07:15:55 np0005625203.novalocal ovs-vsctl[22903]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan20 -- add-port br-ex vlan20 tag=20 -- set Interface vlan20 type=internal
Feb 20 07:15:55 np0005625203.novalocal kernel: device vlan20 entered promiscuous mode
Feb 20 07:15:55 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571755.8103] manager: (vlan20): new Generic device (/org/freedesktop/NetworkManager/Devices/7)
Feb 20 07:15:55 np0005625203.novalocal systemd-udevd[22905]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 07:15:55 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571755.8311] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22915 uid=0 result="success"
Feb 20 07:15:55 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571755.8494] device (vlan20): carrier: link connected
Feb 20 07:15:57 np0005625203.novalocal sshd[22844]: Invalid user redis from 185.246.128.171 port 33085
Feb 20 07:15:58 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571758.9024] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22945 uid=0 result="success"
Feb 20 07:15:58 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571758.9531] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=22960 uid=0 result="success"
Feb 20 07:15:59 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571759.0119] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=22981 uid=0 result="success"
Feb 20 07:15:59 np0005625203.novalocal ifup[22982]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Feb 20 07:15:59 np0005625203.novalocal ifup[22983]: 'network-scripts' will be removed from distribution in near future.
Feb 20 07:15:59 np0005625203.novalocal ifup[22984]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Feb 20 07:15:59 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571759.0451] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=22990 uid=0 result="success"
Feb 20 07:15:59 np0005625203.novalocal ovs-vsctl[22993]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan21 -- add-port br-ex vlan21 tag=21 -- set Interface vlan21 type=internal
Feb 20 07:15:59 np0005625203.novalocal kernel: device vlan21 entered promiscuous mode
Feb 20 07:15:59 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571759.0835] manager: (vlan21): new Generic device (/org/freedesktop/NetworkManager/Devices/8)
Feb 20 07:15:59 np0005625203.novalocal systemd-udevd[22995]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 07:15:59 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571759.1097] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23005 uid=0 result="success"
Feb 20 07:15:59 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571759.1309] device (vlan21): carrier: link connected
Feb 20 07:16:00 np0005625203.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb 20 07:16:02 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571762.1831] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23035 uid=0 result="success"
Feb 20 07:16:02 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571762.2302] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23050 uid=0 result="success"
Feb 20 07:16:02 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571762.2864] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23071 uid=0 result="success"
Feb 20 07:16:02 np0005625203.novalocal ifup[23072]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Feb 20 07:16:02 np0005625203.novalocal ifup[23073]: 'network-scripts' will be removed from distribution in near future.
Feb 20 07:16:02 np0005625203.novalocal ifup[23074]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Feb 20 07:16:02 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571762.3169] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23080 uid=0 result="success"
Feb 20 07:16:02 np0005625203.novalocal ovs-vsctl[23083]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan23 -- add-port br-ex vlan23 tag=23 -- set Interface vlan23 type=internal
Feb 20 07:16:02 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571762.3906] manager: (vlan23): new Generic device (/org/freedesktop/NetworkManager/Devices/9)
Feb 20 07:16:02 np0005625203.novalocal kernel: device vlan23 entered promiscuous mode
Feb 20 07:16:02 np0005625203.novalocal systemd-udevd[23085]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 07:16:02 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571762.4114] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23095 uid=0 result="success"
Feb 20 07:16:02 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571762.4282] device (vlan23): carrier: link connected
Feb 20 07:16:02 np0005625203.novalocal sshd[22844]: Disconnecting invalid user redis 185.246.128.171 port 33085: Change of username or service not allowed: (redis,ssh-connection) -> (telecom,ssh-connection) [preauth]
Feb 20 07:16:05 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571765.4815] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23125 uid=0 result="success"
Feb 20 07:16:05 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571765.5281] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23140 uid=0 result="success"
Feb 20 07:16:05 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571765.5874] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23161 uid=0 result="success"
Feb 20 07:16:05 np0005625203.novalocal ifup[23162]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Feb 20 07:16:05 np0005625203.novalocal ifup[23163]: 'network-scripts' will be removed from distribution in near future.
Feb 20 07:16:05 np0005625203.novalocal ifup[23164]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Feb 20 07:16:05 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571765.6204] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23170 uid=0 result="success"
Feb 20 07:16:05 np0005625203.novalocal ovs-vsctl[23173]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan22 -- add-port br-ex vlan22 tag=22 -- set Interface vlan22 type=internal
Feb 20 07:16:05 np0005625203.novalocal kernel: device vlan22 entered promiscuous mode
Feb 20 07:16:05 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571765.6813] manager: (vlan22): new Generic device (/org/freedesktop/NetworkManager/Devices/10)
Feb 20 07:16:05 np0005625203.novalocal systemd-udevd[23175]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 07:16:05 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571765.7074] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23185 uid=0 result="success"
Feb 20 07:16:05 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571765.7277] device (vlan22): carrier: link connected
Feb 20 07:16:06 np0005625203.novalocal sshd[23203]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:16:08 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571768.7726] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23217 uid=0 result="success"
Feb 20 07:16:08 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571768.8112] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23232 uid=0 result="success"
Feb 20 07:16:08 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571768.8721] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23253 uid=0 result="success"
Feb 20 07:16:08 np0005625203.novalocal ifup[23254]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Feb 20 07:16:08 np0005625203.novalocal ifup[23255]: 'network-scripts' will be removed from distribution in near future.
Feb 20 07:16:08 np0005625203.novalocal ifup[23256]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Feb 20 07:16:08 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571768.8974] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23262 uid=0 result="success"
Feb 20 07:16:08 np0005625203.novalocal ovs-vsctl[23265]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan44 -- add-port br-ex vlan44 tag=44 -- set Interface vlan44 type=internal
Feb 20 07:16:08 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571768.9514] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23272 uid=0 result="success"
Feb 20 07:16:10 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571770.0075] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23299 uid=0 result="success"
Feb 20 07:16:10 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571770.0535] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan44" pid=23314 uid=0 result="success"
Feb 20 07:16:10 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571770.1106] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23335 uid=0 result="success"
Feb 20 07:16:10 np0005625203.novalocal ifup[23336]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Feb 20 07:16:10 np0005625203.novalocal ifup[23337]: 'network-scripts' will be removed from distribution in near future.
Feb 20 07:16:10 np0005625203.novalocal ifup[23338]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Feb 20 07:16:10 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571770.1423] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23344 uid=0 result="success"
Feb 20 07:16:10 np0005625203.novalocal ovs-vsctl[23347]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan20 -- add-port br-ex vlan20 tag=20 -- set Interface vlan20 type=internal
Feb 20 07:16:10 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571770.1978] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23354 uid=0 result="success"
Feb 20 07:16:10 np0005625203.novalocal sshd[23203]: Invalid user telecom from 185.246.128.171 port 15823
Feb 20 07:16:11 np0005625203.novalocal sshd[23203]: Disconnecting invalid user telecom 185.246.128.171 port 15823: Change of username or service not allowed: (telecom,ssh-connection) -> (prometheus,ssh-connection) [preauth]
Feb 20 07:16:11 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571771.2550] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23382 uid=0 result="success"
Feb 20 07:16:11 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571771.3012] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan20" pid=23397 uid=0 result="success"
Feb 20 07:16:11 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571771.3573] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23418 uid=0 result="success"
Feb 20 07:16:11 np0005625203.novalocal ifup[23419]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Feb 20 07:16:11 np0005625203.novalocal ifup[23420]: 'network-scripts' will be removed from distribution in near future.
Feb 20 07:16:11 np0005625203.novalocal ifup[23421]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Feb 20 07:16:11 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571771.3876] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23427 uid=0 result="success"
Feb 20 07:16:11 np0005625203.novalocal ovs-vsctl[23430]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan21 -- add-port br-ex vlan21 tag=21 -- set Interface vlan21 type=internal
Feb 20 07:16:11 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571771.4376] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23437 uid=0 result="success"
Feb 20 07:16:12 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571772.4929] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23465 uid=0 result="success"
Feb 20 07:16:12 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571772.5392] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan21" pid=23480 uid=0 result="success"
Feb 20 07:16:12 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571772.5961] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23501 uid=0 result="success"
Feb 20 07:16:12 np0005625203.novalocal ifup[23502]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Feb 20 07:16:12 np0005625203.novalocal ifup[23503]: 'network-scripts' will be removed from distribution in near future.
Feb 20 07:16:12 np0005625203.novalocal ifup[23504]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Feb 20 07:16:12 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571772.6265] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23510 uid=0 result="success"
Feb 20 07:16:12 np0005625203.novalocal ovs-vsctl[23513]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan23 -- add-port br-ex vlan23 tag=23 -- set Interface vlan23 type=internal
Feb 20 07:16:12 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571772.6840] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23520 uid=0 result="success"
Feb 20 07:16:13 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571773.7402] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23548 uid=0 result="success"
Feb 20 07:16:13 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571773.7827] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan23" pid=23563 uid=0 result="success"
Feb 20 07:16:13 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571773.8362] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23584 uid=0 result="success"
Feb 20 07:16:13 np0005625203.novalocal ifup[23585]: You are using 'ifup' script provided by 'network-scripts', which are now deprecated.
Feb 20 07:16:13 np0005625203.novalocal ifup[23586]: 'network-scripts' will be removed from distribution in near future.
Feb 20 07:16:13 np0005625203.novalocal ifup[23587]: It is advised to switch to 'NetworkManager' instead - it provides 'ifup/ifdown' scripts as well.
Feb 20 07:16:13 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571773.8676] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23593 uid=0 result="success"
Feb 20 07:16:13 np0005625203.novalocal ovs-vsctl[23596]: ovs|00001|vsctl|INFO|Called as ovs-vsctl -t 10 -- --if-exists del-port br-ex vlan22 -- add-port br-ex vlan22 tag=22 -- set Interface vlan22 type=internal
Feb 20 07:16:13 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571773.9175] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23603 uid=0 result="success"
Feb 20 07:16:14 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571774.9704] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23631 uid=0 result="success"
Feb 20 07:16:15 np0005625203.novalocal NetworkManager[5968]: <info>  [1771571775.0099] audit: op="connections-load" args="/etc/sysconfig/network-scripts/ifcfg-vlan22" pid=23646 uid=0 result="success"
Feb 20 07:16:15 np0005625203.novalocal sudo[22380]: pam_unix(sudo:session): session closed for user root
Feb 20 07:16:15 np0005625203.novalocal sshd[23663]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:16:17 np0005625203.novalocal sshd[23663]: Invalid user prometheus from 185.246.128.171 port 52528
Feb 20 07:16:18 np0005625203.novalocal sshd[23663]: Disconnecting invalid user prometheus 185.246.128.171 port 52528: Change of username or service not allowed: (prometheus,ssh-connection) -> (app,ssh-connection) [preauth]
Feb 20 07:16:20 np0005625203.novalocal sshd[23666]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:16:25 np0005625203.novalocal sshd[23666]: Invalid user app from 185.246.128.171 port 7862
Feb 20 07:16:26 np0005625203.novalocal sshd[23666]: Disconnecting invalid user app 185.246.128.171 port 7862: Change of username or service not allowed: (app,ssh-connection) -> (ganesh,ssh-connection) [preauth]
Feb 20 07:16:27 np0005625203.novalocal sshd[23668]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:16:29 np0005625203.novalocal sshd[23669]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:16:29 np0005625203.novalocal sshd[23669]: Received disconnect from 143.198.161.12 port 47004:11: Bye Bye [preauth]
Feb 20 07:16:29 np0005625203.novalocal sshd[23669]: Disconnected from authenticating user root 143.198.161.12 port 47004 [preauth]
Feb 20 07:16:31 np0005625203.novalocal sshd[23668]: Invalid user ganesh from 185.246.128.171 port 36967
Feb 20 07:16:33 np0005625203.novalocal sshd[23668]: Disconnecting invalid user ganesh 185.246.128.171 port 36967: Change of username or service not allowed: (ganesh,ssh-connection) -> (sys,ssh-connection) [preauth]
Feb 20 07:16:34 np0005625203.novalocal sshd[23672]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:16:34 np0005625203.novalocal sshd[23672]: Invalid user n8n from 192.99.169.99 port 39672
Feb 20 07:16:34 np0005625203.novalocal sshd[23672]: Received disconnect from 192.99.169.99 port 39672:11: Bye Bye [preauth]
Feb 20 07:16:34 np0005625203.novalocal sshd[23672]: Disconnected from invalid user n8n 192.99.169.99 port 39672 [preauth]
Feb 20 07:16:37 np0005625203.novalocal sshd[23674]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:16:41 np0005625203.novalocal sshd[23674]: Invalid user sys from 185.246.128.171 port 13356
Feb 20 07:16:41 np0005625203.novalocal sshd[23676]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:16:41 np0005625203.novalocal sshd[23676]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 07:16:43 np0005625203.novalocal sshd[23674]: Disconnecting invalid user sys 185.246.128.171 port 13356: Change of username or service not allowed: (sys,ssh-connection) -> (dev,ssh-connection) [preauth]
Feb 20 07:16:45 np0005625203.novalocal sshd[23678]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:16:51 np0005625203.novalocal sshd[23678]: Invalid user dev from 185.246.128.171 port 42241
Feb 20 07:16:56 np0005625203.novalocal sshd[23680]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:16:56 np0005625203.novalocal sshd[23680]: Invalid user centos from 172.203.58.203 port 46962
Feb 20 07:16:56 np0005625203.novalocal sshd[23680]: Received disconnect from 172.203.58.203 port 46962:11: Bye Bye [preauth]
Feb 20 07:16:56 np0005625203.novalocal sshd[23680]: Disconnected from invalid user centos 172.203.58.203 port 46962 [preauth]
Feb 20 07:16:59 np0005625203.novalocal sshd[23678]: Disconnecting invalid user dev 185.246.128.171 port 42241: Change of username or service not allowed: (dev,ssh-connection) -> (jimmy,ssh-connection) [preauth]
Feb 20 07:17:03 np0005625203.novalocal sshd[23682]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:17:05 np0005625203.novalocal sshd[23682]: Invalid user jimmy from 185.246.128.171 port 51438
Feb 20 07:17:06 np0005625203.novalocal sshd[23682]: Disconnecting invalid user jimmy 185.246.128.171 port 51438: Change of username or service not allowed: (jimmy,ssh-connection) -> (zhao,ssh-connection) [preauth]
Feb 20 07:17:08 np0005625203.novalocal python3[23698]: ansible-ansible.legacy.command Invoked with _raw_params=ip a
                                                       ping -c 2 -W 2 192.168.122.10
                                                       ping -c 2 -W 2 192.168.122.11
                                                        _uses_shell=True zuul_log_id=fa163ef9-e89a-064b-165c-00000000001b-1-overcloudnovacompute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:17:11 np0005625203.novalocal sshd[23704]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:17:11 np0005625203.novalocal sshd[23706]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:17:12 np0005625203.novalocal sshd[23706]: Received disconnect from 144.91.127.158 port 48168:11: Bye Bye [preauth]
Feb 20 07:17:12 np0005625203.novalocal sshd[23706]: Disconnected from authenticating user root 144.91.127.158 port 48168 [preauth]
Feb 20 07:17:14 np0005625203.novalocal python3[23721]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDMsWl2iDqidUqzeXIedhPUnMaIwstb6f7CXhkhrGMXj2w21P81bPYesLJBAmIee3BaCMGYsrUpWu7u92b6J2nM7sdG3Lo3YZqie0IUZG/NpfJ+OEJsLIQ5zFT7jGNI77bmiCBX4jibhRJuDVtuombA5bly/q1x7yqRZ5UuwmTahAz49MPmun69R00RpbzOjt8OgQQ6ZdjDclItely6T7y+tDL3UemjBYcKGNckzXB1nWJEjviTyIXiLLqJdcxWarPXzxewMUoyRiTPQU6uRQ1VhuBRFyiofrACRWtFHfLEuWMVqQAL1OtJD5P+KSuVKHkT/MAFSdo0OptGxMmFN5ZvbruOyab6MovPK/9sTaFrhzT6tE78w+fEIFe4kPvM2bw80sKzO9yrkvqs1LL9ToVf9r2TufxRZVCibTFSa9Cw0U9yILFdCrlCZ2GMFNTCFuM7vQDAKwrKndvATAxKrm+D8Xme2+SVCv3oKJtx0m4D6gc9/IMzFhC7Ibh6bJaSj2s= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 07:17:14 np0005625203.novalocal sudo[23735]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lbluulcildutyfajtlpqrqbczhvsdjiy ; /usr/bin/python3
Feb 20 07:17:14 np0005625203.novalocal sudo[23735]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:17:14 np0005625203.novalocal python3[23737]: ansible-ansible.posix.authorized_key Invoked with user=root key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDMsWl2iDqidUqzeXIedhPUnMaIwstb6f7CXhkhrGMXj2w21P81bPYesLJBAmIee3BaCMGYsrUpWu7u92b6J2nM7sdG3Lo3YZqie0IUZG/NpfJ+OEJsLIQ5zFT7jGNI77bmiCBX4jibhRJuDVtuombA5bly/q1x7yqRZ5UuwmTahAz49MPmun69R00RpbzOjt8OgQQ6ZdjDclItely6T7y+tDL3UemjBYcKGNckzXB1nWJEjviTyIXiLLqJdcxWarPXzxewMUoyRiTPQU6uRQ1VhuBRFyiofrACRWtFHfLEuWMVqQAL1OtJD5P+KSuVKHkT/MAFSdo0OptGxMmFN5ZvbruOyab6MovPK/9sTaFrhzT6tE78w+fEIFe4kPvM2bw80sKzO9yrkvqs1LL9ToVf9r2TufxRZVCibTFSa9Cw0U9yILFdCrlCZ2GMFNTCFuM7vQDAKwrKndvATAxKrm+D8Xme2+SVCv3oKJtx0m4D6gc9/IMzFhC7Ibh6bJaSj2s= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 07:17:14 np0005625203.novalocal sudo[23735]: pam_unix(sudo:session): session closed for user root
Feb 20 07:17:14 np0005625203.novalocal sshd[23704]: Invalid user zhao from 185.246.128.171 port 21326
Feb 20 07:17:16 np0005625203.novalocal sshd[23704]: Disconnecting invalid user zhao 185.246.128.171 port 21326: Change of username or service not allowed: (zhao,ssh-connection) -> (adam,ssh-connection) [preauth]
Feb 20 07:17:16 np0005625203.novalocal python3[23751]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDMsWl2iDqidUqzeXIedhPUnMaIwstb6f7CXhkhrGMXj2w21P81bPYesLJBAmIee3BaCMGYsrUpWu7u92b6J2nM7sdG3Lo3YZqie0IUZG/NpfJ+OEJsLIQ5zFT7jGNI77bmiCBX4jibhRJuDVtuombA5bly/q1x7yqRZ5UuwmTahAz49MPmun69R00RpbzOjt8OgQQ6ZdjDclItely6T7y+tDL3UemjBYcKGNckzXB1nWJEjviTyIXiLLqJdcxWarPXzxewMUoyRiTPQU6uRQ1VhuBRFyiofrACRWtFHfLEuWMVqQAL1OtJD5P+KSuVKHkT/MAFSdo0OptGxMmFN5ZvbruOyab6MovPK/9sTaFrhzT6tE78w+fEIFe4kPvM2bw80sKzO9yrkvqs1LL9ToVf9r2TufxRZVCibTFSa9Cw0U9yILFdCrlCZ2GMFNTCFuM7vQDAKwrKndvATAxKrm+D8Xme2+SVCv3oKJtx0m4D6gc9/IMzFhC7Ibh6bJaSj2s= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 07:17:16 np0005625203.novalocal sudo[23765]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mkhhikcvcqdkdteczjobsvtubphmkdoj ; /usr/bin/python3
Feb 20 07:17:16 np0005625203.novalocal sudo[23765]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:17:16 np0005625203.novalocal python3[23767]: ansible-ansible.posix.authorized_key Invoked with user=root key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDMsWl2iDqidUqzeXIedhPUnMaIwstb6f7CXhkhrGMXj2w21P81bPYesLJBAmIee3BaCMGYsrUpWu7u92b6J2nM7sdG3Lo3YZqie0IUZG/NpfJ+OEJsLIQ5zFT7jGNI77bmiCBX4jibhRJuDVtuombA5bly/q1x7yqRZ5UuwmTahAz49MPmun69R00RpbzOjt8OgQQ6ZdjDclItely6T7y+tDL3UemjBYcKGNckzXB1nWJEjviTyIXiLLqJdcxWarPXzxewMUoyRiTPQU6uRQ1VhuBRFyiofrACRWtFHfLEuWMVqQAL1OtJD5P+KSuVKHkT/MAFSdo0OptGxMmFN5ZvbruOyab6MovPK/9sTaFrhzT6tE78w+fEIFe4kPvM2bw80sKzO9yrkvqs1LL9ToVf9r2TufxRZVCibTFSa9Cw0U9yILFdCrlCZ2GMFNTCFuM7vQDAKwrKndvATAxKrm+D8Xme2+SVCv3oKJtx0m4D6gc9/IMzFhC7Ibh6bJaSj2s= zuul-build-sshkey manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Feb 20 07:17:16 np0005625203.novalocal sudo[23765]: pam_unix(sudo:session): session closed for user root
Feb 20 07:17:17 np0005625203.novalocal python3[23781]: ansible-ansible.builtin.slurp Invoked with path=/etc/hostname src=/etc/hostname
Feb 20 07:17:18 np0005625203.novalocal python3[23796]: ansible-ansible.legacy.command Invoked with _raw_params=hostname="np0005625203.novalocal"
                                                       hostname_str_array=(${hostname//./ })
                                                       echo ${hostname_str_array[0]} > /home/zuul/ansible_hostname
                                                        _uses_shell=True zuul_log_id=fa163ef9-e89a-064b-165c-000000000022-1-overcloudnovacompute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:17:18 np0005625203.novalocal sshd[23800]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:17:19 np0005625203.novalocal sudo[23815]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-irmvndlqpqmsehojjhvipxpvngrjkysq ; /usr/bin/python3
Feb 20 07:17:19 np0005625203.novalocal sudo[23815]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:17:19 np0005625203.novalocal python3[23817]: ansible-ansible.legacy.command Invoked with _raw_params=hostname=$(cat /home/zuul/ansible_hostname)
                                                       hostnamectl hostname "$hostname.localdomain"
                                                        _uses_shell=True zuul_log_id=fa163ef9-e89a-064b-165c-000000000023-1-overcloudnovacompute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:17:19 np0005625203.novalocal systemd[1]: Starting Hostname Service...
Feb 20 07:17:19 np0005625203.novalocal systemd[1]: Started Hostname Service.
Feb 20 07:17:19 np0005625203.localdomain systemd-hostnamed[23821]: Hostname set to <np0005625203.localdomain> (static)
Feb 20 07:17:19 np0005625203.localdomain NetworkManager[5968]: <info>  [1771571839.4045] hostname: static hostname changed from "np0005625203.novalocal" to "np0005625203.localdomain"
Feb 20 07:17:19 np0005625203.localdomain systemd[1]: Starting Network Manager Script Dispatcher Service...
Feb 20 07:17:19 np0005625203.localdomain systemd[1]: Started Network Manager Script Dispatcher Service.
Feb 20 07:17:19 np0005625203.localdomain sudo[23815]: pam_unix(sudo:session): session closed for user root
Feb 20 07:17:19 np0005625203.localdomain sshd[23832]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:17:20 np0005625203.localdomain sshd[19094]: pam_unix(sshd:session): session closed for user zuul
Feb 20 07:17:20 np0005625203.localdomain systemd[1]: session-10.scope: Deactivated successfully.
Feb 20 07:17:20 np0005625203.localdomain systemd[1]: session-10.scope: Consumed 1min 46.673s CPU time.
Feb 20 07:17:20 np0005625203.localdomain systemd-logind[759]: Session 10 logged out. Waiting for processes to exit.
Feb 20 07:17:20 np0005625203.localdomain systemd-logind[759]: Removed session 10.
Feb 20 07:17:20 np0005625203.localdomain sshd[23832]: Invalid user nutanix from 45.246.55.249 port 36672
Feb 20 07:17:20 np0005625203.localdomain sshd[23832]: Received disconnect from 45.246.55.249 port 36672:11: Bye Bye [preauth]
Feb 20 07:17:20 np0005625203.localdomain sshd[23832]: Disconnected from invalid user nutanix 45.246.55.249 port 36672 [preauth]
Feb 20 07:17:22 np0005625203.localdomain sshd[23800]: Invalid user adam from 185.246.128.171 port 52446
Feb 20 07:17:23 np0005625203.localdomain sshd[23800]: Disconnecting invalid user adam 185.246.128.171 port 52446: Change of username or service not allowed: (adam,ssh-connection) -> (paul,ssh-connection) [preauth]
Feb 20 07:17:23 np0005625203.localdomain sshd[23835]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:17:23 np0005625203.localdomain sshd[23835]: Accepted publickey for zuul from 38.102.83.114 port 33726 ssh2: RSA SHA256:uaxCfdsB9dLyL4znjsLCPDTT6O+6xZzjHRjczJFOA6k
Feb 20 07:17:23 np0005625203.localdomain systemd-logind[759]: New session 11 of user zuul.
Feb 20 07:17:23 np0005625203.localdomain systemd[1]: Started Session 11 of User zuul.
Feb 20 07:17:23 np0005625203.localdomain sshd[23835]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Feb 20 07:17:23 np0005625203.localdomain python3[23852]: ansible-ansible.builtin.slurp Invoked with path=/home/zuul/ansible_hostname src=/home/zuul/ansible_hostname
Feb 20 07:17:25 np0005625203.localdomain sshd[23835]: pam_unix(sshd:session): session closed for user zuul
Feb 20 07:17:25 np0005625203.localdomain systemd[1]: session-11.scope: Deactivated successfully.
Feb 20 07:17:25 np0005625203.localdomain systemd-logind[759]: Session 11 logged out. Waiting for processes to exit.
Feb 20 07:17:25 np0005625203.localdomain systemd-logind[759]: Removed session 11.
Feb 20 07:17:26 np0005625203.localdomain sshd[23853]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:17:29 np0005625203.localdomain systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Feb 20 07:17:29 np0005625203.localdomain sshd[23853]: Invalid user paul from 185.246.128.171 port 20302
Feb 20 07:17:30 np0005625203.localdomain sshd[23856]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:17:30 np0005625203.localdomain sshd[23856]: Invalid user oracle from 187.87.206.21 port 34166
Feb 20 07:17:30 np0005625203.localdomain sshd[23853]: Disconnecting invalid user paul 185.246.128.171 port 20302: Change of username or service not allowed: (paul,ssh-connection) -> (webapp,ssh-connection) [preauth]
Feb 20 07:17:31 np0005625203.localdomain sshd[23856]: Received disconnect from 187.87.206.21 port 34166:11: Bye Bye [preauth]
Feb 20 07:17:31 np0005625203.localdomain sshd[23856]: Disconnected from invalid user oracle 187.87.206.21 port 34166 [preauth]
Feb 20 07:17:35 np0005625203.localdomain sshd[23858]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:17:41 np0005625203.localdomain sshd[23858]: Invalid user webapp from 185.246.128.171 port 61046
Feb 20 07:17:42 np0005625203.localdomain sshd[23858]: Disconnecting invalid user webapp 185.246.128.171 port 61046: Change of username or service not allowed: (webapp,ssh-connection) -> (hamed,ssh-connection) [preauth]
Feb 20 07:17:45 np0005625203.localdomain sshd[23860]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:17:49 np0005625203.localdomain systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 20 07:17:49 np0005625203.localdomain sshd[23860]: Invalid user hamed from 185.246.128.171 port 39301
Feb 20 07:17:50 np0005625203.localdomain sshd[23860]: Disconnecting invalid user hamed 185.246.128.171 port 39301: Change of username or service not allowed: (hamed,ssh-connection) -> (steven,ssh-connection) [preauth]
Feb 20 07:17:51 np0005625203.localdomain sshd[23865]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:17:53 np0005625203.localdomain sshd[23865]: Invalid user steven from 185.246.128.171 port 63758
Feb 20 07:17:54 np0005625203.localdomain sshd[23865]: Disconnecting invalid user steven 185.246.128.171 port 63758: Change of username or service not allowed: (steven,ssh-connection) -> (prod,ssh-connection) [preauth]
Feb 20 07:17:58 np0005625203.localdomain sshd[23867]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:18:01 np0005625203.localdomain sshd[23867]: Invalid user prod from 185.246.128.171 port 30230
Feb 20 07:18:02 np0005625203.localdomain sshd[23867]: Disconnecting invalid user prod 185.246.128.171 port 30230: Change of username or service not allowed: (prod,ssh-connection) -> (es2,ssh-connection) [preauth]
Feb 20 07:18:04 np0005625203.localdomain sshd[23869]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:18:05 np0005625203.localdomain sshd[23871]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:18:05 np0005625203.localdomain sshd[23871]: Invalid user iksi from 170.254.229.191 port 53732
Feb 20 07:18:05 np0005625203.localdomain sshd[23871]: Received disconnect from 170.254.229.191 port 53732:11: Bye Bye [preauth]
Feb 20 07:18:05 np0005625203.localdomain sshd[23871]: Disconnected from invalid user iksi 170.254.229.191 port 53732 [preauth]
Feb 20 07:18:07 np0005625203.localdomain sshd[23869]: Invalid user es2 from 185.246.128.171 port 54026
Feb 20 07:18:08 np0005625203.localdomain sshd[23869]: Disconnecting invalid user es2 185.246.128.171 port 54026: Change of username or service not allowed: (es2,ssh-connection) -> (cx,ssh-connection) [preauth]
Feb 20 07:18:09 np0005625203.localdomain sshd[23873]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:18:09 np0005625203.localdomain sshd[23873]: Accepted publickey for zuul from 38.102.83.114 port 51740 ssh2: RSA SHA256:uaxCfdsB9dLyL4znjsLCPDTT6O+6xZzjHRjczJFOA6k
Feb 20 07:18:09 np0005625203.localdomain systemd-logind[759]: New session 12 of user zuul.
Feb 20 07:18:09 np0005625203.localdomain systemd[1]: Started Session 12 of User zuul.
Feb 20 07:18:09 np0005625203.localdomain sshd[23873]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Feb 20 07:18:09 np0005625203.localdomain sudo[23890]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jkbdhhbawuaowhtgoapqsgnsnuepzlvs ; /usr/bin/python3
Feb 20 07:18:09 np0005625203.localdomain sudo[23890]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:18:09 np0005625203.localdomain python3[23892]: ansible-ansible.legacy.dnf Invoked with name=['lvm2', 'jq'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb 20 07:18:12 np0005625203.localdomain sshd[23894]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:18:13 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 07:18:13 np0005625203.localdomain systemd-rc-local-generator[23933]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 07:18:13 np0005625203.localdomain systemd-sysv-generator[23941]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 07:18:13 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 07:18:13 np0005625203.localdomain systemd[1]: Starting dnf makecache...
Feb 20 07:18:13 np0005625203.localdomain systemd[1]: Listening on Device-mapper event daemon FIFOs.
Feb 20 07:18:13 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 07:18:13 np0005625203.localdomain systemd-rc-local-generator[23974]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 07:18:13 np0005625203.localdomain systemd-sysv-generator[23978]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 07:18:13 np0005625203.localdomain dnf[23948]: Updating Subscription Management repositories.
Feb 20 07:18:13 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 07:18:14 np0005625203.localdomain systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Feb 20 07:18:14 np0005625203.localdomain systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Feb 20 07:18:14 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 07:18:14 np0005625203.localdomain systemd-rc-local-generator[24017]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 07:18:14 np0005625203.localdomain systemd-sysv-generator[24020]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 07:18:14 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 07:18:14 np0005625203.localdomain systemd[1]: Listening on LVM2 poll daemon socket.
Feb 20 07:18:14 np0005625203.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 20 07:18:14 np0005625203.localdomain systemd[1]: Starting man-db-cache-update.service...
Feb 20 07:18:14 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 07:18:14 np0005625203.localdomain systemd-sysv-generator[24093]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 07:18:14 np0005625203.localdomain systemd-rc-local-generator[24090]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 07:18:14 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 07:18:15 np0005625203.localdomain systemd[1]: Queuing reload/restart jobs for marked units…
Feb 20 07:18:15 np0005625203.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 20 07:18:15 np0005625203.localdomain systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 20 07:18:15 np0005625203.localdomain systemd[1]: Finished man-db-cache-update.service.
Feb 20 07:18:15 np0005625203.localdomain systemd[1]: run-r3574becba8e24b958d03d08515f506da.service: Deactivated successfully.
Feb 20 07:18:15 np0005625203.localdomain systemd[1]: run-rcb47204382f8419fa461f4f1122e36b5.service: Deactivated successfully.
Feb 20 07:18:15 np0005625203.localdomain dnf[23948]: Failed determining last makecache time.
Feb 20 07:18:15 np0005625203.localdomain sudo[23890]: pam_unix(sudo:session): session closed for user root
Feb 20 07:18:16 np0005625203.localdomain dnf[23948]: Red Hat Enterprise Linux 9 for x86_64 - BaseOS   39 kB/s | 4.1 kB     00:00
Feb 20 07:18:16 np0005625203.localdomain dnf[23948]: Red Hat OpenStack Platform 17.1 for RHEL 9 x86_  49 kB/s | 4.0 kB     00:00
Feb 20 07:18:16 np0005625203.localdomain dnf[23948]: Red Hat Enterprise Linux 9 for x86_64 - BaseOS   42 kB/s | 4.1 kB     00:00
Feb 20 07:18:16 np0005625203.localdomain sshd[23894]: Invalid user cx from 185.246.128.171 port 24390
Feb 20 07:18:16 np0005625203.localdomain dnf[23948]: Fast Datapath for RHEL 9 x86_64 (RPMs)           43 kB/s | 4.0 kB     00:00
Feb 20 07:18:16 np0005625203.localdomain dnf[23948]: Red Hat Enterprise Linux 9 for x86_64 - AppStre  46 kB/s | 4.5 kB     00:00
Feb 20 07:18:16 np0005625203.localdomain dnf[23948]: Red Hat Enterprise Linux 9 for x86_64 - AppStre  58 kB/s | 4.5 kB     00:00
Feb 20 07:18:17 np0005625203.localdomain dnf[23948]: Red Hat Enterprise Linux 9 for x86_64 - High Av  46 kB/s | 4.0 kB     00:00
Feb 20 07:18:17 np0005625203.localdomain dnf[23948]: Metadata cache created.
Feb 20 07:18:17 np0005625203.localdomain systemd[1]: dnf-makecache.service: Deactivated successfully.
Feb 20 07:18:17 np0005625203.localdomain systemd[1]: Finished dnf makecache.
Feb 20 07:18:17 np0005625203.localdomain systemd[1]: dnf-makecache.service: Consumed 3.110s CPU time.
Feb 20 07:18:17 np0005625203.localdomain sshd[23894]: Disconnecting invalid user cx 185.246.128.171 port 24390: Change of username or service not allowed: (cx,ssh-connection) -> (maria,ssh-connection) [preauth]
Feb 20 07:18:19 np0005625203.localdomain sshd[24674]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:18:20 np0005625203.localdomain sshd[24675]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:18:20 np0005625203.localdomain sshd[24675]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 07:18:24 np0005625203.localdomain sshd[24674]: Invalid user maria from 185.246.128.171 port 59444
Feb 20 07:18:26 np0005625203.localdomain sshd[24674]: Disconnecting invalid user maria 185.246.128.171 port 59444: Change of username or service not allowed: (maria,ssh-connection) -> (telnet,ssh-connection) [preauth]
Feb 20 07:18:29 np0005625203.localdomain sshd[24678]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:18:35 np0005625203.localdomain sshd[24678]: Invalid user telnet from 185.246.128.171 port 36136
Feb 20 07:18:35 np0005625203.localdomain sshd[24678]: Disconnecting invalid user telnet 185.246.128.171 port 36136: Change of username or service not allowed: (telnet,ssh-connection) -> (pritchard,ssh-connection) [preauth]
Feb 20 07:18:38 np0005625203.localdomain sshd[24680]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:18:41 np0005625203.localdomain sshd[24680]: Invalid user pritchard from 185.246.128.171 port 12152
Feb 20 07:18:41 np0005625203.localdomain sshd[24680]: Disconnecting invalid user pritchard 185.246.128.171 port 12152: Change of username or service not allowed: (pritchard,ssh-connection) -> (elasticsearch,ssh-connecti [preauth]
Feb 20 07:18:44 np0005625203.localdomain sshd[24682]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:18:47 np0005625203.localdomain sshd[24682]: Invalid user elasticsearch from 185.246.128.171 port 37928
Feb 20 07:18:52 np0005625203.localdomain sshd[24682]: Disconnecting invalid user elasticsearch 185.246.128.171 port 37928: Change of username or service not allowed: (elasticsearch,ssh-connection) -> (minh,ssh-connection) [preauth]
Feb 20 07:18:54 np0005625203.localdomain sshd[24684]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:18:56 np0005625203.localdomain sshd[24684]: Invalid user minh from 185.246.128.171 port 17061
Feb 20 07:18:56 np0005625203.localdomain sshd[24684]: Disconnecting invalid user minh 185.246.128.171 port 17061: Change of username or service not allowed: (minh,ssh-connection) -> (sftpuser,ssh-connection) [preauth]
Feb 20 07:19:00 np0005625203.localdomain sshd[24686]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:19:02 np0005625203.localdomain sshd[24686]: Invalid user sftpuser from 185.246.128.171 port 42105
Feb 20 07:19:05 np0005625203.localdomain sshd[24686]: Disconnecting invalid user sftpuser 185.246.128.171 port 42105: Change of username or service not allowed: (sftpuser,ssh-connection) -> (worker,ssh-connection) [preauth]
Feb 20 07:19:07 np0005625203.localdomain sshd[24688]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:19:11 np0005625203.localdomain sshd[24688]: Invalid user worker from 185.246.128.171 port 11002
Feb 20 07:19:12 np0005625203.localdomain sshd[24690]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:19:13 np0005625203.localdomain sshd[24690]: Invalid user n8n from 118.193.43.244 port 60460
Feb 20 07:19:13 np0005625203.localdomain sshd[24688]: Disconnecting invalid user worker 185.246.128.171 port 11002: Change of username or service not allowed: (worker,ssh-connection) -> (1,ssh-connection) [preauth]
Feb 20 07:19:13 np0005625203.localdomain sshd[24690]: Received disconnect from 118.193.43.244 port 60460:11: Bye Bye [preauth]
Feb 20 07:19:13 np0005625203.localdomain sshd[24690]: Disconnected from invalid user n8n 118.193.43.244 port 60460 [preauth]
Feb 20 07:19:15 np0005625203.localdomain sshd[23876]: Received disconnect from 38.102.83.114 port 51740:11: disconnected by user
Feb 20 07:19:16 np0005625203.localdomain sshd[23876]: Disconnected from user zuul 38.102.83.114 port 51740
Feb 20 07:19:16 np0005625203.localdomain sshd[23873]: pam_unix(sshd:session): session closed for user zuul
Feb 20 07:19:16 np0005625203.localdomain systemd[1]: session-12.scope: Deactivated successfully.
Feb 20 07:19:16 np0005625203.localdomain systemd[1]: session-12.scope: Consumed 4.872s CPU time.
Feb 20 07:19:16 np0005625203.localdomain systemd-logind[759]: Session 12 logged out. Waiting for processes to exit.
Feb 20 07:19:16 np0005625203.localdomain systemd-logind[759]: Removed session 12.
Feb 20 07:19:18 np0005625203.localdomain sshd[24692]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:19:21 np0005625203.localdomain sshd[24692]: Invalid user 1 from 185.246.128.171 port 56661
Feb 20 07:19:22 np0005625203.localdomain sshd[24692]: Disconnecting invalid user 1 185.246.128.171 port 56661: Change of username or service not allowed: (1,ssh-connection) -> (sol,ssh-connection) [preauth]
Feb 20 07:19:23 np0005625203.localdomain sshd[24694]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:19:28 np0005625203.localdomain sshd[24694]: Invalid user sol from 185.246.128.171 port 16899
Feb 20 07:19:28 np0005625203.localdomain sshd[24696]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:19:29 np0005625203.localdomain sshd[24696]: Invalid user n8n from 102.211.152.28 port 44264
Feb 20 07:19:29 np0005625203.localdomain sshd[24696]: Received disconnect from 102.211.152.28 port 44264:11: Bye Bye [preauth]
Feb 20 07:19:29 np0005625203.localdomain sshd[24696]: Disconnected from invalid user n8n 102.211.152.28 port 44264 [preauth]
Feb 20 07:19:33 np0005625203.localdomain sshd[24699]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:19:33 np0005625203.localdomain sshd[24699]: Invalid user admin from 192.99.169.99 port 50940
Feb 20 07:19:33 np0005625203.localdomain sshd[24699]: Received disconnect from 192.99.169.99 port 50940:11: Bye Bye [preauth]
Feb 20 07:19:33 np0005625203.localdomain sshd[24699]: Disconnected from invalid user admin 192.99.169.99 port 50940 [preauth]
Feb 20 07:19:33 np0005625203.localdomain sshd[24694]: Disconnecting invalid user sol 185.246.128.171 port 16899: Change of username or service not allowed: (sol,ssh-connection) -> (User,ssh-connection) [preauth]
Feb 20 07:19:36 np0005625203.localdomain sshd[24701]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:19:40 np0005625203.localdomain sshd[24701]: Invalid user User from 185.246.128.171 port 8891
Feb 20 07:19:45 np0005625203.localdomain sshd[24701]: Disconnecting invalid user User 185.246.128.171 port 8891: Change of username or service not allowed: (User,ssh-connection) -> (tst,ssh-connection) [preauth]
Feb 20 07:19:46 np0005625203.localdomain sshd[24704]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:19:48 np0005625203.localdomain sshd[24704]: Received disconnect from 103.171.84.20 port 35066:11: Bye Bye [preauth]
Feb 20 07:19:48 np0005625203.localdomain sshd[24704]: Disconnected from authenticating user root 103.171.84.20 port 35066 [preauth]
Feb 20 07:19:49 np0005625203.localdomain sshd[24706]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:19:51 np0005625203.localdomain sshd[24708]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:19:51 np0005625203.localdomain sshd[24708]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 07:19:52 np0005625203.localdomain sshd[24706]: Invalid user tst from 185.246.128.171 port 1257
Feb 20 07:19:53 np0005625203.localdomain sshd[24706]: Disconnecting invalid user tst 185.246.128.171 port 1257: Change of username or service not allowed: (tst,ssh-connection) -> (user11,ssh-connection) [preauth]
Feb 20 07:19:56 np0005625203.localdomain sshd[24710]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:20:00 np0005625203.localdomain sshd[24710]: Invalid user user11 from 185.246.128.171 port 31872
Feb 20 07:20:03 np0005625203.localdomain sshd[24710]: Disconnecting invalid user user11 185.246.128.171 port 31872: Change of username or service not allowed: (user11,ssh-connection) -> (adsl,ssh-connection) [preauth]
Feb 20 07:20:04 np0005625203.localdomain sshd[24712]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:20:08 np0005625203.localdomain sshd[24712]: Invalid user adsl from 185.246.128.171 port 2181
Feb 20 07:20:10 np0005625203.localdomain sshd[24712]: Disconnecting invalid user adsl 185.246.128.171 port 2181: Change of username or service not allowed: (adsl,ssh-connection) -> (john,ssh-connection) [preauth]
Feb 20 07:20:13 np0005625203.localdomain sshd[24714]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:20:16 np0005625203.localdomain sshd[24716]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:20:17 np0005625203.localdomain sshd[24716]: Received disconnect from 172.203.58.203 port 54246:11: Bye Bye [preauth]
Feb 20 07:20:17 np0005625203.localdomain sshd[24716]: Disconnected from authenticating user root 172.203.58.203 port 54246 [preauth]
Feb 20 07:20:20 np0005625203.localdomain sshd[24714]: Invalid user john from 185.246.128.171 port 38638
Feb 20 07:20:21 np0005625203.localdomain sshd[24714]: Disconnecting invalid user john 185.246.128.171 port 38638: Change of username or service not allowed: (john,ssh-connection) -> (mysql,ssh-connection) [preauth]
Feb 20 07:20:23 np0005625203.localdomain sshd[24718]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:20:27 np0005625203.localdomain sshd[24720]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:20:27 np0005625203.localdomain sshd[24720]: error: kex_exchange_identification: Connection closed by remote host
Feb 20 07:20:27 np0005625203.localdomain sshd[24720]: Connection closed by 92.118.39.72 port 42356
Feb 20 07:20:27 np0005625203.localdomain sshd[24718]: Invalid user mysql from 185.246.128.171 port 17296
Feb 20 07:20:32 np0005625203.localdomain sshd[24718]: Disconnecting invalid user mysql 185.246.128.171 port 17296: Change of username or service not allowed: (mysql,ssh-connection) -> (sshuser,ssh-connection) [preauth]
Feb 20 07:20:37 np0005625203.localdomain sshd[24721]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:20:41 np0005625203.localdomain sshd[24721]: Invalid user sshuser from 185.246.128.171 port 9387
Feb 20 07:20:42 np0005625203.localdomain sshd[24721]: Disconnecting invalid user sshuser 185.246.128.171 port 9387: Change of username or service not allowed: (sshuser,ssh-connection) -> (desliga,ssh-connection) [preauth]
Feb 20 07:20:42 np0005625203.localdomain sshd[24723]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:20:42 np0005625203.localdomain sshd[24724]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:20:43 np0005625203.localdomain sshd[24724]: Received disconnect from 144.91.127.158 port 35294:11: Bye Bye [preauth]
Feb 20 07:20:43 np0005625203.localdomain sshd[24724]: Disconnected from authenticating user root 144.91.127.158 port 35294 [preauth]
Feb 20 07:20:44 np0005625203.localdomain sshd[24723]: Invalid user desliga from 185.246.128.171 port 30543
Feb 20 07:20:45 np0005625203.localdomain sshd[24723]: Disconnecting invalid user desliga 185.246.128.171 port 30543: Change of username or service not allowed: (desliga,ssh-connection) -> (useradmin,ssh-connection) [preauth]
Feb 20 07:20:50 np0005625203.localdomain sshd[24727]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:20:51 np0005625203.localdomain sshd[24729]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:20:52 np0005625203.localdomain sshd[24729]: Invalid user httpd from 45.246.55.249 port 38128
Feb 20 07:20:52 np0005625203.localdomain sshd[24729]: Received disconnect from 45.246.55.249 port 38128:11: Bye Bye [preauth]
Feb 20 07:20:52 np0005625203.localdomain sshd[24729]: Disconnected from invalid user httpd 45.246.55.249 port 38128 [preauth]
Feb 20 07:20:54 np0005625203.localdomain sshd[24727]: Invalid user useradmin from 185.246.128.171 port 61571
Feb 20 07:20:56 np0005625203.localdomain sshd[24727]: Disconnecting invalid user useradmin 185.246.128.171 port 61571: Change of username or service not allowed: (useradmin,ssh-connection) -> (validator,ssh-connection) [preauth]
Feb 20 07:20:59 np0005625203.localdomain sshd[24731]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:21:06 np0005625203.localdomain sshd[24731]: Invalid user validator from 185.246.128.171 port 34723
Feb 20 07:21:07 np0005625203.localdomain sshd[24731]: Disconnecting invalid user validator 185.246.128.171 port 34723: Change of username or service not allowed: (validator,ssh-connection) -> (scpuser,ssh-connection) [preauth]
Feb 20 07:21:10 np0005625203.localdomain sshd[24733]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:21:13 np0005625203.localdomain sshd[24733]: Invalid user scpuser from 185.246.128.171 port 18346
Feb 20 07:21:14 np0005625203.localdomain sshd[24735]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:21:15 np0005625203.localdomain sshd[24735]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 07:21:15 np0005625203.localdomain sshd[24733]: Disconnecting invalid user scpuser 185.246.128.171 port 18346: Change of username or service not allowed: (scpuser,ssh-connection) -> (syncthing,ssh-connection) [preauth]
Feb 20 07:21:15 np0005625203.localdomain sshd[24737]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:21:16 np0005625203.localdomain sshd[24737]: Invalid user xtest from 143.198.161.12 port 59450
Feb 20 07:21:16 np0005625203.localdomain sshd[24737]: Received disconnect from 143.198.161.12 port 59450:11: Bye Bye [preauth]
Feb 20 07:21:16 np0005625203.localdomain sshd[24737]: Disconnected from invalid user xtest 143.198.161.12 port 59450 [preauth]
Feb 20 07:21:18 np0005625203.localdomain sshd[24739]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:21:21 np0005625203.localdomain sshd[24739]: Invalid user syncthing from 185.246.128.171 port 50321
Feb 20 07:21:23 np0005625203.localdomain sshd[24739]: Disconnecting invalid user syncthing 185.246.128.171 port 50321: Change of username or service not allowed: (syncthing,ssh-connection) -> (isaac,ssh-connection) [preauth]
Feb 20 07:21:25 np0005625203.localdomain sshd[24741]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:21:30 np0005625203.localdomain sshd[24743]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:21:31 np0005625203.localdomain sshd[24743]: Invalid user admin from 170.254.229.191 port 46782
Feb 20 07:21:31 np0005625203.localdomain sshd[24743]: Received disconnect from 170.254.229.191 port 46782:11: Bye Bye [preauth]
Feb 20 07:21:31 np0005625203.localdomain sshd[24743]: Disconnected from invalid user admin 170.254.229.191 port 46782 [preauth]
Feb 20 07:21:31 np0005625203.localdomain sshd[24741]: Invalid user isaac from 185.246.128.171 port 12821
Feb 20 07:21:32 np0005625203.localdomain sshd[24741]: Disconnecting invalid user isaac 185.246.128.171 port 12821: Change of username or service not allowed: (isaac,ssh-connection) -> (ftpadmin,ssh-connection) [preauth]
Feb 20 07:21:35 np0005625203.localdomain sshd[24745]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:21:37 np0005625203.localdomain sshd[24745]: Invalid user ftpadmin from 185.246.128.171 port 50880
Feb 20 07:21:37 np0005625203.localdomain sshd[24745]: Disconnecting invalid user ftpadmin 185.246.128.171 port 50880: Change of username or service not allowed: (ftpadmin,ssh-connection) -> (hacluster,ssh-connection) [preauth]
Feb 20 07:21:39 np0005625203.localdomain sshd[24747]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:21:43 np0005625203.localdomain sshd[24747]: Invalid user hacluster from 185.246.128.171 port 5455
Feb 20 07:21:44 np0005625203.localdomain sshd[24749]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:21:45 np0005625203.localdomain sshd[24747]: Disconnecting invalid user hacluster 185.246.128.171 port 5455: Change of username or service not allowed: (hacluster,ssh-connection) -> (support1,ssh-connection) [preauth]
Feb 20 07:21:45 np0005625203.localdomain sshd[24749]: Received disconnect from 40.81.244.142 port 39632:11: Bye Bye [preauth]
Feb 20 07:21:45 np0005625203.localdomain sshd[24749]: Disconnected from authenticating user root 40.81.244.142 port 39632 [preauth]
Feb 20 07:21:47 np0005625203.localdomain sshd[24751]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:21:50 np0005625203.localdomain sshd[24751]: Invalid user support1 from 185.246.128.171 port 36374
Feb 20 07:21:52 np0005625203.localdomain sshd[24751]: Disconnecting invalid user support1 185.246.128.171 port 36374: Change of username or service not allowed: (support1,ssh-connection) -> (cq,ssh-connection) [preauth]
Feb 20 07:21:56 np0005625203.localdomain sshd[24753]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:21:59 np0005625203.localdomain sshd[24753]: Invalid user cq from 185.246.128.171 port 8756
Feb 20 07:22:01 np0005625203.localdomain sshd[24753]: Disconnecting invalid user cq 185.246.128.171 port 8756: Change of username or service not allowed: (cq,ssh-connection) -> (asus,ssh-connection) [preauth]
Feb 20 07:22:04 np0005625203.localdomain sshd[24755]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:22:09 np0005625203.localdomain sshd[24755]: Invalid user asus from 185.246.128.171 port 43207
Feb 20 07:22:09 np0005625203.localdomain sshd[24755]: Disconnecting invalid user asus 185.246.128.171 port 43207: Change of username or service not allowed: (asus,ssh-connection) -> (amits,ssh-connection) [preauth]
Feb 20 07:22:12 np0005625203.localdomain sshd[24757]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:22:18 np0005625203.localdomain sshd[24757]: Invalid user amits from 185.246.128.171 port 9394
Feb 20 07:22:18 np0005625203.localdomain sshd[24757]: Disconnecting invalid user amits 185.246.128.171 port 9394: Change of username or service not allowed: (amits,ssh-connection) -> (gmod,ssh-connection) [preauth]
Feb 20 07:22:19 np0005625203.localdomain sshd[24759]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:22:21 np0005625203.localdomain sshd[24759]: Invalid user gmod from 185.246.128.171 port 37233
Feb 20 07:22:21 np0005625203.localdomain sshd[24759]: Disconnecting invalid user gmod 185.246.128.171 port 37233: Change of username or service not allowed: (gmod,ssh-connection) -> (nobody,ssh-connection) [preauth]
Feb 20 07:22:23 np0005625203.localdomain sshd[24761]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:22:27 np0005625203.localdomain sshd[24763]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:22:27 np0005625203.localdomain sshd[24763]: Invalid user postgres from 192.99.169.99 port 50976
Feb 20 07:22:27 np0005625203.localdomain sshd[24763]: Received disconnect from 192.99.169.99 port 50976:11: Bye Bye [preauth]
Feb 20 07:22:27 np0005625203.localdomain sshd[24763]: Disconnected from invalid user postgres 192.99.169.99 port 50976 [preauth]
Feb 20 07:22:31 np0005625203.localdomain sshd[24765]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:22:31 np0005625203.localdomain sshd[24761]: Disconnecting authenticating user nobody 185.246.128.171 port 54659: Change of username or service not allowed: (nobody,ssh-connection) -> (ftp_inst,ssh-connection) [preauth]
Feb 20 07:22:32 np0005625203.localdomain sshd[24767]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:22:33 np0005625203.localdomain sshd[24767]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 07:22:33 np0005625203.localdomain sshd[24765]: Received disconnect from 118.193.43.244 port 46322:11: Bye Bye [preauth]
Feb 20 07:22:33 np0005625203.localdomain sshd[24765]: Disconnected from authenticating user root 118.193.43.244 port 46322 [preauth]
Feb 20 07:22:33 np0005625203.localdomain sshd[24769]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:22:37 np0005625203.localdomain sshd[24769]: Invalid user ftp_inst from 185.246.128.171 port 27718
Feb 20 07:22:40 np0005625203.localdomain sshd[24769]: Disconnecting invalid user ftp_inst 185.246.128.171 port 27718: Change of username or service not allowed: (ftp_inst,ssh-connection) -> (nc,ssh-connection) [preauth]
Feb 20 07:22:46 np0005625203.localdomain sshd[24771]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:22:47 np0005625203.localdomain sshd[24771]: Invalid user nc from 185.246.128.171 port 18560
Feb 20 07:22:48 np0005625203.localdomain sshd[24771]: Disconnecting invalid user nc 185.246.128.171 port 18560: Change of username or service not allowed: (nc,ssh-connection) -> (Lucas,ssh-connection) [preauth]
Feb 20 07:22:49 np0005625203.localdomain sshd[24773]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:22:53 np0005625203.localdomain sshd[24773]: Invalid user Lucas from 185.246.128.171 port 29297
Feb 20 07:22:54 np0005625203.localdomain sshd[24773]: Disconnecting invalid user Lucas 185.246.128.171 port 29297: Change of username or service not allowed: (Lucas,ssh-connection) -> (mama,ssh-connection) [preauth]
Feb 20 07:22:56 np0005625203.localdomain sshd[24775]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:23:00 np0005625203.localdomain sshd[24775]: Invalid user mama from 185.246.128.171 port 57465
Feb 20 07:23:02 np0005625203.localdomain sshd[24775]: Disconnecting invalid user mama 185.246.128.171 port 57465: Change of username or service not allowed: (mama,ssh-connection) -> (trx,ssh-connection) [preauth]
Feb 20 07:23:05 np0005625203.localdomain sshd[24777]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:23:09 np0005625203.localdomain sshd[24777]: Invalid user trx from 185.246.128.171 port 31561
Feb 20 07:23:10 np0005625203.localdomain sshd[24777]: Disconnecting invalid user trx 185.246.128.171 port 31561: Change of username or service not allowed: (trx,ssh-connection) -> (user07,ssh-connection) [preauth]
Feb 20 07:23:13 np0005625203.localdomain sshd[24779]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:23:17 np0005625203.localdomain sshd[24779]: Invalid user user07 from 185.246.128.171 port 61069
Feb 20 07:23:17 np0005625203.localdomain sshd[24779]: Disconnecting invalid user user07 185.246.128.171 port 61069: Change of username or service not allowed: (user07,ssh-connection) -> (lotus,ssh-connection) [preauth]
Feb 20 07:23:20 np0005625203.localdomain sshd[24781]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:23:24 np0005625203.localdomain sshd[24781]: Invalid user lotus from 185.246.128.171 port 27937
Feb 20 07:23:25 np0005625203.localdomain sshd[24781]: Disconnecting invalid user lotus 185.246.128.171 port 27937: Change of username or service not allowed: (lotus,ssh-connection) -> (hive,ssh-connection) [preauth]
Feb 20 07:23:27 np0005625203.localdomain sshd[24783]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:23:31 np0005625203.localdomain sshd[24783]: Invalid user hive from 185.246.128.171 port 58188
Feb 20 07:23:32 np0005625203.localdomain sshd[24783]: Disconnecting invalid user hive 185.246.128.171 port 58188: Change of username or service not allowed: (hive,ssh-connection) -> (liuj,ssh-connection) [preauth]
Feb 20 07:23:34 np0005625203.localdomain sshd[24785]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:23:36 np0005625203.localdomain sshd[24787]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:23:36 np0005625203.localdomain sshd[24785]: Invalid user liuj from 185.246.128.171 port 21140
Feb 20 07:23:36 np0005625203.localdomain sshd[24787]: Invalid user solana from 92.118.39.72 port 47164
Feb 20 07:23:36 np0005625203.localdomain sshd[24787]: Connection closed by invalid user solana 92.118.39.72 port 47164 [preauth]
Feb 20 07:23:37 np0005625203.localdomain sshd[24785]: Disconnecting invalid user liuj 185.246.128.171 port 21140: Change of username or service not allowed: (liuj,ssh-connection) -> (debian,ssh-connection) [preauth]
Feb 20 07:23:38 np0005625203.localdomain sshd[24789]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:23:41 np0005625203.localdomain sshd[24789]: Invalid user debian from 185.246.128.171 port 40414
Feb 20 07:23:45 np0005625203.localdomain sshd[24791]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:23:45 np0005625203.localdomain sshd[24791]: Invalid user admin from 172.203.58.203 port 35542
Feb 20 07:23:45 np0005625203.localdomain sshd[24791]: Received disconnect from 172.203.58.203 port 35542:11: Bye Bye [preauth]
Feb 20 07:23:45 np0005625203.localdomain sshd[24791]: Disconnected from invalid user admin 172.203.58.203 port 35542 [preauth]
Feb 20 07:23:48 np0005625203.localdomain sshd[24789]: error: maximum authentication attempts exceeded for invalid user debian from 185.246.128.171 port 40414 ssh2 [preauth]
Feb 20 07:23:48 np0005625203.localdomain sshd[24789]: Disconnecting invalid user debian 185.246.128.171 port 40414: Too many authentication failures [preauth]
Feb 20 07:23:50 np0005625203.localdomain sshd[24793]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:23:50 np0005625203.localdomain sshd[24793]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 07:23:54 np0005625203.localdomain sshd[24795]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:23:57 np0005625203.localdomain sshd[24795]: Invalid user debian from 185.246.128.171 port 41379
Feb 20 07:23:59 np0005625203.localdomain sshd[24795]: Disconnecting invalid user debian 185.246.128.171 port 41379: Change of username or service not allowed: (debian,ssh-connection) -> (userftp,ssh-connection) [preauth]
Feb 20 07:24:01 np0005625203.localdomain sshd[24797]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:24:02 np0005625203.localdomain sshd[24798]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:24:03 np0005625203.localdomain sshd[24798]: Invalid user student from 123.204.132.127 port 58742
Feb 20 07:24:03 np0005625203.localdomain sshd[24798]: Received disconnect from 123.204.132.127 port 58742:11: Bye Bye [preauth]
Feb 20 07:24:03 np0005625203.localdomain sshd[24798]: Disconnected from invalid user student 123.204.132.127 port 58742 [preauth]
Feb 20 07:24:04 np0005625203.localdomain sshd[24797]: Invalid user userftp from 185.246.128.171 port 6683
Feb 20 07:24:07 np0005625203.localdomain sshd[24797]: Disconnecting invalid user userftp 185.246.128.171 port 6683: Change of username or service not allowed: (userftp,ssh-connection) -> (samuel,ssh-connection) [preauth]
Feb 20 07:24:09 np0005625203.localdomain sshd[24801]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:24:11 np0005625203.localdomain sshd[24801]: Invalid user samuel from 185.246.128.171 port 40141
Feb 20 07:24:13 np0005625203.localdomain sshd[24801]: Disconnecting invalid user samuel 185.246.128.171 port 40141: Change of username or service not allowed: (samuel,ssh-connection) -> (123456,ssh-connection) [preauth]
Feb 20 07:24:15 np0005625203.localdomain sshd[24803]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:24:15 np0005625203.localdomain sshd[24805]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:24:16 np0005625203.localdomain sshd[24803]: Invalid user gts from 144.91.127.158 port 53294
Feb 20 07:24:16 np0005625203.localdomain sshd[24803]: Received disconnect from 144.91.127.158 port 53294:11: Bye Bye [preauth]
Feb 20 07:24:16 np0005625203.localdomain sshd[24803]: Disconnected from invalid user gts 144.91.127.158 port 53294 [preauth]
Feb 20 07:24:19 np0005625203.localdomain sshd[24805]: Invalid user 123456 from 185.246.128.171 port 3601
Feb 20 07:24:21 np0005625203.localdomain sshd[24805]: Disconnecting invalid user 123456 185.246.128.171 port 3601: Change of username or service not allowed: (123456,ssh-connection) -> (esroot,ssh-connection) [preauth]
Feb 20 07:24:21 np0005625203.localdomain sshd[24807]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:24:22 np0005625203.localdomain sshd[24807]: Invalid user systemd from 45.246.55.249 port 38440
Feb 20 07:24:23 np0005625203.localdomain sshd[24807]: Received disconnect from 45.246.55.249 port 38440:11: Bye Bye [preauth]
Feb 20 07:24:23 np0005625203.localdomain sshd[24807]: Disconnected from invalid user systemd 45.246.55.249 port 38440 [preauth]
Feb 20 07:24:24 np0005625203.localdomain sshd[24809]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:24:26 np0005625203.localdomain sshd[24809]: Invalid user esroot from 185.246.128.171 port 40403
Feb 20 07:24:26 np0005625203.localdomain sshd[24809]: Disconnecting invalid user esroot 185.246.128.171 port 40403: Change of username or service not allowed: (esroot,ssh-connection) -> (erp,ssh-connection) [preauth]
Feb 20 07:24:28 np0005625203.localdomain sshd[24811]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:24:32 np0005625203.localdomain sshd[24811]: Invalid user erp from 185.246.128.171 port 57912
Feb 20 07:24:33 np0005625203.localdomain sshd[24811]: Disconnecting invalid user erp 185.246.128.171 port 57912: Change of username or service not allowed: (erp,ssh-connection) -> (gns3,ssh-connection) [preauth]
Feb 20 07:24:34 np0005625203.localdomain sshd[24813]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:24:38 np0005625203.localdomain sshd[24813]: Invalid user gns3 from 185.246.128.171 port 21853
Feb 20 07:24:38 np0005625203.localdomain sshd[24813]: Disconnecting invalid user gns3 185.246.128.171 port 21853: Change of username or service not allowed: (gns3,ssh-connection) -> (frontend,ssh-connection) [preauth]
Feb 20 07:24:40 np0005625203.localdomain sshd[24815]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:24:45 np0005625203.localdomain sshd[24815]: Invalid user frontend from 185.246.128.171 port 46910
Feb 20 07:24:47 np0005625203.localdomain sshd[24815]: Disconnecting invalid user frontend 185.246.128.171 port 46910: Change of username or service not allowed: (frontend,ssh-connection) -> (mika,ssh-connection) [preauth]
Feb 20 07:24:51 np0005625203.localdomain sshd[24817]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:24:54 np0005625203.localdomain sshd[24817]: Invalid user mika from 185.246.128.171 port 28702
Feb 20 07:24:54 np0005625203.localdomain sshd[24817]: Disconnecting invalid user mika 185.246.128.171 port 28702: Change of username or service not allowed: (mika,ssh-connection) -> (portal,ssh-connection) [preauth]
Feb 20 07:24:56 np0005625203.localdomain sshd[24819]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:25:02 np0005625203.localdomain sshd[24819]: Invalid user portal from 185.246.128.171 port 52409
Feb 20 07:25:03 np0005625203.localdomain sshd[24822]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:25:03 np0005625203.localdomain sshd[24822]: Invalid user dixi from 170.254.229.191 port 58210
Feb 20 07:25:03 np0005625203.localdomain sshd[24822]: Received disconnect from 170.254.229.191 port 58210:11: Bye Bye [preauth]
Feb 20 07:25:03 np0005625203.localdomain sshd[24822]: Disconnected from invalid user dixi 170.254.229.191 port 58210 [preauth]
Feb 20 07:25:04 np0005625203.localdomain sshd[24819]: Disconnecting invalid user portal 185.246.128.171 port 52409: Change of username or service not allowed: (portal,ssh-connection) -> (fastuser,ssh-connection) [preauth]
Feb 20 07:25:05 np0005625203.localdomain sshd[24824]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:25:06 np0005625203.localdomain sshd[24824]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 07:25:09 np0005625203.localdomain sshd[24826]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:25:09 np0005625203.localdomain sshd[24827]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:25:10 np0005625203.localdomain sshd[24827]: Invalid user sshadmin from 189.190.2.14 port 42414
Feb 20 07:25:10 np0005625203.localdomain sshd[24827]: Received disconnect from 189.190.2.14 port 42414:11: Bye Bye [preauth]
Feb 20 07:25:10 np0005625203.localdomain sshd[24827]: Disconnected from invalid user sshadmin 189.190.2.14 port 42414 [preauth]
Feb 20 07:25:13 np0005625203.localdomain sshd[24826]: Invalid user fastuser from 185.246.128.171 port 41963
Feb 20 07:25:17 np0005625203.localdomain sshd[24826]: Disconnecting invalid user fastuser 185.246.128.171 port 41963: Change of username or service not allowed: (fastuser,ssh-connection) -> (uftp,ssh-connection) [preauth]
Feb 20 07:25:18 np0005625203.localdomain sshd[24830]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:25:21 np0005625203.localdomain sshd[24832]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:25:21 np0005625203.localdomain sshd[24832]: Invalid user claude from 192.99.169.99 port 53592
Feb 20 07:25:21 np0005625203.localdomain sshd[24832]: Received disconnect from 192.99.169.99 port 53592:11: Bye Bye [preauth]
Feb 20 07:25:21 np0005625203.localdomain sshd[24832]: Disconnected from invalid user claude 192.99.169.99 port 53592 [preauth]
Feb 20 07:25:22 np0005625203.localdomain sshd[24830]: Invalid user uftp from 185.246.128.171 port 20078
Feb 20 07:25:23 np0005625203.localdomain sshd[24830]: Disconnecting invalid user uftp 185.246.128.171 port 20078: Change of username or service not allowed: (uftp,ssh-connection) -> (system,ssh-connection) [preauth]
Feb 20 07:25:25 np0005625203.localdomain sshd[24834]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:25:28 np0005625203.localdomain sshd[24834]: Invalid user system from 185.246.128.171 port 48320
Feb 20 07:25:36 np0005625203.localdomain sshd[24834]: error: maximum authentication attempts exceeded for invalid user system from 185.246.128.171 port 48320 ssh2 [preauth]
Feb 20 07:25:36 np0005625203.localdomain sshd[24834]: Disconnecting invalid user system 185.246.128.171 port 48320: Too many authentication failures [preauth]
Feb 20 07:25:38 np0005625203.localdomain sshd[24836]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:25:42 np0005625203.localdomain sshd[24836]: Invalid user system from 185.246.128.171 port 41965
Feb 20 07:25:43 np0005625203.localdomain sshd[24838]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:25:43 np0005625203.localdomain sshd[24838]: Invalid user sol from 92.118.39.72 port 59740
Feb 20 07:25:43 np0005625203.localdomain sshd[24838]: Connection closed by invalid user sol 92.118.39.72 port 59740 [preauth]
Feb 20 07:25:45 np0005625203.localdomain sshd[24836]: Disconnecting invalid user system 185.246.128.171 port 41965: Change of username or service not allowed: (system,ssh-connection) -> (marek,ssh-connection) [preauth]
Feb 20 07:25:48 np0005625203.localdomain sshd[24840]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:25:51 np0005625203.localdomain sshd[24842]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:25:52 np0005625203.localdomain sshd[24842]: Received disconnect from 118.193.43.244 port 55502:11: Bye Bye [preauth]
Feb 20 07:25:52 np0005625203.localdomain sshd[24842]: Disconnected from authenticating user root 118.193.43.244 port 55502 [preauth]
Feb 20 07:25:55 np0005625203.localdomain sshd[24840]: Invalid user marek from 185.246.128.171 port 16901
Feb 20 07:25:55 np0005625203.localdomain sshd[24840]: Disconnecting invalid user marek 185.246.128.171 port 16901: Change of username or service not allowed: (marek,ssh-connection) -> (volumio,ssh-connection) [preauth]
Feb 20 07:25:58 np0005625203.localdomain sshd[24844]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:26:00 np0005625203.localdomain sshd[24844]: Invalid user volumio from 185.246.128.171 port 62141
Feb 20 07:26:00 np0005625203.localdomain sshd[24844]: Disconnecting invalid user volumio 185.246.128.171 port 62141: Change of username or service not allowed: (volumio,ssh-connection) -> (apache,ssh-connection) [preauth]
Feb 20 07:26:01 np0005625203.localdomain sshd[24846]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:26:03 np0005625203.localdomain sshd[24848]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:26:03 np0005625203.localdomain sshd[24848]: Invalid user iksi from 143.198.161.12 port 58982
Feb 20 07:26:03 np0005625203.localdomain sshd[24848]: Received disconnect from 143.198.161.12 port 58982:11: Bye Bye [preauth]
Feb 20 07:26:03 np0005625203.localdomain sshd[24848]: Disconnected from invalid user iksi 143.198.161.12 port 58982 [preauth]
Feb 20 07:26:03 np0005625203.localdomain sshd[24846]: Invalid user apache from 185.246.128.171 port 13442
Feb 20 07:26:04 np0005625203.localdomain sshd[24846]: Disconnecting invalid user apache 185.246.128.171 port 13442: Change of username or service not allowed: (apache,ssh-connection) -> (lixiang,ssh-connection) [preauth]
Feb 20 07:26:06 np0005625203.localdomain sshd[24850]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:26:07 np0005625203.localdomain sshd[24850]: Invalid user lixiang from 185.246.128.171 port 33825
Feb 20 07:26:08 np0005625203.localdomain sshd[24850]: Disconnecting invalid user lixiang 185.246.128.171 port 33825: Change of username or service not allowed: (lixiang,ssh-connection) -> (123,ssh-connection) [preauth]
Feb 20 07:26:08 np0005625203.localdomain sshd[24852]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:26:09 np0005625203.localdomain sshd[24852]: Invalid user 123 from 185.246.128.171 port 44632
Feb 20 07:26:10 np0005625203.localdomain sshd[24852]: Disconnecting invalid user 123 185.246.128.171 port 44632: Change of username or service not allowed: (123,ssh-connection) -> (instrument,ssh-connection) [preauth]
Feb 20 07:26:13 np0005625203.localdomain sshd[24854]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:26:18 np0005625203.localdomain sshd[24854]: Invalid user instrument from 185.246.128.171 port 2590
Feb 20 07:26:18 np0005625203.localdomain sshd[24857]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:26:19 np0005625203.localdomain sshd[24857]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 07:26:20 np0005625203.localdomain sshd[24854]: Disconnecting invalid user instrument 185.246.128.171 port 2590: Change of username or service not allowed: (instrument,ssh-connection) -> (adm,ssh-connection) [preauth]
Feb 20 07:26:21 np0005625203.localdomain sshd[24859]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:26:24 np0005625203.localdomain sshd[24859]: Disconnecting authenticating user adm 185.246.128.171 port 39070: Change of username or service not allowed: (adm,ssh-connection) -> (scan,ssh-connection) [preauth]
Feb 20 07:26:26 np0005625203.localdomain sshd[24861]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:26:27 np0005625203.localdomain sshd[24863]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:26:28 np0005625203.localdomain sshd[24863]: Received disconnect from 187.87.206.21 port 34574:11: Bye Bye [preauth]
Feb 20 07:26:28 np0005625203.localdomain sshd[24863]: Disconnected from authenticating user root 187.87.206.21 port 34574 [preauth]
Feb 20 07:26:30 np0005625203.localdomain sshd[24861]: Invalid user scan from 185.246.128.171 port 54944
Feb 20 07:26:30 np0005625203.localdomain sshd[24861]: Disconnecting invalid user scan 185.246.128.171 port 54944: Change of username or service not allowed: (scan,ssh-connection) -> (iman,ssh-connection) [preauth]
Feb 20 07:26:33 np0005625203.localdomain sshd[24865]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:26:37 np0005625203.localdomain sshd[24865]: Invalid user iman from 185.246.128.171 port 25544
Feb 20 07:26:37 np0005625203.localdomain sshd[24865]: Disconnecting invalid user iman 185.246.128.171 port 25544: Change of username or service not allowed: (iman,ssh-connection) -> (teamspeak,ssh-connection) [preauth]
Feb 20 07:26:39 np0005625203.localdomain sshd[24867]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:26:42 np0005625203.localdomain sshd[24867]: Invalid user teamspeak from 185.246.128.171 port 55261
Feb 20 07:26:44 np0005625203.localdomain sshd[24867]: Disconnecting invalid user teamspeak 185.246.128.171 port 55261: Change of username or service not allowed: (teamspeak,ssh-connection) -> (report,ssh-connection) [preauth]
Feb 20 07:26:46 np0005625203.localdomain sshd[24869]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:26:49 np0005625203.localdomain sshd[24869]: Invalid user report from 185.246.128.171 port 20617
Feb 20 07:26:50 np0005625203.localdomain sshd[24869]: Disconnecting invalid user report 185.246.128.171 port 20617: Change of username or service not allowed: (report,ssh-connection) -> (debianuser,ssh-connection) [preauth]
Feb 20 07:26:53 np0005625203.localdomain sshd[24871]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:26:57 np0005625203.localdomain sshd[24871]: Invalid user debianuser from 185.246.128.171 port 52064
Feb 20 07:26:58 np0005625203.localdomain sshd[24871]: Disconnecting invalid user debianuser 185.246.128.171 port 52064: Change of username or service not allowed: (debianuser,ssh-connection) -> (terraform,ssh-connection) [preauth]
Feb 20 07:27:00 np0005625203.localdomain sshd[24873]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:27:04 np0005625203.localdomain sshd[24873]: Invalid user terraform from 185.246.128.171 port 19286
Feb 20 07:27:05 np0005625203.localdomain sshd[24873]: Disconnecting invalid user terraform 185.246.128.171 port 19286: Change of username or service not allowed: (terraform,ssh-connection) -> (deploy,ssh-connection) [preauth]
Feb 20 07:27:07 np0005625203.localdomain sshd[24875]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:27:07 np0005625203.localdomain sshd[24875]: Invalid user x from 172.203.58.203 port 45038
Feb 20 07:27:07 np0005625203.localdomain sshd[24875]: Received disconnect from 172.203.58.203 port 45038:11: Bye Bye [preauth]
Feb 20 07:27:07 np0005625203.localdomain sshd[24875]: Disconnected from invalid user x 172.203.58.203 port 45038 [preauth]
Feb 20 07:27:09 np0005625203.localdomain sshd[24877]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:27:12 np0005625203.localdomain sshd[24877]: Invalid user deploy from 185.246.128.171 port 56924
Feb 20 07:27:15 np0005625203.localdomain sshd[24877]: Disconnecting invalid user deploy 185.246.128.171 port 56924: Change of username or service not allowed: (deploy,ssh-connection) -> (tt,ssh-connection) [preauth]
Feb 20 07:27:18 np0005625203.localdomain sshd[24879]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:27:22 np0005625203.localdomain sshd[24879]: Invalid user tt from 185.246.128.171 port 33456
Feb 20 07:27:22 np0005625203.localdomain sshd[24879]: Disconnecting invalid user tt 185.246.128.171 port 33456: Change of username or service not allowed: (tt,ssh-connection) -> (fa,ssh-connection) [preauth]
Feb 20 07:27:26 np0005625203.localdomain sshd[24881]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:27:30 np0005625203.localdomain sshd[24881]: Invalid user fa from 185.246.128.171 port 8412
Feb 20 07:27:31 np0005625203.localdomain sshd[24881]: Disconnecting invalid user fa 185.246.128.171 port 8412: Change of username or service not allowed: (fa,ssh-connection) -> (denis,ssh-connection) [preauth]
Feb 20 07:27:33 np0005625203.localdomain sshd[24883]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:27:33 np0005625203.localdomain sshd[24885]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:27:33 np0005625203.localdomain sshd[24885]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 07:27:35 np0005625203.localdomain sshd[24883]: Invalid user denis from 185.246.128.171 port 38340
Feb 20 07:27:36 np0005625203.localdomain sshd[24883]: Disconnecting invalid user denis 185.246.128.171 port 38340: Change of username or service not allowed: (denis,ssh-connection) -> (nsroot,ssh-connection) [preauth]
Feb 20 07:27:38 np0005625203.localdomain sshd[24887]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:27:39 np0005625203.localdomain sshd[24889]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:27:39 np0005625203.localdomain sshd[24891]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:27:40 np0005625203.localdomain sshd[24887]: Invalid user builduser from 40.81.244.142 port 55864
Feb 20 07:27:40 np0005625203.localdomain sshd[24887]: Received disconnect from 40.81.244.142 port 55864:11: Bye Bye [preauth]
Feb 20 07:27:40 np0005625203.localdomain sshd[24887]: Disconnected from invalid user builduser 40.81.244.142 port 55864 [preauth]
Feb 20 07:27:40 np0005625203.localdomain sshd[24891]: Received disconnect from 144.91.127.158 port 57086:11: Bye Bye [preauth]
Feb 20 07:27:40 np0005625203.localdomain sshd[24891]: Disconnected from authenticating user root 144.91.127.158 port 57086 [preauth]
Feb 20 07:27:42 np0005625203.localdomain sshd[24889]: Invalid user nsroot from 185.246.128.171 port 63930
Feb 20 07:27:43 np0005625203.localdomain sshd[24889]: Disconnecting invalid user nsroot 185.246.128.171 port 63930: Change of username or service not allowed: (nsroot,ssh-connection) -> (jay,ssh-connection) [preauth]
Feb 20 07:27:43 np0005625203.localdomain sshd[24893]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:27:44 np0005625203.localdomain sshd[24893]: Invalid user actions from 45.246.55.249 port 39202
Feb 20 07:27:45 np0005625203.localdomain sshd[24893]: Received disconnect from 45.246.55.249 port 39202:11: Bye Bye [preauth]
Feb 20 07:27:45 np0005625203.localdomain sshd[24893]: Disconnected from invalid user actions 45.246.55.249 port 39202 [preauth]
Feb 20 07:27:45 np0005625203.localdomain sshd[24895]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:27:47 np0005625203.localdomain sshd[24895]: Invalid user jay from 185.246.128.171 port 28608
Feb 20 07:27:48 np0005625203.localdomain sshd[24897]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:27:49 np0005625203.localdomain sshd[24897]: Invalid user sol from 92.118.39.72 port 44062
Feb 20 07:27:49 np0005625203.localdomain sshd[24897]: Connection closed by invalid user sol 92.118.39.72 port 44062 [preauth]
Feb 20 07:27:50 np0005625203.localdomain sshd[24895]: Disconnecting invalid user jay 185.246.128.171 port 28608: Change of username or service not allowed: (jay,ssh-connection) -> (andre,ssh-connection) [preauth]
Feb 20 07:27:53 np0005625203.localdomain sshd[24899]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:27:54 np0005625203.localdomain sshd[24899]: Invalid user andre from 185.246.128.171 port 61380
Feb 20 07:27:55 np0005625203.localdomain sshd[24899]: Disconnecting invalid user andre 185.246.128.171 port 61380: Change of username or service not allowed: (andre,ssh-connection) -> (solr,ssh-connection) [preauth]
Feb 20 07:27:56 np0005625203.localdomain sshd[24901]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:28:00 np0005625203.localdomain sshd[24901]: Invalid user solr from 185.246.128.171 port 13717
Feb 20 07:28:02 np0005625203.localdomain sshd[24901]: Disconnecting invalid user solr 185.246.128.171 port 13717: Change of username or service not allowed: (solr,ssh-connection) -> (shutdown,ssh-connection) [preauth]
Feb 20 07:28:04 np0005625203.localdomain sshd[24903]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:28:09 np0005625203.localdomain sshd[24903]: Disconnecting authenticating user shutdown 185.246.128.171 port 48688: Change of username or service not allowed: (shutdown,ssh-connection) -> (vagrant,ssh-connection) [preauth]
Feb 20 07:28:13 np0005625203.localdomain sshd[24905]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:28:17 np0005625203.localdomain sshd[24905]: Invalid user vagrant from 185.246.128.171 port 23728
Feb 20 07:28:18 np0005625203.localdomain sshd[24905]: Disconnecting invalid user vagrant 185.246.128.171 port 23728: Change of username or service not allowed: (vagrant,ssh-connection) -> (tunnel,ssh-connection) [preauth]
Feb 20 07:28:22 np0005625203.localdomain sshd[24907]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:28:25 np0005625203.localdomain sshd[24909]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:28:26 np0005625203.localdomain sshd[24909]: Invalid user x from 170.254.229.191 port 36836
Feb 20 07:28:26 np0005625203.localdomain sshd[24909]: Received disconnect from 170.254.229.191 port 36836:11: Bye Bye [preauth]
Feb 20 07:28:26 np0005625203.localdomain sshd[24909]: Disconnected from invalid user x 170.254.229.191 port 36836 [preauth]
Feb 20 07:28:26 np0005625203.localdomain sshd[24907]: Invalid user tunnel from 185.246.128.171 port 64006
Feb 20 07:28:28 np0005625203.localdomain sshd[24907]: Disconnecting invalid user tunnel 185.246.128.171 port 64006: Change of username or service not allowed: (tunnel,ssh-connection) -> (musicbot,ssh-connection) [preauth]
Feb 20 07:28:30 np0005625203.localdomain sshd[24911]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:28:32 np0005625203.localdomain sshd[24911]: Invalid user musicbot from 185.246.128.171 port 38597
Feb 20 07:28:33 np0005625203.localdomain sshd[24911]: Disconnecting invalid user musicbot 185.246.128.171 port 38597: Change of username or service not allowed: (musicbot,ssh-connection) -> (thomas,ssh-connection) [preauth]
Feb 20 07:28:38 np0005625203.localdomain sshd[24913]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:28:41 np0005625203.localdomain sshd[24913]: Invalid user thomas from 185.246.128.171 port 8931
Feb 20 07:28:43 np0005625203.localdomain sshd[24913]: Disconnecting invalid user thomas 185.246.128.171 port 8931: Change of username or service not allowed: (thomas,ssh-connection) -> (ADMIN,ssh-connection) [preauth]
Feb 20 07:28:47 np0005625203.localdomain sshd[24915]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:28:47 np0005625203.localdomain sshd[24916]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:28:47 np0005625203.localdomain sshd[24915]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 07:28:50 np0005625203.localdomain sshd[24916]: Invalid user ADMIN from 185.246.128.171 port 48366
Feb 20 07:28:54 np0005625203.localdomain sshd[24916]: Disconnecting invalid user ADMIN 185.246.128.171 port 48366: Change of username or service not allowed: (ADMIN,ssh-connection) -> (publicuser,ssh-connection) [preauth]
Feb 20 07:28:55 np0005625203.localdomain sshd[24919]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:28:58 np0005625203.localdomain sshd[24919]: Invalid user publicuser from 185.246.128.171 port 19060
Feb 20 07:28:59 np0005625203.localdomain sshd[24919]: Disconnecting invalid user publicuser 185.246.128.171 port 19060: Change of username or service not allowed: (publicuser,ssh-connection) -> (es1,ssh-connection) [preauth]
Feb 20 07:29:00 np0005625203.localdomain sshd[24921]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:29:02 np0005625203.localdomain sshd[24921]: Invalid user es1 from 185.246.128.171 port 39509
Feb 20 07:29:02 np0005625203.localdomain sshd[24921]: Disconnecting invalid user es1 185.246.128.171 port 39509: Change of username or service not allowed: (es1,ssh-connection) -> (minecraft,ssh-connection) [preauth]
Feb 20 07:29:04 np0005625203.localdomain sshd[24923]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:29:06 np0005625203.localdomain sshd[24923]: Invalid user minecraft from 185.246.128.171 port 57849
Feb 20 07:29:09 np0005625203.localdomain sshd[24923]: Disconnecting invalid user minecraft 185.246.128.171 port 57849: Change of username or service not allowed: (minecraft,ssh-connection) -> (binance,ssh-connection) [preauth]
Feb 20 07:29:09 np0005625203.localdomain sshd[24925]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:29:10 np0005625203.localdomain sshd[24925]: Invalid user dixi from 118.193.43.244 port 45500
Feb 20 07:29:11 np0005625203.localdomain sshd[24925]: Received disconnect from 118.193.43.244 port 45500:11: Bye Bye [preauth]
Feb 20 07:29:11 np0005625203.localdomain sshd[24925]: Disconnected from invalid user dixi 118.193.43.244 port 45500 [preauth]
Feb 20 07:29:12 np0005625203.localdomain sshd[24927]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:29:15 np0005625203.localdomain sshd[24927]: Invalid user binance from 185.246.128.171 port 30386
Feb 20 07:29:16 np0005625203.localdomain sshd[24927]: Disconnecting invalid user binance 185.246.128.171 port 30386: Change of username or service not allowed: (binance,ssh-connection) -> (raj,ssh-connection) [preauth]
Feb 20 07:29:19 np0005625203.localdomain sshd[24929]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:29:21 np0005625203.localdomain sshd[24929]: Invalid user raj from 185.246.128.171 port 60463
Feb 20 07:29:22 np0005625203.localdomain sshd[24929]: Disconnecting invalid user raj 185.246.128.171 port 60463: Change of username or service not allowed: (raj,ssh-connection) -> (mina,ssh-connection) [preauth]
Feb 20 07:29:25 np0005625203.localdomain sshd[24931]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:29:28 np0005625203.localdomain sshd[24931]: Invalid user mina from 185.246.128.171 port 24872
Feb 20 07:29:28 np0005625203.localdomain sshd[24931]: Disconnecting invalid user mina 185.246.128.171 port 24872: Change of username or service not allowed: (mina,ssh-connection) -> (openvpn,ssh-connection) [preauth]
Feb 20 07:29:32 np0005625203.localdomain sshd[24933]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:29:34 np0005625203.localdomain sshd[24933]: Invalid user openvpn from 185.246.128.171 port 55652
Feb 20 07:29:35 np0005625203.localdomain sshd[24933]: Disconnecting invalid user openvpn 185.246.128.171 port 55652: Change of username or service not allowed: (openvpn,ssh-connection) -> (Cisco,ssh-connection) [preauth]
Feb 20 07:29:38 np0005625203.localdomain sshd[24935]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:29:40 np0005625203.localdomain sshd[24935]: Invalid user Cisco from 185.246.128.171 port 20375
Feb 20 07:29:42 np0005625203.localdomain sshd[24935]: Disconnecting invalid user Cisco 185.246.128.171 port 20375: Change of username or service not allowed: (Cisco,ssh-connection) -> (gestion,ssh-connection) [preauth]
Feb 20 07:29:44 np0005625203.localdomain sshd[24937]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:29:46 np0005625203.localdomain sshd[24937]: Invalid user gestion from 185.246.128.171 port 50837
Feb 20 07:29:47 np0005625203.localdomain sshd[24937]: Disconnecting invalid user gestion 185.246.128.171 port 50837: Change of username or service not allowed: (gestion,ssh-connection) -> (liuyu,ssh-connection) [preauth]
Feb 20 07:29:47 np0005625203.localdomain sshd[24939]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:29:48 np0005625203.localdomain sshd[24941]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:29:49 np0005625203.localdomain sshd[24939]: Invalid user oracle from 123.204.132.127 port 55822
Feb 20 07:29:49 np0005625203.localdomain sshd[24939]: Received disconnect from 123.204.132.127 port 55822:11: Bye Bye [preauth]
Feb 20 07:29:49 np0005625203.localdomain sshd[24939]: Disconnected from invalid user oracle 123.204.132.127 port 55822 [preauth]
Feb 20 07:29:54 np0005625203.localdomain sshd[24941]: Invalid user liuyu from 185.246.128.171 port 6579
Feb 20 07:29:55 np0005625203.localdomain sshd[24941]: Disconnecting invalid user liuyu 185.246.128.171 port 6579: Change of username or service not allowed: (liuyu,ssh-connection) -> (deployuser,ssh-connection) [preauth]
Feb 20 07:29:56 np0005625203.localdomain sshd[24943]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:29:57 np0005625203.localdomain sshd[24943]: Invalid user validator from 92.118.39.72 port 56632
Feb 20 07:29:57 np0005625203.localdomain sshd[24943]: Connection closed by invalid user validator 92.118.39.72 port 56632 [preauth]
Feb 20 07:30:00 np0005625203.localdomain sshd[24945]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:30:03 np0005625203.localdomain sshd[24945]: Invalid user deployuser from 185.246.128.171 port 60149
Feb 20 07:30:05 np0005625203.localdomain sshd[24945]: Disconnecting invalid user deployuser 185.246.128.171 port 60149: Change of username or service not allowed: (deployuser,ssh-connection) -> (popo,ssh-connection) [preauth]
Feb 20 07:30:07 np0005625203.localdomain sshd[24947]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:30:07 np0005625203.localdomain sshd[24948]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:30:07 np0005625203.localdomain sshd[24947]: error: kex_exchange_identification: Connection closed by remote host
Feb 20 07:30:07 np0005625203.localdomain sshd[24947]: Connection closed by 142.93.130.138 port 60474
Feb 20 07:30:09 np0005625203.localdomain sshd[24948]: Invalid user popo from 185.246.128.171 port 27053
Feb 20 07:30:09 np0005625203.localdomain sshd[24948]: Disconnecting invalid user popo 185.246.128.171 port 27053: Change of username or service not allowed: (popo,ssh-connection) -> (ansible,ssh-connection) [preauth]
Feb 20 07:30:11 np0005625203.localdomain sshd[24950]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:30:14 np0005625203.localdomain sshd[24950]: Invalid user ansible from 185.246.128.171 port 48693
Feb 20 07:30:18 np0005625203.localdomain sshd[24950]: Disconnecting invalid user ansible 185.246.128.171 port 48693: Change of username or service not allowed: (ansible,ssh-connection) -> (oper,ssh-connection) [preauth]
Feb 20 07:30:19 np0005625203.localdomain sshd[24952]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:30:22 np0005625203.localdomain sshd[24954]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:30:22 np0005625203.localdomain sshd[24952]: Invalid user oper from 185.246.128.171 port 21217
Feb 20 07:30:23 np0005625203.localdomain sshd[24956]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:30:23 np0005625203.localdomain sshd[24952]: Disconnecting invalid user oper 185.246.128.171 port 21217: Change of username or service not allowed: (oper,ssh-connection) -> (mohamed,ssh-connection) [preauth]
Feb 20 07:30:24 np0005625203.localdomain sshd[24954]: Invalid user claude from 103.171.84.20 port 39782
Feb 20 07:30:24 np0005625203.localdomain sshd[24954]: Received disconnect from 103.171.84.20 port 39782:11: Bye Bye [preauth]
Feb 20 07:30:24 np0005625203.localdomain sshd[24954]: Disconnected from invalid user claude 103.171.84.20 port 39782 [preauth]
Feb 20 07:30:24 np0005625203.localdomain sshd[24956]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 07:30:26 np0005625203.localdomain sshd[24959]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:30:30 np0005625203.localdomain sshd[24959]: Invalid user mohamed from 185.246.128.171 port 51348
Feb 20 07:30:30 np0005625203.localdomain sshd[24959]: Disconnecting invalid user mohamed 185.246.128.171 port 51348: Change of username or service not allowed: (mohamed,ssh-connection) -> (peter,ssh-connection) [preauth]
Feb 20 07:30:31 np0005625203.localdomain sshd[24961]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:30:32 np0005625203.localdomain sshd[24961]: Invalid user peter from 185.246.128.171 port 9036
Feb 20 07:30:32 np0005625203.localdomain sshd[24961]: Disconnecting invalid user peter 185.246.128.171 port 9036: Change of username or service not allowed: (peter,ssh-connection) -> (router,ssh-connection) [preauth]
Feb 20 07:30:33 np0005625203.localdomain sshd[24963]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:30:35 np0005625203.localdomain sshd[24965]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:30:35 np0005625203.localdomain sshd[24965]: Invalid user admin from 172.203.58.203 port 42500
Feb 20 07:30:35 np0005625203.localdomain sshd[24965]: Received disconnect from 172.203.58.203 port 42500:11: Bye Bye [preauth]
Feb 20 07:30:35 np0005625203.localdomain sshd[24965]: Disconnected from invalid user admin 172.203.58.203 port 42500 [preauth]
Feb 20 07:30:36 np0005625203.localdomain sshd[24963]: Invalid user router from 185.246.128.171 port 22928
Feb 20 07:30:37 np0005625203.localdomain sshd[24963]: Disconnecting invalid user router 185.246.128.171 port 22928: Change of username or service not allowed: (router,ssh-connection) -> (tim,ssh-connection) [preauth]
Feb 20 07:30:40 np0005625203.localdomain sshd[24967]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:30:44 np0005625203.localdomain sshd[24969]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:30:44 np0005625203.localdomain sshd[24967]: Invalid user tim from 185.246.128.171 port 52408
Feb 20 07:30:45 np0005625203.localdomain sshd[24969]: Invalid user builder from 187.87.206.21 port 33988
Feb 20 07:30:45 np0005625203.localdomain sshd[24969]: Received disconnect from 187.87.206.21 port 33988:11: Bye Bye [preauth]
Feb 20 07:30:45 np0005625203.localdomain sshd[24969]: Disconnected from invalid user builder 187.87.206.21 port 33988 [preauth]
Feb 20 07:30:45 np0005625203.localdomain sshd[24967]: Disconnecting invalid user tim 185.246.128.171 port 52408: Change of username or service not allowed: (tim,ssh-connection) -> (sara,ssh-connection) [preauth]
Feb 20 07:30:48 np0005625203.localdomain sshd[24971]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:30:49 np0005625203.localdomain sshd[24971]: Invalid user sara from 185.246.128.171 port 24844
Feb 20 07:30:50 np0005625203.localdomain sshd[24971]: Disconnecting invalid user sara 185.246.128.171 port 24844: Change of username or service not allowed: (sara,ssh-connection) -> (mahmoud,ssh-connection) [preauth]
Feb 20 07:30:52 np0005625203.localdomain sshd[24973]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:30:54 np0005625203.localdomain sshd[24973]: Invalid user mahmoud from 185.246.128.171 port 42052
Feb 20 07:30:54 np0005625203.localdomain sshd[24973]: Disconnecting invalid user mahmoud 185.246.128.171 port 42052: Change of username or service not allowed: (mahmoud,ssh-connection) -> (admin123,ssh-connection) [preauth]
Feb 20 07:30:56 np0005625203.localdomain sshd[24975]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:30:59 np0005625203.localdomain sshd[24975]: Invalid user admin123 from 185.246.128.171 port 63263
Feb 20 07:31:00 np0005625203.localdomain sshd[24975]: Disconnecting invalid user admin123 185.246.128.171 port 63263: Change of username or service not allowed: (admin123,ssh-connection) -> (yealink,ssh-connection) [preauth]
Feb 20 07:31:01 np0005625203.localdomain sshd[24977]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:31:05 np0005625203.localdomain sshd[24977]: Invalid user yealink from 185.246.128.171 port 21573
Feb 20 07:31:07 np0005625203.localdomain sshd[24977]: Disconnecting invalid user yealink 185.246.128.171 port 21573: Change of username or service not allowed: (yealink,ssh-connection) -> (USERID,ssh-connection) [preauth]
Feb 20 07:31:10 np0005625203.localdomain sshd[24979]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:31:12 np0005625203.localdomain sshd[24979]: Invalid user USERID from 185.246.128.171 port 59113
Feb 20 07:31:12 np0005625203.localdomain sshd[24979]: Disconnecting invalid user USERID 185.246.128.171 port 59113: Change of username or service not allowed: (USERID,ssh-connection) -> (weewx,ssh-connection) [preauth]
Feb 20 07:31:14 np0005625203.localdomain sshd[24981]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:31:17 np0005625203.localdomain sshd[24981]: Invalid user weewx from 185.246.128.171 port 12357
Feb 20 07:31:20 np0005625203.localdomain sshd[24983]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:31:20 np0005625203.localdomain sshd[24981]: Disconnecting invalid user weewx 185.246.128.171 port 12357: Change of username or service not allowed: (weewx,ssh-connection) -> (sync,ssh-connection) [preauth]
Feb 20 07:31:20 np0005625203.localdomain sshd[24985]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:31:21 np0005625203.localdomain sshd[24983]: Received disconnect from 45.246.55.249 port 49312:11: Bye Bye [preauth]
Feb 20 07:31:21 np0005625203.localdomain sshd[24983]: Disconnected from authenticating user root 45.246.55.249 port 49312 [preauth]
Feb 20 07:31:22 np0005625203.localdomain sshd[24985]: Invalid user admin from 40.81.244.142 port 54462
Feb 20 07:31:22 np0005625203.localdomain sshd[24985]: Received disconnect from 40.81.244.142 port 54462:11: Bye Bye [preauth]
Feb 20 07:31:22 np0005625203.localdomain sshd[24985]: Disconnected from invalid user admin 40.81.244.142 port 54462 [preauth]
Feb 20 07:31:22 np0005625203.localdomain sshd[24987]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:31:27 np0005625203.localdomain sshd[24987]: Disconnecting authenticating user sync 185.246.128.171 port 47938: Change of username or service not allowed: (sync,ssh-connection) -> (dmdba,ssh-connection) [preauth]
Feb 20 07:31:30 np0005625203.localdomain sshd[24989]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:31:33 np0005625203.localdomain sshd[24989]: Invalid user dmdba from 185.246.128.171 port 17601
Feb 20 07:31:34 np0005625203.localdomain sshd[24989]: Disconnecting invalid user dmdba 185.246.128.171 port 17601: Change of username or service not allowed: (dmdba,ssh-connection) -> (oscar,ssh-connection) [preauth]
Feb 20 07:31:34 np0005625203.localdomain sshd[24991]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:31:36 np0005625203.localdomain sshd[24991]: Invalid user oscar from 185.246.128.171 port 38596
Feb 20 07:31:36 np0005625203.localdomain sshd[24991]: Disconnecting invalid user oscar 185.246.128.171 port 38596: Change of username or service not allowed: (oscar,ssh-connection) -> (gang,ssh-connection) [preauth]
Feb 20 07:31:38 np0005625203.localdomain sshd[24993]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:31:41 np0005625203.localdomain sshd[24993]: Invalid user gang from 185.246.128.171 port 54016
Feb 20 07:31:43 np0005625203.localdomain sshd[24993]: Disconnecting invalid user gang 185.246.128.171 port 54016: Change of username or service not allowed: (gang,ssh-connection) -> (dev1,ssh-connection) [preauth]
Feb 20 07:31:46 np0005625203.localdomain sshd[24995]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:31:52 np0005625203.localdomain sshd[24995]: Invalid user dev1 from 185.246.128.171 port 24608
Feb 20 07:31:52 np0005625203.localdomain sshd[24997]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:31:52 np0005625203.localdomain sshd[24995]: Disconnecting invalid user dev1 185.246.128.171 port 24608: Change of username or service not allowed: (dev1,ssh-connection) -> (alexandra,ssh-connection) [preauth]
Feb 20 07:31:52 np0005625203.localdomain sshd[24997]: Invalid user titu from 170.254.229.191 port 36726
Feb 20 07:31:52 np0005625203.localdomain sshd[24997]: Received disconnect from 170.254.229.191 port 36726:11: Bye Bye [preauth]
Feb 20 07:31:52 np0005625203.localdomain sshd[24997]: Disconnected from invalid user titu 170.254.229.191 port 36726 [preauth]
Feb 20 07:31:54 np0005625203.localdomain sshd[24999]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:31:56 np0005625203.localdomain sshd[24999]: Invalid user alexandra from 185.246.128.171 port 58098
Feb 20 07:31:57 np0005625203.localdomain sshd[25001]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:31:57 np0005625203.localdomain sshd[25001]: Invalid user blockchain from 92.118.39.72 port 40982
Feb 20 07:31:58 np0005625203.localdomain sshd[25001]: Connection closed by invalid user blockchain 92.118.39.72 port 40982 [preauth]
Feb 20 07:31:58 np0005625203.localdomain sshd[24999]: Disconnecting invalid user alexandra 185.246.128.171 port 58098: Change of username or service not allowed: (alexandra,ssh-connection) -> (user2,ssh-connection) [preauth]
Feb 20 07:32:02 np0005625203.localdomain sshd[25003]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:32:03 np0005625203.localdomain sshd[25003]: Invalid user user2 from 185.246.128.171 port 28909
Feb 20 07:32:04 np0005625203.localdomain sshd[25003]: Disconnecting invalid user user2 185.246.128.171 port 28909: Change of username or service not allowed: (user2,ssh-connection) -> (dock,ssh-connection) [preauth]
Feb 20 07:32:06 np0005625203.localdomain sshd[25005]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:32:08 np0005625203.localdomain sshd[25007]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:32:09 np0005625203.localdomain sshd[25007]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 07:32:09 np0005625203.localdomain sshd[25005]: Invalid user dock from 185.246.128.171 port 46368
Feb 20 07:32:10 np0005625203.localdomain sshd[25005]: Disconnecting invalid user dock 185.246.128.171 port 46368: Change of username or service not allowed: (dock,ssh-connection) -> (a,ssh-connection) [preauth]
Feb 20 07:32:12 np0005625203.localdomain sshd[25009]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:32:16 np0005625203.localdomain sshd[25009]: Invalid user a from 185.246.128.171 port 12001
Feb 20 07:32:17 np0005625203.localdomain sshd[25009]: Disconnecting invalid user a 185.246.128.171 port 12001: Change of username or service not allowed: (a,ssh-connection) -> (lucy,ssh-connection) [preauth]
Feb 20 07:32:20 np0005625203.localdomain sshd[25011]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:32:24 np0005625203.localdomain sshd[25011]: Invalid user lucy from 185.246.128.171 port 45849
Feb 20 07:32:27 np0005625203.localdomain sshd[25011]: Disconnecting invalid user lucy 185.246.128.171 port 45849: Change of username or service not allowed: (lucy,ssh-connection) -> (diego,ssh-connection) [preauth]
Feb 20 07:32:30 np0005625203.localdomain sshd[25013]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:32:30 np0005625203.localdomain sshd[25015]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:32:31 np0005625203.localdomain sshd[25013]: Invalid user user1 from 118.193.43.244 port 49848
Feb 20 07:32:31 np0005625203.localdomain sshd[25013]: Received disconnect from 118.193.43.244 port 49848:11: Bye Bye [preauth]
Feb 20 07:32:31 np0005625203.localdomain sshd[25013]: Disconnected from invalid user user1 118.193.43.244 port 49848 [preauth]
Feb 20 07:32:33 np0005625203.localdomain sshd[25015]: Invalid user diego from 185.246.128.171 port 26114
Feb 20 07:32:34 np0005625203.localdomain sshd[25015]: Disconnecting invalid user diego 185.246.128.171 port 26114: Change of username or service not allowed: (diego,ssh-connection) -> (chenwang,ssh-connection) [preauth]
Feb 20 07:32:35 np0005625203.localdomain sshd[25017]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:32:37 np0005625203.localdomain sshd[25017]: Invalid user chenwang from 185.246.128.171 port 45098
Feb 20 07:32:37 np0005625203.localdomain sshd[25017]: Disconnecting invalid user chenwang 185.246.128.171 port 45098: Change of username or service not allowed: (chenwang,ssh-connection) -> (amit,ssh-connection) [preauth]
Feb 20 07:32:40 np0005625203.localdomain sshd[25019]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:32:47 np0005625203.localdomain sshd[25019]: Invalid user amit from 185.246.128.171 port 3792
Feb 20 07:32:48 np0005625203.localdomain sshd[25019]: Disconnecting invalid user amit 185.246.128.171 port 3792: Change of username or service not allowed: (amit,ssh-connection) -> (storage,ssh-connection) [preauth]
Feb 20 07:32:51 np0005625203.localdomain sshd[25021]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:32:51 np0005625203.localdomain sshd[25021]: Invalid user comercial from 143.198.161.12 port 57392
Feb 20 07:32:51 np0005625203.localdomain sshd[25021]: Received disconnect from 143.198.161.12 port 57392:11: Bye Bye [preauth]
Feb 20 07:32:51 np0005625203.localdomain sshd[25021]: Disconnected from invalid user comercial 143.198.161.12 port 57392 [preauth]
Feb 20 07:32:51 np0005625203.localdomain sshd[25023]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:32:56 np0005625203.localdomain sshd[25023]: Invalid user storage from 185.246.128.171 port 50986
Feb 20 07:32:57 np0005625203.localdomain sshd[25023]: Disconnecting invalid user storage 185.246.128.171 port 50986: Change of username or service not allowed: (storage,ssh-connection) -> (vendas,ssh-connection) [preauth]
Feb 20 07:33:00 np0005625203.localdomain sshd[25025]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:33:01 np0005625203.localdomain sshd[25025]: Invalid user vendas from 185.246.128.171 port 23085
Feb 20 07:33:04 np0005625203.localdomain sshd[25025]: Disconnecting invalid user vendas 185.246.128.171 port 23085: Change of username or service not allowed: (vendas,ssh-connection) -> (demo,ssh-connection) [preauth]
Feb 20 07:33:07 np0005625203.localdomain sshd[25027]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:33:08 np0005625203.localdomain sshd[25029]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:33:10 np0005625203.localdomain sshd[25029]: Invalid user n8n from 123.204.132.127 port 40658
Feb 20 07:33:10 np0005625203.localdomain sshd[25029]: Received disconnect from 123.204.132.127 port 40658:11: Bye Bye [preauth]
Feb 20 07:33:10 np0005625203.localdomain sshd[25029]: Disconnected from invalid user n8n 123.204.132.127 port 40658 [preauth]
Feb 20 07:33:10 np0005625203.localdomain sshd[25027]: Invalid user demo from 185.246.128.171 port 54570
Feb 20 07:33:10 np0005625203.localdomain sshd[25027]: Disconnecting invalid user demo 185.246.128.171 port 54570: Change of username or service not allowed: (demo,ssh-connection) -> (aziz,ssh-connection) [preauth]
Feb 20 07:33:11 np0005625203.localdomain sshd[25031]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:33:12 np0005625203.localdomain sshd[25031]: Invalid user aziz from 185.246.128.171 port 9231
Feb 20 07:33:13 np0005625203.localdomain sshd[25031]: Disconnecting invalid user aziz 185.246.128.171 port 9231: Change of username or service not allowed: (aziz,ssh-connection) -> (temp,ssh-connection) [preauth]
Feb 20 07:33:16 np0005625203.localdomain sshd[25033]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:33:19 np0005625203.localdomain sshd[25033]: Invalid user temp from 185.246.128.171 port 30921
Feb 20 07:33:20 np0005625203.localdomain sshd[25033]: Disconnecting invalid user temp 185.246.128.171 port 30921: Change of username or service not allowed: (temp,ssh-connection) -> (yesenia,ssh-connection) [preauth]
Feb 20 07:33:22 np0005625203.localdomain sshd[25035]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:33:25 np0005625203.localdomain sshd[25035]: Invalid user yesenia from 185.246.128.171 port 57286
Feb 20 07:33:25 np0005625203.localdomain sshd[25035]: Disconnecting invalid user yesenia 185.246.128.171 port 57286: Change of username or service not allowed: (yesenia,ssh-connection) -> (aovalle,ssh-connection) [preauth]
Feb 20 07:33:26 np0005625203.localdomain sshd[25037]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:33:29 np0005625203.localdomain sshd[25037]: Invalid user aovalle from 185.246.128.171 port 12683
Feb 20 07:33:30 np0005625203.localdomain sshd[25037]: Disconnecting invalid user aovalle 185.246.128.171 port 12683: Change of username or service not allowed: (aovalle,ssh-connection) -> (suraj,ssh-connection) [preauth]
Feb 20 07:33:34 np0005625203.localdomain sshd[25039]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:33:36 np0005625203.localdomain sshd[25039]: Invalid user suraj from 185.246.128.171 port 47264
Feb 20 07:33:37 np0005625203.localdomain sshd[25039]: Disconnecting invalid user suraj 185.246.128.171 port 47264: Change of username or service not allowed: (suraj,ssh-connection) -> (,ssh-connection) [preauth]
Feb 20 07:33:38 np0005625203.localdomain sshd[25041]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:33:40 np0005625203.localdomain sshd[25041]: Invalid user  from 185.246.128.171 port 62943
Feb 20 07:33:46 np0005625203.localdomain sshd[25041]: Disconnecting invalid user  185.246.128.171 port 62943: Change of username or service not allowed: (,ssh-connection) -> (ddd,ssh-connection) [preauth]
Feb 20 07:33:47 np0005625203.localdomain sshd[25043]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:33:50 np0005625203.localdomain sshd[25043]: Invalid user ddd from 185.246.128.171 port 38683
Feb 20 07:33:50 np0005625203.localdomain sshd[25045]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:33:51 np0005625203.localdomain sshd[25045]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 07:33:51 np0005625203.localdomain sshd[25043]: Disconnecting invalid user ddd 185.246.128.171 port 38683: Change of username or service not allowed: (ddd,ssh-connection) -> (office,ssh-connection) [preauth]
Feb 20 07:33:55 np0005625203.localdomain sshd[25047]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:33:56 np0005625203.localdomain sshd[25049]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:33:56 np0005625203.localdomain sshd[25049]: Invalid user pool from 92.118.39.72 port 53568
Feb 20 07:33:57 np0005625203.localdomain sshd[25049]: Connection closed by invalid user pool 92.118.39.72 port 53568 [preauth]
Feb 20 07:33:57 np0005625203.localdomain sshd[25051]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:33:57 np0005625203.localdomain sshd[25051]: Invalid user user1 from 172.203.58.203 port 48998
Feb 20 07:33:57 np0005625203.localdomain sshd[25051]: Received disconnect from 172.203.58.203 port 48998:11: Bye Bye [preauth]
Feb 20 07:33:57 np0005625203.localdomain sshd[25051]: Disconnected from invalid user user1 172.203.58.203 port 48998 [preauth]
Feb 20 07:33:58 np0005625203.localdomain sshd[25047]: Invalid user office from 185.246.128.171 port 8844
Feb 20 07:33:59 np0005625203.localdomain sshd[25047]: Disconnecting invalid user office 185.246.128.171 port 8844: Change of username or service not allowed: (office,ssh-connection) -> (pwrchute,ssh-connection) [preauth]
Feb 20 07:34:00 np0005625203.localdomain sshd[25053]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:34:04 np0005625203.localdomain sshd[25053]: Invalid user pwrchute from 185.246.128.171 port 33365
Feb 20 07:34:05 np0005625203.localdomain sshd[25053]: Disconnecting invalid user pwrchute 185.246.128.171 port 33365: Change of username or service not allowed: (pwrchute,ssh-connection) -> (Daniel,ssh-connection) [preauth]
Feb 20 07:34:06 np0005625203.localdomain sshd[25055]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:34:10 np0005625203.localdomain sshd[25055]: Invalid user Daniel from 185.246.128.171 port 60146
Feb 20 07:34:11 np0005625203.localdomain sshd[25055]: Disconnecting invalid user Daniel 185.246.128.171 port 60146: Change of username or service not allowed: (Daniel,ssh-connection) -> (note,ssh-connection) [preauth]
Feb 20 07:34:14 np0005625203.localdomain sshd[25057]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:34:17 np0005625203.localdomain sshd[25057]: Invalid user note from 185.246.128.171 port 28742
Feb 20 07:34:18 np0005625203.localdomain sshd[25057]: Disconnecting invalid user note 185.246.128.171 port 28742: Change of username or service not allowed: (note,ssh-connection) -> (vyatta,ssh-connection) [preauth]
Feb 20 07:34:21 np0005625203.localdomain sshd[25059]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:34:24 np0005625203.localdomain sshd[25059]: Invalid user vyatta from 185.246.128.171 port 59046
Feb 20 07:34:25 np0005625203.localdomain sshd[25059]: Disconnecting invalid user vyatta 185.246.128.171 port 59046: Change of username or service not allowed: (vyatta,ssh-connection) -> (linuxadmin,ssh-connection) [preauth]
Feb 20 07:34:27 np0005625203.localdomain sshd[25061]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:34:27 np0005625203.localdomain sshd[25061]: Received disconnect from 189.190.2.14 port 39326:11: Bye Bye [preauth]
Feb 20 07:34:27 np0005625203.localdomain sshd[25061]: Disconnected from authenticating user root 189.190.2.14 port 39326 [preauth]
Feb 20 07:34:27 np0005625203.localdomain sshd[25063]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:34:32 np0005625203.localdomain sshd[25063]: Invalid user linuxadmin from 185.246.128.171 port 25671
Feb 20 07:34:33 np0005625203.localdomain sshd[25063]: Disconnecting invalid user linuxadmin 185.246.128.171 port 25671: Change of username or service not allowed: (linuxadmin,ssh-connection) -> (spark,ssh-connection) [preauth]
Feb 20 07:34:36 np0005625203.localdomain sshd[25065]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:34:41 np0005625203.localdomain sshd[25065]: Invalid user spark from 185.246.128.171 port 2421
Feb 20 07:34:41 np0005625203.localdomain sshd[25065]: Disconnecting invalid user spark 185.246.128.171 port 2421: Change of username or service not allowed: (spark,ssh-connection) -> (ubuntu,ssh-connection) [preauth]
Feb 20 07:34:44 np0005625203.localdomain sshd[25067]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:34:48 np0005625203.localdomain sshd[25067]: Invalid user ubuntu from 185.246.128.171 port 36966
Feb 20 07:34:50 np0005625203.localdomain sshd[25069]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:34:51 np0005625203.localdomain sshd[25069]: Invalid user sshadmin from 187.87.206.21 port 41858
Feb 20 07:34:51 np0005625203.localdomain sshd[25069]: Received disconnect from 187.87.206.21 port 41858:11: Bye Bye [preauth]
Feb 20 07:34:51 np0005625203.localdomain sshd[25069]: Disconnected from invalid user sshadmin 187.87.206.21 port 41858 [preauth]
Feb 20 07:34:52 np0005625203.localdomain sshd[25067]: error: maximum authentication attempts exceeded for invalid user ubuntu from 185.246.128.171 port 36966 ssh2 [preauth]
Feb 20 07:34:52 np0005625203.localdomain sshd[25067]: Disconnecting invalid user ubuntu 185.246.128.171 port 36966: Too many authentication failures [preauth]
Feb 20 07:34:53 np0005625203.localdomain sshd[25071]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:34:57 np0005625203.localdomain sshd[25071]: Invalid user ubuntu from 185.246.128.171 port 12899
Feb 20 07:35:02 np0005625203.localdomain sshd[25073]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:35:03 np0005625203.localdomain sshd[25073]: Accepted publickey for zuul from 192.168.122.100 port 33380 ssh2: RSA SHA256:uaxCfdsB9dLyL4znjsLCPDTT6O+6xZzjHRjczJFOA6k
Feb 20 07:35:03 np0005625203.localdomain systemd-logind[759]: New session 13 of user zuul.
Feb 20 07:35:03 np0005625203.localdomain systemd[1]: Started Session 13 of User zuul.
Feb 20 07:35:03 np0005625203.localdomain sshd[25073]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Feb 20 07:35:03 np0005625203.localdomain sudo[25119]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oqilamszlpnjvcoikcegjeyrannvlhpx ; /usr/bin/python3
Feb 20 07:35:03 np0005625203.localdomain sudo[25119]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:35:03 np0005625203.localdomain python3[25121]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 20 07:35:04 np0005625203.localdomain sudo[25119]: pam_unix(sudo:session): session closed for user root
Feb 20 07:35:05 np0005625203.localdomain sudo[25206]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vbcgjxympltlksqdxkgbsymxyhqoroqt ; /usr/bin/python3
Feb 20 07:35:05 np0005625203.localdomain sudo[25206]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:35:05 np0005625203.localdomain python3[25208]: ansible-ansible.builtin.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb 20 07:35:05 np0005625203.localdomain sshd[25210]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:35:06 np0005625203.localdomain sshd[25071]: Disconnecting invalid user ubuntu 185.246.128.171 port 12899: Change of username or service not allowed: (ubuntu,ssh-connection) -> (asterisk,ssh-connection) [preauth]
Feb 20 07:35:07 np0005625203.localdomain sshd[25212]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:35:07 np0005625203.localdomain sshd[25210]: Received disconnect from 40.81.244.142 port 46578:11: Bye Bye [preauth]
Feb 20 07:35:07 np0005625203.localdomain sshd[25210]: Disconnected from authenticating user root 40.81.244.142 port 46578 [preauth]
Feb 20 07:35:08 np0005625203.localdomain sudo[25206]: pam_unix(sudo:session): session closed for user root
Feb 20 07:35:08 np0005625203.localdomain sshd[25212]: Invalid user asterisk from 185.246.128.171 port 10594
Feb 20 07:35:08 np0005625203.localdomain sudo[25227]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dfbymfemlhmghmogyscomlgspjhfponk ; /usr/bin/python3
Feb 20 07:35:08 np0005625203.localdomain sudo[25227]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:35:08 np0005625203.localdomain sshd[25212]: Disconnecting invalid user asterisk 185.246.128.171 port 10594: Change of username or service not allowed: (asterisk,ssh-connection) -> (aaa,ssh-connection) [preauth]
Feb 20 07:35:08 np0005625203.localdomain python3[25229]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 20 07:35:08 np0005625203.localdomain sudo[25227]: pam_unix(sudo:session): session closed for user root
Feb 20 07:35:09 np0005625203.localdomain sudo[25243]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tnnubaghpbmucqwadjitgacoqtdsoeux ; /usr/bin/python3
Feb 20 07:35:09 np0005625203.localdomain sudo[25243]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:35:09 np0005625203.localdomain python3[25245]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G
                                                         losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                                         lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:35:09 np0005625203.localdomain kernel: loop: module loaded
Feb 20 07:35:09 np0005625203.localdomain kernel: loop3: detected capacity change from 0 to 14680064
Feb 20 07:35:09 np0005625203.localdomain sudo[25243]: pam_unix(sudo:session): session closed for user root
Feb 20 07:35:09 np0005625203.localdomain sudo[25268]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ylwpwrkvvbgewxmpirzsvlwpdpgaxnsy ; /usr/bin/python3
Feb 20 07:35:09 np0005625203.localdomain sudo[25268]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:35:09 np0005625203.localdomain python3[25270]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                                         vgcreate ceph_vg0 /dev/loop3
                                                         lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                                         lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:35:09 np0005625203.localdomain sshd[25274]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:35:09 np0005625203.localdomain lvm[25273]: PV /dev/loop3 not used.
Feb 20 07:35:09 np0005625203.localdomain lvm[25276]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 20 07:35:10 np0005625203.localdomain systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Feb 20 07:35:10 np0005625203.localdomain lvm[25279]:   1 logical volume(s) in volume group "ceph_vg0" now active
Feb 20 07:35:10 np0005625203.localdomain lvm[25287]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 20 07:35:10 np0005625203.localdomain lvm[25287]: VG ceph_vg0 finished
Feb 20 07:35:10 np0005625203.localdomain systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Feb 20 07:35:10 np0005625203.localdomain sudo[25268]: pam_unix(sudo:session): session closed for user root
Feb 20 07:35:10 np0005625203.localdomain sudo[25335]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qttkiiqipmofsfszpmyzllhvtnzymyac ; /usr/bin/python3
Feb 20 07:35:10 np0005625203.localdomain sudo[25335]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:35:10 np0005625203.localdomain python3[25337]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:35:10 np0005625203.localdomain sudo[25335]: pam_unix(sudo:session): session closed for user root
Feb 20 07:35:11 np0005625203.localdomain sudo[25378]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sghnejqrzcigjhcnzmdfttfkppdjodjd ; /usr/bin/python3
Feb 20 07:35:11 np0005625203.localdomain sudo[25378]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:35:11 np0005625203.localdomain python3[25380]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771572910.3716478-54728-228867623759516/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:35:11 np0005625203.localdomain sshd[25274]: Invalid user aaa from 185.246.128.171 port 23592
Feb 20 07:35:11 np0005625203.localdomain sudo[25378]: pam_unix(sudo:session): session closed for user root
Feb 20 07:35:11 np0005625203.localdomain sshd[25274]: Disconnecting invalid user aaa 185.246.128.171 port 23592: Change of username or service not allowed: (aaa,ssh-connection) -> (jumpserver,ssh-connection) [preauth]
Feb 20 07:35:11 np0005625203.localdomain sudo[25408]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ybitwbyauotwbwyofyjjivppxjkhagcz ; /usr/bin/python3
Feb 20 07:35:11 np0005625203.localdomain sudo[25408]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:35:11 np0005625203.localdomain python3[25410]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 07:35:12 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 07:35:12 np0005625203.localdomain systemd-rc-local-generator[25435]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 07:35:12 np0005625203.localdomain systemd-sysv-generator[25439]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 07:35:12 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 07:35:12 np0005625203.localdomain systemd[1]: Starting Ceph OSD losetup...
Feb 20 07:35:12 np0005625203.localdomain bash[25451]: /dev/loop3: [64516]:8400144 (/var/lib/ceph-osd-0.img)
Feb 20 07:35:12 np0005625203.localdomain systemd[1]: Finished Ceph OSD losetup.
Feb 20 07:35:12 np0005625203.localdomain lvm[25452]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 20 07:35:12 np0005625203.localdomain lvm[25452]: VG ceph_vg0 finished
Feb 20 07:35:12 np0005625203.localdomain sudo[25408]: pam_unix(sudo:session): session closed for user root
Feb 20 07:35:12 np0005625203.localdomain sudo[25467]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oddtxnvybkwuwcjueukehkqmxqogdxrv ; /usr/bin/python3
Feb 20 07:35:12 np0005625203.localdomain sudo[25467]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:35:12 np0005625203.localdomain python3[25469]: ansible-ansible.builtin.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb 20 07:35:12 np0005625203.localdomain sshd[25471]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:35:15 np0005625203.localdomain sudo[25467]: pam_unix(sudo:session): session closed for user root
Feb 20 07:35:15 np0005625203.localdomain sudo[25486]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-awfivvzkjhskfcnydcnwudwkyhgllopr ; /usr/bin/python3
Feb 20 07:35:15 np0005625203.localdomain sudo[25486]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:35:15 np0005625203.localdomain python3[25488]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 20 07:35:15 np0005625203.localdomain sudo[25486]: pam_unix(sudo:session): session closed for user root
Feb 20 07:35:16 np0005625203.localdomain sudo[25502]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wttxtcmdmdqmrsmecaaztclpwormhntm ; /usr/bin/python3
Feb 20 07:35:16 np0005625203.localdomain sudo[25502]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:35:16 np0005625203.localdomain python3[25504]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=7G
                                                         losetup /dev/loop4 /var/lib/ceph-osd-1.img
                                                         lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:35:16 np0005625203.localdomain kernel: loop4: detected capacity change from 0 to 14680064
Feb 20 07:35:16 np0005625203.localdomain sudo[25502]: pam_unix(sudo:session): session closed for user root
Feb 20 07:35:16 np0005625203.localdomain sudo[25524]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bopzxvnvsujhjxxsjyawijhmpqsrhzwp ; /usr/bin/python3
Feb 20 07:35:16 np0005625203.localdomain sudo[25524]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:35:17 np0005625203.localdomain python3[25526]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4
                                                         vgcreate ceph_vg1 /dev/loop4
                                                         lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1
                                                         lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:35:17 np0005625203.localdomain lvm[25529]: PV /dev/loop4 not used.
Feb 20 07:35:17 np0005625203.localdomain lvm[25539]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 20 07:35:17 np0005625203.localdomain systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Feb 20 07:35:17 np0005625203.localdomain lvm[25541]:   1 logical volume(s) in volume group "ceph_vg1" now active
Feb 20 07:35:17 np0005625203.localdomain sudo[25524]: pam_unix(sudo:session): session closed for user root
Feb 20 07:35:17 np0005625203.localdomain systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Feb 20 07:35:17 np0005625203.localdomain sudo[25587]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qerxmeffpnrpnbbbabehjwqpzciadlis ; /usr/bin/python3
Feb 20 07:35:17 np0005625203.localdomain sudo[25587]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:35:17 np0005625203.localdomain python3[25589]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:35:17 np0005625203.localdomain sudo[25587]: pam_unix(sudo:session): session closed for user root
Feb 20 07:35:18 np0005625203.localdomain sudo[25630]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-etsjpwddekfmjkdfanqcgexqahdfidua ; /usr/bin/python3
Feb 20 07:35:18 np0005625203.localdomain sudo[25630]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:35:18 np0005625203.localdomain python3[25632]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771572917.555341-54974-142182017920808/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:35:18 np0005625203.localdomain sudo[25630]: pam_unix(sudo:session): session closed for user root
Feb 20 07:35:18 np0005625203.localdomain sudo[25660]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rhoqkhjxrvacqjngfvwcypxsiibfigtk ; /usr/bin/python3
Feb 20 07:35:18 np0005625203.localdomain sudo[25660]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:35:18 np0005625203.localdomain python3[25662]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 07:35:18 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 07:35:18 np0005625203.localdomain systemd-sysv-generator[25690]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 07:35:18 np0005625203.localdomain systemd-rc-local-generator[25686]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 07:35:19 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 07:35:19 np0005625203.localdomain systemd[1]: Starting Ceph OSD losetup...
Feb 20 07:35:19 np0005625203.localdomain bash[25703]: /dev/loop4: [64516]:8399529 (/var/lib/ceph-osd-1.img)
Feb 20 07:35:19 np0005625203.localdomain systemd[1]: Finished Ceph OSD losetup.
Feb 20 07:35:19 np0005625203.localdomain sudo[25660]: pam_unix(sudo:session): session closed for user root
Feb 20 07:35:19 np0005625203.localdomain lvm[25704]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 20 07:35:19 np0005625203.localdomain lvm[25704]: VG ceph_vg1 finished
Feb 20 07:35:19 np0005625203.localdomain sshd[25471]: Invalid user jumpserver from 185.246.128.171 port 36959
Feb 20 07:35:21 np0005625203.localdomain sshd[25471]: Disconnecting invalid user jumpserver 185.246.128.171 port 36959: Change of username or service not allowed: (jumpserver,ssh-connection) -> (stack,ssh-connection) [preauth]
Feb 20 07:35:25 np0005625203.localdomain sshd[25705]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:35:27 np0005625203.localdomain sudo[25749]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zbpsxjuijyossrxhohesecnnxtuyhqxt ; /usr/bin/python3
Feb 20 07:35:27 np0005625203.localdomain sudo[25749]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:35:28 np0005625203.localdomain sshd[25705]: Invalid user stack from 185.246.128.171 port 30911
Feb 20 07:35:28 np0005625203.localdomain python3[25751]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all', 'min'] gather_timeout=45 filter=[] fact_path=/etc/ansible/facts.d
Feb 20 07:35:28 np0005625203.localdomain sudo[25749]: pam_unix(sudo:session): session closed for user root
Feb 20 07:35:28 np0005625203.localdomain sshd[25705]: Disconnecting invalid user stack 185.246.128.171 port 30911: Change of username or service not allowed: (stack,ssh-connection) -> (visitors,ssh-connection) [preauth]
Feb 20 07:35:29 np0005625203.localdomain sudo[25769]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hhndzynwffplgbylpktjjjggkhbpnzom ; /usr/bin/python3
Feb 20 07:35:29 np0005625203.localdomain sudo[25769]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:35:29 np0005625203.localdomain python3[25771]: ansible-hostname Invoked with name=np0005625203.localdomain use=None
Feb 20 07:35:29 np0005625203.localdomain systemd[1]: Starting Hostname Service...
Feb 20 07:35:29 np0005625203.localdomain sshd[25776]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:35:29 np0005625203.localdomain systemd[1]: Started Hostname Service.
Feb 20 07:35:29 np0005625203.localdomain sudo[25769]: pam_unix(sudo:session): session closed for user root
Feb 20 07:35:30 np0005625203.localdomain sshd[25780]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:35:30 np0005625203.localdomain sshd[25780]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 07:35:31 np0005625203.localdomain sudo[25796]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-enmnhtvidlbqlkenobkrskzoyzzsdxii ; /usr/bin/python3
Feb 20 07:35:31 np0005625203.localdomain sudo[25796]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:35:31 np0005625203.localdomain python3[25798]: ansible-tempfile Invoked with state=file suffix=tmphosts prefix=ansible. path=None
Feb 20 07:35:31 np0005625203.localdomain sudo[25796]: pam_unix(sudo:session): session closed for user root
Feb 20 07:35:31 np0005625203.localdomain sudo[25844]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-amlautvzcbwmawldiwenrxklfnwmwyfz ; /usr/bin/python3
Feb 20 07:35:31 np0005625203.localdomain sudo[25844]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:35:32 np0005625203.localdomain python3[25846]: ansible-ansible.legacy.copy Invoked with remote_src=True src=/etc/hosts dest=/tmp/ansible.co9j0_s3tmphosts mode=preserve backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:35:32 np0005625203.localdomain sudo[25844]: pam_unix(sudo:session): session closed for user root
Feb 20 07:35:32 np0005625203.localdomain sshd[25776]: Invalid user visitors from 185.246.128.171 port 48942
Feb 20 07:35:32 np0005625203.localdomain sudo[25874]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fsvtglvtyuqbtopyvaxwrjlwzjhetoqy ; /usr/bin/python3
Feb 20 07:35:32 np0005625203.localdomain sudo[25874]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:35:32 np0005625203.localdomain python3[25876]: ansible-blockinfile Invoked with state=absent path=/tmp/ansible.co9j0_s3tmphosts block= marker=# {mark} marker_begin=HEAT_HOSTS_START - Do not edit manually within this section! marker_end=HEAT_HOSTS_END create=False backup=False unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:35:32 np0005625203.localdomain sudo[25874]: pam_unix(sudo:session): session closed for user root
Feb 20 07:35:33 np0005625203.localdomain sudo[25890]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tvfywrkyafsazvbziaiifgylkfuomoyb ; /usr/bin/python3
Feb 20 07:35:33 np0005625203.localdomain sudo[25890]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:35:33 np0005625203.localdomain sshd[25776]: Disconnecting invalid user visitors 185.246.128.171 port 48942: Change of username or service not allowed: (visitors,ssh-connection) -> (webuser,ssh-connection) [preauth]
Feb 20 07:35:33 np0005625203.localdomain python3[25892]: ansible-blockinfile Invoked with create=True path=/tmp/ansible.co9j0_s3tmphosts insertbefore=BOF block=192.168.122.106 np0005625202.localdomain np0005625202
                                                         192.168.122.106 np0005625202.ctlplane.localdomain np0005625202.ctlplane
                                                         192.168.122.107 np0005625203.localdomain np0005625203
                                                         192.168.122.107 np0005625203.ctlplane.localdomain np0005625203.ctlplane
                                                         192.168.122.108 np0005625204.localdomain np0005625204
                                                         192.168.122.108 np0005625204.ctlplane.localdomain np0005625204.ctlplane
                                                         192.168.122.103 np0005625199.localdomain np0005625199
                                                         192.168.122.103 np0005625199.ctlplane.localdomain np0005625199.ctlplane
                                                         192.168.122.104 np0005625200.localdomain np0005625200
                                                         192.168.122.104 np0005625200.ctlplane.localdomain np0005625200.ctlplane
                                                         192.168.122.105 np0005625201.localdomain np0005625201
                                                         192.168.122.105 np0005625201.ctlplane.localdomain np0005625201.ctlplane
                                                         
                                                         192.168.122.100 undercloud.ctlplane.localdomain undercloud.ctlplane
                                                          marker=# {mark} marker_begin=START_HOST_ENTRIES_FOR_STACK: overcloud marker_end=END_HOST_ENTRIES_FOR_STACK: overcloud state=present backup=False unsafe_writes=False insertafter=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:35:33 np0005625203.localdomain sudo[25890]: pam_unix(sudo:session): session closed for user root
Feb 20 07:35:33 np0005625203.localdomain sudo[25906]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gzfiwzroeccmrkvcobplkpycejgfrucc ; /usr/bin/python3
Feb 20 07:35:33 np0005625203.localdomain sudo[25906]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:35:33 np0005625203.localdomain python3[25908]: ansible-ansible.legacy.command Invoked with _raw_params=cp "/tmp/ansible.co9j0_s3tmphosts" "/etc/hosts" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:35:33 np0005625203.localdomain sudo[25906]: pam_unix(sudo:session): session closed for user root
Feb 20 07:35:33 np0005625203.localdomain sudo[25923]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-chnsojoxkversrbjdnwlfsivbtweevbq ; /usr/bin/python3
Feb 20 07:35:33 np0005625203.localdomain sudo[25923]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:35:34 np0005625203.localdomain python3[25925]: ansible-file Invoked with path=/tmp/ansible.co9j0_s3tmphosts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:35:34 np0005625203.localdomain sudo[25923]: pam_unix(sudo:session): session closed for user root
Feb 20 07:35:35 np0005625203.localdomain sshd[25926]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:35:36 np0005625203.localdomain sudo[25940]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uhdjogkohyzubgketcecnfyimlbylzta ; /usr/bin/python3
Feb 20 07:35:36 np0005625203.localdomain sudo[25940]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:35:36 np0005625203.localdomain python3[25942]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ntpd.service || systemctl is-enabled ntpd.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:35:36 np0005625203.localdomain sudo[25940]: pam_unix(sudo:session): session closed for user root
Feb 20 07:35:36 np0005625203.localdomain sudo[25958]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wyjboynoasbqaiiigwpsfdtnfwdzuddd ; /usr/bin/python3
Feb 20 07:35:36 np0005625203.localdomain sudo[25958]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:35:37 np0005625203.localdomain python3[25961]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb 20 07:35:38 np0005625203.localdomain sshd[25926]: Invalid user webuser from 185.246.128.171 port 13888
Feb 20 07:35:38 np0005625203.localdomain sshd[25963]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:35:39 np0005625203.localdomain sudo[25958]: pam_unix(sudo:session): session closed for user root
Feb 20 07:35:40 np0005625203.localdomain sshd[25963]: Invalid user n8n from 103.171.84.20 port 46566
Feb 20 07:35:40 np0005625203.localdomain sshd[25963]: Received disconnect from 103.171.84.20 port 46566:11: Bye Bye [preauth]
Feb 20 07:35:40 np0005625203.localdomain sshd[25963]: Disconnected from invalid user n8n 103.171.84.20 port 46566 [preauth]
Feb 20 07:35:40 np0005625203.localdomain sudo[26010]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-agdfgqfqafvhnhqcqwnjvhsswvmnulxe ; /usr/bin/python3
Feb 20 07:35:40 np0005625203.localdomain sudo[26010]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:35:41 np0005625203.localdomain python3[26012]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:35:41 np0005625203.localdomain sudo[26010]: pam_unix(sudo:session): session closed for user root
Feb 20 07:35:41 np0005625203.localdomain sudo[26055]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qgagwfxieobtzcwzjwmndrrsrjnrpdnn ; /usr/bin/python3
Feb 20 07:35:41 np0005625203.localdomain sudo[26055]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:35:41 np0005625203.localdomain sshd[25926]: Disconnecting invalid user webuser 185.246.128.171 port 13888: Change of username or service not allowed: (webuser,ssh-connection) -> (kubelet,ssh-connection) [preauth]
Feb 20 07:35:41 np0005625203.localdomain python3[26057]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771572940.6246736-55770-184768856632893/source dest=/etc/chrony.conf owner=root group=root mode=420 follow=False _original_basename=chrony.conf.j2 checksum=4fd4fbbb2de00c70a54478b7feb8ef8adf6a3362 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:35:41 np0005625203.localdomain sudo[26055]: pam_unix(sudo:session): session closed for user root
Feb 20 07:35:42 np0005625203.localdomain sudo[26085]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ditrhcrsgclioraiyipkgeosczdyafzn ; /usr/bin/python3
Feb 20 07:35:42 np0005625203.localdomain sudo[26085]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:35:42 np0005625203.localdomain python3[26087]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 07:35:42 np0005625203.localdomain sudo[26085]: pam_unix(sudo:session): session closed for user root
Feb 20 07:35:42 np0005625203.localdomain sshd[26090]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:35:43 np0005625203.localdomain sudo[26104]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-idufobhjqbydrglfctucrsptkgwsstms ; /usr/bin/python3
Feb 20 07:35:43 np0005625203.localdomain sudo[26104]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:35:43 np0005625203.localdomain python3[26106]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 20 07:35:43 np0005625203.localdomain chronyd[765]: chronyd exiting
Feb 20 07:35:43 np0005625203.localdomain systemd[1]: Stopping NTP client/server...
Feb 20 07:35:43 np0005625203.localdomain systemd[1]: chronyd.service: Deactivated successfully.
Feb 20 07:35:43 np0005625203.localdomain systemd[1]: Stopped NTP client/server.
Feb 20 07:35:43 np0005625203.localdomain systemd[1]: chronyd.service: Consumed 92ms CPU time, read 1.9M from disk, written 0B to disk.
Feb 20 07:35:43 np0005625203.localdomain systemd[1]: Starting NTP client/server...
Feb 20 07:35:43 np0005625203.localdomain chronyd[26113]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Feb 20 07:35:43 np0005625203.localdomain chronyd[26113]: Frequency -30.852 +/- 0.306 ppm read from /var/lib/chrony/drift
Feb 20 07:35:43 np0005625203.localdomain chronyd[26113]: Loaded seccomp filter (level 2)
Feb 20 07:35:43 np0005625203.localdomain systemd[1]: Started NTP client/server.
Feb 20 07:35:43 np0005625203.localdomain sudo[26104]: pam_unix(sudo:session): session closed for user root
Feb 20 07:35:44 np0005625203.localdomain sudo[26160]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zjnrwtcjxxqffelcgbnuxtqsharhqxlr ; /usr/bin/python3
Feb 20 07:35:44 np0005625203.localdomain sudo[26160]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:35:44 np0005625203.localdomain python3[26162]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/chrony-online.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:35:44 np0005625203.localdomain sudo[26160]: pam_unix(sudo:session): session closed for user root
Feb 20 07:35:44 np0005625203.localdomain sudo[26203]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ltjxtnnfyhswedawcguryhwalkiwtchs ; /usr/bin/python3
Feb 20 07:35:44 np0005625203.localdomain sudo[26203]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:35:44 np0005625203.localdomain python3[26205]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771572943.9533253-56002-139281984235640/source dest=/etc/systemd/system/chrony-online.service _original_basename=chrony-online.service follow=False checksum=d4d85e046d61f558ac7ec8178c6d529d893e81e1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:35:44 np0005625203.localdomain sudo[26203]: pam_unix(sudo:session): session closed for user root
Feb 20 07:35:44 np0005625203.localdomain sudo[26233]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cymypespybcdynoeejhdtlizwlfmmtom ; /usr/bin/python3
Feb 20 07:35:44 np0005625203.localdomain sudo[26233]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:35:45 np0005625203.localdomain python3[26235]: ansible-systemd Invoked with state=started name=chrony-online.service enabled=True daemon-reload=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 07:35:45 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 07:35:45 np0005625203.localdomain systemd-rc-local-generator[26259]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 07:35:45 np0005625203.localdomain systemd-sysv-generator[26262]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 07:35:45 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 07:35:45 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 07:35:45 np0005625203.localdomain systemd-sysv-generator[26304]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 07:35:45 np0005625203.localdomain systemd-rc-local-generator[26299]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 07:35:45 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 07:35:45 np0005625203.localdomain systemd[1]: Starting chronyd online sources service...
Feb 20 07:35:45 np0005625203.localdomain chronyc[26312]: 200 OK
Feb 20 07:35:45 np0005625203.localdomain systemd[1]: chrony-online.service: Deactivated successfully.
Feb 20 07:35:45 np0005625203.localdomain systemd[1]: Finished chronyd online sources service.
Feb 20 07:35:45 np0005625203.localdomain sudo[26233]: pam_unix(sudo:session): session closed for user root
Feb 20 07:35:46 np0005625203.localdomain sudo[26327]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ejokxiuqdvbafrictblljpiduqxjphyr ; /usr/bin/python3
Feb 20 07:35:46 np0005625203.localdomain sudo[26327]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:35:46 np0005625203.localdomain python3[26329]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc makestep _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:35:46 np0005625203.localdomain chronyd[26113]: System clock was stepped by 0.000000 seconds
Feb 20 07:35:46 np0005625203.localdomain sudo[26327]: pam_unix(sudo:session): session closed for user root
Feb 20 07:35:46 np0005625203.localdomain sshd[26090]: Invalid user kubelet from 185.246.128.171 port 48857
Feb 20 07:35:46 np0005625203.localdomain sudo[26344]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nydyieylykioibzrycgpegbirppcogmd ; /usr/bin/python3
Feb 20 07:35:46 np0005625203.localdomain sudo[26344]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:35:46 np0005625203.localdomain python3[26346]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc waitsync 30 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:35:48 np0005625203.localdomain chronyd[26113]: Selected source 216.232.132.102 (pool.ntp.org)
Feb 20 07:35:48 np0005625203.localdomain sshd[26090]: Disconnecting invalid user kubelet 185.246.128.171 port 48857: Change of username or service not allowed: (kubelet,ssh-connection) -> (jrodrig,ssh-connection) [preauth]
Feb 20 07:35:50 np0005625203.localdomain sshd[26348]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:35:54 np0005625203.localdomain sshd[26348]: Invalid user jrodrig from 185.246.128.171 port 22167
Feb 20 07:35:54 np0005625203.localdomain sshd[26348]: Disconnecting invalid user jrodrig 185.246.128.171 port 22167: Change of username or service not allowed: (jrodrig,ssh-connection) -> (alma,ssh-connection) [preauth]
Feb 20 07:35:56 np0005625203.localdomain sshd[26350]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:35:56 np0005625203.localdomain sudo[26344]: pam_unix(sudo:session): session closed for user root
Feb 20 07:35:57 np0005625203.localdomain sudo[26364]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wmkfpuyjwtbnyqoppaqffeeyctaltyun ; /usr/bin/python3
Feb 20 07:35:57 np0005625203.localdomain sudo[26364]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:35:57 np0005625203.localdomain python3[26366]: ansible-timezone Invoked with name=UTC hwclock=None
Feb 20 07:35:57 np0005625203.localdomain systemd[1]: Starting Time & Date Service...
Feb 20 07:35:57 np0005625203.localdomain systemd[1]: Started Time & Date Service.
Feb 20 07:35:57 np0005625203.localdomain sudo[26364]: pam_unix(sudo:session): session closed for user root
Feb 20 07:35:58 np0005625203.localdomain sudo[26384]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cmplldkvhpwkwpyjfulmgphlsujnzrey ; /usr/bin/python3
Feb 20 07:35:58 np0005625203.localdomain sudo[26384]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:35:58 np0005625203.localdomain python3[26386]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 20 07:35:59 np0005625203.localdomain sshd[26350]: Invalid user alma from 185.246.128.171 port 49649
Feb 20 07:35:59 np0005625203.localdomain chronyd[26113]: chronyd exiting
Feb 20 07:35:59 np0005625203.localdomain systemd[1]: Stopping NTP client/server...
Feb 20 07:35:59 np0005625203.localdomain systemd[1]: chronyd.service: Deactivated successfully.
Feb 20 07:35:59 np0005625203.localdomain systemd[1]: Stopped NTP client/server.
Feb 20 07:35:59 np0005625203.localdomain systemd[1]: Starting NTP client/server...
Feb 20 07:35:59 np0005625203.localdomain chronyd[26395]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Feb 20 07:35:59 np0005625203.localdomain chronyd[26395]: Frequency -30.852 +/- 0.306 ppm read from /var/lib/chrony/drift
Feb 20 07:35:59 np0005625203.localdomain chronyd[26395]: Loaded seccomp filter (level 2)
Feb 20 07:35:59 np0005625203.localdomain systemd[1]: Started NTP client/server.
Feb 20 07:35:59 np0005625203.localdomain sudo[26384]: pam_unix(sudo:session): session closed for user root
Feb 20 07:35:59 np0005625203.localdomain systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 20 07:35:59 np0005625203.localdomain sshd[26350]: Disconnecting invalid user alma 185.246.128.171 port 49649: Change of username or service not allowed: (alma,ssh-connection) -> (1234,ssh-connection) [preauth]
Feb 20 07:36:00 np0005625203.localdomain sshd[26399]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:36:01 np0005625203.localdomain sshd[26401]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:36:02 np0005625203.localdomain sshd[26401]: Invalid user miner from 92.118.39.72 port 37912
Feb 20 07:36:02 np0005625203.localdomain sshd[26401]: Connection closed by invalid user miner 92.118.39.72 port 37912 [preauth]
Feb 20 07:36:03 np0005625203.localdomain chronyd[26395]: Selected source 216.232.132.102 (pool.ntp.org)
Feb 20 07:36:03 np0005625203.localdomain sshd[26399]: Invalid user 1234 from 185.246.128.171 port 2851
Feb 20 07:36:04 np0005625203.localdomain sshd[26399]: Disconnecting invalid user 1234 185.246.128.171 port 2851: Change of username or service not allowed: (1234,ssh-connection) -> (newuser,ssh-connection) [preauth]
Feb 20 07:36:08 np0005625203.localdomain sshd[26403]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:36:10 np0005625203.localdomain sshd[26403]: Invalid user newuser from 185.246.128.171 port 42858
Feb 20 07:36:11 np0005625203.localdomain sshd[26403]: Disconnecting invalid user newuser 185.246.128.171 port 42858: Change of username or service not allowed: (newuser,ssh-connection) -> (x,ssh-connection) [preauth]
Feb 20 07:36:13 np0005625203.localdomain sshd[26405]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:36:15 np0005625203.localdomain sudo[26420]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ucnqztegwimdatkvxlzhunuskmsrunql ; /usr/bin/python3
Feb 20 07:36:15 np0005625203.localdomain sudo[26420]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:36:15 np0005625203.localdomain useradd[26424]: new group: name=ceph-admin, GID=1002
Feb 20 07:36:15 np0005625203.localdomain useradd[26424]: new user: name=ceph-admin, UID=1002, GID=1002, home=/home/ceph-admin, shell=/bin/bash, from=none
Feb 20 07:36:15 np0005625203.localdomain sudo[26420]: pam_unix(sudo:session): session closed for user root
Feb 20 07:36:15 np0005625203.localdomain sudo[26476]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-atbhgjcnpndyuyrmpybktlpznrbffddi ; /usr/bin/python3
Feb 20 07:36:15 np0005625203.localdomain sudo[26476]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:36:15 np0005625203.localdomain sudo[26476]: pam_unix(sudo:session): session closed for user root
Feb 20 07:36:15 np0005625203.localdomain sshd[26405]: Invalid user x from 185.246.128.171 port 64000
Feb 20 07:36:16 np0005625203.localdomain sudo[26519]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kpaarglolmbxdwjoasplyxutywtdmtan ; /usr/bin/python3
Feb 20 07:36:16 np0005625203.localdomain sudo[26519]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:36:16 np0005625203.localdomain sudo[26519]: pam_unix(sudo:session): session closed for user root
Feb 20 07:36:16 np0005625203.localdomain sshd[26405]: Disconnecting invalid user x 185.246.128.171 port 64000: Change of username or service not allowed: (x,ssh-connection) -> (es_user,ssh-connection) [preauth]
Feb 20 07:36:16 np0005625203.localdomain sudo[26549]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xaecaplsfbhbxedubazsbihzyeyhgwyc ; /usr/bin/python3
Feb 20 07:36:16 np0005625203.localdomain sudo[26549]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:36:16 np0005625203.localdomain sudo[26549]: pam_unix(sudo:session): session closed for user root
Feb 20 07:36:16 np0005625203.localdomain sudo[26565]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-obosswxszosmciafjlqqajzyanasbyfu ; /usr/bin/python3
Feb 20 07:36:16 np0005625203.localdomain sudo[26565]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:36:17 np0005625203.localdomain sudo[26565]: pam_unix(sudo:session): session closed for user root
Feb 20 07:36:17 np0005625203.localdomain sudo[26581]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-koytlzpbmqoprvdtgjrqscacbrvcaimd ; /usr/bin/python3
Feb 20 07:36:17 np0005625203.localdomain sudo[26581]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:36:17 np0005625203.localdomain sudo[26581]: pam_unix(sudo:session): session closed for user root
Feb 20 07:36:17 np0005625203.localdomain sudo[26597]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ocgvtudhteiilokgweheoymimimovfrz ; /usr/bin/python3
Feb 20 07:36:17 np0005625203.localdomain sudo[26597]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:36:18 np0005625203.localdomain sudo[26597]: pam_unix(sudo:session): session closed for user root
Feb 20 07:36:19 np0005625203.localdomain sshd[26600]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:36:22 np0005625203.localdomain sshd[26600]: Invalid user es_user from 185.246.128.171 port 31759
Feb 20 07:36:23 np0005625203.localdomain sshd[26600]: Disconnecting invalid user es_user 185.246.128.171 port 31759: Change of username or service not allowed: (es_user,ssh-connection) -> (steam,ssh-connection) [preauth]
Feb 20 07:36:24 np0005625203.localdomain sshd[26602]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:36:25 np0005625203.localdomain sshd[26604]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:36:27 np0005625203.localdomain sshd[26604]: Invalid user builduser from 123.204.132.127 port 56318
Feb 20 07:36:27 np0005625203.localdomain sshd[26604]: Received disconnect from 123.204.132.127 port 56318:11: Bye Bye [preauth]
Feb 20 07:36:27 np0005625203.localdomain sshd[26604]: Disconnected from invalid user builduser 123.204.132.127 port 56318 [preauth]
Feb 20 07:36:27 np0005625203.localdomain sshd[26602]: Invalid user steam from 185.246.128.171 port 52780
Feb 20 07:36:27 np0005625203.localdomain systemd[1]: systemd-timedated.service: Deactivated successfully.
Feb 20 07:36:28 np0005625203.localdomain sshd[26602]: Disconnecting invalid user steam 185.246.128.171 port 52780: Change of username or service not allowed: (steam,ssh-connection) -> (autcom,ssh-connection) [preauth]
Feb 20 07:36:29 np0005625203.localdomain sshd[26608]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:36:30 np0005625203.localdomain sshd[26608]: Invalid user autcom from 185.246.128.171 port 13570
Feb 20 07:36:31 np0005625203.localdomain sshd[26608]: Disconnecting invalid user autcom 185.246.128.171 port 13570: Change of username or service not allowed: (autcom,ssh-connection) -> (wiki,ssh-connection) [preauth]
Feb 20 07:36:32 np0005625203.localdomain sshd[26610]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:36:37 np0005625203.localdomain sshd[26610]: Invalid user wiki from 185.246.128.171 port 27906
Feb 20 07:36:40 np0005625203.localdomain sshd[26610]: Disconnecting invalid user wiki 185.246.128.171 port 27906: Change of username or service not allowed: (wiki,ssh-connection) -> (jane,ssh-connection) [preauth]
Feb 20 07:36:44 np0005625203.localdomain sshd[26612]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:36:47 np0005625203.localdomain sshd[26612]: Invalid user jane from 185.246.128.171 port 20802
Feb 20 07:36:48 np0005625203.localdomain sshd[26612]: Disconnecting invalid user jane 185.246.128.171 port 20802: Change of username or service not allowed: (jane,ssh-connection) -> (core,ssh-connection) [preauth]
Feb 20 07:36:50 np0005625203.localdomain sshd[26614]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:36:53 np0005625203.localdomain sshd[26614]: Invalid user core from 185.246.128.171 port 51158
Feb 20 07:36:53 np0005625203.localdomain sshd[26614]: Disconnecting invalid user core 185.246.128.171 port 51158: Change of username or service not allowed: (core,ssh-connection) -> (2,ssh-connection) [preauth]
Feb 20 07:36:56 np0005625203.localdomain sshd[26616]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:36:59 np0005625203.localdomain sshd[26616]: Invalid user 2 from 185.246.128.171 port 12442
Feb 20 07:36:59 np0005625203.localdomain sshd[26616]: Disconnecting invalid user 2 185.246.128.171 port 12442: Change of username or service not allowed: (2,ssh-connection) -> (usr,ssh-connection) [preauth]
Feb 20 07:37:02 np0005625203.localdomain sshd[26618]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:37:06 np0005625203.localdomain sshd[26618]: Invalid user usr from 185.246.128.171 port 44729
Feb 20 07:37:07 np0005625203.localdomain sshd[26620]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:37:07 np0005625203.localdomain sshd[26618]: Disconnecting invalid user usr 185.246.128.171 port 44729: Change of username or service not allowed: (usr,ssh-connection) -> (uploader,ssh-connection) [preauth]
Feb 20 07:37:07 np0005625203.localdomain sshd[26620]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 07:37:11 np0005625203.localdomain sshd[26622]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:37:15 np0005625203.localdomain sshd[26622]: Invalid user uploader from 185.246.128.171 port 24826
Feb 20 07:37:17 np0005625203.localdomain sshd[26622]: Disconnecting invalid user uploader 185.246.128.171 port 24826: Change of username or service not allowed: (uploader,ssh-connection) -> (engineer,ssh-connection) [preauth]
Feb 20 07:37:21 np0005625203.localdomain sshd[26624]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:37:23 np0005625203.localdomain sshd[26624]: Invalid user engineer from 185.246.128.171 port 7473
Feb 20 07:37:24 np0005625203.localdomain sshd[26624]: Disconnecting invalid user engineer 185.246.128.171 port 7473: Change of username or service not allowed: (engineer,ssh-connection) -> (hscroot,ssh-connection) [preauth]
Feb 20 07:37:27 np0005625203.localdomain sshd[26626]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:37:32 np0005625203.localdomain sshd[26626]: Invalid user hscroot from 185.246.128.171 port 40293
Feb 20 07:37:33 np0005625203.localdomain sshd[26626]: Disconnecting invalid user hscroot 185.246.128.171 port 40293: Change of username or service not allowed: (hscroot,ssh-connection) -> (test5,ssh-connection) [preauth]
Feb 20 07:37:34 np0005625203.localdomain sshd[26628]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:37:34 np0005625203.localdomain sshd[26628]: Received disconnect from 143.198.161.12 port 58322:11: Bye Bye [preauth]
Feb 20 07:37:34 np0005625203.localdomain sshd[26628]: Disconnected from authenticating user root 143.198.161.12 port 58322 [preauth]
Feb 20 07:37:35 np0005625203.localdomain sshd[26630]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:37:38 np0005625203.localdomain sshd[26630]: Invalid user test5 from 185.246.128.171 port 13373
Feb 20 07:37:41 np0005625203.localdomain sshd[26630]: Disconnecting invalid user test5 185.246.128.171 port 13373: Change of username or service not allowed: (test5,ssh-connection) -> (frank,ssh-connection) [preauth]
Feb 20 07:37:44 np0005625203.localdomain sshd[26632]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:37:45 np0005625203.localdomain sshd[26632]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 07:37:45 np0005625203.localdomain sshd[26634]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:37:49 np0005625203.localdomain sshd[26634]: Invalid user frank from 185.246.128.171 port 59977
Feb 20 07:37:50 np0005625203.localdomain sshd[26634]: Disconnecting invalid user frank 185.246.128.171 port 59977: Change of username or service not allowed: (frank,ssh-connection) -> (lab,ssh-connection) [preauth]
Feb 20 07:37:54 np0005625203.localdomain sshd[26636]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:37:57 np0005625203.localdomain sshd[26636]: Invalid user lab from 185.246.128.171 port 40668
Feb 20 07:37:58 np0005625203.localdomain sshd[26636]: Disconnecting invalid user lab 185.246.128.171 port 40668: Change of username or service not allowed: (lab,ssh-connection) -> (daniel,ssh-connection) [preauth]
Feb 20 07:37:59 np0005625203.localdomain sshd[26638]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:38:01 np0005625203.localdomain sshd[26640]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:38:01 np0005625203.localdomain sshd[26640]: Accepted publickey for ceph-admin from 192.168.122.103 port 56538 ssh2: RSA SHA256:ReOVVWzTOjcz49zJan4sHrxgFDyL1hzGQqhA1t8e47Y
Feb 20 07:38:01 np0005625203.localdomain systemd[1]: Created slice User Slice of UID 1002.
Feb 20 07:38:01 np0005625203.localdomain systemd[1]: Starting User Runtime Directory /run/user/1002...
Feb 20 07:38:01 np0005625203.localdomain systemd-logind[759]: New session 14 of user ceph-admin.
Feb 20 07:38:01 np0005625203.localdomain systemd[1]: Finished User Runtime Directory /run/user/1002.
Feb 20 07:38:01 np0005625203.localdomain systemd[1]: Starting User Manager for UID 1002...
Feb 20 07:38:01 np0005625203.localdomain systemd[26644]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Feb 20 07:38:01 np0005625203.localdomain sshd[26657]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:38:01 np0005625203.localdomain systemd[26644]: Queued start job for default target Main User Target.
Feb 20 07:38:01 np0005625203.localdomain systemd[26644]: Created slice User Application Slice.
Feb 20 07:38:01 np0005625203.localdomain systemd[26644]: Started Mark boot as successful after the user session has run 2 minutes.
Feb 20 07:38:01 np0005625203.localdomain systemd[26644]: Started Daily Cleanup of User's Temporary Directories.
Feb 20 07:38:01 np0005625203.localdomain systemd[26644]: Reached target Paths.
Feb 20 07:38:01 np0005625203.localdomain systemd[26644]: Reached target Timers.
Feb 20 07:38:01 np0005625203.localdomain systemd[26644]: Starting D-Bus User Message Bus Socket...
Feb 20 07:38:01 np0005625203.localdomain systemd[26644]: Starting Create User's Volatile Files and Directories...
Feb 20 07:38:01 np0005625203.localdomain systemd[26644]: Finished Create User's Volatile Files and Directories.
Feb 20 07:38:01 np0005625203.localdomain systemd[26644]: Listening on D-Bus User Message Bus Socket.
Feb 20 07:38:01 np0005625203.localdomain systemd[26644]: Reached target Sockets.
Feb 20 07:38:01 np0005625203.localdomain systemd[26644]: Reached target Basic System.
Feb 20 07:38:01 np0005625203.localdomain systemd[26644]: Reached target Main User Target.
Feb 20 07:38:01 np0005625203.localdomain systemd[26644]: Startup finished in 119ms.
Feb 20 07:38:01 np0005625203.localdomain systemd[1]: Started User Manager for UID 1002.
Feb 20 07:38:01 np0005625203.localdomain systemd[1]: Started Session 14 of User ceph-admin.
Feb 20 07:38:01 np0005625203.localdomain sshd[26640]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Feb 20 07:38:01 np0005625203.localdomain sshd[26657]: Accepted publickey for ceph-admin from 192.168.122.103 port 56552 ssh2: RSA SHA256:ReOVVWzTOjcz49zJan4sHrxgFDyL1hzGQqhA1t8e47Y
Feb 20 07:38:01 np0005625203.localdomain systemd-logind[759]: New session 16 of user ceph-admin.
Feb 20 07:38:01 np0005625203.localdomain systemd[1]: Started Session 16 of User ceph-admin.
Feb 20 07:38:01 np0005625203.localdomain sshd[26657]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Feb 20 07:38:01 np0005625203.localdomain sudo[26664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 07:38:01 np0005625203.localdomain sudo[26664]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:38:01 np0005625203.localdomain sudo[26664]: pam_unix(sudo:session): session closed for user root
Feb 20 07:38:01 np0005625203.localdomain sshd[26679]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:38:02 np0005625203.localdomain anacron[19053]: Job `cron.daily' started
Feb 20 07:38:02 np0005625203.localdomain anacron[19053]: Job `cron.daily' terminated
Feb 20 07:38:02 np0005625203.localdomain sshd[26679]: Accepted publickey for ceph-admin from 192.168.122.103 port 56560 ssh2: RSA SHA256:ReOVVWzTOjcz49zJan4sHrxgFDyL1hzGQqhA1t8e47Y
Feb 20 07:38:02 np0005625203.localdomain systemd-logind[759]: New session 17 of user ceph-admin.
Feb 20 07:38:02 np0005625203.localdomain systemd[1]: Started Session 17 of User ceph-admin.
Feb 20 07:38:02 np0005625203.localdomain sshd[26679]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Feb 20 07:38:02 np0005625203.localdomain sudo[26685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 check-host --expect-hostname np0005625203.localdomain
Feb 20 07:38:02 np0005625203.localdomain sudo[26685]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:38:02 np0005625203.localdomain sudo[26685]: pam_unix(sudo:session): session closed for user root
Feb 20 07:38:02 np0005625203.localdomain sshd[26638]: Invalid user daniel from 185.246.128.171 port 2848
Feb 20 07:38:02 np0005625203.localdomain sshd[26700]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:38:02 np0005625203.localdomain sshd[26700]: Accepted publickey for ceph-admin from 192.168.122.103 port 56574 ssh2: RSA SHA256:ReOVVWzTOjcz49zJan4sHrxgFDyL1hzGQqhA1t8e47Y
Feb 20 07:38:02 np0005625203.localdomain systemd-logind[759]: New session 18 of user ceph-admin.
Feb 20 07:38:02 np0005625203.localdomain systemd[1]: Started Session 18 of User ceph-admin.
Feb 20 07:38:02 np0005625203.localdomain sshd[26700]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Feb 20 07:38:02 np0005625203.localdomain sudo[26704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f
Feb 20 07:38:02 np0005625203.localdomain sudo[26704]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:38:02 np0005625203.localdomain sudo[26704]: pam_unix(sudo:session): session closed for user root
Feb 20 07:38:02 np0005625203.localdomain sshd[26719]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:38:02 np0005625203.localdomain sshd[26719]: Accepted publickey for ceph-admin from 192.168.122.103 port 56576 ssh2: RSA SHA256:ReOVVWzTOjcz49zJan4sHrxgFDyL1hzGQqhA1t8e47Y
Feb 20 07:38:02 np0005625203.localdomain systemd-logind[759]: New session 19 of user ceph-admin.
Feb 20 07:38:02 np0005625203.localdomain systemd[1]: Started Session 19 of User ceph-admin.
Feb 20 07:38:02 np0005625203.localdomain sshd[26719]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Feb 20 07:38:02 np0005625203.localdomain sudo[26723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 07:38:02 np0005625203.localdomain sudo[26723]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:38:02 np0005625203.localdomain sudo[26723]: pam_unix(sudo:session): session closed for user root
Feb 20 07:38:03 np0005625203.localdomain sshd[26738]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:38:03 np0005625203.localdomain sshd[26738]: Accepted publickey for ceph-admin from 192.168.122.103 port 56578 ssh2: RSA SHA256:ReOVVWzTOjcz49zJan4sHrxgFDyL1hzGQqhA1t8e47Y
Feb 20 07:38:03 np0005625203.localdomain systemd-logind[759]: New session 20 of user ceph-admin.
Feb 20 07:38:03 np0005625203.localdomain systemd[1]: Started Session 20 of User ceph-admin.
Feb 20 07:38:03 np0005625203.localdomain sshd[26738]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Feb 20 07:38:03 np0005625203.localdomain sudo[26742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 07:38:03 np0005625203.localdomain sudo[26742]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:38:03 np0005625203.localdomain sudo[26742]: pam_unix(sudo:session): session closed for user root
Feb 20 07:38:03 np0005625203.localdomain sshd[26757]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:38:03 np0005625203.localdomain sshd[26638]: Disconnecting invalid user daniel 185.246.128.171 port 2848: Change of username or service not allowed: (daniel,ssh-connection) -> (doge,ssh-connection) [preauth]
Feb 20 07:38:03 np0005625203.localdomain sshd[26757]: Accepted publickey for ceph-admin from 192.168.122.103 port 56584 ssh2: RSA SHA256:ReOVVWzTOjcz49zJan4sHrxgFDyL1hzGQqhA1t8e47Y
Feb 20 07:38:03 np0005625203.localdomain systemd-logind[759]: New session 21 of user ceph-admin.
Feb 20 07:38:03 np0005625203.localdomain systemd[1]: Started Session 21 of User ceph-admin.
Feb 20 07:38:03 np0005625203.localdomain sshd[26757]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Feb 20 07:38:03 np0005625203.localdomain sudo[26761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f.new
Feb 20 07:38:03 np0005625203.localdomain sudo[26761]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:38:03 np0005625203.localdomain sudo[26761]: pam_unix(sudo:session): session closed for user root
Feb 20 07:38:03 np0005625203.localdomain sshd[26776]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:38:03 np0005625203.localdomain sshd[26776]: Accepted publickey for ceph-admin from 192.168.122.103 port 56592 ssh2: RSA SHA256:ReOVVWzTOjcz49zJan4sHrxgFDyL1hzGQqhA1t8e47Y
Feb 20 07:38:03 np0005625203.localdomain systemd-logind[759]: New session 22 of user ceph-admin.
Feb 20 07:38:03 np0005625203.localdomain systemd[1]: Started Session 22 of User ceph-admin.
Feb 20 07:38:04 np0005625203.localdomain sshd[26776]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Feb 20 07:38:04 np0005625203.localdomain sudo[26780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 07:38:04 np0005625203.localdomain sudo[26780]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:38:04 np0005625203.localdomain sudo[26780]: pam_unix(sudo:session): session closed for user root
Feb 20 07:38:04 np0005625203.localdomain sshd[26795]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:38:04 np0005625203.localdomain sshd[26795]: Accepted publickey for ceph-admin from 192.168.122.103 port 56602 ssh2: RSA SHA256:ReOVVWzTOjcz49zJan4sHrxgFDyL1hzGQqhA1t8e47Y
Feb 20 07:38:04 np0005625203.localdomain systemd-logind[759]: New session 23 of user ceph-admin.
Feb 20 07:38:04 np0005625203.localdomain systemd[1]: Started Session 23 of User ceph-admin.
Feb 20 07:38:04 np0005625203.localdomain sshd[26795]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Feb 20 07:38:04 np0005625203.localdomain sudo[26799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f.new
Feb 20 07:38:04 np0005625203.localdomain sudo[26799]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:38:04 np0005625203.localdomain sudo[26799]: pam_unix(sudo:session): session closed for user root
Feb 20 07:38:04 np0005625203.localdomain sshd[26814]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:38:04 np0005625203.localdomain sshd[26814]: Accepted publickey for ceph-admin from 192.168.122.103 port 56608 ssh2: RSA SHA256:ReOVVWzTOjcz49zJan4sHrxgFDyL1hzGQqhA1t8e47Y
Feb 20 07:38:04 np0005625203.localdomain systemd-logind[759]: New session 24 of user ceph-admin.
Feb 20 07:38:04 np0005625203.localdomain systemd[1]: Started Session 24 of User ceph-admin.
Feb 20 07:38:04 np0005625203.localdomain sshd[26814]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Feb 20 07:38:05 np0005625203.localdomain sshd[26831]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:38:05 np0005625203.localdomain sshd[26831]: Accepted publickey for ceph-admin from 192.168.122.103 port 56620 ssh2: RSA SHA256:ReOVVWzTOjcz49zJan4sHrxgFDyL1hzGQqhA1t8e47Y
Feb 20 07:38:05 np0005625203.localdomain systemd-logind[759]: New session 25 of user ceph-admin.
Feb 20 07:38:05 np0005625203.localdomain systemd[1]: Started Session 25 of User ceph-admin.
Feb 20 07:38:05 np0005625203.localdomain sshd[26831]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Feb 20 07:38:05 np0005625203.localdomain sudo[26835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f.new /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f
Feb 20 07:38:05 np0005625203.localdomain sudo[26835]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:38:05 np0005625203.localdomain sudo[26835]: pam_unix(sudo:session): session closed for user root
Feb 20 07:38:05 np0005625203.localdomain sshd[26850]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:38:05 np0005625203.localdomain sshd[26850]: Accepted publickey for ceph-admin from 192.168.122.103 port 56636 ssh2: RSA SHA256:ReOVVWzTOjcz49zJan4sHrxgFDyL1hzGQqhA1t8e47Y
Feb 20 07:38:05 np0005625203.localdomain systemd-logind[759]: New session 26 of user ceph-admin.
Feb 20 07:38:05 np0005625203.localdomain systemd[1]: Started Session 26 of User ceph-admin.
Feb 20 07:38:05 np0005625203.localdomain sshd[26850]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Feb 20 07:38:05 np0005625203.localdomain sudo[26854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 check-host --expect-hostname np0005625203.localdomain
Feb 20 07:38:05 np0005625203.localdomain sudo[26854]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:38:06 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 20 07:38:06 np0005625203.localdomain sudo[26854]: pam_unix(sudo:session): session closed for user root
Feb 20 07:38:07 np0005625203.localdomain sshd[26890]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:38:07 np0005625203.localdomain sshd[26892]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:38:09 np0005625203.localdomain sshd[26890]: Invalid user user from 92.118.39.72 port 50480
Feb 20 07:38:09 np0005625203.localdomain sshd[26890]: Connection closed by invalid user user 92.118.39.72 port 50480 [preauth]
Feb 20 07:38:11 np0005625203.localdomain sshd[26892]: Invalid user doge from 185.246.128.171 port 43948
Feb 20 07:38:11 np0005625203.localdomain sshd[26892]: Disconnecting invalid user doge 185.246.128.171 port 43948: Change of username or service not allowed: (doge,ssh-connection) -> (gitlab,ssh-connection) [preauth]
Feb 20 07:38:15 np0005625203.localdomain sshd[26894]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:38:18 np0005625203.localdomain sshd[26894]: Invalid user gitlab from 185.246.128.171 port 18222
Feb 20 07:38:19 np0005625203.localdomain sshd[26894]: Disconnecting invalid user gitlab 185.246.128.171 port 18222: Change of username or service not allowed: (gitlab,ssh-connection) -> (odoo17,ssh-connection) [preauth]
Feb 20 07:38:22 np0005625203.localdomain sshd[26896]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:38:24 np0005625203.localdomain sudo[26898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 07:38:24 np0005625203.localdomain sudo[26898]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:38:24 np0005625203.localdomain sudo[26898]: pam_unix(sudo:session): session closed for user root
Feb 20 07:38:24 np0005625203.localdomain sudo[26913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 07:38:24 np0005625203.localdomain sudo[26913]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:38:24 np0005625203.localdomain sudo[26913]: pam_unix(sudo:session): session closed for user root
Feb 20 07:38:24 np0005625203.localdomain sudo[26928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 check-host
Feb 20 07:38:24 np0005625203.localdomain sudo[26928]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:38:25 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 20 07:38:25 np0005625203.localdomain sudo[26928]: pam_unix(sudo:session): session closed for user root
Feb 20 07:38:25 np0005625203.localdomain sudo[26963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 07:38:25 np0005625203.localdomain sudo[26963]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:38:25 np0005625203.localdomain sudo[26963]: pam_unix(sudo:session): session closed for user root
Feb 20 07:38:25 np0005625203.localdomain sudo[26978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Feb 20 07:38:25 np0005625203.localdomain sudo[26978]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:38:25 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 20 07:38:25 np0005625203.localdomain sudo[26978]: pam_unix(sudo:session): session closed for user root
Feb 20 07:38:26 np0005625203.localdomain sudo[27030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 07:38:26 np0005625203.localdomain sudo[27030]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:38:26 np0005625203.localdomain sudo[27030]: pam_unix(sudo:session): session closed for user root
Feb 20 07:38:26 np0005625203.localdomain sshd[26896]: Invalid user odoo17 from 185.246.128.171 port 51093
Feb 20 07:38:26 np0005625203.localdomain sudo[27045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 07:38:26 np0005625203.localdomain sudo[27045]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:38:26 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 20 07:38:26 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 20 07:38:26 np0005625203.localdomain systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 27072 (sysctl)
Feb 20 07:38:26 np0005625203.localdomain systemd[1]: Mounting Arbitrary Executable File Formats File System...
Feb 20 07:38:26 np0005625203.localdomain systemd[1]: Mounted Arbitrary Executable File Formats File System.
Feb 20 07:38:26 np0005625203.localdomain sudo[27045]: pam_unix(sudo:session): session closed for user root
Feb 20 07:38:26 np0005625203.localdomain sudo[27094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 07:38:26 np0005625203.localdomain sudo[27094]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:38:26 np0005625203.localdomain sudo[27094]: pam_unix(sudo:session): session closed for user root
Feb 20 07:38:26 np0005625203.localdomain sudo[27109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 list-networks
Feb 20 07:38:26 np0005625203.localdomain sudo[27109]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:38:27 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 20 07:38:27 np0005625203.localdomain sudo[27109]: pam_unix(sudo:session): session closed for user root
Feb 20 07:38:27 np0005625203.localdomain sudo[27142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 07:38:27 np0005625203.localdomain sudo[27142]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:38:27 np0005625203.localdomain sudo[27142]: pam_unix(sudo:session): session closed for user root
Feb 20 07:38:27 np0005625203.localdomain sshd[26896]: Disconnecting invalid user odoo17 185.246.128.171 port 51093: Change of username or service not allowed: (odoo17,ssh-connection) -> (www,ssh-connection) [preauth]
Feb 20 07:38:27 np0005625203.localdomain sudo[27157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ceph-volume --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8 -- inventory --format=json-pretty --filter-for-batch
Feb 20 07:38:27 np0005625203.localdomain sudo[27157]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:38:27 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 20 07:38:30 np0005625203.localdomain sshd[27287]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:38:31 np0005625203.localdomain kernel: VFS: idmapped mount is not enabled.
Feb 20 07:38:32 np0005625203.localdomain sshd[27287]: Invalid user www from 185.246.128.171 port 26815
Feb 20 07:38:33 np0005625203.localdomain sshd[27287]: Disconnecting invalid user www 185.246.128.171 port 26815: Change of username or service not allowed: (www,ssh-connection) -> (user6,ssh-connection) [preauth]
Feb 20 07:38:35 np0005625203.localdomain sshd[27326]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:38:36 np0005625203.localdomain sshd[27328]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:38:36 np0005625203.localdomain sshd[27328]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 07:38:39 np0005625203.localdomain sshd[27326]: Invalid user user6 from 185.246.128.171 port 56068
Feb 20 07:38:39 np0005625203.localdomain sshd[27326]: Disconnecting invalid user user6 185.246.128.171 port 56068: Change of username or service not allowed: (user6,ssh-connection) -> (mary,ssh-connection) [preauth]
Feb 20 07:38:40 np0005625203.localdomain sshd[27330]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:38:41 np0005625203.localdomain sshd[27332]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:38:41 np0005625203.localdomain sshd[27332]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 07:38:43 np0005625203.localdomain sshd[27330]: Invalid user mary from 185.246.128.171 port 16055
Feb 20 07:38:43 np0005625203.localdomain sshd[27330]: Disconnecting invalid user mary 185.246.128.171 port 16055: Change of username or service not allowed: (mary,ssh-connection) -> (victor,ssh-connection) [preauth]
Feb 20 07:38:44 np0005625203.localdomain sshd[27346]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:38:48 np0005625203.localdomain sshd[27346]: Invalid user victor from 185.246.128.171 port 35229
Feb 20 07:38:49 np0005625203.localdomain sshd[27407]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:38:49 np0005625203.localdomain sshd[27346]: Disconnecting invalid user victor 185.246.128.171 port 35229: Change of username or service not allowed: (victor,ssh-connection) -> (prueba,ssh-connection) [preauth]
Feb 20 07:38:50 np0005625203.localdomain sshd[27407]: Invalid user ubuntu from 40.81.244.142 port 32990
Feb 20 07:38:50 np0005625203.localdomain sshd[27407]: Received disconnect from 40.81.244.142 port 32990:11: Bye Bye [preauth]
Feb 20 07:38:50 np0005625203.localdomain sshd[27407]: Disconnected from invalid user ubuntu 40.81.244.142 port 32990 [preauth]
Feb 20 07:38:51 np0005625203.localdomain sshd[27409]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:38:51 np0005625203.localdomain sshd[27410]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:38:51 np0005625203.localdomain podman[27210]: 
Feb 20 07:38:51 np0005625203.localdomain podman[27210]: 2026-02-20 07:38:51.878138126 +0000 UTC m=+23.977360443 container create 4a970f89cd32bbbe8d28bc3d62ec4f209d35999ed414f1273632e3c2c68264dd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_blackwell, vcs-type=git, GIT_BRANCH=main, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., name=rhceph, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.buildah.version=1.42.2, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, distribution-scope=public, org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, architecture=x86_64, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=)
Feb 20 07:38:51 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-volatile\x2dcheck3263102200-merged.mount: Deactivated successfully.
Feb 20 07:38:51 np0005625203.localdomain systemd[1]: Created slice Slice /machine.
Feb 20 07:38:51 np0005625203.localdomain systemd[1]: Started libpod-conmon-4a970f89cd32bbbe8d28bc3d62ec4f209d35999ed414f1273632e3c2c68264dd.scope.
Feb 20 07:38:51 np0005625203.localdomain podman[27210]: 2026-02-20 07:38:27.94057524 +0000 UTC m=+0.039797577 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 07:38:51 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 07:38:52 np0005625203.localdomain podman[27210]: 2026-02-20 07:38:52.010269946 +0000 UTC m=+24.109492283 container init 4a970f89cd32bbbe8d28bc3d62ec4f209d35999ed414f1273632e3c2c68264dd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_blackwell, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.42.2, release=1770267347, GIT_BRANCH=main, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., GIT_CLEAN=True, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, architecture=x86_64, RELEASE=main)
Feb 20 07:38:52 np0005625203.localdomain podman[27210]: 2026-02-20 07:38:52.021050026 +0000 UTC m=+24.120272373 container start 4a970f89cd32bbbe8d28bc3d62ec4f209d35999ed414f1273632e3c2c68264dd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_blackwell, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.42.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, RELEASE=main, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, ceph=True, build-date=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Feb 20 07:38:52 np0005625203.localdomain podman[27210]: 2026-02-20 07:38:52.021307203 +0000 UTC m=+24.120529540 container attach 4a970f89cd32bbbe8d28bc3d62ec4f209d35999ed414f1273632e3c2c68264dd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_blackwell, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-type=git, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, architecture=x86_64, io.buildah.version=1.42.2, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, distribution-scope=public, GIT_CLEAN=True, description=Red Hat Ceph Storage 7)
Feb 20 07:38:52 np0005625203.localdomain trusting_blackwell[27414]: 167 167
Feb 20 07:38:52 np0005625203.localdomain systemd[1]: libpod-4a970f89cd32bbbe8d28bc3d62ec4f209d35999ed414f1273632e3c2c68264dd.scope: Deactivated successfully.
Feb 20 07:38:52 np0005625203.localdomain podman[27210]: 2026-02-20 07:38:52.02513142 +0000 UTC m=+24.124353757 container died 4a970f89cd32bbbe8d28bc3d62ec4f209d35999ed414f1273632e3c2c68264dd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_blackwell, build-date=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, release=1770267347, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, io.buildah.version=1.42.2, vcs-type=git, version=7, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, org.opencontainers.image.created=2026-02-09T10:25:24Z)
Feb 20 07:38:52 np0005625203.localdomain sshd[27410]: Invalid user cod4server from 187.87.206.21 port 44840
Feb 20 07:38:52 np0005625203.localdomain podman[27419]: 2026-02-20 07:38:52.113964086 +0000 UTC m=+0.081228864 container remove 4a970f89cd32bbbe8d28bc3d62ec4f209d35999ed414f1273632e3c2c68264dd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=trusting_blackwell, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2026-02-09T10:25:24Z, release=1770267347, GIT_BRANCH=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, org.opencontainers.image.created=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, name=rhceph, description=Red Hat Ceph Storage 7, RELEASE=main, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, vcs-type=git, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64)
Feb 20 07:38:52 np0005625203.localdomain systemd[1]: libpod-conmon-4a970f89cd32bbbe8d28bc3d62ec4f209d35999ed414f1273632e3c2c68264dd.scope: Deactivated successfully.
Feb 20 07:38:52 np0005625203.localdomain sshd[27410]: Received disconnect from 187.87.206.21 port 44840:11: Bye Bye [preauth]
Feb 20 07:38:52 np0005625203.localdomain sshd[27410]: Disconnected from invalid user cod4server 187.87.206.21 port 44840 [preauth]
Feb 20 07:38:52 np0005625203.localdomain podman[27441]: 
Feb 20 07:38:52 np0005625203.localdomain podman[27441]: 2026-02-20 07:38:52.34928093 +0000 UTC m=+0.085688310 container create dc6dcc1b20f32a92b5389ff4486862b2aabb483f73d8f3bcd833c484f1daa51c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_gates, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, GIT_CLEAN=True, GIT_BRANCH=main, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, vcs-type=git, vendor=Red Hat, Inc., RELEASE=main, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, architecture=x86_64, release=1770267347, com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Feb 20 07:38:52 np0005625203.localdomain systemd[1]: Started libpod-conmon-dc6dcc1b20f32a92b5389ff4486862b2aabb483f73d8f3bcd833c484f1daa51c.scope.
Feb 20 07:38:52 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 07:38:52 np0005625203.localdomain podman[27441]: 2026-02-20 07:38:52.310255097 +0000 UTC m=+0.046662477 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 07:38:52 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ea0ff8bc97592d4e4e5fedd7b5e4669d26f9d4bc7f78f26fe9036fea966fd50/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 20 07:38:52 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ea0ff8bc97592d4e4e5fedd7b5e4669d26f9d4bc7f78f26fe9036fea966fd50/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 20 07:38:52 np0005625203.localdomain podman[27441]: 2026-02-20 07:38:52.436499007 +0000 UTC m=+0.172906367 container init dc6dcc1b20f32a92b5389ff4486862b2aabb483f73d8f3bcd833c484f1daa51c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_gates, build-date=2026-02-09T10:25:24Z, GIT_BRANCH=main, vendor=Red Hat, Inc., name=rhceph, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, release=1770267347, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.42.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64)
Feb 20 07:38:52 np0005625203.localdomain podman[27441]: 2026-02-20 07:38:52.445846023 +0000 UTC m=+0.182253383 container start dc6dcc1b20f32a92b5389ff4486862b2aabb483f73d8f3bcd833c484f1daa51c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_gates, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1770267347, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.created=2026-02-09T10:25:24Z, build-date=2026-02-09T10:25:24Z, GIT_BRANCH=main, name=rhceph, ceph=True, architecture=x86_64, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, io.openshift.tags=rhceph ceph, vcs-type=git, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 07:38:52 np0005625203.localdomain podman[27441]: 2026-02-20 07:38:52.446230625 +0000 UTC m=+0.182637995 container attach dc6dcc1b20f32a92b5389ff4486862b2aabb483f73d8f3bcd833c484f1daa51c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_gates, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, RELEASE=main, io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, version=7, release=1770267347, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., vcs-type=git, CEPH_POINT_RELEASE=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, name=rhceph, build-date=2026-02-09T10:25:24Z)
Feb 20 07:38:52 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-79c6a96402371865cb92f3a74c5f329495fa422c345e82289795301b65285f66-merged.mount: Deactivated successfully.
Feb 20 07:38:53 np0005625203.localdomain magical_gates[27456]: [
Feb 20 07:38:53 np0005625203.localdomain magical_gates[27456]:     {
Feb 20 07:38:53 np0005625203.localdomain magical_gates[27456]:         "available": false,
Feb 20 07:38:53 np0005625203.localdomain magical_gates[27456]:         "ceph_device": false,
Feb 20 07:38:53 np0005625203.localdomain magical_gates[27456]:         "device_id": "QEMU_DVD-ROM_QM00001",
Feb 20 07:38:53 np0005625203.localdomain magical_gates[27456]:         "lsm_data": {},
Feb 20 07:38:53 np0005625203.localdomain magical_gates[27456]:         "lvs": [],
Feb 20 07:38:53 np0005625203.localdomain magical_gates[27456]:         "path": "/dev/sr0",
Feb 20 07:38:53 np0005625203.localdomain magical_gates[27456]:         "rejected_reasons": [
Feb 20 07:38:53 np0005625203.localdomain magical_gates[27456]:             "Has a FileSystem",
Feb 20 07:38:53 np0005625203.localdomain magical_gates[27456]:             "Insufficient space (<5GB)"
Feb 20 07:38:53 np0005625203.localdomain magical_gates[27456]:         ],
Feb 20 07:38:53 np0005625203.localdomain magical_gates[27456]:         "sys_api": {
Feb 20 07:38:53 np0005625203.localdomain magical_gates[27456]:             "actuators": null,
Feb 20 07:38:53 np0005625203.localdomain magical_gates[27456]:             "device_nodes": "sr0",
Feb 20 07:38:53 np0005625203.localdomain magical_gates[27456]:             "human_readable_size": "482.00 KB",
Feb 20 07:38:53 np0005625203.localdomain magical_gates[27456]:             "id_bus": "ata",
Feb 20 07:38:53 np0005625203.localdomain magical_gates[27456]:             "model": "QEMU DVD-ROM",
Feb 20 07:38:53 np0005625203.localdomain magical_gates[27456]:             "nr_requests": "2",
Feb 20 07:38:53 np0005625203.localdomain magical_gates[27456]:             "partitions": {},
Feb 20 07:38:53 np0005625203.localdomain magical_gates[27456]:             "path": "/dev/sr0",
Feb 20 07:38:53 np0005625203.localdomain magical_gates[27456]:             "removable": "1",
Feb 20 07:38:53 np0005625203.localdomain magical_gates[27456]:             "rev": "2.5+",
Feb 20 07:38:53 np0005625203.localdomain magical_gates[27456]:             "ro": "0",
Feb 20 07:38:53 np0005625203.localdomain magical_gates[27456]:             "rotational": "1",
Feb 20 07:38:53 np0005625203.localdomain magical_gates[27456]:             "sas_address": "",
Feb 20 07:38:53 np0005625203.localdomain magical_gates[27456]:             "sas_device_handle": "",
Feb 20 07:38:53 np0005625203.localdomain magical_gates[27456]:             "scheduler_mode": "mq-deadline",
Feb 20 07:38:53 np0005625203.localdomain magical_gates[27456]:             "sectors": 0,
Feb 20 07:38:53 np0005625203.localdomain magical_gates[27456]:             "sectorsize": "2048",
Feb 20 07:38:53 np0005625203.localdomain magical_gates[27456]:             "size": 493568.0,
Feb 20 07:38:53 np0005625203.localdomain magical_gates[27456]:             "support_discard": "0",
Feb 20 07:38:53 np0005625203.localdomain magical_gates[27456]:             "type": "disk",
Feb 20 07:38:53 np0005625203.localdomain magical_gates[27456]:             "vendor": "QEMU"
Feb 20 07:38:53 np0005625203.localdomain magical_gates[27456]:         }
Feb 20 07:38:53 np0005625203.localdomain magical_gates[27456]:     }
Feb 20 07:38:53 np0005625203.localdomain magical_gates[27456]: ]
Feb 20 07:38:53 np0005625203.localdomain systemd[1]: libpod-dc6dcc1b20f32a92b5389ff4486862b2aabb483f73d8f3bcd833c484f1daa51c.scope: Deactivated successfully.
Feb 20 07:38:53 np0005625203.localdomain podman[27441]: 2026-02-20 07:38:53.254744373 +0000 UTC m=+0.991151733 container died dc6dcc1b20f32a92b5389ff4486862b2aabb483f73d8f3bcd833c484f1daa51c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_gates, io.buildah.version=1.42.2, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, release=1770267347, ceph=True, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, name=rhceph, GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, version=7, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Feb 20 07:38:53 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-7ea0ff8bc97592d4e4e5fedd7b5e4669d26f9d4bc7f78f26fe9036fea966fd50-merged.mount: Deactivated successfully.
Feb 20 07:38:53 np0005625203.localdomain podman[28585]: 2026-02-20 07:38:53.353124001 +0000 UTC m=+0.088047373 container remove dc6dcc1b20f32a92b5389ff4486862b2aabb483f73d8f3bcd833c484f1daa51c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_gates, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., GIT_BRANCH=main, GIT_CLEAN=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, com.redhat.component=rhceph-container, name=rhceph, architecture=x86_64, CEPH_POINT_RELEASE=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, release=1770267347, org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, build-date=2026-02-09T10:25:24Z, vcs-type=git)
Feb 20 07:38:53 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 20 07:38:53 np0005625203.localdomain systemd[1]: libpod-conmon-dc6dcc1b20f32a92b5389ff4486862b2aabb483f73d8f3bcd833c484f1daa51c.scope: Deactivated successfully.
Feb 20 07:38:53 np0005625203.localdomain sudo[27157]: pam_unix(sudo:session): session closed for user root
Feb 20 07:38:53 np0005625203.localdomain sudo[28602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 07:38:53 np0005625203.localdomain sudo[28602]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:38:53 np0005625203.localdomain sudo[28602]: pam_unix(sudo:session): session closed for user root
Feb 20 07:38:53 np0005625203.localdomain sudo[28617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 _orch set-coredump-overrides --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8 --coredump-max-size=32G
Feb 20 07:38:53 np0005625203.localdomain sudo[28617]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:38:53 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 20 07:38:53 np0005625203.localdomain systemd[1]: systemd-coredump.socket: Deactivated successfully.
Feb 20 07:38:53 np0005625203.localdomain systemd[1]: Closed Process Core Dump Socket.
Feb 20 07:38:53 np0005625203.localdomain systemd[1]: Stopping Process Core Dump Socket...
Feb 20 07:38:53 np0005625203.localdomain systemd[1]: Listening on Process Core Dump Socket.
Feb 20 07:38:53 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 07:38:54 np0005625203.localdomain systemd-rc-local-generator[28668]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 07:38:54 np0005625203.localdomain systemd-sysv-generator[28674]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 07:38:54 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 07:38:54 np0005625203.localdomain sshd[27409]: Invalid user prueba from 185.246.128.171 port 3146
Feb 20 07:38:54 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 07:38:54 np0005625203.localdomain systemd-sysv-generator[28714]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 07:38:54 np0005625203.localdomain systemd-rc-local-generator[28709]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 07:38:54 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 07:38:54 np0005625203.localdomain sudo[28617]: pam_unix(sudo:session): session closed for user root
Feb 20 07:38:57 np0005625203.localdomain sshd[27409]: Disconnecting invalid user prueba 185.246.128.171 port 3146: Change of username or service not allowed: (prueba,ssh-connection) -> (sample,ssh-connection) [preauth]
Feb 20 07:39:01 np0005625203.localdomain sshd[28720]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:39:06 np0005625203.localdomain sshd[28720]: Invalid user sample from 185.246.128.171 port 56319
Feb 20 07:39:07 np0005625203.localdomain sshd[28720]: Disconnecting invalid user sample 185.246.128.171 port 56319: Change of username or service not allowed: (sample,ssh-connection) -> (test2,ssh-connection) [preauth]
Feb 20 07:39:09 np0005625203.localdomain sshd[28722]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:39:12 np0005625203.localdomain sshd[28722]: Invalid user test2 from 185.246.128.171 port 31215
Feb 20 07:39:19 np0005625203.localdomain sshd[28722]: Disconnecting invalid user test2 185.246.128.171 port 31215: Change of username or service not allowed: (test2,ssh-connection) -> (siddharth,ssh-connection) [preauth]
Feb 20 07:39:22 np0005625203.localdomain sshd[28768]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:39:25 np0005625203.localdomain sshd[28876]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:39:25 np0005625203.localdomain sshd[28876]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 07:39:26 np0005625203.localdomain sshd[28768]: Invalid user siddharth from 185.246.128.171 port 31914
Feb 20 07:39:26 np0005625203.localdomain sshd[28768]: Disconnecting invalid user siddharth 185.246.128.171 port 31914: Change of username or service not allowed: (siddharth,ssh-connection) -> (alireza,ssh-connection) [preauth]
Feb 20 07:39:28 np0005625203.localdomain sudo[28878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 07:39:28 np0005625203.localdomain sudo[28878]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:39:28 np0005625203.localdomain sudo[28878]: pam_unix(sudo:session): session closed for user root
Feb 20 07:39:28 np0005625203.localdomain sudo[28893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 07:39:28 np0005625203.localdomain sudo[28893]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:39:28 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 20 07:39:28 np0005625203.localdomain sshd[28923]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:39:28 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 20 07:39:28 np0005625203.localdomain podman[28952]: 
Feb 20 07:39:28 np0005625203.localdomain podman[28952]: 2026-02-20 07:39:28.693482518 +0000 UTC m=+0.091752697 container create ac7b575ba56e4b7502d1dc8c2bc8bc71c3c00632acc148d6ebdce9e74dc46ae9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_panini, release=1770267347, ceph=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, architecture=x86_64, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, vcs-type=git, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.42.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, version=7, GIT_BRANCH=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z)
Feb 20 07:39:28 np0005625203.localdomain podman[28952]: 2026-02-20 07:39:28.64023363 +0000 UTC m=+0.038503809 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 07:39:28 np0005625203.localdomain systemd[1]: Started libpod-conmon-ac7b575ba56e4b7502d1dc8c2bc8bc71c3c00632acc148d6ebdce9e74dc46ae9.scope.
Feb 20 07:39:28 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 07:39:28 np0005625203.localdomain podman[28952]: 2026-02-20 07:39:28.774103873 +0000 UTC m=+0.172374052 container init ac7b575ba56e4b7502d1dc8c2bc8bc71c3c00632acc148d6ebdce9e74dc46ae9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_panini, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, vcs-type=git, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, build-date=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, version=7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, distribution-scope=public, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2)
Feb 20 07:39:28 np0005625203.localdomain podman[28952]: 2026-02-20 07:39:28.783929693 +0000 UTC m=+0.182199882 container start ac7b575ba56e4b7502d1dc8c2bc8bc71c3c00632acc148d6ebdce9e74dc46ae9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_panini, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.42.2, com.redhat.component=rhceph-container, org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1770267347, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, vcs-type=git, name=rhceph)
Feb 20 07:39:28 np0005625203.localdomain dazzling_panini[28967]: 167 167
Feb 20 07:39:28 np0005625203.localdomain systemd[1]: libpod-ac7b575ba56e4b7502d1dc8c2bc8bc71c3c00632acc148d6ebdce9e74dc46ae9.scope: Deactivated successfully.
Feb 20 07:39:28 np0005625203.localdomain podman[28952]: 2026-02-20 07:39:28.785837891 +0000 UTC m=+0.184108070 container attach ac7b575ba56e4b7502d1dc8c2bc8bc71c3c00632acc148d6ebdce9e74dc46ae9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_panini, io.buildah.version=1.42.2, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=, build-date=2026-02-09T10:25:24Z, GIT_BRANCH=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph)
Feb 20 07:39:28 np0005625203.localdomain podman[28952]: 2026-02-20 07:39:28.790340658 +0000 UTC m=+0.188610887 container died ac7b575ba56e4b7502d1dc8c2bc8bc71c3c00632acc148d6ebdce9e74dc46ae9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_panini, ceph=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, distribution-scope=public, version=7, release=1770267347, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, name=rhceph, io.buildah.version=1.42.2, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 07:39:28 np0005625203.localdomain podman[28972]: 2026-02-20 07:39:28.87739518 +0000 UTC m=+0.079063878 container remove ac7b575ba56e4b7502d1dc8c2bc8bc71c3c00632acc148d6ebdce9e74dc46ae9 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=dazzling_panini, io.openshift.expose-services=, build-date=2026-02-09T10:25:24Z, ceph=True, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1770267347, version=7, io.buildah.version=1.42.2, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, CEPH_POINT_RELEASE=, name=rhceph, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Feb 20 07:39:28 np0005625203.localdomain systemd[1]: libpod-conmon-ac7b575ba56e4b7502d1dc8c2bc8bc71c3c00632acc148d6ebdce9e74dc46ae9.scope: Deactivated successfully.
Feb 20 07:39:28 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 07:39:29 np0005625203.localdomain systemd-rc-local-generator[29010]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 07:39:29 np0005625203.localdomain systemd-sysv-generator[29013]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 07:39:29 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 07:39:29 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 20 07:39:29 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 07:39:29 np0005625203.localdomain systemd-rc-local-generator[29051]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 07:39:29 np0005625203.localdomain systemd-sysv-generator[29055]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 07:39:29 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 07:39:29 np0005625203.localdomain sshd[29060]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:39:29 np0005625203.localdomain systemd[1]: Reached target All Ceph clusters and services.
Feb 20 07:39:29 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 07:39:29 np0005625203.localdomain systemd-rc-local-generator[29091]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 07:39:29 np0005625203.localdomain systemd-sysv-generator[29095]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 07:39:29 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 07:39:29 np0005625203.localdomain systemd[1]: Reached target Ceph cluster a8557ee9-b55d-5519-942c-cf8f6172f1d8.
Feb 20 07:39:29 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 07:39:29 np0005625203.localdomain systemd-rc-local-generator[29126]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 07:39:29 np0005625203.localdomain systemd-sysv-generator[29133]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 07:39:29 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 07:39:29 np0005625203.localdomain sshd[28923]: Invalid user admin from 123.204.132.127 port 59182
Feb 20 07:39:29 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 07:39:30 np0005625203.localdomain systemd-rc-local-generator[29165]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 07:39:30 np0005625203.localdomain systemd-sysv-generator[29170]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 07:39:30 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 07:39:30 np0005625203.localdomain sshd[28923]: Received disconnect from 123.204.132.127 port 59182:11: Bye Bye [preauth]
Feb 20 07:39:30 np0005625203.localdomain sshd[28923]: Disconnected from invalid user admin 123.204.132.127 port 59182 [preauth]
Feb 20 07:39:30 np0005625203.localdomain systemd[1]: Created slice Slice /system/ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8.
Feb 20 07:39:30 np0005625203.localdomain systemd[1]: Reached target System Time Set.
Feb 20 07:39:30 np0005625203.localdomain systemd[1]: Reached target System Time Synchronized.
Feb 20 07:39:30 np0005625203.localdomain systemd[1]: Starting Ceph crash.np0005625203 for a8557ee9-b55d-5519-942c-cf8f6172f1d8...
Feb 20 07:39:30 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 20 07:39:30 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 20 07:39:30 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Feb 20 07:39:30 np0005625203.localdomain podman[29232]: 
Feb 20 07:39:30 np0005625203.localdomain podman[29232]: 2026-02-20 07:39:30.505789435 +0000 UTC m=+0.064531223 container create f6bf94ad66e54cdd610286b233c639418eb2b995978f1a145d6cfa77a016071d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, version=7, io.openshift.tags=rhceph ceph, distribution-scope=public, GIT_BRANCH=main, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, architecture=x86_64, vcs-type=git, name=rhceph)
Feb 20 07:39:30 np0005625203.localdomain podman[29232]: 2026-02-20 07:39:30.477788449 +0000 UTC m=+0.036530257 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 07:39:30 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c56020247f2bb43dfd5072d9812233aeb875ad92f3b935026117c0a0f908561b/merged/etc/ceph/ceph.client.crash.np0005625203.keyring supports timestamps until 2038 (0x7fffffff)
Feb 20 07:39:30 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c56020247f2bb43dfd5072d9812233aeb875ad92f3b935026117c0a0f908561b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 20 07:39:30 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c56020247f2bb43dfd5072d9812233aeb875ad92f3b935026117c0a0f908561b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 20 07:39:30 np0005625203.localdomain podman[29232]: 2026-02-20 07:39:30.640936417 +0000 UTC m=+0.199678195 container init f6bf94ad66e54cdd610286b233c639418eb2b995978f1a145d6cfa77a016071d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203, release=1770267347, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, GIT_BRANCH=main, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-type=git, CEPH_POINT_RELEASE=, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.created=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, io.buildah.version=1.42.2, ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 07:39:30 np0005625203.localdomain podman[29232]: 2026-02-20 07:39:30.650824149 +0000 UTC m=+0.209565927 container start f6bf94ad66e54cdd610286b233c639418eb2b995978f1a145d6cfa77a016071d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203, architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, vcs-type=git, ceph=True, version=7, io.openshift.tags=rhceph ceph, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, release=1770267347, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, RELEASE=main, com.redhat.component=rhceph-container, io.buildah.version=1.42.2, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 07:39:30 np0005625203.localdomain bash[29232]: f6bf94ad66e54cdd610286b233c639418eb2b995978f1a145d6cfa77a016071d
Feb 20 07:39:30 np0005625203.localdomain systemd[1]: Started Ceph crash.np0005625203 for a8557ee9-b55d-5519-942c-cf8f6172f1d8.
Feb 20 07:39:30 np0005625203.localdomain sudo[28893]: pam_unix(sudo:session): session closed for user root
Feb 20 07:39:30 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203[29246]: INFO:ceph-crash:pinging cluster to exercise our key, trying key client.crash.np0005625203.
Feb 20 07:39:30 np0005625203.localdomain sudo[29254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 07:39:30 np0005625203.localdomain sudo[29254]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:39:30 np0005625203.localdomain sudo[29254]: pam_unix(sudo:session): session closed for user root
Feb 20 07:39:30 np0005625203.localdomain sudo[29288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ceph-volume --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 --yes --no-systemd
Feb 20 07:39:30 np0005625203.localdomain sudo[29288]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:39:31 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203[29246]:   cluster:
Feb 20 07:39:31 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203[29246]:     id:     a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 07:39:31 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203[29246]:     health: HEALTH_WARN
Feb 20 07:39:31 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203[29246]:             OSD count 0 < osd_pool_default_size 3
Feb 20 07:39:31 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203[29246]:  
Feb 20 07:39:31 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203[29246]:   services:
Feb 20 07:39:31 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203[29246]:     mon: 3 daemons, quorum np0005625199,np0005625201,np0005625200 (age 16s)
Feb 20 07:39:31 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203[29246]:     mgr: np0005625199.ileebh(active, since 2m), standbys: np0005625201.mtnyvu, np0005625200.ypbkax
Feb 20 07:39:31 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203[29246]:     osd: 0 osds: 0 up, 0 in
Feb 20 07:39:31 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203[29246]:  
Feb 20 07:39:31 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203[29246]:   data:
Feb 20 07:39:31 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203[29246]:     pools:   0 pools, 0 pgs
Feb 20 07:39:31 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203[29246]:     objects: 0 objects, 0 B
Feb 20 07:39:31 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203[29246]:     usage:   0 B used, 0 B / 0 B avail
Feb 20 07:39:31 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203[29246]:     pgs:     
Feb 20 07:39:31 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203[29246]:  
Feb 20 07:39:31 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203[29246]:   progress:
Feb 20 07:39:31 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203[29246]:     Updating crash deployment (+4 -> 6) (11s)
Feb 20 07:39:31 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203[29246]:       [=====================.......] (remaining: 3s)
Feb 20 07:39:31 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203[29246]:  
Feb 20 07:39:31 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203[29246]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Feb 20 07:39:31 np0005625203.localdomain podman[29342]: 
Feb 20 07:39:31 np0005625203.localdomain podman[29342]: 2026-02-20 07:39:31.49800658 +0000 UTC m=+0.071037752 container create a85de4196590fbb0d40c040a082e193bb7de52ac7df3f34cf8eab1e199789f3c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ecstatic_curie, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-09T10:25:24Z, architecture=x86_64, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, release=1770267347, distribution-scope=public, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.buildah.version=1.42.2, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, name=rhceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph)
Feb 20 07:39:31 np0005625203.localdomain systemd[1]: Started libpod-conmon-a85de4196590fbb0d40c040a082e193bb7de52ac7df3f34cf8eab1e199789f3c.scope.
Feb 20 07:39:31 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 07:39:31 np0005625203.localdomain podman[29342]: 2026-02-20 07:39:31.562842242 +0000 UTC m=+0.135873444 container init a85de4196590fbb0d40c040a082e193bb7de52ac7df3f34cf8eab1e199789f3c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ecstatic_curie, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, release=1770267347, RELEASE=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, version=7, distribution-scope=public, name=rhceph, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, io.openshift.expose-services=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, build-date=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7)
Feb 20 07:39:31 np0005625203.localdomain podman[29342]: 2026-02-20 07:39:31.470408017 +0000 UTC m=+0.043439209 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 07:39:31 np0005625203.localdomain podman[29342]: 2026-02-20 07:39:31.571638371 +0000 UTC m=+0.144669563 container start a85de4196590fbb0d40c040a082e193bb7de52ac7df3f34cf8eab1e199789f3c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ecstatic_curie, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1770267347, vcs-type=git, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., distribution-scope=public, CEPH_POINT_RELEASE=, name=rhceph, io.buildah.version=1.42.2, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, RELEASE=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, ceph=True, io.openshift.expose-services=)
Feb 20 07:39:31 np0005625203.localdomain podman[29342]: 2026-02-20 07:39:31.571888809 +0000 UTC m=+0.144920041 container attach a85de4196590fbb0d40c040a082e193bb7de52ac7df3f34cf8eab1e199789f3c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ecstatic_curie, io.buildah.version=1.42.2, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1770267347, vcs-type=git, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-09T10:25:24Z, GIT_BRANCH=main, io.openshift.expose-services=, GIT_CLEAN=True, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2026-02-09T10:25:24Z, ceph=True, CEPH_POINT_RELEASE=, version=7, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph)
Feb 20 07:39:31 np0005625203.localdomain ecstatic_curie[29356]: 167 167
Feb 20 07:39:31 np0005625203.localdomain systemd[1]: libpod-a85de4196590fbb0d40c040a082e193bb7de52ac7df3f34cf8eab1e199789f3c.scope: Deactivated successfully.
Feb 20 07:39:31 np0005625203.localdomain podman[29342]: 2026-02-20 07:39:31.57584654 +0000 UTC m=+0.148877762 container died a85de4196590fbb0d40c040a082e193bb7de52ac7df3f34cf8eab1e199789f3c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ecstatic_curie, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, build-date=2026-02-09T10:25:24Z, io.openshift.expose-services=, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, io.buildah.version=1.42.2, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, version=7, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, architecture=x86_64, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph)
Feb 20 07:39:31 np0005625203.localdomain podman[29361]: 2026-02-20 07:39:31.666020327 +0000 UTC m=+0.077228162 container remove a85de4196590fbb0d40c040a082e193bb7de52ac7df3f34cf8eab1e199789f3c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ecstatic_curie, ceph=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_BRANCH=main, release=1770267347, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., RELEASE=main, version=7, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.42.2, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git)
Feb 20 07:39:31 np0005625203.localdomain systemd[1]: libpod-conmon-a85de4196590fbb0d40c040a082e193bb7de52ac7df3f34cf8eab1e199789f3c.scope: Deactivated successfully.
Feb 20 07:39:31 np0005625203.localdomain podman[29382]: 
Feb 20 07:39:31 np0005625203.localdomain podman[29382]: 2026-02-20 07:39:31.873621034 +0000 UTC m=+0.070354532 container create e654fc3a627cb2a01a20be0853066752e57f4f6e39be44fcdf8d85e6fdf472fd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_payne, release=1770267347, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, GIT_CLEAN=True, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.42.2, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, version=7, ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc.)
Feb 20 07:39:31 np0005625203.localdomain systemd[1]: Started libpod-conmon-e654fc3a627cb2a01a20be0853066752e57f4f6e39be44fcdf8d85e6fdf472fd.scope.
Feb 20 07:39:31 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 07:39:31 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e8ac055d5f56f44bbb60ab977d19f14a2c7d059878ce43203ddba7e5aa59580/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 20 07:39:31 np0005625203.localdomain podman[29382]: 2026-02-20 07:39:31.846633049 +0000 UTC m=+0.043366537 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 07:39:31 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e8ac055d5f56f44bbb60ab977d19f14a2c7d059878ce43203ddba7e5aa59580/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 20 07:39:31 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e8ac055d5f56f44bbb60ab977d19f14a2c7d059878ce43203ddba7e5aa59580/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 20 07:39:31 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e8ac055d5f56f44bbb60ab977d19f14a2c7d059878ce43203ddba7e5aa59580/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 20 07:39:31 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e8ac055d5f56f44bbb60ab977d19f14a2c7d059878ce43203ddba7e5aa59580/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Feb 20 07:39:31 np0005625203.localdomain podman[29382]: 2026-02-20 07:39:31.995152909 +0000 UTC m=+0.191886397 container init e654fc3a627cb2a01a20be0853066752e57f4f6e39be44fcdf8d85e6fdf472fd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_payne, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, release=1770267347, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, version=7, ceph=True, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, io.buildah.version=1.42.2, build-date=2026-02-09T10:25:24Z, name=rhceph)
Feb 20 07:39:32 np0005625203.localdomain podman[29382]: 2026-02-20 07:39:32.005088863 +0000 UTC m=+0.201822361 container start e654fc3a627cb2a01a20be0853066752e57f4f6e39be44fcdf8d85e6fdf472fd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_payne, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., release=1770267347, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, name=rhceph, architecture=x86_64, io.buildah.version=1.42.2, io.openshift.expose-services=, RELEASE=main, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.created=2026-02-09T10:25:24Z, version=7, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Feb 20 07:39:32 np0005625203.localdomain podman[29382]: 2026-02-20 07:39:32.005274189 +0000 UTC m=+0.202007677 container attach e654fc3a627cb2a01a20be0853066752e57f4f6e39be44fcdf8d85e6fdf472fd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_payne, description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, GIT_CLEAN=True, release=1770267347, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, vendor=Red Hat, Inc., name=rhceph, build-date=2026-02-09T10:25:24Z, distribution-scope=public, io.buildah.version=1.42.2, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, vcs-type=git, com.redhat.component=rhceph-container, org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, GIT_BRANCH=main, io.openshift.expose-services=, ceph=True)
Feb 20 07:39:32 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-55aad9b747f2971076e5ebbf9128f5c0afed8710751cc562e0e47360b9cdd7c6-merged.mount: Deactivated successfully.
Feb 20 07:39:32 np0005625203.localdomain peaceful_payne[29398]: --> passed data devices: 0 physical, 2 LVM
Feb 20 07:39:32 np0005625203.localdomain peaceful_payne[29398]: --> relative data size: 1.0
Feb 20 07:39:32 np0005625203.localdomain peaceful_payne[29398]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 20 07:39:32 np0005625203.localdomain peaceful_payne[29398]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new a38015ef-851b-4289-b15a-f73ed0cda6b2
Feb 20 07:39:33 np0005625203.localdomain peaceful_payne[29398]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 20 07:39:33 np0005625203.localdomain lvm[29452]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 20 07:39:33 np0005625203.localdomain lvm[29452]: VG ceph_vg0 finished
Feb 20 07:39:33 np0005625203.localdomain peaceful_payne[29398]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Feb 20 07:39:33 np0005625203.localdomain peaceful_payne[29398]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Feb 20 07:39:33 np0005625203.localdomain peaceful_payne[29398]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Feb 20 07:39:33 np0005625203.localdomain peaceful_payne[29398]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Feb 20 07:39:33 np0005625203.localdomain peaceful_payne[29398]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Feb 20 07:39:33 np0005625203.localdomain sshd[29060]: Invalid user alireza from 185.246.128.171 port 1919
Feb 20 07:39:33 np0005625203.localdomain peaceful_payne[29398]:  stderr: got monmap epoch 3
Feb 20 07:39:33 np0005625203.localdomain peaceful_payne[29398]: --> Creating keyring file for osd.1
Feb 20 07:39:33 np0005625203.localdomain peaceful_payne[29398]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Feb 20 07:39:33 np0005625203.localdomain peaceful_payne[29398]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Feb 20 07:39:33 np0005625203.localdomain peaceful_payne[29398]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid a38015ef-851b-4289-b15a-f73ed0cda6b2 --setuser ceph --setgroup ceph
Feb 20 07:39:33 np0005625203.localdomain sshd[29060]: Disconnecting invalid user alireza 185.246.128.171 port 1919: Change of username or service not allowed: (alireza,ssh-connection) -> (security,ssh-connection) [preauth]
Feb 20 07:39:36 np0005625203.localdomain peaceful_payne[29398]:  stderr: 2026-02-20T07:39:33.675+0000 7fc1f7c3fa80 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Feb 20 07:39:36 np0005625203.localdomain peaceful_payne[29398]:  stderr: 2026-02-20T07:39:33.675+0000 7fc1f7c3fa80 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Feb 20 07:39:36 np0005625203.localdomain peaceful_payne[29398]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Feb 20 07:39:36 np0005625203.localdomain peaceful_payne[29398]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Feb 20 07:39:36 np0005625203.localdomain peaceful_payne[29398]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Feb 20 07:39:36 np0005625203.localdomain peaceful_payne[29398]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Feb 20 07:39:36 np0005625203.localdomain peaceful_payne[29398]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Feb 20 07:39:36 np0005625203.localdomain peaceful_payne[29398]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Feb 20 07:39:36 np0005625203.localdomain peaceful_payne[29398]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Feb 20 07:39:36 np0005625203.localdomain peaceful_payne[29398]: --> ceph-volume lvm activate successful for osd ID: 1
Feb 20 07:39:36 np0005625203.localdomain peaceful_payne[29398]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Feb 20 07:39:36 np0005625203.localdomain peaceful_payne[29398]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 20 07:39:36 np0005625203.localdomain peaceful_payne[29398]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new c72a933f-aa20-4a0f-910f-f52fa396bd73
Feb 20 07:39:36 np0005625203.localdomain sshd[30376]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:39:36 np0005625203.localdomain lvm[30383]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 20 07:39:36 np0005625203.localdomain lvm[30383]: VG ceph_vg1 finished
Feb 20 07:39:36 np0005625203.localdomain peaceful_payne[29398]: Running command: /usr/bin/ceph-authtool --gen-print-key
Feb 20 07:39:36 np0005625203.localdomain peaceful_payne[29398]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-4
Feb 20 07:39:36 np0005625203.localdomain peaceful_payne[29398]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Feb 20 07:39:36 np0005625203.localdomain peaceful_payne[29398]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Feb 20 07:39:36 np0005625203.localdomain peaceful_payne[29398]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-4/block
Feb 20 07:39:36 np0005625203.localdomain peaceful_payne[29398]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-4/activate.monmap
Feb 20 07:39:37 np0005625203.localdomain peaceful_payne[29398]:  stderr: got monmap epoch 3
Feb 20 07:39:37 np0005625203.localdomain peaceful_payne[29398]: --> Creating keyring file for osd.4
Feb 20 07:39:37 np0005625203.localdomain peaceful_payne[29398]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4/keyring
Feb 20 07:39:37 np0005625203.localdomain peaceful_payne[29398]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4/
Feb 20 07:39:37 np0005625203.localdomain peaceful_payne[29398]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 4 --monmap /var/lib/ceph/osd/ceph-4/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-4/ --osd-uuid c72a933f-aa20-4a0f-910f-f52fa396bd73 --setuser ceph --setgroup ceph
Feb 20 07:39:39 np0005625203.localdomain sshd[30376]: Invalid user security from 185.246.128.171 port 37646
Feb 20 07:39:39 np0005625203.localdomain peaceful_payne[29398]:  stderr: 2026-02-20T07:39:37.446+0000 7fb602e7fa80 -1 bluestore(/var/lib/ceph/osd/ceph-4//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Feb 20 07:39:39 np0005625203.localdomain peaceful_payne[29398]:  stderr: 2026-02-20T07:39:37.446+0000 7fb602e7fa80 -1 bluestore(/var/lib/ceph/osd/ceph-4/) _read_fsid unparsable uuid
Feb 20 07:39:39 np0005625203.localdomain peaceful_payne[29398]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Feb 20 07:39:39 np0005625203.localdomain peaceful_payne[29398]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4
Feb 20 07:39:39 np0005625203.localdomain peaceful_payne[29398]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-4 --no-mon-config
Feb 20 07:39:39 np0005625203.localdomain peaceful_payne[29398]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-4/block
Feb 20 07:39:39 np0005625203.localdomain peaceful_payne[29398]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-4/block
Feb 20 07:39:39 np0005625203.localdomain peaceful_payne[29398]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Feb 20 07:39:39 np0005625203.localdomain peaceful_payne[29398]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4
Feb 20 07:39:39 np0005625203.localdomain peaceful_payne[29398]: --> ceph-volume lvm activate successful for osd ID: 4
Feb 20 07:39:39 np0005625203.localdomain peaceful_payne[29398]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Feb 20 07:39:40 np0005625203.localdomain systemd[1]: libpod-e654fc3a627cb2a01a20be0853066752e57f4f6e39be44fcdf8d85e6fdf472fd.scope: Deactivated successfully.
Feb 20 07:39:40 np0005625203.localdomain systemd[1]: libpod-e654fc3a627cb2a01a20be0853066752e57f4f6e39be44fcdf8d85e6fdf472fd.scope: Consumed 3.698s CPU time.
Feb 20 07:39:40 np0005625203.localdomain podman[29382]: 2026-02-20 07:39:40.024090089 +0000 UTC m=+8.220823637 container died e654fc3a627cb2a01a20be0853066752e57f4f6e39be44fcdf8d85e6fdf472fd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_payne, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, description=Red Hat Ceph Storage 7, release=1770267347, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, io.openshift.expose-services=, io.buildah.version=1.42.2, version=7, build-date=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, GIT_CLEAN=True, name=rhceph, architecture=x86_64)
Feb 20 07:39:40 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-1e8ac055d5f56f44bbb60ab977d19f14a2c7d059878ce43203ddba7e5aa59580-merged.mount: Deactivated successfully.
Feb 20 07:39:40 np0005625203.localdomain podman[31287]: 2026-02-20 07:39:40.116731161 +0000 UTC m=+0.080740748 container remove e654fc3a627cb2a01a20be0853066752e57f4f6e39be44fcdf8d85e6fdf472fd (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=peaceful_payne, vendor=Red Hat, Inc., version=7, vcs-type=git, com.redhat.component=rhceph-container, name=rhceph, io.openshift.tags=rhceph ceph, io.buildah.version=1.42.2, architecture=x86_64, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, build-date=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, release=1770267347, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 07:39:40 np0005625203.localdomain systemd[1]: libpod-conmon-e654fc3a627cb2a01a20be0853066752e57f4f6e39be44fcdf8d85e6fdf472fd.scope: Deactivated successfully.
Feb 20 07:39:40 np0005625203.localdomain sudo[29288]: pam_unix(sudo:session): session closed for user root
Feb 20 07:39:40 np0005625203.localdomain sudo[31299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 07:39:40 np0005625203.localdomain sudo[31299]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:39:40 np0005625203.localdomain sudo[31299]: pam_unix(sudo:session): session closed for user root
Feb 20 07:39:40 np0005625203.localdomain sudo[31314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ceph-volume --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8 -- lvm list --format json
Feb 20 07:39:40 np0005625203.localdomain sudo[31314]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:39:40 np0005625203.localdomain sshd[30376]: Disconnecting invalid user security 185.246.128.171 port 37646: Change of username or service not allowed: (security,ssh-connection) -> (gitlab-runner,ssh-connectio [preauth]
Feb 20 07:39:40 np0005625203.localdomain podman[31368]: 
Feb 20 07:39:40 np0005625203.localdomain podman[31368]: 2026-02-20 07:39:40.944063177 +0000 UTC m=+0.067423193 container create 04893c887f2a2a5e8cbee81215a541215c453dcd2aa2e8c97b62446a16cf9ff1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_ritchie, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., name=rhceph, build-date=2026-02-09T10:25:24Z, version=7, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, com.redhat.component=rhceph-container, release=1770267347, architecture=x86_64)
Feb 20 07:39:40 np0005625203.localdomain systemd[1]: Started libpod-conmon-04893c887f2a2a5e8cbee81215a541215c453dcd2aa2e8c97b62446a16cf9ff1.scope.
Feb 20 07:39:40 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 07:39:41 np0005625203.localdomain podman[31368]: 2026-02-20 07:39:41.006225347 +0000 UTC m=+0.129585363 container init 04893c887f2a2a5e8cbee81215a541215c453dcd2aa2e8c97b62446a16cf9ff1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_ritchie, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, version=7, vcs-type=git, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, RELEASE=main, release=1770267347, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, GIT_CLEAN=True, build-date=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2)
Feb 20 07:39:41 np0005625203.localdomain podman[31368]: 2026-02-20 07:39:41.01646408 +0000 UTC m=+0.139824096 container start 04893c887f2a2a5e8cbee81215a541215c453dcd2aa2e8c97b62446a16cf9ff1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_ritchie, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., name=rhceph, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, GIT_CLEAN=True, RELEASE=main, ceph=True, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, release=1770267347, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, build-date=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 07:39:41 np0005625203.localdomain podman[31368]: 2026-02-20 07:39:41.016821161 +0000 UTC m=+0.140181207 container attach 04893c887f2a2a5e8cbee81215a541215c453dcd2aa2e8c97b62446a16cf9ff1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_ritchie, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, build-date=2026-02-09T10:25:24Z, RELEASE=main, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., GIT_BRANCH=main, architecture=x86_64, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph)
Feb 20 07:39:41 np0005625203.localdomain podman[31368]: 2026-02-20 07:39:40.918517306 +0000 UTC m=+0.041877332 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 07:39:41 np0005625203.localdomain compassionate_ritchie[31385]: 167 167
Feb 20 07:39:41 np0005625203.localdomain systemd[1]: libpod-04893c887f2a2a5e8cbee81215a541215c453dcd2aa2e8c97b62446a16cf9ff1.scope: Deactivated successfully.
Feb 20 07:39:41 np0005625203.localdomain podman[31368]: 2026-02-20 07:39:41.021758662 +0000 UTC m=+0.145118688 container died 04893c887f2a2a5e8cbee81215a541215c453dcd2aa2e8c97b62446a16cf9ff1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_ritchie, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, build-date=2026-02-09T10:25:24Z, GIT_BRANCH=main, architecture=x86_64, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, version=7, GIT_CLEAN=True, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 07:39:41 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-ef6132376408efab0ab083eff5efcfc676981b4e92596442a816f0eefe18f294-merged.mount: Deactivated successfully.
Feb 20 07:39:41 np0005625203.localdomain podman[31390]: 2026-02-20 07:39:41.110317859 +0000 UTC m=+0.074423316 container remove 04893c887f2a2a5e8cbee81215a541215c453dcd2aa2e8c97b62446a16cf9ff1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_ritchie, org.opencontainers.image.created=2026-02-09T10:25:24Z, name=rhceph, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, build-date=2026-02-09T10:25:24Z, vcs-type=git, CEPH_POINT_RELEASE=, architecture=x86_64, release=1770267347, distribution-scope=public, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 07:39:41 np0005625203.localdomain systemd[1]: libpod-conmon-04893c887f2a2a5e8cbee81215a541215c453dcd2aa2e8c97b62446a16cf9ff1.scope: Deactivated successfully.
Feb 20 07:39:41 np0005625203.localdomain podman[31411]: 
Feb 20 07:39:41 np0005625203.localdomain podman[31411]: 2026-02-20 07:39:41.305994822 +0000 UTC m=+0.074423177 container create 59c773c5b7af7c275558df60f43865db57fccd78a92fb6d889936bf1a1ed9613 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_fermat, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, org.opencontainers.image.created=2026-02-09T10:25:24Z, name=rhceph, vcs-type=git, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, io.buildah.version=1.42.2, url=https://catalog.redhat.com/en/search?searchType=containers)
Feb 20 07:39:41 np0005625203.localdomain systemd[1]: Started libpod-conmon-59c773c5b7af7c275558df60f43865db57fccd78a92fb6d889936bf1a1ed9613.scope.
Feb 20 07:39:41 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 07:39:41 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a21dbac45925d07557355dbe1f7dfe21e82ab209267e9027eaf843129f3059fa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 20 07:39:41 np0005625203.localdomain podman[31411]: 2026-02-20 07:39:41.276010135 +0000 UTC m=+0.044438520 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 07:39:41 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a21dbac45925d07557355dbe1f7dfe21e82ab209267e9027eaf843129f3059fa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 20 07:39:41 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a21dbac45925d07557355dbe1f7dfe21e82ab209267e9027eaf843129f3059fa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 20 07:39:41 np0005625203.localdomain podman[31411]: 2026-02-20 07:39:41.400905844 +0000 UTC m=+0.169334209 container init 59c773c5b7af7c275558df60f43865db57fccd78a92fb6d889936bf1a1ed9613 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_fermat, description=Red Hat Ceph Storage 7, version=7, distribution-scope=public, GIT_CLEAN=True, io.buildah.version=1.42.2, url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, build-date=2026-02-09T10:25:24Z, RELEASE=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, name=rhceph, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=)
Feb 20 07:39:41 np0005625203.localdomain podman[31411]: 2026-02-20 07:39:41.410337052 +0000 UTC m=+0.178765407 container start 59c773c5b7af7c275558df60f43865db57fccd78a92fb6d889936bf1a1ed9613 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_fermat, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2026-02-09T10:25:24Z, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, com.redhat.component=rhceph-container, GIT_BRANCH=main, vcs-type=git, release=1770267347, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, description=Red Hat Ceph Storage 7, architecture=x86_64, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, CEPH_POINT_RELEASE=, io.openshift.expose-services=, vendor=Red Hat, Inc.)
Feb 20 07:39:41 np0005625203.localdomain podman[31411]: 2026-02-20 07:39:41.410599189 +0000 UTC m=+0.179027594 container attach 59c773c5b7af7c275558df60f43865db57fccd78a92fb6d889936bf1a1ed9613 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_fermat, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, vcs-type=git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, description=Red Hat Ceph Storage 7, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-09T10:25:24Z, GIT_CLEAN=True, io.buildah.version=1.42.2, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Feb 20 07:39:41 np0005625203.localdomain sshd[31435]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]: {
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]:     "1": [
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]:         {
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]:             "devices": [
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]:                 "/dev/loop3"
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]:             ],
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]:             "lv_name": "ceph_lv0",
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]:             "lv_size": "7511998464",
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=CaWwNl-Tu8g-GIIh-eGXo-v2dM-tWQj-UTNylA,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a8557ee9-b55d-5519-942c-cf8f6172f1d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=a38015ef-851b-4289-b15a-f73ed0cda6b2,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]:             "lv_uuid": "CaWwNl-Tu8g-GIIh-eGXo-v2dM-tWQj-UTNylA",
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]:             "name": "ceph_lv0",
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]:             "path": "/dev/ceph_vg0/ceph_lv0",
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]:             "tags": {
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]:                 "ceph.block_uuid": "CaWwNl-Tu8g-GIIh-eGXo-v2dM-tWQj-UTNylA",
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]:                 "ceph.cephx_lockbox_secret": "",
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]:                 "ceph.cluster_fsid": "a8557ee9-b55d-5519-942c-cf8f6172f1d8",
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]:                 "ceph.cluster_name": "ceph",
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]:                 "ceph.crush_device_class": "",
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]:                 "ceph.encrypted": "0",
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]:                 "ceph.osd_fsid": "a38015ef-851b-4289-b15a-f73ed0cda6b2",
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]:                 "ceph.osd_id": "1",
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]:                 "ceph.type": "block",
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]:                 "ceph.vdo": "0"
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]:             },
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]:             "type": "block",
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]:             "vg_name": "ceph_vg0"
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]:         }
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]:     ],
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]:     "4": [
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]:         {
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]:             "devices": [
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]:                 "/dev/loop4"
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]:             ],
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]:             "lv_name": "ceph_lv1",
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]:             "lv_size": "7511998464",
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=bfkVjj-3o3A-3ikJ-DnDQ-Ak8V-rwwV-i2xRjd,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=a8557ee9-b55d-5519-942c-cf8f6172f1d8,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=c72a933f-aa20-4a0f-910f-f52fa396bd73,ceph.osd_id=4,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]:             "lv_uuid": "bfkVjj-3o3A-3ikJ-DnDQ-Ak8V-rwwV-i2xRjd",
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]:             "name": "ceph_lv1",
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]:             "path": "/dev/ceph_vg1/ceph_lv1",
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]:             "tags": {
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]:                 "ceph.block_uuid": "bfkVjj-3o3A-3ikJ-DnDQ-Ak8V-rwwV-i2xRjd",
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]:                 "ceph.cephx_lockbox_secret": "",
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]:                 "ceph.cluster_fsid": "a8557ee9-b55d-5519-942c-cf8f6172f1d8",
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]:                 "ceph.cluster_name": "ceph",
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]:                 "ceph.crush_device_class": "",
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]:                 "ceph.encrypted": "0",
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]:                 "ceph.osd_fsid": "c72a933f-aa20-4a0f-910f-f52fa396bd73",
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]:                 "ceph.osd_id": "4",
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]:                 "ceph.osdspec_affinity": "default_drive_group",
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]:                 "ceph.type": "block",
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]:                 "ceph.vdo": "0"
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]:             },
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]:             "type": "block",
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]:             "vg_name": "ceph_vg1"
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]:         }
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]:     ]
Feb 20 07:39:41 np0005625203.localdomain busy_fermat[31426]: }
Feb 20 07:39:41 np0005625203.localdomain systemd[1]: libpod-59c773c5b7af7c275558df60f43865db57fccd78a92fb6d889936bf1a1ed9613.scope: Deactivated successfully.
Feb 20 07:39:41 np0005625203.localdomain podman[31411]: 2026-02-20 07:39:41.766977126 +0000 UTC m=+0.535405511 container died 59c773c5b7af7c275558df60f43865db57fccd78a92fb6d889936bf1a1ed9613 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_fermat, GIT_BRANCH=main, GIT_CLEAN=True, architecture=x86_64, com.redhat.component=rhceph-container, io.buildah.version=1.42.2, build-date=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2026-02-09T10:25:24Z, ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, release=1770267347, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers)
Feb 20 07:39:41 np0005625203.localdomain podman[31436]: 2026-02-20 07:39:41.858525644 +0000 UTC m=+0.078330826 container remove 59c773c5b7af7c275558df60f43865db57fccd78a92fb6d889936bf1a1ed9613 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=busy_fermat, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., GIT_BRANCH=main, version=7, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, name=rhceph, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, GIT_CLEAN=True, RELEASE=main, io.buildah.version=1.42.2, build-date=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Feb 20 07:39:41 np0005625203.localdomain systemd[1]: libpod-conmon-59c773c5b7af7c275558df60f43865db57fccd78a92fb6d889936bf1a1ed9613.scope: Deactivated successfully.
Feb 20 07:39:41 np0005625203.localdomain sudo[31314]: pam_unix(sudo:session): session closed for user root
Feb 20 07:39:41 np0005625203.localdomain sudo[31448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 07:39:41 np0005625203.localdomain sudo[31448]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:39:41 np0005625203.localdomain sudo[31448]: pam_unix(sudo:session): session closed for user root
Feb 20 07:39:42 np0005625203.localdomain sudo[31463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 07:39:42 np0005625203.localdomain sudo[31463]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:39:42 np0005625203.localdomain systemd[1]: tmp-crun.xASwDx.mount: Deactivated successfully.
Feb 20 07:39:42 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-a21dbac45925d07557355dbe1f7dfe21e82ab209267e9027eaf843129f3059fa-merged.mount: Deactivated successfully.
Feb 20 07:39:42 np0005625203.localdomain podman[31521]: 
Feb 20 07:39:42 np0005625203.localdomain podman[31521]: 2026-02-20 07:39:42.630090794 +0000 UTC m=+0.068300150 container create cea39a3fa5d2e07f80fe8b006cd34cb298040aa1a1d9ed6577375231428aeb6e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=heuristic_burnell, GIT_CLEAN=True, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, architecture=x86_64, com.redhat.component=rhceph-container, name=rhceph, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, build-date=2026-02-09T10:25:24Z, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git)
Feb 20 07:39:42 np0005625203.localdomain systemd[1]: Started libpod-conmon-cea39a3fa5d2e07f80fe8b006cd34cb298040aa1a1d9ed6577375231428aeb6e.scope.
Feb 20 07:39:42 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 07:39:42 np0005625203.localdomain podman[31521]: 2026-02-20 07:39:42.599358304 +0000 UTC m=+0.037567730 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 07:39:42 np0005625203.localdomain podman[31521]: 2026-02-20 07:39:42.697785683 +0000 UTC m=+0.135995049 container init cea39a3fa5d2e07f80fe8b006cd34cb298040aa1a1d9ed6577375231428aeb6e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=heuristic_burnell, build-date=2026-02-09T10:25:24Z, distribution-scope=public, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, RELEASE=main, GIT_BRANCH=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, ceph=True, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, io.openshift.expose-services=, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, io.buildah.version=1.42.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, org.opencontainers.image.created=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Feb 20 07:39:42 np0005625203.localdomain podman[31521]: 2026-02-20 07:39:42.710245824 +0000 UTC m=+0.148455160 container start cea39a3fa5d2e07f80fe8b006cd34cb298040aa1a1d9ed6577375231428aeb6e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=heuristic_burnell, io.buildah.version=1.42.2, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, release=1770267347, architecture=x86_64, build-date=2026-02-09T10:25:24Z, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, distribution-scope=public, name=rhceph, vcs-type=git, ceph=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, version=7)
Feb 20 07:39:42 np0005625203.localdomain podman[31521]: 2026-02-20 07:39:42.710658106 +0000 UTC m=+0.148867482 container attach cea39a3fa5d2e07f80fe8b006cd34cb298040aa1a1d9ed6577375231428aeb6e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=heuristic_burnell, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2026-02-09T10:25:24Z, vcs-type=git, GIT_CLEAN=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., io.buildah.version=1.42.2, org.opencontainers.image.created=2026-02-09T10:25:24Z, name=rhceph, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Feb 20 07:39:42 np0005625203.localdomain heuristic_burnell[31537]: 167 167
Feb 20 07:39:42 np0005625203.localdomain systemd[1]: libpod-cea39a3fa5d2e07f80fe8b006cd34cb298040aa1a1d9ed6577375231428aeb6e.scope: Deactivated successfully.
Feb 20 07:39:42 np0005625203.localdomain podman[31521]: 2026-02-20 07:39:42.713459432 +0000 UTC m=+0.151668818 container died cea39a3fa5d2e07f80fe8b006cd34cb298040aa1a1d9ed6577375231428aeb6e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=heuristic_burnell, CEPH_POINT_RELEASE=, version=7, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, com.redhat.component=rhceph-container, io.buildah.version=1.42.2, distribution-scope=public, name=rhceph, RELEASE=main, build-date=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, ceph=True, release=1770267347, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Feb 20 07:39:42 np0005625203.localdomain podman[31542]: 2026-02-20 07:39:42.795331306 +0000 UTC m=+0.068702942 container remove cea39a3fa5d2e07f80fe8b006cd34cb298040aa1a1d9ed6577375231428aeb6e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=heuristic_burnell, org.opencontainers.image.created=2026-02-09T10:25:24Z, build-date=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.42.2, ceph=True, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_CLEAN=True, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.openshift.tags=rhceph ceph, release=1770267347, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc.)
Feb 20 07:39:42 np0005625203.localdomain systemd[1]: libpod-conmon-cea39a3fa5d2e07f80fe8b006cd34cb298040aa1a1d9ed6577375231428aeb6e.scope: Deactivated successfully.
Feb 20 07:39:43 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-9ad8aa9be0b57e3486ad1b356c952bf71c679fb859d774e119a7495af8601caf-merged.mount: Deactivated successfully.
Feb 20 07:39:43 np0005625203.localdomain podman[31571]: 
Feb 20 07:39:43 np0005625203.localdomain podman[31571]: 2026-02-20 07:39:43.130242254 +0000 UTC m=+0.075227421 container create aed5ae56c936c1d23dfde99dee585b446415ab7dec650a18961071fddaec4f87 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-1-activate-test, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1770267347, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, GIT_BRANCH=main, io.openshift.expose-services=, build-date=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, vcs-type=git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, RELEASE=main, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=)
Feb 20 07:39:43 np0005625203.localdomain systemd[1]: Started libpod-conmon-aed5ae56c936c1d23dfde99dee585b446415ab7dec650a18961071fddaec4f87.scope.
Feb 20 07:39:43 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 07:39:43 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b56c55576152e1723b3c35f7c724069ff4e9bfc5e047591033e2f6b520caff6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 20 07:39:43 np0005625203.localdomain podman[31571]: 2026-02-20 07:39:43.101752634 +0000 UTC m=+0.046737811 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 07:39:43 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b56c55576152e1723b3c35f7c724069ff4e9bfc5e047591033e2f6b520caff6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 20 07:39:43 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b56c55576152e1723b3c35f7c724069ff4e9bfc5e047591033e2f6b520caff6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 20 07:39:43 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b56c55576152e1723b3c35f7c724069ff4e9bfc5e047591033e2f6b520caff6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 20 07:39:43 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b56c55576152e1723b3c35f7c724069ff4e9bfc5e047591033e2f6b520caff6/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Feb 20 07:39:43 np0005625203.localdomain podman[31571]: 2026-02-20 07:39:43.25012166 +0000 UTC m=+0.195106837 container init aed5ae56c936c1d23dfde99dee585b446415ab7dec650a18961071fddaec4f87 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-1-activate-test, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.42.2, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, release=1770267347, vcs-type=git, version=7, CEPH_POINT_RELEASE=)
Feb 20 07:39:43 np0005625203.localdomain podman[31571]: 2026-02-20 07:39:43.260500457 +0000 UTC m=+0.205485624 container start aed5ae56c936c1d23dfde99dee585b446415ab7dec650a18961071fddaec4f87 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-1-activate-test, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, ceph=True, release=1770267347, vendor=Red Hat, Inc., version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, architecture=x86_64, description=Red Hat Ceph Storage 7, vcs-type=git, build-date=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True)
Feb 20 07:39:43 np0005625203.localdomain podman[31571]: 2026-02-20 07:39:43.260818997 +0000 UTC m=+0.205804164 container attach aed5ae56c936c1d23dfde99dee585b446415ab7dec650a18961071fddaec4f87 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-1-activate-test, GIT_BRANCH=main, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, vcs-type=git, name=rhceph, vendor=Red Hat, Inc., release=1770267347, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, io.buildah.version=1.42.2)
Feb 20 07:39:43 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-1-activate-test[31587]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Feb 20 07:39:43 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-1-activate-test[31587]:                             [--no-systemd] [--no-tmpfs]
Feb 20 07:39:43 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-1-activate-test[31587]: ceph-volume activate: error: unrecognized arguments: --bad-option
Feb 20 07:39:43 np0005625203.localdomain systemd[1]: libpod-aed5ae56c936c1d23dfde99dee585b446415ab7dec650a18961071fddaec4f87.scope: Deactivated successfully.
Feb 20 07:39:43 np0005625203.localdomain podman[31571]: 2026-02-20 07:39:43.481934247 +0000 UTC m=+0.426919444 container died aed5ae56c936c1d23dfde99dee585b446415ab7dec650a18961071fddaec4f87 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-1-activate-test, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, release=1770267347, io.openshift.tags=rhceph ceph, io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, GIT_CLEAN=True, CEPH_POINT_RELEASE=, ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, GIT_BRANCH=main, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, com.redhat.component=rhceph-container, vcs-type=git)
Feb 20 07:39:43 np0005625203.localdomain podman[31592]: 2026-02-20 07:39:43.571528406 +0000 UTC m=+0.077832820 container remove aed5ae56c936c1d23dfde99dee585b446415ab7dec650a18961071fddaec4f87 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-1-activate-test, version=7, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, vcs-type=git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, name=rhceph, release=1770267347, vendor=Red Hat, Inc.)
Feb 20 07:39:43 np0005625203.localdomain systemd-journald[618]: Field hash table of /run/log/journal/01f46965e72fd8a157841feaa66c8d52/system.journal has a fill level at 75.1 (250 of 333 items), suggesting rotation.
Feb 20 07:39:43 np0005625203.localdomain systemd-journald[618]: /run/log/journal/01f46965e72fd8a157841feaa66c8d52/system.journal: Journal header limits reached or header out-of-date, rotating.
Feb 20 07:39:43 np0005625203.localdomain rsyslogd[758]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Feb 20 07:39:43 np0005625203.localdomain systemd[1]: libpod-conmon-aed5ae56c936c1d23dfde99dee585b446415ab7dec650a18961071fddaec4f87.scope: Deactivated successfully.
Feb 20 07:39:43 np0005625203.localdomain rsyslogd[758]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Feb 20 07:39:43 np0005625203.localdomain sshd[31435]: Invalid user gitlab-runner from 185.246.128.171 port 63606
Feb 20 07:39:43 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 07:39:43 np0005625203.localdomain systemd-rc-local-generator[31648]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 07:39:43 np0005625203.localdomain systemd-sysv-generator[31652]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 07:39:44 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 07:39:44 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-4b56c55576152e1723b3c35f7c724069ff4e9bfc5e047591033e2f6b520caff6-merged.mount: Deactivated successfully.
Feb 20 07:39:44 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 07:39:44 np0005625203.localdomain systemd-rc-local-generator[31689]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 07:39:44 np0005625203.localdomain systemd-sysv-generator[31694]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 07:39:44 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 07:39:44 np0005625203.localdomain systemd[1]: Starting Ceph osd.1 for a8557ee9-b55d-5519-942c-cf8f6172f1d8...
Feb 20 07:39:44 np0005625203.localdomain podman[31754]: 
Feb 20 07:39:44 np0005625203.localdomain podman[31754]: 2026-02-20 07:39:44.740071572 +0000 UTC m=+0.073881070 container create d8b745b0850a52de91b893275a40b015d5d3ac71c841359075b506182a64347b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-1-activate, version=7, GIT_BRANCH=main, GIT_CLEAN=True, io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, ceph=True, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2026-02-09T10:25:24Z, release=1770267347, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Feb 20 07:39:44 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 07:39:44 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6f60f32cd3625327a79bfde7a73b78a14645eaca1db72d2c270daadead2cdc9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 20 07:39:44 np0005625203.localdomain podman[31754]: 2026-02-20 07:39:44.707911219 +0000 UTC m=+0.041720737 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 07:39:44 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6f60f32cd3625327a79bfde7a73b78a14645eaca1db72d2c270daadead2cdc9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 20 07:39:44 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6f60f32cd3625327a79bfde7a73b78a14645eaca1db72d2c270daadead2cdc9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 20 07:39:44 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6f60f32cd3625327a79bfde7a73b78a14645eaca1db72d2c270daadead2cdc9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 20 07:39:44 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6f60f32cd3625327a79bfde7a73b78a14645eaca1db72d2c270daadead2cdc9/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Feb 20 07:39:44 np0005625203.localdomain podman[31754]: 2026-02-20 07:39:44.860490324 +0000 UTC m=+0.194299812 container init d8b745b0850a52de91b893275a40b015d5d3ac71c841359075b506182a64347b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-1-activate, io.buildah.version=1.42.2, com.redhat.component=rhceph-container, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, RELEASE=main, vendor=Red Hat, Inc., build-date=2026-02-09T10:25:24Z, architecture=x86_64, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, ceph=True, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1770267347, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, CEPH_POINT_RELEASE=)
Feb 20 07:39:44 np0005625203.localdomain podman[31754]: 2026-02-20 07:39:44.868842759 +0000 UTC m=+0.202652247 container start d8b745b0850a52de91b893275a40b015d5d3ac71c841359075b506182a64347b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-1-activate, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, distribution-scope=public, GIT_CLEAN=True, build-date=2026-02-09T10:25:24Z, release=1770267347, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.42.2, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, architecture=x86_64, com.redhat.component=rhceph-container, version=7, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=)
Feb 20 07:39:44 np0005625203.localdomain podman[31754]: 2026-02-20 07:39:44.869118908 +0000 UTC m=+0.202928396 container attach d8b745b0850a52de91b893275a40b015d5d3ac71c841359075b506182a64347b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-1-activate, CEPH_POINT_RELEASE=, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1770267347, RELEASE=main, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, distribution-scope=public, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_BRANCH=main, name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph)
Feb 20 07:39:45 np0005625203.localdomain sshd[31435]: Disconnecting invalid user gitlab-runner 185.246.128.171 port 63606: Change of username or service not allowed: (gitlab-runner,ssh-connection) -> (user7,ssh-connection) [preauth]
Feb 20 07:39:45 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-1-activate[31769]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Feb 20 07:39:45 np0005625203.localdomain bash[31754]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Feb 20 07:39:45 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-1-activate[31769]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Feb 20 07:39:45 np0005625203.localdomain bash[31754]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Feb 20 07:39:45 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-1-activate[31769]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Feb 20 07:39:45 np0005625203.localdomain bash[31754]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Feb 20 07:39:45 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-1-activate[31769]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Feb 20 07:39:45 np0005625203.localdomain bash[31754]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Feb 20 07:39:45 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-1-activate[31769]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Feb 20 07:39:45 np0005625203.localdomain bash[31754]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Feb 20 07:39:45 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-1-activate[31769]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Feb 20 07:39:45 np0005625203.localdomain bash[31754]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Feb 20 07:39:45 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-1-activate[31769]: --> ceph-volume raw activate successful for osd ID: 1
Feb 20 07:39:45 np0005625203.localdomain bash[31754]: --> ceph-volume raw activate successful for osd ID: 1
Feb 20 07:39:45 np0005625203.localdomain systemd[1]: libpod-d8b745b0850a52de91b893275a40b015d5d3ac71c841359075b506182a64347b.scope: Deactivated successfully.
Feb 20 07:39:45 np0005625203.localdomain podman[31894]: 2026-02-20 07:39:45.613615539 +0000 UTC m=+0.053702224 container died d8b745b0850a52de91b893275a40b015d5d3ac71c841359075b506182a64347b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-1-activate, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2026-02-09T10:25:24Z, version=7, vcs-type=git, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, io.openshift.expose-services=, build-date=2026-02-09T10:25:24Z, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, release=1770267347, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, description=Red Hat Ceph Storage 7, name=rhceph, RELEASE=main, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, distribution-scope=public, io.openshift.tags=rhceph ceph, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.42.2)
Feb 20 07:39:45 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-e6f60f32cd3625327a79bfde7a73b78a14645eaca1db72d2c270daadead2cdc9-merged.mount: Deactivated successfully.
Feb 20 07:39:45 np0005625203.localdomain podman[31894]: 2026-02-20 07:39:45.645825034 +0000 UTC m=+0.085911688 container remove d8b745b0850a52de91b893275a40b015d5d3ac71c841359075b506182a64347b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-1-activate, com.redhat.component=rhceph-container, io.buildah.version=1.42.2, org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, GIT_CLEAN=True, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, name=rhceph, io.openshift.tags=rhceph ceph, distribution-scope=public, CEPH_POINT_RELEASE=, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, GIT_BRANCH=main, build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Feb 20 07:39:45 np0005625203.localdomain podman[31952]: 
Feb 20 07:39:45 np0005625203.localdomain podman[31952]: 2026-02-20 07:39:45.995316369 +0000 UTC m=+0.071976472 container create 95a819f5a92fe8df290a90e41baadb7eba23a8a4a092178d508538027b9aea19 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-1, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, vcs-type=git, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.42.2, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.created=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., release=1770267347, distribution-scope=public, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, version=7, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, RELEASE=main, io.openshift.tags=rhceph ceph)
Feb 20 07:39:46 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19bef7074517e99c19430ba33d539f02285abbbbb34476dada55a5498ed9e7c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 20 07:39:46 np0005625203.localdomain podman[31952]: 2026-02-20 07:39:45.967296082 +0000 UTC m=+0.043956195 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 07:39:46 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19bef7074517e99c19430ba33d539f02285abbbbb34476dada55a5498ed9e7c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 20 07:39:46 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19bef7074517e99c19430ba33d539f02285abbbbb34476dada55a5498ed9e7c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 20 07:39:46 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19bef7074517e99c19430ba33d539f02285abbbbb34476dada55a5498ed9e7c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 20 07:39:46 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19bef7074517e99c19430ba33d539f02285abbbbb34476dada55a5498ed9e7c6/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Feb 20 07:39:46 np0005625203.localdomain podman[31952]: 2026-02-20 07:39:46.105527438 +0000 UTC m=+0.182187521 container init 95a819f5a92fe8df290a90e41baadb7eba23a8a4a092178d508538027b9aea19 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-1, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, build-date=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_BRANCH=main, release=1770267347, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, vcs-type=git, version=7, distribution-scope=public, GIT_CLEAN=True, io.buildah.version=1.42.2, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, architecture=x86_64)
Feb 20 07:39:46 np0005625203.localdomain podman[31952]: 2026-02-20 07:39:46.114690769 +0000 UTC m=+0.191350852 container start 95a819f5a92fe8df290a90e41baadb7eba23a8a4a092178d508538027b9aea19 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-1, distribution-scope=public, ceph=True, RELEASE=main, description=Red Hat Ceph Storage 7, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2026-02-09T10:25:24Z, vcs-type=git, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=7, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Feb 20 07:39:46 np0005625203.localdomain bash[31952]: 95a819f5a92fe8df290a90e41baadb7eba23a8a4a092178d508538027b9aea19
Feb 20 07:39:46 np0005625203.localdomain systemd[1]: Started Ceph osd.1 for a8557ee9-b55d-5519-942c-cf8f6172f1d8.
Feb 20 07:39:46 np0005625203.localdomain sudo[31463]: pam_unix(sudo:session): session closed for user root
Feb 20 07:39:46 np0005625203.localdomain ceph-osd[31970]: set uid:gid to 167:167 (ceph:ceph)
Feb 20 07:39:46 np0005625203.localdomain ceph-osd[31970]: ceph version 18.2.1-381.el9cp (984f410e2a30899deb131725765b62212b1621db) reef (stable), process ceph-osd, pid 2
Feb 20 07:39:46 np0005625203.localdomain ceph-osd[31970]: pidfile_write: ignore empty --pid-file
Feb 20 07:39:46 np0005625203.localdomain ceph-osd[31970]: bdev(0x55f8e7a22e00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 20 07:39:46 np0005625203.localdomain ceph-osd[31970]: bdev(0x55f8e7a22e00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 20 07:39:46 np0005625203.localdomain ceph-osd[31970]: bdev(0x55f8e7a22e00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 20 07:39:46 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Feb 20 07:39:46 np0005625203.localdomain ceph-osd[31970]: bdev(0x55f8e7a23180 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 20 07:39:46 np0005625203.localdomain ceph-osd[31970]: bdev(0x55f8e7a23180 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 20 07:39:46 np0005625203.localdomain ceph-osd[31970]: bdev(0x55f8e7a23180 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 20 07:39:46 np0005625203.localdomain ceph-osd[31970]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Feb 20 07:39:46 np0005625203.localdomain ceph-osd[31970]: bdev(0x55f8e7a23180 /var/lib/ceph/osd/ceph-1/block) close
Feb 20 07:39:46 np0005625203.localdomain sudo[31983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 07:39:46 np0005625203.localdomain sudo[31983]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:39:46 np0005625203.localdomain sudo[31983]: pam_unix(sudo:session): session closed for user root
Feb 20 07:39:46 np0005625203.localdomain sudo[31998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 07:39:46 np0005625203.localdomain sudo[31998]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:39:46 np0005625203.localdomain ceph-osd[31970]: bdev(0x55f8e7a22e00 /var/lib/ceph/osd/ceph-1/block) close
Feb 20 07:39:46 np0005625203.localdomain ceph-osd[31970]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Feb 20 07:39:46 np0005625203.localdomain ceph-osd[31970]: load: jerasure load: lrc 
Feb 20 07:39:46 np0005625203.localdomain ceph-osd[31970]: bdev(0x55f8e7a22e00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 20 07:39:46 np0005625203.localdomain ceph-osd[31970]: bdev(0x55f8e7a22e00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 20 07:39:46 np0005625203.localdomain ceph-osd[31970]: bdev(0x55f8e7a22e00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 20 07:39:46 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Feb 20 07:39:46 np0005625203.localdomain ceph-osd[31970]: bdev(0x55f8e7a22e00 /var/lib/ceph/osd/ceph-1/block) close
Feb 20 07:39:46 np0005625203.localdomain podman[32062]: 
Feb 20 07:39:46 np0005625203.localdomain podman[32062]: 2026-02-20 07:39:46.94096844 +0000 UTC m=+0.065927996 container create 6d550162ab99d585ad75eb277e81b6eb16f98d83384acabd9f09421ac517c7f6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_carver, ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, build-date=2026-02-09T10:25:24Z, vcs-type=git, GIT_CLEAN=True, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, release=1770267347, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, version=7, io.openshift.expose-services=, RELEASE=main, name=rhceph)
Feb 20 07:39:46 np0005625203.localdomain ceph-osd[31970]: bdev(0x55f8e7a22e00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 20 07:39:46 np0005625203.localdomain ceph-osd[31970]: bdev(0x55f8e7a22e00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 20 07:39:46 np0005625203.localdomain ceph-osd[31970]: bdev(0x55f8e7a22e00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 20 07:39:46 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Feb 20 07:39:46 np0005625203.localdomain ceph-osd[31970]: bdev(0x55f8e7a22e00 /var/lib/ceph/osd/ceph-1/block) close
Feb 20 07:39:46 np0005625203.localdomain systemd[1]: Started libpod-conmon-6d550162ab99d585ad75eb277e81b6eb16f98d83384acabd9f09421ac517c7f6.scope.
Feb 20 07:39:47 np0005625203.localdomain podman[32062]: 2026-02-20 07:39:46.909575981 +0000 UTC m=+0.034535567 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 07:39:47 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 07:39:47 np0005625203.localdomain podman[32062]: 2026-02-20 07:39:47.038928335 +0000 UTC m=+0.163887881 container init 6d550162ab99d585ad75eb277e81b6eb16f98d83384acabd9f09421ac517c7f6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_carver, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, architecture=x86_64, io.openshift.expose-services=, RELEASE=main, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public, com.redhat.component=rhceph-container, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.created=2026-02-09T10:25:24Z, version=7, CEPH_POINT_RELEASE=, name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z)
Feb 20 07:39:47 np0005625203.localdomain systemd[1]: tmp-crun.ejEcjA.mount: Deactivated successfully.
Feb 20 07:39:47 np0005625203.localdomain podman[32062]: 2026-02-20 07:39:47.052138189 +0000 UTC m=+0.177097745 container start 6d550162ab99d585ad75eb277e81b6eb16f98d83384acabd9f09421ac517c7f6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_carver, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, distribution-scope=public, architecture=x86_64, release=1770267347, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, GIT_CLEAN=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, version=7, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, ceph=True)
Feb 20 07:39:47 np0005625203.localdomain podman[32062]: 2026-02-20 07:39:47.052415738 +0000 UTC m=+0.177375284 container attach 6d550162ab99d585ad75eb277e81b6eb16f98d83384acabd9f09421ac517c7f6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_carver, org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2026-02-09T10:25:24Z, architecture=x86_64, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, ceph=True, vendor=Red Hat, Inc., name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, io.buildah.version=1.42.2, distribution-scope=public, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, CEPH_POINT_RELEASE=, io.openshift.expose-services=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git)
Feb 20 07:39:47 np0005625203.localdomain determined_carver[32081]: 167 167
Feb 20 07:39:47 np0005625203.localdomain systemd[1]: libpod-6d550162ab99d585ad75eb277e81b6eb16f98d83384acabd9f09421ac517c7f6.scope: Deactivated successfully.
Feb 20 07:39:47 np0005625203.localdomain podman[32062]: 2026-02-20 07:39:47.056926466 +0000 UTC m=+0.181886082 container died 6d550162ab99d585ad75eb277e81b6eb16f98d83384acabd9f09421ac517c7f6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_carver, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, ceph=True, architecture=x86_64, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=, RELEASE=main, name=rhceph, vcs-type=git, release=1770267347, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, CEPH_POINT_RELEASE=)
Feb 20 07:39:47 np0005625203.localdomain podman[32086]: 2026-02-20 07:39:47.145106532 +0000 UTC m=+0.076044116 container remove 6d550162ab99d585ad75eb277e81b6eb16f98d83384acabd9f09421ac517c7f6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=determined_carver, ceph=True, CEPH_POINT_RELEASE=, GIT_CLEAN=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, distribution-scope=public, GIT_BRANCH=main, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, release=1770267347, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, vcs-type=git)
Feb 20 07:39:47 np0005625203.localdomain systemd[1]: libpod-conmon-6d550162ab99d585ad75eb277e81b6eb16f98d83384acabd9f09421ac517c7f6.scope: Deactivated successfully.
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: bdev(0x55f8e7a22e00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: bdev(0x55f8e7a22e00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: bdev(0x55f8e7a22e00 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: bdev(0x55f8e7a23180 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: bdev(0x55f8e7a23180 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: bdev(0x55f8e7a23180 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: bluefs mount
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: bluefs mount shared_bdev_used = 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: RocksDB version: 7.9.2
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Git sha 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Compile date 2026-02-06 00:00:00
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: DB SUMMARY
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: DB Session ID:  U7URNON0KGU7958MKF6V
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: CURRENT file:  CURRENT
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: IDENTITY file:  IDENTITY
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                         Options.error_if_exists: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                       Options.create_if_missing: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                         Options.paranoid_checks: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                                     Options.env: 0x55f8e7cb6cb0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                                      Options.fs: LegacyFileSystem
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                                Options.info_log: 0x55f8e89b0b40
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.max_file_opening_threads: 16
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                              Options.statistics: (nil)
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                               Options.use_fsync: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                       Options.max_log_file_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                       Options.keep_log_file_num: 1000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                    Options.recycle_log_file_num: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                         Options.allow_fallocate: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                        Options.allow_mmap_reads: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                       Options.allow_mmap_writes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                        Options.use_direct_reads: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.create_missing_column_families: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                              Options.db_log_dir: 
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                                 Options.wal_dir: db.wal
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.table_cache_numshardbits: 6
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.advise_random_on_open: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                    Options.db_write_buffer_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                    Options.write_buffer_manager: 0x55f8e7a0c140
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                            Options.rate_limiter: (nil)
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                       Options.wal_recovery_mode: 2
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.enable_thread_tracking: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.enable_pipelined_write: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.unordered_write: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                               Options.row_cache: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                              Options.wal_filter: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.allow_ingest_behind: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.two_write_queues: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.manual_wal_flush: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.wal_compression: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.atomic_flush: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                 Options.log_readahead_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                 Options.best_efforts_recovery: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.allow_data_in_errors: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.db_host_id: __hostname__
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.enforce_single_del_contracts: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.max_background_jobs: 4
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.max_background_compactions: -1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.max_subcompactions: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:           Options.writable_file_max_buffer_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.delayed_write_rate : 16777216
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.max_total_wal_size: 1073741824
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                          Options.max_open_files: -1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                          Options.bytes_per_sync: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:       Options.compaction_readahead_size: 2097152
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.max_background_flushes: -1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Compression algorithms supported:
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         kZSTD supported: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         kXpressCompression supported: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         kBZip2Compression supported: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         kZSTDNotFinalCompression supported: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         kLZ4Compression supported: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         kZlibCompression supported: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         kLZ4HCCompression supported: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         kSnappyCompression supported: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Fast CRC32 supported: Supported on x86
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: DMutex implementation: pthread_mutex_t
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default)
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.compaction_filter: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.compaction_filter_factory: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:  Options.sst_partitioner_factory: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f8e89b0d00)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x55f8e79fa850
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.write_buffer_size: 16777216
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:  Options.max_write_buffer_number: 64
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.compression: LZ4
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:       Options.prefix_extractor: nullptr
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.num_levels: 7
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.compression_opts.level: 32767
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.compression_opts.strategy: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.compression_opts.enabled: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                        Options.arena_block_size: 1048576
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.disable_auto_compactions: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.table_properties_collectors: 
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.inplace_update_support: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                           Options.bloom_locality: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                    Options.max_successive_merges: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.paranoid_file_checks: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.force_consistency_checks: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.report_bg_io_stats: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                               Options.ttl: 2592000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                       Options.enable_blob_files: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                           Options.min_blob_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                          Options.blob_file_size: 268435456
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.blob_file_starting_level: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0)
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:           Options.merge_operator: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.compaction_filter: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.compaction_filter_factory: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:  Options.sst_partitioner_factory: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f8e89b0d00)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x55f8e79fa850
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.write_buffer_size: 16777216
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:  Options.max_write_buffer_number: 64
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.compression: LZ4
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:       Options.prefix_extractor: nullptr
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.num_levels: 7
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.compression_opts.level: 32767
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.compression_opts.strategy: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.compression_opts.enabled: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                        Options.arena_block_size: 1048576
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.disable_auto_compactions: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.inplace_update_support: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                           Options.bloom_locality: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                    Options.max_successive_merges: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.paranoid_file_checks: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.force_consistency_checks: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.report_bg_io_stats: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                               Options.ttl: 2592000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                       Options.enable_blob_files: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                           Options.min_blob_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                          Options.blob_file_size: 268435456
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.blob_file_starting_level: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1)
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:           Options.merge_operator: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.compaction_filter: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.compaction_filter_factory: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:  Options.sst_partitioner_factory: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f8e89b0d00)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x55f8e79fa850
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.write_buffer_size: 16777216
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:  Options.max_write_buffer_number: 64
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.compression: LZ4
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:       Options.prefix_extractor: nullptr
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.num_levels: 7
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.compression_opts.level: 32767
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.compression_opts.strategy: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.compression_opts.enabled: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                        Options.arena_block_size: 1048576
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.disable_auto_compactions: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.inplace_update_support: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                           Options.bloom_locality: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                    Options.max_successive_merges: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.paranoid_file_checks: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.force_consistency_checks: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.report_bg_io_stats: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                               Options.ttl: 2592000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                       Options.enable_blob_files: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                           Options.min_blob_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                          Options.blob_file_size: 268435456
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.blob_file_starting_level: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2)
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:           Options.merge_operator: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.compaction_filter: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.compaction_filter_factory: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:  Options.sst_partitioner_factory: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f8e89b0d00)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x55f8e79fa850
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.write_buffer_size: 16777216
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:  Options.max_write_buffer_number: 64
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.compression: LZ4
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:       Options.prefix_extractor: nullptr
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.num_levels: 7
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.compression_opts.level: 32767
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.compression_opts.strategy: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.compression_opts.enabled: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                        Options.arena_block_size: 1048576
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.disable_auto_compactions: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.inplace_update_support: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                           Options.bloom_locality: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                    Options.max_successive_merges: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.paranoid_file_checks: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.force_consistency_checks: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.report_bg_io_stats: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                               Options.ttl: 2592000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                       Options.enable_blob_files: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                           Options.min_blob_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                          Options.blob_file_size: 268435456
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.blob_file_starting_level: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0)
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:           Options.merge_operator: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.compaction_filter: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.compaction_filter_factory: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:  Options.sst_partitioner_factory: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f8e89b0d00)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x55f8e79fa850
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.write_buffer_size: 16777216
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:  Options.max_write_buffer_number: 64
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.compression: LZ4
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:       Options.prefix_extractor: nullptr
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.num_levels: 7
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.compression_opts.level: 32767
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.compression_opts.strategy: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.compression_opts.enabled: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                        Options.arena_block_size: 1048576
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.disable_auto_compactions: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.inplace_update_support: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                           Options.bloom_locality: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                    Options.max_successive_merges: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.paranoid_file_checks: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.force_consistency_checks: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.report_bg_io_stats: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                               Options.ttl: 2592000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                       Options.enable_blob_files: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                           Options.min_blob_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                          Options.blob_file_size: 268435456
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.blob_file_starting_level: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1)
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:           Options.merge_operator: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.compaction_filter: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.compaction_filter_factory: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:  Options.sst_partitioner_factory: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f8e89b0d00)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x55f8e79fa850
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.write_buffer_size: 16777216
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:  Options.max_write_buffer_number: 64
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.compression: LZ4
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:       Options.prefix_extractor: nullptr
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.num_levels: 7
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.compression_opts.level: 32767
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.compression_opts.strategy: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.compression_opts.enabled: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                        Options.arena_block_size: 1048576
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.disable_auto_compactions: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.inplace_update_support: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                           Options.bloom_locality: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                    Options.max_successive_merges: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.paranoid_file_checks: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.force_consistency_checks: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.report_bg_io_stats: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                               Options.ttl: 2592000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                       Options.enable_blob_files: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                           Options.min_blob_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                          Options.blob_file_size: 268435456
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.blob_file_starting_level: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2)
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:           Options.merge_operator: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.compaction_filter: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.compaction_filter_factory: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:  Options.sst_partitioner_factory: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f8e89b0d00)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x55f8e79fa850
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.write_buffer_size: 16777216
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:  Options.max_write_buffer_number: 64
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.compression: LZ4
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:       Options.prefix_extractor: nullptr
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.num_levels: 7
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.compression_opts.level: 32767
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.compression_opts.strategy: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.compression_opts.enabled: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                        Options.arena_block_size: 1048576
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.disable_auto_compactions: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.inplace_update_support: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                           Options.bloom_locality: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                    Options.max_successive_merges: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.paranoid_file_checks: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.force_consistency_checks: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.report_bg_io_stats: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                               Options.ttl: 2592000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                       Options.enable_blob_files: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                           Options.min_blob_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                          Options.blob_file_size: 268435456
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.blob_file_starting_level: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0)
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:           Options.merge_operator: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.compaction_filter: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.compaction_filter_factory: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:  Options.sst_partitioner_factory: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f8e89b0f20)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x55f8e79fa2d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 536870912
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.write_buffer_size: 16777216
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:  Options.max_write_buffer_number: 64
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.compression: LZ4
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:       Options.prefix_extractor: nullptr
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.num_levels: 7
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.compression_opts.level: 32767
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.compression_opts.strategy: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.compression_opts.enabled: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                        Options.arena_block_size: 1048576
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.disable_auto_compactions: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.inplace_update_support: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                           Options.bloom_locality: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                    Options.max_successive_merges: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.paranoid_file_checks: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.force_consistency_checks: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.report_bg_io_stats: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                               Options.ttl: 2592000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                       Options.enable_blob_files: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                           Options.min_blob_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                          Options.blob_file_size: 268435456
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.blob_file_starting_level: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1)
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:           Options.merge_operator: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.compaction_filter: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.compaction_filter_factory: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:  Options.sst_partitioner_factory: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f8e89b0f20)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x55f8e79fa2d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 536870912
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.write_buffer_size: 16777216
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:  Options.max_write_buffer_number: 64
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.compression: LZ4
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:       Options.prefix_extractor: nullptr
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.num_levels: 7
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.compression_opts.level: 32767
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.compression_opts.strategy: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.compression_opts.enabled: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                        Options.arena_block_size: 1048576
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.disable_auto_compactions: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.inplace_update_support: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                           Options.bloom_locality: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                    Options.max_successive_merges: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.paranoid_file_checks: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.force_consistency_checks: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.report_bg_io_stats: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                               Options.ttl: 2592000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                       Options.enable_blob_files: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                           Options.min_blob_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                          Options.blob_file_size: 268435456
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.blob_file_starting_level: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2)
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:           Options.merge_operator: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.compaction_filter: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.compaction_filter_factory: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:  Options.sst_partitioner_factory: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f8e89b0f20)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x55f8e79fa2d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 536870912
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.write_buffer_size: 16777216
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:  Options.max_write_buffer_number: 64
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.compression: LZ4
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:       Options.prefix_extractor: nullptr
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.num_levels: 7
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.compression_opts.level: 32767
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.compression_opts.strategy: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.compression_opts.enabled: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                        Options.arena_block_size: 1048576
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.disable_auto_compactions: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.inplace_update_support: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                           Options.bloom_locality: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                    Options.max_successive_merges: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.paranoid_file_checks: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.force_consistency_checks: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.report_bg_io_stats: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                               Options.ttl: 2592000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                       Options.enable_blob_files: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                           Options.min_blob_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                          Options.blob_file_size: 268435456
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.blob_file_starting_level: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L)
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P)
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: f16081b4-e5e9-4dac-abbd-a41b96e586bb
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771573187280333, "job": 1, "event": "recovery_started", "wal_files": [31]}
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771573187280589, "job": 1, "event": "recovery_finished"}
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: freelist init
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: freelist _read_cfg
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: bluefs umount
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: bdev(0x55f8e7a23180 /var/lib/ceph/osd/ceph-1/block) close
Feb 20 07:39:47 np0005625203.localdomain podman[32308]: 
Feb 20 07:39:47 np0005625203.localdomain podman[32308]: 2026-02-20 07:39:47.473060018 +0000 UTC m=+0.077424717 container create c2a5ce300bcf0c3a82e796992eaee0f192aea5af2615804502f37da71a1d2f61 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-4-activate-test, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.buildah.version=1.42.2, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, version=7, name=rhceph, vendor=Red Hat, Inc., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Feb 20 07:39:47 np0005625203.localdomain systemd[1]: Started libpod-conmon-c2a5ce300bcf0c3a82e796992eaee0f192aea5af2615804502f37da71a1d2f61.scope.
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: bdev(0x55f8e7a23180 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: bdev(0x55f8e7a23180 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: bdev(0x55f8e7a23180 /var/lib/ceph/osd/ceph-1/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 7.0 GiB
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: bluefs mount
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: bluefs mount shared_bdev_used = 4718592
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: RocksDB version: 7.9.2
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Git sha 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Compile date 2026-02-06 00:00:00
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: DB SUMMARY
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: DB Session ID:  U7URNON0KGU7958MKF6U
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: CURRENT file:  CURRENT
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: IDENTITY file:  IDENTITY
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                         Options.error_if_exists: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                       Options.create_if_missing: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                         Options.paranoid_checks: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                                     Options.env: 0x55f8e7aae620
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                                      Options.fs: LegacyFileSystem
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                                Options.info_log: 0x55f8e7ac2820
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.max_file_opening_threads: 16
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                              Options.statistics: (nil)
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                               Options.use_fsync: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                       Options.max_log_file_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb 20 07:39:47 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                       Options.keep_log_file_num: 1000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                    Options.recycle_log_file_num: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                         Options.allow_fallocate: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                        Options.allow_mmap_reads: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                       Options.allow_mmap_writes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                        Options.use_direct_reads: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.create_missing_column_families: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                              Options.db_log_dir: 
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                                 Options.wal_dir: db.wal
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.table_cache_numshardbits: 6
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.advise_random_on_open: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                    Options.db_write_buffer_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                    Options.write_buffer_manager: 0x55f8e7a0d5e0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                            Options.rate_limiter: (nil)
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                       Options.wal_recovery_mode: 2
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.enable_thread_tracking: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.enable_pipelined_write: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.unordered_write: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                               Options.row_cache: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                              Options.wal_filter: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.allow_ingest_behind: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.two_write_queues: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.manual_wal_flush: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.wal_compression: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.atomic_flush: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                 Options.log_readahead_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                 Options.best_efforts_recovery: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.allow_data_in_errors: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.db_host_id: __hostname__
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.enforce_single_del_contracts: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.max_background_jobs: 4
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.max_background_compactions: -1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.max_subcompactions: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:           Options.writable_file_max_buffer_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.delayed_write_rate : 16777216
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.max_total_wal_size: 1073741824
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                          Options.max_open_files: -1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                          Options.bytes_per_sync: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:       Options.compaction_readahead_size: 2097152
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.max_background_flushes: -1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Compression algorithms supported:
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         kZSTD supported: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         kXpressCompression supported: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         kBZip2Compression supported: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         kZSTDNotFinalCompression supported: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         kLZ4Compression supported: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         kZlibCompression supported: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         kLZ4HCCompression supported: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         kSnappyCompression supported: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Fast CRC32 supported: Supported on x86
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: DMutex implementation: pthread_mutex_t
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default)
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.compaction_filter: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.compaction_filter_factory: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:  Options.sst_partitioner_factory: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f8e89b10c0)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x55f8e79fa2d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.write_buffer_size: 16777216
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:  Options.max_write_buffer_number: 64
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.compression: LZ4
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:       Options.prefix_extractor: nullptr
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.num_levels: 7
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.compression_opts.level: 32767
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.compression_opts.strategy: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.compression_opts.enabled: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                        Options.arena_block_size: 1048576
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.disable_auto_compactions: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.table_properties_collectors: 
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.inplace_update_support: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                           Options.bloom_locality: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                    Options.max_successive_merges: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.paranoid_file_checks: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.force_consistency_checks: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.report_bg_io_stats: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                               Options.ttl: 2592000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                       Options.enable_blob_files: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                           Options.min_blob_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                          Options.blob_file_size: 268435456
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.blob_file_starting_level: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0)
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:           Options.merge_operator: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.compaction_filter: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.compaction_filter_factory: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:  Options.sst_partitioner_factory: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f8e89b10c0)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x55f8e79fa2d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.write_buffer_size: 16777216
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:  Options.max_write_buffer_number: 64
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.compression: LZ4
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:       Options.prefix_extractor: nullptr
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.num_levels: 7
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.compression_opts.level: 32767
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.compression_opts.strategy: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.compression_opts.enabled: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                        Options.arena_block_size: 1048576
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.disable_auto_compactions: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.inplace_update_support: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                           Options.bloom_locality: 0
Feb 20 07:39:47 np0005625203.localdomain podman[32308]: 2026-02-20 07:39:47.442404611 +0000 UTC m=+0.046769300 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                    Options.max_successive_merges: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.paranoid_file_checks: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.force_consistency_checks: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.report_bg_io_stats: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                               Options.ttl: 2592000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                       Options.enable_blob_files: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                           Options.min_blob_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                          Options.blob_file_size: 268435456
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.blob_file_starting_level: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1)
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:           Options.merge_operator: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.compaction_filter: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.compaction_filter_factory: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:  Options.sst_partitioner_factory: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f8e89b10c0)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x55f8e79fa2d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.write_buffer_size: 16777216
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:  Options.max_write_buffer_number: 64
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.compression: LZ4
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:       Options.prefix_extractor: nullptr
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.num_levels: 7
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.compression_opts.level: 32767
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.compression_opts.strategy: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.compression_opts.enabled: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 20 07:39:47 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adf053b6c2c5688f7081b570c9aaaa08bfecdf36756a6d479e73fc9157e80445/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                        Options.arena_block_size: 1048576
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.disable_auto_compactions: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.inplace_update_support: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                           Options.bloom_locality: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                    Options.max_successive_merges: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.paranoid_file_checks: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.force_consistency_checks: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.report_bg_io_stats: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                               Options.ttl: 2592000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                       Options.enable_blob_files: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                           Options.min_blob_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                          Options.blob_file_size: 268435456
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.blob_file_starting_level: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2)
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:           Options.merge_operator: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.compaction_filter: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.compaction_filter_factory: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:  Options.sst_partitioner_factory: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f8e89b10c0)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x55f8e79fa2d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.write_buffer_size: 16777216
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:  Options.max_write_buffer_number: 64
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.compression: LZ4
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:       Options.prefix_extractor: nullptr
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.num_levels: 7
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.compression_opts.level: 32767
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.compression_opts.strategy: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.compression_opts.enabled: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                        Options.arena_block_size: 1048576
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.disable_auto_compactions: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.inplace_update_support: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                           Options.bloom_locality: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                    Options.max_successive_merges: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.paranoid_file_checks: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.force_consistency_checks: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.report_bg_io_stats: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                               Options.ttl: 2592000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                       Options.enable_blob_files: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                           Options.min_blob_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                          Options.blob_file_size: 268435456
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.blob_file_starting_level: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0)
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:           Options.merge_operator: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.compaction_filter: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.compaction_filter_factory: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:  Options.sst_partitioner_factory: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f8e89b10c0)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x55f8e79fa2d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.write_buffer_size: 16777216
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:  Options.max_write_buffer_number: 64
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.compression: LZ4
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:       Options.prefix_extractor: nullptr
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.num_levels: 7
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.compression_opts.level: 32767
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.compression_opts.strategy: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.compression_opts.enabled: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                        Options.arena_block_size: 1048576
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.disable_auto_compactions: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.inplace_update_support: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                           Options.bloom_locality: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                    Options.max_successive_merges: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.paranoid_file_checks: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.force_consistency_checks: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.report_bg_io_stats: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                               Options.ttl: 2592000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                       Options.enable_blob_files: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                           Options.min_blob_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                          Options.blob_file_size: 268435456
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.blob_file_starting_level: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1)
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:           Options.merge_operator: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.compaction_filter: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.compaction_filter_factory: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:  Options.sst_partitioner_factory: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f8e89b10c0)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x55f8e79fa2d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.write_buffer_size: 16777216
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:  Options.max_write_buffer_number: 64
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.compression: LZ4
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:       Options.prefix_extractor: nullptr
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.num_levels: 7
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.compression_opts.level: 32767
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.compression_opts.strategy: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.compression_opts.enabled: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                        Options.arena_block_size: 1048576
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.disable_auto_compactions: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.inplace_update_support: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                           Options.bloom_locality: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                    Options.max_successive_merges: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.paranoid_file_checks: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.force_consistency_checks: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.report_bg_io_stats: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                               Options.ttl: 2592000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                       Options.enable_blob_files: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                           Options.min_blob_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                          Options.blob_file_size: 268435456
Feb 20 07:39:47 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adf053b6c2c5688f7081b570c9aaaa08bfecdf36756a6d479e73fc9157e80445/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.blob_file_starting_level: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2)
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:           Options.merge_operator: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.compaction_filter: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.compaction_filter_factory: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:  Options.sst_partitioner_factory: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f8e89b10c0)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x55f8e79fa2d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.write_buffer_size: 16777216
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:  Options.max_write_buffer_number: 64
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.compression: LZ4
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:       Options.prefix_extractor: nullptr
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.num_levels: 7
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.compression_opts.level: 32767
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.compression_opts.strategy: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.compression_opts.enabled: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                        Options.arena_block_size: 1048576
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.disable_auto_compactions: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.inplace_update_support: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                           Options.bloom_locality: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                    Options.max_successive_merges: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.paranoid_file_checks: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.force_consistency_checks: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.report_bg_io_stats: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                               Options.ttl: 2592000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                       Options.enable_blob_files: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                           Options.min_blob_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                          Options.blob_file_size: 268435456
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.blob_file_starting_level: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0)
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:           Options.merge_operator: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.compaction_filter: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.compaction_filter_factory: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:  Options.sst_partitioner_factory: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f8e7ac2be0)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x55f8e79fb610
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 536870912
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.write_buffer_size: 16777216
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:  Options.max_write_buffer_number: 64
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.compression: LZ4
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:       Options.prefix_extractor: nullptr
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.num_levels: 7
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.compression_opts.level: 32767
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.compression_opts.strategy: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.compression_opts.enabled: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                        Options.arena_block_size: 1048576
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.disable_auto_compactions: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.inplace_update_support: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                           Options.bloom_locality: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                    Options.max_successive_merges: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.paranoid_file_checks: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.force_consistency_checks: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.report_bg_io_stats: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                               Options.ttl: 2592000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                       Options.enable_blob_files: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                           Options.min_blob_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                          Options.blob_file_size: 268435456
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.blob_file_starting_level: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1)
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:           Options.merge_operator: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.compaction_filter: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.compaction_filter_factory: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:  Options.sst_partitioner_factory: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f8e7ac2be0)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x55f8e79fb610
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 536870912
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.write_buffer_size: 16777216
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:  Options.max_write_buffer_number: 64
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.compression: LZ4
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:       Options.prefix_extractor: nullptr
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.num_levels: 7
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.compression_opts.level: 32767
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.compression_opts.strategy: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.compression_opts.enabled: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                        Options.arena_block_size: 1048576
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.disable_auto_compactions: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.inplace_update_support: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                           Options.bloom_locality: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                    Options.max_successive_merges: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.paranoid_file_checks: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.force_consistency_checks: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.report_bg_io_stats: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                               Options.ttl: 2592000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                       Options.enable_blob_files: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                           Options.min_blob_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                          Options.blob_file_size: 268435456
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.blob_file_starting_level: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2)
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:           Options.merge_operator: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.compaction_filter: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.compaction_filter_factory: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:  Options.sst_partitioner_factory: None
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f8e7ac2be0)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x55f8e79fb610
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 536870912
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.write_buffer_size: 16777216
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:  Options.max_write_buffer_number: 64
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.compression: LZ4
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:       Options.prefix_extractor: nullptr
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.num_levels: 7
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.compression_opts.level: 32767
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.compression_opts.strategy: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                  Options.compression_opts.enabled: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                        Options.arena_block_size: 1048576
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.disable_auto_compactions: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.inplace_update_support: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                           Options.bloom_locality: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                    Options.max_successive_merges: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.paranoid_file_checks: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.force_consistency_checks: 1
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.report_bg_io_stats: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                               Options.ttl: 2592000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                       Options.enable_blob_files: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                           Options.min_blob_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                          Options.blob_file_size: 268435456
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb:                Options.blob_file_starting_level: 0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L)
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P)
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: f16081b4-e5e9-4dac-abbd-a41b96e586bb
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771573187560019, "job": 1, "event": "recovery_started", "wal_files": [31]}
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771573187568748, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1261, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771573187, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f16081b4-e5e9-4dac-abbd-a41b96e586bb", "db_session_id": "U7URNON0KGU7958MKF6U", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771573187573260, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1609, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771573187, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f16081b4-e5e9-4dac-abbd-a41b96e586bb", "db_session_id": "U7URNON0KGU7958MKF6U", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771573187576742, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1290, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771573187, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f16081b4-e5e9-4dac-abbd-a41b96e586bb", "db_session_id": "U7URNON0KGU7958MKF6U", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Feb 20 07:39:47 np0005625203.localdomain sshd[32507]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:39:47 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adf053b6c2c5688f7081b570c9aaaa08bfecdf36756a6d479e73fc9157e80445/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/db_impl/db_impl_open.cc:1432] Failed to truncate log #31: IO error: No such file or directory: While open a file for appending: db.wal/000031.log: No such file or directory
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771573187581042, "job": 1, "event": "recovery_finished"}
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Feb 20 07:39:47 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adf053b6c2c5688f7081b570c9aaaa08bfecdf36756a6d479e73fc9157e80445/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 20 07:39:47 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adf053b6c2c5688f7081b570c9aaaa08bfecdf36756a6d479e73fc9157e80445/merged/var/lib/ceph/osd/ceph-4 supports timestamps until 2038 (0x7fffffff)
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55f8e7a66700
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: DB pointer 0x55f8e890da00
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                                          Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          
                                                          ** Compaction Stats [default] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      2/0    2.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                                           Sum      2/0    2.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [default] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55f8e79fa2d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [default] **
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55f8e79fa2d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-0] **
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55f8e79fa2d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-1] **
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55f8e79fa2d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-2] **
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.57 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Sum      1/0    1.57 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55f8e79fa2d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-0] **
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55f8e79fa2d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-1] **
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55f8e79fa2d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-2] **
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55f8e79fb610#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-0] **
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55f8e79fb610#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-1] **
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.26 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                           Sum      1/0    1.26 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55f8e79fb610#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-2] **
                                                          
                                                          ** Compaction Stats [L] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [L] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55f8e79fa2d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [L] **
                                                          
                                                          ** Compaction Stats [P] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [P] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55f8e79fa2d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [P] **
Feb 20 07:39:47 np0005625203.localdomain podman[32308]: 2026-02-20 07:39:47.611817081 +0000 UTC m=+0.216181750 container init c2a5ce300bcf0c3a82e796992eaee0f192aea5af2615804502f37da71a1d2f61 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-4-activate-test, org.opencontainers.image.created=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.openshift.expose-services=, build-date=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, release=1770267347, io.buildah.version=1.42.2, GIT_BRANCH=main, ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git)
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: <cls> /builddir/build/BUILD/ceph-18.2.1/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: <cls> /builddir/build/BUILD/ceph-18.2.1/src/cls/hello/cls_hello.cc:316: loading cls_hello
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: _get_class not permitted to load lua
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: _get_class not permitted to load sdk
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: _get_class not permitted to load test_remote_reads
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: osd.1 0 load_pgs
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: osd.1 0 load_pgs opened 0 pgs
Feb 20 07:39:47 np0005625203.localdomain ceph-osd[31970]: osd.1 0 log_to_monitors true
Feb 20 07:39:47 np0005625203.localdomain podman[32308]: 2026-02-20 07:39:47.621873239 +0000 UTC m=+0.226237908 container start c2a5ce300bcf0c3a82e796992eaee0f192aea5af2615804502f37da71a1d2f61 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-4-activate-test, io.openshift.expose-services=, release=1770267347, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vcs-type=git, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, io.openshift.tags=rhceph ceph, io.buildah.version=1.42.2, name=rhceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, ceph=True, build-date=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 07:39:47 np0005625203.localdomain podman[32308]: 2026-02-20 07:39:47.622424555 +0000 UTC m=+0.226789254 container attach c2a5ce300bcf0c3a82e796992eaee0f192aea5af2615804502f37da71a1d2f61 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-4-activate-test, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, version=7, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2026-02-09T10:25:24Z, ceph=True, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, GIT_CLEAN=True, io.buildah.version=1.42.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, build-date=2026-02-09T10:25:24Z, RELEASE=main, name=rhceph, io.openshift.expose-services=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers)
Feb 20 07:39:47 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-1[31966]: 2026-02-20T07:39:47.620+0000 7f3d54b09a80 -1 osd.1 0 log_to_monitors true
Feb 20 07:39:47 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-4-activate-test[32322]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Feb 20 07:39:47 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-4-activate-test[32322]:                             [--no-systemd] [--no-tmpfs]
Feb 20 07:39:47 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-4-activate-test[32322]: ceph-volume activate: error: unrecognized arguments: --bad-option
Feb 20 07:39:47 np0005625203.localdomain systemd[1]: libpod-c2a5ce300bcf0c3a82e796992eaee0f192aea5af2615804502f37da71a1d2f61.scope: Deactivated successfully.
Feb 20 07:39:47 np0005625203.localdomain podman[32308]: 2026-02-20 07:39:47.82015642 +0000 UTC m=+0.424521129 container died c2a5ce300bcf0c3a82e796992eaee0f192aea5af2615804502f37da71a1d2f61 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-4-activate-test, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, org.opencontainers.image.created=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.openshift.expose-services=, release=1770267347, build-date=2026-02-09T10:25:24Z, GIT_BRANCH=main, RELEASE=main, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, architecture=x86_64, name=rhceph)
Feb 20 07:39:47 np0005625203.localdomain podman[32544]: 2026-02-20 07:39:47.890350846 +0000 UTC m=+0.063947315 container remove c2a5ce300bcf0c3a82e796992eaee0f192aea5af2615804502f37da71a1d2f61 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-4-activate-test, vcs-type=git, ceph=True, name=rhceph, release=1770267347, distribution-scope=public, RELEASE=main, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, GIT_CLEAN=True, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Feb 20 07:39:47 np0005625203.localdomain systemd[1]: libpod-conmon-c2a5ce300bcf0c3a82e796992eaee0f192aea5af2615804502f37da71a1d2f61.scope: Deactivated successfully.
Feb 20 07:39:47 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-39b2104c674ecbb4405ae358a74e0a0875423234d61cbaddb3c415f96dd2710f-merged.mount: Deactivated successfully.
Feb 20 07:39:48 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 07:39:48 np0005625203.localdomain sshd[32507]: Invalid user ftptest from 189.190.2.14 port 53412
Feb 20 07:39:48 np0005625203.localdomain systemd-sysv-generator[32602]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 07:39:48 np0005625203.localdomain systemd-rc-local-generator[32591]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 07:39:48 np0005625203.localdomain sshd[32507]: Received disconnect from 189.190.2.14 port 53412:11: Bye Bye [preauth]
Feb 20 07:39:48 np0005625203.localdomain sshd[32507]: Disconnected from invalid user ftptest 189.190.2.14 port 53412 [preauth]
Feb 20 07:39:48 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 07:39:48 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 07:39:48 np0005625203.localdomain systemd-rc-local-generator[32638]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 07:39:48 np0005625203.localdomain systemd-sysv-generator[32645]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 07:39:48 np0005625203.localdomain sshd[32651]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:39:48 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 07:39:48 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Feb 20 07:39:48 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Feb 20 07:39:48 np0005625203.localdomain systemd[1]: Starting Ceph osd.4 for a8557ee9-b55d-5519-942c-cf8f6172f1d8...
Feb 20 07:39:48 np0005625203.localdomain ceph-osd[31970]: osd.1 0 done with init, starting boot process
Feb 20 07:39:48 np0005625203.localdomain ceph-osd[31970]: osd.1 0 start_boot
Feb 20 07:39:48 np0005625203.localdomain ceph-osd[31970]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Feb 20 07:39:48 np0005625203.localdomain ceph-osd[31970]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Feb 20 07:39:48 np0005625203.localdomain ceph-osd[31970]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Feb 20 07:39:48 np0005625203.localdomain ceph-osd[31970]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Feb 20 07:39:48 np0005625203.localdomain ceph-osd[31970]: osd.1 0  bench count 12288000 bsize 4 KiB
Feb 20 07:39:49 np0005625203.localdomain podman[32702]: 
Feb 20 07:39:49 np0005625203.localdomain podman[32702]: 2026-02-20 07:39:49.056151679 +0000 UTC m=+0.075246592 container create 4a7798404b667146227d1b7402224dbcc4a8cd14e518dd32b3a50fe60bf7da33 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-4-activate, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1770267347, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-09T10:25:24Z, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, ceph=True, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_BRANCH=main, distribution-scope=public, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, architecture=x86_64, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=)
Feb 20 07:39:49 np0005625203.localdomain systemd[1]: tmp-crun.OZF5xj.mount: Deactivated successfully.
Feb 20 07:39:49 np0005625203.localdomain podman[32702]: 2026-02-20 07:39:49.02512903 +0000 UTC m=+0.044223953 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 07:39:49 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 07:39:49 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37580546ed008aeea102140d7cc967b267cf01a5df18d7cd30cfe4f9d2657fa7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 20 07:39:49 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37580546ed008aeea102140d7cc967b267cf01a5df18d7cd30cfe4f9d2657fa7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 20 07:39:49 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37580546ed008aeea102140d7cc967b267cf01a5df18d7cd30cfe4f9d2657fa7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 20 07:39:49 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37580546ed008aeea102140d7cc967b267cf01a5df18d7cd30cfe4f9d2657fa7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 20 07:39:49 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37580546ed008aeea102140d7cc967b267cf01a5df18d7cd30cfe4f9d2657fa7/merged/var/lib/ceph/osd/ceph-4 supports timestamps until 2038 (0x7fffffff)
Feb 20 07:39:49 np0005625203.localdomain podman[32702]: 2026-02-20 07:39:49.199301716 +0000 UTC m=+0.218396629 container init 4a7798404b667146227d1b7402224dbcc4a8cd14e518dd32b3a50fe60bf7da33 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-4-activate, vcs-type=git, architecture=x86_64, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1770267347, GIT_BRANCH=main, io.openshift.expose-services=, ceph=True, io.buildah.version=1.42.2, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, distribution-scope=public, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 07:39:49 np0005625203.localdomain podman[32702]: 2026-02-20 07:39:49.215533361 +0000 UTC m=+0.234628274 container start 4a7798404b667146227d1b7402224dbcc4a8cd14e518dd32b3a50fe60bf7da33 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-4-activate, io.openshift.expose-services=, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-type=git, GIT_BRANCH=main, distribution-scope=public, com.redhat.component=rhceph-container, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, release=1770267347, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.42.2, build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Feb 20 07:39:49 np0005625203.localdomain podman[32702]: 2026-02-20 07:39:49.215935444 +0000 UTC m=+0.235030357 container attach 4a7798404b667146227d1b7402224dbcc4a8cd14e518dd32b3a50fe60bf7da33 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-4-activate, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, RELEASE=main, CEPH_POINT_RELEASE=, GIT_BRANCH=main, vcs-type=git, build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, io.buildah.version=1.42.2, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, version=7, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Feb 20 07:39:49 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-4-activate[32717]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4
Feb 20 07:39:49 np0005625203.localdomain bash[32702]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4
Feb 20 07:39:49 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-4-activate[32717]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-4 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Feb 20 07:39:49 np0005625203.localdomain bash[32702]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-4 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Feb 20 07:39:49 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-4-activate[32717]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Feb 20 07:39:49 np0005625203.localdomain bash[32702]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Feb 20 07:39:49 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-4-activate[32717]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Feb 20 07:39:49 np0005625203.localdomain bash[32702]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Feb 20 07:39:49 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-4-activate[32717]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-4/block
Feb 20 07:39:49 np0005625203.localdomain bash[32702]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-4/block
Feb 20 07:39:49 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-4-activate[32717]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4
Feb 20 07:39:49 np0005625203.localdomain bash[32702]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-4
Feb 20 07:39:49 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-4-activate[32717]: --> ceph-volume raw activate successful for osd ID: 4
Feb 20 07:39:49 np0005625203.localdomain bash[32702]: --> ceph-volume raw activate successful for osd ID: 4
Feb 20 07:39:49 np0005625203.localdomain systemd[1]: libpod-4a7798404b667146227d1b7402224dbcc4a8cd14e518dd32b3a50fe60bf7da33.scope: Deactivated successfully.
Feb 20 07:39:49 np0005625203.localdomain podman[32702]: 2026-02-20 07:39:49.865140842 +0000 UTC m=+0.884235795 container died 4a7798404b667146227d1b7402224dbcc4a8cd14e518dd32b3a50fe60bf7da33 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-4-activate, vcs-type=git, io.openshift.expose-services=, com.redhat.component=rhceph-container, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, CEPH_POINT_RELEASE=, version=7, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, build-date=2026-02-09T10:25:24Z, distribution-scope=public, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1770267347, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.42.2, GIT_CLEAN=True)
Feb 20 07:39:49 np0005625203.localdomain podman[32848]: 2026-02-20 07:39:49.967167572 +0000 UTC m=+0.090236370 container remove 4a7798404b667146227d1b7402224dbcc4a8cd14e518dd32b3a50fe60bf7da33 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-4-activate, build-date=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.42.2, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, distribution-scope=public, RELEASE=main, ceph=True, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, release=1770267347, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, org.opencontainers.image.created=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 07:39:50 np0005625203.localdomain systemd[1]: tmp-crun.40VFVa.mount: Deactivated successfully.
Feb 20 07:39:50 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-37580546ed008aeea102140d7cc967b267cf01a5df18d7cd30cfe4f9d2657fa7-merged.mount: Deactivated successfully.
Feb 20 07:39:50 np0005625203.localdomain podman[32906]: 
Feb 20 07:39:50 np0005625203.localdomain podman[32906]: 2026-02-20 07:39:50.297977875 +0000 UTC m=+0.079621676 container create 06722cde6864150e882272885931b1648feac400f82b7ce481379212190a02e6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-4, release=1770267347, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, com.redhat.component=rhceph-container, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, build-date=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, name=rhceph, architecture=x86_64, ceph=True, CEPH_POINT_RELEASE=, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, io.openshift.expose-services=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Feb 20 07:39:50 np0005625203.localdomain systemd[1]: tmp-crun.gXt5Uf.mount: Deactivated successfully.
Feb 20 07:39:50 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e37a04aeed7aa29fe7d2d7b964aa6ff40ad1b975afd775141fa633f3685e41b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 20 07:39:50 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e37a04aeed7aa29fe7d2d7b964aa6ff40ad1b975afd775141fa633f3685e41b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 20 07:39:50 np0005625203.localdomain podman[32906]: 2026-02-20 07:39:50.262086288 +0000 UTC m=+0.043730119 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 07:39:50 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e37a04aeed7aa29fe7d2d7b964aa6ff40ad1b975afd775141fa633f3685e41b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 20 07:39:50 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e37a04aeed7aa29fe7d2d7b964aa6ff40ad1b975afd775141fa633f3685e41b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 20 07:39:50 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e37a04aeed7aa29fe7d2d7b964aa6ff40ad1b975afd775141fa633f3685e41b/merged/var/lib/ceph/osd/ceph-4 supports timestamps until 2038 (0x7fffffff)
Feb 20 07:39:50 np0005625203.localdomain podman[32906]: 2026-02-20 07:39:50.413033243 +0000 UTC m=+0.194677044 container init 06722cde6864150e882272885931b1648feac400f82b7ce481379212190a02e6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-4, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, vcs-type=git, name=rhceph, CEPH_POINT_RELEASE=, version=7, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, io.openshift.tags=rhceph ceph, build-date=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., distribution-scope=public, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, release=1770267347)
Feb 20 07:39:50 np0005625203.localdomain podman[32906]: 2026-02-20 07:39:50.424826283 +0000 UTC m=+0.206470094 container start 06722cde6864150e882272885931b1648feac400f82b7ce481379212190a02e6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-4, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, distribution-scope=public, description=Red Hat Ceph Storage 7, release=1770267347, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, architecture=x86_64, RELEASE=main, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, io.buildah.version=1.42.2)
Feb 20 07:39:50 np0005625203.localdomain bash[32906]: 06722cde6864150e882272885931b1648feac400f82b7ce481379212190a02e6
Feb 20 07:39:50 np0005625203.localdomain systemd[1]: Started Ceph osd.4 for a8557ee9-b55d-5519-942c-cf8f6172f1d8.
Feb 20 07:39:50 np0005625203.localdomain ceph-osd[32924]: set uid:gid to 167:167 (ceph:ceph)
Feb 20 07:39:50 np0005625203.localdomain ceph-osd[32924]: ceph version 18.2.1-381.el9cp (984f410e2a30899deb131725765b62212b1621db) reef (stable), process ceph-osd, pid 2
Feb 20 07:39:50 np0005625203.localdomain ceph-osd[32924]: pidfile_write: ignore empty --pid-file
Feb 20 07:39:50 np0005625203.localdomain ceph-osd[32924]: bdev(0x558a5e9eae00 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block
Feb 20 07:39:50 np0005625203.localdomain ceph-osd[32924]: bdev(0x558a5e9eae00 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument
Feb 20 07:39:50 np0005625203.localdomain ceph-osd[32924]: bdev(0x558a5e9eae00 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 20 07:39:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Feb 20 07:39:50 np0005625203.localdomain ceph-osd[32924]: bdev(0x558a5e9eb180 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block
Feb 20 07:39:50 np0005625203.localdomain ceph-osd[32924]: bdev(0x558a5e9eb180 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument
Feb 20 07:39:50 np0005625203.localdomain ceph-osd[32924]: bdev(0x558a5e9eb180 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 20 07:39:50 np0005625203.localdomain ceph-osd[32924]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-4/block size 7.0 GiB
Feb 20 07:39:50 np0005625203.localdomain ceph-osd[32924]: bdev(0x558a5e9eb180 /var/lib/ceph/osd/ceph-4/block) close
Feb 20 07:39:50 np0005625203.localdomain sudo[31998]: pam_unix(sudo:session): session closed for user root
Feb 20 07:39:50 np0005625203.localdomain sudo[32937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 07:39:50 np0005625203.localdomain sudo[32937]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:39:50 np0005625203.localdomain sudo[32937]: pam_unix(sudo:session): session closed for user root
Feb 20 07:39:50 np0005625203.localdomain sudo[32952]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ceph-volume --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8 -- raw list --format json
Feb 20 07:39:50 np0005625203.localdomain sudo[32952]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:39:50 np0005625203.localdomain ceph-osd[32924]: bdev(0x558a5e9eae00 /var/lib/ceph/osd/ceph-4/block) close
Feb 20 07:39:50 np0005625203.localdomain ceph-osd[31970]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 28.894 iops: 7396.795 elapsed_sec: 0.406
Feb 20 07:39:50 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [WRN] : OSD bench result of 7396.794560 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Feb 20 07:39:50 np0005625203.localdomain ceph-osd[31970]: osd.1 0 waiting for initial osdmap
Feb 20 07:39:50 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-1[31966]: 2026-02-20T07:39:50.911+0000 7f3d5129d640 -1 osd.1 0 waiting for initial osdmap
Feb 20 07:39:50 np0005625203.localdomain ceph-osd[31970]: osd.1 10 crush map has features 288514050185494528, adjusting msgr requires for clients
Feb 20 07:39:50 np0005625203.localdomain ceph-osd[31970]: osd.1 10 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Feb 20 07:39:50 np0005625203.localdomain ceph-osd[31970]: osd.1 10 crush map has features 3314932999778484224, adjusting msgr requires for osds
Feb 20 07:39:50 np0005625203.localdomain ceph-osd[31970]: osd.1 10 check_osdmap_features require_osd_release unknown -> reef
Feb 20 07:39:50 np0005625203.localdomain ceph-osd[31970]: osd.1 10 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Feb 20 07:39:50 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-1[31966]: 2026-02-20T07:39:50.955+0000 7f3d4c0b2640 -1 osd.1 10 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Feb 20 07:39:50 np0005625203.localdomain ceph-osd[31970]: osd.1 10 set_numa_affinity not setting numa affinity
Feb 20 07:39:50 np0005625203.localdomain ceph-osd[31970]: osd.1 10 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: starting osd.4 osd_data /var/lib/ceph/osd/ceph-4 /var/lib/ceph/osd/ceph-4/journal
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: load: jerasure load: lrc 
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: bdev(0x558a5e9eae00 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: bdev(0x558a5e9eae00 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: bdev(0x558a5e9eae00 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: bdev(0x558a5e9eae00 /var/lib/ceph/osd/ceph-4/block) close
Feb 20 07:39:51 np0005625203.localdomain podman[33011]: 
Feb 20 07:39:51 np0005625203.localdomain podman[33011]: 2026-02-20 07:39:51.324314603 +0000 UTC m=+0.064907495 container create 3d4ee883eadd6543dbb73996515a4f2decce515d7b65beebcb4d4d757a8354b6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_lamarr, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, RELEASE=main, vcs-type=git, distribution-scope=public, io.buildah.version=1.42.2, build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, com.redhat.component=rhceph-container, org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, architecture=x86_64, io.openshift.expose-services=, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: bdev(0x558a5e9eae00 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: bdev(0x558a5e9eae00 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: bdev(0x558a5e9eae00 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: bdev(0x558a5e9eae00 /var/lib/ceph/osd/ceph-4/block) close
Feb 20 07:39:51 np0005625203.localdomain systemd[1]: Started libpod-conmon-3d4ee883eadd6543dbb73996515a4f2decce515d7b65beebcb4d4d757a8354b6.scope.
Feb 20 07:39:51 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 07:39:51 np0005625203.localdomain podman[33011]: 2026-02-20 07:39:51.390752555 +0000 UTC m=+0.131345457 container init 3d4ee883eadd6543dbb73996515a4f2decce515d7b65beebcb4d4d757a8354b6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_lamarr, version=7, CEPH_POINT_RELEASE=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.42.2, org.opencontainers.image.created=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, io.openshift.expose-services=, release=1770267347)
Feb 20 07:39:51 np0005625203.localdomain systemd[1]: tmp-crun.aOYGAV.mount: Deactivated successfully.
Feb 20 07:39:51 np0005625203.localdomain podman[33011]: 2026-02-20 07:39:51.301152305 +0000 UTC m=+0.041745177 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 07:39:51 np0005625203.localdomain podman[33011]: 2026-02-20 07:39:51.402349709 +0000 UTC m=+0.142942601 container start 3d4ee883eadd6543dbb73996515a4f2decce515d7b65beebcb4d4d757a8354b6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_lamarr, release=1770267347, RELEASE=main, io.buildah.version=1.42.2, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, build-date=2026-02-09T10:25:24Z, name=rhceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, GIT_CLEAN=True)
Feb 20 07:39:51 np0005625203.localdomain podman[33011]: 2026-02-20 07:39:51.402642098 +0000 UTC m=+0.143235020 container attach 3d4ee883eadd6543dbb73996515a4f2decce515d7b65beebcb4d4d757a8354b6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_lamarr, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.42.2, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, vendor=Red Hat, Inc., build-date=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, distribution-scope=public, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.openshift.expose-services=, release=1770267347, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Feb 20 07:39:51 np0005625203.localdomain focused_lamarr[33030]: 167 167
Feb 20 07:39:51 np0005625203.localdomain systemd[1]: libpod-3d4ee883eadd6543dbb73996515a4f2decce515d7b65beebcb4d4d757a8354b6.scope: Deactivated successfully.
Feb 20 07:39:51 np0005625203.localdomain podman[33011]: 2026-02-20 07:39:51.405743823 +0000 UTC m=+0.146336725 container died 3d4ee883eadd6543dbb73996515a4f2decce515d7b65beebcb4d4d757a8354b6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_lamarr, org.opencontainers.image.created=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, name=rhceph, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.42.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, com.redhat.component=rhceph-container, ceph=True, GIT_BRANCH=main, release=1770267347)
Feb 20 07:39:51 np0005625203.localdomain podman[33035]: 2026-02-20 07:39:51.497868809 +0000 UTC m=+0.077727957 container remove 3d4ee883eadd6543dbb73996515a4f2decce515d7b65beebcb4d4d757a8354b6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=focused_lamarr, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.42.2, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, ceph=True, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, build-date=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, GIT_BRANCH=main, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, release=1770267347, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 07:39:51 np0005625203.localdomain systemd[1]: libpod-conmon-3d4ee883eadd6543dbb73996515a4f2decce515d7b65beebcb4d4d757a8354b6.scope: Deactivated successfully.
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: osd.4:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: bdev(0x558a5e9eae00 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: bdev(0x558a5e9eae00 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: bdev(0x558a5e9eae00 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 data 0.06
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: bdev(0x558a5e9eb180 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: bdev(0x558a5e9eb180 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: bdev(0x558a5e9eb180 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-4/block size 7.0 GiB
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: bluefs mount
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: bluefs mount shared_bdev_used = 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: RocksDB version: 7.9.2
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Git sha 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Compile date 2026-02-06 00:00:00
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: DB SUMMARY
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: DB Session ID:  V8V2Q67L0K3CFWSM72WE
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: CURRENT file:  CURRENT
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: IDENTITY file:  IDENTITY
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                         Options.error_if_exists: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                       Options.create_if_missing: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                         Options.paranoid_checks: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                                     Options.env: 0x558a5ec7ecb0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                                      Options.fs: LegacyFileSystem
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                                Options.info_log: 0x558a5f980b80
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.max_file_opening_threads: 16
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                              Options.statistics: (nil)
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                               Options.use_fsync: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                       Options.max_log_file_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                       Options.keep_log_file_num: 1000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                    Options.recycle_log_file_num: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                         Options.allow_fallocate: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                        Options.allow_mmap_reads: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                       Options.allow_mmap_writes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                        Options.use_direct_reads: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.create_missing_column_families: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                              Options.db_log_dir: 
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                                 Options.wal_dir: db.wal
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.table_cache_numshardbits: 6
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.advise_random_on_open: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                    Options.db_write_buffer_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                    Options.write_buffer_manager: 0x558a5e9d4140
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                            Options.rate_limiter: (nil)
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                       Options.wal_recovery_mode: 2
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.enable_thread_tracking: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.enable_pipelined_write: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.unordered_write: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                               Options.row_cache: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                              Options.wal_filter: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.allow_ingest_behind: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.two_write_queues: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.manual_wal_flush: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.wal_compression: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.atomic_flush: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                 Options.log_readahead_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                 Options.best_efforts_recovery: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.allow_data_in_errors: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.db_host_id: __hostname__
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.enforce_single_del_contracts: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.max_background_jobs: 4
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.max_background_compactions: -1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.max_subcompactions: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:           Options.writable_file_max_buffer_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.delayed_write_rate : 16777216
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.max_total_wal_size: 1073741824
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                          Options.max_open_files: -1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                          Options.bytes_per_sync: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:       Options.compaction_readahead_size: 2097152
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.max_background_flushes: -1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Compression algorithms supported:
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         kZSTD supported: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         kXpressCompression supported: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         kBZip2Compression supported: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         kZSTDNotFinalCompression supported: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         kLZ4Compression supported: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         kZlibCompression supported: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         kLZ4HCCompression supported: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         kSnappyCompression supported: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Fast CRC32 supported: Supported on x86
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: DMutex implementation: pthread_mutex_t
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default)
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.compaction_filter: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.compaction_filter_factory: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:  Options.sst_partitioner_factory: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558a5f980d40)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x558a5e9c2850
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.write_buffer_size: 16777216
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:  Options.max_write_buffer_number: 64
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.compression: LZ4
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:       Options.prefix_extractor: nullptr
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.num_levels: 7
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.compression_opts.level: 32767
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.compression_opts.strategy: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.compression_opts.enabled: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                        Options.arena_block_size: 1048576
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.disable_auto_compactions: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.table_properties_collectors: 
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.inplace_update_support: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                           Options.bloom_locality: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                    Options.max_successive_merges: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.paranoid_file_checks: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.force_consistency_checks: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.report_bg_io_stats: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                               Options.ttl: 2592000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                       Options.enable_blob_files: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                           Options.min_blob_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                          Options.blob_file_size: 268435456
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.blob_file_starting_level: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0)
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:           Options.merge_operator: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.compaction_filter: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.compaction_filter_factory: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:  Options.sst_partitioner_factory: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558a5f980d40)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x558a5e9c2850
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.write_buffer_size: 16777216
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:  Options.max_write_buffer_number: 64
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.compression: LZ4
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:       Options.prefix_extractor: nullptr
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.num_levels: 7
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.compression_opts.level: 32767
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.compression_opts.strategy: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.compression_opts.enabled: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                        Options.arena_block_size: 1048576
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.disable_auto_compactions: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.inplace_update_support: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                           Options.bloom_locality: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                    Options.max_successive_merges: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.paranoid_file_checks: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.force_consistency_checks: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.report_bg_io_stats: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                               Options.ttl: 2592000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                       Options.enable_blob_files: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                           Options.min_blob_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                          Options.blob_file_size: 268435456
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.blob_file_starting_level: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1)
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:           Options.merge_operator: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.compaction_filter: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.compaction_filter_factory: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:  Options.sst_partitioner_factory: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558a5f980d40)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x558a5e9c2850
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.write_buffer_size: 16777216
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:  Options.max_write_buffer_number: 64
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.compression: LZ4
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:       Options.prefix_extractor: nullptr
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.num_levels: 7
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.compression_opts.level: 32767
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.compression_opts.strategy: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.compression_opts.enabled: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                        Options.arena_block_size: 1048576
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.disable_auto_compactions: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.inplace_update_support: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                           Options.bloom_locality: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                    Options.max_successive_merges: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.paranoid_file_checks: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.force_consistency_checks: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.report_bg_io_stats: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                               Options.ttl: 2592000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                       Options.enable_blob_files: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                           Options.min_blob_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                          Options.blob_file_size: 268435456
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.blob_file_starting_level: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2)
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:           Options.merge_operator: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.compaction_filter: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.compaction_filter_factory: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:  Options.sst_partitioner_factory: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558a5f980d40)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x558a5e9c2850
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.write_buffer_size: 16777216
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:  Options.max_write_buffer_number: 64
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.compression: LZ4
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:       Options.prefix_extractor: nullptr
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.num_levels: 7
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.compression_opts.level: 32767
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.compression_opts.strategy: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.compression_opts.enabled: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                        Options.arena_block_size: 1048576
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.disable_auto_compactions: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.inplace_update_support: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                           Options.bloom_locality: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                    Options.max_successive_merges: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.paranoid_file_checks: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.force_consistency_checks: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.report_bg_io_stats: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                               Options.ttl: 2592000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                       Options.enable_blob_files: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                           Options.min_blob_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                          Options.blob_file_size: 268435456
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.blob_file_starting_level: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0)
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:           Options.merge_operator: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.compaction_filter: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.compaction_filter_factory: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:  Options.sst_partitioner_factory: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558a5f980d40)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x558a5e9c2850
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.write_buffer_size: 16777216
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:  Options.max_write_buffer_number: 64
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.compression: LZ4
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:       Options.prefix_extractor: nullptr
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.num_levels: 7
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.compression_opts.level: 32767
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.compression_opts.strategy: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.compression_opts.enabled: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                        Options.arena_block_size: 1048576
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.disable_auto_compactions: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.inplace_update_support: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                           Options.bloom_locality: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                    Options.max_successive_merges: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.paranoid_file_checks: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.force_consistency_checks: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.report_bg_io_stats: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                               Options.ttl: 2592000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                       Options.enable_blob_files: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                           Options.min_blob_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                          Options.blob_file_size: 268435456
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.blob_file_starting_level: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1)
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:           Options.merge_operator: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.compaction_filter: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.compaction_filter_factory: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:  Options.sst_partitioner_factory: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558a5f980d40)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x558a5e9c2850
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.write_buffer_size: 16777216
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:  Options.max_write_buffer_number: 64
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.compression: LZ4
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:       Options.prefix_extractor: nullptr
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.num_levels: 7
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.compression_opts.level: 32767
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.compression_opts.strategy: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.compression_opts.enabled: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                        Options.arena_block_size: 1048576
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.disable_auto_compactions: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.inplace_update_support: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                           Options.bloom_locality: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                    Options.max_successive_merges: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.paranoid_file_checks: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.force_consistency_checks: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.report_bg_io_stats: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                               Options.ttl: 2592000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                       Options.enable_blob_files: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                           Options.min_blob_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                          Options.blob_file_size: 268435456
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.blob_file_starting_level: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2)
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:           Options.merge_operator: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.compaction_filter: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.compaction_filter_factory: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:  Options.sst_partitioner_factory: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558a5f980d40)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x558a5e9c2850
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.write_buffer_size: 16777216
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:  Options.max_write_buffer_number: 64
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.compression: LZ4
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:       Options.prefix_extractor: nullptr
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.num_levels: 7
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.compression_opts.level: 32767
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.compression_opts.strategy: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.compression_opts.enabled: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                        Options.arena_block_size: 1048576
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.disable_auto_compactions: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.inplace_update_support: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                           Options.bloom_locality: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                    Options.max_successive_merges: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.paranoid_file_checks: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.force_consistency_checks: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.report_bg_io_stats: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                               Options.ttl: 2592000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                       Options.enable_blob_files: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                           Options.min_blob_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                          Options.blob_file_size: 268435456
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.blob_file_starting_level: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0)
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:           Options.merge_operator: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.compaction_filter: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.compaction_filter_factory: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:  Options.sst_partitioner_factory: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558a5f980f60)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x558a5e9c22d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 536870912
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.write_buffer_size: 16777216
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:  Options.max_write_buffer_number: 64
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.compression: LZ4
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:       Options.prefix_extractor: nullptr
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.num_levels: 7
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.compression_opts.level: 32767
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.compression_opts.strategy: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.compression_opts.enabled: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                        Options.arena_block_size: 1048576
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.disable_auto_compactions: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.inplace_update_support: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                           Options.bloom_locality: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                    Options.max_successive_merges: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.paranoid_file_checks: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.force_consistency_checks: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.report_bg_io_stats: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                               Options.ttl: 2592000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                       Options.enable_blob_files: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                           Options.min_blob_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                          Options.blob_file_size: 268435456
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.blob_file_starting_level: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1)
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:           Options.merge_operator: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.compaction_filter: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.compaction_filter_factory: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:  Options.sst_partitioner_factory: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558a5f980f60)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x558a5e9c22d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 536870912
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.write_buffer_size: 16777216
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:  Options.max_write_buffer_number: 64
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.compression: LZ4
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:       Options.prefix_extractor: nullptr
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.num_levels: 7
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.compression_opts.level: 32767
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.compression_opts.strategy: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.compression_opts.enabled: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                        Options.arena_block_size: 1048576
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.disable_auto_compactions: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.inplace_update_support: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                           Options.bloom_locality: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                    Options.max_successive_merges: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.paranoid_file_checks: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.force_consistency_checks: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.report_bg_io_stats: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                               Options.ttl: 2592000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                       Options.enable_blob_files: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                           Options.min_blob_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                          Options.blob_file_size: 268435456
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.blob_file_starting_level: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2)
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:           Options.merge_operator: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.compaction_filter: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.compaction_filter_factory: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:  Options.sst_partitioner_factory: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558a5f980f60)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x558a5e9c22d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 536870912
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.write_buffer_size: 16777216
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:  Options.max_write_buffer_number: 64
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.compression: LZ4
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:       Options.prefix_extractor: nullptr
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.num_levels: 7
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.compression_opts.level: 32767
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.compression_opts.strategy: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.compression_opts.enabled: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                        Options.arena_block_size: 1048576
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.disable_auto_compactions: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.inplace_update_support: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                           Options.bloom_locality: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                    Options.max_successive_merges: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.paranoid_file_checks: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.force_consistency_checks: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.report_bg_io_stats: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                               Options.ttl: 2592000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                       Options.enable_blob_files: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                           Options.min_blob_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                          Options.blob_file_size: 268435456
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.blob_file_starting_level: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L)
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P)
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: b587cebb-41c8-4b92-915e-a37d87644a22
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771573191623916, "job": 1, "event": "recovery_started", "wal_files": [31]}
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771573191624153, "job": 1, "event": "recovery_finished"}
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _open_super_meta old nid_max 1025
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _open_super_meta old blobid_max 10240
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _open_super_meta min_alloc_size 0x1000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: freelist init
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: freelist _read_cfg
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: bluefs umount
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: bdev(0x558a5e9eb180 /var/lib/ceph/osd/ceph-4/block) close
Feb 20 07:39:51 np0005625203.localdomain podman[33088]: 
Feb 20 07:39:51 np0005625203.localdomain podman[33088]: 2026-02-20 07:39:51.696893284 +0000 UTC m=+0.070251269 container create 73df5f3139f5f0e1cbbba524dc632ae5e8cbac51c4cff1e0e57b8117da863c97 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gifted_clarke, vendor=Red Hat, Inc., release=1770267347, vcs-type=git, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, version=7, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, name=rhceph, build-date=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main)
Feb 20 07:39:51 np0005625203.localdomain systemd[1]: Started libpod-conmon-73df5f3139f5f0e1cbbba524dc632ae5e8cbac51c4cff1e0e57b8117da863c97.scope.
Feb 20 07:39:51 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 07:39:51 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fdb01179acfc32dbab5ee883722d8004c40ba2339749066f265c013690d6c5f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 20 07:39:51 np0005625203.localdomain podman[33088]: 2026-02-20 07:39:51.668998861 +0000 UTC m=+0.042356866 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 07:39:51 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fdb01179acfc32dbab5ee883722d8004c40ba2339749066f265c013690d6c5f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 20 07:39:51 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fdb01179acfc32dbab5ee883722d8004c40ba2339749066f265c013690d6c5f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 20 07:39:51 np0005625203.localdomain podman[33088]: 2026-02-20 07:39:51.798614754 +0000 UTC m=+0.171972739 container init 73df5f3139f5f0e1cbbba524dc632ae5e8cbac51c4cff1e0e57b8117da863c97 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gifted_clarke, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, name=rhceph, version=7, build-date=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, GIT_BRANCH=main, com.redhat.component=rhceph-container, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.42.2)
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[31970]: osd.1 11 state: booting -> active
Feb 20 07:39:51 np0005625203.localdomain podman[33088]: 2026-02-20 07:39:51.80860555 +0000 UTC m=+0.181963525 container start 73df5f3139f5f0e1cbbba524dc632ae5e8cbac51c4cff1e0e57b8117da863c97 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gifted_clarke, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, vendor=Red Hat, Inc., vcs-type=git, name=rhceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, ceph=True, architecture=x86_64, version=7)
Feb 20 07:39:51 np0005625203.localdomain podman[33088]: 2026-02-20 07:39:51.808877098 +0000 UTC m=+0.182235123 container attach 73df5f3139f5f0e1cbbba524dc632ae5e8cbac51c4cff1e0e57b8117da863c97 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gifted_clarke, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, ceph=True, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, distribution-scope=public, architecture=x86_64, name=rhceph, release=1770267347, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, CEPH_POINT_RELEASE=, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: bdev(0x558a5e9eb180 /var/lib/ceph/osd/ceph-4/block) open path /var/lib/ceph/osd/ceph-4/block
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: bdev(0x558a5e9eb180 /var/lib/ceph/osd/ceph-4/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-4/block failed: (22) Invalid argument
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: bdev(0x558a5e9eb180 /var/lib/ceph/osd/ceph-4/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-4/block size 7.0 GiB
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: bluefs mount
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: bluefs mount shared_bdev_used = 4718592
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: RocksDB version: 7.9.2
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Git sha 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Compile date 2026-02-06 00:00:00
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: DB SUMMARY
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: DB Session ID:  V8V2Q67L0K3CFWSM72WF
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: CURRENT file:  CURRENT
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: IDENTITY file:  IDENTITY
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                         Options.error_if_exists: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                       Options.create_if_missing: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                         Options.paranoid_checks: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                                     Options.env: 0x558a5eb10310
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                                      Options.fs: LegacyFileSystem
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                                Options.info_log: 0x558a5f981c80
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.max_file_opening_threads: 16
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                              Options.statistics: (nil)
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                               Options.use_fsync: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                       Options.max_log_file_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                       Options.keep_log_file_num: 1000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                    Options.recycle_log_file_num: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                         Options.allow_fallocate: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                        Options.allow_mmap_reads: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                       Options.allow_mmap_writes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                        Options.use_direct_reads: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.create_missing_column_families: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                              Options.db_log_dir: 
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                                 Options.wal_dir: db.wal
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.table_cache_numshardbits: 6
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.advise_random_on_open: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                    Options.db_write_buffer_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                    Options.write_buffer_manager: 0x558a5e9d5540
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                            Options.rate_limiter: (nil)
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                       Options.wal_recovery_mode: 2
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.enable_thread_tracking: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.enable_pipelined_write: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.unordered_write: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                               Options.row_cache: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                              Options.wal_filter: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.allow_ingest_behind: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.two_write_queues: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.manual_wal_flush: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.wal_compression: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.atomic_flush: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                 Options.log_readahead_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                 Options.best_efforts_recovery: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.allow_data_in_errors: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.db_host_id: __hostname__
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.enforce_single_del_contracts: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.max_background_jobs: 4
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.max_background_compactions: -1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.max_subcompactions: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:           Options.writable_file_max_buffer_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.delayed_write_rate : 16777216
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.max_total_wal_size: 1073741824
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                          Options.max_open_files: -1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                          Options.bytes_per_sync: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:       Options.compaction_readahead_size: 2097152
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.max_background_flushes: -1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Compression algorithms supported:
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         kZSTD supported: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         kXpressCompression supported: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         kBZip2Compression supported: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         kZSTDNotFinalCompression supported: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         kLZ4Compression supported: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         kZlibCompression supported: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         kLZ4HCCompression supported: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         kSnappyCompression supported: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Fast CRC32 supported: Supported on x86
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: DMutex implementation: pthread_mutex_t
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 0, name: default)
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.compaction_filter: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.compaction_filter_factory: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:  Options.sst_partitioner_factory: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558a5f9c1600)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x558a5e9c22d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.write_buffer_size: 16777216
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:  Options.max_write_buffer_number: 64
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.compression: LZ4
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:       Options.prefix_extractor: nullptr
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.num_levels: 7
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.compression_opts.level: 32767
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.compression_opts.strategy: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.compression_opts.enabled: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                        Options.arena_block_size: 1048576
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.disable_auto_compactions: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.table_properties_collectors: 
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.inplace_update_support: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                           Options.bloom_locality: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                    Options.max_successive_merges: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.paranoid_file_checks: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.force_consistency_checks: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.report_bg_io_stats: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                               Options.ttl: 2592000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                       Options.enable_blob_files: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                           Options.min_blob_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                          Options.blob_file_size: 268435456
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.blob_file_starting_level: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 1, name: m-0)
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:           Options.merge_operator: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.compaction_filter: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.compaction_filter_factory: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:  Options.sst_partitioner_factory: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558a5f9c1600)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x558a5e9c22d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.write_buffer_size: 16777216
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:  Options.max_write_buffer_number: 64
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.compression: LZ4
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:       Options.prefix_extractor: nullptr
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.num_levels: 7
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.compression_opts.level: 32767
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.compression_opts.strategy: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.compression_opts.enabled: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                        Options.arena_block_size: 1048576
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.disable_auto_compactions: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.inplace_update_support: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                           Options.bloom_locality: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                    Options.max_successive_merges: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.paranoid_file_checks: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.force_consistency_checks: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.report_bg_io_stats: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                               Options.ttl: 2592000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                       Options.enable_blob_files: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                           Options.min_blob_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                          Options.blob_file_size: 268435456
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.blob_file_starting_level: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 2, name: m-1)
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:           Options.merge_operator: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.compaction_filter: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.compaction_filter_factory: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:  Options.sst_partitioner_factory: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558a5f9c1600)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x558a5e9c22d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.write_buffer_size: 16777216
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:  Options.max_write_buffer_number: 64
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.compression: LZ4
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:       Options.prefix_extractor: nullptr
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.num_levels: 7
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.compression_opts.level: 32767
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.compression_opts.strategy: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.compression_opts.enabled: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                        Options.arena_block_size: 1048576
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.disable_auto_compactions: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.inplace_update_support: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                           Options.bloom_locality: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                    Options.max_successive_merges: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.paranoid_file_checks: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.force_consistency_checks: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.report_bg_io_stats: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                               Options.ttl: 2592000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                       Options.enable_blob_files: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                           Options.min_blob_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                          Options.blob_file_size: 268435456
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.blob_file_starting_level: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 3, name: m-2)
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:           Options.merge_operator: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.compaction_filter: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.compaction_filter_factory: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:  Options.sst_partitioner_factory: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558a5f9c1600)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x558a5e9c22d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.write_buffer_size: 16777216
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:  Options.max_write_buffer_number: 64
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.compression: LZ4
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:       Options.prefix_extractor: nullptr
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.num_levels: 7
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.compression_opts.level: 32767
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.compression_opts.strategy: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.compression_opts.enabled: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                        Options.arena_block_size: 1048576
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.disable_auto_compactions: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.inplace_update_support: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                           Options.bloom_locality: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                    Options.max_successive_merges: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.paranoid_file_checks: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.force_consistency_checks: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.report_bg_io_stats: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                               Options.ttl: 2592000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                       Options.enable_blob_files: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                           Options.min_blob_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                          Options.blob_file_size: 268435456
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.blob_file_starting_level: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 4, name: p-0)
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:           Options.merge_operator: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.compaction_filter: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.compaction_filter_factory: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:  Options.sst_partitioner_factory: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558a5f9c1600)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x558a5e9c22d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.write_buffer_size: 16777216
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:  Options.max_write_buffer_number: 64
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.compression: LZ4
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:       Options.prefix_extractor: nullptr
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.num_levels: 7
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.compression_opts.level: 32767
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.compression_opts.strategy: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.compression_opts.enabled: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                        Options.arena_block_size: 1048576
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.disable_auto_compactions: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.inplace_update_support: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                           Options.bloom_locality: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                    Options.max_successive_merges: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.paranoid_file_checks: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.force_consistency_checks: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.report_bg_io_stats: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                               Options.ttl: 2592000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                       Options.enable_blob_files: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                           Options.min_blob_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                          Options.blob_file_size: 268435456
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.blob_file_starting_level: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 5, name: p-1)
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:           Options.merge_operator: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.compaction_filter: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.compaction_filter_factory: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:  Options.sst_partitioner_factory: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558a5f9c1600)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x558a5e9c22d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.write_buffer_size: 16777216
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:  Options.max_write_buffer_number: 64
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.compression: LZ4
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:       Options.prefix_extractor: nullptr
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.num_levels: 7
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.compression_opts.level: 32767
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.compression_opts.strategy: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.compression_opts.enabled: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                        Options.arena_block_size: 1048576
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.disable_auto_compactions: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.inplace_update_support: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                           Options.bloom_locality: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                    Options.max_successive_merges: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.paranoid_file_checks: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.force_consistency_checks: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.report_bg_io_stats: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                               Options.ttl: 2592000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                       Options.enable_blob_files: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                           Options.min_blob_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                          Options.blob_file_size: 268435456
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.blob_file_starting_level: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 6, name: p-2)
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:           Options.merge_operator: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.compaction_filter: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.compaction_filter_factory: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:  Options.sst_partitioner_factory: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558a5f9c1600)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x558a5e9c22d0
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 483183820
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.write_buffer_size: 16777216
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:  Options.max_write_buffer_number: 64
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.compression: LZ4
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:       Options.prefix_extractor: nullptr
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.num_levels: 7
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.compression_opts.level: 32767
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.compression_opts.strategy: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.compression_opts.enabled: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                        Options.arena_block_size: 1048576
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.disable_auto_compactions: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.inplace_update_support: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                           Options.bloom_locality: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                    Options.max_successive_merges: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.paranoid_file_checks: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.force_consistency_checks: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.report_bg_io_stats: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                               Options.ttl: 2592000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                       Options.enable_blob_files: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                           Options.min_blob_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                          Options.blob_file_size: 268435456
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.blob_file_starting_level: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 7, name: O-0)
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:           Options.merge_operator: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.compaction_filter: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.compaction_filter_factory: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:  Options.sst_partitioner_factory: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558a5f9c13c0)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x558a5e9c3610
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 536870912
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.write_buffer_size: 16777216
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:  Options.max_write_buffer_number: 64
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.compression: LZ4
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:       Options.prefix_extractor: nullptr
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.num_levels: 7
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.compression_opts.level: 32767
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.compression_opts.strategy: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.compression_opts.enabled: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                        Options.arena_block_size: 1048576
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.disable_auto_compactions: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.inplace_update_support: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                           Options.bloom_locality: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                    Options.max_successive_merges: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.paranoid_file_checks: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.force_consistency_checks: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.report_bg_io_stats: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                               Options.ttl: 2592000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                       Options.enable_blob_files: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                           Options.min_blob_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                          Options.blob_file_size: 268435456
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.blob_file_starting_level: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 8, name: O-1)
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:           Options.merge_operator: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.compaction_filter: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.compaction_filter_factory: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:  Options.sst_partitioner_factory: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558a5f9c13c0)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x558a5e9c3610
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 536870912
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.write_buffer_size: 16777216
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:  Options.max_write_buffer_number: 64
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.compression: LZ4
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:       Options.prefix_extractor: nullptr
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.num_levels: 7
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.compression_opts.level: 32767
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.compression_opts.strategy: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.compression_opts.enabled: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                        Options.arena_block_size: 1048576
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.disable_auto_compactions: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.inplace_update_support: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                           Options.bloom_locality: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                    Options.max_successive_merges: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.paranoid_file_checks: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.force_consistency_checks: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.report_bg_io_stats: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                               Options.ttl: 2592000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                       Options.enable_blob_files: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                           Options.min_blob_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                          Options.blob_file_size: 268435456
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.blob_file_starting_level: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 9, name: O-2)
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:           Options.merge_operator: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.compaction_filter: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.compaction_filter_factory: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:  Options.sst_partitioner_factory: None
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x558a5f9c13c0)
                                                            cache_index_and_filter_blocks: 1
                                                            cache_index_and_filter_blocks_with_high_priority: 0
                                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                                            pin_top_level_index_and_filter: 1
                                                            index_type: 0
                                                            data_block_index_type: 0
                                                            index_shortening: 1
                                                            data_block_hash_table_util_ratio: 0.750000
                                                            checksum: 4
                                                            no_block_cache: 0
                                                            block_cache: 0x558a5e9c3610
                                                            block_cache_name: BinnedLRUCache
                                                            block_cache_options:
                                                              capacity : 536870912
                                                              num_shard_bits : 4
                                                              strict_capacity_limit : 0
                                                              high_pri_pool_ratio: 0.000
                                                            block_cache_compressed: (nil)
                                                            persistent_cache: (nil)
                                                            block_size: 4096
                                                            block_size_deviation: 10
                                                            block_restart_interval: 16
                                                            index_block_restart_interval: 1
                                                            metadata_block_size: 4096
                                                            partition_filters: 0
                                                            use_delta_encoding: 1
                                                            filter_policy: bloomfilter
                                                            whole_key_filtering: 1
                                                            verify_compression: 0
                                                            read_amp_bytes_per_bit: 0
                                                            format_version: 5
                                                            enable_index_compression: 1
                                                            block_align: 0
                                                            max_auto_readahead_size: 262144
                                                            prepopulate_block_cache: 0
                                                            initial_auto_readahead_size: 8192
                                                            num_file_reads_for_auto_readahead: 2
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.write_buffer_size: 16777216
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:  Options.max_write_buffer_number: 64
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.compression: LZ4
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:       Options.prefix_extractor: nullptr
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.num_levels: 7
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.compression_opts.level: 32767
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.compression_opts.strategy: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                  Options.compression_opts.enabled: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                        Options.arena_block_size: 1048576
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.disable_auto_compactions: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.inplace_update_support: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                           Options.bloom_locality: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                    Options.max_successive_merges: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.paranoid_file_checks: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.force_consistency_checks: 1
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.report_bg_io_stats: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                               Options.ttl: 2592000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                       Options.enable_blob_files: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                           Options.min_blob_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                          Options.blob_file_size: 268435456
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb:                Options.blob_file_starting_level: 0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 10, name: L)
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/column_family.cc:578] Failed to register data paths of column family (id: 11, name: P)
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: b587cebb-41c8-4b92-915e-a37d87644a22
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771573191894505, "job": 1, "event": "recovery_started", "wal_files": [31]}
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771573191899826, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1261, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771573191, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b587cebb-41c8-4b92-915e-a37d87644a22", "db_session_id": "V8V2Q67L0K3CFWSM72WF", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771573191904517, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1609, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771573191, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b587cebb-41c8-4b92-915e-a37d87644a22", "db_session_id": "V8V2Q67L0K3CFWSM72WF", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771573191909238, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1290, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771573191, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "b587cebb-41c8-4b92-915e-a37d87644a22", "db_session_id": "V8V2Q67L0K3CFWSM72WF", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/db_impl/db_impl_open.cc:1432] Failed to truncate log #31: IO error: No such file or directory: While open a file for appending: db.wal/000031.log: No such file or directory
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771573191913749, "job": 1, "event": "recovery_finished"}
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x558a5ea8a380
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: DB pointer 0x558a5f8dda00
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _upgrade_super from 4, latest 4
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _upgrade_super done
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                                          Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          
                                                          ** Compaction Stats [default] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      2/0    2.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                                           Sum      2/0    2.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [default] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x558a5e9c22d0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [default] **
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x558a5e9c22d0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-0] **
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x558a5e9c22d0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-1] **
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x558a5e9c22d0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-2] **
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.57 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                                           Sum      1/0    1.57 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x558a5e9c22d0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-0] **
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x558a5e9c22d0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-1] **
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x558a5e9c22d0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-2] **
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x558a5e9c3610#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-0] **
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x558a5e9c3610#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-1] **
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.26 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                                           Sum      1/0    1.26 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x558a5e9c3610#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-2] **
                                                          
                                                          ** Compaction Stats [L] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [L] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x558a5e9c22d0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [L] **
                                                          
                                                          ** Compaction Stats [P] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [P] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 0.1 total, 0.1 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x558a5e9c22d0#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [P] **
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: <cls> /builddir/build/BUILD/ceph-18.2.1/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: <cls> /builddir/build/BUILD/ceph-18.2.1/src/cls/hello/cls_hello.cc:316: loading cls_hello
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: _get_class not permitted to load lua
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: _get_class not permitted to load sdk
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: _get_class not permitted to load test_remote_reads
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: osd.4 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: osd.4 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: osd.4 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: osd.4 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: osd.4 0 load_pgs
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: osd.4 0 load_pgs opened 0 pgs
Feb 20 07:39:51 np0005625203.localdomain ceph-osd[32924]: osd.4 0 log_to_monitors true
Feb 20 07:39:51 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-4[32920]: 2026-02-20T07:39:51.955+0000 7fda5a1e6a80 -1 osd.4 0 log_to_monitors true
Feb 20 07:39:52 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-b8e020fb7662e92879e38a4e414244acab20f23ca3ce5478c0816aa53461e8ed-merged.mount: Deactivated successfully.
Feb 20 07:39:52 np0005625203.localdomain gifted_clarke[33265]: {
Feb 20 07:39:52 np0005625203.localdomain gifted_clarke[33265]:     "a38015ef-851b-4289-b15a-f73ed0cda6b2": {
Feb 20 07:39:52 np0005625203.localdomain gifted_clarke[33265]:         "ceph_fsid": "a8557ee9-b55d-5519-942c-cf8f6172f1d8",
Feb 20 07:39:52 np0005625203.localdomain gifted_clarke[33265]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Feb 20 07:39:52 np0005625203.localdomain gifted_clarke[33265]:         "osd_id": 1,
Feb 20 07:39:52 np0005625203.localdomain gifted_clarke[33265]:         "osd_uuid": "a38015ef-851b-4289-b15a-f73ed0cda6b2",
Feb 20 07:39:52 np0005625203.localdomain gifted_clarke[33265]:         "type": "bluestore"
Feb 20 07:39:52 np0005625203.localdomain gifted_clarke[33265]:     },
Feb 20 07:39:52 np0005625203.localdomain gifted_clarke[33265]:     "c72a933f-aa20-4a0f-910f-f52fa396bd73": {
Feb 20 07:39:52 np0005625203.localdomain gifted_clarke[33265]:         "ceph_fsid": "a8557ee9-b55d-5519-942c-cf8f6172f1d8",
Feb 20 07:39:52 np0005625203.localdomain gifted_clarke[33265]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Feb 20 07:39:52 np0005625203.localdomain gifted_clarke[33265]:         "osd_id": 4,
Feb 20 07:39:52 np0005625203.localdomain gifted_clarke[33265]:         "osd_uuid": "c72a933f-aa20-4a0f-910f-f52fa396bd73",
Feb 20 07:39:52 np0005625203.localdomain gifted_clarke[33265]:         "type": "bluestore"
Feb 20 07:39:52 np0005625203.localdomain gifted_clarke[33265]:     }
Feb 20 07:39:52 np0005625203.localdomain gifted_clarke[33265]: }
Feb 20 07:39:52 np0005625203.localdomain systemd[1]: libpod-73df5f3139f5f0e1cbbba524dc632ae5e8cbac51c4cff1e0e57b8117da863c97.scope: Deactivated successfully.
Feb 20 07:39:52 np0005625203.localdomain podman[33088]: 2026-02-20 07:39:52.451454944 +0000 UTC m=+0.824812949 container died 73df5f3139f5f0e1cbbba524dc632ae5e8cbac51c4cff1e0e57b8117da863c97 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gifted_clarke, distribution-scope=public, RELEASE=main, release=1770267347, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-type=git, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.42.2, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., build-date=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, ceph=True, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Feb 20 07:39:52 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0fdb01179acfc32dbab5ee883722d8004c40ba2339749066f265c013690d6c5f-merged.mount: Deactivated successfully.
Feb 20 07:39:52 np0005625203.localdomain podman[33516]: 2026-02-20 07:39:52.548454409 +0000 UTC m=+0.082354218 container remove 73df5f3139f5f0e1cbbba524dc632ae5e8cbac51c4cff1e0e57b8117da863c97 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gifted_clarke, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, vendor=Red Hat, Inc., GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, RELEASE=main, CEPH_POINT_RELEASE=, io.openshift.expose-services=, build-date=2026-02-09T10:25:24Z, architecture=x86_64, io.buildah.version=1.42.2, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, description=Red Hat Ceph Storage 7, version=7, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1770267347)
Feb 20 07:39:52 np0005625203.localdomain systemd[1]: libpod-conmon-73df5f3139f5f0e1cbbba524dc632ae5e8cbac51c4cff1e0e57b8117da863c97.scope: Deactivated successfully.
Feb 20 07:39:52 np0005625203.localdomain sudo[32952]: pam_unix(sudo:session): session closed for user root
Feb 20 07:39:52 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Feb 20 07:39:52 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Feb 20 07:39:53 np0005625203.localdomain sudo[33530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 07:39:53 np0005625203.localdomain sshd[32651]: Invalid user user7 from 185.246.128.171 port 32945
Feb 20 07:39:53 np0005625203.localdomain sudo[33530]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:39:53 np0005625203.localdomain sudo[33530]: pam_unix(sudo:session): session closed for user root
Feb 20 07:39:53 np0005625203.localdomain sudo[33545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 07:39:53 np0005625203.localdomain sudo[33545]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:39:53 np0005625203.localdomain sudo[33545]: pam_unix(sudo:session): session closed for user root
Feb 20 07:39:53 np0005625203.localdomain sudo[33560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Feb 20 07:39:53 np0005625203.localdomain sudo[33560]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:39:53 np0005625203.localdomain sshd[32651]: Disconnecting invalid user user7 185.246.128.171 port 32945: Change of username or service not allowed: (user7,ssh-connection) -> (devops,ssh-connection) [preauth]
Feb 20 07:39:53 np0005625203.localdomain ceph-osd[32924]: osd.4 0 done with init, starting boot process
Feb 20 07:39:53 np0005625203.localdomain ceph-osd[32924]: osd.4 0 start_boot
Feb 20 07:39:53 np0005625203.localdomain ceph-osd[32924]: osd.4 0 maybe_override_options_for_qos osd_max_backfills set to 1
Feb 20 07:39:53 np0005625203.localdomain ceph-osd[32924]: osd.4 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Feb 20 07:39:53 np0005625203.localdomain ceph-osd[32924]: osd.4 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Feb 20 07:39:53 np0005625203.localdomain ceph-osd[32924]: osd.4 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Feb 20 07:39:53 np0005625203.localdomain ceph-osd[32924]: osd.4 0  bench count 12288000 bsize 4 KiB
Feb 20 07:39:53 np0005625203.localdomain ceph-osd[31970]: osd.1 13 crush map has features 288514051259236352, adjusting msgr requires for clients
Feb 20 07:39:53 np0005625203.localdomain ceph-osd[31970]: osd.1 13 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Feb 20 07:39:53 np0005625203.localdomain ceph-osd[31970]: osd.1 13 crush map has features 3314933000852226048, adjusting msgr requires for osds
Feb 20 07:39:53 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 13 pg[1.0( empty local-lis/les=0/0 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:39:54 np0005625203.localdomain podman[33645]: 2026-02-20 07:39:54.03378912 +0000 UTC m=+0.089876939 container exec f6bf94ad66e54cdd610286b233c639418eb2b995978f1a145d6cfa77a016071d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., release=1770267347, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, io.buildah.version=1.42.2, io.openshift.expose-services=, com.redhat.component=rhceph-container, build-date=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, GIT_BRANCH=main, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Feb 20 07:39:54 np0005625203.localdomain podman[33645]: 2026-02-20 07:39:54.167192599 +0000 UTC m=+0.223280448 container exec_died f6bf94ad66e54cdd610286b233c639418eb2b995978f1a145d6cfa77a016071d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, RELEASE=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, release=1770267347, io.buildah.version=1.42.2, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, distribution-scope=public, CEPH_POINT_RELEASE=, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc.)
Feb 20 07:39:54 np0005625203.localdomain sudo[33560]: pam_unix(sudo:session): session closed for user root
Feb 20 07:39:54 np0005625203.localdomain sudo[33709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 07:39:54 np0005625203.localdomain sudo[33709]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:39:54 np0005625203.localdomain sudo[33709]: pam_unix(sudo:session): session closed for user root
Feb 20 07:39:54 np0005625203.localdomain sudo[33724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 07:39:54 np0005625203.localdomain sudo[33724]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:39:54 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 14 pg[1.0( empty local-lis/les=0/0 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 crt=0'0 mlcod 0'0 undersized+peered mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:39:55 np0005625203.localdomain sudo[33724]: pam_unix(sudo:session): session closed for user root
Feb 20 07:39:55 np0005625203.localdomain sudo[33770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 07:39:55 np0005625203.localdomain sudo[33770]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:39:55 np0005625203.localdomain sudo[33770]: pam_unix(sudo:session): session closed for user root
Feb 20 07:39:55 np0005625203.localdomain sudo[33785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ceph-volume --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8 -- inventory --format=json-pretty --filter-for-batch
Feb 20 07:39:55 np0005625203.localdomain sudo[33785]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:39:55 np0005625203.localdomain sshd[33800]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:39:55 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 15 pg[1.0( empty local-lis/les=0/0 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=15 pruub=14.960871696s) [1,5,3] r=0 lpr=15 pi=[13,15)/0 crt=0'0 mlcod 0'0 peered pruub 23.255788803s@ mbc={}] start_peering_interval up [1] -> [1,5,3], acting [1] -> [1,5,3], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:39:55 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 15 pg[1.0( empty local-lis/les=0/0 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=15 pruub=14.960871696s) [1,5,3] r=0 lpr=15 pi=[13,15)/0 crt=0'0 mlcod 0'0 unknown pruub 23.255788803s@ mbc={}] state<Start>: transitioning to Primary
Feb 20 07:39:56 np0005625203.localdomain podman[33837]: 
Feb 20 07:39:56 np0005625203.localdomain podman[33837]: 2026-02-20 07:39:56.141664825 +0000 UTC m=+0.077336556 container create 28b99091b81fb11a958fde378a9cccd45e3bfa963ea5203918f9bd81b3907939 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_margulis, architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, vcs-type=git, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.42.2, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, release=1770267347, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, RELEASE=main, ceph=True, GIT_BRANCH=main)
Feb 20 07:39:56 np0005625203.localdomain ceph-osd[32924]: osd.4 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 30.038 iops: 7689.833 elapsed_sec: 0.390
Feb 20 07:39:56 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [WRN] : OSD bench result of 7689.833183 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.4. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Feb 20 07:39:56 np0005625203.localdomain ceph-osd[32924]: osd.4 0 waiting for initial osdmap
Feb 20 07:39:56 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-4[32920]: 2026-02-20T07:39:56.158+0000 7fda56165640 -1 osd.4 0 waiting for initial osdmap
Feb 20 07:39:56 np0005625203.localdomain ceph-osd[32924]: osd.4 15 crush map has features 288514051259236352, adjusting msgr requires for clients
Feb 20 07:39:56 np0005625203.localdomain ceph-osd[32924]: osd.4 15 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Feb 20 07:39:56 np0005625203.localdomain ceph-osd[32924]: osd.4 15 crush map has features 3314933000852226048, adjusting msgr requires for osds
Feb 20 07:39:56 np0005625203.localdomain ceph-osd[32924]: osd.4 15 check_osdmap_features require_osd_release unknown -> reef
Feb 20 07:39:56 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-osd-4[32920]: 2026-02-20T07:39:56.177+0000 7fda5178f640 -1 osd.4 15 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Feb 20 07:39:56 np0005625203.localdomain ceph-osd[32924]: osd.4 15 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Feb 20 07:39:56 np0005625203.localdomain ceph-osd[32924]: osd.4 15 set_numa_affinity not setting numa affinity
Feb 20 07:39:56 np0005625203.localdomain ceph-osd[32924]: osd.4 15 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial
Feb 20 07:39:56 np0005625203.localdomain systemd[1]: Started libpod-conmon-28b99091b81fb11a958fde378a9cccd45e3bfa963ea5203918f9bd81b3907939.scope.
Feb 20 07:39:56 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 07:39:56 np0005625203.localdomain podman[33837]: 2026-02-20 07:39:56.112027488 +0000 UTC m=+0.047699249 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 07:39:56 np0005625203.localdomain podman[33837]: 2026-02-20 07:39:56.213023646 +0000 UTC m=+0.148695377 container init 28b99091b81fb11a958fde378a9cccd45e3bfa963ea5203918f9bd81b3907939 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_margulis, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, vcs-type=git, GIT_BRANCH=main, version=7, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., RELEASE=main, architecture=x86_64, release=1770267347)
Feb 20 07:39:56 np0005625203.localdomain podman[33837]: 2026-02-20 07:39:56.223036642 +0000 UTC m=+0.158708373 container start 28b99091b81fb11a958fde378a9cccd45e3bfa963ea5203918f9bd81b3907939 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_margulis, ceph=True, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, RELEASE=main, architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, io.openshift.tags=rhceph ceph, build-date=2026-02-09T10:25:24Z, version=7, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Feb 20 07:39:56 np0005625203.localdomain podman[33837]: 2026-02-20 07:39:56.223310741 +0000 UTC m=+0.158982522 container attach 28b99091b81fb11a958fde378a9cccd45e3bfa963ea5203918f9bd81b3907939 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_margulis, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, version=7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, vcs-type=git, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, GIT_BRANCH=main, io.buildah.version=1.42.2, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-09T10:25:24Z, architecture=x86_64, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph)
Feb 20 07:39:56 np0005625203.localdomain elegant_margulis[33855]: 167 167
Feb 20 07:39:56 np0005625203.localdomain systemd[1]: libpod-28b99091b81fb11a958fde378a9cccd45e3bfa963ea5203918f9bd81b3907939.scope: Deactivated successfully.
Feb 20 07:39:56 np0005625203.localdomain podman[33837]: 2026-02-20 07:39:56.226723625 +0000 UTC m=+0.162395406 container died 28b99091b81fb11a958fde378a9cccd45e3bfa963ea5203918f9bd81b3907939 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_margulis, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, distribution-scope=public, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1770267347, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, architecture=x86_64, name=rhceph, GIT_CLEAN=True, io.openshift.expose-services=, io.buildah.version=1.42.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, vcs-type=git, build-date=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Feb 20 07:39:56 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-847b12918ea7dc08a678eea20770a4d38293b7a0f21b51251514dadb39f4d1c4-merged.mount: Deactivated successfully.
Feb 20 07:39:56 np0005625203.localdomain podman[33860]: 2026-02-20 07:39:56.317361787 +0000 UTC m=+0.076238643 container remove 28b99091b81fb11a958fde378a9cccd45e3bfa963ea5203918f9bd81b3907939 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elegant_margulis, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, build-date=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., ceph=True, GIT_BRANCH=main, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers)
Feb 20 07:39:56 np0005625203.localdomain systemd[1]: libpod-conmon-28b99091b81fb11a958fde378a9cccd45e3bfa963ea5203918f9bd81b3907939.scope: Deactivated successfully.
Feb 20 07:39:56 np0005625203.localdomain podman[33880]: 
Feb 20 07:39:56 np0005625203.localdomain podman[33880]: 2026-02-20 07:39:56.518742623 +0000 UTC m=+0.069251868 container create 8a72a3c9f6fb34769793131c76523ff9133068cb2fd28d282f6c55c7db744fe3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_sinoussi, vcs-type=git, com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, version=7, org.opencontainers.image.created=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, architecture=x86_64, ceph=True, RELEASE=main, io.buildah.version=1.42.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-09T10:25:24Z, distribution-scope=public, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1770267347, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, vendor=Red Hat, Inc., io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 07:39:56 np0005625203.localdomain systemd[1]: Started libpod-conmon-8a72a3c9f6fb34769793131c76523ff9133068cb2fd28d282f6c55c7db744fe3.scope.
Feb 20 07:39:56 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 07:39:56 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2821e7cd3871f4721b9ee4f8394a9681e82f5b2ad3aa8f0d8adb14bbba6cc714/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 20 07:39:56 np0005625203.localdomain podman[33880]: 2026-02-20 07:39:56.49115055 +0000 UTC m=+0.041659825 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 07:39:56 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2821e7cd3871f4721b9ee4f8394a9681e82f5b2ad3aa8f0d8adb14bbba6cc714/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 20 07:39:56 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2821e7cd3871f4721b9ee4f8394a9681e82f5b2ad3aa8f0d8adb14bbba6cc714/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 20 07:39:56 np0005625203.localdomain podman[33880]: 2026-02-20 07:39:56.613064597 +0000 UTC m=+0.163573842 container init 8a72a3c9f6fb34769793131c76523ff9133068cb2fd28d282f6c55c7db744fe3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_sinoussi, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., version=7, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, name=rhceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, ceph=True, architecture=x86_64, CEPH_POINT_RELEASE=, distribution-scope=public, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.42.2, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1770267347, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git)
Feb 20 07:39:56 np0005625203.localdomain podman[33880]: 2026-02-20 07:39:56.621887396 +0000 UTC m=+0.172396641 container start 8a72a3c9f6fb34769793131c76523ff9133068cb2fd28d282f6c55c7db744fe3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_sinoussi, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1770267347, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, vcs-type=git, io.buildah.version=1.42.2, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, ceph=True, build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, name=rhceph, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, GIT_BRANCH=main)
Feb 20 07:39:56 np0005625203.localdomain podman[33880]: 2026-02-20 07:39:56.622163975 +0000 UTC m=+0.172673220 container attach 8a72a3c9f6fb34769793131c76523ff9133068cb2fd28d282f6c55c7db744fe3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_sinoussi, org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, version=7, name=rhceph, release=1770267347, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, distribution-scope=public, ceph=True)
Feb 20 07:39:56 np0005625203.localdomain ceph-osd[32924]: osd.4 16 state: booting -> active
Feb 20 07:39:57 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 16 pg[1.0( empty local-lis/les=15/16 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=15) [1,5,3] r=0 lpr=15 pi=[13,15)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:39:57 np0005625203.localdomain elastic_sinoussi[33895]: [
Feb 20 07:39:57 np0005625203.localdomain elastic_sinoussi[33895]:     {
Feb 20 07:39:57 np0005625203.localdomain elastic_sinoussi[33895]:         "available": false,
Feb 20 07:39:57 np0005625203.localdomain elastic_sinoussi[33895]:         "ceph_device": false,
Feb 20 07:39:57 np0005625203.localdomain elastic_sinoussi[33895]:         "device_id": "QEMU_DVD-ROM_QM00001",
Feb 20 07:39:57 np0005625203.localdomain elastic_sinoussi[33895]:         "lsm_data": {},
Feb 20 07:39:57 np0005625203.localdomain elastic_sinoussi[33895]:         "lvs": [],
Feb 20 07:39:57 np0005625203.localdomain elastic_sinoussi[33895]:         "path": "/dev/sr0",
Feb 20 07:39:57 np0005625203.localdomain elastic_sinoussi[33895]:         "rejected_reasons": [
Feb 20 07:39:57 np0005625203.localdomain elastic_sinoussi[33895]:             "Insufficient space (<5GB)",
Feb 20 07:39:57 np0005625203.localdomain elastic_sinoussi[33895]:             "Has a FileSystem"
Feb 20 07:39:57 np0005625203.localdomain elastic_sinoussi[33895]:         ],
Feb 20 07:39:57 np0005625203.localdomain elastic_sinoussi[33895]:         "sys_api": {
Feb 20 07:39:57 np0005625203.localdomain elastic_sinoussi[33895]:             "actuators": null,
Feb 20 07:39:57 np0005625203.localdomain elastic_sinoussi[33895]:             "device_nodes": "sr0",
Feb 20 07:39:57 np0005625203.localdomain elastic_sinoussi[33895]:             "human_readable_size": "482.00 KB",
Feb 20 07:39:57 np0005625203.localdomain elastic_sinoussi[33895]:             "id_bus": "ata",
Feb 20 07:39:57 np0005625203.localdomain elastic_sinoussi[33895]:             "model": "QEMU DVD-ROM",
Feb 20 07:39:57 np0005625203.localdomain elastic_sinoussi[33895]:             "nr_requests": "2",
Feb 20 07:39:57 np0005625203.localdomain elastic_sinoussi[33895]:             "partitions": {},
Feb 20 07:39:57 np0005625203.localdomain elastic_sinoussi[33895]:             "path": "/dev/sr0",
Feb 20 07:39:57 np0005625203.localdomain elastic_sinoussi[33895]:             "removable": "1",
Feb 20 07:39:57 np0005625203.localdomain elastic_sinoussi[33895]:             "rev": "2.5+",
Feb 20 07:39:57 np0005625203.localdomain elastic_sinoussi[33895]:             "ro": "0",
Feb 20 07:39:57 np0005625203.localdomain elastic_sinoussi[33895]:             "rotational": "1",
Feb 20 07:39:57 np0005625203.localdomain elastic_sinoussi[33895]:             "sas_address": "",
Feb 20 07:39:57 np0005625203.localdomain elastic_sinoussi[33895]:             "sas_device_handle": "",
Feb 20 07:39:57 np0005625203.localdomain elastic_sinoussi[33895]:             "scheduler_mode": "mq-deadline",
Feb 20 07:39:57 np0005625203.localdomain elastic_sinoussi[33895]:             "sectors": 0,
Feb 20 07:39:57 np0005625203.localdomain elastic_sinoussi[33895]:             "sectorsize": "2048",
Feb 20 07:39:57 np0005625203.localdomain elastic_sinoussi[33895]:             "size": 493568.0,
Feb 20 07:39:57 np0005625203.localdomain elastic_sinoussi[33895]:             "support_discard": "0",
Feb 20 07:39:57 np0005625203.localdomain elastic_sinoussi[33895]:             "type": "disk",
Feb 20 07:39:57 np0005625203.localdomain elastic_sinoussi[33895]:             "vendor": "QEMU"
Feb 20 07:39:57 np0005625203.localdomain elastic_sinoussi[33895]:         }
Feb 20 07:39:57 np0005625203.localdomain elastic_sinoussi[33895]:     }
Feb 20 07:39:57 np0005625203.localdomain elastic_sinoussi[33895]: ]
Feb 20 07:39:57 np0005625203.localdomain systemd[1]: libpod-8a72a3c9f6fb34769793131c76523ff9133068cb2fd28d282f6c55c7db744fe3.scope: Deactivated successfully.
Feb 20 07:39:57 np0005625203.localdomain podman[35353]: 2026-02-20 07:39:57.555280093 +0000 UTC m=+0.048669299 container died 8a72a3c9f6fb34769793131c76523ff9133068cb2fd28d282f6c55c7db744fe3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_sinoussi, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, io.openshift.expose-services=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, name=rhceph, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-09T10:25:24Z, release=1770267347, io.k8s.description=Red Hat Ceph Storage 7, version=7, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.42.2, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64)
Feb 20 07:39:57 np0005625203.localdomain systemd[1]: tmp-crun.r6c5KG.mount: Deactivated successfully.
Feb 20 07:39:57 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-2821e7cd3871f4721b9ee4f8394a9681e82f5b2ad3aa8f0d8adb14bbba6cc714-merged.mount: Deactivated successfully.
Feb 20 07:39:57 np0005625203.localdomain podman[35353]: 2026-02-20 07:39:57.591090818 +0000 UTC m=+0.084480014 container remove 8a72a3c9f6fb34769793131c76523ff9133068cb2fd28d282f6c55c7db744fe3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_sinoussi, vendor=Red Hat, Inc., GIT_CLEAN=True, io.openshift.tags=rhceph ceph, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.openshift.expose-services=, build-date=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.42.2, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, architecture=x86_64, name=rhceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, vcs-type=git, description=Red Hat Ceph Storage 7)
Feb 20 07:39:57 np0005625203.localdomain systemd[1]: libpod-conmon-8a72a3c9f6fb34769793131c76523ff9133068cb2fd28d282f6c55c7db744fe3.scope: Deactivated successfully.
Feb 20 07:39:57 np0005625203.localdomain sudo[33785]: pam_unix(sudo:session): session closed for user root
Feb 20 07:39:59 np0005625203.localdomain sudo[35367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 07:39:59 np0005625203.localdomain sudo[35367]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:39:59 np0005625203.localdomain sudo[35367]: pam_unix(sudo:session): session closed for user root
Feb 20 07:39:59 np0005625203.localdomain sshd[33800]: Invalid user devops from 185.246.128.171 port 5421
Feb 20 07:40:00 np0005625203.localdomain sshd[35382]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:40:01 np0005625203.localdomain sshd[35382]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 07:40:02 np0005625203.localdomain sshd[33800]: Disconnecting invalid user devops 185.246.128.171 port 5421: Change of username or service not allowed: (devops,ssh-connection) -> (carlos,ssh-connection) [preauth]
Feb 20 07:40:04 np0005625203.localdomain sshd[35384]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:40:05 np0005625203.localdomain sudo[35386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 07:40:05 np0005625203.localdomain sudo[35386]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:40:05 np0005625203.localdomain sudo[35386]: pam_unix(sudo:session): session closed for user root
Feb 20 07:40:05 np0005625203.localdomain sudo[35401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Feb 20 07:40:05 np0005625203.localdomain sudo[35401]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:40:06 np0005625203.localdomain systemd[1]: tmp-crun.x5xhu5.mount: Deactivated successfully.
Feb 20 07:40:06 np0005625203.localdomain systemd[26644]: Starting Mark boot as successful...
Feb 20 07:40:06 np0005625203.localdomain podman[35486]: 2026-02-20 07:40:06.433450757 +0000 UTC m=+0.096926435 container exec f6bf94ad66e54cdd610286b233c639418eb2b995978f1a145d6cfa77a016071d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, version=7, build-date=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.buildah.version=1.42.2, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, description=Red Hat Ceph Storage 7, RELEASE=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64)
Feb 20 07:40:06 np0005625203.localdomain systemd[26644]: Finished Mark boot as successful.
Feb 20 07:40:06 np0005625203.localdomain podman[35486]: 2026-02-20 07:40:06.535089124 +0000 UTC m=+0.198564752 container exec_died f6bf94ad66e54cdd610286b233c639418eb2b995978f1a145d6cfa77a016071d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, RELEASE=main, io.openshift.tags=rhceph ceph, name=rhceph, com.redhat.component=rhceph-container, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 07:40:06 np0005625203.localdomain sudo[35401]: pam_unix(sudo:session): session closed for user root
Feb 20 07:40:07 np0005625203.localdomain sudo[35552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 07:40:07 np0005625203.localdomain sudo[35552]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:40:07 np0005625203.localdomain sudo[35552]: pam_unix(sudo:session): session closed for user root
Feb 20 07:40:08 np0005625203.localdomain sshd[35384]: Invalid user carlos from 185.246.128.171 port 49334
Feb 20 07:40:08 np0005625203.localdomain sshd[35384]: Disconnecting invalid user carlos 185.246.128.171 port 49334: Change of username or service not allowed: (carlos,ssh-connection) -> (usuario,ssh-connection) [preauth]
Feb 20 07:40:10 np0005625203.localdomain sshd[35567]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:40:12 np0005625203.localdomain sshd[35567]: Invalid user usuario from 185.246.128.171 port 14946
Feb 20 07:40:12 np0005625203.localdomain sshd[35567]: Disconnecting invalid user usuario 185.246.128.171 port 14946: Change of username or service not allowed: (usuario,ssh-connection) -> (supervisor,ssh-connection) [preauth]
Feb 20 07:40:13 np0005625203.localdomain sshd[35569]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:40:15 np0005625203.localdomain sshd[35569]: Invalid user supervisor from 185.246.128.171 port 31461
Feb 20 07:40:15 np0005625203.localdomain sshd[35571]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:40:15 np0005625203.localdomain sshd[35571]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 07:40:16 np0005625203.localdomain sshd[35569]: Disconnecting invalid user supervisor 185.246.128.171 port 31461: Change of username or service not allowed: (supervisor,ssh-connection) -> (richard,ssh-connection) [preauth]
Feb 20 07:40:18 np0005625203.localdomain sshd[35573]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:40:21 np0005625203.localdomain sshd[35573]: Invalid user richard from 185.246.128.171 port 53736
Feb 20 07:40:25 np0005625203.localdomain sshd[35573]: Disconnecting invalid user richard 185.246.128.171 port 53736: Change of username or service not allowed: (richard,ssh-connection) -> (moth3r,ssh-connection) [preauth]
Feb 20 07:40:28 np0005625203.localdomain sshd[35575]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:40:32 np0005625203.localdomain sshd[35575]: Invalid user moth3r from 185.246.128.171 port 43982
Feb 20 07:40:34 np0005625203.localdomain sshd[35575]: Disconnecting invalid user moth3r 185.246.128.171 port 43982: Change of username or service not allowed: (moth3r,ssh-connection) -> (ftpguest,ssh-connection) [preauth]
Feb 20 07:40:37 np0005625203.localdomain sshd[35577]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:40:40 np0005625203.localdomain sshd[35577]: Invalid user ftpguest from 185.246.128.171 port 25962
Feb 20 07:40:40 np0005625203.localdomain sshd[35577]: Disconnecting invalid user ftpguest 185.246.128.171 port 25962: Change of username or service not allowed: (ftpguest,ssh-connection) -> (username,ssh-connection) [preauth]
Feb 20 07:40:41 np0005625203.localdomain sshd[35579]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:40:43 np0005625203.localdomain sshd[35579]: Invalid user username from 185.246.128.171 port 44656
Feb 20 07:40:46 np0005625203.localdomain sshd[35579]: Disconnecting invalid user username 185.246.128.171 port 44656: Change of username or service not allowed: (username,ssh-connection) -> (ravi,ssh-connection) [preauth]
Feb 20 07:40:47 np0005625203.localdomain sshd[35581]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:40:50 np0005625203.localdomain sshd[35581]: Invalid user ravi from 185.246.128.171 port 10828
Feb 20 07:40:51 np0005625203.localdomain sshd[35581]: Disconnecting invalid user ravi 185.246.128.171 port 10828: Change of username or service not allowed: (ravi,ssh-connection) -> (mc1,ssh-connection) [preauth]
Feb 20 07:40:54 np0005625203.localdomain sshd[35583]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:40:55 np0005625203.localdomain sshd[35584]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:40:56 np0005625203.localdomain sshd[35584]: Invalid user admin from 103.171.84.20 port 56962
Feb 20 07:40:57 np0005625203.localdomain sshd[35584]: Received disconnect from 103.171.84.20 port 56962:11: Bye Bye [preauth]
Feb 20 07:40:57 np0005625203.localdomain sshd[35584]: Disconnected from invalid user admin 103.171.84.20 port 56962 [preauth]
Feb 20 07:40:59 np0005625203.localdomain sshd[35583]: Invalid user mc1 from 185.246.128.171 port 46625
Feb 20 07:41:00 np0005625203.localdomain sshd[35583]: Disconnecting invalid user mc1 185.246.128.171 port 46625: Change of username or service not allowed: (mc1,ssh-connection) -> (nutanix,ssh-connection) [preauth]
Feb 20 07:41:00 np0005625203.localdomain sshd[35587]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:41:01 np0005625203.localdomain sshd[35587]: Invalid user nutanix from 185.246.128.171 port 15581
Feb 20 07:41:02 np0005625203.localdomain sshd[35587]: Disconnecting invalid user nutanix 185.246.128.171 port 15581: Change of username or service not allowed: (nutanix,ssh-connection) -> (connect,ssh-connection) [preauth]
Feb 20 07:41:03 np0005625203.localdomain sshd[35589]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:41:05 np0005625203.localdomain sshd[35591]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:41:06 np0005625203.localdomain sshd[35591]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 07:41:06 np0005625203.localdomain sshd[35589]: Invalid user connect from 185.246.128.171 port 26868
Feb 20 07:41:07 np0005625203.localdomain sudo[35593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 07:41:07 np0005625203.localdomain sudo[35593]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:41:07 np0005625203.localdomain sudo[35593]: pam_unix(sudo:session): session closed for user root
Feb 20 07:41:07 np0005625203.localdomain sudo[35608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Feb 20 07:41:07 np0005625203.localdomain sudo[35608]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:41:08 np0005625203.localdomain podman[35690]: 2026-02-20 07:41:08.269537164 +0000 UTC m=+0.077672860 container exec f6bf94ad66e54cdd610286b233c639418eb2b995978f1a145d6cfa77a016071d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, release=1770267347, GIT_BRANCH=main, GIT_CLEAN=True, distribution-scope=public, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git)
Feb 20 07:41:08 np0005625203.localdomain podman[35690]: 2026-02-20 07:41:08.401138258 +0000 UTC m=+0.209273964 container exec_died f6bf94ad66e54cdd610286b233c639418eb2b995978f1a145d6cfa77a016071d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203, build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_CLEAN=True, ceph=True, io.buildah.version=1.42.2, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., version=7, CEPH_POINT_RELEASE=, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1770267347, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 07:41:08 np0005625203.localdomain sudo[35608]: pam_unix(sudo:session): session closed for user root
Feb 20 07:41:08 np0005625203.localdomain sudo[35754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 07:41:08 np0005625203.localdomain sudo[35754]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:41:08 np0005625203.localdomain sudo[35754]: pam_unix(sudo:session): session closed for user root
Feb 20 07:41:08 np0005625203.localdomain sudo[35769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 07:41:08 np0005625203.localdomain sudo[35769]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:41:08 np0005625203.localdomain sshd[35589]: Disconnecting invalid user connect 185.246.128.171 port 26868: Change of username or service not allowed: (connect,ssh-connection) -> (casaos,ssh-connection) [preauth]
Feb 20 07:41:09 np0005625203.localdomain sshd[35816]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:41:09 np0005625203.localdomain sudo[35769]: pam_unix(sudo:session): session closed for user root
Feb 20 07:41:09 np0005625203.localdomain sudo[35817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 07:41:09 np0005625203.localdomain sudo[35817]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:41:09 np0005625203.localdomain sudo[35817]: pam_unix(sudo:session): session closed for user root
Feb 20 07:41:10 np0005625203.localdomain sshd[35816]: Invalid user casaos from 185.246.128.171 port 59430
Feb 20 07:41:11 np0005625203.localdomain sshd[35816]: Disconnecting invalid user casaos 185.246.128.171 port 59430: Change of username or service not allowed: (casaos,ssh-connection) -> (jenkins,ssh-connection) [preauth]
Feb 20 07:41:12 np0005625203.localdomain sshd[35833]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:41:16 np0005625203.localdomain sshd[35833]: Invalid user jenkins from 185.246.128.171 port 13249
Feb 20 07:41:16 np0005625203.localdomain sshd[35835]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:41:17 np0005625203.localdomain sshd[35835]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 07:41:18 np0005625203.localdomain sshd[25076]: Received disconnect from 192.168.122.100 port 33380:11: disconnected by user
Feb 20 07:41:18 np0005625203.localdomain sshd[25076]: Disconnected from user zuul 192.168.122.100 port 33380
Feb 20 07:41:18 np0005625203.localdomain sshd[25073]: pam_unix(sshd:session): session closed for user zuul
Feb 20 07:41:18 np0005625203.localdomain systemd[1]: session-13.scope: Deactivated successfully.
Feb 20 07:41:18 np0005625203.localdomain systemd[1]: session-13.scope: Consumed 21.609s CPU time.
Feb 20 07:41:18 np0005625203.localdomain systemd-logind[759]: Session 13 logged out. Waiting for processes to exit.
Feb 20 07:41:18 np0005625203.localdomain systemd-logind[759]: Removed session 13.
Feb 20 07:41:23 np0005625203.localdomain sshd[35833]: Disconnecting invalid user jenkins 185.246.128.171 port 13249: Change of username or service not allowed: (jenkins,ssh-connection) -> (oracle,ssh-connection) [preauth]
Feb 20 07:41:26 np0005625203.localdomain sshd[35837]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:41:30 np0005625203.localdomain sshd[35837]: Invalid user oracle from 185.246.128.171 port 20582
Feb 20 07:41:32 np0005625203.localdomain sshd[35837]: error: maximum authentication attempts exceeded for invalid user oracle from 185.246.128.171 port 20582 ssh2 [preauth]
Feb 20 07:41:32 np0005625203.localdomain sshd[35837]: Disconnecting invalid user oracle 185.246.128.171 port 20582: Too many authentication failures [preauth]
Feb 20 07:41:34 np0005625203.localdomain sshd[35839]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:41:37 np0005625203.localdomain sshd[35839]: Invalid user oracle from 185.246.128.171 port 62062
Feb 20 07:41:43 np0005625203.localdomain sshd[35839]: error: maximum authentication attempts exceeded for invalid user oracle from 185.246.128.171 port 62062 ssh2 [preauth]
Feb 20 07:41:43 np0005625203.localdomain sshd[35839]: Disconnecting invalid user oracle 185.246.128.171 port 62062: Too many authentication failures [preauth]
Feb 20 07:41:46 np0005625203.localdomain sshd[35841]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:41:49 np0005625203.localdomain sshd[35841]: Invalid user oracle from 185.246.128.171 port 61936
Feb 20 07:41:56 np0005625203.localdomain sshd[35843]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:41:56 np0005625203.localdomain sshd[35843]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 07:41:57 np0005625203.localdomain sshd[35841]: error: maximum authentication attempts exceeded for invalid user oracle from 185.246.128.171 port 61936 ssh2 [preauth]
Feb 20 07:41:57 np0005625203.localdomain sshd[35841]: Disconnecting invalid user oracle 185.246.128.171 port 61936: Too many authentication failures [preauth]
Feb 20 07:41:59 np0005625203.localdomain sshd[35845]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:42:01 np0005625203.localdomain sshd[35845]: Invalid user oracle from 185.246.128.171 port 2685
Feb 20 07:42:07 np0005625203.localdomain sshd[35845]: error: maximum authentication attempts exceeded for invalid user oracle from 185.246.128.171 port 2685 ssh2 [preauth]
Feb 20 07:42:07 np0005625203.localdomain sshd[35845]: Disconnecting invalid user oracle 185.246.128.171 port 2685: Too many authentication failures [preauth]
Feb 20 07:42:07 np0005625203.localdomain sshd[35847]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:42:09 np0005625203.localdomain sshd[35847]: Invalid user oracle from 185.246.128.171 port 51293
Feb 20 07:42:10 np0005625203.localdomain sudo[35849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 07:42:10 np0005625203.localdomain sudo[35849]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:42:10 np0005625203.localdomain sudo[35849]: pam_unix(sudo:session): session closed for user root
Feb 20 07:42:10 np0005625203.localdomain sudo[35864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 07:42:10 np0005625203.localdomain sudo[35864]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:42:10 np0005625203.localdomain sshd[35847]: Disconnecting invalid user oracle 185.246.128.171 port 51293: Change of username or service not allowed: (oracle,ssh-connection) -> (sftp_user,ssh-connection) [preauth]
Feb 20 07:42:10 np0005625203.localdomain sudo[35864]: pam_unix(sudo:session): session closed for user root
Feb 20 07:42:11 np0005625203.localdomain sudo[35910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 07:42:11 np0005625203.localdomain sudo[35910]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:42:11 np0005625203.localdomain sudo[35910]: pam_unix(sudo:session): session closed for user root
Feb 20 07:42:11 np0005625203.localdomain sshd[35925]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:42:13 np0005625203.localdomain sshd[35925]: Invalid user sftp_user from 185.246.128.171 port 9635
Feb 20 07:42:14 np0005625203.localdomain sshd[35925]: Disconnecting invalid user sftp_user 185.246.128.171 port 9635: Change of username or service not allowed: (sftp_user,ssh-connection) -> (array,ssh-connection) [preauth]
Feb 20 07:42:16 np0005625203.localdomain sshd[35927]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:42:19 np0005625203.localdomain sshd[35927]: Invalid user array from 185.246.128.171 port 34574
Feb 20 07:42:19 np0005625203.localdomain sshd[35927]: Disconnecting invalid user array 185.246.128.171 port 34574: Change of username or service not allowed: (array,ssh-connection) -> (tazos,ssh-connection) [preauth]
Feb 20 07:42:21 np0005625203.localdomain sshd[35929]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:42:23 np0005625203.localdomain sshd[35929]: Invalid user tazos from 185.246.128.171 port 64035
Feb 20 07:42:23 np0005625203.localdomain sshd[35929]: Disconnecting invalid user tazos 185.246.128.171 port 64035: Change of username or service not allowed: (tazos,ssh-connection) -> (auditor,ssh-connection) [preauth]
Feb 20 07:42:26 np0005625203.localdomain sshd[35931]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:42:26 np0005625203.localdomain sshd[35931]: Invalid user n8n from 143.198.161.12 port 50586
Feb 20 07:42:26 np0005625203.localdomain sshd[35931]: Received disconnect from 143.198.161.12 port 50586:11: Bye Bye [preauth]
Feb 20 07:42:26 np0005625203.localdomain sshd[35931]: Disconnected from invalid user n8n 143.198.161.12 port 50586 [preauth]
Feb 20 07:42:26 np0005625203.localdomain sshd[35933]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:42:29 np0005625203.localdomain sshd[35933]: Invalid user auditor from 185.246.128.171 port 28277
Feb 20 07:42:30 np0005625203.localdomain sshd[35933]: Disconnecting invalid user auditor 185.246.128.171 port 28277: Change of username or service not allowed: (auditor,ssh-connection) -> (byte,ssh-connection) [preauth]
Feb 20 07:42:31 np0005625203.localdomain sshd[35935]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:42:31 np0005625203.localdomain sshd[35935]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 07:42:33 np0005625203.localdomain sshd[35937]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:42:34 np0005625203.localdomain sshd[35937]: Invalid user claude from 40.81.244.142 port 49572
Feb 20 07:42:34 np0005625203.localdomain sshd[35937]: Received disconnect from 40.81.244.142 port 49572:11: Bye Bye [preauth]
Feb 20 07:42:34 np0005625203.localdomain sshd[35937]: Disconnected from invalid user claude 40.81.244.142 port 49572 [preauth]
Feb 20 07:42:35 np0005625203.localdomain sshd[35939]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:42:37 np0005625203.localdomain sshd[35941]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:42:38 np0005625203.localdomain sshd[35939]: Invalid user byte from 185.246.128.171 port 13206
Feb 20 07:42:38 np0005625203.localdomain sshd[35939]: Disconnecting invalid user byte 185.246.128.171 port 13206: Change of username or service not allowed: (byte,ssh-connection) -> (zabbix,ssh-connection) [preauth]
Feb 20 07:42:38 np0005625203.localdomain sshd[35941]: Invalid user claude from 123.204.132.127 port 56030
Feb 20 07:42:39 np0005625203.localdomain sshd[35941]: Received disconnect from 123.204.132.127 port 56030:11: Bye Bye [preauth]
Feb 20 07:42:39 np0005625203.localdomain sshd[35941]: Disconnected from invalid user claude 123.204.132.127 port 56030 [preauth]
Feb 20 07:42:39 np0005625203.localdomain sshd[35943]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:42:43 np0005625203.localdomain sshd[35943]: Invalid user zabbix from 185.246.128.171 port 42336
Feb 20 07:42:45 np0005625203.localdomain sshd[35945]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:42:45 np0005625203.localdomain sshd[35945]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 07:42:47 np0005625203.localdomain sshd[35943]: Disconnecting invalid user zabbix 185.246.128.171 port 42336: Change of username or service not allowed: (zabbix,ssh-connection) -> (factorio,ssh-connection) [preauth]
Feb 20 07:42:48 np0005625203.localdomain sshd[35947]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:42:49 np0005625203.localdomain sshd[35947]: Invalid user claude from 187.87.206.21 port 48128
Feb 20 07:42:49 np0005625203.localdomain sshd[35947]: Received disconnect from 187.87.206.21 port 48128:11: Bye Bye [preauth]
Feb 20 07:42:49 np0005625203.localdomain sshd[35947]: Disconnected from invalid user claude 187.87.206.21 port 48128 [preauth]
Feb 20 07:42:49 np0005625203.localdomain sshd[35949]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:42:51 np0005625203.localdomain sshd[35949]: Invalid user factorio from 185.246.128.171 port 32630
Feb 20 07:42:52 np0005625203.localdomain sshd[35949]: Disconnecting invalid user factorio 185.246.128.171 port 32630: Change of username or service not allowed: (factorio,ssh-connection) -> (RPM,ssh-connection) [preauth]
Feb 20 07:42:54 np0005625203.localdomain sshd[35951]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:42:59 np0005625203.localdomain sshd[35951]: Invalid user RPM from 185.246.128.171 port 64581
Feb 20 07:43:01 np0005625203.localdomain sshd[35951]: Disconnecting invalid user RPM 185.246.128.171 port 64581: Change of username or service not allowed: (RPM,ssh-connection) -> (share,ssh-connection) [preauth]
Feb 20 07:43:02 np0005625203.localdomain sshd[35953]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:43:06 np0005625203.localdomain sshd[35953]: Invalid user share from 185.246.128.171 port 44549
Feb 20 07:43:06 np0005625203.localdomain sshd[35953]: Disconnecting invalid user share 185.246.128.171 port 44549: Change of username or service not allowed: (share,ssh-connection) -> (landscape,ssh-connection) [preauth]
Feb 20 07:43:09 np0005625203.localdomain sshd[35955]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:43:11 np0005625203.localdomain sudo[35957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 07:43:11 np0005625203.localdomain sudo[35957]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:43:11 np0005625203.localdomain sudo[35957]: pam_unix(sudo:session): session closed for user root
Feb 20 07:43:11 np0005625203.localdomain sudo[35972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 07:43:11 np0005625203.localdomain sudo[35972]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:43:12 np0005625203.localdomain sudo[35972]: pam_unix(sudo:session): session closed for user root
Feb 20 07:43:12 np0005625203.localdomain sudo[36019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 07:43:13 np0005625203.localdomain sudo[36019]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:43:13 np0005625203.localdomain sudo[36019]: pam_unix(sudo:session): session closed for user root
Feb 20 07:43:14 np0005625203.localdomain sshd[35955]: Invalid user landscape from 185.246.128.171 port 20310
Feb 20 07:43:14 np0005625203.localdomain sshd[35955]: Disconnecting invalid user landscape 185.246.128.171 port 20310: Change of username or service not allowed: (landscape,ssh-connection) -> (test,ssh-connection) [preauth]
Feb 20 07:43:17 np0005625203.localdomain sshd[36034]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:43:20 np0005625203.localdomain sshd[36034]: Invalid user test from 185.246.128.171 port 1144
Feb 20 07:43:24 np0005625203.localdomain sshd[36034]: error: maximum authentication attempts exceeded for invalid user test from 185.246.128.171 port 1144 ssh2 [preauth]
Feb 20 07:43:24 np0005625203.localdomain sshd[36034]: Disconnecting invalid user test 185.246.128.171 port 1144: Too many authentication failures [preauth]
Feb 20 07:43:26 np0005625203.localdomain sshd[36036]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:43:32 np0005625203.localdomain sshd[36036]: Invalid user test from 185.246.128.171 port 54740
Feb 20 07:43:34 np0005625203.localdomain sshd[36038]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:43:34 np0005625203.localdomain sshd[36038]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 07:43:35 np0005625203.localdomain sshd[36036]: error: maximum authentication attempts exceeded for invalid user test from 185.246.128.171 port 54740 ssh2 [preauth]
Feb 20 07:43:35 np0005625203.localdomain sshd[36036]: Disconnecting invalid user test 185.246.128.171 port 54740: Too many authentication failures [preauth]
Feb 20 07:43:37 np0005625203.localdomain sshd[36040]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:43:42 np0005625203.localdomain sshd[36040]: Invalid user test from 185.246.128.171 port 48748
Feb 20 07:43:45 np0005625203.localdomain sshd[36042]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:43:45 np0005625203.localdomain sshd[36042]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 07:43:46 np0005625203.localdomain sshd[36040]: error: maximum authentication attempts exceeded for invalid user test from 185.246.128.171 port 48748 ssh2 [preauth]
Feb 20 07:43:46 np0005625203.localdomain sshd[36040]: Disconnecting invalid user test 185.246.128.171 port 48748: Too many authentication failures [preauth]
Feb 20 07:43:47 np0005625203.localdomain sshd[36044]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:43:49 np0005625203.localdomain systemd[26644]: Created slice User Background Tasks Slice.
Feb 20 07:43:49 np0005625203.localdomain systemd[26644]: Starting Cleanup of User's Temporary Files and Directories...
Feb 20 07:43:49 np0005625203.localdomain systemd[26644]: Finished Cleanup of User's Temporary Files and Directories.
Feb 20 07:43:49 np0005625203.localdomain sshd[36044]: Invalid user test from 185.246.128.171 port 43859
Feb 20 07:43:55 np0005625203.localdomain sshd[36044]: error: maximum authentication attempts exceeded for invalid user test from 185.246.128.171 port 43859 ssh2 [preauth]
Feb 20 07:43:55 np0005625203.localdomain sshd[36044]: Disconnecting invalid user test 185.246.128.171 port 43859: Too many authentication failures [preauth]
Feb 20 07:43:57 np0005625203.localdomain sshd[36047]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:44:01 np0005625203.localdomain sshd[36047]: Invalid user test from 185.246.128.171 port 38112
Feb 20 07:44:04 np0005625203.localdomain sshd[36047]: error: maximum authentication attempts exceeded for invalid user test from 185.246.128.171 port 38112 ssh2 [preauth]
Feb 20 07:44:04 np0005625203.localdomain sshd[36047]: Disconnecting invalid user test 185.246.128.171 port 38112: Too many authentication failures [preauth]
Feb 20 07:44:08 np0005625203.localdomain sshd[36049]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:44:11 np0005625203.localdomain sshd[36049]: Invalid user test from 185.246.128.171 port 31593
Feb 20 07:44:13 np0005625203.localdomain sshd[36049]: Disconnecting invalid user test 185.246.128.171 port 31593: Change of username or service not allowed: (test,ssh-connection) -> (xandeum,ssh-connection) [preauth]
Feb 20 07:44:13 np0005625203.localdomain sudo[36051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 07:44:13 np0005625203.localdomain sudo[36051]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:44:13 np0005625203.localdomain sudo[36051]: pam_unix(sudo:session): session closed for user root
Feb 20 07:44:13 np0005625203.localdomain sudo[36066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 07:44:13 np0005625203.localdomain sudo[36066]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:44:13 np0005625203.localdomain sudo[36066]: pam_unix(sudo:session): session closed for user root
Feb 20 07:44:14 np0005625203.localdomain sudo[36112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 07:44:14 np0005625203.localdomain sudo[36112]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:44:14 np0005625203.localdomain sudo[36112]: pam_unix(sudo:session): session closed for user root
Feb 20 07:44:14 np0005625203.localdomain sshd[36127]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:44:19 np0005625203.localdomain sshd[36127]: Invalid user xandeum from 185.246.128.171 port 6174
Feb 20 07:44:19 np0005625203.localdomain sshd[36127]: Disconnecting invalid user xandeum 185.246.128.171 port 6174: Change of username or service not allowed: (xandeum,ssh-connection) -> (vscode,ssh-connection) [preauth]
Feb 20 07:44:20 np0005625203.localdomain sshd[36129]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:44:21 np0005625203.localdomain sshd[36131]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:44:21 np0005625203.localdomain sshd[36131]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 07:44:23 np0005625203.localdomain sshd[36129]: Invalid user vscode from 185.246.128.171 port 39013
Feb 20 07:44:24 np0005625203.localdomain sshd[36129]: Disconnecting invalid user vscode 185.246.128.171 port 39013: Change of username or service not allowed: (vscode,ssh-connection) -> (qemu,ssh-connection) [preauth]
Feb 20 07:44:26 np0005625203.localdomain sshd[36133]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:44:31 np0005625203.localdomain sshd[36133]: Invalid user qemu from 185.246.128.171 port 8557
Feb 20 07:44:32 np0005625203.localdomain sshd[36133]: Disconnecting invalid user qemu 185.246.128.171 port 8557: Change of username or service not allowed: (qemu,ssh-connection) -> (fff,ssh-connection) [preauth]
Feb 20 07:44:34 np0005625203.localdomain sshd[36135]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:44:37 np0005625203.localdomain sshd[36135]: Invalid user fff from 185.246.128.171 port 51187
Feb 20 07:44:38 np0005625203.localdomain sshd[36135]: Disconnecting invalid user fff 185.246.128.171 port 51187: Change of username or service not allowed: (fff,ssh-connection) -> (juan,ssh-connection) [preauth]
Feb 20 07:44:40 np0005625203.localdomain sshd[36138]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:44:44 np0005625203.localdomain sshd[36138]: Invalid user juan from 185.246.128.171 port 24494
Feb 20 07:44:46 np0005625203.localdomain sshd[36138]: Disconnecting invalid user juan 185.246.128.171 port 24494: Change of username or service not allowed: (juan,ssh-connection) -> (hugo,ssh-connection) [preauth]
Feb 20 07:44:48 np0005625203.localdomain sshd[36140]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:44:49 np0005625203.localdomain sshd[36142]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:44:49 np0005625203.localdomain sshd[36142]: Accepted publickey for zuul from 192.168.122.100 port 38160 ssh2: RSA SHA256:uaxCfdsB9dLyL4znjsLCPDTT6O+6xZzjHRjczJFOA6k
Feb 20 07:44:49 np0005625203.localdomain systemd-logind[759]: New session 27 of user zuul.
Feb 20 07:44:49 np0005625203.localdomain systemd[1]: Started Session 27 of User zuul.
Feb 20 07:44:49 np0005625203.localdomain sshd[36142]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Feb 20 07:44:50 np0005625203.localdomain sudo[36188]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oubggqlqnpzpduxvudvmqiuemdtorrgk ; /usr/bin/python3
Feb 20 07:44:50 np0005625203.localdomain sudo[36188]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:44:50 np0005625203.localdomain python3[36190]: ansible-ansible.legacy.ping Invoked with data=pong
Feb 20 07:44:50 np0005625203.localdomain sudo[36188]: pam_unix(sudo:session): session closed for user root
Feb 20 07:44:50 np0005625203.localdomain sudo[36233]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-urjixraeuxcwkjazfzulxpeagfmengyy ; /usr/bin/python3
Feb 20 07:44:50 np0005625203.localdomain sudo[36233]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:44:51 np0005625203.localdomain python3[36235]: ansible-setup Invoked with gather_subset=['!facter', '!ohai'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 20 07:44:51 np0005625203.localdomain sudo[36233]: pam_unix(sudo:session): session closed for user root
Feb 20 07:44:51 np0005625203.localdomain sshd[36140]: Invalid user hugo from 185.246.128.171 port 4550
Feb 20 07:44:51 np0005625203.localdomain sudo[36253]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vdfmpwndeuwequpttgothfonljmffmpj ; /usr/bin/python3
Feb 20 07:44:51 np0005625203.localdomain sudo[36253]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:44:51 np0005625203.localdomain python3[36255]: ansible-user Invoked with name=tripleo-admin generate_ssh_key=False state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005625203.localdomain update_password=always uid=None group=None groups=None comment=None home=None shell=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Feb 20 07:44:51 np0005625203.localdomain useradd[36257]: new group: name=tripleo-admin, GID=1003
Feb 20 07:44:51 np0005625203.localdomain useradd[36257]: new user: name=tripleo-admin, UID=1003, GID=1003, home=/home/tripleo-admin, shell=/bin/bash, from=none
Feb 20 07:44:51 np0005625203.localdomain sshd[36140]: Disconnecting invalid user hugo 185.246.128.171 port 4550: Change of username or service not allowed: (hugo,ssh-connection) -> (lenovo,ssh-connection) [preauth]
Feb 20 07:44:51 np0005625203.localdomain sudo[36253]: pam_unix(sudo:session): session closed for user root
Feb 20 07:44:51 np0005625203.localdomain sudo[36309]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ocmvaxhdbbenqpzwcnrxappmsauqljpt ; /usr/bin/python3
Feb 20 07:44:51 np0005625203.localdomain sudo[36309]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:44:52 np0005625203.localdomain python3[36311]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/tripleo-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:44:52 np0005625203.localdomain sudo[36309]: pam_unix(sudo:session): session closed for user root
Feb 20 07:44:52 np0005625203.localdomain sudo[36352]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ygpkmknfzlftcyxjeqhiylsmzmbwnruw ; /usr/bin/python3
Feb 20 07:44:52 np0005625203.localdomain sudo[36352]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:44:52 np0005625203.localdomain python3[36354]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/tripleo-admin mode=288 owner=root group=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771573491.7854264-66444-90406356339229/source _original_basename=tmplmavgqt1 follow=False checksum=b3e7ecdcc699d217c6b083a91b07208207813d93 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:44:52 np0005625203.localdomain sudo[36352]: pam_unix(sudo:session): session closed for user root
Feb 20 07:44:52 np0005625203.localdomain sudo[36382]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uqgxhzpulsqtpfgqbtrcjivankunlvsj ; /usr/bin/python3
Feb 20 07:44:52 np0005625203.localdomain sudo[36382]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:44:52 np0005625203.localdomain python3[36384]: ansible-file Invoked with path=/home/tripleo-admin state=directory owner=tripleo-admin group=tripleo-admin mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:44:52 np0005625203.localdomain sudo[36382]: pam_unix(sudo:session): session closed for user root
Feb 20 07:44:53 np0005625203.localdomain sudo[36398]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mdrizqoevdoanpmnlyqlgxnpoigthmji ; /usr/bin/python3
Feb 20 07:44:53 np0005625203.localdomain sudo[36398]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:44:53 np0005625203.localdomain python3[36400]: ansible-file Invoked with path=/home/tripleo-admin/.ssh state=directory owner=tripleo-admin group=tripleo-admin mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:44:53 np0005625203.localdomain sudo[36398]: pam_unix(sudo:session): session closed for user root
Feb 20 07:44:53 np0005625203.localdomain sudo[36414]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gtdkljoiokueizmejxigbuzxoitidkyj ; /usr/bin/python3
Feb 20 07:44:53 np0005625203.localdomain sudo[36414]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:44:53 np0005625203.localdomain python3[36416]: ansible-file Invoked with path=/home/tripleo-admin/.ssh/authorized_keys state=touch owner=tripleo-admin group=tripleo-admin mode=384 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:44:53 np0005625203.localdomain sudo[36414]: pam_unix(sudo:session): session closed for user root
Feb 20 07:44:53 np0005625203.localdomain sshd[36417]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:44:54 np0005625203.localdomain sudo[36431]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gapljgnwysvjcotnpiijyglzxbadlfnl ; /usr/bin/python3
Feb 20 07:44:54 np0005625203.localdomain sudo[36431]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 07:44:54 np0005625203.localdomain python3[36433]: ansible-lineinfile Invoked with path=/home/tripleo-admin/.ssh/authorized_keys line=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDMsWl2iDqidUqzeXIedhPUnMaIwstb6f7CXhkhrGMXj2w21P81bPYesLJBAmIee3BaCMGYsrUpWu7u92b6J2nM7sdG3Lo3YZqie0IUZG/NpfJ+OEJsLIQ5zFT7jGNI77bmiCBX4jibhRJuDVtuombA5bly/q1x7yqRZ5UuwmTahAz49MPmun69R00RpbzOjt8OgQQ6ZdjDclItely6T7y+tDL3UemjBYcKGNckzXB1nWJEjviTyIXiLLqJdcxWarPXzxewMUoyRiTPQU6uRQ1VhuBRFyiofrACRWtFHfLEuWMVqQAL1OtJD5P+KSuVKHkT/MAFSdo0OptGxMmFN5ZvbruOyab6MovPK/9sTaFrhzT6tE78w+fEIFe4kPvM2bw80sKzO9yrkvqs1LL9ToVf9r2TufxRZVCibTFSa9Cw0U9yILFdCrlCZ2GMFNTCFuM7vQDAKwrKndvATAxKrm+D8Xme2+SVCv3oKJtx0m4D6gc9/IMzFhC7Ibh6bJaSj2s= zuul-build-sshkey
                                                          regexp=Generated by TripleO state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:44:54 np0005625203.localdomain sudo[36431]: pam_unix(sudo:session): session closed for user root
Feb 20 07:44:55 np0005625203.localdomain python3[36448]: ansible-ping Invoked with data=pong
Feb 20 07:44:56 np0005625203.localdomain sshd[36417]: Invalid user lenovo from 185.246.128.171 port 30386
Feb 20 07:44:58 np0005625203.localdomain sshd[36449]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:44:58 np0005625203.localdomain sshd[36417]: Disconnecting invalid user lenovo 185.246.128.171 port 30386: Change of username or service not allowed: (lenovo,ssh-connection) -> (sc,ssh-connection) [preauth]
Feb 20 07:44:59 np0005625203.localdomain sshd[36449]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 07:44:59 np0005625203.localdomain sshd[36451]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:45:03 np0005625203.localdomain sshd[36451]: Invalid user sc from 185.246.128.171 port 62798
Feb 20 07:45:05 np0005625203.localdomain sshd[36451]: Disconnecting invalid user sc 185.246.128.171 port 62798: Change of username or service not allowed: (sc,ssh-connection) -> (gg,ssh-connection) [preauth]
Feb 20 07:45:06 np0005625203.localdomain sshd[36453]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:45:06 np0005625203.localdomain sshd[36453]: Accepted publickey for tripleo-admin from 192.168.122.100 port 48952 ssh2: RSA SHA256:uaxCfdsB9dLyL4znjsLCPDTT6O+6xZzjHRjczJFOA6k
Feb 20 07:45:06 np0005625203.localdomain systemd[1]: Created slice User Slice of UID 1003.
Feb 20 07:45:06 np0005625203.localdomain systemd[1]: Starting User Runtime Directory /run/user/1003...
Feb 20 07:45:06 np0005625203.localdomain systemd-logind[759]: New session 28 of user tripleo-admin.
Feb 20 07:45:06 np0005625203.localdomain systemd[1]: Finished User Runtime Directory /run/user/1003.
Feb 20 07:45:06 np0005625203.localdomain systemd[1]: Starting User Manager for UID 1003...
Feb 20 07:45:06 np0005625203.localdomain systemd[36457]: pam_unix(systemd-user:session): session opened for user tripleo-admin(uid=1003) by (uid=0)
Feb 20 07:45:06 np0005625203.localdomain systemd[36457]: Queued start job for default target Main User Target.
Feb 20 07:45:06 np0005625203.localdomain systemd[36457]: Created slice User Application Slice.
Feb 20 07:45:06 np0005625203.localdomain systemd[36457]: Started Mark boot as successful after the user session has run 2 minutes.
Feb 20 07:45:06 np0005625203.localdomain systemd[36457]: Started Daily Cleanup of User's Temporary Directories.
Feb 20 07:45:06 np0005625203.localdomain systemd[36457]: Reached target Paths.
Feb 20 07:45:06 np0005625203.localdomain systemd[36457]: Reached target Timers.
Feb 20 07:45:06 np0005625203.localdomain systemd[36457]: Starting D-Bus User Message Bus Socket...
Feb 20 07:45:06 np0005625203.localdomain systemd[36457]: Starting Create User's Volatile Files and Directories...
Feb 20 07:45:06 np0005625203.localdomain systemd[36457]: Finished Create User's Volatile Files and Directories.
Feb 20 07:45:06 np0005625203.localdomain systemd[36457]: Listening on D-Bus User Message Bus Socket.
Feb 20 07:45:06 np0005625203.localdomain systemd[36457]: Reached target Sockets.
Feb 20 07:45:06 np0005625203.localdomain systemd[36457]: Reached target Basic System.
Feb 20 07:45:06 np0005625203.localdomain systemd[36457]: Reached target Main User Target.
Feb 20 07:45:06 np0005625203.localdomain systemd[36457]: Startup finished in 122ms.
Feb 20 07:45:06 np0005625203.localdomain systemd[1]: Started User Manager for UID 1003.
Feb 20 07:45:06 np0005625203.localdomain systemd[1]: Started Session 28 of User tripleo-admin.
Feb 20 07:45:06 np0005625203.localdomain sshd[36453]: pam_unix(sshd:session): session opened for user tripleo-admin(uid=1003) by (uid=0)
Feb 20 07:45:06 np0005625203.localdomain sudo[36516]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-usilqpevteymbkyybizkyingjszzgmqm ; /usr/bin/python3
Feb 20 07:45:06 np0005625203.localdomain sudo[36516]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:45:07 np0005625203.localdomain python3[36518]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all', 'min'] gather_timeout=45 filter=[] fact_path=/etc/ansible/facts.d
Feb 20 07:45:07 np0005625203.localdomain sudo[36516]: pam_unix(sudo:session): session closed for user root
Feb 20 07:45:07 np0005625203.localdomain sshd[36523]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:45:08 np0005625203.localdomain sshd[36525]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:45:08 np0005625203.localdomain sshd[36525]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 07:45:09 np0005625203.localdomain sshd[36527]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:45:09 np0005625203.localdomain sshd[36523]: Invalid user gg from 185.246.128.171 port 36132
Feb 20 07:45:10 np0005625203.localdomain sshd[36527]: Received disconnect from 189.190.2.14 port 40024:11: Bye Bye [preauth]
Feb 20 07:45:10 np0005625203.localdomain sshd[36527]: Disconnected from authenticating user root 189.190.2.14 port 40024 [preauth]
Feb 20 07:45:11 np0005625203.localdomain sshd[36523]: Disconnecting invalid user gg 185.246.128.171 port 36132: Change of username or service not allowed: (gg,ssh-connection) -> (dan,ssh-connection) [preauth]
Feb 20 07:45:11 np0005625203.localdomain sudo[36542]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yylxixuhofsivywkgiojrxcwauevainm ; /usr/bin/python3
Feb 20 07:45:11 np0005625203.localdomain sudo[36542]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:45:12 np0005625203.localdomain python3[36544]: ansible-selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config
Feb 20 07:45:12 np0005625203.localdomain sudo[36542]: pam_unix(sudo:session): session closed for user root
Feb 20 07:45:12 np0005625203.localdomain sudo[36558]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pajxqfnrttsmucceostkwmehosgzjlbb ; /usr/bin/python3
Feb 20 07:45:12 np0005625203.localdomain sudo[36558]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:45:12 np0005625203.localdomain python3[36560]: ansible-tempfile Invoked with state=file suffix=tmphosts prefix=ansible. path=None
Feb 20 07:45:12 np0005625203.localdomain sudo[36558]: pam_unix(sudo:session): session closed for user root
Feb 20 07:45:13 np0005625203.localdomain sudo[36606]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qwtslcxsbahkayjjamyqntuyuuuuetpc ; /usr/bin/python3
Feb 20 07:45:13 np0005625203.localdomain sudo[36606]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:45:13 np0005625203.localdomain python3[36608]: ansible-ansible.legacy.copy Invoked with remote_src=True src=/etc/hosts dest=/tmp/ansible.lm6x29gktmphosts mode=preserve backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:45:13 np0005625203.localdomain sudo[36606]: pam_unix(sudo:session): session closed for user root
Feb 20 07:45:13 np0005625203.localdomain sudo[36636]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ahuhbqefzadsauzfpvyzitxgpkeaipxv ; /usr/bin/python3
Feb 20 07:45:13 np0005625203.localdomain sudo[36636]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:45:13 np0005625203.localdomain python3[36638]: ansible-blockinfile Invoked with state=absent path=/tmp/ansible.lm6x29gktmphosts block= marker=# {mark} marker_begin=HEAT_HOSTS_START - Do not edit manually within this section! marker_end=HEAT_HOSTS_END create=False backup=False unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:45:13 np0005625203.localdomain sudo[36636]: pam_unix(sudo:session): session closed for user root
Feb 20 07:45:14 np0005625203.localdomain sshd[36639]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:45:14 np0005625203.localdomain sudo[36640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 07:45:14 np0005625203.localdomain sudo[36640]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:45:14 np0005625203.localdomain sudo[36640]: pam_unix(sudo:session): session closed for user root
Feb 20 07:45:14 np0005625203.localdomain sudo[36655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 07:45:14 np0005625203.localdomain sudo[36655]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:45:14 np0005625203.localdomain sudo[36683]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gbfxxsgztgccvzpqhdnqmaosrpuqzwib ; /usr/bin/python3
Feb 20 07:45:14 np0005625203.localdomain sudo[36683]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:45:14 np0005625203.localdomain python3[36685]: ansible-blockinfile Invoked with create=True path=/tmp/ansible.lm6x29gktmphosts insertbefore=BOF block=172.17.0.106 np0005625202.localdomain np0005625202
                                                         172.18.0.106 np0005625202.storage.localdomain np0005625202.storage
                                                         172.20.0.106 np0005625202.storagemgmt.localdomain np0005625202.storagemgmt
                                                         172.17.0.106 np0005625202.internalapi.localdomain np0005625202.internalapi
                                                         172.19.0.106 np0005625202.tenant.localdomain np0005625202.tenant
                                                         192.168.122.106 np0005625202.ctlplane.localdomain np0005625202.ctlplane
                                                         172.17.0.107 np0005625203.localdomain np0005625203
                                                         172.18.0.107 np0005625203.storage.localdomain np0005625203.storage
                                                         172.20.0.107 np0005625203.storagemgmt.localdomain np0005625203.storagemgmt
                                                         172.17.0.107 np0005625203.internalapi.localdomain np0005625203.internalapi
                                                         172.19.0.107 np0005625203.tenant.localdomain np0005625203.tenant
                                                         192.168.122.107 np0005625203.ctlplane.localdomain np0005625203.ctlplane
                                                         172.17.0.108 np0005625204.localdomain np0005625204
                                                         172.18.0.108 np0005625204.storage.localdomain np0005625204.storage
                                                         172.20.0.108 np0005625204.storagemgmt.localdomain np0005625204.storagemgmt
                                                         172.17.0.108 np0005625204.internalapi.localdomain np0005625204.internalapi
                                                         172.19.0.108 np0005625204.tenant.localdomain np0005625204.tenant
                                                         192.168.122.108 np0005625204.ctlplane.localdomain np0005625204.ctlplane
                                                         172.17.0.103 np0005625199.localdomain np0005625199
                                                         172.18.0.103 np0005625199.storage.localdomain np0005625199.storage
                                                         172.20.0.103 np0005625199.storagemgmt.localdomain np0005625199.storagemgmt
                                                         172.17.0.103 np0005625199.internalapi.localdomain np0005625199.internalapi
                                                         172.19.0.103 np0005625199.tenant.localdomain np0005625199.tenant
                                                         192.168.122.103 np0005625199.ctlplane.localdomain np0005625199.ctlplane
                                                         172.17.0.104 np0005625200.localdomain np0005625200
                                                         172.18.0.104 np0005625200.storage.localdomain np0005625200.storage
                                                         172.20.0.104 np0005625200.storagemgmt.localdomain np0005625200.storagemgmt
                                                         172.17.0.104 np0005625200.internalapi.localdomain np0005625200.internalapi
                                                         172.19.0.104 np0005625200.tenant.localdomain np0005625200.tenant
                                                         192.168.122.104 np0005625200.ctlplane.localdomain np0005625200.ctlplane
                                                         172.17.0.105 np0005625201.localdomain np0005625201
                                                         172.18.0.105 np0005625201.storage.localdomain np0005625201.storage
                                                         172.20.0.105 np0005625201.storagemgmt.localdomain np0005625201.storagemgmt
                                                         172.17.0.105 np0005625201.internalapi.localdomain np0005625201.internalapi
                                                         172.19.0.105 np0005625201.tenant.localdomain np0005625201.tenant
                                                         192.168.122.105 np0005625201.ctlplane.localdomain np0005625201.ctlplane
                                                         
                                                         192.168.122.100 undercloud.ctlplane.localdomain undercloud.ctlplane
                                                         192.168.122.99  overcloud.ctlplane.localdomain
                                                         172.18.0.217  overcloud.storage.localdomain
                                                         172.20.0.250  overcloud.storagemgmt.localdomain
                                                         172.17.0.130  overcloud.internalapi.localdomain
                                                         172.21.0.142  overcloud.localdomain
                                                          marker=# {mark} marker_begin=START_HOST_ENTRIES_FOR_STACK: overcloud marker_end=END_HOST_ENTRIES_FOR_STACK: overcloud state=present backup=False unsafe_writes=False insertafter=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:45:15 np0005625203.localdomain sudo[36683]: pam_unix(sudo:session): session closed for user root
Feb 20 07:45:15 np0005625203.localdomain sudo[36655]: pam_unix(sudo:session): session closed for user root
Feb 20 07:45:15 np0005625203.localdomain sudo[36730]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ovwvmkzgkqcrxsqydjwbocdkebqnhegj ; /usr/bin/python3
Feb 20 07:45:15 np0005625203.localdomain sudo[36730]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:45:15 np0005625203.localdomain python3[36732]: ansible-ansible.legacy.command Invoked with _raw_params=cp "/tmp/ansible.lm6x29gktmphosts" "/etc/hosts" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:45:15 np0005625203.localdomain sudo[36730]: pam_unix(sudo:session): session closed for user root
Feb 20 07:45:15 np0005625203.localdomain sudo[36748]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-itungubjnvzfhdaykqgozbzdgbrecefq ; /usr/bin/python3
Feb 20 07:45:15 np0005625203.localdomain sudo[36748]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:45:15 np0005625203.localdomain sudo[36751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 07:45:15 np0005625203.localdomain sudo[36751]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:45:15 np0005625203.localdomain sudo[36751]: pam_unix(sudo:session): session closed for user root
Feb 20 07:45:15 np0005625203.localdomain python3[36750]: ansible-file Invoked with path=/tmp/ansible.lm6x29gktmphosts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:45:15 np0005625203.localdomain sudo[36748]: pam_unix(sudo:session): session closed for user root
Feb 20 07:45:16 np0005625203.localdomain sudo[36779]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xhxyuadtbtaftzstbydhcijsvaxmbcuk ; /usr/bin/python3
Feb 20 07:45:16 np0005625203.localdomain sudo[36779]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:45:16 np0005625203.localdomain python3[36781]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides rhosp-release _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:45:16 np0005625203.localdomain sudo[36779]: pam_unix(sudo:session): session closed for user root
Feb 20 07:45:17 np0005625203.localdomain sudo[36796]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eskczobfaeukstqrjookkymyovsvhbru ; /usr/bin/python3
Feb 20 07:45:17 np0005625203.localdomain sudo[36796]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:45:17 np0005625203.localdomain python3[36798]: ansible-ansible.legacy.dnf Invoked with name=['rhosp-release'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb 20 07:45:17 np0005625203.localdomain sshd[36639]: Invalid user dan from 185.246.128.171 port 7814
Feb 20 07:45:19 np0005625203.localdomain sshd[36639]: Disconnecting invalid user dan 185.246.128.171 port 7814: Change of username or service not allowed: (dan,ssh-connection) -> (www-data,ssh-connection) [preauth]
Feb 20 07:45:20 np0005625203.localdomain sudo[36796]: pam_unix(sudo:session): session closed for user root
Feb 20 07:45:21 np0005625203.localdomain sshd[36803]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:45:21 np0005625203.localdomain sudo[36817]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vlycrmrhlchpkdclivvihvfjumosscpl ; /usr/bin/python3
Feb 20 07:45:21 np0005625203.localdomain sudo[36817]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:45:22 np0005625203.localdomain python3[36819]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides driverctl lvm2 jq nftables openvswitch openstack-heat-agents openstack-selinux os-net-config python3-libselinux python3-pyyaml puppet-tripleo rsync tmpwatch sysstat iproute-tc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:45:22 np0005625203.localdomain sudo[36817]: pam_unix(sudo:session): session closed for user root
Feb 20 07:45:22 np0005625203.localdomain sudo[36834]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sizmwfrbcbtmfharxwefpfdxzgzfxalt ; /usr/bin/python3
Feb 20 07:45:22 np0005625203.localdomain sudo[36834]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:45:22 np0005625203.localdomain python3[36836]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'jq', 'nftables', 'openvswitch', 'openstack-heat-agents', 'openstack-selinux', 'os-net-config', 'python3-libselinux', 'python3-pyyaml', 'puppet-tripleo', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb 20 07:45:25 np0005625203.localdomain sshd[36803]: Invalid user www-data from 185.246.128.171 port 45210
Feb 20 07:45:27 np0005625203.localdomain sshd[36803]: Disconnecting invalid user www-data 185.246.128.171 port 45210: Change of username or service not allowed: (www-data,ssh-connection) -> (inspur,ssh-connection) [preauth]
Feb 20 07:45:27 np0005625203.localdomain sshd[36842]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:45:30 np0005625203.localdomain sshd[36842]: Invalid user inspur from 185.246.128.171 port 14013
Feb 20 07:45:31 np0005625203.localdomain sshd[36842]: Disconnecting invalid user inspur 185.246.128.171 port 14013: Change of username or service not allowed: (inspur,ssh-connection) -> (saeed,ssh-connection) [preauth]
Feb 20 07:45:35 np0005625203.localdomain sshd[37006]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:45:35 np0005625203.localdomain groupadd[37009]: group added to /etc/group: name=puppet, GID=52
Feb 20 07:45:35 np0005625203.localdomain groupadd[37009]: group added to /etc/gshadow: name=puppet
Feb 20 07:45:35 np0005625203.localdomain groupadd[37009]: new group: name=puppet, GID=52
Feb 20 07:45:35 np0005625203.localdomain useradd[37016]: new user: name=puppet, UID=52, GID=52, home=/var/lib/puppet, shell=/sbin/nologin, from=none
Feb 20 07:45:39 np0005625203.localdomain sshd[37006]: Invalid user saeed from 185.246.128.171 port 57334
Feb 20 07:45:40 np0005625203.localdomain sshd[37006]: Disconnecting invalid user saeed 185.246.128.171 port 57334: Change of username or service not allowed: (saeed,ssh-connection) -> (josie,ssh-connection) [preauth]
Feb 20 07:45:44 np0005625203.localdomain sshd[37302]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:45:48 np0005625203.localdomain sshd[37302]: Invalid user josie from 185.246.128.171 port 39202
Feb 20 07:45:50 np0005625203.localdomain sshd[37302]: Disconnecting invalid user josie 185.246.128.171 port 39202: Change of username or service not allowed: (josie,ssh-connection) -> (dspace,ssh-connection) [preauth]
Feb 20 07:45:51 np0005625203.localdomain sshd[37343]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:45:52 np0005625203.localdomain sshd[37343]: Invalid user ubuntu from 123.204.132.127 port 42990
Feb 20 07:45:52 np0005625203.localdomain sshd[37343]: Received disconnect from 123.204.132.127 port 42990:11: Bye Bye [preauth]
Feb 20 07:45:52 np0005625203.localdomain sshd[37343]: Disconnected from invalid user ubuntu 123.204.132.127 port 42990 [preauth]
Feb 20 07:45:52 np0005625203.localdomain sshd[37351]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:45:54 np0005625203.localdomain sshd[37351]: Invalid user dspace from 185.246.128.171 port 23055
Feb 20 07:45:55 np0005625203.localdomain sshd[37351]: Disconnecting invalid user dspace 185.246.128.171 port 23055: Change of username or service not allowed: (dspace,ssh-connection) -> (wade,ssh-connection) [preauth]
Feb 20 07:45:55 np0005625203.localdomain sshd[37372]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:45:55 np0005625203.localdomain sshd[37372]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 07:45:57 np0005625203.localdomain sshd[37389]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:46:01 np0005625203.localdomain sshd[37389]: Invalid user wade from 185.246.128.171 port 50617
Feb 20 07:46:02 np0005625203.localdomain sshd[37389]: Disconnecting invalid user wade 185.246.128.171 port 50617: Change of username or service not allowed: (wade,ssh-connection) -> (vgilli,ssh-connection) [preauth]
Feb 20 07:46:03 np0005625203.localdomain sshd[37432]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:46:05 np0005625203.localdomain sshd[37432]: Invalid user vgilli from 185.246.128.171 port 20009
Feb 20 07:46:06 np0005625203.localdomain sshd[37432]: Disconnecting invalid user vgilli 185.246.128.171 port 20009: Change of username or service not allowed: (vgilli,ssh-connection) -> (user4,ssh-connection) [preauth]
Feb 20 07:46:07 np0005625203.localdomain sshd[37461]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:46:10 np0005625203.localdomain sshd[37461]: Invalid user user4 from 185.246.128.171 port 44203
Feb 20 07:46:11 np0005625203.localdomain sshd[37461]: Disconnecting invalid user user4 185.246.128.171 port 44203: Change of username or service not allowed: (user4,ssh-connection) -> (airflow,ssh-connection) [preauth]
Feb 20 07:46:12 np0005625203.localdomain sshd[37477]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:46:12 np0005625203.localdomain sshd[37477]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 07:46:13 np0005625203.localdomain sshd[37479]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:46:14 np0005625203.localdomain sshd[37481]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:46:15 np0005625203.localdomain sshd[37481]: Invalid user ubuntu from 40.81.244.142 port 45914
Feb 20 07:46:15 np0005625203.localdomain sshd[37481]: Received disconnect from 40.81.244.142 port 45914:11: Bye Bye [preauth]
Feb 20 07:46:15 np0005625203.localdomain sshd[37481]: Disconnected from invalid user ubuntu 40.81.244.142 port 45914 [preauth]
Feb 20 07:46:16 np0005625203.localdomain sudo[37483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 07:46:16 np0005625203.localdomain sudo[37483]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:46:16 np0005625203.localdomain sudo[37483]: pam_unix(sudo:session): session closed for user root
Feb 20 07:46:16 np0005625203.localdomain sudo[37498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 07:46:16 np0005625203.localdomain sudo[37498]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:46:16 np0005625203.localdomain sshd[37479]: Invalid user airflow from 185.246.128.171 port 11018
Feb 20 07:46:16 np0005625203.localdomain sshd[37479]: Disconnecting invalid user airflow 185.246.128.171 port 11018: Change of username or service not allowed: (airflow,ssh-connection) -> (proxyuser,ssh-connection) [preauth]
Feb 20 07:46:16 np0005625203.localdomain sudo[37498]: pam_unix(sudo:session): session closed for user root
Feb 20 07:46:17 np0005625203.localdomain sudo[37545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 07:46:17 np0005625203.localdomain sudo[37545]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:46:17 np0005625203.localdomain sudo[37545]: pam_unix(sudo:session): session closed for user root
Feb 20 07:46:18 np0005625203.localdomain sshd[37561]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:46:19 np0005625203.localdomain sshd[37561]: Invalid user deployuser from 103.171.84.20 port 36390
Feb 20 07:46:19 np0005625203.localdomain sshd[37561]: Received disconnect from 103.171.84.20 port 36390:11: Bye Bye [preauth]
Feb 20 07:46:19 np0005625203.localdomain sshd[37561]: Disconnected from invalid user deployuser 103.171.84.20 port 36390 [preauth]
Feb 20 07:46:19 np0005625203.localdomain sshd[37566]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:46:22 np0005625203.localdomain sshd[37566]: Invalid user proxyuser from 185.246.128.171 port 45987
Feb 20 07:46:23 np0005625203.localdomain sshd[37566]: Disconnecting invalid user proxyuser 185.246.128.171 port 45987: Change of username or service not allowed: (proxyuser,ssh-connection) -> (github,ssh-connection) [preauth]
Feb 20 07:46:25 np0005625203.localdomain sshd[37595]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:46:28 np0005625203.localdomain sshd[37595]: Invalid user github from 185.246.128.171 port 10616
Feb 20 07:46:28 np0005625203.localdomain kernel: SELinux:  Converting 2700 SID table entries...
Feb 20 07:46:28 np0005625203.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Feb 20 07:46:28 np0005625203.localdomain kernel: SELinux:  policy capability open_perms=1
Feb 20 07:46:28 np0005625203.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Feb 20 07:46:28 np0005625203.localdomain kernel: SELinux:  policy capability always_check_network=0
Feb 20 07:46:28 np0005625203.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 20 07:46:28 np0005625203.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 20 07:46:28 np0005625203.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 20 07:46:29 np0005625203.localdomain dbus-broker-launch[754]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Feb 20 07:46:29 np0005625203.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 20 07:46:29 np0005625203.localdomain systemd[1]: Starting man-db-cache-update.service...
Feb 20 07:46:29 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 07:46:29 np0005625203.localdomain systemd-rc-local-generator[37718]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 07:46:29 np0005625203.localdomain systemd-sysv-generator[37723]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 07:46:29 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 07:46:29 np0005625203.localdomain systemd[1]: Queuing reload/restart jobs for marked units…
Feb 20 07:46:29 np0005625203.localdomain sshd[37595]: Disconnecting invalid user github 185.246.128.171 port 10616: Change of username or service not allowed: (github,ssh-connection) -> (omar,ssh-connection) [preauth]
Feb 20 07:46:29 np0005625203.localdomain systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 20 07:46:29 np0005625203.localdomain systemd[1]: Finished man-db-cache-update.service.
Feb 20 07:46:29 np0005625203.localdomain systemd[1]: run-rf8a0b6ae9d0f4ee4bf952fd31a48aebd.service: Deactivated successfully.
Feb 20 07:46:30 np0005625203.localdomain sudo[36834]: pam_unix(sudo:session): session closed for user root
Feb 20 07:46:33 np0005625203.localdomain sshd[38152]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:46:33 np0005625203.localdomain sudo[38166]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kkmoliudxtvfaatphrtlhisczyeivspt ; /usr/bin/python3
Feb 20 07:46:33 np0005625203.localdomain sudo[38166]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:46:33 np0005625203.localdomain python3[38168]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 jq nftables openvswitch openstack-heat-agents openstack-selinux os-net-config python3-libselinux python3-pyyaml puppet-tripleo rsync tmpwatch sysstat iproute-tc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:46:34 np0005625203.localdomain sudo[38166]: pam_unix(sudo:session): session closed for user root
Feb 20 07:46:35 np0005625203.localdomain sudo[38305]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wbranfgslksdrkxzmtadmhbcabqbsaxu ; /usr/bin/python3
Feb 20 07:46:35 np0005625203.localdomain sudo[38305]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:46:35 np0005625203.localdomain python3[38307]: ansible-ansible.legacy.systemd Invoked with name=openvswitch enabled=True state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 07:46:35 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 07:46:35 np0005625203.localdomain systemd-sysv-generator[38336]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 07:46:35 np0005625203.localdomain systemd-rc-local-generator[38332]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 07:46:35 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 07:46:35 np0005625203.localdomain sudo[38305]: pam_unix(sudo:session): session closed for user root
Feb 20 07:46:36 np0005625203.localdomain sudo[38360]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oiufmlddjaflvcsgyevufgvcixgxxqka ; /usr/bin/python3
Feb 20 07:46:36 np0005625203.localdomain sudo[38360]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:46:37 np0005625203.localdomain python3[38362]: ansible-file Invoked with path=/var/lib/heat-config/tripleo-config-download state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:46:37 np0005625203.localdomain sudo[38360]: pam_unix(sudo:session): session closed for user root
Feb 20 07:46:37 np0005625203.localdomain sudo[38376]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xrcnjyffpruulbjgazinmcjcwecrvwpt ; /usr/bin/python3
Feb 20 07:46:37 np0005625203.localdomain sudo[38376]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:46:37 np0005625203.localdomain python3[38378]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides openstack-network-scripts _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:46:37 np0005625203.localdomain sudo[38376]: pam_unix(sudo:session): session closed for user root
Feb 20 07:46:37 np0005625203.localdomain sudo[38393]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ncwednskowkrbjaqripxoluhlsphxlge ; /usr/bin/python3
Feb 20 07:46:37 np0005625203.localdomain sudo[38393]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:46:38 np0005625203.localdomain python3[38395]: ansible-systemd Invoked with name=NetworkManager enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Feb 20 07:46:38 np0005625203.localdomain sudo[38393]: pam_unix(sudo:session): session closed for user root
Feb 20 07:46:38 np0005625203.localdomain sudo[38411]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-acqoemkjxqagvowxryvgdjycdyfmigxz ; /usr/bin/python3
Feb 20 07:46:38 np0005625203.localdomain sudo[38411]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:46:38 np0005625203.localdomain python3[38413]: ansible-ini_file Invoked with path=/etc/NetworkManager/NetworkManager.conf state=present no_extra_spaces=True section=main option=dns value=none backup=True exclusive=True allow_no_value=False create=True unsafe_writes=False values=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:46:38 np0005625203.localdomain sudo[38411]: pam_unix(sudo:session): session closed for user root
Feb 20 07:46:38 np0005625203.localdomain sudo[38429]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zcpbhhfixwkaeuedabmhzphahvyxfujz ; /usr/bin/python3
Feb 20 07:46:38 np0005625203.localdomain sudo[38429]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:46:39 np0005625203.localdomain python3[38431]: ansible-ini_file Invoked with path=/etc/NetworkManager/NetworkManager.conf state=present no_extra_spaces=True section=main option=rc-manager value=unmanaged backup=True exclusive=True allow_no_value=False create=True unsafe_writes=False values=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:46:39 np0005625203.localdomain sudo[38429]: pam_unix(sudo:session): session closed for user root
Feb 20 07:46:39 np0005625203.localdomain sshd[38152]: Invalid user omar from 185.246.128.171 port 58226
Feb 20 07:46:39 np0005625203.localdomain sudo[38447]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xbcaneiydvtecglieynnxyvhagoyopag ; /usr/bin/python3
Feb 20 07:46:39 np0005625203.localdomain sudo[38447]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:46:39 np0005625203.localdomain python3[38449]: ansible-ansible.legacy.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 20 07:46:39 np0005625203.localdomain systemd[1]: Reloading Network Manager...
Feb 20 07:46:39 np0005625203.localdomain NetworkManager[5968]: <info>  [1771573599.7336] audit: op="reload" arg="0" pid=38452 uid=0 result="success"
Feb 20 07:46:39 np0005625203.localdomain NetworkManager[5968]: <info>  [1771573599.7342] config: signal: SIGHUP,config-files,values,values-user,no-auto-default,dns-mode,rc-manager (/etc/NetworkManager/NetworkManager.conf (lib: 00-server.conf) (run: 15-carrier-timeout.conf))
Feb 20 07:46:39 np0005625203.localdomain NetworkManager[5968]: <info>  [1771573599.7342] dns-mgr: init: dns=none,systemd-resolved rc-manager=unmanaged
Feb 20 07:46:39 np0005625203.localdomain systemd[1]: Reloaded Network Manager.
Feb 20 07:46:39 np0005625203.localdomain sudo[38447]: pam_unix(sudo:session): session closed for user root
Feb 20 07:46:40 np0005625203.localdomain sudo[38466]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qlkmfjjassozmqnyxbyywbzenrboyrjd ; /usr/bin/python3
Feb 20 07:46:40 np0005625203.localdomain sudo[38466]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:46:40 np0005625203.localdomain python3[38468]: ansible-ansible.legacy.command Invoked with _raw_params=ln -f -s /usr/share/openstack-puppet/modules/* /etc/puppet/modules/ _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:46:40 np0005625203.localdomain sudo[38466]: pam_unix(sudo:session): session closed for user root
Feb 20 07:46:40 np0005625203.localdomain sudo[38483]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wtnugoxgagtljbqgvjsorvnnctcahmyp ; /usr/bin/python3
Feb 20 07:46:40 np0005625203.localdomain sudo[38483]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:46:40 np0005625203.localdomain python3[38485]: ansible-stat Invoked with path=/usr/bin/ansible-playbook follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 20 07:46:40 np0005625203.localdomain sudo[38483]: pam_unix(sudo:session): session closed for user root
Feb 20 07:46:40 np0005625203.localdomain sshd[38152]: Disconnecting invalid user omar 185.246.128.171 port 58226: Change of username or service not allowed: (omar,ssh-connection) -> (satisfactory,ssh-connection) [preauth]
Feb 20 07:46:40 np0005625203.localdomain sudo[38501]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lnuywzzhkgwxlqkewirzvgixfwhaptpk ; /usr/bin/python3
Feb 20 07:46:40 np0005625203.localdomain sudo[38501]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:46:41 np0005625203.localdomain python3[38503]: ansible-stat Invoked with path=/usr/bin/ansible-playbook-3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 20 07:46:41 np0005625203.localdomain sudo[38501]: pam_unix(sudo:session): session closed for user root
Feb 20 07:46:41 np0005625203.localdomain sudo[38517]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pdljpyjmqcvpjymfbosrmytxszsrwvyk ; /usr/bin/python3
Feb 20 07:46:41 np0005625203.localdomain sudo[38517]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:46:41 np0005625203.localdomain python3[38519]: ansible-file Invoked with state=link src=/usr/bin/ansible-playbook path=/usr/bin/ansible-playbook-3 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:46:41 np0005625203.localdomain sudo[38517]: pam_unix(sudo:session): session closed for user root
Feb 20 07:46:42 np0005625203.localdomain sudo[38533]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nuprdoojvbxvjqqzcvilbllbjlnzxfsv ; /usr/bin/python3
Feb 20 07:46:42 np0005625203.localdomain sudo[38533]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:46:42 np0005625203.localdomain python3[38535]: ansible-tempfile Invoked with state=file prefix=ansible. suffix= path=None
Feb 20 07:46:42 np0005625203.localdomain sudo[38533]: pam_unix(sudo:session): session closed for user root
Feb 20 07:46:42 np0005625203.localdomain sudo[38549]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-afsdfosjjiogetnvfpaoyhbytueaalnx ; /usr/bin/python3
Feb 20 07:46:42 np0005625203.localdomain sudo[38549]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:46:42 np0005625203.localdomain python3[38551]: ansible-stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 20 07:46:42 np0005625203.localdomain sudo[38549]: pam_unix(sudo:session): session closed for user root
Feb 20 07:46:43 np0005625203.localdomain sshd[38552]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:46:43 np0005625203.localdomain sudo[38566]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jqappmvlwbpqdphkwzqpkydorbdwrlhp ; /usr/bin/python3
Feb 20 07:46:43 np0005625203.localdomain sudo[38566]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:46:43 np0005625203.localdomain python3[38568]: ansible-blockinfile Invoked with path=/tmp/ansible._rkys7wa block=[192.168.122.106]*,[np0005625202.ctlplane.localdomain]*,[172.17.0.106]*,[np0005625202.internalapi.localdomain]*,[172.18.0.106]*,[np0005625202.storage.localdomain]*,[172.20.0.106]*,[np0005625202.storagemgmt.localdomain]*,[172.19.0.106]*,[np0005625202.tenant.localdomain]*,[np0005625202.localdomain]*,[np0005625202]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDDr8sejencX7nSCX6AegGtTuiZL3yclu/L7ZVN4B6dKPdmHqVr33QJD40sEk28GHpx8BrkPU2Qj1de9H6mGtrlwhmJr7Pccg/YqzKoTCQD5rZQ4youU8H70As6YX5ZlXyulwI1SH70XjMm37x4ptKALFOjRnHg0WIXah/tAmzrY/orh+/eCcns7APVjN9B1o+MqP4r47WrWrGU/KxtsHc6dflWxZW7BWUCCNS0e3C4yWLRjy8Hhj7Qkpssv/UBcj+olVHadUUOYiaQZ5Y33MjxwIg8o1MuC7C1dNIn8eXOXXiA8jd/lJd9kImrCGUtkVqj8VQgsMh4vRYMD+0SNLYRDVwxdemOzJYgwQhgiWZ0G+cVhnTBpMmXyIws2OpOKU8R3HjTC3jz+BxvjwEvMDoQfpGgsHB9NCXnkQzs2F8EA8LpA823Ef1SMgPdDCaQzvN5oQPZkWAPMVHvq31xpN9q+KXg/bg0uDaIZXUxW2rGnem7pFS78rRUGL6MfSMn1zs=
                                                         [192.168.122.107]*,[np0005625203.ctlplane.localdomain]*,[172.17.0.107]*,[np0005625203.internalapi.localdomain]*,[172.18.0.107]*,[np0005625203.storage.localdomain]*,[172.20.0.107]*,[np0005625203.storagemgmt.localdomain]*,[172.19.0.107]*,[np0005625203.tenant.localdomain]*,[np0005625203.localdomain]*,[np0005625203]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtf1NXQ3EGQGdpLLLxuODKBdTGwqsiHL2QZ6zcfpGAa7EhDIxuEcLboqOGjQO0FM3u+kl2gIgKF0UsY5Vjcv4mDCMp7A7srq7TVo5lE5cCppbbXr0/PH2L/naHU3W+W83aT5RE17XPJ0Acn3W51WFBoICCCc4jjWTGmkNEgurKBJmdr0n8NeIcUWZ7Abrs/N2xzNftEFIjAPwebxgEwgCx0hMbdjTFhKbB/V7CjKaCU/UjirWMW5aDQJQEfrCM9u4NHuGaWKzJgar4/shNHaRvkCDbVrRPTCyfNebE04J/R42X3yWmvww4TMZVpRROd/u6Pgg1P2tbPGfQ0XvS0rfY6W4/VnHcyRDqxILH5BoeCAbTuVFmR0hbQu9fNbNxTP+o+na9mHEbNxbhcREnkal8+M0l11YftCRkr4132JITxe7y93gN/dwxE3nJLHLXRuRskWc3GTDT2MVU2Sj64yizD9KOM3oiMBXdPbNbgZywu3hqQvpO00GVg6QRjEJoiFc=
                                                         [192.168.122.108]*,[np0005625204.ctlplane.localdomain]*,[172.17.0.108]*,[np0005625204.internalapi.localdomain]*,[172.18.0.108]*,[np0005625204.storage.localdomain]*,[172.20.0.108]*,[np0005625204.storagemgmt.localdomain]*,[172.19.0.108]*,[np0005625204.tenant.localdomain]*,[np0005625204.localdomain]*,[np0005625204]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDAo6exxFtNk/Y5qEGYenJyhnCsS7iZmCGsFaQtJElNSeTTX9a1P0P2EmjtHolRxnljCZ2X8HgWx/irhJvWLoS+dzF5l+KcyQy83+048h51mbnj7zV2uG9i8LkO0egs1uBBp5E+hauHMsuf0nIDFl45W86ZXuf+MfFEKCInhjB5gfE9tTjwmKwKhgO1DE7Vpx3OYy1FHkq0YDBCqQHuuhYPrLZPjfVv3vGOaHH/XCsxX3h8/ixsZbobD56dDBKF/8CFyC/guH8pNUhZHG0dEhz5BT8PcE2Q/M9pPttzmRQksfg9+q7lVy9eCoOVpzqfTgjE1cm5yISwuMZzaNxwjJKB54EWpfl5xxnkC14B+xdvowxpl1PcMNZ0q1fWofJF4TrJAwWCUYZf45aUV2yb5R8WavUT0pX32xmd4zFbXusoafiw2FcgnxoGz3N4ZgIxTPPmgUe13blr1SK44huXWPioaolFBo82xVVFHc+01vfLF3xvs86d6EpqpLH+yaCeUjE=
                                                         [192.168.122.103]*,[np0005625199.ctlplane.localdomain]*,[172.17.0.103]*,[np0005625199.internalapi.localdomain]*,[172.18.0.103]*,[np0005625199.storage.localdomain]*,[172.20.0.103]*,[np0005625199.storagemgmt.localdomain]*,[172.19.0.103]*,[np0005625199.tenant.localdomain]*,[np0005625199.localdomain]*,[np0005625199]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDrnsozeOPJKYg9sx2Tj6QOLRhujK5RVh5RZQ3sb0pk+DbWHQKqS1YvJUg2hV4WxbxPnNUCBtJ+RZ8lVm6RLM+hc3ffe2sOMOz5upO/hTlIpBSfJpQORkiNW+XIXdDVxgE418veFd2hASFmiCmKoFSKXsvnmFU9oTEpja1plcXSqCobFMVYKlhcRo66O0ySlGOR+o3Ar2yNJQjFErEGvZLoDEa/VlA6zreYmTaIsnlUDie0gbm5teTlsCcEYkvWcTzcfOEX2kXQRQbS5qlPtGg7c+KMv5e40rE+2QOigLmOOPVGwNYuLuhb/EHT0C8hK8otW4tiXxBlSZ5ONKY6YYQOpy7krNkWRxNXzK0LfXo2bt2apDaMzebPOvuBj1YyBiLpa6/aLvS/dtGolQNPDpFivPbP/mSpat1qTs0W3/2HyBovwWSGJDW8MMYxbZJ0Z6tnuOwdrPTdkhIibfW9wxgL7EHrDYrGx5CvA2vUM4KDKRntz/cCMGE/zKacSJ48nNk=
                                                         [192.168.122.104]*,[np0005625200.ctlplane.localdomain]*,[172.17.0.104]*,[np0005625200.internalapi.localdomain]*,[172.18.0.104]*,[np0005625200.storage.localdomain]*,[172.20.0.104]*,[np0005625200.storagemgmt.localdomain]*,[172.19.0.104]*,[np0005625200.tenant.localdomain]*,[np0005625200.localdomain]*,[np0005625200]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDW88346W6zU6nxCpqapHtIr5nRG8Jn9LFit3r5klBfauCkmAGONb4X8IwKjo8MD9etebUVbo6aX9gBMBMSs7bSoHzsEQuMLpBDrweSbahQj+gqZ5TmQ/xvwbhws04z3/IJxapAk2xWu7khVGjvOPUE1CROkP+1LiGktQ6Xj1ar1TbLNud2Dq/R5ZalbpK0OT3+no3x0oAJT3W649tW4nmCWcNaxykPsLREsUlH2qVoceAzLEDCSde9/1TONc/URyB4acVqmEwJDHeX51bh31tpQwp/WSe0vKQ6eUw63Tmpn+dRI9xbnFhc6mgGAPcEw7cAUkM7oM6bYMSvVxYDmzMhuXUU/9i3mdMnDBkMyZ5Oed6ZSmFQIJe5k7cz3783d35ZXfl/HsYMqoZ3lmDgbeS59pQrI+BldKyv3sTnoCDahfcmzmiHssxqa7tT5KOuR444q7Nj6wJEIZMEEJEHtMlh1iSBRJZOEOaKjo7h+jV7KMe75aPRasvu9K1v0dqyG6U=
                                                         [192.168.122.105]*,[np0005625201.ctlplane.localdomain]*,[172.17.0.105]*,[np0005625201.internalapi.localdomain]*,[172.18.0.105]*,[np0005625201.storage.localdomain]*,[172.20.0.105]*,[np0005625201.storagemgmt.localdomain]*,[172.19.0.105]*,[np0005625201.tenant.localdomain]*,[np0005625201.localdomain]*,[np0005625201]* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCyGkX26ECIsvqnvJegedSF6KicDAAqjaifawEd//OuK9zdHIWqO3XmlEszZqWPsdQhPFkelfzXR+sy3gbPNv+yjT7phsw1sq7zHXeogQFlP5iOQZrf6hCnfXxVk2ckIXMT0UJVZ8FCTwsQi+HKkR/IEj08pR7EjrXGWxHkjv5wNj76spF3FJxtwycS4+KzY3UFy7gYWVn2jB0ha966YgjHMPhzQnT33W9myxGH33M1L5ZCGlfH19hLnqTUNMfzIfw3afxHkL5BFZbhthUPmIfLdLtKmZEkpSTBO/CrNA6CmMfY6xnT78hmwXytEQ+jeiRdKXdr9xQ2j6wVmPzckFKBsBYRe4DprKGt93fnKS9Z6A3Sv626DyZgDa8/NXbtAaBxtyix5Vdt872hYvCzYyB/OuSV6PR5DOq8z3fquOwgtka3rA6qL5gxhFJcO5TqtBM76DzOLd9OLM9bIO1yK9sCmbYynMojkXylzhDfcI8kytS5xs9FJEfwTElZRHkEIQE=
                                                          create=True state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:46:43 np0005625203.localdomain sudo[38566]: pam_unix(sudo:session): session closed for user root
Feb 20 07:46:43 np0005625203.localdomain sshd[38571]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:46:43 np0005625203.localdomain sudo[38585]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jfuojvtwnfnilqoagggpwzgjdoqxfuws ; /usr/bin/python3
Feb 20 07:46:43 np0005625203.localdomain sudo[38585]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:46:43 np0005625203.localdomain sshd[38571]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 07:46:43 np0005625203.localdomain python3[38587]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible._rkys7wa' > /etc/ssh/ssh_known_hosts _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:46:43 np0005625203.localdomain sudo[38585]: pam_unix(sudo:session): session closed for user root
Feb 20 07:46:44 np0005625203.localdomain sudo[38603]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yxsspgabxnsxkdwvytuweslfkpihbylb ; /usr/bin/python3
Feb 20 07:46:44 np0005625203.localdomain sudo[38603]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:46:44 np0005625203.localdomain python3[38605]: ansible-file Invoked with path=/tmp/ansible._rkys7wa state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:46:44 np0005625203.localdomain sudo[38603]: pam_unix(sudo:session): session closed for user root
Feb 20 07:46:44 np0005625203.localdomain sshd[38552]: Invalid user satisfactory from 185.246.128.171 port 44753
Feb 20 07:46:44 np0005625203.localdomain sudo[38619]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zcszkdavjttzcbwqunltknlxbfzfrjuc ; /usr/bin/python3
Feb 20 07:46:44 np0005625203.localdomain sudo[38619]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:46:45 np0005625203.localdomain python3[38621]: ansible-file Invoked with path=/var/log/journal state=directory mode=0750 owner=root group=root setype=var_log_t recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 07:46:45 np0005625203.localdomain sudo[38619]: pam_unix(sudo:session): session closed for user root
Feb 20 07:46:45 np0005625203.localdomain sudo[38635]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ftqcnzanchjgdxlvxlmlqbuevzogyknq ; /usr/bin/python3
Feb 20 07:46:45 np0005625203.localdomain sudo[38635]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:46:45 np0005625203.localdomain python3[38637]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active cloud-init.service || systemctl is-enabled cloud-init.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:46:45 np0005625203.localdomain sudo[38635]: pam_unix(sudo:session): session closed for user root
Feb 20 07:46:45 np0005625203.localdomain sudo[38653]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-elvacqyceabrvklivdyilhuevutbinfr ; /usr/bin/python3
Feb 20 07:46:45 np0005625203.localdomain sudo[38653]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:46:45 np0005625203.localdomain python3[38655]: ansible-ansible.legacy.command Invoked with _raw_params=cat /proc/cmdline | grep -q cloud-init=disabled _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:46:45 np0005625203.localdomain sudo[38653]: pam_unix(sudo:session): session closed for user root
Feb 20 07:46:46 np0005625203.localdomain sudo[38672]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-swguuhbtavpypigifyfpjmpxqynxzjna ; /usr/bin/python3
Feb 20 07:46:46 np0005625203.localdomain sudo[38672]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:46:46 np0005625203.localdomain python3[38674]: ansible-community.general.cloud_init_data_facts Invoked with filter=status
Feb 20 07:46:46 np0005625203.localdomain sudo[38672]: pam_unix(sudo:session): session closed for user root
Feb 20 07:46:46 np0005625203.localdomain sudo[38688]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zqyyyvpvilxytccmopizgyzlrswafktq ; /usr/bin/python3
Feb 20 07:46:46 np0005625203.localdomain sudo[38688]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:46:46 np0005625203.localdomain sudo[38688]: pam_unix(sudo:session): session closed for user root
Feb 20 07:46:46 np0005625203.localdomain sshd[38552]: Disconnecting invalid user satisfactory 185.246.128.171 port 44753: Change of username or service not allowed: (satisfactory,ssh-connection) -> (minima,ssh-connection) [preauth]
Feb 20 07:46:46 np0005625203.localdomain sudo[38736]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mlwhjxmzzwmuuvzzzqybryjhjzhbfnhe ; /usr/bin/python3
Feb 20 07:46:46 np0005625203.localdomain sudo[38736]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:46:46 np0005625203.localdomain sshd[38739]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:46:47 np0005625203.localdomain sudo[38736]: pam_unix(sudo:session): session closed for user root
Feb 20 07:46:47 np0005625203.localdomain sudo[38781]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vylvafxvrnjxjatjpunkmvzqvlsoyepi ; /usr/bin/python3
Feb 20 07:46:47 np0005625203.localdomain sudo[38781]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:46:47 np0005625203.localdomain sudo[38781]: pam_unix(sudo:session): session closed for user root
Feb 20 07:46:47 np0005625203.localdomain sshd[38739]: Received disconnect from 187.87.206.21 port 39722:11: Bye Bye [preauth]
Feb 20 07:46:47 np0005625203.localdomain sshd[38739]: Disconnected from authenticating user root 187.87.206.21 port 39722 [preauth]
Feb 20 07:46:48 np0005625203.localdomain sshd[38798]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:46:48 np0005625203.localdomain sudo[38813]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jwlcwwmafnazsroqhrnpokfixdqaymsi ; /usr/bin/python3
Feb 20 07:46:48 np0005625203.localdomain sudo[38813]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:46:48 np0005625203.localdomain python3[38815]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides tuned tuned-profiles-cpu-partitioning _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:46:48 np0005625203.localdomain sudo[38813]: pam_unix(sudo:session): session closed for user root
Feb 20 07:46:48 np0005625203.localdomain sudo[38830]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-znkofnqsamaceiabckhlxgnzbtfwslly ; /usr/bin/python3
Feb 20 07:46:48 np0005625203.localdomain sudo[38830]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:46:49 np0005625203.localdomain python3[38832]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb 20 07:46:51 np0005625203.localdomain sshd[38798]: Invalid user minima from 185.246.128.171 port 8735
Feb 20 07:46:52 np0005625203.localdomain dbus-broker-launch[750]: Noticed file-system modification, trigger reload.
Feb 20 07:46:52 np0005625203.localdomain dbus-broker-launch[750]: Noticed file-system modification, trigger reload.
Feb 20 07:46:52 np0005625203.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 20 07:46:52 np0005625203.localdomain systemd[1]: Starting man-db-cache-update.service...
Feb 20 07:46:52 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 07:46:52 np0005625203.localdomain systemd-sysv-generator[38903]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 07:46:52 np0005625203.localdomain systemd-rc-local-generator[38899]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 07:46:52 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 07:46:52 np0005625203.localdomain systemd[1]: Queuing reload/restart jobs for marked units…
Feb 20 07:46:52 np0005625203.localdomain systemd[1]: Stopping Dynamic System Tuning Daemon...
Feb 20 07:46:52 np0005625203.localdomain systemd[1]: tuned.service: Deactivated successfully.
Feb 20 07:46:52 np0005625203.localdomain systemd[1]: Stopped Dynamic System Tuning Daemon.
Feb 20 07:46:52 np0005625203.localdomain systemd[1]: tuned.service: Consumed 1.571s CPU time.
Feb 20 07:46:52 np0005625203.localdomain systemd[1]: Starting Dynamic System Tuning Daemon...
Feb 20 07:46:53 np0005625203.localdomain systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 20 07:46:53 np0005625203.localdomain systemd[1]: Finished man-db-cache-update.service.
Feb 20 07:46:53 np0005625203.localdomain systemd[1]: run-re899c27d48a745d4a95827748298a4be.service: Deactivated successfully.
Feb 20 07:46:54 np0005625203.localdomain systemd[1]: Started Dynamic System Tuning Daemon.
Feb 20 07:46:54 np0005625203.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 20 07:46:54 np0005625203.localdomain systemd[1]: Starting man-db-cache-update.service...
Feb 20 07:46:54 np0005625203.localdomain sshd[38798]: Disconnecting invalid user minima 185.246.128.171 port 8735: Change of username or service not allowed: (minima,ssh-connection) -> (git,ssh-connection) [preauth]
Feb 20 07:46:54 np0005625203.localdomain systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 20 07:46:54 np0005625203.localdomain systemd[1]: Finished man-db-cache-update.service.
Feb 20 07:46:54 np0005625203.localdomain systemd[1]: run-rcb874107d1e0423888135bd7927c95bc.service: Deactivated successfully.
Feb 20 07:46:55 np0005625203.localdomain sudo[38830]: pam_unix(sudo:session): session closed for user root
Feb 20 07:46:55 np0005625203.localdomain sudo[39266]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-voiuzmekrjidfvohewbqtxhaymjbtbiy ; /usr/bin/python3
Feb 20 07:46:55 np0005625203.localdomain sudo[39266]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:46:55 np0005625203.localdomain python3[39268]: ansible-systemd Invoked with name=tuned state=restarted enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 07:46:56 np0005625203.localdomain sshd[39270]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:46:56 np0005625203.localdomain systemd[1]: Stopping Dynamic System Tuning Daemon...
Feb 20 07:46:56 np0005625203.localdomain systemd[1]: tuned.service: Deactivated successfully.
Feb 20 07:46:56 np0005625203.localdomain systemd[1]: Stopped Dynamic System Tuning Daemon.
Feb 20 07:46:56 np0005625203.localdomain systemd[1]: Starting Dynamic System Tuning Daemon...
Feb 20 07:46:58 np0005625203.localdomain systemd[1]: Started Dynamic System Tuning Daemon.
Feb 20 07:46:58 np0005625203.localdomain sudo[39266]: pam_unix(sudo:session): session closed for user root
Feb 20 07:46:58 np0005625203.localdomain sudo[39463]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ltjxjvfnspauqvyvzyxdlaajwcweowzs ; PATH=/bin:/usr/bin:/sbin:/usr/sbin /usr/bin/python3
Feb 20 07:46:58 np0005625203.localdomain sudo[39463]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:46:58 np0005625203.localdomain python3[39465]: ansible-ansible.legacy.command Invoked with _raw_params=which tuned-adm _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:46:58 np0005625203.localdomain sudo[39463]: pam_unix(sudo:session): session closed for user root
Feb 20 07:46:58 np0005625203.localdomain sudo[39480]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-izjeehzmqdzxzkjfoqtmaoxqqikocoxh ; /usr/bin/python3
Feb 20 07:46:58 np0005625203.localdomain sudo[39480]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:46:59 np0005625203.localdomain python3[39482]: ansible-slurp Invoked with src=/etc/tuned/active_profile
Feb 20 07:46:59 np0005625203.localdomain sudo[39480]: pam_unix(sudo:session): session closed for user root
Feb 20 07:46:59 np0005625203.localdomain sudo[39496]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mdcygdfanpjjhvrazjdetpxkovfqtmxm ; /usr/bin/python3
Feb 20 07:46:59 np0005625203.localdomain sudo[39496]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:46:59 np0005625203.localdomain python3[39498]: ansible-stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 20 07:46:59 np0005625203.localdomain sudo[39496]: pam_unix(sudo:session): session closed for user root
Feb 20 07:46:59 np0005625203.localdomain sshd[39270]: Invalid user git from 185.246.128.171 port 52162
Feb 20 07:46:59 np0005625203.localdomain sudo[39512]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-palfawifzneceteusljvrroyunavbauf ; PATH=/bin:/usr/bin:/sbin:/usr/sbin /usr/bin/python3
Feb 20 07:46:59 np0005625203.localdomain sudo[39512]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:46:59 np0005625203.localdomain python3[39514]: ansible-ansible.legacy.command Invoked with _raw_params=tuned-adm profile throughput-performance _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:47:01 np0005625203.localdomain sudo[39512]: pam_unix(sudo:session): session closed for user root
Feb 20 07:47:01 np0005625203.localdomain sudo[39532]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ylbyeckhdgrdahrntvyulzxjwklwproc ; /usr/bin/python3
Feb 20 07:47:01 np0005625203.localdomain sudo[39532]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:47:01 np0005625203.localdomain python3[39534]: ansible-ansible.legacy.command Invoked with _raw_params=cat /proc/cmdline _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:47:01 np0005625203.localdomain sudo[39532]: pam_unix(sudo:session): session closed for user root
Feb 20 07:47:02 np0005625203.localdomain sudo[39549]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vchpjhxxzmbywofsrfnjfqgkuocnbnar ; /usr/bin/python3
Feb 20 07:47:02 np0005625203.localdomain sudo[39549]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:47:02 np0005625203.localdomain python3[39551]: ansible-stat Invoked with path=/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 20 07:47:02 np0005625203.localdomain sudo[39549]: pam_unix(sudo:session): session closed for user root
Feb 20 07:47:04 np0005625203.localdomain sudo[39565]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-njztriqrztjajobdrrulqndmwgsvxrqr ; /usr/bin/python3
Feb 20 07:47:04 np0005625203.localdomain sudo[39565]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:47:05 np0005625203.localdomain python3[39567]: ansible-replace Invoked with regexp=TRIPLEO_HEAT_TEMPLATE_KERNEL_ARGS dest=/etc/default/grub replace= path=/etc/default/grub backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:47:05 np0005625203.localdomain sudo[39565]: pam_unix(sudo:session): session closed for user root
Feb 20 07:47:05 np0005625203.localdomain sshd[39270]: Disconnecting invalid user git 185.246.128.171 port 52162: Change of username or service not allowed: (git,ssh-connection) -> (test4,ssh-connection) [preauth]
Feb 20 07:47:07 np0005625203.localdomain sshd[39568]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:47:09 np0005625203.localdomain sudo[39583]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zlaybwntdhhixmikrtvagzuppnwbbkfj ; /usr/bin/python3
Feb 20 07:47:09 np0005625203.localdomain sudo[39583]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:47:09 np0005625203.localdomain python3[39585]: ansible-file Invoked with path=/etc/puppet/hieradata state=directory mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:47:09 np0005625203.localdomain sudo[39583]: pam_unix(sudo:session): session closed for user root
Feb 20 07:47:09 np0005625203.localdomain sudo[39631]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zbmuhimunkhtyjgnyryomwlpgfkjtddp ; /usr/bin/python3
Feb 20 07:47:09 np0005625203.localdomain sudo[39631]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:47:09 np0005625203.localdomain python3[39633]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hiera.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:47:09 np0005625203.localdomain sudo[39631]: pam_unix(sudo:session): session closed for user root
Feb 20 07:47:09 np0005625203.localdomain sudo[39676]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iftgsckjzkcowscrkhpsdizbbvahonwr ; /usr/bin/python3
Feb 20 07:47:09 np0005625203.localdomain sudo[39676]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:47:10 np0005625203.localdomain python3[39678]: ansible-ansible.legacy.copy Invoked with mode=384 dest=/etc/puppet/hiera.yaml src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573629.4981787-70979-276835900810812/source _original_basename=tmpf3_ck40n follow=False checksum=aaf3699defba931d532f4955ae152f505046749a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:47:10 np0005625203.localdomain sudo[39676]: pam_unix(sudo:session): session closed for user root
Feb 20 07:47:10 np0005625203.localdomain sshd[39568]: Invalid user test4 from 185.246.128.171 port 48805
Feb 20 07:47:10 np0005625203.localdomain sudo[39706]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eomhxcsrnoeieuculpeijyysnpeajnzt ; /usr/bin/python3
Feb 20 07:47:10 np0005625203.localdomain sudo[39706]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:47:10 np0005625203.localdomain python3[39708]: ansible-file Invoked with src=/etc/puppet/hiera.yaml dest=/etc/hiera.yaml state=link force=True path=/etc/hiera.yaml recurse=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:47:10 np0005625203.localdomain sudo[39706]: pam_unix(sudo:session): session closed for user root
Feb 20 07:47:10 np0005625203.localdomain sshd[39568]: Disconnecting invalid user test4 185.246.128.171 port 48805: Change of username or service not allowed: (test4,ssh-connection) -> (kali,ssh-connection) [preauth]
Feb 20 07:47:11 np0005625203.localdomain sudo[39754]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lfovzbrrinszgsqmuwxrxghfdhdxinok ; /usr/bin/python3
Feb 20 07:47:11 np0005625203.localdomain sudo[39754]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:47:11 np0005625203.localdomain python3[39756]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/all_nodes.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:47:11 np0005625203.localdomain sudo[39754]: pam_unix(sudo:session): session closed for user root
Feb 20 07:47:11 np0005625203.localdomain sudo[39797]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hrmyojsdhlncpwrahegzhicozavwcqvo ; /usr/bin/python3
Feb 20 07:47:11 np0005625203.localdomain sudo[39797]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:47:11 np0005625203.localdomain python3[39799]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573631.1219566-71086-30773217634293/source dest=/etc/puppet/hieradata/all_nodes.json _original_basename=overcloud.json follow=False checksum=5387ef5e5a4b3d23a203db65b8a130e906dc0536 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:47:11 np0005625203.localdomain sudo[39797]: pam_unix(sudo:session): session closed for user root
Feb 20 07:47:12 np0005625203.localdomain sudo[39859]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uqqlnheudtfjljfbcpytssoaijbybsug ; /usr/bin/python3
Feb 20 07:47:12 np0005625203.localdomain sudo[39859]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:47:12 np0005625203.localdomain python3[39861]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/bootstrap_node.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:47:12 np0005625203.localdomain sudo[39859]: pam_unix(sudo:session): session closed for user root
Feb 20 07:47:12 np0005625203.localdomain sudo[39902]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jmgqwluljmxpfisxnyaocwtokwwhwwrc ; /usr/bin/python3
Feb 20 07:47:12 np0005625203.localdomain sudo[39902]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:47:12 np0005625203.localdomain python3[39904]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573632.015754-71145-110068206137580/source dest=/etc/puppet/hieradata/bootstrap_node.json mode=None follow=False _original_basename=bootstrap_node.j2 checksum=b3e2a3c34ad78c32d8298bcfb96fa0bd48de4c29 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:47:12 np0005625203.localdomain sudo[39902]: pam_unix(sudo:session): session closed for user root
Feb 20 07:47:13 np0005625203.localdomain sudo[39964]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ekpzpluinwjkztimxoogulpjhlckszlm ; /usr/bin/python3
Feb 20 07:47:13 np0005625203.localdomain sudo[39964]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:47:13 np0005625203.localdomain python3[39966]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/vip_data.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:47:13 np0005625203.localdomain sudo[39964]: pam_unix(sudo:session): session closed for user root
Feb 20 07:47:13 np0005625203.localdomain sshd[39967]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:47:13 np0005625203.localdomain sudo[40008]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eaktoemigzvebykeigubxlgwkwjzkfag ; /usr/bin/python3
Feb 20 07:47:13 np0005625203.localdomain sudo[40008]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:47:13 np0005625203.localdomain python3[40010]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573632.860289-71145-205112960081126/source dest=/etc/puppet/hieradata/vip_data.json mode=None follow=False _original_basename=vip_data.j2 checksum=9360c8b01c30dc9677a403a9f11e562b9309fb54 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:47:13 np0005625203.localdomain sudo[40008]: pam_unix(sudo:session): session closed for user root
Feb 20 07:47:14 np0005625203.localdomain sudo[40070]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xdyflxppttrwrvufedabsrglnkdtqqkd ; /usr/bin/python3
Feb 20 07:47:14 np0005625203.localdomain sudo[40070]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:47:14 np0005625203.localdomain python3[40072]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/net_ip_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:47:14 np0005625203.localdomain sudo[40070]: pam_unix(sudo:session): session closed for user root
Feb 20 07:47:14 np0005625203.localdomain sudo[40113]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pevvgtsuqavsdbdcxgyluxdoblryxnti ; /usr/bin/python3
Feb 20 07:47:14 np0005625203.localdomain sudo[40113]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:47:14 np0005625203.localdomain python3[40115]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573633.7569263-71145-206050310650245/source dest=/etc/puppet/hieradata/net_ip_map.json mode=None follow=False _original_basename=net_ip_map.j2 checksum=175c760950d63a47f443f25b58088dba962f090b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:47:14 np0005625203.localdomain sudo[40113]: pam_unix(sudo:session): session closed for user root
Feb 20 07:47:14 np0005625203.localdomain sudo[40176]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lplnhcbztydvbtyjgnwumvrqmcrrobtc ; /usr/bin/python3
Feb 20 07:47:14 np0005625203.localdomain sudo[40176]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:47:15 np0005625203.localdomain python3[40178]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/cloud_domain.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:47:15 np0005625203.localdomain sudo[40176]: pam_unix(sudo:session): session closed for user root
Feb 20 07:47:15 np0005625203.localdomain sudo[40219]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dupydwtxdnpfkeumaezdymifgupyryer ; /usr/bin/python3
Feb 20 07:47:15 np0005625203.localdomain sudo[40219]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:47:15 np0005625203.localdomain python3[40221]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573634.7386243-71145-218340058039122/source dest=/etc/puppet/hieradata/cloud_domain.json mode=None follow=False _original_basename=cloud_domain.j2 checksum=5dd835a63e6a03d74797c2e2eadf4bea1cecd9d9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:47:15 np0005625203.localdomain sudo[40219]: pam_unix(sudo:session): session closed for user root
Feb 20 07:47:15 np0005625203.localdomain sudo[40281]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qwofhhnaxgjsraieimatzacqcyvgtrrc ; /usr/bin/python3
Feb 20 07:47:15 np0005625203.localdomain sudo[40281]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:47:15 np0005625203.localdomain python3[40283]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/fqdn.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:47:15 np0005625203.localdomain sudo[40281]: pam_unix(sudo:session): session closed for user root
Feb 20 07:47:16 np0005625203.localdomain sudo[40324]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rjacihkhunmxxzkacupudfdubbqxesgh ; /usr/bin/python3
Feb 20 07:47:16 np0005625203.localdomain sudo[40324]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:47:16 np0005625203.localdomain python3[40326]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573635.5621474-71145-43280231889787/source dest=/etc/puppet/hieradata/fqdn.json mode=None follow=False _original_basename=fqdn.j2 checksum=6d8fe2dd9b8d1332a7b3dadb0a8d26835b6f297b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:47:16 np0005625203.localdomain sudo[40324]: pam_unix(sudo:session): session closed for user root
Feb 20 07:47:16 np0005625203.localdomain sudo[40386]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dkvbwkgsqslwbhcinpfkwpfqduotewmp ; /usr/bin/python3
Feb 20 07:47:16 np0005625203.localdomain sudo[40386]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:47:16 np0005625203.localdomain python3[40388]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_names.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:47:16 np0005625203.localdomain sudo[40386]: pam_unix(sudo:session): session closed for user root
Feb 20 07:47:16 np0005625203.localdomain sudo[40429]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-plibvlmuxqfgmbaaikzfyhjysefgexjp ; /usr/bin/python3
Feb 20 07:47:16 np0005625203.localdomain sudo[40429]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:47:17 np0005625203.localdomain sshd[39967]: Invalid user kali from 185.246.128.171 port 14240
Feb 20 07:47:17 np0005625203.localdomain python3[40431]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573636.4393482-71145-125126296304663/source dest=/etc/puppet/hieradata/service_names.json mode=None follow=False _original_basename=service_names.j2 checksum=ff586b96402d8ae133745cf06f17e772b2f22d52 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:47:17 np0005625203.localdomain sudo[40429]: pam_unix(sudo:session): session closed for user root
Feb 20 07:47:17 np0005625203.localdomain sudo[40491]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cqihbgduohctudxnxjaxlerngxdxwxyy ; /usr/bin/python3
Feb 20 07:47:17 np0005625203.localdomain sudo[40491]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:47:17 np0005625203.localdomain sudo[40494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 07:47:17 np0005625203.localdomain sudo[40494]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:47:17 np0005625203.localdomain sudo[40494]: pam_unix(sudo:session): session closed for user root
Feb 20 07:47:17 np0005625203.localdomain python3[40493]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_configs.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:47:17 np0005625203.localdomain sudo[40509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 07:47:17 np0005625203.localdomain sudo[40491]: pam_unix(sudo:session): session closed for user root
Feb 20 07:47:17 np0005625203.localdomain sudo[40509]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:47:17 np0005625203.localdomain sudo[40564]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eprhjtorcvvisbdbdvafhsnbhvvgqqtr ; /usr/bin/python3
Feb 20 07:47:17 np0005625203.localdomain sudo[40564]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:47:17 np0005625203.localdomain python3[40566]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573637.3190076-71145-193337999874525/source dest=/etc/puppet/hieradata/service_configs.json mode=None follow=False _original_basename=service_configs.j2 checksum=105f529004e67673ca4edd886c338642e88dedf6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:47:17 np0005625203.localdomain sudo[40564]: pam_unix(sudo:session): session closed for user root
Feb 20 07:47:18 np0005625203.localdomain sudo[40509]: pam_unix(sudo:session): session closed for user root
Feb 20 07:47:18 np0005625203.localdomain sudo[40659]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-grsdsmyzhaxnrlvowudysnbbolgsbmiq ; /usr/bin/python3
Feb 20 07:47:18 np0005625203.localdomain sudo[40659]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:47:18 np0005625203.localdomain sshd[39967]: Disconnecting invalid user kali 185.246.128.171 port 14240: Change of username or service not allowed: (kali,ssh-connection) -> (p,ssh-connection) [preauth]
Feb 20 07:47:18 np0005625203.localdomain python3[40661]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:47:18 np0005625203.localdomain sudo[40659]: pam_unix(sudo:session): session closed for user root
Feb 20 07:47:18 np0005625203.localdomain sudo[40702]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ywbfufvznfugthsowsihtaoupmwgwcxd ; /usr/bin/python3
Feb 20 07:47:18 np0005625203.localdomain sudo[40702]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:47:18 np0005625203.localdomain python3[40704]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573638.1169686-71145-8987081163711/source dest=/etc/puppet/hieradata/extraconfig.json mode=None follow=False _original_basename=extraconfig.j2 checksum=5f36b2ea290645ee34d943220a14b54ee5ea5be5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:47:18 np0005625203.localdomain sudo[40702]: pam_unix(sudo:session): session closed for user root
Feb 20 07:47:18 np0005625203.localdomain sudo[40705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 07:47:18 np0005625203.localdomain sudo[40705]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:47:18 np0005625203.localdomain sudo[40705]: pam_unix(sudo:session): session closed for user root
Feb 20 07:47:19 np0005625203.localdomain sudo[40779]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-emzjcqmagurvexmrgokmsyvasbofownc ; /usr/bin/python3
Feb 20 07:47:19 np0005625203.localdomain sudo[40779]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:47:19 np0005625203.localdomain python3[40781]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/role_extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:47:19 np0005625203.localdomain sudo[40779]: pam_unix(sudo:session): session closed for user root
Feb 20 07:47:19 np0005625203.localdomain sudo[40822]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ajjaauasoqmdiscnkaqcryuyuxrmkjjg ; /usr/bin/python3
Feb 20 07:47:19 np0005625203.localdomain sudo[40822]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:47:19 np0005625203.localdomain python3[40824]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573638.9336066-71145-177341447394003/source dest=/etc/puppet/hieradata/role_extraconfig.json mode=None follow=False _original_basename=role_extraconfig.j2 checksum=34875968bf996542162e620523f9dcfb3deac331 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:47:19 np0005625203.localdomain sudo[40822]: pam_unix(sudo:session): session closed for user root
Feb 20 07:47:19 np0005625203.localdomain sudo[40884]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aolbdmywdnrqmoymnjgzuaxnwzulcdeg ; /usr/bin/python3
Feb 20 07:47:19 np0005625203.localdomain sudo[40884]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:47:20 np0005625203.localdomain python3[40886]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ovn_chassis_mac_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:47:20 np0005625203.localdomain sudo[40884]: pam_unix(sudo:session): session closed for user root
Feb 20 07:47:20 np0005625203.localdomain sudo[40927]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nmoiilwsykfhngejsilmnnwhmnjpxwlf ; /usr/bin/python3
Feb 20 07:47:20 np0005625203.localdomain sudo[40927]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:47:20 np0005625203.localdomain python3[40929]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573639.7610402-71145-112461590679700/source dest=/etc/puppet/hieradata/ovn_chassis_mac_map.json mode=None follow=False _original_basename=ovn_chassis_mac_map.j2 checksum=40c8c065b55f6be92e71010b12be04c88b8e86c7 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:47:20 np0005625203.localdomain sudo[40927]: pam_unix(sudo:session): session closed for user root
Feb 20 07:47:20 np0005625203.localdomain sshd[40944]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:47:20 np0005625203.localdomain sudo[40958]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mxnfdmxhrfnkzeqyccqdvxgnipzfdffq ; /usr/bin/python3
Feb 20 07:47:20 np0005625203.localdomain sudo[40958]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:47:21 np0005625203.localdomain python3[40960]: ansible-stat Invoked with path={'src': '/etc/puppet/hieradata/ansible_managed.json'} follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 20 07:47:21 np0005625203.localdomain sudo[40958]: pam_unix(sudo:session): session closed for user root
Feb 20 07:47:21 np0005625203.localdomain sudo[41006]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uuyrkpcayrpixvqoyrqhyqmwjpjeqckr ; /usr/bin/python3
Feb 20 07:47:21 np0005625203.localdomain sudo[41006]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:47:21 np0005625203.localdomain python3[41008]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ansible_managed.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:47:21 np0005625203.localdomain sudo[41006]: pam_unix(sudo:session): session closed for user root
Feb 20 07:47:21 np0005625203.localdomain sudo[41050]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-txxrryvfpdchqbuaclbodtsnxmuoxzkd ; /usr/bin/python3
Feb 20 07:47:21 np0005625203.localdomain sudo[41050]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:47:22 np0005625203.localdomain python3[41052]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/ansible_managed.json owner=root group=root mode=0644 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573641.3553188-71957-7411203412549/source _original_basename=tmpb98b8qhx follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:47:22 np0005625203.localdomain sudo[41050]: pam_unix(sudo:session): session closed for user root
Feb 20 07:47:23 np0005625203.localdomain sshd[40944]: Invalid user p from 185.246.128.171 port 50850
Feb 20 07:47:24 np0005625203.localdomain sshd[40944]: Disconnecting invalid user p 185.246.128.171 port 50850: Change of username or service not allowed: (p,ssh-connection) -> (smb,ssh-connection) [preauth]
Feb 20 07:47:26 np0005625203.localdomain sshd[41067]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:47:26 np0005625203.localdomain sudo[41081]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ukcopyvhpqwrmyebukxwexoyaduiftep ; /usr/bin/python3
Feb 20 07:47:26 np0005625203.localdomain sudo[41081]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:47:26 np0005625203.localdomain python3[41083]: ansible-setup Invoked with gather_subset=['!all', '!min', 'network'] filter=['ansible_default_ipv4'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 20 07:47:27 np0005625203.localdomain sudo[41081]: pam_unix(sudo:session): session closed for user root
Feb 20 07:47:27 np0005625203.localdomain sudo[41143]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ktwuvsxyukgyltcgjunaifuwgvkcctoy ; /usr/bin/python3
Feb 20 07:47:27 np0005625203.localdomain sudo[41143]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:47:27 np0005625203.localdomain python3[41145]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 38.102.83.1 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:47:28 np0005625203.localdomain sshd[41067]: Invalid user smb from 185.246.128.171 port 16952
Feb 20 07:47:29 np0005625203.localdomain sshd[41067]: Disconnecting invalid user smb 185.246.128.171 port 16952: Change of username or service not allowed: (smb,ssh-connection) -> (onlime_r,ssh-connection) [preauth]
Feb 20 07:47:30 np0005625203.localdomain sshd[41147]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:47:31 np0005625203.localdomain sudo[41143]: pam_unix(sudo:session): session closed for user root
Feb 20 07:47:31 np0005625203.localdomain sudo[41162]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ludqymbdxwmcfdtqdyujtajkvcizizjk ; /usr/bin/python3
Feb 20 07:47:31 np0005625203.localdomain sudo[41162]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:47:32 np0005625203.localdomain python3[41164]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 192.168.122.10 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:47:32 np0005625203.localdomain sshd[41166]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:47:32 np0005625203.localdomain sshd[41166]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 07:47:32 np0005625203.localdomain sshd[41147]: Invalid user onlime_r from 185.246.128.171 port 36516
Feb 20 07:47:33 np0005625203.localdomain sshd[41147]: Disconnecting invalid user onlime_r 185.246.128.171 port 36516: Change of username or service not allowed: (onlime_r,ssh-connection) -> (lucas,ssh-connection) [preauth]
Feb 20 07:47:33 np0005625203.localdomain sshd[41168]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:47:35 np0005625203.localdomain sshd[41170]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:47:36 np0005625203.localdomain sudo[41162]: pam_unix(sudo:session): session closed for user root
Feb 20 07:47:36 np0005625203.localdomain sshd[41168]: Invalid user lucas from 185.246.128.171 port 52278
Feb 20 07:47:37 np0005625203.localdomain sudo[41184]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-konlfigxtkttmpbavwxjlieurgzsyxqr ; /usr/bin/python3
Feb 20 07:47:37 np0005625203.localdomain sudo[41184]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:47:37 np0005625203.localdomain sshd[41168]: Disconnecting invalid user lucas 185.246.128.171 port 52278: Change of username or service not allowed: (lucas,ssh-connection) -> (cindy,ssh-connection) [preauth]
Feb 20 07:47:37 np0005625203.localdomain python3[41186]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 192.168.122.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")
                                                         MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")
                                                         echo "$INT $MTU"
                                                          _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:47:37 np0005625203.localdomain sudo[41184]: pam_unix(sudo:session): session closed for user root
Feb 20 07:47:37 np0005625203.localdomain sshd[41170]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 07:47:37 np0005625203.localdomain sudo[41208]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nymhnhbydakjnaeiuysoscifybaqmrek ; /usr/bin/python3
Feb 20 07:47:37 np0005625203.localdomain sudo[41208]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:47:37 np0005625203.localdomain python3[41210]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 192.168.122.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:47:39 np0005625203.localdomain sshd[41212]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:47:41 np0005625203.localdomain sudo[41208]: pam_unix(sudo:session): session closed for user root
Feb 20 07:47:42 np0005625203.localdomain sudo[41227]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mopddokbdquqdgglsvmmldedydanlbhy ; /usr/bin/python3
Feb 20 07:47:42 np0005625203.localdomain sudo[41227]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:47:42 np0005625203.localdomain python3[41229]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.18.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")
                                                         MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")
                                                         echo "$INT $MTU"
                                                          _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:47:42 np0005625203.localdomain sudo[41227]: pam_unix(sudo:session): session closed for user root
Feb 20 07:47:42 np0005625203.localdomain sudo[41250]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gjcognddwrmvitgjrijkrsnkgnhpjfbw ; /usr/bin/python3
Feb 20 07:47:42 np0005625203.localdomain sudo[41250]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:47:42 np0005625203.localdomain sshd[41212]: Invalid user cindy from 185.246.128.171 port 16776
Feb 20 07:47:42 np0005625203.localdomain python3[41252]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 172.18.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:47:43 np0005625203.localdomain sshd[41212]: Disconnecting invalid user cindy 185.246.128.171 port 16776: Change of username or service not allowed: (cindy,ssh-connection) -> (nginx,ssh-connection) [preauth]
Feb 20 07:47:46 np0005625203.localdomain sudo[41250]: pam_unix(sudo:session): session closed for user root
Feb 20 07:47:47 np0005625203.localdomain sudo[41267]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ntorggotxbuvooziybjzprfbekseltzr ; /usr/bin/python3
Feb 20 07:47:47 np0005625203.localdomain sudo[41267]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:47:47 np0005625203.localdomain sshd[41270]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:47:47 np0005625203.localdomain python3[41269]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -s 1472 -c 5 172.18.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:47:49 np0005625203.localdomain systemd[36457]: Starting Mark boot as successful...
Feb 20 07:47:49 np0005625203.localdomain systemd[36457]: Finished Mark boot as successful.
Feb 20 07:47:50 np0005625203.localdomain sshd[41270]: Invalid user nginx from 185.246.128.171 port 55229
Feb 20 07:47:51 np0005625203.localdomain sudo[41267]: pam_unix(sudo:session): session closed for user root
Feb 20 07:47:51 np0005625203.localdomain sudo[41287]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jaeaobgylthcknmkuwjmnwlogjqtfrpv ; /usr/bin/python3
Feb 20 07:47:51 np0005625203.localdomain sudo[41287]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:47:51 np0005625203.localdomain python3[41289]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.20.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")
                                                         MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")
                                                         echo "$INT $MTU"
                                                          _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:47:51 np0005625203.localdomain sudo[41287]: pam_unix(sudo:session): session closed for user root
Feb 20 07:47:52 np0005625203.localdomain sudo[41310]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-igttogevuklsjcopraainhzvoynzgbnm ; /usr/bin/python3
Feb 20 07:47:52 np0005625203.localdomain sudo[41310]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:47:52 np0005625203.localdomain python3[41312]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 172.20.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:47:53 np0005625203.localdomain sshd[41270]: Disconnecting invalid user nginx 185.246.128.171 port 55229: Change of username or service not allowed: (nginx,ssh-connection) -> (nodemanager,ssh-connection) [preauth]
Feb 20 07:47:55 np0005625203.localdomain sshd[41314]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:47:56 np0005625203.localdomain sudo[41310]: pam_unix(sudo:session): session closed for user root
Feb 20 07:47:56 np0005625203.localdomain sudo[41329]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-noynpvphdvqezbwvyqenyqlofrvscjcl ; /usr/bin/python3
Feb 20 07:47:56 np0005625203.localdomain sudo[41329]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:47:56 np0005625203.localdomain python3[41331]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -s 1472 -c 5 172.20.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:47:58 np0005625203.localdomain sshd[41314]: Invalid user nodemanager from 185.246.128.171 port 31951
Feb 20 07:47:59 np0005625203.localdomain sshd[41314]: Disconnecting invalid user nodemanager 185.246.128.171 port 31951: Change of username or service not allowed: (nodemanager,ssh-connection) -> (monitor,ssh-connection) [preauth]
Feb 20 07:48:01 np0005625203.localdomain sudo[41329]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:01 np0005625203.localdomain sudo[41346]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jlnkasrcuvobivfdoqfjhegjwyuywnmp ; /usr/bin/python3
Feb 20 07:48:01 np0005625203.localdomain sudo[41346]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:01 np0005625203.localdomain python3[41348]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.17.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")
                                                         MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")
                                                         echo "$INT $MTU"
                                                          _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:48:01 np0005625203.localdomain sudo[41346]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:01 np0005625203.localdomain sudo[41369]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yqniaqhbnjsdduysrakuuhjnbxdtwnld ; /usr/bin/python3
Feb 20 07:48:01 np0005625203.localdomain sudo[41369]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:02 np0005625203.localdomain python3[41371]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 172.17.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:48:02 np0005625203.localdomain sshd[41373]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:48:06 np0005625203.localdomain sudo[41369]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:06 np0005625203.localdomain sudo[41388]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vitmaxewcxkylygxrjmfhlnorbgjgagi ; /usr/bin/python3
Feb 20 07:48:06 np0005625203.localdomain sudo[41388]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:06 np0005625203.localdomain python3[41390]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -s 1472 -c 5 172.17.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:48:07 np0005625203.localdomain sshd[41373]: Invalid user monitor from 185.246.128.171 port 3315
Feb 20 07:48:10 np0005625203.localdomain sshd[41373]: Disconnecting invalid user monitor 185.246.128.171 port 3315: Change of username or service not allowed: (monitor,ssh-connection) -> (ahmed,ssh-connection) [preauth]
Feb 20 07:48:10 np0005625203.localdomain sudo[41388]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:10 np0005625203.localdomain sudo[41405]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dsibokhmyadtbkonvhnzvnzylmhenxwy ; /usr/bin/python3
Feb 20 07:48:10 np0005625203.localdomain sudo[41405]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:11 np0005625203.localdomain python3[41407]: ansible-ansible.legacy.command Invoked with _raw_params=INT=$(ip ro get 172.19.0.106 | head -1 | sed -nr "s/.* dev (\w+) .*/\1/p")
                                                         MTU=$(cat /sys/class/net/${INT}/mtu 2>/dev/null || echo "0")
                                                         echo "$INT $MTU"
                                                          _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:48:11 np0005625203.localdomain sudo[41405]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:11 np0005625203.localdomain sudo[41428]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-atkdnnvnukjymjkgezhdkjzhusdtwgak ; /usr/bin/python3
Feb 20 07:48:11 np0005625203.localdomain sudo[41428]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:11 np0005625203.localdomain python3[41430]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -c 5 172.19.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:48:12 np0005625203.localdomain sshd[41432]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:48:15 np0005625203.localdomain sshd[41432]: Invalid user ahmed from 185.246.128.171 port 51947
Feb 20 07:48:15 np0005625203.localdomain sudo[41428]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:15 np0005625203.localdomain sudo[41447]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kgxxqxhtamsaywziwvuudhbaqzzytxjo ; /usr/bin/python3
Feb 20 07:48:15 np0005625203.localdomain sudo[41447]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:16 np0005625203.localdomain python3[41449]: ansible-ansible.legacy.command Invoked with _raw_params=ping -w 10 -s 1472 -c 5 172.19.0.106 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:48:18 np0005625203.localdomain sshd[41432]: Disconnecting invalid user ahmed 185.246.128.171 port 51947: Change of username or service not allowed: (ahmed,ssh-connection) -> (ionguest,ssh-connection) [preauth]
Feb 20 07:48:19 np0005625203.localdomain sudo[41451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 07:48:19 np0005625203.localdomain sudo[41451]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:48:19 np0005625203.localdomain sudo[41451]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:19 np0005625203.localdomain sudo[41466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 07:48:19 np0005625203.localdomain sudo[41466]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:48:19 np0005625203.localdomain sudo[41466]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:20 np0005625203.localdomain sshd[41512]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:48:20 np0005625203.localdomain sudo[41447]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:21 np0005625203.localdomain sudo[41527]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-axgomeoqeylvhrchuyhaeuniyyyxvrdg ; /usr/bin/python3
Feb 20 07:48:21 np0005625203.localdomain sudo[41527]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:21 np0005625203.localdomain sshd[41530]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:48:21 np0005625203.localdomain python3[41529]: ansible-file Invoked with path=/etc/puppet/hieradata state=directory mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:48:21 np0005625203.localdomain sudo[41527]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:21 np0005625203.localdomain sshd[41530]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 07:48:21 np0005625203.localdomain sudo[41577]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wuzthwpcdlbyydindkjlwcxaxsqpniaa ; /usr/bin/python3
Feb 20 07:48:21 np0005625203.localdomain sudo[41577]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:21 np0005625203.localdomain python3[41579]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hiera.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:48:21 np0005625203.localdomain sudo[41577]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:21 np0005625203.localdomain sudo[41595]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gddqchqchwuvdwoziscyqxcxjeauphiu ; /usr/bin/python3
Feb 20 07:48:21 np0005625203.localdomain sudo[41595]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:21 np0005625203.localdomain sudo[41598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 07:48:21 np0005625203.localdomain sudo[41598]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:48:21 np0005625203.localdomain sudo[41598]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:21 np0005625203.localdomain python3[41597]: ansible-ansible.legacy.file Invoked with mode=384 dest=/etc/puppet/hiera.yaml _original_basename=tmpb88xox1m recurse=False state=file path=/etc/puppet/hiera.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:48:21 np0005625203.localdomain sudo[41595]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:22 np0005625203.localdomain sudo[41640]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wtowvuqrqgtjyemfpqgiezdxgifeblym ; /usr/bin/python3
Feb 20 07:48:22 np0005625203.localdomain sudo[41640]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:22 np0005625203.localdomain python3[41642]: ansible-file Invoked with src=/etc/puppet/hiera.yaml dest=/etc/hiera.yaml state=link force=True path=/etc/hiera.yaml recurse=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:48:22 np0005625203.localdomain sudo[41640]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:22 np0005625203.localdomain sudo[41688]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-erpcrrjuonxnqvbpqvskpsufqpnhovge ; /usr/bin/python3
Feb 20 07:48:22 np0005625203.localdomain sudo[41688]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:23 np0005625203.localdomain python3[41690]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/all_nodes.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:48:23 np0005625203.localdomain sudo[41688]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:23 np0005625203.localdomain sudo[41706]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zcldjnpqkrtaiskmxmxjowdlrcwmrhuz ; /usr/bin/python3
Feb 20 07:48:23 np0005625203.localdomain sudo[41706]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:23 np0005625203.localdomain python3[41708]: ansible-ansible.legacy.file Invoked with dest=/etc/puppet/hieradata/all_nodes.json _original_basename=overcloud.json recurse=False state=file path=/etc/puppet/hieradata/all_nodes.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:48:23 np0005625203.localdomain sudo[41706]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:23 np0005625203.localdomain sudo[41768]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dpndcgjnzcolvodbfpbgkjjzyrzoeiro ; /usr/bin/python3
Feb 20 07:48:23 np0005625203.localdomain sudo[41768]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:23 np0005625203.localdomain python3[41770]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/bootstrap_node.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:48:23 np0005625203.localdomain sudo[41768]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:23 np0005625203.localdomain sudo[41786]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wwxqowmggrogpjnrpcqkkczuytxrkdwu ; /usr/bin/python3
Feb 20 07:48:23 np0005625203.localdomain sudo[41786]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:24 np0005625203.localdomain python3[41788]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/bootstrap_node.json _original_basename=bootstrap_node.j2 recurse=False state=file path=/etc/puppet/hieradata/bootstrap_node.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:48:24 np0005625203.localdomain sudo[41786]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:24 np0005625203.localdomain sshd[41512]: Invalid user ionguest from 185.246.128.171 port 24389
Feb 20 07:48:24 np0005625203.localdomain sudo[41848]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gumqojxpugzzhjacpgjtahqclsatmsxg ; /usr/bin/python3
Feb 20 07:48:24 np0005625203.localdomain sudo[41848]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:24 np0005625203.localdomain python3[41850]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/vip_data.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:48:24 np0005625203.localdomain sudo[41848]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:24 np0005625203.localdomain sudo[41866]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kezracdtdftbdladrmimdncpvyrizxtq ; /usr/bin/python3
Feb 20 07:48:24 np0005625203.localdomain sudo[41866]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:24 np0005625203.localdomain sshd[41512]: Disconnecting invalid user ionguest 185.246.128.171 port 24389: Change of username or service not allowed: (ionguest,ssh-connection) -> (user3,ssh-connection) [preauth]
Feb 20 07:48:24 np0005625203.localdomain python3[41868]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/vip_data.json _original_basename=vip_data.j2 recurse=False state=file path=/etc/puppet/hieradata/vip_data.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:48:24 np0005625203.localdomain sudo[41866]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:25 np0005625203.localdomain sudo[41928]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tmsmomiyvwcpurlpamuauzcdrhblbtxo ; /usr/bin/python3
Feb 20 07:48:25 np0005625203.localdomain sudo[41928]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:25 np0005625203.localdomain python3[41930]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/net_ip_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:48:25 np0005625203.localdomain sudo[41928]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:25 np0005625203.localdomain sudo[41946]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fmnjhduwbaccgngjqdgupzkxlhkhcgtf ; /usr/bin/python3
Feb 20 07:48:25 np0005625203.localdomain sudo[41946]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:25 np0005625203.localdomain python3[41948]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/net_ip_map.json _original_basename=net_ip_map.j2 recurse=False state=file path=/etc/puppet/hieradata/net_ip_map.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:48:25 np0005625203.localdomain sudo[41946]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:26 np0005625203.localdomain sudo[42008]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pdjzpovsemittlqunueqpmplywnbcxad ; /usr/bin/python3
Feb 20 07:48:26 np0005625203.localdomain sudo[42008]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:26 np0005625203.localdomain sshd[42011]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:48:26 np0005625203.localdomain python3[42010]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/cloud_domain.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:48:26 np0005625203.localdomain sudo[42008]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:26 np0005625203.localdomain sudo[42027]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cmqxufswhwmbywfbbdavqstqcclzzhco ; /usr/bin/python3
Feb 20 07:48:26 np0005625203.localdomain sudo[42027]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:26 np0005625203.localdomain python3[42029]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/cloud_domain.json _original_basename=cloud_domain.j2 recurse=False state=file path=/etc/puppet/hieradata/cloud_domain.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:48:26 np0005625203.localdomain sudo[42027]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:26 np0005625203.localdomain sudo[42089]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sildgvifijnegkwpwxwydjdemzntielk ; /usr/bin/python3
Feb 20 07:48:26 np0005625203.localdomain sudo[42089]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:27 np0005625203.localdomain python3[42091]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/fqdn.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:48:27 np0005625203.localdomain sudo[42089]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:27 np0005625203.localdomain sudo[42107]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kbjeumtvlvqqygrxtcoczozbemfgvbhr ; /usr/bin/python3
Feb 20 07:48:27 np0005625203.localdomain sudo[42107]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:27 np0005625203.localdomain python3[42109]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/fqdn.json _original_basename=fqdn.j2 recurse=False state=file path=/etc/puppet/hieradata/fqdn.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:48:27 np0005625203.localdomain sudo[42107]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:27 np0005625203.localdomain sudo[42169]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qxbxowvlmjgahdgaabuvnzgihxujkhip ; /usr/bin/python3
Feb 20 07:48:27 np0005625203.localdomain sudo[42169]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:27 np0005625203.localdomain python3[42171]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_names.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:48:27 np0005625203.localdomain sudo[42169]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:27 np0005625203.localdomain sudo[42187]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cfpsppfungsyllfwzuadivejqswxivul ; /usr/bin/python3
Feb 20 07:48:27 np0005625203.localdomain sudo[42187]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:28 np0005625203.localdomain python3[42189]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/service_names.json _original_basename=service_names.j2 recurse=False state=file path=/etc/puppet/hieradata/service_names.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:48:28 np0005625203.localdomain sudo[42187]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:28 np0005625203.localdomain sudo[42249]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wrjlzaqmeadhmppsclaqsdfrnmcfbhvt ; /usr/bin/python3
Feb 20 07:48:28 np0005625203.localdomain sudo[42249]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:28 np0005625203.localdomain python3[42251]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/service_configs.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:48:28 np0005625203.localdomain sudo[42249]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:28 np0005625203.localdomain sudo[42268]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dtilbykmcfiinscfxkfkrqrysffcxwgy ; /usr/bin/python3
Feb 20 07:48:28 np0005625203.localdomain sudo[42268]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:28 np0005625203.localdomain python3[42270]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/service_configs.json _original_basename=service_configs.j2 recurse=False state=file path=/etc/puppet/hieradata/service_configs.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:48:28 np0005625203.localdomain sudo[42268]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:29 np0005625203.localdomain sudo[42330]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-slvejqfkpvbxdpijqirtasojmefhtacw ; /usr/bin/python3
Feb 20 07:48:29 np0005625203.localdomain sudo[42330]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:29 np0005625203.localdomain python3[42332]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:48:29 np0005625203.localdomain sudo[42330]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:29 np0005625203.localdomain sudo[42348]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-musplfcbycugfvcsildjttwemuvyxvfy ; /usr/bin/python3
Feb 20 07:48:29 np0005625203.localdomain sudo[42348]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:29 np0005625203.localdomain python3[42350]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/extraconfig.json _original_basename=extraconfig.j2 recurse=False state=file path=/etc/puppet/hieradata/extraconfig.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:48:29 np0005625203.localdomain sudo[42348]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:29 np0005625203.localdomain sudo[42410]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kncafvvnsywepdtginmdjcoxdtbbjnvy ; /usr/bin/python3
Feb 20 07:48:29 np0005625203.localdomain sudo[42410]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:29 np0005625203.localdomain python3[42412]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/role_extraconfig.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:48:29 np0005625203.localdomain sudo[42410]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:30 np0005625203.localdomain sudo[42428]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lfwitiylkimotcqvjybuhzsdczeqsbkr ; /usr/bin/python3
Feb 20 07:48:30 np0005625203.localdomain sudo[42428]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:30 np0005625203.localdomain python3[42430]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/role_extraconfig.json _original_basename=role_extraconfig.j2 recurse=False state=file path=/etc/puppet/hieradata/role_extraconfig.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:48:30 np0005625203.localdomain sudo[42428]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:30 np0005625203.localdomain sudo[42490]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tzfrxxtliopgqpstpzvpbmqqziwujiqv ; /usr/bin/python3
Feb 20 07:48:30 np0005625203.localdomain sudo[42490]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:30 np0005625203.localdomain python3[42492]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ovn_chassis_mac_map.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:48:30 np0005625203.localdomain sudo[42490]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:30 np0005625203.localdomain sudo[42508]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lwizgccoybmwakkiupkmqluqmsplzkvt ; /usr/bin/python3
Feb 20 07:48:30 np0005625203.localdomain sudo[42508]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:30 np0005625203.localdomain sshd[42011]: Invalid user user3 from 185.246.128.171 port 55209
Feb 20 07:48:31 np0005625203.localdomain python3[42510]: ansible-ansible.legacy.file Invoked with mode=None dest=/etc/puppet/hieradata/ovn_chassis_mac_map.json _original_basename=ovn_chassis_mac_map.j2 recurse=False state=file path=/etc/puppet/hieradata/ovn_chassis_mac_map.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:48:31 np0005625203.localdomain sudo[42508]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:31 np0005625203.localdomain sudo[42538]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ojczcytmtvoiovxibhnuarumkceugzcl ; /usr/bin/python3
Feb 20 07:48:31 np0005625203.localdomain sudo[42538]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:31 np0005625203.localdomain python3[42540]: ansible-stat Invoked with path={'src': '/etc/puppet/hieradata/ansible_managed.json'} follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 20 07:48:31 np0005625203.localdomain sudo[42538]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:32 np0005625203.localdomain sudo[42586]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xnnixvbtxwwjkfaxvhqapbeehiprgffc ; /usr/bin/python3
Feb 20 07:48:32 np0005625203.localdomain sudo[42586]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:32 np0005625203.localdomain python3[42588]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/ansible_managed.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:48:32 np0005625203.localdomain sudo[42586]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:32 np0005625203.localdomain sudo[42604]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-exohtgjsckivuljfzqjfbigkephnuuwl ; /usr/bin/python3
Feb 20 07:48:32 np0005625203.localdomain sudo[42604]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:32 np0005625203.localdomain python3[42606]: ansible-ansible.legacy.file Invoked with owner=root group=root mode=0644 dest=/etc/puppet/hieradata/ansible_managed.json _original_basename=tmp6mxdt4dp recurse=False state=file path=/etc/puppet/hieradata/ansible_managed.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:48:32 np0005625203.localdomain sudo[42604]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:33 np0005625203.localdomain sshd[42011]: Disconnecting invalid user user3 185.246.128.171 port 55209: Change of username or service not allowed: (user3,ssh-connection) -> (cristi,ssh-connection) [preauth]
Feb 20 07:48:36 np0005625203.localdomain sshd[42621]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:48:36 np0005625203.localdomain sudo[42636]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tmfldtyxzzfevcanyaseasrgczznvgru ; /usr/bin/python3
Feb 20 07:48:36 np0005625203.localdomain sudo[42636]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:36 np0005625203.localdomain python3[42638]: ansible-dnf Invoked with name=['firewalld'] state=absent allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb 20 07:48:37 np0005625203.localdomain sshd[42621]: Invalid user cristi from 185.246.128.171 port 39228
Feb 20 07:48:38 np0005625203.localdomain sshd[42621]: Disconnecting invalid user cristi 185.246.128.171 port 39228: Change of username or service not allowed: (cristi,ssh-connection) -> (odoo16,ssh-connection) [preauth]
Feb 20 07:48:38 np0005625203.localdomain sshd[42640]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:48:39 np0005625203.localdomain sudo[42636]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:40 np0005625203.localdomain sshd[42640]: Invalid user odoo16 from 185.246.128.171 port 52086
Feb 20 07:48:41 np0005625203.localdomain sudo[42655]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fpxnuhoslvdypwyridqwpbjaahgvykns ; /usr/bin/python3
Feb 20 07:48:41 np0005625203.localdomain sudo[42655]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:41 np0005625203.localdomain python3[42657]: ansible-ansible.builtin.systemd Invoked with name=iptables.service state=stopped enabled=False daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 07:48:41 np0005625203.localdomain sshd[42640]: Disconnecting invalid user odoo16 185.246.128.171 port 52086: Change of username or service not allowed: (odoo16,ssh-connection) -> (toto,ssh-connection) [preauth]
Feb 20 07:48:42 np0005625203.localdomain sudo[42655]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:42 np0005625203.localdomain sudo[42673]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rrmdnlyygdqhupdokvyhlfgxldthwrla ; /usr/bin/python3
Feb 20 07:48:42 np0005625203.localdomain sudo[42673]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:42 np0005625203.localdomain python3[42675]: ansible-ansible.builtin.systemd Invoked with name=ip6tables.service state=stopped enabled=False daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 07:48:42 np0005625203.localdomain sudo[42673]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:42 np0005625203.localdomain sudo[42691]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ixqxhagsdoseyayyfjsvqefqokifshzb ; /usr/bin/python3
Feb 20 07:48:42 np0005625203.localdomain sudo[42691]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:43 np0005625203.localdomain python3[42693]: ansible-ansible.builtin.systemd Invoked with name=nftables state=started enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 07:48:43 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 07:48:43 np0005625203.localdomain systemd-rc-local-generator[42716]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 07:48:43 np0005625203.localdomain systemd-sysv-generator[42720]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 07:48:43 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 07:48:43 np0005625203.localdomain systemd[1]: Starting Netfilter Tables...
Feb 20 07:48:43 np0005625203.localdomain systemd[1]: Finished Netfilter Tables.
Feb 20 07:48:43 np0005625203.localdomain sudo[42691]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:44 np0005625203.localdomain sudo[42781]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-llyyoqbzmjzaemqdxkbmplbbwuiqauat ; /usr/bin/python3
Feb 20 07:48:44 np0005625203.localdomain sudo[42781]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:44 np0005625203.localdomain python3[42783]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:48:44 np0005625203.localdomain sudo[42781]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:44 np0005625203.localdomain sudo[42824]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-szvfkpnsbsibclcpmfmqxlelxglduabh ; /usr/bin/python3
Feb 20 07:48:44 np0005625203.localdomain sudo[42824]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:44 np0005625203.localdomain python3[42826]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573723.9830372-74819-249368216417923/source _original_basename=iptables.nft follow=False checksum=ede9860c99075946a7bc827210247aac639bc84a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:48:44 np0005625203.localdomain sudo[42824]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:44 np0005625203.localdomain sudo[42854]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rfjmnpaidtyrqqrnuazkbjbtwbgrxefr ; /usr/bin/python3
Feb 20 07:48:44 np0005625203.localdomain sudo[42854]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:45 np0005625203.localdomain python3[42856]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:48:45 np0005625203.localdomain sudo[42854]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:45 np0005625203.localdomain sudo[42872]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-happcozwiefeovhyikmxclhnnnddkiea ; /usr/bin/python3
Feb 20 07:48:45 np0005625203.localdomain sudo[42872]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:45 np0005625203.localdomain python3[42874]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:48:45 np0005625203.localdomain sudo[42872]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:45 np0005625203.localdomain sshd[42908]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:48:45 np0005625203.localdomain sudo[42922]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yuvkidmskqrqendagfuyuqbtaqnlnfyt ; /usr/bin/python3
Feb 20 07:48:45 np0005625203.localdomain sudo[42922]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:46 np0005625203.localdomain python3[42924]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-jumps.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:48:46 np0005625203.localdomain sudo[42922]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:46 np0005625203.localdomain sudo[42965]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dprembqbbstzicslwvbhtrdlqsdpdbcd ; /usr/bin/python3
Feb 20 07:48:46 np0005625203.localdomain sudo[42965]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:46 np0005625203.localdomain python3[42967]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-jumps.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573725.7737257-74935-49507823086676/source mode=None follow=False _original_basename=jump-chain.j2 checksum=eec306c3276262a27663d76bd0ea526457445afa backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:48:46 np0005625203.localdomain sudo[42965]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:46 np0005625203.localdomain sudo[43028]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ahnotqvrnuqqyakeuimnfjysnnqyyvsa ; /usr/bin/python3
Feb 20 07:48:46 np0005625203.localdomain sudo[43028]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:47 np0005625203.localdomain python3[43030]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-update-jumps.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:48:47 np0005625203.localdomain sudo[43028]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:47 np0005625203.localdomain sudo[43071]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gkdxmegpqnmhyxswwzcjwlqxhyoiwuht ; /usr/bin/python3
Feb 20 07:48:47 np0005625203.localdomain sudo[43071]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:47 np0005625203.localdomain python3[43073]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-update-jumps.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573726.633085-75136-79201002117970/source mode=None follow=False _original_basename=jump-chain.j2 checksum=eec306c3276262a27663d76bd0ea526457445afa backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:48:47 np0005625203.localdomain sudo[43071]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:47 np0005625203.localdomain sudo[43133]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-agqfglyewfyyofuqkmrdnezhqvpxpigl ; /usr/bin/python3
Feb 20 07:48:47 np0005625203.localdomain sudo[43133]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:47 np0005625203.localdomain python3[43135]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-flushes.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:48:47 np0005625203.localdomain sudo[43133]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:48 np0005625203.localdomain sudo[43176]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xfllyhlqxiyumaeysjohpiydtwjptwfe ; /usr/bin/python3
Feb 20 07:48:48 np0005625203.localdomain sudo[43176]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:48 np0005625203.localdomain python3[43178]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-flushes.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573727.6305547-75204-172999367271912/source mode=None follow=False _original_basename=flush-chain.j2 checksum=e8e7b8db0d61a7fe393441cc91613f470eb34a6e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:48:48 np0005625203.localdomain sudo[43176]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:48 np0005625203.localdomain sudo[43238]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lpwgxfhmhroyzthftsxzesgrpedhhdmt ; /usr/bin/python3
Feb 20 07:48:48 np0005625203.localdomain sudo[43238]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:48 np0005625203.localdomain python3[43240]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-chains.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:48:48 np0005625203.localdomain sudo[43238]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:49 np0005625203.localdomain sudo[43281]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cwmnyxkdohxqrhlwcnkkyezbcasyuvvx ; /usr/bin/python3
Feb 20 07:48:49 np0005625203.localdomain sudo[43281]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:49 np0005625203.localdomain python3[43283]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-chains.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573728.5335207-75260-78117989698998/source mode=None follow=False _original_basename=chains.j2 checksum=e60ee651f5014e83924f4e901ecc8e25b1906610 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:48:49 np0005625203.localdomain sudo[43281]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:49 np0005625203.localdomain sshd[42908]: Invalid user toto from 185.246.128.171 port 23413
Feb 20 07:48:49 np0005625203.localdomain sshd[42908]: Disconnecting invalid user toto 185.246.128.171 port 23413: Change of username or service not allowed: (toto,ssh-connection) -> (sipv,ssh-connection) [preauth]
Feb 20 07:48:50 np0005625203.localdomain sudo[43343]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ydxiojldagrfbpdwkujjmuasaudetnhs ; /usr/bin/python3
Feb 20 07:48:50 np0005625203.localdomain sudo[43343]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:50 np0005625203.localdomain python3[43345]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/tripleo-rules.nft follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:48:50 np0005625203.localdomain sudo[43343]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:50 np0005625203.localdomain sudo[43386]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yeiehbhzdgugfjvquvgyvgfelpfzipmx ; /usr/bin/python3
Feb 20 07:48:50 np0005625203.localdomain sudo[43386]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:50 np0005625203.localdomain python3[43388]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/tripleo-rules.nft src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573729.4403408-75300-267754242727697/source mode=None follow=False _original_basename=ruleset.j2 checksum=0444e4206083f91e2fb2aabfa2928244c2db35ed backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:48:50 np0005625203.localdomain sudo[43386]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:50 np0005625203.localdomain sudo[43416]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sgwegccymctdkepqxxtgasqxhjoaxwut ; /usr/bin/python3
Feb 20 07:48:50 np0005625203.localdomain sudo[43416]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:51 np0005625203.localdomain python3[43418]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/nftables/tripleo-chains.nft /etc/nftables/tripleo-flushes.nft /etc/nftables/tripleo-rules.nft /etc/nftables/tripleo-update-jumps.nft /etc/nftables/tripleo-jumps.nft | nft -c -f - _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:48:51 np0005625203.localdomain sudo[43416]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:51 np0005625203.localdomain sudo[43481]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-onmueygkmqrivvidsiinovosdvgmbjnq ; /usr/bin/python3
Feb 20 07:48:51 np0005625203.localdomain sudo[43481]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:51 np0005625203.localdomain python3[43483]: ansible-ansible.builtin.blockinfile Invoked with path=/etc/sysconfig/nftables.conf backup=False validate=nft -c -f %s block=include "/etc/nftables/iptables.nft"
                                                         include "/etc/nftables/tripleo-chains.nft"
                                                         include "/etc/nftables/tripleo-rules.nft"
                                                         include "/etc/nftables/tripleo-jumps.nft"
                                                          state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:48:51 np0005625203.localdomain sudo[43481]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:51 np0005625203.localdomain sudo[43498]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qlwjtmaubnahgzomyzjgozsuqlcqnkic ; /usr/bin/python3
Feb 20 07:48:51 np0005625203.localdomain sudo[43498]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:52 np0005625203.localdomain python3[43500]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/tripleo-chains.nft _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:48:52 np0005625203.localdomain sudo[43498]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:52 np0005625203.localdomain sudo[43515]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kqdhjttvqxmzthpsnujqlducpsfpycjp ; /usr/bin/python3
Feb 20 07:48:52 np0005625203.localdomain sudo[43515]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:52 np0005625203.localdomain python3[43517]: ansible-ansible.legacy.command Invoked with _raw_params=cat /etc/nftables/tripleo-flushes.nft /etc/nftables/tripleo-rules.nft /etc/nftables/tripleo-update-jumps.nft | nft -f - _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:48:52 np0005625203.localdomain sudo[43515]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:52 np0005625203.localdomain sshd[43521]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:48:52 np0005625203.localdomain sudo[43535]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gxfhvzyusuvtjupuemsklzajyjnqqxuh ; /usr/bin/python3
Feb 20 07:48:52 np0005625203.localdomain sudo[43535]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:52 np0005625203.localdomain python3[43537]: ansible-file Invoked with mode=0750 path=/var/log/containers/collectd setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 07:48:52 np0005625203.localdomain sudo[43535]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:53 np0005625203.localdomain sudo[43551]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-torspidhjhhotltagrojsedgusidinqw ; /usr/bin/python3
Feb 20 07:48:53 np0005625203.localdomain sudo[43551]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:53 np0005625203.localdomain python3[43553]: ansible-file Invoked with mode=0755 path=/var/lib/container-user-scripts/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 07:48:53 np0005625203.localdomain sudo[43551]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:53 np0005625203.localdomain sudo[43568]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kuonkvewqnnwikzspcfbsmtzrjmpkhig ; /usr/bin/python3
Feb 20 07:48:53 np0005625203.localdomain sudo[43568]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:53 np0005625203.localdomain python3[43570]: ansible-file Invoked with mode=0750 path=/var/log/containers/ceilometer setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 07:48:53 np0005625203.localdomain sudo[43568]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:53 np0005625203.localdomain sudo[43584]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qstddxadvzravwcqbcrxfuvmhwymugre ; /usr/bin/python3
Feb 20 07:48:53 np0005625203.localdomain sudo[43584]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:53 np0005625203.localdomain python3[43586]: ansible-seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Feb 20 07:48:54 np0005625203.localdomain sudo[43584]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:54 np0005625203.localdomain sudo[43604]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fsrgqvkgifrtopllsqlfbtomqvrsvozn ; /usr/bin/python3
Feb 20 07:48:54 np0005625203.localdomain dbus-broker-launch[754]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Feb 20 07:48:54 np0005625203.localdomain sudo[43604]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:55 np0005625203.localdomain python3[43606]: ansible-community.general.sefcontext Invoked with setype=container_file_t state=present target=/etc/iscsi(/.*)? ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None
Feb 20 07:48:55 np0005625203.localdomain sshd[43521]: Invalid user sipv from 185.246.128.171 port 57377
Feb 20 07:48:55 np0005625203.localdomain sshd[43521]: Disconnecting invalid user sipv 185.246.128.171 port 57377: Change of username or service not allowed: (sipv,ssh-connection) -> (guest,ssh-connection) [preauth]
Feb 20 07:48:55 np0005625203.localdomain kernel: SELinux:  Converting 2704 SID table entries...
Feb 20 07:48:55 np0005625203.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Feb 20 07:48:55 np0005625203.localdomain kernel: SELinux:  policy capability open_perms=1
Feb 20 07:48:55 np0005625203.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Feb 20 07:48:55 np0005625203.localdomain kernel: SELinux:  policy capability always_check_network=0
Feb 20 07:48:55 np0005625203.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 20 07:48:55 np0005625203.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 20 07:48:55 np0005625203.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 20 07:48:56 np0005625203.localdomain sudo[43604]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:56 np0005625203.localdomain sudo[43625]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mckszhqsnvmnorzmeuvxpizfempkduqg ; /usr/bin/python3
Feb 20 07:48:56 np0005625203.localdomain dbus-broker-launch[754]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Feb 20 07:48:56 np0005625203.localdomain sudo[43625]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:56 np0005625203.localdomain python3[43627]: ansible-community.general.sefcontext Invoked with setype=container_file_t state=present target=/etc/target(/.*)? ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None
Feb 20 07:48:56 np0005625203.localdomain sshd[43629]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:48:57 np0005625203.localdomain kernel: SELinux:  Converting 2704 SID table entries...
Feb 20 07:48:57 np0005625203.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Feb 20 07:48:57 np0005625203.localdomain kernel: SELinux:  policy capability open_perms=1
Feb 20 07:48:57 np0005625203.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Feb 20 07:48:57 np0005625203.localdomain kernel: SELinux:  policy capability always_check_network=0
Feb 20 07:48:57 np0005625203.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 20 07:48:57 np0005625203.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 20 07:48:57 np0005625203.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 20 07:48:57 np0005625203.localdomain sudo[43625]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:57 np0005625203.localdomain sudo[43648]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pqhwxjvdfdejnaiklyrjgclkerkscufz ; /usr/bin/python3
Feb 20 07:48:57 np0005625203.localdomain dbus-broker-launch[754]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Feb 20 07:48:57 np0005625203.localdomain sudo[43648]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:57 np0005625203.localdomain python3[43651]: ansible-community.general.sefcontext Invoked with setype=container_file_t state=present target=/var/lib/iscsi(/.*)? ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None
Feb 20 07:48:58 np0005625203.localdomain kernel: SELinux:  Converting 2704 SID table entries...
Feb 20 07:48:58 np0005625203.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Feb 20 07:48:58 np0005625203.localdomain kernel: SELinux:  policy capability open_perms=1
Feb 20 07:48:58 np0005625203.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Feb 20 07:48:58 np0005625203.localdomain kernel: SELinux:  policy capability always_check_network=0
Feb 20 07:48:58 np0005625203.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 20 07:48:58 np0005625203.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 20 07:48:58 np0005625203.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 20 07:48:58 np0005625203.localdomain sudo[43648]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:58 np0005625203.localdomain sudo[43670]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mxhksbmfybbhqzleehawxlcpwgkzzaqi ; /usr/bin/python3
Feb 20 07:48:58 np0005625203.localdomain dbus-broker-launch[754]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Feb 20 07:48:58 np0005625203.localdomain sudo[43670]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:59 np0005625203.localdomain python3[43672]: ansible-file Invoked with path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 07:48:59 np0005625203.localdomain sudo[43670]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:59 np0005625203.localdomain sudo[43686]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pemebrpbrbzglhrziqhrhkqbomnjposa ; /usr/bin/python3
Feb 20 07:48:59 np0005625203.localdomain sudo[43686]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:59 np0005625203.localdomain python3[43688]: ansible-file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 07:48:59 np0005625203.localdomain sudo[43686]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:59 np0005625203.localdomain sudo[43702]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-izswqwnvlhshcvuxkejvbrgpyqiufhzi ; /usr/bin/python3
Feb 20 07:48:59 np0005625203.localdomain sudo[43702]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:48:59 np0005625203.localdomain python3[43704]: ansible-file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 07:48:59 np0005625203.localdomain sudo[43702]: pam_unix(sudo:session): session closed for user root
Feb 20 07:48:59 np0005625203.localdomain sudo[43718]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mssnsxcuvygaaxcvbeemqdcnnfsdfvak ; /usr/bin/python3
Feb 20 07:48:59 np0005625203.localdomain sudo[43718]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:00 np0005625203.localdomain python3[43720]: ansible-stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 20 07:49:00 np0005625203.localdomain sudo[43718]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:00 np0005625203.localdomain sudo[43734]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fvlukkmoxidgnwfypztaxszertyzpsua ; /usr/bin/python3
Feb 20 07:49:00 np0005625203.localdomain sudo[43734]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:00 np0005625203.localdomain python3[43736]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-enabled --quiet iscsi.service _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:49:00 np0005625203.localdomain sudo[43734]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:00 np0005625203.localdomain sshd[43629]: Invalid user guest from 185.246.128.171 port 14511
Feb 20 07:49:00 np0005625203.localdomain sudo[43751]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-knprshenpzrxtgplnxsiziiunngvwtdm ; /usr/bin/python3
Feb 20 07:49:00 np0005625203.localdomain sudo[43751]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:01 np0005625203.localdomain python3[43753]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb 20 07:49:03 np0005625203.localdomain sshd[43629]: error: maximum authentication attempts exceeded for invalid user guest from 185.246.128.171 port 14511 ssh2 [preauth]
Feb 20 07:49:03 np0005625203.localdomain sshd[43629]: Disconnecting invalid user guest 185.246.128.171 port 14511: Too many authentication failures [preauth]
Feb 20 07:49:03 np0005625203.localdomain sshd[43755]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:49:04 np0005625203.localdomain sudo[43751]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:04 np0005625203.localdomain sudo[43770]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-epbprbrtmziqdilswkgtfgruwzkevgkc ; /usr/bin/python3
Feb 20 07:49:04 np0005625203.localdomain sudo[43770]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:04 np0005625203.localdomain python3[43772]: ansible-file Invoked with path=/etc/modules-load.d state=directory mode=493 owner=root group=root setype=etc_t recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 07:49:04 np0005625203.localdomain sudo[43770]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:05 np0005625203.localdomain sudo[43818]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nwrfvycajihmjpimtydanommjwhmhhht ; /usr/bin/python3
Feb 20 07:49:05 np0005625203.localdomain sudo[43818]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:05 np0005625203.localdomain python3[43820]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-tripleo.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:49:05 np0005625203.localdomain sudo[43818]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:05 np0005625203.localdomain sudo[43861]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ejchuihjhyqkftmjupemvvdwpbblguir ; /usr/bin/python3
Feb 20 07:49:05 np0005625203.localdomain sudo[43861]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:05 np0005625203.localdomain python3[43863]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573744.8976386-76111-10244922018915/source dest=/etc/modules-load.d/99-tripleo.conf mode=420 owner=root group=root setype=etc_t follow=False _original_basename=tripleo-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 20 07:49:05 np0005625203.localdomain sudo[43861]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:05 np0005625203.localdomain sudo[43891]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gdtyjtcoyrnbgeviwfoavrvivzbmitym ; /usr/bin/python3
Feb 20 07:49:05 np0005625203.localdomain sudo[43891]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:05 np0005625203.localdomain sshd[43755]: Invalid user guest from 185.246.128.171 port 49089
Feb 20 07:49:06 np0005625203.localdomain python3[43893]: ansible-systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 20 07:49:06 np0005625203.localdomain systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 20 07:49:06 np0005625203.localdomain systemd[1]: Stopped Load Kernel Modules.
Feb 20 07:49:06 np0005625203.localdomain systemd[1]: Stopping Load Kernel Modules...
Feb 20 07:49:06 np0005625203.localdomain systemd[1]: Starting Load Kernel Modules...
Feb 20 07:49:06 np0005625203.localdomain kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 20 07:49:06 np0005625203.localdomain systemd-modules-load[43896]: Inserted module 'br_netfilter'
Feb 20 07:49:06 np0005625203.localdomain kernel: Bridge firewalling registered
Feb 20 07:49:06 np0005625203.localdomain systemd-modules-load[43896]: Module 'msr' is built in
Feb 20 07:49:06 np0005625203.localdomain systemd[1]: Finished Load Kernel Modules.
Feb 20 07:49:06 np0005625203.localdomain sudo[43891]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:06 np0005625203.localdomain sudo[43945]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nyavxnpzgndstkckczeytfymqtyfpwvj ; /usr/bin/python3
Feb 20 07:49:06 np0005625203.localdomain sudo[43945]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:06 np0005625203.localdomain python3[43947]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-tripleo.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:49:06 np0005625203.localdomain sudo[43945]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:06 np0005625203.localdomain sudo[43988]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mebmuenytozfbxjakgccdmzzhgiemdnv ; /usr/bin/python3
Feb 20 07:49:06 np0005625203.localdomain sudo[43988]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:07 np0005625203.localdomain python3[43990]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573746.3960798-76164-250988417162494/source dest=/etc/sysctl.d/99-tripleo.conf mode=420 owner=root group=root setype=etc_t follow=False _original_basename=tripleo-sysctl.conf.j2 checksum=cddb9401fdafaaf28a4a94b98448f98ae93c94c9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 20 07:49:07 np0005625203.localdomain sudo[43988]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:07 np0005625203.localdomain sudo[44018]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-njzanshainftreldtskibrxlytoslgiq ; /usr/bin/python3
Feb 20 07:49:07 np0005625203.localdomain sudo[44018]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:07 np0005625203.localdomain python3[44020]: ansible-sysctl Invoked with name=fs.aio-max-nr value=1048576 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Feb 20 07:49:07 np0005625203.localdomain sudo[44018]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:07 np0005625203.localdomain sudo[44035]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kicbputarjdogvrwkeemtrlwhkcnjjsw ; /usr/bin/python3
Feb 20 07:49:07 np0005625203.localdomain sudo[44035]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:07 np0005625203.localdomain python3[44037]: ansible-sysctl Invoked with name=fs.inotify.max_user_instances value=1024 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Feb 20 07:49:07 np0005625203.localdomain sudo[44035]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:07 np0005625203.localdomain sudo[44053]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dvnifmyragtfnnclvrstijqceuenlcwe ; /usr/bin/python3
Feb 20 07:49:07 np0005625203.localdomain sudo[44053]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:08 np0005625203.localdomain python3[44055]: ansible-sysctl Invoked with name=kernel.pid_max value=1048576 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Feb 20 07:49:08 np0005625203.localdomain sudo[44053]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:08 np0005625203.localdomain sudo[44071]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-teqeffujhitdfadwcujpfugqcwbqpelc ; /usr/bin/python3
Feb 20 07:49:08 np0005625203.localdomain sudo[44071]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:08 np0005625203.localdomain python3[44073]: ansible-sysctl Invoked with name=net.bridge.bridge-nf-call-arptables value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Feb 20 07:49:08 np0005625203.localdomain sudo[44071]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:08 np0005625203.localdomain sudo[44088]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uywcbmqtijmdmawcijvkxaojpqgeexfw ; /usr/bin/python3
Feb 20 07:49:08 np0005625203.localdomain sudo[44088]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:08 np0005625203.localdomain python3[44090]: ansible-sysctl Invoked with name=net.bridge.bridge-nf-call-ip6tables value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Feb 20 07:49:08 np0005625203.localdomain sudo[44088]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:08 np0005625203.localdomain sudo[44105]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bdcnrwwaylvkltonohgulzcqxffiemgw ; /usr/bin/python3
Feb 20 07:49:08 np0005625203.localdomain sudo[44105]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:08 np0005625203.localdomain python3[44107]: ansible-sysctl Invoked with name=net.bridge.bridge-nf-call-iptables value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Feb 20 07:49:09 np0005625203.localdomain sudo[44105]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:09 np0005625203.localdomain sudo[44122]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wnumayfnuzxfbgvbdibrmnaduogbmnrd ; /usr/bin/python3
Feb 20 07:49:09 np0005625203.localdomain sudo[44122]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:09 np0005625203.localdomain sshd[43755]: Disconnecting invalid user guest 185.246.128.171 port 49089: Change of username or service not allowed: (guest,ssh-connection) -> (adriana,ssh-connection) [preauth]
Feb 20 07:49:09 np0005625203.localdomain python3[44124]: ansible-sysctl Invoked with name=net.ipv4.conf.all.rp_filter value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Feb 20 07:49:09 np0005625203.localdomain sudo[44122]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:09 np0005625203.localdomain sudo[44140]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-srkkikjvmsmkjenxsnduiyvzhxmdybch ; /usr/bin/python3
Feb 20 07:49:09 np0005625203.localdomain sudo[44140]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:09 np0005625203.localdomain python3[44142]: ansible-sysctl Invoked with name=net.ipv4.ip_forward value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Feb 20 07:49:09 np0005625203.localdomain sudo[44140]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:09 np0005625203.localdomain sudo[44158]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kucqhafempcszgaphgawuzcvlddkvzzk ; /usr/bin/python3
Feb 20 07:49:09 np0005625203.localdomain sudo[44158]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:09 np0005625203.localdomain sshd[44161]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:49:09 np0005625203.localdomain python3[44160]: ansible-sysctl Invoked with name=net.ipv4.ip_local_reserved_ports value=35357,49000-49001 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Feb 20 07:49:09 np0005625203.localdomain sudo[44158]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:09 np0005625203.localdomain sshd[44161]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 07:49:10 np0005625203.localdomain sudo[44178]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mhfsrzyckvpbggtfqhcjfqfxuwvnxtgk ; /usr/bin/python3
Feb 20 07:49:10 np0005625203.localdomain sudo[44178]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:10 np0005625203.localdomain python3[44180]: ansible-sysctl Invoked with name=net.ipv4.ip_nonlocal_bind value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Feb 20 07:49:11 np0005625203.localdomain sudo[44178]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:11 np0005625203.localdomain sudo[44196]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ugqyorhimkorrlgtldietgttwxwpqayy ; /usr/bin/python3
Feb 20 07:49:11 np0005625203.localdomain sudo[44196]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:11 np0005625203.localdomain python3[44198]: ansible-sysctl Invoked with name=net.ipv4.neigh.default.gc_thresh1 value=1024 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Feb 20 07:49:11 np0005625203.localdomain sudo[44196]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:11 np0005625203.localdomain sudo[44214]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lphtjsmlsixuumdnjrmhirbbuhfzoyxd ; /usr/bin/python3
Feb 20 07:49:11 np0005625203.localdomain sudo[44214]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:11 np0005625203.localdomain python3[44216]: ansible-sysctl Invoked with name=net.ipv4.neigh.default.gc_thresh2 value=2048 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Feb 20 07:49:11 np0005625203.localdomain sudo[44214]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:12 np0005625203.localdomain sudo[44232]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nmijmesvgqhduwaibrzcbyfiaooxzedn ; /usr/bin/python3
Feb 20 07:49:12 np0005625203.localdomain sudo[44232]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:12 np0005625203.localdomain python3[44234]: ansible-sysctl Invoked with name=net.ipv4.neigh.default.gc_thresh3 value=4096 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Feb 20 07:49:12 np0005625203.localdomain sudo[44232]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:12 np0005625203.localdomain sudo[44250]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kitxiifgntpallorkgcxtcqlnsxyzcdn ; /usr/bin/python3
Feb 20 07:49:12 np0005625203.localdomain sudo[44250]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:12 np0005625203.localdomain python3[44252]: ansible-sysctl Invoked with name=net.ipv6.conf.all.disable_ipv6 value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Feb 20 07:49:12 np0005625203.localdomain sudo[44250]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:12 np0005625203.localdomain sudo[44267]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zocxipcutnydsevxclptweslxudxaezc ; /usr/bin/python3
Feb 20 07:49:12 np0005625203.localdomain sudo[44267]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:12 np0005625203.localdomain sshd[44270]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:49:12 np0005625203.localdomain python3[44269]: ansible-sysctl Invoked with name=net.ipv6.conf.all.forwarding value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Feb 20 07:49:12 np0005625203.localdomain sudo[44267]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:12 np0005625203.localdomain sudo[44285]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bslgaxxjffqqhbiujxpqpwhrjtsrefec ; /usr/bin/python3
Feb 20 07:49:12 np0005625203.localdomain sudo[44285]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:13 np0005625203.localdomain python3[44287]: ansible-sysctl Invoked with name=net.ipv6.conf.default.disable_ipv6 value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Feb 20 07:49:13 np0005625203.localdomain sudo[44285]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:13 np0005625203.localdomain sudo[44303]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uxiwssqvzitodyknhyinyprlrtscnkvp ; /usr/bin/python3
Feb 20 07:49:13 np0005625203.localdomain sudo[44303]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:13 np0005625203.localdomain python3[44305]: ansible-sysctl Invoked with name=net.ipv6.conf.lo.disable_ipv6 value=0 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Feb 20 07:49:13 np0005625203.localdomain sudo[44303]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:13 np0005625203.localdomain sudo[44320]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-khufknotzneeuxdugwsfwukbqsrfzkpv ; /usr/bin/python3
Feb 20 07:49:13 np0005625203.localdomain sudo[44320]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:13 np0005625203.localdomain python3[44322]: ansible-sysctl Invoked with name=net.ipv6.ip_nonlocal_bind value=1 sysctl_set=True state=present sysctl_file=/etc/sysctl.d/99-tripleo.conf reload=False ignoreerrors=False
Feb 20 07:49:13 np0005625203.localdomain sudo[44320]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:13 np0005625203.localdomain sshd[44325]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:49:13 np0005625203.localdomain sudo[44340]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rzhvkkxmydvivkxyavuhhtemshnfviyu ; /usr/bin/python3
Feb 20 07:49:13 np0005625203.localdomain sudo[44340]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:14 np0005625203.localdomain python3[44342]: ansible-systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 20 07:49:14 np0005625203.localdomain systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 20 07:49:14 np0005625203.localdomain systemd[1]: Stopped Apply Kernel Variables.
Feb 20 07:49:14 np0005625203.localdomain systemd[1]: Stopping Apply Kernel Variables...
Feb 20 07:49:14 np0005625203.localdomain systemd[1]: Starting Apply Kernel Variables...
Feb 20 07:49:14 np0005625203.localdomain systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 20 07:49:14 np0005625203.localdomain systemd[1]: Finished Apply Kernel Variables.
Feb 20 07:49:14 np0005625203.localdomain sudo[44340]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:14 np0005625203.localdomain sudo[44360]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kyyxdcenbjwwvlbulcljpjkqemzvdqpf ; /usr/bin/python3
Feb 20 07:49:14 np0005625203.localdomain sudo[44360]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:14 np0005625203.localdomain python3[44362]: ansible-file Invoked with mode=0750 path=/var/log/containers/metrics_qdr setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 07:49:14 np0005625203.localdomain sudo[44360]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:14 np0005625203.localdomain sudo[44376]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nfcwvxmiruhfkuuizsykpkgeoxuqoutb ; /usr/bin/python3
Feb 20 07:49:14 np0005625203.localdomain sudo[44376]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:15 np0005625203.localdomain python3[44378]: ansible-file Invoked with path=/var/lib/metrics_qdr setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 07:49:15 np0005625203.localdomain sudo[44376]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:15 np0005625203.localdomain sshd[44325]: Received disconnect from 123.204.132.127 port 47474:11: Bye Bye [preauth]
Feb 20 07:49:15 np0005625203.localdomain sshd[44325]: Disconnected from authenticating user root 123.204.132.127 port 47474 [preauth]
Feb 20 07:49:15 np0005625203.localdomain sudo[44392]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zhlqtecsctzndzawyilcjkhzecllanhx ; /usr/bin/python3
Feb 20 07:49:15 np0005625203.localdomain sudo[44392]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:15 np0005625203.localdomain python3[44394]: ansible-file Invoked with mode=0750 path=/var/log/containers/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 07:49:15 np0005625203.localdomain sudo[44392]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:15 np0005625203.localdomain sudo[44408]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vjxdvnjxtwagicewabwnysmzcrbkvdgk ; /usr/bin/python3
Feb 20 07:49:15 np0005625203.localdomain sudo[44408]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:15 np0005625203.localdomain python3[44410]: ansible-stat Invoked with path=/var/lib/nova/instances follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 20 07:49:15 np0005625203.localdomain sudo[44408]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:15 np0005625203.localdomain sudo[44424]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qezsrfxtenpnmcoqpxysofnehdxwltzl ; /usr/bin/python3
Feb 20 07:49:15 np0005625203.localdomain sudo[44424]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:16 np0005625203.localdomain python3[44426]: ansible-file Invoked with path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 07:49:16 np0005625203.localdomain sudo[44424]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:16 np0005625203.localdomain sshd[44270]: Invalid user adriana from 185.246.128.171 port 29803
Feb 20 07:49:16 np0005625203.localdomain sudo[44440]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ntlhorknxzkbdgkfwearzrdasazvembx ; /usr/bin/python3
Feb 20 07:49:16 np0005625203.localdomain sudo[44440]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:16 np0005625203.localdomain sshd[44270]: Disconnecting invalid user adriana 185.246.128.171 port 29803: Change of username or service not allowed: (adriana,ssh-connection) -> (manager,ssh-connection) [preauth]
Feb 20 07:49:16 np0005625203.localdomain python3[44442]: ansible-file Invoked with path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 07:49:16 np0005625203.localdomain sudo[44440]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:16 np0005625203.localdomain sudo[44456]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xdayppwreqmwcsrgiqbabvdimqlxjqqi ; /usr/bin/python3
Feb 20 07:49:16 np0005625203.localdomain sudo[44456]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:16 np0005625203.localdomain python3[44458]: ansible-file Invoked with path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 07:49:16 np0005625203.localdomain sudo[44456]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:16 np0005625203.localdomain sudo[44472]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vnxxoipykubrzzreilccdxcujdolishm ; /usr/bin/python3
Feb 20 07:49:16 np0005625203.localdomain sudo[44472]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:16 np0005625203.localdomain python3[44474]: ansible-file Invoked with path=/var/lib/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 07:49:16 np0005625203.localdomain sudo[44472]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:17 np0005625203.localdomain sudo[44488]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rjbkrrthlovsfolvhgfgrfkspccciixa ; /usr/bin/python3
Feb 20 07:49:17 np0005625203.localdomain sudo[44488]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:17 np0005625203.localdomain python3[44490]: ansible-file Invoked with path=/etc/tmpfiles.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:49:17 np0005625203.localdomain sudo[44488]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:17 np0005625203.localdomain sudo[44536]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yligqjjsamdkfyqpurrpiagbcsschdpc ; /usr/bin/python3
Feb 20 07:49:17 np0005625203.localdomain sudo[44536]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:17 np0005625203.localdomain python3[44538]: ansible-ansible.legacy.stat Invoked with path=/etc/tmpfiles.d/run-nova.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:49:17 np0005625203.localdomain sudo[44536]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:17 np0005625203.localdomain sudo[44579]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ssbrabjnfcpuymxolpqecjxdoxqohxql ; /usr/bin/python3
Feb 20 07:49:17 np0005625203.localdomain sudo[44579]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:18 np0005625203.localdomain python3[44581]: ansible-ansible.legacy.copy Invoked with dest=/etc/tmpfiles.d/run-nova.conf src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573757.333837-76645-221634037574609/source _original_basename=tmp9j6rrw1a follow=False checksum=f834349098718ec09c7562bcb470b717a83ff411 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:49:18 np0005625203.localdomain sudo[44579]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:18 np0005625203.localdomain sshd[44596]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:49:18 np0005625203.localdomain sshd[44597]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:49:18 np0005625203.localdomain sudo[44611]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dzhufsikskbkryulyavtfwymyhcuxthv ; /usr/bin/python3
Feb 20 07:49:18 np0005625203.localdomain sudo[44611]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:18 np0005625203.localdomain python3[44613]: ansible-ansible.legacy.command Invoked with _raw_params=systemd-tmpfiles --create _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:49:18 np0005625203.localdomain sudo[44611]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:18 np0005625203.localdomain sshd[44597]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 07:49:20 np0005625203.localdomain sudo[44630]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fisbvzkikuietxmlgalprzlsyiujfyjm ; /usr/bin/python3
Feb 20 07:49:20 np0005625203.localdomain sudo[44630]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:20 np0005625203.localdomain python3[44632]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:49:20 np0005625203.localdomain sudo[44630]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:20 np0005625203.localdomain sshd[44596]: Invalid user manager from 185.246.128.171 port 57458
Feb 20 07:49:20 np0005625203.localdomain sudo[44678]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cfefewfrkmogmjbufspxqdpimrxqoqdy ; /usr/bin/python3
Feb 20 07:49:20 np0005625203.localdomain sudo[44678]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:20 np0005625203.localdomain python3[44680]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/delay-nova-compute follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:49:20 np0005625203.localdomain sudo[44678]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:21 np0005625203.localdomain sudo[44721]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hdsniwbfibvqqlbzvpjusmhbrezbworn ; /usr/bin/python3
Feb 20 07:49:21 np0005625203.localdomain sudo[44721]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:21 np0005625203.localdomain python3[44723]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/nova/delay-nova-compute mode=493 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573760.5760894-76928-64100520559147/source _original_basename=tmpwzp8epti follow=False checksum=f07ad3e8cf3766b3b3b07ae8278826a0ef3bb5e3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:49:21 np0005625203.localdomain sudo[44721]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:21 np0005625203.localdomain sshd[44596]: Disconnecting invalid user manager 185.246.128.171 port 57458: Change of username or service not allowed: (manager,ssh-connection) -> (andy,ssh-connection) [preauth]
Feb 20 07:49:21 np0005625203.localdomain sudo[44751]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zrwahvqmqsqxtvqkewfpqdhnidvxhhaz ; /usr/bin/python3
Feb 20 07:49:21 np0005625203.localdomain sudo[44751]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:21 np0005625203.localdomain python3[44753]: ansible-file Invoked with mode=0750 path=/var/log/containers/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 07:49:21 np0005625203.localdomain sudo[44751]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:21 np0005625203.localdomain sudo[44767]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bkxjiqwoiburprtphedofebvgnbgkbvv ; /usr/bin/python3
Feb 20 07:49:21 np0005625203.localdomain sudo[44767]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:22 np0005625203.localdomain sudo[44770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 07:49:22 np0005625203.localdomain sudo[44770]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:49:22 np0005625203.localdomain sudo[44770]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:22 np0005625203.localdomain python3[44769]: ansible-file Invoked with path=/etc/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 07:49:22 np0005625203.localdomain sudo[44767]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:22 np0005625203.localdomain sudo[44785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 check-host
Feb 20 07:49:22 np0005625203.localdomain sudo[44785]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:49:22 np0005625203.localdomain sudo[44813]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qavcxgwxtgjhjxfvwyiwmlxvewqcwegt ; /usr/bin/python3
Feb 20 07:49:22 np0005625203.localdomain sudo[44813]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:22 np0005625203.localdomain python3[44815]: ansible-file Invoked with path=/etc/libvirt/secrets setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 07:49:22 np0005625203.localdomain sudo[44813]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:22 np0005625203.localdomain sudo[44843]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hnvcrvfdlwxntionrearypowspsuwhcr ; /usr/bin/python3
Feb 20 07:49:22 np0005625203.localdomain sudo[44843]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:22 np0005625203.localdomain sudo[44785]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:22 np0005625203.localdomain python3[44850]: ansible-file Invoked with path=/etc/libvirt/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 07:49:22 np0005625203.localdomain sudo[44843]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:22 np0005625203.localdomain sudo[44865]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-olrkjyobybpgtnouuccwjjurjvbqkjjr ; /usr/bin/python3
Feb 20 07:49:22 np0005625203.localdomain sudo[44865]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:23 np0005625203.localdomain sudo[44868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 07:49:23 np0005625203.localdomain sudo[44868]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:49:23 np0005625203.localdomain sudo[44868]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:23 np0005625203.localdomain sudo[44883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 07:49:23 np0005625203.localdomain sudo[44883]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:49:23 np0005625203.localdomain sshd[44898]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:49:23 np0005625203.localdomain python3[44867]: ansible-file Invoked with path=/var/lib/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 07:49:23 np0005625203.localdomain sudo[44865]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:23 np0005625203.localdomain sudo[44912]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jddiituvdddhzxzycfrdveyppivuxhse ; /usr/bin/python3
Feb 20 07:49:23 np0005625203.localdomain sudo[44912]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:23 np0005625203.localdomain python3[44914]: ansible-file Invoked with path=/var/cache/libvirt state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:49:23 np0005625203.localdomain sudo[44912]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:23 np0005625203.localdomain sudo[44946]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vktgerifoszpjtnhwhjuvkfzwfyrrbep ; /usr/bin/python3
Feb 20 07:49:23 np0005625203.localdomain sudo[44946]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:23 np0005625203.localdomain sudo[44883]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:23 np0005625203.localdomain python3[44948]: ansible-file Invoked with path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 07:49:23 np0005625203.localdomain sudo[44946]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:23 np0005625203.localdomain sudo[44977]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ygatyimqplguynvscfakxskcvuytwjod ; /usr/bin/python3
Feb 20 07:49:23 np0005625203.localdomain sudo[44977]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:24 np0005625203.localdomain python3[44979]: ansible-file Invoked with path=/run/libvirt state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:49:24 np0005625203.localdomain sudo[44977]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:24 np0005625203.localdomain sudo[44980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 07:49:24 np0005625203.localdomain sudo[44980]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:49:24 np0005625203.localdomain sudo[44980]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:24 np0005625203.localdomain sudo[45007]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qykeymvtgbskddpszvvohoaopxdzwufd ; /usr/bin/python3
Feb 20 07:49:24 np0005625203.localdomain sudo[45007]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:24 np0005625203.localdomain python3[45010]: ansible-file Invoked with mode=0770 path=/var/log/containers/libvirt/swtpm setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 07:49:24 np0005625203.localdomain sudo[45007]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:24 np0005625203.localdomain sudo[45024]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rgmvlesjkxyqzgbswcpvifwzloqnimwk ; /usr/bin/python3
Feb 20 07:49:24 np0005625203.localdomain sudo[45024]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:24 np0005625203.localdomain python3[45026]: ansible-group Invoked with gid=107 name=qemu state=present system=False local=False non_unique=False
Feb 20 07:49:24 np0005625203.localdomain groupadd[45027]: group added to /etc/group: name=qemu, GID=107
Feb 20 07:49:24 np0005625203.localdomain groupadd[45027]: group added to /etc/gshadow: name=qemu
Feb 20 07:49:24 np0005625203.localdomain groupadd[45027]: new group: name=qemu, GID=107
Feb 20 07:49:24 np0005625203.localdomain sudo[45024]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:24 np0005625203.localdomain sudo[45046]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mxpblusqsqfduaxijciipwnvwvagcgpy ; /usr/bin/python3
Feb 20 07:49:24 np0005625203.localdomain sudo[45046]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:25 np0005625203.localdomain python3[45048]: ansible-user Invoked with comment=qemu user group=qemu name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005625203.localdomain update_password=always groups=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Feb 20 07:49:25 np0005625203.localdomain useradd[45050]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=none
Feb 20 07:49:25 np0005625203.localdomain sudo[45046]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:25 np0005625203.localdomain sudo[45070]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vibcpfpofbeynmnpqcrkdytesbapqpxq ; /usr/bin/python3
Feb 20 07:49:25 np0005625203.localdomain sudo[45070]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:25 np0005625203.localdomain python3[45072]: ansible-file Invoked with group=qemu owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None serole=None selevel=None attributes=None
Feb 20 07:49:25 np0005625203.localdomain sudo[45070]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:25 np0005625203.localdomain sudo[45086]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nnoranktuyilxvnovtocteegarptlmko ; /usr/bin/python3
Feb 20 07:49:25 np0005625203.localdomain sudo[45086]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:25 np0005625203.localdomain python3[45088]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/rpm -q libvirt-daemon _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:49:25 np0005625203.localdomain sudo[45086]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:26 np0005625203.localdomain sshd[44898]: Invalid user andy from 185.246.128.171 port 18055
Feb 20 07:49:26 np0005625203.localdomain sudo[45135]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-otmrupvkygjwoobuydwubpjwupmulajf ; /usr/bin/python3
Feb 20 07:49:26 np0005625203.localdomain sudo[45135]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:26 np0005625203.localdomain python3[45137]: ansible-ansible.legacy.stat Invoked with path=/etc/tmpfiles.d/run-libvirt.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:49:26 np0005625203.localdomain sudo[45135]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:26 np0005625203.localdomain sudo[45178]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-btvkvktwxhfhdggbgxwailnholskmnft ; /usr/bin/python3
Feb 20 07:49:26 np0005625203.localdomain sudo[45178]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:26 np0005625203.localdomain python3[45180]: ansible-ansible.legacy.copy Invoked with dest=/etc/tmpfiles.d/run-libvirt.conf src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573766.1636875-77216-108097747111419/source _original_basename=tmp5lp8b_jp follow=False checksum=57f3ff94c666c6aae69ae22e23feb750cf9e8b13 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:49:26 np0005625203.localdomain sudo[45178]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:27 np0005625203.localdomain sudo[45208]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nvwpnfdvfwjzrptkwujdzzrsbcldhter ; /usr/bin/python3
Feb 20 07:49:27 np0005625203.localdomain sudo[45208]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:27 np0005625203.localdomain python3[45210]: ansible-seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Feb 20 07:49:27 np0005625203.localdomain sshd[44898]: Disconnecting invalid user andy 185.246.128.171 port 18055: Change of username or service not allowed: (andy,ssh-connection) -> (Admin,ssh-connection) [preauth]
Feb 20 07:49:27 np0005625203.localdomain sudo[45208]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:28 np0005625203.localdomain sudo[45228]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yybhhlaudkduropznchwoezblnopyvqa ; /usr/bin/python3
Feb 20 07:49:28 np0005625203.localdomain dbus-broker-launch[754]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Feb 20 07:49:28 np0005625203.localdomain sudo[45228]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:28 np0005625203.localdomain python3[45230]: ansible-file Invoked with path=/etc/crypto-policies/local.d/gnutls-qemu.config state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:49:28 np0005625203.localdomain sudo[45228]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:28 np0005625203.localdomain sudo[45244]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mgohnizssmgpumsbykeokzgnjmjmsfci ; /usr/bin/python3
Feb 20 07:49:28 np0005625203.localdomain sudo[45244]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:28 np0005625203.localdomain python3[45246]: ansible-file Invoked with path=/run/libvirt setype=virt_var_run_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 07:49:28 np0005625203.localdomain sudo[45244]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:28 np0005625203.localdomain sudo[45260]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gfvyrecqvrpijtheboqaisttwstqkvgh ; /usr/bin/python3
Feb 20 07:49:28 np0005625203.localdomain sudo[45260]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:29 np0005625203.localdomain python3[45262]: ansible-seboolean Invoked with name=logrotate_read_inside_containers persistent=True state=True ignore_selinux_state=False
Feb 20 07:49:29 np0005625203.localdomain sudo[45260]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:30 np0005625203.localdomain sshd[45267]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:49:30 np0005625203.localdomain sudo[45282]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ludsoepmsjeoapbahbxdtwlgesnymbww ; /usr/bin/python3
Feb 20 07:49:30 np0005625203.localdomain dbus-broker-launch[754]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Feb 20 07:49:30 np0005625203.localdomain sudo[45282]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:30 np0005625203.localdomain python3[45284]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb 20 07:49:31 np0005625203.localdomain sshd[45267]: Invalid user Admin from 185.246.128.171 port 53932
Feb 20 07:49:32 np0005625203.localdomain sshd[45267]: error: maximum authentication attempts exceeded for invalid user Admin from 185.246.128.171 port 53932 ssh2 [preauth]
Feb 20 07:49:32 np0005625203.localdomain sshd[45267]: Disconnecting invalid user Admin 185.246.128.171 port 53932: Too many authentication failures [preauth]
Feb 20 07:49:32 np0005625203.localdomain sshd[45286]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:49:33 np0005625203.localdomain sudo[45282]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:33 np0005625203.localdomain sudo[45301]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bdczmmgvmtnstqtikhwbgarhwcekhuxr ; /usr/bin/python3
Feb 20 07:49:33 np0005625203.localdomain sudo[45301]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:33 np0005625203.localdomain python3[45303]: ansible-setup Invoked with gather_subset=['!all', '!min', 'network'] filter=['ansible_interfaces'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 20 07:49:33 np0005625203.localdomain sudo[45301]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:34 np0005625203.localdomain sudo[45362]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qxvnprfsehwjhtxrrmwjulfpinlnfzbr ; /usr/bin/python3
Feb 20 07:49:34 np0005625203.localdomain sudo[45362]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:34 np0005625203.localdomain python3[45364]: ansible-file Invoked with path=/etc/containers/networks state=directory recurse=True mode=493 owner=root group=root force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:49:34 np0005625203.localdomain sudo[45362]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:34 np0005625203.localdomain sudo[45378]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-haiyoynbauyuesnuahbiytcejxmfmedq ; /usr/bin/python3
Feb 20 07:49:34 np0005625203.localdomain sudo[45378]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:34 np0005625203.localdomain sshd[45286]: Invalid user Admin from 185.246.128.171 port 5164
Feb 20 07:49:34 np0005625203.localdomain python3[45380]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman
                                                          _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:49:34 np0005625203.localdomain sudo[45378]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:35 np0005625203.localdomain sudo[45438]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-thwoeqphfdfpekpqqbtrrtjumcfkugij ; /usr/bin/python3
Feb 20 07:49:35 np0005625203.localdomain sudo[45438]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:35 np0005625203.localdomain python3[45440]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:49:35 np0005625203.localdomain sudo[45438]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:35 np0005625203.localdomain sudo[45481]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cbkcamaufotwysfepdoadhkhvrnxmdwy ; /usr/bin/python3
Feb 20 07:49:35 np0005625203.localdomain sudo[45481]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:35 np0005625203.localdomain python3[45483]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573775.0379512-77595-145658590869278/source dest=/etc/containers/networks/podman.json mode=0644 owner=root group=root follow=False _original_basename=podman_network_config.j2 checksum=a8b58ed46dde0eee1fc634882c3fdb516eb76a7d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:49:35 np0005625203.localdomain sudo[45481]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:36 np0005625203.localdomain sudo[45543]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yfwtmbxqoqhbkgchyprluufuptzyvybx ; /usr/bin/python3
Feb 20 07:49:36 np0005625203.localdomain sudo[45543]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:36 np0005625203.localdomain python3[45545]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:49:36 np0005625203.localdomain sudo[45543]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:36 np0005625203.localdomain sudo[45588]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bujuyyxoiuqpeixhgxaopfoeszzsozbf ; /usr/bin/python3
Feb 20 07:49:36 np0005625203.localdomain sudo[45588]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:36 np0005625203.localdomain python3[45590]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573775.9659848-77652-267597722500650/source dest=/etc/containers/registries.conf owner=root group=root setype=etc_t mode=0644 follow=False _original_basename=registries.conf.j2 checksum=710a00cfb11a4c3eba9c028ef1984a9fea9ba83a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 20 07:49:36 np0005625203.localdomain sudo[45588]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:36 np0005625203.localdomain sudo[45618]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uvujfvpvfcvhduruhdwulmbxpugfyimv ; /usr/bin/python3
Feb 20 07:49:36 np0005625203.localdomain sudo[45618]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:37 np0005625203.localdomain sshd[45286]: Disconnecting invalid user Admin 185.246.128.171 port 5164: Change of username or service not allowed: (Admin,ssh-connection) -> (abc,ssh-connection) [preauth]
Feb 20 07:49:37 np0005625203.localdomain python3[45620]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=containers option=pids_limit value=4096 backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None
Feb 20 07:49:37 np0005625203.localdomain sudo[45618]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:37 np0005625203.localdomain sudo[45634]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dastsldxvpfqnqnbxsatiaaoynzvvyfe ; /usr/bin/python3
Feb 20 07:49:37 np0005625203.localdomain sudo[45634]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:37 np0005625203.localdomain python3[45636]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=engine option=events_logger value="journald" backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None
Feb 20 07:49:37 np0005625203.localdomain sudo[45634]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:37 np0005625203.localdomain sudo[45650]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-stvbgvkcuaioqexqlcypuuwfwdgfmmjs ; /usr/bin/python3
Feb 20 07:49:37 np0005625203.localdomain sudo[45650]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:37 np0005625203.localdomain python3[45652]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=engine option=runtime value="crun" backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None
Feb 20 07:49:37 np0005625203.localdomain sudo[45650]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:37 np0005625203.localdomain sudo[45666]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xtpydotfxcueomajfnocmmfnmtwdhgtp ; /usr/bin/python3
Feb 20 07:49:37 np0005625203.localdomain sudo[45666]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:37 np0005625203.localdomain python3[45668]: ansible-ini_file Invoked with path=/etc/containers/containers.conf owner=root group=root setype=etc_t mode=0644 create=True section=network option=network_backend value="netavark" backup=False state=present exclusive=True no_extra_spaces=False allow_no_value=False unsafe_writes=False values=None seuser=None serole=None selevel=None attributes=None
Feb 20 07:49:38 np0005625203.localdomain sudo[45666]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:38 np0005625203.localdomain sudo[45714]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gyrmkicruefbdfntmnjvobwlwraipbny ; /usr/bin/python3
Feb 20 07:49:38 np0005625203.localdomain sudo[45714]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:38 np0005625203.localdomain python3[45716]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:49:38 np0005625203.localdomain sudo[45714]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:38 np0005625203.localdomain sudo[45757]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bsrktfjhcvulshjcrryibchsemquydws ; /usr/bin/python3
Feb 20 07:49:38 np0005625203.localdomain sudo[45757]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:39 np0005625203.localdomain python3[45759]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573778.4268653-77762-124052636978326/source _original_basename=tmphogx2i4j follow=False checksum=0bfbc70e9a4740c9004b9947da681f723d529c83 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:49:39 np0005625203.localdomain sudo[45757]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:39 np0005625203.localdomain sudo[45787]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zsxbxrtndegluqqlhkfourgciponutfs ; /usr/bin/python3
Feb 20 07:49:39 np0005625203.localdomain sudo[45787]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:39 np0005625203.localdomain python3[45789]: ansible-file Invoked with mode=0750 path=/var/log/containers/rsyslog setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 07:49:39 np0005625203.localdomain sudo[45787]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:39 np0005625203.localdomain sudo[45803]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-urgdjjivhtrixanlxbkbxhrekwqnnchr ; /usr/bin/python3
Feb 20 07:49:39 np0005625203.localdomain sudo[45803]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:39 np0005625203.localdomain python3[45805]: ansible-file Invoked with path=/var/lib/rsyslog.container setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 07:49:39 np0005625203.localdomain sudo[45803]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:39 np0005625203.localdomain sshd[45806]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:49:40 np0005625203.localdomain sudo[45820]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ovkpbcqwiluwuvtndkbbgarhkgglqlns ; /usr/bin/python3
Feb 20 07:49:40 np0005625203.localdomain sudo[45820]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:40 np0005625203.localdomain python3[45822]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb 20 07:49:42 np0005625203.localdomain sudo[45820]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:43 np0005625203.localdomain sudo[45870]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xknbgpgwaskkegnwccyexdlcblexlatt ; /usr/bin/python3
Feb 20 07:49:43 np0005625203.localdomain sudo[45870]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:43 np0005625203.localdomain sshd[45806]: Invalid user abc from 185.246.128.171 port 39429
Feb 20 07:49:43 np0005625203.localdomain python3[45872]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:49:43 np0005625203.localdomain sudo[45870]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:44 np0005625203.localdomain sudo[45915]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-povmytgquhgzyzsddmoquxokddjqzrbh ; /usr/bin/python3
Feb 20 07:49:44 np0005625203.localdomain sudo[45915]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:44 np0005625203.localdomain python3[45917]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573783.4981253-77951-139080079648076/source validate=/usr/sbin/sshd -T -f %s mode=None follow=False _original_basename=sshd_config_block.j2 checksum=913c99ed7d5c33615bfb07a6792a4ef143dcfd2b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:49:44 np0005625203.localdomain sudo[45915]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:44 np0005625203.localdomain sudo[45946]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-anghcpxrqkvqbsxmtesunevvcwsrzijh ; /usr/bin/python3
Feb 20 07:49:44 np0005625203.localdomain sudo[45946]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:44 np0005625203.localdomain python3[45948]: ansible-systemd Invoked with name=sshd state=restarted enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 07:49:44 np0005625203.localdomain systemd[1]: Stopping OpenSSH server daemon...
Feb 20 07:49:44 np0005625203.localdomain sshd[1132]: Received signal 15; terminating.
Feb 20 07:49:44 np0005625203.localdomain systemd[1]: sshd.service: Deactivated successfully.
Feb 20 07:49:44 np0005625203.localdomain systemd[1]: sshd.service: Unit process 45806 (sshd) remains running after unit stopped.
Feb 20 07:49:44 np0005625203.localdomain systemd[1]: sshd.service: Unit process 45823 (sshd) remains running after unit stopped.
Feb 20 07:49:44 np0005625203.localdomain systemd[1]: Stopped OpenSSH server daemon.
Feb 20 07:49:44 np0005625203.localdomain systemd[1]: sshd.service: Consumed 16.182s CPU time, read 1.9M from disk, written 1.2M to disk.
Feb 20 07:49:44 np0005625203.localdomain systemd[1]: Stopped target sshd-keygen.target.
Feb 20 07:49:44 np0005625203.localdomain systemd[1]: Stopping sshd-keygen.target...
Feb 20 07:49:44 np0005625203.localdomain systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 20 07:49:44 np0005625203.localdomain systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 20 07:49:44 np0005625203.localdomain systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 20 07:49:44 np0005625203.localdomain systemd[1]: Reached target sshd-keygen.target.
Feb 20 07:49:44 np0005625203.localdomain systemd[1]: Starting OpenSSH server daemon...
Feb 20 07:49:44 np0005625203.localdomain sshd[45952]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:49:44 np0005625203.localdomain sshd[45952]: Server listening on 0.0.0.0 port 22.
Feb 20 07:49:44 np0005625203.localdomain sshd[45952]: Server listening on :: port 22.
Feb 20 07:49:44 np0005625203.localdomain systemd[1]: Started OpenSSH server daemon.
Feb 20 07:49:44 np0005625203.localdomain sudo[45946]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:44 np0005625203.localdomain sudo[45966]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cnqcjdosemmaydmciyyfawjbtwigvvti ; /usr/bin/python3
Feb 20 07:49:45 np0005625203.localdomain sudo[45966]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:45 np0005625203.localdomain python3[45968]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ntpd.service || systemctl is-enabled ntpd.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:49:45 np0005625203.localdomain sudo[45966]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:45 np0005625203.localdomain sudo[45984]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-altqrmmhrxibjymyworlmwxreshoiqbc ; /usr/bin/python3
Feb 20 07:49:45 np0005625203.localdomain sudo[45984]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:45 np0005625203.localdomain python3[45986]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ntpd.service || systemctl is-enabled ntpd.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:49:46 np0005625203.localdomain sudo[45984]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:46 np0005625203.localdomain sshd[45806]: Disconnecting invalid user abc 185.246.128.171 port 39429: Change of username or service not allowed: (abc,ssh-connection) -> (training,ssh-connection) [preauth]
Feb 20 07:49:46 np0005625203.localdomain sudo[46002]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ogfpyoabxrfkktlnrutogiuwuczuhlkj ; /usr/bin/python3
Feb 20 07:49:46 np0005625203.localdomain sudo[46002]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:46 np0005625203.localdomain python3[46004]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb 20 07:49:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 20 07:49:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Cumulative writes: 3405 writes, 16K keys, 3405 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.03 MB/s
                                                          Cumulative WAL: 3405 writes, 206 syncs, 16.53 writes per sync, written: 0.01 GB, 0.03 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 3405 writes, 16K keys, 3405 commit groups, 1.0 writes per commit group, ingest: 15.29 MB, 0.03 MB/s
                                                          Interval WAL: 3405 writes, 206 syncs, 16.53 writes per sync, written: 0.01 GB, 0.03 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          
                                                          ** Compaction Stats [default] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      2/0    2.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                                           Sum      2/0    2.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [default] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55f8e79fa2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [default] **
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55f8e79fa2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-0] **
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55f8e79fa2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-1] **
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55f8e79fa2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-2] **
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.57 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Sum      1/0    1.57 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55f8e79fa2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-0] **
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55f8e79fa2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-1] **
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55f8e79fa2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-2] **
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55f8e79fb610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-0] **
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55f8e79fb610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-1] **
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.26 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                           Sum      1/0    1.26 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55f8e79fb610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-2] **
                                                          
                                                          ** Compaction Stats [L] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [L] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55f8e79fa2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [L] **
                                                          
                                                          ** Compaction Stats [P] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [P] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55f8e79fa2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [P] **
Feb 20 07:49:48 np0005625203.localdomain sudo[46002]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:49 np0005625203.localdomain sudo[46051]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yqthpvcdoqgdkdctcegbgnxacwvzjvzm ; /usr/bin/python3
Feb 20 07:49:49 np0005625203.localdomain sudo[46051]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:49 np0005625203.localdomain sshd[46054]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:49:50 np0005625203.localdomain python3[46053]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:49:50 np0005625203.localdomain sudo[46051]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:50 np0005625203.localdomain sudo[46070]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rpsxazappppfxrphrhzadbujzncnuehe ; /usr/bin/python3
Feb 20 07:49:50 np0005625203.localdomain sudo[46070]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:50 np0005625203.localdomain python3[46072]: ansible-ansible.legacy.file Invoked with owner=root group=root mode=420 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:49:50 np0005625203.localdomain sudo[46070]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:50 np0005625203.localdomain sudo[46101]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jrcjufnnjmpjfhlfjjmtgsxhdxuihsrq ; /usr/bin/python3
Feb 20 07:49:50 np0005625203.localdomain sudo[46101]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:51 np0005625203.localdomain python3[46103]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 07:49:51 np0005625203.localdomain sudo[46101]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:51 np0005625203.localdomain sudo[46151]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mgeqpveyzvxqxobumzpdzdokphopvkep ; /usr/bin/python3
Feb 20 07:49:51 np0005625203.localdomain sudo[46151]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:51 np0005625203.localdomain python3[46153]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/chrony-online.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:49:51 np0005625203.localdomain sudo[46151]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:51 np0005625203.localdomain sudo[46169]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-okfbhzauppoeydhfntezdpymaxddplno ; /usr/bin/python3
Feb 20 07:49:51 np0005625203.localdomain sudo[46169]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 20 07:49:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Cumulative writes: 3248 writes, 16K keys, 3248 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s
                                                          Cumulative WAL: 3248 writes, 140 syncs, 23.20 writes per sync, written: 0.01 GB, 0.02 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 3248 writes, 16K keys, 3248 commit groups, 1.0 writes per commit group, ingest: 14.61 MB, 0.02 MB/s
                                                          Interval WAL: 3248 writes, 140 syncs, 23.20 writes per sync, written: 0.01 GB, 0.02 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          
                                                          ** Compaction Stats [default] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      2/0    2.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                                           Sum      2/0    2.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [default] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x558a5e9c22d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [default] **
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x558a5e9c22d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-0] **
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x558a5e9c22d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-1] **
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x558a5e9c22d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-2] **
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.57 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                                           Sum      1/0    1.57 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x558a5e9c22d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-0] **
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x558a5e9c22d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-1] **
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x558a5e9c22d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-2] **
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x558a5e9c3610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-0] **
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x558a5e9c3610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-1] **
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.26 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                                           Sum      1/0    1.26 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x558a5e9c3610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-2] **
                                                          
                                                          ** Compaction Stats [L] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [L] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x558a5e9c22d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [L] **
                                                          
                                                          ** Compaction Stats [P] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [P] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 600.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x558a5e9c22d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [P] **
Feb 20 07:49:52 np0005625203.localdomain python3[46171]: ansible-ansible.legacy.file Invoked with dest=/etc/systemd/system/chrony-online.service _original_basename=chrony-online.service recurse=False state=file path=/etc/systemd/system/chrony-online.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:49:52 np0005625203.localdomain sudo[46169]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:52 np0005625203.localdomain sudo[46199]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ginhzzfcyfqkgajvxdhokenunmwwzsuy ; /usr/bin/python3
Feb 20 07:49:52 np0005625203.localdomain sudo[46199]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:52 np0005625203.localdomain python3[46201]: ansible-systemd Invoked with state=started name=chrony-online.service enabled=True daemon-reload=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 07:49:52 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 07:49:52 np0005625203.localdomain systemd-rc-local-generator[46223]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 07:49:52 np0005625203.localdomain systemd-sysv-generator[46226]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 07:49:52 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 07:49:52 np0005625203.localdomain sshd[46054]: Invalid user training from 185.246.128.171 port 27166
Feb 20 07:49:52 np0005625203.localdomain systemd[1]: Starting chronyd online sources service...
Feb 20 07:49:52 np0005625203.localdomain chronyc[46241]: 200 OK
Feb 20 07:49:52 np0005625203.localdomain systemd[1]: chrony-online.service: Deactivated successfully.
Feb 20 07:49:52 np0005625203.localdomain systemd[1]: Finished chronyd online sources service.
Feb 20 07:49:52 np0005625203.localdomain sudo[46199]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:53 np0005625203.localdomain sudo[46255]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zmqhflegpsaqlkhmlnaykxmxzdomdyns ; /usr/bin/python3
Feb 20 07:49:53 np0005625203.localdomain sudo[46255]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:53 np0005625203.localdomain python3[46257]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc makestep _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:49:53 np0005625203.localdomain chronyd[26395]: System clock was stepped by 0.000079 seconds
Feb 20 07:49:53 np0005625203.localdomain sudo[46255]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:53 np0005625203.localdomain sshd[46054]: Disconnecting invalid user training 185.246.128.171 port 27166: Change of username or service not allowed: (training,ssh-connection) -> (adib,ssh-connection) [preauth]
Feb 20 07:49:53 np0005625203.localdomain sudo[46272]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jxbqcdjaeaebsfpveqgbiqfbqsgpbsci ; /usr/bin/python3
Feb 20 07:49:53 np0005625203.localdomain sudo[46272]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:53 np0005625203.localdomain python3[46274]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc waitsync 30 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:49:53 np0005625203.localdomain sudo[46272]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:53 np0005625203.localdomain sudo[46289]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-abvegzxyakakfrmjgijqikuxmkoybsni ; /usr/bin/python3
Feb 20 07:49:53 np0005625203.localdomain sudo[46289]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:54 np0005625203.localdomain python3[46291]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc makestep _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:49:54 np0005625203.localdomain chronyd[26395]: System clock was stepped by 0.000000 seconds
Feb 20 07:49:54 np0005625203.localdomain sudo[46289]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:54 np0005625203.localdomain sudo[46306]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vpavfqghjxatcdypiwdzbptlwosyscaq ; /usr/bin/python3
Feb 20 07:49:54 np0005625203.localdomain sudo[46306]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:54 np0005625203.localdomain python3[46308]: ansible-ansible.legacy.command Invoked with _raw_params=chronyc waitsync 30 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:49:54 np0005625203.localdomain sudo[46306]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:54 np0005625203.localdomain sudo[46323]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yqhpckmosvfjzmtgaimrdrotufabklxz ; /usr/bin/python3
Feb 20 07:49:54 np0005625203.localdomain sudo[46323]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:54 np0005625203.localdomain python3[46325]: ansible-timezone Invoked with name=UTC hwclock=None
Feb 20 07:49:54 np0005625203.localdomain systemd[1]: Starting Time & Date Service...
Feb 20 07:49:54 np0005625203.localdomain systemd[1]: Started Time & Date Service.
Feb 20 07:49:54 np0005625203.localdomain sudo[46323]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:55 np0005625203.localdomain sshd[46330]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:49:55 np0005625203.localdomain sudo[46345]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wcdiutjhqidlvidvbaarzxiortjznmqn ; /usr/bin/python3
Feb 20 07:49:55 np0005625203.localdomain sudo[46345]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:55 np0005625203.localdomain sshd[46347]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:49:56 np0005625203.localdomain sshd[46330]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 07:49:56 np0005625203.localdomain python3[46348]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q --whatprovides tuned tuned-profiles-cpu-partitioning _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:49:56 np0005625203.localdomain sudo[46345]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:56 np0005625203.localdomain sudo[46364]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iluximqdbzwxjrxaoouzjumdgroimkrt ; PATH=/bin:/usr/bin:/sbin:/usr/sbin /usr/bin/python3
Feb 20 07:49:56 np0005625203.localdomain sudo[46364]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:56 np0005625203.localdomain python3[46366]: ansible-ansible.legacy.command Invoked with _raw_params=which tuned-adm _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:49:56 np0005625203.localdomain sudo[46364]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:57 np0005625203.localdomain sudo[46381]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vqwyzjtzwkhkxepbgsbuoccxtzojgtku ; /usr/bin/python3
Feb 20 07:49:57 np0005625203.localdomain sudo[46381]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:57 np0005625203.localdomain python3[46383]: ansible-slurp Invoked with src=/etc/tuned/active_profile
Feb 20 07:49:57 np0005625203.localdomain sudo[46381]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:57 np0005625203.localdomain sudo[46397]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eprdoebhxaudpblqxcrgguurqxmlmjom ; /usr/bin/python3
Feb 20 07:49:57 np0005625203.localdomain sudo[46397]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:57 np0005625203.localdomain python3[46399]: ansible-stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 20 07:49:57 np0005625203.localdomain sudo[46397]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:58 np0005625203.localdomain sudo[46413]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sywlngmadliqtjdljqxvycqcoqosuyiw ; /usr/bin/python3
Feb 20 07:49:58 np0005625203.localdomain sudo[46413]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:58 np0005625203.localdomain python3[46415]: ansible-file Invoked with mode=0750 path=/var/log/containers/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 07:49:58 np0005625203.localdomain sudo[46413]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:58 np0005625203.localdomain sudo[46429]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-coonjnbxbbfbredytgsmnthixijfnegk ; /usr/bin/python3
Feb 20 07:49:58 np0005625203.localdomain sudo[46429]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:58 np0005625203.localdomain python3[46431]: ansible-file Invoked with path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 07:49:58 np0005625203.localdomain sudo[46429]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:58 np0005625203.localdomain sudo[46477]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ihrgjhpkayqmubgqtfqgtayhyiorwbba ; /usr/bin/python3
Feb 20 07:49:58 np0005625203.localdomain sudo[46477]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:59 np0005625203.localdomain python3[46479]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/neutron-cleanup follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:49:59 np0005625203.localdomain sudo[46477]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:59 np0005625203.localdomain sshd[46347]: Invalid user adib from 185.246.128.171 port 58962
Feb 20 07:49:59 np0005625203.localdomain sudo[46520]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qkervhzrhgjimnfqyfanftmopkrxvcaz ; /usr/bin/python3
Feb 20 07:49:59 np0005625203.localdomain sudo[46520]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:59 np0005625203.localdomain python3[46522]: ansible-ansible.legacy.copy Invoked with dest=/usr/libexec/neutron-cleanup force=True mode=0755 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573798.8261507-79029-236212545019071/source _original_basename=tmp2rcwbmcn follow=False checksum=f9cc7d1e91fbae49caa7e35eb2253bba146a73b4 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:49:59 np0005625203.localdomain sudo[46520]: pam_unix(sudo:session): session closed for user root
Feb 20 07:49:59 np0005625203.localdomain sshd[46347]: Disconnecting invalid user adib 185.246.128.171 port 58962: Change of username or service not allowed: (adib,ssh-connection) -> (soporte,ssh-connection) [preauth]
Feb 20 07:49:59 np0005625203.localdomain sudo[46582]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jukxrxwphsrmmgpuuoatvuosgzdxsgqj ; /usr/bin/python3
Feb 20 07:49:59 np0005625203.localdomain sudo[46582]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:49:59 np0005625203.localdomain python3[46584]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/neutron-cleanup.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:50:00 np0005625203.localdomain sudo[46582]: pam_unix(sudo:session): session closed for user root
Feb 20 07:50:00 np0005625203.localdomain sudo[46625]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-swakbbvqipacimkbtijkiyigyxvfuvuq ; /usr/bin/python3
Feb 20 07:50:00 np0005625203.localdomain sudo[46625]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:50:00 np0005625203.localdomain python3[46627]: ansible-ansible.legacy.copy Invoked with dest=/usr/lib/systemd/system/neutron-cleanup.service force=True src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573799.7124853-79077-14834392801398/source _original_basename=tmpp9qinurr follow=False checksum=6b6cd9f074903a28d054eb530a10c7235d0c39fc backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:50:00 np0005625203.localdomain sudo[46625]: pam_unix(sudo:session): session closed for user root
Feb 20 07:50:00 np0005625203.localdomain sshd[46655]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:50:00 np0005625203.localdomain sudo[46656]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tfupkjxdwsxnlucahrdrsudtheyzykfm ; /usr/bin/python3
Feb 20 07:50:00 np0005625203.localdomain sudo[46656]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:50:00 np0005625203.localdomain python3[46659]: ansible-ansible.legacy.systemd Invoked with enabled=True name=neutron-cleanup daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Feb 20 07:50:01 np0005625203.localdomain sshd[46661]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:50:01 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 07:50:01 np0005625203.localdomain systemd-sysv-generator[46690]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 07:50:01 np0005625203.localdomain systemd-rc-local-generator[46686]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 07:50:01 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 07:50:01 np0005625203.localdomain sudo[46656]: pam_unix(sudo:session): session closed for user root
Feb 20 07:50:01 np0005625203.localdomain sshd[46655]: Invalid user oracle from 40.81.244.142 port 40806
Feb 20 07:50:02 np0005625203.localdomain sudo[46713]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lcckzoykidmctbajzosoouqujrrggcjj ; /usr/bin/python3
Feb 20 07:50:02 np0005625203.localdomain sudo[46713]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:50:02 np0005625203.localdomain python3[46715]: ansible-file Invoked with mode=0750 path=/var/log/containers/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 07:50:02 np0005625203.localdomain sudo[46713]: pam_unix(sudo:session): session closed for user root
Feb 20 07:50:02 np0005625203.localdomain sshd[46655]: Received disconnect from 40.81.244.142 port 40806:11: Bye Bye [preauth]
Feb 20 07:50:02 np0005625203.localdomain sshd[46655]: Disconnected from invalid user oracle 40.81.244.142 port 40806 [preauth]
Feb 20 07:50:02 np0005625203.localdomain sudo[46729]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-djhoecjqxzsvjcujyzzfsxeiogmwvsed ; /usr/bin/python3
Feb 20 07:50:02 np0005625203.localdomain sudo[46729]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:50:02 np0005625203.localdomain python3[46731]: ansible-ansible.legacy.command Invoked with _raw_params=ip netns add ns_temp _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:50:02 np0005625203.localdomain sudo[46729]: pam_unix(sudo:session): session closed for user root
Feb 20 07:50:02 np0005625203.localdomain sudo[46746]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wiwmsglahuvqvoxaplzsvqdyqfxdvwkl ; /usr/bin/python3
Feb 20 07:50:02 np0005625203.localdomain sudo[46746]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:50:02 np0005625203.localdomain python3[46748]: ansible-ansible.legacy.command Invoked with _raw_params=ip netns delete ns_temp _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:50:02 np0005625203.localdomain systemd[1]: run-netns-ns_temp.mount: Deactivated successfully.
Feb 20 07:50:02 np0005625203.localdomain sudo[46746]: pam_unix(sudo:session): session closed for user root
Feb 20 07:50:03 np0005625203.localdomain sudo[46763]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bzmkcawqpniurnjkqtmjpvvgzjanmlmf ; /usr/bin/python3
Feb 20 07:50:03 np0005625203.localdomain sudo[46763]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:50:03 np0005625203.localdomain python3[46765]: ansible-file Invoked with path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 07:50:03 np0005625203.localdomain sudo[46763]: pam_unix(sudo:session): session closed for user root
Feb 20 07:50:03 np0005625203.localdomain sudo[46779]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wvitsxekbgkqfjulzenbtceerpgukcfm ; /usr/bin/python3
Feb 20 07:50:03 np0005625203.localdomain sudo[46779]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:50:03 np0005625203.localdomain python3[46781]: ansible-file Invoked with path=/var/lib/neutron/kill_scripts state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:50:03 np0005625203.localdomain sshd[46661]: Invalid user soporte from 185.246.128.171 port 21088
Feb 20 07:50:03 np0005625203.localdomain sudo[46779]: pam_unix(sudo:session): session closed for user root
Feb 20 07:50:03 np0005625203.localdomain sudo[46827]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kstsvzwsesvpebjcbotwjtbcaifjlnba ; /usr/bin/python3
Feb 20 07:50:03 np0005625203.localdomain sudo[46827]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:50:03 np0005625203.localdomain sshd[46661]: Disconnecting invalid user soporte 185.246.128.171 port 21088: Change of username or service not allowed: (soporte,ssh-connection) -> (azure,ssh-connection) [preauth]
Feb 20 07:50:04 np0005625203.localdomain python3[46829]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:50:04 np0005625203.localdomain sudo[46827]: pam_unix(sudo:session): session closed for user root
Feb 20 07:50:04 np0005625203.localdomain sudo[46870]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zifbawhcezcrdqbfrxviklbgqzxmzcnx ; /usr/bin/python3
Feb 20 07:50:04 np0005625203.localdomain sudo[46870]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:50:04 np0005625203.localdomain python3[46872]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=493 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573803.738551-79276-112686897904735/source _original_basename=tmpfb5ps45h follow=False checksum=2f369fbe8f83639cdfd4efc53e7feb4ee77d1ed7 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:50:04 np0005625203.localdomain sudo[46870]: pam_unix(sudo:session): session closed for user root
Feb 20 07:50:07 np0005625203.localdomain sshd[46887]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:50:12 np0005625203.localdomain sshd[46887]: Invalid user azure from 185.246.128.171 port 55507
Feb 20 07:50:14 np0005625203.localdomain sshd[46887]: Disconnecting invalid user azure 185.246.128.171 port 55507: Change of username or service not allowed: (azure,ssh-connection) -> (openmediavault,ssh-connection) [preauth]
Feb 20 07:50:17 np0005625203.localdomain sshd[46889]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:50:21 np0005625203.localdomain sshd[46889]: Invalid user openmediavault from 185.246.128.171 port 42760
Feb 20 07:50:23 np0005625203.localdomain sshd[46889]: Disconnecting invalid user openmediavault 185.246.128.171 port 42760: Change of username or service not allowed: (openmediavault,ssh-connection) -> (newusername,ssh-conne [preauth]
Feb 20 07:50:24 np0005625203.localdomain sudo[46891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 07:50:24 np0005625203.localdomain sudo[46891]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:50:24 np0005625203.localdomain sudo[46891]: pam_unix(sudo:session): session closed for user root
Feb 20 07:50:24 np0005625203.localdomain sudo[46906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 07:50:24 np0005625203.localdomain sudo[46906]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:50:24 np0005625203.localdomain systemd[1]: systemd-timedated.service: Deactivated successfully.
Feb 20 07:50:24 np0005625203.localdomain sudo[46906]: pam_unix(sudo:session): session closed for user root
Feb 20 07:50:26 np0005625203.localdomain sshd[46955]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:50:26 np0005625203.localdomain sudo[46956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 07:50:26 np0005625203.localdomain sudo[46956]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:50:26 np0005625203.localdomain sudo[46956]: pam_unix(sudo:session): session closed for user root
Feb 20 07:50:27 np0005625203.localdomain sudo[46985]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vexpumeomzqnetyafvofvamtvxjdrysx ; /usr/bin/python3
Feb 20 07:50:27 np0005625203.localdomain sudo[46985]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:50:28 np0005625203.localdomain python3[46987]: ansible-file Invoked with path=/var/log/containers state=directory setype=container_file_t selevel=s0 mode=488 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Feb 20 07:50:28 np0005625203.localdomain sudo[46985]: pam_unix(sudo:session): session closed for user root
Feb 20 07:50:28 np0005625203.localdomain sudo[47001]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-icyaxyqmtocfcirbnjmukrqhgoccoyla ; /usr/bin/python3
Feb 20 07:50:28 np0005625203.localdomain sudo[47001]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:50:28 np0005625203.localdomain python3[47003]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory selevel=s0 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None setype=None attributes=None
Feb 20 07:50:28 np0005625203.localdomain sudo[47001]: pam_unix(sudo:session): session closed for user root
Feb 20 07:50:28 np0005625203.localdomain sudo[47017]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wdowbcqpirwejdqlgodwmqomdxbqartk ; /usr/bin/python3
Feb 20 07:50:28 np0005625203.localdomain sudo[47017]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:50:28 np0005625203.localdomain python3[47019]: ansible-file Invoked with path=/var/lib/tripleo-config state=directory setype=container_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Feb 20 07:50:28 np0005625203.localdomain sudo[47017]: pam_unix(sudo:session): session closed for user root
Feb 20 07:50:28 np0005625203.localdomain sudo[47033]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ffeoaodgeihwlsbmmfjlnertiwmehiev ; /usr/bin/python3
Feb 20 07:50:28 np0005625203.localdomain sudo[47033]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:50:29 np0005625203.localdomain python3[47035]: ansible-file Invoked with path=/var/lib/container-startup-configs.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:50:29 np0005625203.localdomain sudo[47033]: pam_unix(sudo:session): session closed for user root
Feb 20 07:50:29 np0005625203.localdomain sudo[47049]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nfrdbmzmfqbefoplwkfoekroeqzbexij ; /usr/bin/python3
Feb 20 07:50:29 np0005625203.localdomain sudo[47049]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:50:29 np0005625203.localdomain python3[47051]: ansible-file Invoked with path=/var/lib/docker-container-startup-configs.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:50:29 np0005625203.localdomain sudo[47049]: pam_unix(sudo:session): session closed for user root
Feb 20 07:50:29 np0005625203.localdomain sudo[47065]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qnbsnenuvomzutboskvgttwadghlnupc ; /usr/bin/python3
Feb 20 07:50:29 np0005625203.localdomain sudo[47065]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:50:29 np0005625203.localdomain python3[47067]: ansible-community.general.sefcontext Invoked with target=/var/lib/container-config-scripts(/.*)? setype=container_file_t state=present ignore_selinux_state=False ftype=a reload=True seuser=None selevel=None
Feb 20 07:50:29 np0005625203.localdomain sshd[47068]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:50:30 np0005625203.localdomain kernel: SELinux:  Converting 2707 SID table entries...
Feb 20 07:50:30 np0005625203.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Feb 20 07:50:30 np0005625203.localdomain kernel: SELinux:  policy capability open_perms=1
Feb 20 07:50:30 np0005625203.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Feb 20 07:50:30 np0005625203.localdomain kernel: SELinux:  policy capability always_check_network=0
Feb 20 07:50:30 np0005625203.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 20 07:50:30 np0005625203.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 20 07:50:30 np0005625203.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 20 07:50:30 np0005625203.localdomain sshd[47068]: Invalid user n8n from 189.190.2.14 port 52926
Feb 20 07:50:30 np0005625203.localdomain sshd[47068]: Received disconnect from 189.190.2.14 port 52926:11: Bye Bye [preauth]
Feb 20 07:50:30 np0005625203.localdomain sshd[47068]: Disconnected from invalid user n8n 189.190.2.14 port 52926 [preauth]
Feb 20 07:50:30 np0005625203.localdomain sudo[47065]: pam_unix(sudo:session): session closed for user root
Feb 20 07:50:30 np0005625203.localdomain sudo[47089]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ylfhptzfpytnngkwieobiazoeqdmhtha ; /usr/bin/python3
Feb 20 07:50:30 np0005625203.localdomain dbus-broker-launch[754]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Feb 20 07:50:30 np0005625203.localdomain sudo[47089]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:50:31 np0005625203.localdomain sshd[46955]: Invalid user newusername from 185.246.128.171 port 23838
Feb 20 07:50:31 np0005625203.localdomain python3[47091]: ansible-file Invoked with path=/var/lib/container-config-scripts state=directory setype=container_file_t recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 07:50:31 np0005625203.localdomain sudo[47089]: pam_unix(sudo:session): session closed for user root
Feb 20 07:50:31 np0005625203.localdomain sudo[47105]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xcthifsdmkjwqonddoqacjtqxwppkvbj ; /usr/bin/python3
Feb 20 07:50:31 np0005625203.localdomain sudo[47105]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:50:31 np0005625203.localdomain sshd[46955]: Disconnecting invalid user newusername 185.246.128.171 port 23838: Change of username or service not allowed: (newusername,ssh-connection) -> (user,ssh-connection) [preauth]
Feb 20 07:50:31 np0005625203.localdomain sudo[47105]: pam_unix(sudo:session): session closed for user root
Feb 20 07:50:31 np0005625203.localdomain sudo[47153]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gyqzvixwwrishldttikqpojkemnkvyji ; /usr/bin/python3
Feb 20 07:50:31 np0005625203.localdomain sudo[47153]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:50:31 np0005625203.localdomain sudo[47153]: pam_unix(sudo:session): session closed for user root
Feb 20 07:50:32 np0005625203.localdomain sudo[47196]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iuutpmozkyrjugamrrlmtgwiieomfdzc ; /usr/bin/python3
Feb 20 07:50:32 np0005625203.localdomain sudo[47196]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:50:32 np0005625203.localdomain sudo[47196]: pam_unix(sudo:session): session closed for user root
Feb 20 07:50:32 np0005625203.localdomain sudo[47226]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zsofzcltfinlazngngamjzrotbnerpev ; /usr/bin/python3
Feb 20 07:50:32 np0005625203.localdomain sudo[47226]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:50:32 np0005625203.localdomain python3[47228]: ansible-container_startup_config Invoked with config_base_dir=/var/lib/tripleo-config/container-startup-config config_data={'step_1': {'metrics_qdr': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 'metrics_qdr_init_logs': {'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}}, 'step_2': {'create_haproxy_wrapper': {'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, 'create_virtlogd_wrapper': {'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, 'nova_compute_init_log': {'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 
'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, 'nova_virtqemud_init_logs': {'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}}, 'step_3': {'ceilometer_init_log': {'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 'collectd': {'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 'iscsid': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 'nova_statedir_owner': {'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, 'nova_virtlogd_wrapper': {'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 
'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, 'nova_virtnodedevd': {'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 
2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, 'nova_virtproxyd': {'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, 'nova_virtqemud': {'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, 'nova_virtsecretd': {'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', 
'/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, 'nova_virtstoraged': {'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', 
'/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, 'rsyslog': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}}, 'step_4': {'ceilometer_agent_compute': {'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 'ceilometer_agent_ipmi': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 'configure_cms_options': {'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml);  if [ X"$CMS_OPTS" !=  X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, 'logrotate_crond': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 'nova_libvirt_init_secret': {'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, 'nova_migration_target': {'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, 'ovn_controller': 
{'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 'ovn_metadata_agent': {'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 'setup_ovs_manager': {'command': 
['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}}, 'step_5': {'nova_compute': {'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, 'nova_wait_for_compute_service': {'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}}}
Feb 20 07:50:32 np0005625203.localdomain sudo[47226]: pam_unix(sudo:session): session closed for user root
Feb 20 07:50:32 np0005625203.localdomain sshd[47229]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:50:33 np0005625203.localdomain sudo[47243]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lhzpdzblabmnbnbrpwiecswrvkswsnzj ; /usr/bin/python3
Feb 20 07:50:33 np0005625203.localdomain rsyslogd[758]: message too long (31243) with configured size 8096, begin of message is: ansible-container_startup_config Invoked with config_base_dir=/var/lib/tripleo-c [v8.2102.0-111.el9 try https://www.rsyslog.com/e/2445 ]
Feb 20 07:50:33 np0005625203.localdomain sudo[47243]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:50:33 np0005625203.localdomain python3[47245]: ansible-file Invoked with path=/var/lib/kolla/config_files state=directory setype=container_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Feb 20 07:50:33 np0005625203.localdomain sudo[47243]: pam_unix(sudo:session): session closed for user root
Feb 20 07:50:33 np0005625203.localdomain sudo[47260]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uhhdnwkmfvubdxrrqguibgrlmlpadnfk ; /usr/bin/python3
Feb 20 07:50:33 np0005625203.localdomain sudo[47260]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:50:33 np0005625203.localdomain python3[47262]: ansible-file Invoked with path=/var/lib/config-data mode=493 state=directory setype=container_file_t selevel=s0 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Feb 20 07:50:33 np0005625203.localdomain sudo[47260]: pam_unix(sudo:session): session closed for user root
Feb 20 07:50:33 np0005625203.localdomain sudo[47276]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kgvxwxkajbztbjtwkjjkbgbynugtvvaw ; /usr/bin/python3
Feb 20 07:50:33 np0005625203.localdomain sudo[47276]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:50:34 np0005625203.localdomain python3[47278]: ansible-tripleo_container_configs Invoked with config_data={'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json': {'command': '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /var/log/ceilometer/ipmi.log', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/ceilometer_agent_compute.json': {'command': '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /var/log/ceilometer/compute.log', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/collectd.json': {'command': '/usr/sbin/collectd -f', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/', 'merge': False, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/etc/collectd.d'}], 'permissions': [{'owner': 'collectd:collectd', 'path': '/var/log/collectd', 'recurse': True}, {'owner': 'collectd:collectd', 'path': '/scripts', 'recurse': True}, {'owner': 'collectd:collectd', 'path': '/config-scripts', 'recurse': True}]}, '/var/lib/kolla/config_files/iscsid.json': {'command': '/usr/sbin/iscsid -f', 'config_files': [{'dest': '/etc/iscsi/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-iscsid/'}]}, '/var/lib/kolla/config_files/logrotate-crond.json': {'command': '/usr/sbin/crond -s -n', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}]}, '/var/lib/kolla/config_files/metrics_qdr.json': {'command': '/usr/sbin/qdrouterd -c /etc/qpid-dispatch/qdrouterd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/', 'merge': True, 'optional': 
True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-tls/*'}], 'permissions': [{'owner': 'qdrouterd:qdrouterd', 'path': '/var/lib/qdrouterd', 'recurse': True}, {'optional': True, 'owner': 'qdrouterd:qdrouterd', 'path': '/etc/pki/tls/certs/metrics_qdr.crt'}, {'optional': True, 'owner': 'qdrouterd:qdrouterd', 'path': '/etc/pki/tls/private/metrics_qdr.key'}]}, '/var/lib/kolla/config_files/nova-migration-target.json': {'command': 'dumb-init --single-child -- /usr/sbin/sshd -D -p 2022', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ssh/', 'owner': 'root', 'perm': '0600', 'source': '/host-ssh/ssh_host_*_key'}]}, '/var/lib/kolla/config_files/nova_compute.json': {'command': '/var/lib/nova/delay-nova-compute --delay 180 --nova-binary /usr/bin/nova-compute ', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/iscsi/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-iscsid/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/var/log/nova', 'recurse': True}, {'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json': {'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_wait_for_compute_service.py', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'nova:nova', 'path': '/var/log/nova', 'recurse': True}]}, '/var/lib/kolla/config_files/nova_virtlogd.json': {'command': '/usr/local/bin/virtlogd_wrapper', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': 
'/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtnodedevd.json': {'command': '/usr/sbin/virtnodedevd --config /etc/libvirt/virtnodedevd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtproxyd.json': {'command': '/usr/sbin/virtproxyd --config /etc/libvirt/virtproxyd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtqemud.json': {'command': '/usr/sbin/virtqemud --config /etc/libvirt/virtqemud.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtsecretd.json': {'command': '/usr/sbin/virtsecretd --config /etc/libvirt/virtsecretd.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 
'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/nova_virtstoraged.json': {'command': '/usr/sbin/virtstoraged --config /etc/libvirt/virtstoraged.conf', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}, {'dest': '/etc/ceph/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src-ceph/'}], 'permissions': [{'owner': 'nova:nova', 'path': '/etc/ceph/ceph.client.openstack.keyring', 'perm': '0600'}]}, '/var/lib/kolla/config_files/ovn_controller.json': {'command': '/usr/bin/ovn-controller --pidfile --log-file unix:/run/openvswitch/db.sock ', 'permissions': [{'owner': 'root:root', 'path': '/var/log/openvswitch', 'recurse': True}, {'owner': 'root:root', 'path': '/var/log/ovn', 'recurse': True}]}, '/var/lib/kolla/config_files/ovn_metadata_agent.json': {'command': '/usr/bin/networking-ovn-metadata-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/networking-ovn/networking-ovn-metadata-agent.ini --log-file=/var/log/neutron/ovn-metadata-agent.log', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': [{'owner': 'neutron:neutron', 'path': '/var/log/neutron', 'recurse': True}, {'owner': 'neutron:neutron', 'path': '/var/lib/neutron', 'recurse': True}, {'optional': True, 'owner': 'neutron:neutron', 'path': '/etc/pki/tls/certs/ovn_metadata.crt', 'perm': '0644'}, {'optional': True, 'owner': 'neutron:neutron', 'path': '/etc/pki/tls/private/ovn_metadata.key', 'perm': '0644'}]}, '/var/lib/kolla/config_files/rsyslog.json': {'command': '/usr/sbin/rsyslogd -n -iNONE', 'config_files': [{'dest': '/', 'merge': True, 'preserve_properties': True, 'source': '/var/lib/kolla/config_files/src/*'}], 'permissions': 
[{'owner': 'root:root', 'path': '/var/lib/rsyslog', 'recurse': True}, {'owner': 'root:root', 'path': '/var/log/rsyslog', 'recurse': True}]}}
Feb 20 07:50:34 np0005625203.localdomain sudo[47276]: pam_unix(sudo:session): session closed for user root
Feb 20 07:50:34 np0005625203.localdomain sshd[47229]: Invalid user user from 185.246.128.171 port 56407
Feb 20 07:50:39 np0005625203.localdomain sudo[47324]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zkbghhyvwhxqlpufnaywimqxlxqlnlpb ; /usr/bin/python3
Feb 20 07:50:39 np0005625203.localdomain sudo[47324]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:50:39 np0005625203.localdomain python3[47326]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:50:39 np0005625203.localdomain sudo[47324]: pam_unix(sudo:session): session closed for user root
Feb 20 07:50:40 np0005625203.localdomain sudo[47367]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bajxowoykjyigwifafxcarqibvlbneuw ; /usr/bin/python3
Feb 20 07:50:40 np0005625203.localdomain sudo[47367]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:50:40 np0005625203.localdomain python3[47369]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573839.465232-80837-237489305372748/source _original_basename=tmpykmhrfcb follow=False checksum=dfdcc7695edd230e7a2c06fc7b739bfa56506d8f backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:50:40 np0005625203.localdomain sudo[47367]: pam_unix(sudo:session): session closed for user root
Feb 20 07:50:40 np0005625203.localdomain sudo[47397]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dxndskuiswxznbrowcaabmqhzmkmuijm ; /usr/bin/python3
Feb 20 07:50:40 np0005625203.localdomain sudo[47397]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:50:40 np0005625203.localdomain python3[47399]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_1 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 20 07:50:40 np0005625203.localdomain sudo[47397]: pam_unix(sudo:session): session closed for user root
Feb 20 07:50:41 np0005625203.localdomain sudo[47447]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xabwsvdmnydecuftkrrffvmoeiguxlba ; /usr/bin/python3
Feb 20 07:50:41 np0005625203.localdomain sudo[47447]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:50:41 np0005625203.localdomain sshd[47229]: error: maximum authentication attempts exceeded for invalid user user from 185.246.128.171 port 56407 ssh2 [preauth]
Feb 20 07:50:41 np0005625203.localdomain sshd[47229]: Disconnecting invalid user user 185.246.128.171 port 56407: Too many authentication failures [preauth]
Feb 20 07:50:41 np0005625203.localdomain sudo[47447]: pam_unix(sudo:session): session closed for user root
Feb 20 07:50:41 np0005625203.localdomain sudo[47490]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rjudzcpqpxnbtuskiffexskyntejgxht ; /usr/bin/python3
Feb 20 07:50:41 np0005625203.localdomain sudo[47490]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:50:42 np0005625203.localdomain sudo[47490]: pam_unix(sudo:session): session closed for user root
Feb 20 07:50:42 np0005625203.localdomain sudo[47520]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xrqllzwrhwozbfohvfezifgtdqyuhtda ; /usr/bin/python3
Feb 20 07:50:42 np0005625203.localdomain sudo[47520]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:50:42 np0005625203.localdomain python3[47522]: ansible-file Invoked with path=/var/lib/container-puppet state=directory setype=container_file_t selevel=s0 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Feb 20 07:50:42 np0005625203.localdomain sudo[47520]: pam_unix(sudo:session): session closed for user root
Feb 20 07:50:42 np0005625203.localdomain sshd[47523]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:50:43 np0005625203.localdomain sshd[47523]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 07:50:43 np0005625203.localdomain sudo[47570]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gqijzsjfeiddsozljreagziggcohetrl ; /usr/bin/python3
Feb 20 07:50:43 np0005625203.localdomain sudo[47570]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:50:43 np0005625203.localdomain sudo[47570]: pam_unix(sudo:session): session closed for user root
Feb 20 07:50:43 np0005625203.localdomain sudo[47613]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mynumpjxykwhredbhuvzotzfxvgwoylw ; /usr/bin/python3
Feb 20 07:50:43 np0005625203.localdomain sudo[47613]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:50:43 np0005625203.localdomain sudo[47613]: pam_unix(sudo:session): session closed for user root
Feb 20 07:50:44 np0005625203.localdomain sudo[47643]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yltqycuemmuliwwmvkmzycqpuzyajvwk ; /usr/bin/python3
Feb 20 07:50:44 np0005625203.localdomain sudo[47643]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:50:44 np0005625203.localdomain python3[47645]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6
Feb 20 07:50:44 np0005625203.localdomain sudo[47643]: pam_unix(sudo:session): session closed for user root
Feb 20 07:50:44 np0005625203.localdomain sshd[47646]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:50:46 np0005625203.localdomain sudo[47661]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lzlfuxsetvfocsuxlkebqfmupqqzkipl ; /usr/bin/python3
Feb 20 07:50:46 np0005625203.localdomain sudo[47661]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:50:46 np0005625203.localdomain python3[47663]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -q lvm2 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:50:46 np0005625203.localdomain sudo[47661]: pam_unix(sudo:session): session closed for user root
Feb 20 07:50:47 np0005625203.localdomain sudo[47678]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ecrupgxlplbuaekmnajqblfvvafbichv ; /usr/bin/python3
Feb 20 07:50:47 np0005625203.localdomain sudo[47678]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:50:47 np0005625203.localdomain sshd[47646]: Invalid user user from 185.246.128.171 port 50131
Feb 20 07:50:47 np0005625203.localdomain python3[47680]: ansible-ansible.legacy.dnf Invoked with name=['systemd-container'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb 20 07:50:49 np0005625203.localdomain systemd[36457]: Created slice User Background Tasks Slice.
Feb 20 07:50:49 np0005625203.localdomain systemd[36457]: Starting Cleanup of User's Temporary Files and Directories...
Feb 20 07:50:49 np0005625203.localdomain systemd[36457]: Finished Cleanup of User's Temporary Files and Directories.
Feb 20 07:50:51 np0005625203.localdomain dbus-broker-launch[18429]: Noticed file-system modification, trigger reload.
Feb 20 07:50:51 np0005625203.localdomain dbus-broker-launch[750]: Noticed file-system modification, trigger reload.
Feb 20 07:50:51 np0005625203.localdomain dbus-broker-launch[18429]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Feb 20 07:50:51 np0005625203.localdomain dbus-broker-launch[18429]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Feb 20 07:50:51 np0005625203.localdomain dbus-broker-launch[750]: Noticed file-system modification, trigger reload.
Feb 20 07:50:51 np0005625203.localdomain systemd[1]: Reexecuting.
Feb 20 07:50:51 np0005625203.localdomain systemd[1]: systemd 252-14.el9_2.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 20 07:50:51 np0005625203.localdomain systemd[1]: Detected virtualization kvm.
Feb 20 07:50:51 np0005625203.localdomain systemd[1]: Detected architecture x86-64.
Feb 20 07:50:51 np0005625203.localdomain systemd-rc-local-generator[47737]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 07:50:51 np0005625203.localdomain systemd-sysv-generator[47742]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 07:50:51 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 07:50:56 np0005625203.localdomain sshd[47756]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:50:56 np0005625203.localdomain sshd[47646]: error: maximum authentication attempts exceeded for invalid user user from 185.246.128.171 port 50131 ssh2 [preauth]
Feb 20 07:50:56 np0005625203.localdomain sshd[47646]: Disconnecting invalid user user 185.246.128.171 port 50131: Too many authentication failures [preauth]
Feb 20 07:50:57 np0005625203.localdomain sshd[47756]: Received disconnect from 187.87.206.21 port 51746:11: Bye Bye [preauth]
Feb 20 07:50:57 np0005625203.localdomain sshd[47756]: Disconnected from authenticating user root 187.87.206.21 port 51746 [preauth]
Feb 20 07:50:58 np0005625203.localdomain sshd[47758]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:50:59 np0005625203.localdomain kernel: SELinux:  Converting 2707 SID table entries...
Feb 20 07:50:59 np0005625203.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Feb 20 07:50:59 np0005625203.localdomain kernel: SELinux:  policy capability open_perms=1
Feb 20 07:50:59 np0005625203.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Feb 20 07:50:59 np0005625203.localdomain kernel: SELinux:  policy capability always_check_network=0
Feb 20 07:50:59 np0005625203.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 20 07:50:59 np0005625203.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 20 07:50:59 np0005625203.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 20 07:50:59 np0005625203.localdomain dbus-broker-launch[750]: Noticed file-system modification, trigger reload.
Feb 20 07:50:59 np0005625203.localdomain dbus-broker-launch[754]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Feb 20 07:50:59 np0005625203.localdomain dbus-broker-launch[750]: Noticed file-system modification, trigger reload.
Feb 20 07:51:00 np0005625203.localdomain sshd[47758]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 07:51:00 np0005625203.localdomain sshd[47770]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:51:01 np0005625203.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 20 07:51:01 np0005625203.localdomain systemd[1]: Starting man-db-cache-update.service...
Feb 20 07:51:01 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 07:51:01 np0005625203.localdomain systemd-rc-local-generator[47879]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 07:51:01 np0005625203.localdomain systemd-sysv-generator[47884]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 07:51:01 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 07:51:01 np0005625203.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 20 07:51:01 np0005625203.localdomain systemd[1]: Queuing reload/restart jobs for marked units…
Feb 20 07:51:01 np0005625203.localdomain systemd-journald[618]: Journal stopped
Feb 20 07:51:01 np0005625203.localdomain systemd[1]: Stopping Journal Service...
Feb 20 07:51:01 np0005625203.localdomain systemd-journald[618]: Received SIGTERM from PID 1 (systemd).
Feb 20 07:51:01 np0005625203.localdomain systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Feb 20 07:51:01 np0005625203.localdomain systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 20 07:51:01 np0005625203.localdomain systemd[1]: Stopped Journal Service.
Feb 20 07:51:01 np0005625203.localdomain systemd[1]: systemd-journald.service: Consumed 2.438s CPU time.
Feb 20 07:51:01 np0005625203.localdomain systemd[1]: Starting Journal Service...
Feb 20 07:51:01 np0005625203.localdomain systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 20 07:51:01 np0005625203.localdomain systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Feb 20 07:51:01 np0005625203.localdomain systemd[1]: systemd-udevd.service: Consumed 2.929s CPU time.
Feb 20 07:51:01 np0005625203.localdomain systemd[1]: Starting Rule-based Manager for Device Events and Files...
Feb 20 07:51:01 np0005625203.localdomain systemd-journald[48285]: Journal started
Feb 20 07:51:01 np0005625203.localdomain systemd-journald[48285]: Runtime Journal (/run/log/journal/01f46965e72fd8a157841feaa66c8d52) is 12.8M, max 314.7M, 301.9M free.
Feb 20 07:51:01 np0005625203.localdomain systemd[1]: Started Journal Service.
Feb 20 07:51:01 np0005625203.localdomain systemd-journald[48285]: Field hash table of /run/log/journal/01f46965e72fd8a157841feaa66c8d52/system.journal has a fill level at 75.4 (251 of 333 items), suggesting rotation.
Feb 20 07:51:01 np0005625203.localdomain systemd-journald[48285]: /run/log/journal/01f46965e72fd8a157841feaa66c8d52/system.journal: Journal header limits reached or header out-of-date, rotating.
Feb 20 07:51:01 np0005625203.localdomain rsyslogd[758]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Feb 20 07:51:01 np0005625203.localdomain systemd-udevd[48286]: Using default interface naming scheme 'rhel-9.0'.
Feb 20 07:51:01 np0005625203.localdomain rsyslogd[758]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Feb 20 07:51:01 np0005625203.localdomain systemd[1]: Started Rule-based Manager for Device Events and Files.
Feb 20 07:51:02 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 07:51:02 np0005625203.localdomain systemd-sysv-generator[48914]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 07:51:02 np0005625203.localdomain systemd-rc-local-generator[48910]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 07:51:02 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 07:51:02 np0005625203.localdomain systemd[1]: Queuing reload/restart jobs for marked units…
Feb 20 07:51:02 np0005625203.localdomain systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 20 07:51:02 np0005625203.localdomain systemd[1]: Finished man-db-cache-update.service.
Feb 20 07:51:02 np0005625203.localdomain systemd[1]: man-db-cache-update.service: Consumed 1.266s CPU time.
Feb 20 07:51:02 np0005625203.localdomain systemd[1]: run-r2769a43677614b6bbc2053a8bf97d30a.service: Deactivated successfully.
Feb 20 07:51:02 np0005625203.localdomain systemd[1]: run-r473a0997f45d4075802dcff64616eb30.service: Deactivated successfully.
Feb 20 07:51:02 np0005625203.localdomain sshd[47770]: Invalid user user from 185.246.128.171 port 3174
Feb 20 07:51:03 np0005625203.localdomain sudo[47678]: pam_unix(sudo:session): session closed for user root
Feb 20 07:51:04 np0005625203.localdomain sudo[49179]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nekfnidmmigjmzjuctkumvrgooeerewx ; /usr/bin/python3
Feb 20 07:51:04 np0005625203.localdomain sudo[49179]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:51:04 np0005625203.localdomain python3[49181]: ansible-sysctl Invoked with name=vm.unprivileged_userfaultfd reload=True state=present sysctl_file=/etc/sysctl.d/99-tripleo-postcopy.conf sysctl_set=True value=1 ignoreerrors=False
Feb 20 07:51:04 np0005625203.localdomain sudo[49179]: pam_unix(sudo:session): session closed for user root
Feb 20 07:51:04 np0005625203.localdomain sudo[49198]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nfkoqblhxxbenzuesaobwnewdrshjcxd ; /usr/bin/python3
Feb 20 07:51:04 np0005625203.localdomain sudo[49198]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:51:04 np0005625203.localdomain python3[49200]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-active ksm.service || systemctl is-enabled ksm.service _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 07:51:04 np0005625203.localdomain sudo[49198]: pam_unix(sudo:session): session closed for user root
Feb 20 07:51:05 np0005625203.localdomain sudo[49216]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ymubdjhfhfillxqgwbxpgiztjvqfsabb ; /usr/bin/python3
Feb 20 07:51:05 np0005625203.localdomain sudo[49216]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:51:05 np0005625203.localdomain python3[49218]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Feb 20 07:51:05 np0005625203.localdomain python3[49218]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 --format json
Feb 20 07:51:05 np0005625203.localdomain python3[49218]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 -q --tls-verify=false
Feb 20 07:51:07 np0005625203.localdomain sshd[47770]: error: maximum authentication attempts exceeded for invalid user user from 185.246.128.171 port 3174 ssh2 [preauth]
Feb 20 07:51:07 np0005625203.localdomain sshd[47770]: Disconnecting invalid user user 185.246.128.171 port 3174: Too many authentication failures [preauth]
Feb 20 07:51:08 np0005625203.localdomain sshd[49269]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:51:10 np0005625203.localdomain sshd[49269]: Invalid user user from 185.246.128.171 port 43185
Feb 20 07:51:13 np0005625203.localdomain podman[49231]: 2026-02-20 07:51:05.92421243 +0000 UTC m=+0.039524416 image pull  registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1
Feb 20 07:51:13 np0005625203.localdomain python3[49218]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 591bb9fb46a70e9f840f28502388406078442df6b6701a3c17990ee75e333673 --format json
Feb 20 07:51:13 np0005625203.localdomain sudo[49216]: pam_unix(sudo:session): session closed for user root
Feb 20 07:51:13 np0005625203.localdomain sudo[49334]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fpuiojcqfbmhsskcisaneyxbeokrzxeg ; /usr/bin/python3
Feb 20 07:51:13 np0005625203.localdomain sudo[49334]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:51:13 np0005625203.localdomain python3[49336]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Feb 20 07:51:13 np0005625203.localdomain python3[49336]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 --format json
Feb 20 07:51:13 np0005625203.localdomain python3[49336]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 -q --tls-verify=false
Feb 20 07:51:16 np0005625203.localdomain sshd[49269]: error: maximum authentication attempts exceeded for invalid user user from 185.246.128.171 port 43185 ssh2 [preauth]
Feb 20 07:51:16 np0005625203.localdomain sshd[49269]: Disconnecting invalid user user 185.246.128.171 port 43185: Too many authentication failures [preauth]
Feb 20 07:51:20 np0005625203.localdomain sshd[49424]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:51:20 np0005625203.localdomain podman[49349]: 2026-02-20 07:51:13.713088406 +0000 UTC m=+0.042388492 image pull  registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1
Feb 20 07:51:20 np0005625203.localdomain python3[49336]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect d59b33e7fb841c47a47a12b18fb68b11debd968b4596c63f3177ecc7400fb1bc --format json
Feb 20 07:51:20 np0005625203.localdomain sudo[49334]: pam_unix(sudo:session): session closed for user root
Feb 20 07:51:21 np0005625203.localdomain sudo[49451]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dkpxnondqdsygjytfaivdnjhepwpmacf ; /usr/bin/python3
Feb 20 07:51:21 np0005625203.localdomain sudo[49451]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:51:21 np0005625203.localdomain python3[49453]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Feb 20 07:51:21 np0005625203.localdomain python3[49453]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 --format json
Feb 20 07:51:21 np0005625203.localdomain python3[49453]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 -q --tls-verify=false
Feb 20 07:51:25 np0005625203.localdomain sshd[49424]: Invalid user user from 185.246.128.171 port 41162
Feb 20 07:51:27 np0005625203.localdomain sudo[49519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 07:51:27 np0005625203.localdomain sudo[49519]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:51:27 np0005625203.localdomain sudo[49519]: pam_unix(sudo:session): session closed for user root
Feb 20 07:51:27 np0005625203.localdomain sudo[49534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Feb 20 07:51:27 np0005625203.localdomain sudo[49534]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:51:28 np0005625203.localdomain sshd[49559]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:51:29 np0005625203.localdomain sshd[49559]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 07:51:29 np0005625203.localdomain sshd[49424]: error: maximum authentication attempts exceeded for invalid user user from 185.246.128.171 port 41162 ssh2 [preauth]
Feb 20 07:51:29 np0005625203.localdomain sshd[49424]: Disconnecting invalid user user 185.246.128.171 port 41162: Too many authentication failures [preauth]
Feb 20 07:51:29 np0005625203.localdomain sshd[49561]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:51:33 np0005625203.localdomain sshd[49561]: Invalid user user from 185.246.128.171 port 23163
Feb 20 07:51:37 np0005625203.localdomain sshd[49561]: error: maximum authentication attempts exceeded for invalid user user from 185.246.128.171 port 23163 ssh2 [preauth]
Feb 20 07:51:37 np0005625203.localdomain sshd[49561]: Disconnecting invalid user user 185.246.128.171 port 23163: Too many authentication failures [preauth]
Feb 20 07:51:38 np0005625203.localdomain sshd[50020]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:51:38 np0005625203.localdomain podman[49467]: 2026-02-20 07:51:21.375639745 +0000 UTC m=+0.042773626 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Feb 20 07:51:39 np0005625203.localdomain python3[49453]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 6eddd23e1e6adfbfa713a747123707c02f92ffdbf1913da92f171aba1d6d7856 --format json
Feb 20 07:51:39 np0005625203.localdomain systemd[1]: tmp-crun.3IpRAs.mount: Deactivated successfully.
Feb 20 07:51:39 np0005625203.localdomain podman[50022]: 2026-02-20 07:51:39.023111026 +0000 UTC m=+0.099155526 container exec f6bf94ad66e54cdd610286b233c639418eb2b995978f1a145d6cfa77a016071d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203, ceph=True, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, architecture=x86_64, io.buildah.version=1.42.2, org.opencontainers.image.created=2026-02-09T10:25:24Z, build-date=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, RELEASE=main, io.openshift.expose-services=, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1770267347, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph)
Feb 20 07:51:39 np0005625203.localdomain sudo[49451]: pam_unix(sudo:session): session closed for user root
Feb 20 07:51:39 np0005625203.localdomain podman[50022]: 2026-02-20 07:51:39.132270637 +0000 UTC m=+0.208315157 container exec_died f6bf94ad66e54cdd610286b233c639418eb2b995978f1a145d6cfa77a016071d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, name=rhceph, distribution-scope=public, ceph=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, release=1770267347, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, build-date=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.42.2)
Feb 20 07:51:39 np0005625203.localdomain sudo[50089]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eejtbtgkhlbtmbgrvydhhwsclsxozuwn ; /usr/bin/python3
Feb 20 07:51:39 np0005625203.localdomain sudo[50089]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:51:39 np0005625203.localdomain sshd[50113]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:51:39 np0005625203.localdomain sudo[49534]: pam_unix(sudo:session): session closed for user root
Feb 20 07:51:39 np0005625203.localdomain python3[50098]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Feb 20 07:51:39 np0005625203.localdomain python3[50098]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 --format json
Feb 20 07:51:39 np0005625203.localdomain python3[50098]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 -q --tls-verify=false
Feb 20 07:51:40 np0005625203.localdomain sshd[50020]: Invalid user oracle from 103.171.84.20 port 45648
Feb 20 07:51:40 np0005625203.localdomain sudo[50139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 07:51:40 np0005625203.localdomain sudo[50139]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:51:40 np0005625203.localdomain sudo[50139]: pam_unix(sudo:session): session closed for user root
Feb 20 07:51:40 np0005625203.localdomain sudo[50154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 07:51:40 np0005625203.localdomain sudo[50154]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:51:40 np0005625203.localdomain sshd[50020]: Received disconnect from 103.171.84.20 port 45648:11: Bye Bye [preauth]
Feb 20 07:51:40 np0005625203.localdomain sshd[50020]: Disconnected from invalid user oracle 103.171.84.20 port 45648 [preauth]
Feb 20 07:51:40 np0005625203.localdomain sudo[50154]: pam_unix(sudo:session): session closed for user root
Feb 20 07:51:41 np0005625203.localdomain sudo[50227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 07:51:41 np0005625203.localdomain sudo[50227]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:51:41 np0005625203.localdomain sudo[50227]: pam_unix(sudo:session): session closed for user root
Feb 20 07:51:43 np0005625203.localdomain sshd[50113]: Invalid user user from 185.246.128.171 port 7404
Feb 20 07:51:48 np0005625203.localdomain sshd[50113]: error: maximum authentication attempts exceeded for invalid user user from 185.246.128.171 port 7404 ssh2 [preauth]
Feb 20 07:51:48 np0005625203.localdomain sshd[50113]: Disconnecting invalid user user 185.246.128.171 port 7404: Too many authentication failures [preauth]
Feb 20 07:51:49 np0005625203.localdomain sshd[50254]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:51:52 np0005625203.localdomain podman[50126]: 2026-02-20 07:51:39.466738384 +0000 UTC m=+0.032465511 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Feb 20 07:51:52 np0005625203.localdomain python3[50098]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 2c8610235afe953aa46efb141a5a988799548b22280d65a7e7ab21889422df37 --format json
Feb 20 07:51:52 np0005625203.localdomain sudo[50089]: pam_unix(sudo:session): session closed for user root
Feb 20 07:51:52 np0005625203.localdomain sudo[50318]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-byzisskshpoglzihayuahhymxoajcgdx ; /usr/bin/python3
Feb 20 07:51:52 np0005625203.localdomain sudo[50318]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:51:53 np0005625203.localdomain python3[50320]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Feb 20 07:51:53 np0005625203.localdomain python3[50320]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 --format json
Feb 20 07:51:53 np0005625203.localdomain python3[50320]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 -q --tls-verify=false
Feb 20 07:51:55 np0005625203.localdomain sshd[50254]: Invalid user user from 185.246.128.171 port 61445
Feb 20 07:51:57 np0005625203.localdomain sshd[50254]: Disconnecting invalid user user 185.246.128.171 port 61445: Change of username or service not allowed: (user,ssh-connection) -> (secret,ssh-connection) [preauth]
Feb 20 07:52:00 np0005625203.localdomain sshd[50400]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:52:00 np0005625203.localdomain podman[50333]: 2026-02-20 07:51:53.124335092 +0000 UTC m=+0.047617575 image pull  registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1
Feb 20 07:52:00 np0005625203.localdomain python3[50320]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 9ab5aab6d0c3ec80926032b7acf4cec1d4710f1c2daccd17ae4daa64399ec237 --format json
Feb 20 07:52:00 np0005625203.localdomain sudo[50318]: pam_unix(sudo:session): session closed for user root
Feb 20 07:52:00 np0005625203.localdomain sudo[50435]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ulimtmdqrvjydsnyfbjyjsjflrzfjbbl ; /usr/bin/python3
Feb 20 07:52:01 np0005625203.localdomain sudo[50435]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:52:01 np0005625203.localdomain python3[50438]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Feb 20 07:52:01 np0005625203.localdomain python3[50438]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 --format json
Feb 20 07:52:02 np0005625203.localdomain python3[50438]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1 -q --tls-verify=false
Feb 20 07:52:03 np0005625203.localdomain sshd[50400]: Invalid user secret from 185.246.128.171 port 48184
Feb 20 07:52:04 np0005625203.localdomain sshd[50400]: Disconnecting invalid user secret 185.246.128.171 port 48184: Change of username or service not allowed: (secret,ssh-connection) -> (localadmin,ssh-connection) [preauth]
Feb 20 07:52:04 np0005625203.localdomain sshd[50490]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:52:06 np0005625203.localdomain sshd[50490]: Invalid user localadmin from 185.246.128.171 port 8174
Feb 20 07:52:06 np0005625203.localdomain podman[50451]: 2026-02-20 07:52:02.064219101 +0000 UTC m=+0.045660014 image pull  registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1
Feb 20 07:52:06 np0005625203.localdomain python3[50438]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 4853142d85dba3766b28d28ae195b26f7242230fe3646e9590a7aee2dc2e0dfa --format json
Feb 20 07:52:06 np0005625203.localdomain sudo[50435]: pam_unix(sudo:session): session closed for user root
Feb 20 07:52:07 np0005625203.localdomain sshd[50490]: Disconnecting invalid user localadmin 185.246.128.171 port 8174: Change of username or service not allowed: (localadmin,ssh-connection) -> (~,ssh-connection) [preauth]
Feb 20 07:52:09 np0005625203.localdomain sshd[50517]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:52:09 np0005625203.localdomain sudo[50530]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nzmhepkaouansscyzpfwcrqglwrgmfai ; /usr/bin/python3
Feb 20 07:52:09 np0005625203.localdomain sudo[50530]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:52:09 np0005625203.localdomain python3[50532]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Feb 20 07:52:09 np0005625203.localdomain python3[50532]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 --format json
Feb 20 07:52:09 np0005625203.localdomain python3[50532]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1 -q --tls-verify=false
Feb 20 07:52:11 np0005625203.localdomain sshd[50517]: Invalid user ~ from 185.246.128.171 port 29034
Feb 20 07:52:11 np0005625203.localdomain podman[50545]: 2026-02-20 07:52:09.665991105 +0000 UTC m=+0.030545822 image pull  registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1
Feb 20 07:52:11 np0005625203.localdomain python3[50532]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 9ac6ea63c0fb4851145e847f9ced2f20804afc8472907b63a82d5866f5cf608a --format json
Feb 20 07:52:11 np0005625203.localdomain sudo[50530]: pam_unix(sudo:session): session closed for user root
Feb 20 07:52:12 np0005625203.localdomain sudo[50623]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-csyvfbvbbafruyqcerbebupwqkvbegdd ; /usr/bin/python3
Feb 20 07:52:12 np0005625203.localdomain sudo[50623]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:52:12 np0005625203.localdomain python3[50625]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Feb 20 07:52:12 np0005625203.localdomain python3[50625]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 --format json
Feb 20 07:52:12 np0005625203.localdomain sshd[50517]: Disconnecting invalid user ~ 185.246.128.171 port 29034: Change of username or service not allowed: (~,ssh-connection) -> (service,ssh-connection) [preauth]
Feb 20 07:52:12 np0005625203.localdomain python3[50625]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1 -q --tls-verify=false
Feb 20 07:52:13 np0005625203.localdomain sshd[50676]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:52:14 np0005625203.localdomain podman[50638]: 2026-02-20 07:52:12.407595633 +0000 UTC m=+0.041247350 image pull  registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1
Feb 20 07:52:14 np0005625203.localdomain python3[50625]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect ba1a08ea1c1207b471b1f02cee16ff456b8a812662cce16906d16de330a66d63 --format json
Feb 20 07:52:14 np0005625203.localdomain sudo[50623]: pam_unix(sudo:session): session closed for user root
Feb 20 07:52:14 np0005625203.localdomain sudo[50715]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-irsfdqzlddwaniskhgksroyauafkykei ; /usr/bin/python3
Feb 20 07:52:14 np0005625203.localdomain sudo[50715]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:52:14 np0005625203.localdomain sshd[50718]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:52:14 np0005625203.localdomain python3[50717]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Feb 20 07:52:14 np0005625203.localdomain python3[50717]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 --format json
Feb 20 07:52:14 np0005625203.localdomain python3[50717]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1 -q --tls-verify=false
Feb 20 07:52:15 np0005625203.localdomain sshd[50718]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 07:52:17 np0005625203.localdomain podman[50731]: 2026-02-20 07:52:14.801185132 +0000 UTC m=+0.045583313 image pull  registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1
Feb 20 07:52:17 np0005625203.localdomain python3[50717]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 8576d3a17e57ea28f29435f132f583320941b5aa7bf0aa02e998b09a094d1fe8 --format json
Feb 20 07:52:17 np0005625203.localdomain sudo[50715]: pam_unix(sudo:session): session closed for user root
Feb 20 07:52:17 np0005625203.localdomain sudo[50807]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uebxykwbrzazzzmtslcwmkuambdkvdmq ; /usr/bin/python3
Feb 20 07:52:17 np0005625203.localdomain sudo[50807]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:52:17 np0005625203.localdomain python3[50809]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Feb 20 07:52:17 np0005625203.localdomain python3[50809]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 --format json
Feb 20 07:52:17 np0005625203.localdomain python3[50809]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 -q --tls-verify=false
Feb 20 07:52:17 np0005625203.localdomain sshd[50676]: Invalid user service from 185.246.128.171 port 52634
Feb 20 07:52:18 np0005625203.localdomain sshd[50676]: Disconnecting invalid user service 185.246.128.171 port 52634: Change of username or service not allowed: (service,ssh-connection) -> (super,ssh-connection) [preauth]
Feb 20 07:52:21 np0005625203.localdomain sshd[50872]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:52:22 np0005625203.localdomain podman[50822]: 2026-02-20 07:52:17.619805952 +0000 UTC m=+0.041241520 image pull  registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1
Feb 20 07:52:22 np0005625203.localdomain python3[50809]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 7fcbf63c0504494c8fcaa07583f909a06486472a0982aeac9554c6fdbeb04c9a --format json
Feb 20 07:52:22 np0005625203.localdomain sudo[50807]: pam_unix(sudo:session): session closed for user root
Feb 20 07:52:22 np0005625203.localdomain sudo[50912]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wdgmloulqnikneylsonuwajzizxalkbn ; /usr/bin/python3
Feb 20 07:52:22 np0005625203.localdomain sudo[50912]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:52:22 np0005625203.localdomain python3[50914]: ansible-containers.podman.podman_image Invoked with force=True name=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 validate_certs=False tag=latest pull=True push=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'volume': None, 'extra_args': None} push_args={'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'transport': None} path=None auth_file=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None
Feb 20 07:52:22 np0005625203.localdomain python3[50914]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman image ls registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 --format json
Feb 20 07:52:22 np0005625203.localdomain python3[50914]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman pull registry.redhat.io/rhosp-rhel9/openstack-cron:17.1 -q --tls-verify=false
Feb 20 07:52:23 np0005625203.localdomain sshd[50872]: Invalid user super from 185.246.128.171 port 27507
Feb 20 07:52:24 np0005625203.localdomain sshd[50872]: Disconnecting invalid user super 185.246.128.171 port 27507: Change of username or service not allowed: (super,ssh-connection) -> (sales,ssh-connection) [preauth]
Feb 20 07:52:25 np0005625203.localdomain podman[50926]: 2026-02-20 07:52:22.958257892 +0000 UTC m=+0.040585956 image pull  registry.redhat.io/rhosp-rhel9/openstack-cron:17.1
Feb 20 07:52:25 np0005625203.localdomain python3[50914]: ansible-containers.podman.podman_image PODMAN-IMAGE-DEBUG: /bin/podman inspect 72ddf109f135b64d3116af7b84caaa358dc72e2e60f4c8753fa54fa65b76ba35 --format json
Feb 20 07:52:25 np0005625203.localdomain sudo[50912]: pam_unix(sudo:session): session closed for user root
Feb 20 07:52:25 np0005625203.localdomain sudo[51001]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-guwkwtwpgaarladlfrzkoqvafmdcxtpw ; /usr/bin/python3
Feb 20 07:52:25 np0005625203.localdomain sudo[51001]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:52:26 np0005625203.localdomain python3[51003]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_1 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 20 07:52:26 np0005625203.localdomain sudo[51001]: pam_unix(sudo:session): session closed for user root
Feb 20 07:52:26 np0005625203.localdomain sshd[51019]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:52:26 np0005625203.localdomain sudo[51052]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vgmfntwfcsywvrnwddbanzzgqvcrpmzm ; /usr/bin/python3
Feb 20 07:52:26 np0005625203.localdomain sudo[51052]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:52:26 np0005625203.localdomain sudo[51052]: pam_unix(sudo:session): session closed for user root
Feb 20 07:52:26 np0005625203.localdomain sudo[51071]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hzscpolgckkwjiteonbwtfdxgbwbtjwj ; /usr/bin/python3
Feb 20 07:52:26 np0005625203.localdomain sudo[51071]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:52:26 np0005625203.localdomain sudo[51071]: pam_unix(sudo:session): session closed for user root
Feb 20 07:52:27 np0005625203.localdomain sudo[51175]: tripleo-admin : TTY=pts/0 ; PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-efeozlvbedtuhgpzkcvhftebzezlwmyh ; ANSIBLE_ASYNC_DIR=/tmp/.ansible_async /usr/bin/python3 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573947.1641803-83698-243433501649505/async_wrapper.py 741933824592 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573947.1641803-83698-243433501649505/AnsiballZ_command.py _
Feb 20 07:52:27 np0005625203.localdomain sudo[51175]: pam_unix(sudo:session): session opened for user root(uid=0) by tripleo-admin(uid=1003)
Feb 20 07:52:27 np0005625203.localdomain ansible-async_wrapper.py[51177]: Invoked with 741933824592 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573947.1641803-83698-243433501649505/AnsiballZ_command.py _
Feb 20 07:52:27 np0005625203.localdomain ansible-async_wrapper.py[51181]: Starting module and watcher
Feb 20 07:52:27 np0005625203.localdomain ansible-async_wrapper.py[51181]: Start watching 51182 (3600)
Feb 20 07:52:27 np0005625203.localdomain ansible-async_wrapper.py[51182]: Start module (51182)
Feb 20 07:52:27 np0005625203.localdomain ansible-async_wrapper.py[51177]: Return async_wrapper task started.
Feb 20 07:52:27 np0005625203.localdomain sudo[51175]: pam_unix(sudo:session): session closed for user root
Feb 20 07:52:27 np0005625203.localdomain sudo[51197]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dczffkbrcvsgjuyilynnctrckabkxomh ; /usr/bin/python3
Feb 20 07:52:27 np0005625203.localdomain sudo[51197]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:52:28 np0005625203.localdomain python3[51199]: ansible-ansible.legacy.async_status Invoked with jid=741933824592.51177 mode=status _async_dir=/tmp/.ansible_async
Feb 20 07:52:28 np0005625203.localdomain sudo[51197]: pam_unix(sudo:session): session closed for user root
Feb 20 07:52:28 np0005625203.localdomain sshd[51019]: Invalid user sales from 185.246.128.171 port 52066
Feb 20 07:52:28 np0005625203.localdomain sshd[51019]: Disconnecting invalid user sales 185.246.128.171 port 52066: Change of username or service not allowed: (sales,ssh-connection) -> (user15,ssh-connection) [preauth]
Feb 20 07:52:29 np0005625203.localdomain sshd[51218]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:52:30 np0005625203.localdomain sshd[51218]: Received disconnect from 123.204.132.127 port 41316:11: Bye Bye [preauth]
Feb 20 07:52:30 np0005625203.localdomain sshd[51218]: Disconnected from authenticating user root 123.204.132.127 port 41316 [preauth]
Feb 20 07:52:31 np0005625203.localdomain puppet-user[51202]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Feb 20 07:52:31 np0005625203.localdomain puppet-user[51202]:    (file: /etc/puppet/hiera.yaml)
Feb 20 07:52:31 np0005625203.localdomain puppet-user[51202]: Warning: Undefined variable '::deploy_config_name';
Feb 20 07:52:31 np0005625203.localdomain puppet-user[51202]:    (file & line not available)
Feb 20 07:52:31 np0005625203.localdomain puppet-user[51202]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Feb 20 07:52:31 np0005625203.localdomain puppet-user[51202]:    (file & line not available)
Feb 20 07:52:31 np0005625203.localdomain puppet-user[51202]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8)
Feb 20 07:52:31 np0005625203.localdomain puppet-user[51202]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69)
Feb 20 07:52:31 np0005625203.localdomain puppet-user[51202]: Notice: Compiled catalog for np0005625203.localdomain in environment production in 0.13 seconds
Feb 20 07:52:32 np0005625203.localdomain puppet-user[51202]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Exec[directory-create-etc-my.cnf.d]/returns: executed successfully
Feb 20 07:52:32 np0005625203.localdomain puppet-user[51202]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/File[/etc/my.cnf.d/tripleo.cnf]/ensure: created
Feb 20 07:52:32 np0005625203.localdomain puppet-user[51202]: Notice: /Stage[main]/Tripleo::Profile::Base::Database::Mysql::Client/Augeas[tripleo-mysql-client-conf]/returns: executed successfully
Feb 20 07:52:32 np0005625203.localdomain puppet-user[51202]: Notice: Applied catalog in 0.05 seconds
Feb 20 07:52:32 np0005625203.localdomain puppet-user[51202]: Application:
Feb 20 07:52:32 np0005625203.localdomain puppet-user[51202]:    Initial environment: production
Feb 20 07:52:32 np0005625203.localdomain puppet-user[51202]:    Converged environment: production
Feb 20 07:52:32 np0005625203.localdomain puppet-user[51202]:          Run mode: user
Feb 20 07:52:32 np0005625203.localdomain puppet-user[51202]: Changes:
Feb 20 07:52:32 np0005625203.localdomain puppet-user[51202]:             Total: 3
Feb 20 07:52:32 np0005625203.localdomain puppet-user[51202]: Events:
Feb 20 07:52:32 np0005625203.localdomain puppet-user[51202]:           Success: 3
Feb 20 07:52:32 np0005625203.localdomain puppet-user[51202]:             Total: 3
Feb 20 07:52:32 np0005625203.localdomain puppet-user[51202]: Resources:
Feb 20 07:52:32 np0005625203.localdomain puppet-user[51202]:           Changed: 3
Feb 20 07:52:32 np0005625203.localdomain puppet-user[51202]:       Out of sync: 3
Feb 20 07:52:32 np0005625203.localdomain puppet-user[51202]:             Total: 10
Feb 20 07:52:32 np0005625203.localdomain puppet-user[51202]: Time:
Feb 20 07:52:32 np0005625203.localdomain puppet-user[51202]:          Schedule: 0.00
Feb 20 07:52:32 np0005625203.localdomain puppet-user[51202]:              File: 0.00
Feb 20 07:52:32 np0005625203.localdomain puppet-user[51202]:            Augeas: 0.02
Feb 20 07:52:32 np0005625203.localdomain puppet-user[51202]:              Exec: 0.02
Feb 20 07:52:32 np0005625203.localdomain puppet-user[51202]:    Transaction evaluation: 0.05
Feb 20 07:52:32 np0005625203.localdomain puppet-user[51202]:    Catalog application: 0.05
Feb 20 07:52:32 np0005625203.localdomain puppet-user[51202]:    Config retrieval: 0.17
Feb 20 07:52:32 np0005625203.localdomain puppet-user[51202]:          Last run: 1771573952
Feb 20 07:52:32 np0005625203.localdomain puppet-user[51202]:        Filebucket: 0.00
Feb 20 07:52:32 np0005625203.localdomain puppet-user[51202]:             Total: 0.06
Feb 20 07:52:32 np0005625203.localdomain puppet-user[51202]: Version:
Feb 20 07:52:32 np0005625203.localdomain puppet-user[51202]:            Config: 1771573951
Feb 20 07:52:32 np0005625203.localdomain puppet-user[51202]:            Puppet: 7.10.0
Feb 20 07:52:32 np0005625203.localdomain ansible-async_wrapper.py[51182]: Module complete (51182)
Feb 20 07:52:32 np0005625203.localdomain sshd[51316]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:52:32 np0005625203.localdomain ansible-async_wrapper.py[51181]: Done in kid B.
Feb 20 07:52:34 np0005625203.localdomain sshd[51316]: Invalid user user15 from 185.246.128.171 port 19287
Feb 20 07:52:35 np0005625203.localdomain sshd[51316]: Disconnecting invalid user user15 185.246.128.171 port 19287: Change of username or service not allowed: (user15,ssh-connection) -> (tempuser,ssh-connection) [preauth]
Feb 20 07:52:38 np0005625203.localdomain sudo[51331]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gxwyvjukjbfftdhajlzmricjqkgygfin ; /usr/bin/python3
Feb 20 07:52:38 np0005625203.localdomain sudo[51331]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:52:38 np0005625203.localdomain sshd[51333]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:52:38 np0005625203.localdomain sshd[51335]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:52:38 np0005625203.localdomain python3[51334]: ansible-ansible.legacy.async_status Invoked with jid=741933824592.51177 mode=status _async_dir=/tmp/.ansible_async
Feb 20 07:52:38 np0005625203.localdomain sudo[51331]: pam_unix(sudo:session): session closed for user root
Feb 20 07:52:39 np0005625203.localdomain sshd[51335]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 07:52:39 np0005625203.localdomain sudo[51350]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lnepurjhqfrnhzpovqyxilkbbcoyrcmf ; /usr/bin/python3
Feb 20 07:52:39 np0005625203.localdomain sudo[51350]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:52:39 np0005625203.localdomain python3[51352]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Feb 20 07:52:39 np0005625203.localdomain sudo[51350]: pam_unix(sudo:session): session closed for user root
Feb 20 07:52:39 np0005625203.localdomain sudo[51367]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nxhheeyjgfanmbzcmsqfrevdlttloxpf ; /usr/bin/python3
Feb 20 07:52:39 np0005625203.localdomain sudo[51367]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:52:39 np0005625203.localdomain python3[51369]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 20 07:52:39 np0005625203.localdomain sudo[51367]: pam_unix(sudo:session): session closed for user root
Feb 20 07:52:40 np0005625203.localdomain sudo[51415]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vkbrgvopqwemlgduljvelpezbofjxzcw ; /usr/bin/python3
Feb 20 07:52:40 np0005625203.localdomain sudo[51415]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:52:40 np0005625203.localdomain python3[51417]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:52:40 np0005625203.localdomain sudo[51415]: pam_unix(sudo:session): session closed for user root
Feb 20 07:52:40 np0005625203.localdomain sudo[51458]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gfxeszolvjrrfjgxnavagjepsigdqnts ; /usr/bin/python3
Feb 20 07:52:40 np0005625203.localdomain sudo[51458]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:52:40 np0005625203.localdomain python3[51460]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/container-puppet/puppetlabs/facter.conf setype=svirt_sandbox_file_t selevel=s0 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573960.0504158-84037-170671505186265/source _original_basename=tmp3n7z4sg8 follow=False checksum=53908622cb869db5e2e2a68e737aa2ab1a872111 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None attributes=None
Feb 20 07:52:40 np0005625203.localdomain sudo[51458]: pam_unix(sudo:session): session closed for user root
Feb 20 07:52:40 np0005625203.localdomain sudo[51488]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-inuzytmztmzdzdhszjnlirdocbvhgriz ; /usr/bin/python3
Feb 20 07:52:40 np0005625203.localdomain sudo[51488]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:52:41 np0005625203.localdomain python3[51490]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:52:41 np0005625203.localdomain sudo[51488]: pam_unix(sudo:session): session closed for user root
Feb 20 07:52:41 np0005625203.localdomain sshd[51333]: Invalid user tempuser from 185.246.128.171 port 49119
Feb 20 07:52:41 np0005625203.localdomain sudo[51504]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dbrirhrfndrqswkqnqyfuvlshlsmgmoq ; /usr/bin/python3
Feb 20 07:52:41 np0005625203.localdomain sudo[51504]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:52:41 np0005625203.localdomain sudo[51509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 07:52:41 np0005625203.localdomain sudo[51509]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:52:41 np0005625203.localdomain sudo[51509]: pam_unix(sudo:session): session closed for user root
Feb 20 07:52:41 np0005625203.localdomain sudo[51539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 07:52:41 np0005625203.localdomain sudo[51539]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:52:41 np0005625203.localdomain sudo[51504]: pam_unix(sudo:session): session closed for user root
Feb 20 07:52:42 np0005625203.localdomain sudo[51621]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wgiftskzdgqtjnitfijnzeqveldtsmlx ; /usr/bin/python3
Feb 20 07:52:42 np0005625203.localdomain sudo[51621]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:52:42 np0005625203.localdomain python3[51623]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None
Feb 20 07:52:42 np0005625203.localdomain sudo[51621]: pam_unix(sudo:session): session closed for user root
Feb 20 07:52:42 np0005625203.localdomain sudo[51539]: pam_unix(sudo:session): session closed for user root
Feb 20 07:52:42 np0005625203.localdomain sudo[51672]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ypjyucfewqyvfhbijnlhnnmsgxfrwjye ; /usr/bin/python3
Feb 20 07:52:42 np0005625203.localdomain sudo[51672]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:52:42 np0005625203.localdomain python3[51674]: ansible-file Invoked with path=/var/lib/tripleo-config/container-puppet-config mode=448 recurse=True setype=container_file_t force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 07:52:42 np0005625203.localdomain sudo[51672]: pam_unix(sudo:session): session closed for user root
Feb 20 07:52:42 np0005625203.localdomain sudo[51688]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-auwjklabaysuaucfejktuckgctkivywg ; /usr/bin/python3
Feb 20 07:52:42 np0005625203.localdomain sudo[51688]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:52:43 np0005625203.localdomain python3[51690]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=False puppet_config=/var/lib/container-puppet/container-puppet.json short_hostname=np0005625203 step=1 update_config_hash_only=False
Feb 20 07:52:43 np0005625203.localdomain sudo[51688]: pam_unix(sudo:session): session closed for user root
Feb 20 07:52:43 np0005625203.localdomain sshd[51333]: Disconnecting invalid user tempuser 185.246.128.171 port 49119: Change of username or service not allowed: (tempuser,ssh-connection) -> (ftp_test,ssh-connection) [preauth]
Feb 20 07:52:43 np0005625203.localdomain sudo[51704]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ngmhoeltnwlkbrxgqykuhuypppkjqbvt ; /usr/bin/python3
Feb 20 07:52:43 np0005625203.localdomain sudo[51704]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:52:43 np0005625203.localdomain python3[51706]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:52:43 np0005625203.localdomain sudo[51704]: pam_unix(sudo:session): session closed for user root
Feb 20 07:52:43 np0005625203.localdomain sudo[51720]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gvfadyzqbqhrcravayhzjfvwsbkfjbef ; /usr/bin/python3
Feb 20 07:52:43 np0005625203.localdomain sudo[51720]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:52:43 np0005625203.localdomain python3[51722]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_1 config_pattern=container-puppet-*.json config_overrides={} debug=True
Feb 20 07:52:43 np0005625203.localdomain sudo[51720]: pam_unix(sudo:session): session closed for user root
Feb 20 07:52:44 np0005625203.localdomain sudo[51736]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uzvrotqrumzpdxkgpvkhwiiqgcebnrfi ; /usr/bin/python3
Feb 20 07:52:44 np0005625203.localdomain sudo[51736]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:52:44 np0005625203.localdomain python3[51738]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Feb 20 07:52:44 np0005625203.localdomain sudo[51736]: pam_unix(sudo:session): session closed for user root
Feb 20 07:52:44 np0005625203.localdomain sudo[51764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 07:52:44 np0005625203.localdomain sudo[51764]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:52:44 np0005625203.localdomain sudo[51764]: pam_unix(sudo:session): session closed for user root
Feb 20 07:52:45 np0005625203.localdomain sudo[51792]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-flksvyglkhtfajghuzndldukesksjrvj ; /usr/bin/python3
Feb 20 07:52:45 np0005625203.localdomain sudo[51792]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:52:45 np0005625203.localdomain python3[51794]: ansible-tripleo_container_manage Invoked with config_id=tripleo_puppet_step1 config_dir=/var/lib/tripleo-config/container-puppet-config/step_1 config_patterns=container-puppet-*.json config_overrides={} concurrency=6 log_base_path=/var/log/containers/stdouts debug=False
Feb 20 07:52:45 np0005625203.localdomain podman[51963]: 2026-02-20 07:52:45.536199676 +0000 UTC m=+0.061076136 container create 7ac329b858f59d94c41f916651c01ed4ddd497ef2a4aa921841100b8e71c7d98 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-ref=705339545363fec600102567c4e923938e0f43b3, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=container-puppet-iscsid, url=https://www.redhat.com, build-date=2026-01-12T22:34:43Z, name=rhosp-rhel9/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_puppet_step1, release=1766032510, architecture=x86_64, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.buildah.version=1.41.5, version=17.1.13)
Feb 20 07:52:45 np0005625203.localdomain podman[51952]: 2026-02-20 07:52:45.557656775 +0000 UTC m=+0.088803474 container create cbb1f8627e016fb05c68b71668d1592638bcc97a71e255d8465867dd0d6a8d13 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=container-puppet-nova_libvirt, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, release=1766032510, version=17.1.13, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2026-01-12T23:31:49Z, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-type=git, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, maintainer=OpenStack TripleO Team, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:31:49Z, com.redhat.component=openstack-nova-libvirt-container, description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp-rhel9/openstack-nova-libvirt, config_id=tripleo_puppet_step1, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20260112.1)
Feb 20 07:52:45 np0005625203.localdomain podman[51996]: 2026-02-20 07:52:45.577524935 +0000 UTC m=+0.064319914 container create 458e70bfd6a3d9a2b6db9f2a21ce44e1ab35c706d3161a464214b6b8ab78c11b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-cron, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, release=1766032510, vcs-type=git, config_id=tripleo_puppet_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, url=https://www.redhat.com, distribution-scope=public, vendor=Red Hat, Inc., container_name=container-puppet-crond, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, architecture=x86_64, version=17.1.13, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee)
Feb 20 07:52:45 np0005625203.localdomain systemd[1]: Started libpod-conmon-cbb1f8627e016fb05c68b71668d1592638bcc97a71e255d8465867dd0d6a8d13.scope.
Feb 20 07:52:45 np0005625203.localdomain systemd[1]: Started libpod-conmon-7ac329b858f59d94c41f916651c01ed4ddd497ef2a4aa921841100b8e71c7d98.scope.
Feb 20 07:52:45 np0005625203.localdomain podman[52002]: 2026-02-20 07:52:45.597925381 +0000 UTC m=+0.068482790 container create 820c6f010736fb6e83a6267ff183d3021b29ef2addaf11bc201444b2280b49a9 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_puppet_step1, com.redhat.component=openstack-qdrouterd-container, name=rhosp-rhel9/openstack-qdrouterd, container_name=container-puppet-metrics_qdr, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, batch=17.1_20260112.1, version=17.1.13, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-type=git, release=1766032510, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 07:52:45 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 07:52:45 np0005625203.localdomain systemd[1]: Started libpod-conmon-458e70bfd6a3d9a2b6db9f2a21ce44e1ab35c706d3161a464214b6b8ab78c11b.scope.
Feb 20 07:52:45 np0005625203.localdomain podman[51963]: 2026-02-20 07:52:45.505389656 +0000 UTC m=+0.030266126 image pull  registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1
Feb 20 07:52:45 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 07:52:45 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ed31fcdcaee63ea79499a12bac1a1c15d8713ced0f8f605d87edc1f9667054a/merged/tmp/iscsi.host supports timestamps until 2038 (0x7fffffff)
Feb 20 07:52:45 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a3dde96d66e26fbb5d914ff807f950d08e80e1daf4f655a6bd9c579da655dfe/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Feb 20 07:52:45 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ed31fcdcaee63ea79499a12bac1a1c15d8713ced0f8f605d87edc1f9667054a/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Feb 20 07:52:45 np0005625203.localdomain podman[51952]: 2026-02-20 07:52:45.507086326 +0000 UTC m=+0.038233035 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Feb 20 07:52:45 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 07:52:45 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a6f20b244995edeba33e1e070a87c3f1ef57f7787ff2d18dff693e5ba4f7644/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Feb 20 07:52:45 np0005625203.localdomain podman[51963]: 2026-02-20 07:52:45.617802221 +0000 UTC m=+0.142678681 container init 7ac329b858f59d94c41f916651c01ed4ddd497ef2a4aa921841100b8e71c7d98 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.13, build-date=2026-01-12T22:34:43Z, name=rhosp-rhel9/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, tcib_managed=true, batch=17.1_20260112.1, release=1766032510, config_id=tripleo_puppet_step1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.created=2026-01-12T22:34:43Z, architecture=x86_64, distribution-scope=public, vcs-ref=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, container_name=container-puppet-iscsid, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc.)
Feb 20 07:52:45 np0005625203.localdomain podman[51963]: 2026-02-20 07:52:45.626170684 +0000 UTC m=+0.151047144 container start 7ac329b858f59d94c41f916651c01ed4ddd497ef2a4aa921841100b8e71c7d98 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, distribution-scope=public, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, architecture=x86_64, tcib_managed=true, release=1766032510, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:34:43Z, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:34:43Z, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, config_id=tripleo_puppet_step1, container_name=container-puppet-iscsid)
Feb 20 07:52:45 np0005625203.localdomain podman[51963]: 2026-02-20 07:52:45.626438982 +0000 UTC m=+0.151315442 container attach 7ac329b858f59d94c41f916651c01ed4ddd497ef2a4aa921841100b8e71c7d98 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:34:43Z, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:34:43Z, tcib_managed=true, container_name=container-puppet-iscsid, name=rhosp-rhel9/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, batch=17.1_20260112.1, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, config_id=tripleo_puppet_step1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid)
Feb 20 07:52:45 np0005625203.localdomain systemd[1]: Started libpod-conmon-820c6f010736fb6e83a6267ff183d3021b29ef2addaf11bc201444b2280b49a9.scope.
Feb 20 07:52:45 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 07:52:45 np0005625203.localdomain podman[51996]: 2026-02-20 07:52:45.546606151 +0000 UTC m=+0.033401150 image pull  registry.redhat.io/rhosp-rhel9/openstack-cron:17.1
Feb 20 07:52:45 np0005625203.localdomain podman[51986]: 2026-02-20 07:52:45.64692071 +0000 UTC m=+0.145645580 container create 0b36adae283f8ba4be5aeb5f6a76a838a1c219c945dacc3ec826f24a1d8b723a (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1766032510, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=container-puppet-collectd, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_puppet_step1, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude 
tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-type=git, url=https://www.redhat.com)
Feb 20 07:52:45 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/358e1b67e7a5a713aac9c0a160bdddbfbcb9b13948cffdbbefd6a9946a4ee797/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Feb 20 07:52:45 np0005625203.localdomain podman[52002]: 2026-02-20 07:52:45.562207482 +0000 UTC m=+0.032764921 image pull  registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1
Feb 20 07:52:45 np0005625203.localdomain podman[51986]: 2026-02-20 07:52:45.610553372 +0000 UTC m=+0.109278252 image pull  registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1
Feb 20 07:52:47 np0005625203.localdomain sshd[52059]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:52:48 np0005625203.localdomain systemd[1]: Started libpod-conmon-0b36adae283f8ba4be5aeb5f6a76a838a1c219c945dacc3ec826f24a1d8b723a.scope.
Feb 20 07:52:48 np0005625203.localdomain podman[52002]: 2026-02-20 07:52:48.297513103 +0000 UTC m=+2.768070562 container init 820c6f010736fb6e83a6267ff183d3021b29ef2addaf11bc201444b2280b49a9 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, container_name=container-puppet-metrics_qdr, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-qdrouterd, config_id=tripleo_puppet_step1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', 
'/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, release=1766032510, vcs-type=git, build-date=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64)
Feb 20 07:52:48 np0005625203.localdomain podman[51952]: 2026-02-20 07:52:48.315398983 +0000 UTC m=+2.846545712 container init cbb1f8627e016fb05c68b71668d1592638bcc97a71e255d8465867dd0d6a8d13 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, managed_by=tripleo_ansible, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=container-puppet-nova_libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:31:49Z, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-libvirt-container, batch=17.1_20260112.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp-rhel9/openstack-nova-libvirt, config_id=tripleo_puppet_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 
'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, release=1766032510, build-date=2026-01-12T23:31:49Z)
Feb 20 07:52:48 np0005625203.localdomain podman[51952]: 2026-02-20 07:52:48.322971602 +0000 UTC m=+2.854118341 container start cbb1f8627e016fb05c68b71668d1592638bcc97a71e255d8465867dd0d6a8d13 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, url=https://www.redhat.com, batch=17.1_20260112.1, com.redhat.component=openstack-nova-libvirt-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=container-puppet-nova_libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, name=rhosp-rhel9/openstack-nova-libvirt, vcs-type=git, managed_by=tripleo_ansible, io.openshift.expose-services=, build-date=2026-01-12T23:31:49Z, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude 
tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, config_id=tripleo_puppet_step1, org.opencontainers.image.created=2026-01-12T23:31:49Z, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64)
Feb 20 07:52:48 np0005625203.localdomain podman[51952]: 2026-02-20 07:52:48.323278451 +0000 UTC m=+2.854425180 container attach cbb1f8627e016fb05c68b71668d1592638bcc97a71e255d8465867dd0d6a8d13 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, build-date=2026-01-12T23:31:49Z, name=rhosp-rhel9/openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, com.redhat.component=openstack-nova-libvirt-container, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=container-puppet-nova_libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1766032510, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:31:49Z, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_puppet_step1, batch=17.1_20260112.1, architecture=x86_64)
Feb 20 07:52:48 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 07:52:48 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8516f80a305bf91ea394ab1c0000d8e781a49a1e417cb98413c75dee5cfb0223/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Feb 20 07:52:48 np0005625203.localdomain podman[51986]: 2026-02-20 07:52:48.343194842 +0000 UTC m=+2.841919732 container init 0b36adae283f8ba4be5aeb5f6a76a838a1c219c945dacc3ec826f24a1d8b723a (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, config_id=tripleo_puppet_step1, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, managed_by=tripleo_ansible, 
release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, build-date=2026-01-12T22:10:15Z, tcib_managed=true, container_name=container-puppet-collectd, batch=17.1_20260112.1, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, architecture=x86_64, maintainer=OpenStack TripleO Team)
Feb 20 07:52:48 np0005625203.localdomain systemd[1]: tmp-crun.cjwMVl.mount: Deactivated successfully.
Feb 20 07:52:48 np0005625203.localdomain podman[51996]: 2026-02-20 07:52:48.354658588 +0000 UTC m=+2.841453597 container init 458e70bfd6a3d9a2b6db9f2a21ce44e1ab35c706d3161a464214b6b8ab78c11b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, distribution-scope=public, name=rhosp-rhel9/openstack-cron, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, container_name=container-puppet-crond, version=17.1.13, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack 
Platform 17.1 cron, tcib_managed=true, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_puppet_step1, batch=17.1_20260112.1, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, build-date=2026-01-12T22:10:15Z, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 cron)
Feb 20 07:52:48 np0005625203.localdomain podman[52002]: 2026-02-20 07:52:48.364827616 +0000 UTC m=+2.835385075 container start 820c6f010736fb6e83a6267ff183d3021b29ef2addaf11bc201444b2280b49a9 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, build-date=2026-01-12T22:10:14Z, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, io.buildah.version=1.41.5, vcs-type=git, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, name=rhosp-rhel9/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_puppet_step1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://www.redhat.com, architecture=x86_64, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, container_name=container-puppet-metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible)
Feb 20 07:52:48 np0005625203.localdomain podman[52002]: 2026-02-20 07:52:48.365466585 +0000 UTC m=+2.836024074 container attach 820c6f010736fb6e83a6267ff183d3021b29ef2addaf11bc201444b2280b49a9 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, batch=17.1_20260112.1, architecture=x86_64, config_id=tripleo_puppet_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:14Z, 
distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=container-puppet-metrics_qdr, io.openshift.expose-services=, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, url=https://www.redhat.com, io.buildah.version=1.41.5, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, version=17.1.13, release=1766032510)
Feb 20 07:52:48 np0005625203.localdomain podman[51986]: 2026-02-20 07:52:48.409832195 +0000 UTC m=+2.908557065 container start 0b36adae283f8ba4be5aeb5f6a76a838a1c219c945dacc3ec826f24a1d8b723a (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, distribution-scope=public, io.buildah.version=1.41.5, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, container_name=container-puppet-collectd, release=1766032510, url=https://www.redhat.com, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, 
cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_puppet_step1, name=rhosp-rhel9/openstack-collectd, com.redhat.component=openstack-collectd-container, build-date=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, maintainer=OpenStack TripleO Team)
Feb 20 07:52:48 np0005625203.localdomain podman[51986]: 2026-02-20 07:52:48.410137724 +0000 UTC m=+2.908862654 container attach 0b36adae283f8ba4be5aeb5f6a76a838a1c219c945dacc3ec826f24a1d8b723a (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, com.redhat.component=openstack-collectd-container, release=1766032510, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, url=https://www.redhat.com, config_id=tripleo_puppet_step1, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, container_name=container-puppet-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, distribution-scope=public, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', 
'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp-rhel9/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd)
Feb 20 07:52:48 np0005625203.localdomain podman[51996]: 2026-02-20 07:52:48.419065964 +0000 UTC m=+2.905860983 container start 458e70bfd6a3d9a2b6db9f2a21ce44e1ab35c706d3161a464214b6b8ab78c11b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, distribution-scope=public, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, managed_by=tripleo_ansible, io.buildah.version=1.41.5, container_name=container-puppet-crond, vcs-type=git, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, tcib_managed=true, io.openshift.expose-services=, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_puppet_step1)
Feb 20 07:52:48 np0005625203.localdomain podman[51996]: 2026-02-20 07:52:48.419461276 +0000 UTC m=+2.906256265 container attach 458e70bfd6a3d9a2b6db9f2a21ce44e1ab35c706d3161a464214b6b8ab78c11b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, release=1766032510, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', 
'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-type=git, managed_by=tripleo_ansible, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, container_name=container-puppet-crond, config_id=tripleo_puppet_step1, name=rhosp-rhel9/openstack-cron, com.redhat.component=openstack-cron-container, distribution-scope=public, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team)
Feb 20 07:52:49 np0005625203.localdomain podman[51863]: 2026-02-20 07:52:45.411329254 +0000 UTC m=+0.033216614 image pull  registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1
Feb 20 07:52:49 np0005625203.localdomain podman[52202]: 2026-02-20 07:52:49.580174335 +0000 UTC m=+0.055512947 container create 1d7389e20e828014f4d1835f355506bcfd040c3d888962e8f16c6fd2ccdcd8a8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, release=1766032510, container_name=container-puppet-ceilometer, architecture=x86_64, tcib_managed=true, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:07:24Z, config_id=tripleo_puppet_step1, description=Red Hat OpenStack Platform 17.1 ceilometer-central, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, version=17.1.13, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, com.redhat.component=openstack-ceilometer-central-container, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-central, org.opencontainers.image.created=2026-01-12T23:07:24Z, io.openshift.expose-services=, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', 
'/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com)
Feb 20 07:52:49 np0005625203.localdomain podman[52202]: 2026-02-20 07:52:49.547573271 +0000 UTC m=+0.022911893 image pull  registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1
Feb 20 07:52:49 np0005625203.localdomain systemd[1]: Started libpod-conmon-1d7389e20e828014f4d1835f355506bcfd040c3d888962e8f16c6fd2ccdcd8a8.scope.
Feb 20 07:52:49 np0005625203.localdomain systemd[1]: tmp-crun.waGW3T.mount: Deactivated successfully.
Feb 20 07:52:49 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 07:52:49 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18b2a38b03b01d01d2686bb7a91f0fbbb40a40764e34293cc7422e7526e11c3c/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Feb 20 07:52:49 np0005625203.localdomain podman[52202]: 2026-02-20 07:52:49.70977325 +0000 UTC m=+0.185111862 container init 1d7389e20e828014f4d1835f355506bcfd040c3d888962e8f16c6fd2ccdcd8a8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:24Z, maintainer=OpenStack TripleO Team, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, description=Red Hat OpenStack Platform 17.1 ceilometer-central, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, architecture=x86_64, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=container-puppet-ceilometer, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-central, config_id=tripleo_puppet_step1, version=17.1.13, io.openshift.expose-services=, distribution-scope=public, name=rhosp-rhel9/openstack-ceilometer-central, release=1766032510, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, url=https://www.redhat.com, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, com.redhat.component=openstack-ceilometer-central-container, org.opencontainers.image.created=2026-01-12T23:07:24Z)
Feb 20 07:52:49 np0005625203.localdomain podman[52202]: 2026-02-20 07:52:49.718806363 +0000 UTC m=+0.194145005 container start 1d7389e20e828014f4d1835f355506bcfd040c3d888962e8f16c6fd2ccdcd8a8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, architecture=x86_64, com.redhat.component=openstack-ceilometer-central-container, release=1766032510, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-central, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:24Z, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_puppet_step1, version=17.1.13, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, io.openshift.expose-services=, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:07:24Z, container_name=container-puppet-ceilometer, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-central, summary=Red Hat OpenStack Platform 17.1 ceilometer-central)
Feb 20 07:52:49 np0005625203.localdomain podman[52202]: 2026-02-20 07:52:49.719136943 +0000 UTC m=+0.194475555 container attach 1d7389e20e828014f4d1835f355506bcfd040c3d888962e8f16c6fd2ccdcd8a8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, description=Red Hat OpenStack Platform 17.1 ceilometer-central, io.buildah.version=1.41.5, com.redhat.component=openstack-ceilometer-central-container, org.opencontainers.image.created=2026-01-12T23:07:24Z, batch=17.1_20260112.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, distribution-scope=public, build-date=2026-01-12T23:07:24Z, name=rhosp-rhel9/openstack-ceilometer-central, maintainer=OpenStack TripleO Team, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, vcs-type=git, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, io.openshift.expose-services=, config_id=tripleo_puppet_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, container_name=container-puppet-ceilometer, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 07:52:49 np0005625203.localdomain ovs-vsctl[52246]: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]:    (file: /etc/puppet/hiera.yaml)
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]: Warning: Undefined variable '::deploy_config_name';
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]:    (file & line not available)
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52107]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52107]:    (file: /etc/puppet/hiera.yaml)
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52107]: Warning: Undefined variable '::deploy_config_name';
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52107]:    (file & line not available)
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]:    (file & line not available)
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52107]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52107]:    (file & line not available)
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52138]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52138]:    (file: /etc/puppet/hiera.yaml)
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52138]: Warning: Undefined variable '::deploy_config_name';
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52138]:    (file & line not available)
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52065]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52065]:    (file: /etc/puppet/hiera.yaml)
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52065]: Warning: Undefined variable '::deploy_config_name';
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52065]:    (file & line not available)
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52065]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52065]:    (file & line not available)
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52138]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52138]:    (file & line not available)
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52138]: Notice: Compiled catalog for np0005625203.localdomain in environment production in 0.08 seconds
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52105]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52105]:    (file: /etc/puppet/hiera.yaml)
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52105]: Warning: Undefined variable '::deploy_config_name';
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52105]:    (file & line not available)
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52065]: Notice: Compiled catalog for np0005625203.localdomain in environment production in 0.09 seconds
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52138]: Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/File[/etc/logrotate-crond.conf]/ensure: defined content as '{sha256}1c3202f58bd2ae16cb31badcbb7f0d4e6697157b987d1887736ad96bb73d70b0'
Feb 20 07:52:50 np0005625203.localdomain crontab[52532]: (root) LIST (root)
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52105]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52105]:    (file & line not available)
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52138]: Notice: /Stage[main]/Tripleo::Profile::Base::Logging::Logrotate/Cron[logrotate-crond]/ensure: created
Feb 20 07:52:50 np0005625203.localdomain crontab[52533]: (root) REPLACE (root)
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52138]: Notice: Applied catalog in 0.04 seconds
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52138]: Application:
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52138]:    Initial environment: production
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52138]:    Converged environment: production
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52138]:          Run mode: user
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52138]: Changes:
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52138]:             Total: 2
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52138]: Events:
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52138]:           Success: 2
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52138]:             Total: 2
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52138]: Resources:
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52138]:           Changed: 2
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52138]:       Out of sync: 2
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52138]:           Skipped: 7
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52138]:             Total: 9
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52138]: Time:
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52138]:              File: 0.01
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52138]:              Cron: 0.01
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52138]:    Transaction evaluation: 0.03
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52138]:    Catalog application: 0.04
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52138]:    Config retrieval: 0.10
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52138]:          Last run: 1771573970
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52138]:             Total: 0.04
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52138]: Version:
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52138]:            Config: 1771573970
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52138]:            Puppet: 7.10.0
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52105]: Notice: Accepting previously invalid value for target type 'Integer'
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52065]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Exec[reset-iscsi-initiator-name]/returns: executed successfully
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52065]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/File[/etc/iscsi/.initiator_reset]/ensure: created
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52105]: Notice: Compiled catalog for np0005625203.localdomain in environment production in 0.13 seconds
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52065]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Exec[sync-iqn-to-host]/returns: executed successfully
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52107]: Warning: Scope(Class[Nova]): The os_region_name parameter is deprecated and will be removed \
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52107]: in a future release. Use nova::cinder::os_region_name instead
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52107]: Warning: Scope(Class[Nova]): The catalog_info parameter is deprecated and will be removed \
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52107]: in a future release. Use nova::cinder::catalog_info instead
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52105]: Notice: /Stage[main]/Qdr::Config/File[/var/lib/qdrouterd]/owner: owner changed 'qdrouterd' to 'root'
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52105]: Notice: /Stage[main]/Qdr::Config/File[/var/lib/qdrouterd]/group: group changed 'qdrouterd' to 'root'
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52105]: Notice: /Stage[main]/Qdr::Config/File[/var/lib/qdrouterd]/mode: mode changed '0700' to '0755'
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52105]: Notice: /Stage[main]/Qdr::Config/File[/etc/qpid-dispatch/ssl]/ensure: created
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52105]: Notice: /Stage[main]/Qdr::Config/File[qdrouterd.conf]/content: content changed '{sha256}89e10d8896247f992c5f0baf027c25a8ca5d0441be46d8859d9db2067ea74cd3' to '{sha256}61f75742dea86a220625b959c1b58b0a7cef70d348e0bf1fab4bdf0569b71f31'
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52105]: Notice: /Stage[main]/Qdr::Config/File[/var/log/qdrouterd]/ensure: created
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52105]: Notice: /Stage[main]/Qdr::Config/File[/var/log/qdrouterd/metrics_qdr.log]/ensure: created
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52105]: Notice: Applied catalog in 0.03 seconds
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52105]: Application:
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52105]:    Initial environment: production
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52105]:    Converged environment: production
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52105]:          Run mode: user
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52105]: Changes:
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52105]:             Total: 7
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52105]: Events:
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52105]:           Success: 7
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52105]:             Total: 7
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52105]: Resources:
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52105]:           Skipped: 13
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52105]:           Changed: 5
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52105]:       Out of sync: 5
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52105]:             Total: 20
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52105]: Time:
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52105]:              File: 0.01
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52105]:    Transaction evaluation: 0.03
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52105]:    Catalog application: 0.03
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52105]:    Config retrieval: 0.16
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52105]:          Last run: 1771573970
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52105]:             Total: 0.03
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52105]: Version:
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52105]:            Config: 1771573970
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52105]:            Puppet: 7.10.0
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]: Notice: Compiled catalog for np0005625203.localdomain in environment production in 0.35 seconds
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52107]: Warning: Unknown variable: '::nova::compute::verify_glance_signatures'. (file: /etc/puppet/modules/nova/manifests/glance.pp, line: 62, column: 41)
Feb 20 07:52:50 np0005625203.localdomain systemd[1]: libpod-458e70bfd6a3d9a2b6db9f2a21ce44e1ab35c706d3161a464214b6b8ab78c11b.scope: Deactivated successfully.
Feb 20 07:52:50 np0005625203.localdomain systemd[1]: libpod-458e70bfd6a3d9a2b6db9f2a21ce44e1ab35c706d3161a464214b6b8ab78c11b.scope: Consumed 2.066s CPU time.
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/content: content changed '{sha256}aea388a73ebafc7e07a81ddb930a91099211f660eee55fbf92c13007a77501e5' to '{sha256}2523d01ee9c3022c0e9f61d896b1474a168e18472aee141cc278e69fe13f41c1'
Feb 20 07:52:50 np0005625203.localdomain podman[51996]: 2026-02-20 07:52:50.591192723 +0000 UTC m=+5.077987782 container died 458e70bfd6a3d9a2b6db9f2a21ce44e1ab35c706d3161a464214b6b8ab78c11b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_puppet_step1, batch=17.1_20260112.1, container_name=container-puppet-crond, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:15Z, architecture=x86_64, version=17.1.13, tcib_managed=true, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, 
distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.41.5, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, vcs-type=git, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp-rhel9/openstack-cron)
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/owner: owner changed 'collectd' to 'root'
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52107]: Warning: Unknown variable: '::nova::compute::libvirt::remove_unused_base_images'. (file: /etc/puppet/modules/nova/manifests/compute/image_cache.pp, line: 44, column: 5)
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52107]: Warning: Unknown variable: '::nova::compute::libvirt::remove_unused_original_minimum_age_seconds'. (file: /etc/puppet/modules/nova/manifests/compute/image_cache.pp, line: 48, column: 5)
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/group: group changed 'collectd' to 'root'
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52107]: Warning: Unknown variable: '::nova::compute::libvirt::remove_unused_resized_minimum_age_seconds'. (file: /etc/puppet/modules/nova/manifests/compute/image_cache.pp, line: 52, column: 5)
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]: Notice: /Stage[main]/Collectd::Config/File[collectd.conf]/mode: mode changed '0644' to '0640'
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52107]: Warning: Scope(Class[Tripleo::Profile::Base::Nova::Compute]): The keymgr_backend parameter has been deprecated
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52107]: Warning: Scope(Class[Nova::Compute]): vcpu_pin_set is deprecated, instead use cpu_dedicated_set or cpu_shared_set.
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52107]: Warning: Scope(Class[Nova::Compute]): verify_glance_signatures is deprecated. Use the same parameter in nova::glance
Feb 20 07:52:50 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-458e70bfd6a3d9a2b6db9f2a21ce44e1ab35c706d3161a464214b6b8ab78c11b-userdata-shm.mount: Deactivated successfully.
Feb 20 07:52:50 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0a6f20b244995edeba33e1e070a87c3f1ef57f7787ff2d18dff693e5ba4f7644-merged.mount: Deactivated successfully.
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]: Notice: /Stage[main]/Collectd::Config/File[collectd.d]/owner: owner changed 'collectd' to 'root'
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]: Notice: /Stage[main]/Collectd::Config/File[collectd.d]/group: group changed 'collectd' to 'root'
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]: Notice: /Stage[main]/Collectd::Config/File[collectd.d]/mode: mode changed '0755' to '0750'
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-cpu.conf]/ensure: removed
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-interface.conf]/ensure: removed
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-load.conf]/ensure: removed
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-memory.conf]/ensure: removed
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/90-default-plugins-syslog.conf]/ensure: removed
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/apache.conf]/ensure: removed
Feb 20 07:52:50 np0005625203.localdomain podman[52586]: 2026-02-20 07:52:50.690746101 +0000 UTC m=+0.092274958 container cleanup 458e70bfd6a3d9a2b6db9f2a21ce44e1ab35c706d3161a464214b6b8ab78c11b (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=container-puppet-crond, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-cron, config_id=tripleo_puppet_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, url=https://www.redhat.com, managed_by=tripleo_ansible, container_name=container-puppet-crond, tcib_managed=true, distribution-scope=public, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee)
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/dns.conf]/ensure: removed
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ipmi.conf]/ensure: removed
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/mcelog.conf]/ensure: removed
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/mysql.conf]/ensure: removed
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ovs-events.conf]/ensure: removed
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ovs-stats.conf]/ensure: removed
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/ping.conf]/ensure: removed
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/pmu.conf]/ensure: removed
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/rdt.conf]/ensure: removed
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/sensors.conf]/ensure: removed
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/snmp.conf]/ensure: removed
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]: Notice: /Stage[main]/Collectd::Config/File[/etc/collectd.d/write_prometheus.conf]/ensure: removed
Feb 20 07:52:50 np0005625203.localdomain systemd[1]: libpod-conmon-458e70bfd6a3d9a2b6db9f2a21ce44e1ab35c706d3161a464214b6b8ab78c11b.scope: Deactivated successfully.
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]: Notice: /Stage[main]/Collectd::Plugin::Python/File[/usr/lib/python3.9/site-packages]/mode: mode changed '0755' to '0750'
Feb 20 07:52:50 np0005625203.localdomain python3[51794]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-crond --conmon-pidfile /run/container-puppet-crond.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005625203 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=crond --env STEP_CONFIG=include ::tripleo::packages
                                                         include tripleo::profile::base::logging::logrotate --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-crond --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'crond', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::logrotate'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-crond.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-cron:17.1
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]: Notice: /Stage[main]/Collectd::Plugin::Python/Collectd::Plugin[python]/File[python.load]/ensure: defined content as '{sha256}0163924a0099dd43fe39cb85e836df147fd2cfee8197dc6866d3c384539eb6ee'
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]: Notice: /Stage[main]/Collectd::Plugin::Python/Concat[/etc/collectd.d/python-config.conf]/File[/etc/collectd.d/python-config.conf]/ensure: defined content as '{sha256}2e5fb20e60b30f84687fc456a37fc62451000d2d85f5bbc1b3fca3a5eac9deeb'
Feb 20 07:52:50 np0005625203.localdomain systemd[1]: libpod-820c6f010736fb6e83a6267ff183d3021b29ef2addaf11bc201444b2280b49a9.scope: Deactivated successfully.
Feb 20 07:52:50 np0005625203.localdomain systemd[1]: libpod-820c6f010736fb6e83a6267ff183d3021b29ef2addaf11bc201444b2280b49a9.scope: Consumed 2.219s CPU time.
Feb 20 07:52:50 np0005625203.localdomain podman[52002]: 2026-02-20 07:52:50.719487129 +0000 UTC m=+5.190044568 container died 820c6f010736fb6e83a6267ff183d3021b29ef2addaf11bc201444b2280b49a9 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, architecture=x86_64, version=17.1.13, build-date=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., batch=17.1_20260112.1, distribution-scope=public, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, io.buildah.version=1.41.5, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_puppet_step1, release=1766032510, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, org.opencontainers.image.created=2026-01-12T22:10:14Z, name=rhosp-rhel9/openstack-qdrouterd, container_name=container-puppet-metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee)
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]: Notice: /Stage[main]/Collectd::Plugin::Logfile/Collectd::Plugin[logfile]/File[logfile.load]/ensure: defined content as '{sha256}07bbda08ef9b824089500bdc6ac5a86e7d1ef2ae3ed4ed423c0559fe6361e5af'
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]: Notice: /Stage[main]/Collectd::Plugin::Amqp1/Collectd::Plugin[amqp1]/File[amqp1.load]/ensure: defined content as '{sha256}0d4e701b7b2398bbf396579a0713d46d3c496c79edc52f2e260456f359c9a46c'
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]: Notice: /Stage[main]/Collectd::Plugin::Ceph/Collectd::Plugin[ceph]/File[ceph.load]/ensure: defined content as '{sha256}c796abffda2e860875295b4fc11cc95c6032b4e13fa8fb128e839a305aa1676c'
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]: Notice: /Stage[main]/Collectd::Plugin::Cpu/Collectd::Plugin[cpu]/File[cpu.load]/ensure: defined content as '{sha256}67d4c8bf6bf5785f4cb6b596712204d9eacbcebbf16fe289907195d4d3cb0e34'
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]: Notice: /Stage[main]/Collectd::Plugin::Df/Collectd::Plugin[df]/File[df.load]/ensure: defined content as '{sha256}edeb4716d96fc9dca2c6adfe07bae70ba08c6af3944a3900581cba0f08f3c4ba'
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]: Notice: /Stage[main]/Collectd::Plugin::Disk/Collectd::Plugin[disk]/File[disk.load]/ensure: defined content as '{sha256}1d0cb838278f3226fcd381f0fc2e0e1abaf0d590f4ba7bcb2fc6ec113d3ebde7'
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]: Notice: /Stage[main]/Collectd::Plugin::Hugepages/Collectd::Plugin[hugepages]/File[hugepages.load]/ensure: defined content as '{sha256}9b9f35b65a73da8d4037e4355a23b678f2cf61997ccf7a5e1adf2a7ce6415827'
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]: Notice: /Stage[main]/Collectd::Plugin::Hugepages/Collectd::Plugin[hugepages]/File[older_hugepages.load]/ensure: removed
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]: Notice: /Stage[main]/Collectd::Plugin::Interface/Collectd::Plugin[interface]/File[interface.load]/ensure: defined content as '{sha256}b76b315dc312e398940fe029c6dbc5c18d2b974ff7527469fc7d3617b5222046'
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52065]: Notice: /Stage[main]/Tripleo::Profile::Base::Iscsid/Augeas[chap_algs in /etc/iscsi/iscsid.conf]/returns: executed successfully
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]: Notice: /Stage[main]/Collectd::Plugin::Load/Collectd::Plugin[load]/File[load.load]/ensure: defined content as '{sha256}af2403f76aebd2f10202d66d2d55e1a8d987eed09ced5a3e3873a4093585dc31'
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52065]: Notice: Applied catalog in 0.48 seconds
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52065]: Application:
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52065]:    Initial environment: production
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52065]:    Converged environment: production
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52065]:          Run mode: user
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52065]: Changes:
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52065]:             Total: 4
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52065]: Events:
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52065]:           Success: 4
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52065]:             Total: 4
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52065]: Resources:
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52065]:           Changed: 4
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52065]:       Out of sync: 4
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52065]:           Skipped: 8
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52065]:             Total: 13
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52065]: Time:
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52065]:              File: 0.00
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52065]:              Exec: 0.07
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52065]:    Config retrieval: 0.12
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52065]:            Augeas: 0.40
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52065]:    Transaction evaluation: 0.47
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52065]:    Catalog application: 0.48
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52065]:          Last run: 1771573970
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52065]:             Total: 0.48
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52065]: Version:
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52065]:            Config: 1771573970
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52065]:            Puppet: 7.10.0
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]: Notice: /Stage[main]/Collectd::Plugin::Memory/Collectd::Plugin[memory]/File[memory.load]/ensure: defined content as '{sha256}0f270425ee6b05fc9440ee32b9afd1010dcbddd9b04ca78ff693858f7ecb9d0e'
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]: Notice: /Stage[main]/Collectd::Plugin::Unixsock/Collectd::Plugin[unixsock]/File[unixsock.load]/ensure: defined content as '{sha256}9d1ec1c51ba386baa6f62d2e019dbd6998ad924bf868b3edc2d24d3dc3c63885'
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]: Notice: /Stage[main]/Collectd::Plugin::Uptime/Collectd::Plugin[uptime]/File[uptime.load]/ensure: defined content as '{sha256}f7a26c6369f904d0ca1af59627ebea15f5e72160bcacdf08d217af282b42e5c0'
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]: Notice: /Stage[main]/Collectd::Plugin::Virt/Collectd::Plugin[virt]/File[virt.load]/ensure: defined content as '{sha256}9a2bcf913f6bf8a962a0ff351a9faea51ae863cc80af97b77f63f8ab68941c62'
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]: Notice: /Stage[main]/Collectd::Plugin::Virt/Collectd::Plugin[virt]/File[older_virt.load]/ensure: removed
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]: Notice: Applied catalog in 0.26 seconds
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]: Application:
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]:    Initial environment: production
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]:    Converged environment: production
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]:          Run mode: user
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]: Changes:
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]:             Total: 43
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]: Events:
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]:           Success: 43
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]:             Total: 43
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]: Resources:
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]:           Skipped: 14
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]:           Changed: 38
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]:       Out of sync: 38
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]:             Total: 82
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]: Time:
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]:    Concat fragment: 0.00
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]:              File: 0.10
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]:    Transaction evaluation: 0.26
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]:    Catalog application: 0.26
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]:    Config retrieval: 0.42
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]:          Last run: 1771573970
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]:       Concat file: 0.00
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]:             Total: 0.26
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]: Version:
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]:            Config: 1771573970
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52140]:            Puppet: 7.10.0
Feb 20 07:52:50 np0005625203.localdomain puppet-user[52107]: Warning: Scope(Class[Nova::Compute::Libvirt]): nova::compute::libvirt::images_type will be required if rbd ephemeral storage is used.
Feb 20 07:52:50 np0005625203.localdomain podman[52632]: 2026-02-20 07:52:50.866768078 +0000 UTC m=+0.134030290 container cleanup 820c6f010736fb6e83a6267ff183d3021b29ef2addaf11bc201444b2280b49a9 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=container-puppet-metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, name=rhosp-rhel9/openstack-qdrouterd, url=https://www.redhat.com, container_name=container-puppet-metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', 
'/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, config_id=tripleo_puppet_step1, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, build-date=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, version=17.1.13)
Feb 20 07:52:50 np0005625203.localdomain systemd[1]: libpod-conmon-820c6f010736fb6e83a6267ff183d3021b29ef2addaf11bc201444b2280b49a9.scope: Deactivated successfully.
Feb 20 07:52:50 np0005625203.localdomain python3[51794]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-metrics_qdr --conmon-pidfile /run/container-puppet-metrics_qdr.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005625203 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron --env NAME=metrics_qdr --env STEP_CONFIG=include ::tripleo::packages
                                                         include tripleo::profile::base::metrics::qdr
                                                          --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-metrics_qdr --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron', 'NAME': 'metrics_qdr', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::qdr\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-metrics_qdr.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1
Feb 20 07:52:51 np0005625203.localdomain podman[52753]: 2026-02-20 07:52:51.089566827 +0000 UTC m=+0.039515464 image pull  registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1
Feb 20 07:52:51 np0005625203.localdomain systemd[1]: libpod-7ac329b858f59d94c41f916651c01ed4ddd497ef2a4aa921841100b8e71c7d98.scope: Deactivated successfully.
Feb 20 07:52:51 np0005625203.localdomain systemd[1]: libpod-7ac329b858f59d94c41f916651c01ed4ddd497ef2a4aa921841100b8e71c7d98.scope: Consumed 2.619s CPU time.
Feb 20 07:52:51 np0005625203.localdomain podman[51963]: 2026-02-20 07:52:51.116222303 +0000 UTC m=+5.641098793 container died 7ac329b858f59d94c41f916651c01ed4ddd497ef2a4aa921841100b8e71c7d98 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, architecture=x86_64, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, container_name=container-puppet-iscsid, io.buildah.version=1.41.5, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, release=1766032510, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, build-date=2026-01-12T22:34:43Z, config_id=tripleo_puppet_step1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 
'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']})
Feb 20 07:52:51 np0005625203.localdomain podman[52753]: 2026-02-20 07:52:51.12971447 +0000 UTC m=+0.079663087 container create 68c419f9f8eaecd42ccf1c7548f0e9adb54f4237ea0c5b588a052d4c6cd16e01 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, build-date=2026-01-12T22:10:09Z, url=https://www.redhat.com, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-type=git, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:09Z, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_puppet_step1, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-rsyslog, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=openstack-rsyslog-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20260112.1, container_name=container-puppet-rsyslog, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog)
Feb 20 07:52:51 np0005625203.localdomain systemd[1]: Started libpod-conmon-68c419f9f8eaecd42ccf1c7548f0e9adb54f4237ea0c5b588a052d4c6cd16e01.scope.
Feb 20 07:52:51 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 07:52:51 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd2341bfcaa4cb509ed75b46508987f090c82b284b579c71e1d63af13468cdb7/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Feb 20 07:52:51 np0005625203.localdomain podman[52753]: 2026-02-20 07:52:51.163031256 +0000 UTC m=+0.112979873 container init 68c419f9f8eaecd42ccf1c7548f0e9adb54f4237ea0c5b588a052d4c6cd16e01 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, distribution-scope=public, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
org.opencontainers.image.created=2026-01-12T22:10:09Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_puppet_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-rsyslog, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:09Z, batch=17.1_20260112.1, container_name=container-puppet-rsyslog, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 rsyslog, release=1766032510, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible)
Feb 20 07:52:51 np0005625203.localdomain systemd[1]: libpod-0b36adae283f8ba4be5aeb5f6a76a838a1c219c945dacc3ec826f24a1d8b723a.scope: Deactivated successfully.
Feb 20 07:52:51 np0005625203.localdomain systemd[1]: libpod-0b36adae283f8ba4be5aeb5f6a76a838a1c219c945dacc3ec826f24a1d8b723a.scope: Consumed 2.537s CPU time.
Feb 20 07:52:51 np0005625203.localdomain podman[51986]: 2026-02-20 07:52:51.207378335 +0000 UTC m=+5.706103215 container died 0b36adae283f8ba4be5aeb5f6a76a838a1c219c945dacc3ec826f24a1d8b723a (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, build-date=2026-01-12T22:10:15Z, config_id=tripleo_puppet_step1, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, 
managed_by=tripleo_ansible, io.openshift.expose-services=, version=17.1.13, vcs-type=git, container_name=container-puppet-collectd, name=rhosp-rhel9/openstack-collectd, com.redhat.component=openstack-collectd-container, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd)
Feb 20 07:52:51 np0005625203.localdomain podman[52753]: 2026-02-20 07:52:51.272857984 +0000 UTC m=+0.222806631 container start 68c419f9f8eaecd42ccf1c7548f0e9adb54f4237ea0c5b588a052d4c6cd16e01 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, io.openshift.expose-services=, url=https://www.redhat.com, architecture=x86_64, com.redhat.component=openstack-rsyslog-container, name=rhosp-rhel9/openstack-rsyslog, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, release=1766032510, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, container_name=container-puppet-rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.created=2026-01-12T22:10:09Z, description=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_puppet_step1, vcs-type=git, build-date=2026-01-12T22:10:09Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5)
Feb 20 07:52:51 np0005625203.localdomain podman[52753]: 2026-02-20 07:52:51.274928366 +0000 UTC m=+0.224877013 container attach 68c419f9f8eaecd42ccf1c7548f0e9adb54f4237ea0c5b588a052d4c6cd16e01 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, container_name=container-puppet-rsyslog, version=17.1.13, config_id=tripleo_puppet_step1, maintainer=OpenStack TripleO Team, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.buildah.version=1.41.5, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:09Z, name=rhosp-rhel9/openstack-rsyslog, batch=17.1_20260112.1, com.redhat.component=openstack-rsyslog-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:09Z, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 rsyslog, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 rsyslog)
Feb 20 07:52:51 np0005625203.localdomain podman[52807]: 2026-02-20 07:52:51.277223015 +0000 UTC m=+0.154819527 container cleanup 7ac329b858f59d94c41f916651c01ed4ddd497ef2a4aa921841100b8e71c7d98 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=container-puppet-iscsid, name=rhosp-rhel9/openstack-iscsid, version=17.1.13, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, vendor=Red Hat, Inc., distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.openshift.expose-services=, build-date=2026-01-12T22:34:43Z, 
org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-iscsid-container, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=container-puppet-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, config_id=tripleo_puppet_step1, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, architecture=x86_64)
Feb 20 07:52:51 np0005625203.localdomain systemd[1]: libpod-conmon-7ac329b858f59d94c41f916651c01ed4ddd497ef2a4aa921841100b8e71c7d98.scope: Deactivated successfully.
Feb 20 07:52:51 np0005625203.localdomain python3[51794]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-iscsid --conmon-pidfile /run/container-puppet-iscsid.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005625203 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,iscsid_config --env NAME=iscsid --env STEP_CONFIG=include ::tripleo::packages
                                                         include tripleo::profile::base::iscsid
                                                          --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-iscsid --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,iscsid_config', 'NAME': 'iscsid', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::iscsid\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/iscsi:/tmp/iscsi.host:z', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-iscsid.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/iscsi:/tmp/iscsi.host:z --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume 
/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1
Feb 20 07:52:51 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-8516f80a305bf91ea394ab1c0000d8e781a49a1e417cb98413c75dee5cfb0223-merged.mount: Deactivated successfully.
Feb 20 07:52:51 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0b36adae283f8ba4be5aeb5f6a76a838a1c219c945dacc3ec826f24a1d8b723a-userdata-shm.mount: Deactivated successfully.
Feb 20 07:52:51 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-358e1b67e7a5a713aac9c0a160bdddbfbcb9b13948cffdbbefd6a9946a4ee797-merged.mount: Deactivated successfully.
Feb 20 07:52:51 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-820c6f010736fb6e83a6267ff183d3021b29ef2addaf11bc201444b2280b49a9-userdata-shm.mount: Deactivated successfully.
Feb 20 07:52:51 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-3ed31fcdcaee63ea79499a12bac1a1c15d8713ced0f8f605d87edc1f9667054a-merged.mount: Deactivated successfully.
Feb 20 07:52:51 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7ac329b858f59d94c41f916651c01ed4ddd497ef2a4aa921841100b8e71c7d98-userdata-shm.mount: Deactivated successfully.
Feb 20 07:52:51 np0005625203.localdomain podman[52845]: 2026-02-20 07:52:51.324621197 +0000 UTC m=+0.107967222 container cleanup 0b36adae283f8ba4be5aeb5f6a76a838a1c219c945dacc3ec826f24a1d8b723a (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=container-puppet-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, config_id=tripleo_puppet_step1, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:15Z, version=17.1.13, io.buildah.version=1.41.5, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', 
'/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, architecture=x86_64, managed_by=tripleo_ansible, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, container_name=container-puppet-collectd)
Feb 20 07:52:51 np0005625203.localdomain systemd[1]: libpod-conmon-0b36adae283f8ba4be5aeb5f6a76a838a1c219c945dacc3ec826f24a1d8b723a.scope: Deactivated successfully.
Feb 20 07:52:51 np0005625203.localdomain python3[51794]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-collectd --conmon-pidfile /run/container-puppet-collectd.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005625203 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,collectd_client_config,exec --env NAME=collectd --env STEP_CONFIG=include ::tripleo::packages
                                                         include tripleo::profile::base::metrics::collectd --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-collectd --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,collectd_client_config,exec', 'NAME': 'collectd', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::metrics::collectd'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-collectd.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume 
/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1
Feb 20 07:52:51 np0005625203.localdomain puppet-user[52107]: Notice: Compiled catalog for np0005625203.localdomain in environment production in 1.27 seconds
Feb 20 07:52:51 np0005625203.localdomain sshd[52059]: Invalid user ftp_test from 185.246.128.171 port 31170
Feb 20 07:52:51 np0005625203.localdomain podman[52890]: 2026-02-20 07:52:51.467015758 +0000 UTC m=+0.087842834 container create 55a9d249868929b5a25a812b9e1a4d9a0100ee9b942de85b78bcee02054bc2f6 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', 
'/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:36:40Z, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:36:40Z, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., container_name=container-puppet-ovn_controller, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, version=17.1.13, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_id=tripleo_puppet_step1, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, release=1766032510)
Feb 20 07:52:51 np0005625203.localdomain systemd[1]: Started libpod-conmon-55a9d249868929b5a25a812b9e1a4d9a0100ee9b942de85b78bcee02054bc2f6.scope.
Feb 20 07:52:51 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 07:52:51 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e86ba1d201a7a318aee08dfa5526c28ec564d0175a53bee4b062bdddbb48cb0/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Feb 20 07:52:51 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e86ba1d201a7a318aee08dfa5526c28ec564d0175a53bee4b062bdddbb48cb0/merged/etc/sysconfig/modules supports timestamps until 2038 (0x7fffffff)
Feb 20 07:52:51 np0005625203.localdomain podman[52890]: 2026-02-20 07:52:51.524636429 +0000 UTC m=+0.145463505 container init 55a9d249868929b5a25a812b9e1a4d9a0100ee9b942de85b78bcee02054bc2f6 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, url=https://www.redhat.com, io.buildah.version=1.41.5, 
version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_puppet_step1, architecture=x86_64, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ovn-controller, release=1766032510, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.created=2026-01-12T22:36:40Z, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, container_name=container-puppet-ovn_controller, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, build-date=2026-01-12T22:36:40Z, batch=17.1_20260112.1)
Feb 20 07:52:51 np0005625203.localdomain podman[52890]: 2026-02-20 07:52:51.427758012 +0000 UTC m=+0.048585108 image pull  registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1
Feb 20 07:52:51 np0005625203.localdomain podman[52890]: 2026-02-20 07:52:51.535009682 +0000 UTC m=+0.155836748 container start 55a9d249868929b5a25a812b9e1a4d9a0100ee9b942de85b78bcee02054bc2f6 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, vcs-type=git, tcib_managed=true, config_id=tripleo_puppet_step1, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, io.openshift.expose-services=, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:36:40Z, vendor=Red Hat, Inc., container_name=container-puppet-ovn_controller, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, managed_by=tripleo_ansible, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', 
'/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510)
Feb 20 07:52:51 np0005625203.localdomain podman[52890]: 2026-02-20 07:52:51.535173497 +0000 UTC m=+0.156000573 container attach 55a9d249868929b5a25a812b9e1a4d9a0100ee9b942de85b78bcee02054bc2f6 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, name=rhosp-rhel9/openstack-ovn-controller, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:36:40Z, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, config_id=tripleo_puppet_step1, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.buildah.version=1.41.5, com.redhat.component=openstack-ovn-controller-container, container_name=container-puppet-ovn_controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, managed_by=tripleo_ansible)
Feb 20 07:52:51 np0005625203.localdomain puppet-user[52235]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Feb 20 07:52:51 np0005625203.localdomain puppet-user[52235]:    (file: /etc/puppet/hiera.yaml)
Feb 20 07:52:51 np0005625203.localdomain puppet-user[52235]: Warning: Undefined variable '::deploy_config_name';
Feb 20 07:52:51 np0005625203.localdomain puppet-user[52235]:    (file & line not available)
Feb 20 07:52:51 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Client/File[/etc/nova/migration/identity]/content: content changed '{sha256}86610d84e745a3992358ae0b747297805d075492e5114c666fa08f8aecce7da0' to '{sha256}37542e92f883a9129d79835364a7293bd4c337025ae650a647285cb3357f99b9'
Feb 20 07:52:51 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Client/File_line[nova_ssh_port]/ensure: created
Feb 20 07:52:51 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Libvirt/File[/etc/sasl2/libvirt.conf]/content: content changed '{sha256}78510a0d6f14b269ddeb9f9638dfdfba9f976d370ee2ec04ba25352a8af6df35' to '{sha256}6d7bcae773217a30c0772f75d0d1b6d21f5d64e72853f5e3d91bb47799dbb7fe'
Feb 20 07:52:51 np0005625203.localdomain puppet-user[52107]: Warning: Empty environment setting 'TLS_PASSWORD'
Feb 20 07:52:51 np0005625203.localdomain puppet-user[52107]:    (file: /etc/puppet/modules/tripleo/manifests/profile/base/nova/libvirt.pp, line: 182)
Feb 20 07:52:51 np0005625203.localdomain puppet-user[52235]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Feb 20 07:52:51 np0005625203.localdomain puppet-user[52235]:    (file & line not available)
Feb 20 07:52:51 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Libvirt/Exec[set libvirt sasl credentials]/returns: executed successfully
Feb 20 07:52:51 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Target/File[/etc/nova/migration/authorized_keys]/content: content changed '{sha256}0d05a8832f36c0517b84e9c3ad11069d531c7d2be5297661e5552fd29e3a5e47' to '{sha256}5bbbbc79dd1f184aec3b40a4e5d830cb87a3dca9076a18726a5379ee062cd087'
Feb 20 07:52:51 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Tripleo::Profile::Base::Nova::Migration::Target/File_line[nova_migration_logindefs]/ensure: created
Feb 20 07:52:51 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Workarounds/Nova_config[workarounds/never_download_image_if_on_rbd]/ensure: created
Feb 20 07:52:51 np0005625203.localdomain puppet-user[52235]: Warning: Unknown variable: '::ceilometer::cache_backend'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 145, column: 39)
Feb 20 07:52:51 np0005625203.localdomain puppet-user[52235]: Warning: Unknown variable: '::ceilometer::memcache_servers'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 146, column: 39)
Feb 20 07:52:51 np0005625203.localdomain puppet-user[52235]: Warning: Unknown variable: '::ceilometer::cache_tls_enabled'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 147, column: 39)
Feb 20 07:52:51 np0005625203.localdomain puppet-user[52235]: Warning: Unknown variable: '::ceilometer::cache_tls_cafile'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 148, column: 39)
Feb 20 07:52:51 np0005625203.localdomain puppet-user[52235]: Warning: Unknown variable: '::ceilometer::cache_tls_certfile'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 149, column: 39)
Feb 20 07:52:51 np0005625203.localdomain puppet-user[52235]: Warning: Unknown variable: '::ceilometer::cache_tls_keyfile'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 150, column: 39)
Feb 20 07:52:51 np0005625203.localdomain puppet-user[52235]: Warning: Unknown variable: '::ceilometer::cache_tls_allowed_ciphers'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 151, column: 39)
Feb 20 07:52:51 np0005625203.localdomain puppet-user[52235]: Warning: Unknown variable: '::ceilometer::manage_backend_package'. (file: /etc/puppet/modules/ceilometer/manifests/cache.pp, line: 152, column: 39)
Feb 20 07:52:51 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Workarounds/Nova_config[workarounds/disable_compute_service_check_for_ffu]/ensure: created
Feb 20 07:52:51 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ssl_only]/ensure: created
Feb 20 07:52:51 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/my_ip]/ensure: created
Feb 20 07:52:51 np0005625203.localdomain puppet-user[52235]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_password'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 63, column: 25)
Feb 20 07:52:51 np0005625203.localdomain puppet-user[52235]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_url'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 68, column: 25)
Feb 20 07:52:51 np0005625203.localdomain puppet-user[52235]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_region'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 69, column: 28)
Feb 20 07:52:51 np0005625203.localdomain puppet-user[52235]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_user'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 70, column: 25)
Feb 20 07:52:51 np0005625203.localdomain puppet-user[52235]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_tenant_name'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 71, column: 29)
Feb 20 07:52:51 np0005625203.localdomain puppet-user[52235]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_cacert'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 72, column: 23)
Feb 20 07:52:51 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/host]/ensure: created
Feb 20 07:52:51 np0005625203.localdomain puppet-user[52235]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_endpoint_type'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 73, column: 26)
Feb 20 07:52:51 np0005625203.localdomain puppet-user[52235]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_user_domain_name'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 74, column: 33)
Feb 20 07:52:51 np0005625203.localdomain puppet-user[52235]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_project_domain_name'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 75, column: 36)
Feb 20 07:52:51 np0005625203.localdomain puppet-user[52235]: Warning: Unknown variable: '::ceilometer::agent::auth::auth_type'. (file: /etc/puppet/modules/ceilometer/manifests/agent/service_credentials.pp, line: 76, column: 26)
Feb 20 07:52:51 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/cpu_allocation_ratio]/ensure: created
Feb 20 07:52:51 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/ram_allocation_ratio]/ensure: created
Feb 20 07:52:51 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/disk_allocation_ratio]/ensure: created
Feb 20 07:52:51 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/dhcp_domain]/ensure: created
Feb 20 07:52:51 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova/Nova_config[vif_plug_ovs/ovsdb_connection]/ensure: created
Feb 20 07:52:51 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova/Nova_config[notifications/notification_format]/ensure: created
Feb 20 07:52:51 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/state_path]/ensure: created
Feb 20 07:52:51 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/service_down_time]/ensure: created
Feb 20 07:52:51 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/rootwrap_config]/ensure: created
Feb 20 07:52:51 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova/Nova_config[DEFAULT/report_interval]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova/Nova_config[notifications/notify_on_state_change]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52235]: Notice: Compiled catalog for np0005625203.localdomain in environment production in 0.37 seconds
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova/Nova_config[cinder/cross_az_attach]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52235]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/http_timeout]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52235]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[DEFAULT/host]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52235]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[publisher/telemetry_secret]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52235]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_name]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52235]: Notice: /Stage[main]/Ceilometer/Ceilometer_config[hardware/readonly_user_password]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52235]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/auth_url]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52235]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/region_name]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52235]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/username]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52235]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/password]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52235]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/project_name]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Glance/Nova_config[glance/valid_interfaces]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52235]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/interface]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52235]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/user_domain_name]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52235]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/project_domain_name]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52235]: Notice: /Stage[main]/Ceilometer::Agent::Service_credentials/Ceilometer_config[service_credentials/auth_type]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52235]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[compute/instance_discovery_method]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52235]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[DEFAULT/polling_namespaces]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52235]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[polling/tenant_name_discovery]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52235]: Notice: /Stage[main]/Ceilometer::Agent::Polling/Ceilometer_config[coordination/backend_url]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_type]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52235]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/backend]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/auth_url]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52235]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/enabled]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/password]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52235]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/memcache_servers]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_domain_name]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/project_name]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52235]: Notice: /Stage[main]/Ceilometer::Cache/Oslo::Cache[ceilometer_config]/Ceilometer_config[cache/tls_enabled]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/user_domain_name]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/username]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/region_name]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52235]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Rabbit[ceilometer_config]/Ceilometer_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Placement/Nova_config[placement/valid_interfaces]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/password]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/auth_type]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/auth_url]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52235]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Amqp[ceilometer_config]/Ceilometer_config[oslo_messaging_amqp/rpc_address_prefix]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52235]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Amqp[ceilometer_config]/Ceilometer_config[oslo_messaging_amqp/notify_address_prefix]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/region_name]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/project_name]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/project_domain_name]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/username]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/user_domain_name]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/os_region_name]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Cinder/Nova_config[cinder/catalog_info]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/manager_interval]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/remove_unused_base_images]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52235]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/driver]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52235]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/transport_url]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/remove_unused_original_minimum_age_seconds]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52235]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Notifications[ceilometer_config]/Ceilometer_config[oslo_messaging_notifications/topics]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52235]: Notice: /Stage[main]/Ceilometer/Oslo::Messaging::Default[ceilometer_config]/Ceilometer_config[DEFAULT/transport_url]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/remove_unused_resized_minimum_age_seconds]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52235]: Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/debug]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Image_cache/Nova_config[image_cache/precache_concurrency]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52235]: Notice: /Stage[main]/Ceilometer::Logging/Oslo::Log[ceilometer_config]/Ceilometer_config[DEFAULT/log_dir]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52235]: Notice: Applied catalog in 0.39 seconds
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52235]: Application:
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52235]:    Initial environment: production
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52235]:    Converged environment: production
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52235]:          Run mode: user
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52235]: Changes:
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52235]:             Total: 31
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52235]: Events:
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52235]:           Success: 31
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52235]:             Total: 31
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52235]: Resources:
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52235]:           Skipped: 22
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52235]:           Changed: 31
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52235]:       Out of sync: 31
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52235]:             Total: 151
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52235]: Time:
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52235]:           Package: 0.02
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52235]:    Ceilometer config: 0.31
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52235]:    Transaction evaluation: 0.39
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52235]:    Catalog application: 0.39
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52235]:    Config retrieval: 0.44
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52235]:          Last run: 1771573972
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52235]:         Resources: 0.00
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52235]:             Total: 0.39
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52235]: Version:
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52235]:            Config: 1771573971
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52235]:            Puppet: 7.10.0
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Vendordata/Nova_config[vendordata_dynamic_auth/project_domain_name]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Vendordata/Nova_config[vendordata_dynamic_auth/user_domain_name]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Provider/Nova_config[compute/provider_config_location]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Provider/File[/etc/nova/provider_config]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/use_cow_images]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/mkisofs_cmd]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/force_raw_images]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/reserved_host_memory_mb]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/reserved_huge_pages]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/resume_guests_state_on_host_boot]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute/Nova_config[key_manager/backend]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/sync_power_state_interval]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute/Nova_config[compute/consecutive_build_service_disable_threshold]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute/Nova_config[compute/live_migration_wait_for_vif_plug]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute/Nova_config[compute/max_disk_devices_to_attach]/ensure: created
Feb 20 07:52:52 np0005625203.localdomain systemd[1]: libpod-1d7389e20e828014f4d1835f355506bcfd040c3d888962e8f16c6fd2ccdcd8a8.scope: Deactivated successfully.
Feb 20 07:52:52 np0005625203.localdomain systemd[1]: libpod-1d7389e20e828014f4d1835f355506bcfd040c3d888962e8f16c6fd2ccdcd8a8.scope: Consumed 2.929s CPU time.
Feb 20 07:52:52 np0005625203.localdomain podman[52202]: 2026-02-20 07:52:52.9628263 +0000 UTC m=+3.438164912 container died 1d7389e20e828014f4d1835f355506bcfd040c3d888962e8f16c6fd2ccdcd8a8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, build-date=2026-01-12T23:07:24Z, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, io.buildah.version=1.41.5, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:24Z, batch=17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, container_name=container-puppet-ceilometer, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ceilometer-central, com.redhat.component=openstack-ceilometer-central-container, managed_by=tripleo_ansible, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, release=1766032510, description=Red Hat OpenStack Platform 17.1 ceilometer-central, config_id=tripleo_puppet_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central)
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52860]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52860]:    (file: /etc/puppet/hiera.yaml)
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52860]: Warning: Undefined variable '::deploy_config_name';
Feb 20 07:52:52 np0005625203.localdomain puppet-user[52860]:    (file & line not available)
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52860]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52860]:    (file & line not available)
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Vncproxy::Common/Nova_config[vnc/novncproxy_base_url]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/server_proxyclient_address]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain systemd[1]: tmp-crun.g0q3E7.mount: Deactivated successfully.
Feb 20 07:52:53 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1d7389e20e828014f4d1835f355506bcfd040c3d888962e8f16c6fd2ccdcd8a8-userdata-shm.mount: Deactivated successfully.
Feb 20 07:52:53 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-18b2a38b03b01d01d2686bb7a91f0fbbb40a40764e34293cc7422e7526e11c3c-merged.mount: Deactivated successfully.
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute/Nova_config[vnc/enabled]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute/Nova_config[spice/enabled]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/instance_usage_audit]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute/Nova_config[DEFAULT/instance_usage_audit_period]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain podman[53161]: 2026-02-20 07:52:53.093338802 +0000 UTC m=+0.121313176 container cleanup 1d7389e20e828014f4d1835f355506bcfd040c3d888962e8f16c6fd2ccdcd8a8 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1, name=container-puppet-ceilometer, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ceilometer-central, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-central, version=17.1.13, container_name=container-puppet-ceilometer, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, architecture=x86_64, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:24Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-central, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-central, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, distribution-scope=public, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-central, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_puppet_step1, release=1766032510, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:07:24Z, com.redhat.component=openstack-ceilometer-central-container, tcib_managed=true, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 
'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, name=rhosp-rhel9/openstack-ceilometer-central)
Feb 20 07:52:53 np0005625203.localdomain systemd[1]: libpod-conmon-1d7389e20e828014f4d1835f355506bcfd040c3d888962e8f16c6fd2ccdcd8a8.scope: Deactivated successfully.
Feb 20 07:52:53 np0005625203.localdomain python3[51794]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-ceilometer --conmon-pidfile /run/container-puppet-ceilometer.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005625203 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config --env NAME=ceilometer --env STEP_CONFIG=include ::tripleo::packages
                                                         include tripleo::profile::base::ceilometer::agent::polling
                                                         include tripleo::profile::base::ceilometer::agent::polling
                                                          --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-ceilometer --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,ceilometer_config,ceilometer_config', 'NAME': 'ceilometer', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::ceilometer::agent::polling\ninclude tripleo::profile::base::ceilometer::agent::polling\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-ceilometer.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume 
/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-ceilometer-central:17.1
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_is_fatal]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[DEFAULT/vif_plugging_timeout]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/default_floating_pool]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/timeout]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_name]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/project_domain_name]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/region_name]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/username]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/user_domain_name]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52860]: Notice: Compiled catalog for np0005625203.localdomain in environment production in 0.23 seconds
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/password]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_url]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/valid_interfaces]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/ovs_bridge]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/extension_sync_interval]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Network::Neutron/Nova_config[neutron/auth_type]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_uri]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_tunnelled]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_inbound_addr]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_permit_post_copy]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52860]: Notice: /Stage[main]/Rsyslog::Base/File[/etc/rsyslog.conf]/content: content changed '{sha256}d6f679f6a4eb6f33f9fc20c846cb30bef93811e1c86bc4da1946dc3100b826c3' to '{sha256}7963bd801fadd49a17561f4d3f80738c3f504b413b11c443432d8303138041f2'
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Migration::Libvirt/Nova_config[libvirt/live_migration_permit_auto_converge]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52860]: Notice: /Stage[main]/Rsyslog::Config::Global/Rsyslog::Component::Global_config[MaxMessageSize]/Rsyslog::Generate_concat[rsyslog::concat::global_config::MaxMessageSize]/Concat[/etc/rsyslog.d/00_rsyslog.conf]/File[/etc/rsyslog.d/00_rsyslog.conf]/ensure: defined content as '{sha256}a291d5cc6d5884a978161f4c7b5831d43edd07797cc590bae366e7f150b8643b'
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Migration::Libvirt/Virtproxyd_config[listen_tls]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Migration::Libvirt/Virtproxyd_config[listen_tcp]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52860]: Notice: /Stage[main]/Rsyslog::Config::Templates/Rsyslog::Component::Template[rsyslog-node-index]/Rsyslog::Generate_concat[rsyslog::concat::template::rsyslog-node-index]/Concat[/etc/rsyslog.d/50_openstack_logs.conf]/File[/etc/rsyslog.d/50_openstack_logs.conf]/ensure: defined content as '{sha256}ad7203f44a3b610eaa8c8aa53ddb60c39ba0e80184dc8ed29a79edad3d9d0146'
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52860]: Notice: Applied catalog in 0.11 seconds
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52860]: Application:
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52860]:    Initial environment: production
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52860]:    Converged environment: production
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52860]:          Run mode: user
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52860]: Changes:
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52860]:             Total: 3
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52860]: Events:
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52860]:           Success: 3
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52860]:             Total: 3
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52860]: Resources:
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52860]:           Skipped: 11
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52860]:           Changed: 3
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52860]:       Out of sync: 3
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52860]:             Total: 25
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52860]: Time:
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52860]:       Concat file: 0.00
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52860]:    Concat fragment: 0.00
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52860]:              File: 0.01
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52860]:    Transaction evaluation: 0.11
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52860]:    Catalog application: 0.11
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52860]:    Config retrieval: 0.29
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52860]:          Last run: 1771573973
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52860]:             Total: 0.11
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52860]: Version:
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52860]:            Config: 1771573972
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52860]:            Puppet: 7.10.0
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/rbd_user]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/rbd_secret_uuid]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Rbd/File[/etc/nova/secret.xml]/ensure: defined content as '{sha256}64c5f9c37bfdcd550f09aea32895662c8b3e80da678034168cc6138d9da68080'
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52974]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52974]:    (file: /etc/puppet/hiera.yaml)
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52974]: Warning: Undefined variable '::deploy_config_name';
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52974]:    (file & line not available)
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_type]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_pool]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_ceph_conf]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52974]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52974]:    (file & line not available)
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_glance_store_name]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_glance_copy_poll_interval]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Rbd/Nova_config[libvirt/images_rbd_glance_copy_timeout]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[DEFAULT/compute_driver]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[DEFAULT/preallocate_images]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[vnc/server_listen]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/virt_type]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/cpu_mode]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_password]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_key]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/inject_partition]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/hw_disk_discard]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/hw_machine_type]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/enabled_perf_events]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/rx_queue_size]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/tx_queue_size]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain systemd[1]: libpod-68c419f9f8eaecd42ccf1c7548f0e9adb54f4237ea0c5b588a052d4c6cd16e01.scope: Deactivated successfully.
Feb 20 07:52:53 np0005625203.localdomain systemd[1]: libpod-68c419f9f8eaecd42ccf1c7548f0e9adb54f4237ea0c5b588a052d4c6cd16e01.scope: Consumed 2.290s CPU time.
Feb 20 07:52:53 np0005625203.localdomain podman[52753]: 2026-02-20 07:52:53.652904764 +0000 UTC m=+2.602853381 container died 68c419f9f8eaecd42ccf1c7548f0e9adb54f4237ea0c5b588a052d4c6cd16e01 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, build-date=2026-01-12T22:10:09Z, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20260112.1, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:09Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-rsyslog-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, release=1766032510, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.5, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_puppet_step1, vendor=Red Hat, Inc., container_name=container-puppet-rsyslog, tcib_managed=true, url=https://www.redhat.com)
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52974]: Notice: Compiled catalog for np0005625203.localdomain in environment production in 0.29 seconds
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/file_backed_memory]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain sshd[52059]: Disconnecting invalid user ftp_test 185.246.128.171 port 31170: Change of username or service not allowed: (ftp_test,ssh-connection) -> (data,ssh-connection) [preauth]
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/volume_use_multipath]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/num_pcie_ports]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/mem_stats_period_seconds]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/pmem_namespaces]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/swtpm_enabled]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/cpu_model_extra_flags]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain ovs-vsctl[53313]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-remote=tcp:172.17.0.103:6642,tcp:172.17.0.104:6642,tcp:172.17.0.105:6642
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52974]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-remote]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Libvirt/Nova_config[libvirt/disk_cachemodes]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtlogd/Virtlogd_config[log_filters]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtlogd/Virtlogd_config[log_outputs]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtproxyd/Virtproxyd_config[log_filters]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtproxyd/Virtproxyd_config[log_outputs]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtqemud/Virtqemud_config[log_filters]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain ovs-vsctl[53315]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-encap-type=geneve
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtqemud/Virtqemud_config[log_outputs]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52974]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-encap-type]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtnodedevd/Virtnodedevd_config[log_filters]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtnodedevd/Virtnodedevd_config[log_outputs]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtstoraged/Virtstoraged_config[log_filters]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtstoraged/Virtstoraged_config[log_outputs]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtsecretd/Virtsecretd_config[log_filters]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Libvirt::Virtsecretd/Virtsecretd_config[log_outputs]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain ovs-vsctl[53317]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-encap-ip=172.19.0.107
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[unix_sock_group]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[auth_unix_ro]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[auth_unix_rw]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52974]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-encap-ip]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[unix_sock_ro_perms]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtnodedevd_config[unix_sock_rw_perms]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[unix_sock_group]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[auth_unix_ro]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[auth_unix_rw]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[unix_sock_ro_perms]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtproxyd_config[unix_sock_rw_perms]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain ovs-vsctl[53320]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:hostname=np0005625203.localdomain
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[unix_sock_group]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[auth_unix_ro]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52974]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:hostname]/value: value changed 'np0005625203.novalocal' to 'np0005625203.localdomain'
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[auth_unix_rw]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[unix_sock_ro_perms]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtqemud_config[unix_sock_rw_perms]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[unix_sock_group]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[auth_unix_ro]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[auth_unix_rw]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[unix_sock_ro_perms]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtsecretd_config[unix_sock_rw_perms]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[unix_sock_group]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain ovs-vsctl[53322]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-bridge=br-int
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[auth_unix_ro]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52974]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-bridge]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[auth_unix_rw]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[unix_sock_ro_perms]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Libvirt::Config/Virtstoraged_config[unix_sock_rw_perms]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain ovs-vsctl[53324]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-remote-probe-interval=60000
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52974]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-remote-probe-interval]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain ovs-vsctl[53326]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-openflow-probe-interval=60
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52974]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-openflow-probe-interval]/ensure: created
Feb 20 07:52:53 np0005625203.localdomain ovs-vsctl[53328]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-monitor-all=true
Feb 20 07:52:53 np0005625203.localdomain puppet-user[52974]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-monitor-all]/ensure: created
Feb 20 07:52:54 np0005625203.localdomain ovs-vsctl[53330]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-ofctrl-wait-before-clear=8000
Feb 20 07:52:54 np0005625203.localdomain puppet-user[52974]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-ofctrl-wait-before-clear]/ensure: created
Feb 20 07:52:54 np0005625203.localdomain ovs-vsctl[53332]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-encap-tos=0
Feb 20 07:52:54 np0005625203.localdomain puppet-user[52974]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-encap-tos]/ensure: created
Feb 20 07:52:54 np0005625203.localdomain ovs-vsctl[53334]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-chassis-mac-mappings=datacentre:fa:16:3e:ad:21:e8
Feb 20 07:52:54 np0005625203.localdomain puppet-user[52974]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-chassis-mac-mappings]/ensure: created
Feb 20 07:52:54 np0005625203.localdomain ovs-vsctl[53336]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-bridge-mappings=datacentre:br-ex
Feb 20 07:52:54 np0005625203.localdomain puppet-user[52974]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-bridge-mappings]/ensure: created
Feb 20 07:52:54 np0005625203.localdomain systemd[1]: tmp-crun.3f9q36.mount: Deactivated successfully.
Feb 20 07:52:54 np0005625203.localdomain ovs-vsctl[53339]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:ovn-match-northd-version=false
Feb 20 07:52:54 np0005625203.localdomain puppet-user[52974]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:ovn-match-northd-version]/ensure: created
Feb 20 07:52:54 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-68c419f9f8eaecd42ccf1c7548f0e9adb54f4237ea0c5b588a052d4c6cd16e01-userdata-shm.mount: Deactivated successfully.
Feb 20 07:52:54 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-cd2341bfcaa4cb509ed75b46508987f090c82b284b579c71e1d63af13468cdb7-merged.mount: Deactivated successfully.
Feb 20 07:52:54 np0005625203.localdomain ovs-vsctl[53341]: ovs|00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl set Open_vSwitch . external_ids:garp-max-timeout-sec=0
Feb 20 07:52:54 np0005625203.localdomain puppet-user[52974]: Notice: /Stage[main]/Ovn::Controller/Vs_config[external_ids:garp-max-timeout-sec]/ensure: created
Feb 20 07:52:54 np0005625203.localdomain puppet-user[52974]: Notice: Applied catalog in 0.57 seconds
Feb 20 07:52:54 np0005625203.localdomain puppet-user[52974]: Application:
Feb 20 07:52:54 np0005625203.localdomain puppet-user[52974]:    Initial environment: production
Feb 20 07:52:54 np0005625203.localdomain puppet-user[52974]:    Converged environment: production
Feb 20 07:52:54 np0005625203.localdomain puppet-user[52974]:          Run mode: user
Feb 20 07:52:54 np0005625203.localdomain puppet-user[52974]: Changes:
Feb 20 07:52:54 np0005625203.localdomain puppet-user[52974]:             Total: 14
Feb 20 07:52:54 np0005625203.localdomain puppet-user[52974]: Events:
Feb 20 07:52:54 np0005625203.localdomain puppet-user[52974]:           Success: 14
Feb 20 07:52:54 np0005625203.localdomain puppet-user[52974]:             Total: 14
Feb 20 07:52:54 np0005625203.localdomain puppet-user[52974]: Resources:
Feb 20 07:52:54 np0005625203.localdomain puppet-user[52974]:           Skipped: 12
Feb 20 07:52:54 np0005625203.localdomain puppet-user[52974]:           Changed: 14
Feb 20 07:52:54 np0005625203.localdomain puppet-user[52974]:       Out of sync: 14
Feb 20 07:52:54 np0005625203.localdomain puppet-user[52974]:             Total: 29
Feb 20 07:52:54 np0005625203.localdomain puppet-user[52974]: Time:
Feb 20 07:52:54 np0005625203.localdomain puppet-user[52974]:              Exec: 0.02
Feb 20 07:52:54 np0005625203.localdomain puppet-user[52974]:    Config retrieval: 0.32
Feb 20 07:52:54 np0005625203.localdomain puppet-user[52974]:         Vs config: 0.40
Feb 20 07:52:54 np0005625203.localdomain puppet-user[52974]:    Transaction evaluation: 0.50
Feb 20 07:52:54 np0005625203.localdomain puppet-user[52974]:    Catalog application: 0.57
Feb 20 07:52:54 np0005625203.localdomain puppet-user[52974]:          Last run: 1771573974
Feb 20 07:52:54 np0005625203.localdomain puppet-user[52974]:             Total: 0.57
Feb 20 07:52:54 np0005625203.localdomain puppet-user[52974]: Version:
Feb 20 07:52:54 np0005625203.localdomain puppet-user[52974]:            Config: 1771573973
Feb 20 07:52:54 np0005625203.localdomain puppet-user[52974]:            Puppet: 7.10.0
Feb 20 07:52:54 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Compute::Libvirt::Qemu/Augeas[qemu-conf-limits]/returns: executed successfully
Feb 20 07:52:54 np0005625203.localdomain podman[53299]: 2026-02-20 07:52:54.478849762 +0000 UTC m=+0.814451172 container cleanup 68c419f9f8eaecd42ccf1c7548f0e9adb54f4237ea0c5b588a052d4c6cd16e01 (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=container-puppet-rsyslog, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, org.opencontainers.image.created=2026-01-12T22:10:09Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, name=rhosp-rhel9/openstack-rsyslog, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=container-puppet-rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, io.buildah.version=1.41.5, vcs-type=git, batch=17.1_20260112.1, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:09Z, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, url=https://www.redhat.com, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_puppet_step1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 rsyslog)
Feb 20 07:52:54 np0005625203.localdomain python3[51794]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-rsyslog --conmon-pidfile /run/container-puppet-rsyslog.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005625203 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment --env NAME=rsyslog --env STEP_CONFIG=include ::tripleo::packages
                                                         include tripleo::profile::base::logging::rsyslog --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-rsyslog --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,rsyslog::generate_concat,concat::fragment', 'NAME': 'rsyslog', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::logging::rsyslog'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-rsyslog.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1
Feb 20 07:52:54 np0005625203.localdomain systemd[1]: libpod-conmon-68c419f9f8eaecd42ccf1c7548f0e9adb54f4237ea0c5b588a052d4c6cd16e01.scope: Deactivated successfully.
Feb 20 07:52:54 np0005625203.localdomain podman[52942]: 2026-02-20 07:52:51.5448599 +0000 UTC m=+0.031162902 image pull  registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1
Feb 20 07:52:54 np0005625203.localdomain podman[53510]: 2026-02-20 07:52:54.734652928 +0000 UTC m=+0.099055873 container create 310784807da8c80c452ed602747f8a7780c6887da15d40a30331de41628041f4 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, description=Red Hat OpenStack Platform 17.1 neutron-server, config_id=tripleo_puppet_step1, batch=17.1_20260112.1, tcib_managed=true, version=17.1.13, build-date=2026-01-12T22:57:35Z, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:57:35Z, vcs-type=git, container_name=container-puppet-neutron, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.buildah.version=1.41.5, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 neutron-server, distribution-scope=public, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.component=openstack-neutron-server-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, name=rhosp-rhel9/openstack-neutron-server, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']})
Feb 20 07:52:54 np0005625203.localdomain systemd[1]: Started libpod-conmon-310784807da8c80c452ed602747f8a7780c6887da15d40a30331de41628041f4.scope.
Feb 20 07:52:54 np0005625203.localdomain podman[53510]: 2026-02-20 07:52:54.677224514 +0000 UTC m=+0.041627549 image pull  registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1
Feb 20 07:52:54 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 07:52:54 np0005625203.localdomain systemd[1]: libpod-55a9d249868929b5a25a812b9e1a4d9a0100ee9b942de85b78bcee02054bc2f6.scope: Deactivated successfully.
Feb 20 07:52:54 np0005625203.localdomain systemd[1]: libpod-55a9d249868929b5a25a812b9e1a4d9a0100ee9b942de85b78bcee02054bc2f6.scope: Consumed 2.951s CPU time.
Feb 20 07:52:54 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffa0361de9b1daeb234135fe553d80c8cdedd4f9acc107cdd3a4ed4b713c4ea0/merged/var/lib/config-data supports timestamps until 2038 (0x7fffffff)
Feb 20 07:52:54 np0005625203.localdomain podman[53510]: 2026-02-20 07:52:54.800833677 +0000 UTC m=+0.165236652 container init 310784807da8c80c452ed602747f8a7780c6887da15d40a30331de41628041f4 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, vcs-type=git, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-server, name=rhosp-rhel9/openstack-neutron-server, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:57:35Z, vendor=Red Hat, Inc., io.buildah.version=1.41.5, tcib_managed=true, batch=17.1_20260112.1, architecture=x86_64, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-server, release=1766032510, config_id=tripleo_puppet_step1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=container-puppet-neutron, com.redhat.component=openstack-neutron-server-container, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': 
['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, build-date=2026-01-12T22:57:35Z, version=17.1.13)
Feb 20 07:52:54 np0005625203.localdomain podman[53510]: 2026-02-20 07:52:54.808047676 +0000 UTC m=+0.172450621 container start 310784807da8c80c452ed602747f8a7780c6887da15d40a30331de41628041f4 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, batch=17.1_20260112.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:57:35Z, io.openshift.expose-services=, config_id=tripleo_puppet_step1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, 
org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, build-date=2026-01-12T22:57:35Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, release=1766032510, container_name=container-puppet-neutron, summary=Red Hat OpenStack Platform 17.1 neutron-server, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, description=Red Hat OpenStack Platform 17.1 neutron-server, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-server-container, name=rhosp-rhel9/openstack-neutron-server, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, tcib_managed=true, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 07:52:54 np0005625203.localdomain podman[53510]: 2026-02-20 07:52:54.808220981 +0000 UTC m=+0.172624016 container attach 310784807da8c80c452ed602747f8a7780c6887da15d40a30331de41628041f4 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-server, container_name=container-puppet-neutron, summary=Red Hat OpenStack Platform 17.1 neutron-server, build-date=2026-01-12T22:57:35Z, release=1766032510, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', 
'/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, url=https://www.redhat.com, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-neutron-server, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_id=tripleo_puppet_step1, vendor=Red Hat, Inc., version=17.1.13, batch=17.1_20260112.1, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-server-container, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:57:35Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server)
Feb 20 07:52:54 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Migration::Qemu/Augeas[qemu-conf-migration-ports]/returns: executed successfully
Feb 20 07:52:54 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/debug]/ensure: created
Feb 20 07:52:54 np0005625203.localdomain podman[53541]: 2026-02-20 07:52:54.857598252 +0000 UTC m=+0.052857228 container died 55a9d249868929b5a25a812b9e1a4d9a0100ee9b942de85b78bcee02054bc2f6 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, io.openshift.expose-services=, architecture=x86_64, container_name=container-puppet-ovn_controller, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, config_id=tripleo_puppet_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:36:40Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, name=rhosp-rhel9/openstack-ovn-controller, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, url=https://www.redhat.com, io.buildah.version=1.41.5, release=1766032510, batch=17.1_20260112.1, build-date=2026-01-12T22:36:40Z, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': 
['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git)
Feb 20 07:52:54 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Logging/Oslo::Log[nova_config]/Nova_config[DEFAULT/log_dir]/ensure: created
Feb 20 07:52:54 np0005625203.localdomain podman[53541]: 2026-02-20 07:52:54.881986759 +0000 UTC m=+0.077245735 container cleanup 55a9d249868929b5a25a812b9e1a4d9a0100ee9b942de85b78bcee02054bc2f6 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=container-puppet-ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, version=17.1.13, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:36:40Z, build-date=2026-01-12T22:36:40Z, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ovn-controller, config_id=tripleo_puppet_step1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=container-puppet-ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 07:52:54 np0005625203.localdomain systemd[1]: libpod-conmon-55a9d249868929b5a25a812b9e1a4d9a0100ee9b942de85b78bcee02054bc2f6.scope: Deactivated successfully.
Feb 20 07:52:54 np0005625203.localdomain python3[51794]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-ovn_controller --conmon-pidfile /run/container-puppet-ovn_controller.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005625203 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,vs_config,exec --env NAME=ovn_controller --env STEP_CONFIG=include ::tripleo::packages
                                                         include tripleo::profile::base::neutron::agents::ovn
                                                          --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-ovn_controller --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,vs_config,exec', 'NAME': 'ovn_controller', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::agents::ovn\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/etc/sysconfig/modules:/etc/sysconfig/modules', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-ovn_controller.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume 
/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /etc/sysconfig/modules:/etc/sysconfig/modules --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/backend]/ensure: created
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/enabled]/ensure: created
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/memcache_servers]/ensure: created
Feb 20 07:52:55 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-4e86ba1d201a7a318aee08dfa5526c28ec564d0175a53bee4b062bdddbb48cb0-merged.mount: Deactivated successfully.
Feb 20 07:52:55 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-55a9d249868929b5a25a812b9e1a4d9a0100ee9b942de85b78bcee02054bc2f6-userdata-shm.mount: Deactivated successfully.
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Cache/Oslo::Cache[nova_config]/Nova_config[cache/tls_enabled]/ensure: created
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova/Oslo::Messaging::Rabbit[nova_config]/Nova_config[oslo_messaging_rabbit/ssl]/ensure: created
Feb 20 07:52:55 np0005625203.localdomain sshd[53594]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova/Oslo::Messaging::Default[nova_config]/Nova_config[DEFAULT/transport_url]/ensure: created
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/driver]/ensure: created
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova/Oslo::Messaging::Notifications[nova_config]/Nova_config[oslo_messaging_notifications/transport_url]/ensure: created
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova/Oslo::Concurrency[nova_config]/Nova_config[oslo_concurrency/lock_path]/ensure: created
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/auth_type]/ensure: created
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/region_name]/ensure: created
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/auth_url]/ensure: created
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/username]/ensure: created
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/password]/ensure: created
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/user_domain_name]/ensure: created
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/project_name]/ensure: created
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/project_domain_name]/ensure: created
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Nova::Keystone::Service_user/Keystone::Resource::Service_user[nova_config]/Nova_config[service_user/send_service_user_token]/ensure: created
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]: Notice: /Stage[main]/Ssh::Server::Config/Concat[/etc/ssh/sshd_config]/File[/etc/ssh/sshd_config]/ensure: defined content as '{sha256}3a12438802493a75725c4f7704f2af6db1ef72af396369e5de28f6f4d6a7ed98'
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]: Notice: Applied catalog in 4.24 seconds
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]: Application:
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]:    Initial environment: production
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]:    Converged environment: production
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]:          Run mode: user
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]: Changes:
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]:             Total: 183
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]: Events:
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]:           Success: 183
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]:             Total: 183
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]: Resources:
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]:           Changed: 183
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]:       Out of sync: 183
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]:           Skipped: 57
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]:             Total: 487
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]: Time:
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]:       Concat file: 0.00
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]:    Concat fragment: 0.00
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]:            Anchor: 0.00
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]:         File line: 0.00
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]:    Virtlogd config: 0.00
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]:    Virtsecretd config: 0.02
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]:              Exec: 0.02
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]:    Virtstoraged config: 0.02
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]:           Package: 0.02
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]:              File: 0.03
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]:    Virtnodedevd config: 0.03
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]:    Virtqemud config: 0.03
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]:    Virtproxyd config: 0.04
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]:            Augeas: 0.92
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]:    Config retrieval: 1.50
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]:          Last run: 1771573975
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]:       Nova config: 2.93
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]:    Transaction evaluation: 4.23
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]:    Catalog application: 4.24
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]:         Resources: 0.00
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]:             Total: 4.24
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]: Version:
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]:            Config: 1771573970
Feb 20 07:52:55 np0005625203.localdomain puppet-user[52107]:            Puppet: 7.10.0
Feb 20 07:52:56 np0005625203.localdomain puppet-user[53567]: Error: Facter: error while resolving custom fact "haproxy_version": undefined method `strip' for nil:NilClass
Feb 20 07:52:56 np0005625203.localdomain systemd[1]: libpod-cbb1f8627e016fb05c68b71668d1592638bcc97a71e255d8465867dd0d6a8d13.scope: Deactivated successfully.
Feb 20 07:52:56 np0005625203.localdomain systemd[1]: libpod-cbb1f8627e016fb05c68b71668d1592638bcc97a71e255d8465867dd0d6a8d13.scope: Consumed 8.215s CPU time.
Feb 20 07:52:56 np0005625203.localdomain podman[51952]: 2026-02-20 07:52:56.727340308 +0000 UTC m=+11.258487017 container died cbb1f8627e016fb05c68b71668d1592638bcc97a71e255d8465867dd0d6a8d13 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, vcs-type=git, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', 
'/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, container_name=container-puppet-nova_libvirt, url=https://www.redhat.com, version=17.1.13, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20260112.1, architecture=x86_64, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, distribution-scope=public, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:31:49Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-libvirt-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_puppet_step1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp-rhel9/openstack-nova-libvirt, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:31:49Z)
Feb 20 07:52:56 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cbb1f8627e016fb05c68b71668d1592638bcc97a71e255d8465867dd0d6a8d13-userdata-shm.mount: Deactivated successfully.
Feb 20 07:52:56 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-1a3dde96d66e26fbb5d914ff807f950d08e80e1daf4f655a6bd9c579da655dfe-merged.mount: Deactivated successfully.
Feb 20 07:52:56 np0005625203.localdomain podman[53651]: 2026-02-20 07:52:56.875454712 +0000 UTC m=+0.134798743 container cleanup cbb1f8627e016fb05c68b71668d1592638bcc97a71e255d8465867dd0d6a8d13 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=container-puppet-nova_libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, architecture=x86_64, config_id=tripleo_puppet_step1, io.buildah.version=1.41.5, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.13, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:31:49Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-libvirt-container, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:31:49Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=container-puppet-nova_libvirt, name=rhosp-rhel9/openstack-nova-libvirt)
Feb 20 07:52:56 np0005625203.localdomain systemd[1]: libpod-conmon-cbb1f8627e016fb05c68b71668d1592638bcc97a71e255d8465867dd0d6a8d13.scope: Deactivated successfully.
Feb 20 07:52:56 np0005625203.localdomain python3[51794]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-nova_libvirt --conmon-pidfile /run/container-puppet-nova_libvirt.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005625203 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password --env NAME=nova_libvirt --env STEP_CONFIG=include ::tripleo::packages
                                                         # TODO(emilien): figure how to deal with libvirt profile.
                                                         # We'll probably treat it like we do with Neutron plugins.
                                                         # Until then, just include it in the default nova-compute role.
                                                         include tripleo::profile::base::nova::compute::libvirt
                                                         
                                                         include tripleo::profile::base::nova::libvirt
                                                         
                                                         include tripleo::profile::base::nova::compute::libvirt_guests
                                                         
                                                         include tripleo::profile::base::sshd
                                                         include tripleo::profile::base::nova::migration::target --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-nova_libvirt --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,nova_config,libvirtd_config,virtlogd_config,virtproxyd_config,virtqemud_config,virtnodedevd_config,virtsecretd_config,virtstoraged_config,nova_config,file,libvirt_tls_password,libvirtd_config,nova_config,file,libvirt_tls_password', 'NAME': 'nova_libvirt', 'STEP_CONFIG': "include ::tripleo::packages\n# TODO(emilien): figure how to deal with libvirt profile.\n# We'll probably treat it like we do with Neutron plugins.\n# Until then, just include it in the default nova-compute role.\ninclude tripleo::profile::base::nova::compute::libvirt\n\ninclude tripleo::profile::base::nova::libvirt\n\ninclude tripleo::profile::base::nova::compute::libvirt_guests\n\ninclude tripleo::profile::base::sshd\ninclude tripleo::profile::base::nova::migration::target"}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', 
'/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-nova_libvirt.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Feb 20 07:52:56 np0005625203.localdomain puppet-user[53567]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Feb 20 07:52:56 np0005625203.localdomain puppet-user[53567]:    (file: /etc/puppet/hiera.yaml)
Feb 20 07:52:56 np0005625203.localdomain puppet-user[53567]: Warning: Undefined variable '::deploy_config_name';
Feb 20 07:52:56 np0005625203.localdomain puppet-user[53567]:    (file & line not available)
Feb 20 07:52:56 np0005625203.localdomain puppet-user[53567]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Feb 20 07:52:56 np0005625203.localdomain puppet-user[53567]:    (file & line not available)
Feb 20 07:52:56 np0005625203.localdomain puppet-user[53567]: Warning: Unknown variable: 'dhcp_agents_per_net'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/neutron.pp, line: 154, column: 37)
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]: Notice: Compiled catalog for np0005625203.localdomain in environment production in 0.62 seconds
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/auth_strategy]/ensure: created
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/core_plugin]/ensure: created
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/host]/ensure: created
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dns_domain]/ensure: created
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/dhcp_agent_notification]/ensure: created
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/allow_overlapping_ips]/ensure: created
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/global_physnet_mtu]/ensure: created
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/vlan_transparent]/ensure: created
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]: Notice: /Stage[main]/Neutron/Neutron_config[agent/root_helper]/ensure: created
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]: Notice: /Stage[main]/Neutron/Neutron_config[agent/report_interval]/ensure: created
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]: Notice: /Stage[main]/Neutron/Neutron_config[DEFAULT/service_plugins]/ensure: created
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/debug]/ensure: created
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/nova_metadata_host]/ensure: created
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/nova_metadata_protocol]/ensure: created
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/metadata_proxy_shared_secret]/ensure: created
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/metadata_workers]/ensure: created
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/state_path]/ensure: created
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[DEFAULT/hwol_qos_enabled]/ensure: created
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[agent/root_helper]/ensure: created
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovs/ovsdb_connection]/ensure: created
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovs/ovsdb_connection_timeout]/ensure: created
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovn/ovsdb_probe_interval]/ensure: created
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovn/ovn_nb_connection]/ensure: created
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]: Notice: /Stage[main]/Neutron::Agents::Ovn_metadata/Ovn_metadata_agent_config[ovn/ovn_sb_connection]/ensure: created
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/transport_url]/ensure: created
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Default[neutron_config]/Neutron_config[DEFAULT/control_exchange]/ensure: created
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]: Notice: /Stage[main]/Neutron/Oslo::Concurrency[neutron_config]/Neutron_config[oslo_concurrency/lock_path]/ensure: created
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/driver]/ensure: created
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Notifications[neutron_config]/Neutron_config[oslo_messaging_notifications/transport_url]/ensure: created
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/heartbeat_in_pthread]/ensure: created
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]: Notice: /Stage[main]/Neutron/Oslo::Messaging::Rabbit[neutron_config]/Neutron_config[oslo_messaging_rabbit/heartbeat_timeout_threshold]/ensure: created
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]: Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/debug]/ensure: created
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]: Notice: /Stage[main]/Neutron::Logging/Oslo::Log[neutron_config]/Neutron_config[DEFAULT/log_dir]/ensure: created
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]: Notice: Applied catalog in 0.43 seconds
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]: Application:
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]:    Initial environment: production
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]:    Converged environment: production
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]:          Run mode: user
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]: Changes:
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]:             Total: 33
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]: Events:
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]:           Success: 33
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]:             Total: 33
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]: Resources:
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]:           Skipped: 21
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]:           Changed: 33
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]:       Out of sync: 33
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]:             Total: 155
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]: Time:
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]:         Resources: 0.00
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]:    Ovn metadata agent config: 0.01
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]:    Neutron config: 0.37
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]:    Transaction evaluation: 0.43
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]:    Catalog application: 0.43
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]:    Config retrieval: 0.68
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]:          Last run: 1771573977
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]:             Total: 0.43
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]: Version:
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]:            Config: 1771573976
Feb 20 07:52:57 np0005625203.localdomain puppet-user[53567]:            Puppet: 7.10.0
Feb 20 07:52:58 np0005625203.localdomain systemd[1]: libpod-310784807da8c80c452ed602747f8a7780c6887da15d40a30331de41628041f4.scope: Deactivated successfully.
Feb 20 07:52:58 np0005625203.localdomain systemd[1]: libpod-310784807da8c80c452ed602747f8a7780c6887da15d40a30331de41628041f4.scope: Consumed 3.633s CPU time.
Feb 20 07:52:58 np0005625203.localdomain podman[53777]: 2026-02-20 07:52:58.570728069 +0000 UTC m=+0.052489987 container died 310784807da8c80c452ed602747f8a7780c6887da15d40a30331de41628041f4 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, build-date=2026-01-12T22:57:35Z, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, batch=17.1_20260112.1, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, com.redhat.component=openstack-neutron-server-container, config_id=tripleo_puppet_step1, summary=Red Hat OpenStack Platform 17.1 neutron-server, org.opencontainers.image.created=2026-01-12T22:57:35Z, vendor=Red Hat, Inc., url=https://www.redhat.com, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-neutron-server, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, io.openshift.expose-services=, container_name=container-puppet-neutron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, version=17.1.13, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-server, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 07:52:58 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-310784807da8c80c452ed602747f8a7780c6887da15d40a30331de41628041f4-userdata-shm.mount: Deactivated successfully.
Feb 20 07:52:58 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-ffa0361de9b1daeb234135fe553d80c8cdedd4f9acc107cdd3a4ed4b713c4ea0-merged.mount: Deactivated successfully.
Feb 20 07:52:58 np0005625203.localdomain podman[53777]: 2026-02-20 07:52:58.631634778 +0000 UTC m=+0.113396646 container cleanup 310784807da8c80c452ed602747f8a7780c6887da15d40a30331de41628041f4 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1, name=container-puppet-neutron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-server, name=rhosp-rhel9/openstack-neutron-server, org.opencontainers.image.created=2026-01-12T22:57:35Z, version=17.1.13, vcs-type=git, batch=17.1_20260112.1, config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-server, 
maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:57:35Z, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-server, managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_puppet_step1, com.redhat.component=openstack-neutron-server-container, container_name=container-puppet-neutron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-server, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, release=1766032510, url=https://www.redhat.com, io.buildah.version=1.41.5, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-server)
Feb 20 07:52:58 np0005625203.localdomain systemd[1]: libpod-conmon-310784807da8c80c452ed602747f8a7780c6887da15d40a30331de41628041f4.scope: Deactivated successfully.
Feb 20 07:52:58 np0005625203.localdomain python3[51794]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name container-puppet-neutron --conmon-pidfile /run/container-puppet-neutron.pid --detach=False --entrypoint /var/lib/container-puppet/container-puppet.sh --env STEP=6 --env NET_HOST=true --env DEBUG=true --env HOSTNAME=np0005625203 --env NO_ARCHIVE= --env PUPPET_TAGS=file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config --env NAME=neutron --env STEP_CONFIG=include ::tripleo::packages
                                                         include tripleo::profile::base::neutron::ovn_metadata
                                                          --label config_id=tripleo_puppet_step1 --label container_name=container-puppet-neutron --label managed_by=tripleo_ansible --label config_data={'security_opt': ['label=disable'], 'user': 0, 'detach': False, 'recreate': True, 'entrypoint': '/var/lib/container-puppet/container-puppet.sh', 'environment': {'STEP': 6, 'NET_HOST': 'true', 'DEBUG': 'true', 'HOSTNAME': 'np0005625203', 'NO_ARCHIVE': '', 'PUPPET_TAGS': 'file,file_line,concat,augeas,cron,neutron_config,ovn_metadata_agent_config', 'NAME': 'neutron', 'STEP_CONFIG': 'include ::tripleo::packages\ninclude tripleo::profile::base::neutron::ovn_metadata\n'}, 'net': ['host'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1', 'volumes': ['/dev/log:/dev/log:rw', '/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/config-data:/var/lib/config-data:rw', '/var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro', '/var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro', '/var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/container-puppet-neutron.log --network host --security-opt label=disable --user 0 --volume /dev/log:/dev/log:rw --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro 
--volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/config-data:/var/lib/config-data:rw --volume /var/lib/container-puppet/container-puppet.sh:/var/lib/container-puppet/container-puppet.sh:ro --volume /var/lib/container-puppet/puppetlabs/facter.conf:/etc/puppetlabs/facter/facter.conf:ro --volume /var/lib/container-puppet/puppetlabs:/opt/puppetlabs:ro registry.redhat.io/rhosp-rhel9/openstack-neutron-server:17.1
Feb 20 07:52:58 np0005625203.localdomain sudo[51792]: pam_unix(sudo:session): session closed for user root
Feb 20 07:52:59 np0005625203.localdomain sudo[53829]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kmtvfcnetlmatpadclkealavyzwuawfs ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 07:52:59 np0005625203.localdomain sudo[53829]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:52:59 np0005625203.localdomain python3[53831]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:52:59 np0005625203.localdomain sudo[53829]: pam_unix(sudo:session): session closed for user root
Feb 20 07:52:59 np0005625203.localdomain sshd[53594]: Invalid user data from 185.246.128.171 port 11677
Feb 20 07:52:59 np0005625203.localdomain sudo[53845]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rckmkrzwfkumesysmeebwaszizycnjdl ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 07:52:59 np0005625203.localdomain sudo[53845]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:53:00 np0005625203.localdomain sudo[53845]: pam_unix(sudo:session): session closed for user root
Feb 20 07:53:00 np0005625203.localdomain sudo[53861]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yajitjtsfviggivlwojtskjisgbneddv ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 07:53:00 np0005625203.localdomain sudo[53861]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:53:00 np0005625203.localdomain python3[53863]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 20 07:53:00 np0005625203.localdomain sudo[53861]: pam_unix(sudo:session): session closed for user root
Feb 20 07:53:00 np0005625203.localdomain sshd[53866]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:53:00 np0005625203.localdomain sudo[53913]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-foehkfohdfxjycetbpfilksnyjkovasg ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 07:53:00 np0005625203.localdomain sudo[53913]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:53:00 np0005625203.localdomain sshd[53866]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 07:53:01 np0005625203.localdomain python3[53915]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:53:01 np0005625203.localdomain sudo[53913]: pam_unix(sudo:session): session closed for user root
Feb 20 07:53:01 np0005625203.localdomain sshd[53594]: Disconnecting invalid user data 185.246.128.171 port 11677: Change of username or service not allowed: (data,ssh-connection) -> (ubuntuserver,ssh-connection) [preauth]
Feb 20 07:53:01 np0005625203.localdomain sudo[53956]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tucmbvdfioxaywlnxmtrzmimnreolrtc ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 07:53:01 np0005625203.localdomain sudo[53956]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:53:01 np0005625203.localdomain python3[53958]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573980.7185414-84655-218035756974980/source dest=/usr/libexec/tripleo-container-shutdown mode=0700 owner=root group=root _original_basename=tripleo-container-shutdown follow=False checksum=7d67b1986212f5548057505748cd74cfcf9c0d35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:53:01 np0005625203.localdomain sudo[53956]: pam_unix(sudo:session): session closed for user root
Feb 20 07:53:01 np0005625203.localdomain sudo[54018]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gkrwfriiaxgvazdcwybtorgilavxbphb ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 07:53:01 np0005625203.localdomain sudo[54018]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:53:01 np0005625203.localdomain python3[54020]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:53:01 np0005625203.localdomain sudo[54018]: pam_unix(sudo:session): session closed for user root
Feb 20 07:53:02 np0005625203.localdomain sudo[54061]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hsatuzycagrdmqcqqavcaqjoxqrdzhto ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 07:53:02 np0005625203.localdomain sudo[54061]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:53:02 np0005625203.localdomain python3[54063]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573981.5472877-84655-43206586300347/source dest=/usr/libexec/tripleo-start-podman-container mode=0700 owner=root group=root _original_basename=tripleo-start-podman-container follow=False checksum=536965633b8d3b1ce794269ffb07be0105a560a0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:53:02 np0005625203.localdomain sudo[54061]: pam_unix(sudo:session): session closed for user root
Feb 20 07:53:02 np0005625203.localdomain sudo[54123]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-daxqvjfasqumlhrlyglsyhpqkgpjygxv ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 07:53:02 np0005625203.localdomain sudo[54123]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:53:02 np0005625203.localdomain sshd[54126]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:53:02 np0005625203.localdomain python3[54125]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:53:02 np0005625203.localdomain sudo[54123]: pam_unix(sudo:session): session closed for user root
Feb 20 07:53:02 np0005625203.localdomain sudo[54167]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vorxtsrmmdylryutgftudvfgltvnogvf ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 07:53:02 np0005625203.localdomain sudo[54167]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:53:03 np0005625203.localdomain python3[54169]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573982.401042-84687-142440986973214/source dest=/usr/lib/systemd/system/tripleo-container-shutdown.service mode=0644 owner=root group=root _original_basename=tripleo-container-shutdown-service follow=False checksum=66c1d41406ba8714feb9ed0a35259a7a57ef9707 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:53:03 np0005625203.localdomain sudo[54167]: pam_unix(sudo:session): session closed for user root
Feb 20 07:53:03 np0005625203.localdomain sudo[54229]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hnxjynnzogpvukhjofuurbikqgujnybr ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 07:53:03 np0005625203.localdomain sudo[54229]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:53:03 np0005625203.localdomain python3[54231]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:53:03 np0005625203.localdomain sudo[54229]: pam_unix(sudo:session): session closed for user root
Feb 20 07:53:03 np0005625203.localdomain sudo[54273]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-voyvabgnvbebsffwvzwektntdtofkedl ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 07:53:03 np0005625203.localdomain sudo[54273]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:53:04 np0005625203.localdomain python3[54275]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573983.3058789-84706-117135192572045/source dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset mode=0644 owner=root group=root _original_basename=91-tripleo-container-shutdown-preset follow=False checksum=bccb1207dcbcfaa5ca05f83c8f36ce4c2460f081 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:53:04 np0005625203.localdomain sudo[54273]: pam_unix(sudo:session): session closed for user root
Feb 20 07:53:04 np0005625203.localdomain sudo[54303]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gkzukmzwcnsfvabmggnjgyqerqhaatyx ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 07:53:04 np0005625203.localdomain sudo[54303]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:53:04 np0005625203.localdomain python3[54305]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 07:53:04 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 07:53:04 np0005625203.localdomain systemd-sysv-generator[54331]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 07:53:04 np0005625203.localdomain systemd-rc-local-generator[54326]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 07:53:04 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 07:53:04 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 07:53:04 np0005625203.localdomain systemd-rc-local-generator[54370]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 07:53:04 np0005625203.localdomain systemd-sysv-generator[54374]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 07:53:05 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 07:53:05 np0005625203.localdomain systemd[1]: Starting TripleO Container Shutdown...
Feb 20 07:53:05 np0005625203.localdomain systemd[1]: Finished TripleO Container Shutdown.
Feb 20 07:53:05 np0005625203.localdomain sudo[54303]: pam_unix(sudo:session): session closed for user root
Feb 20 07:53:05 np0005625203.localdomain sudo[54427]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-okpbpgavteobdmvwkzjuqpflxffremxc ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 07:53:05 np0005625203.localdomain sudo[54427]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:53:05 np0005625203.localdomain python3[54429]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:53:05 np0005625203.localdomain sudo[54427]: pam_unix(sudo:session): session closed for user root
Feb 20 07:53:05 np0005625203.localdomain sshd[54126]: Invalid user ubuntuserver from 185.246.128.171 port 47291
Feb 20 07:53:05 np0005625203.localdomain sudo[54470]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zskqtbterurfzoauntycvoafjxdbsitz ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 07:53:05 np0005625203.localdomain sudo[54470]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:53:05 np0005625203.localdomain python3[54472]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573985.3199987-84726-44968364650965/source dest=/usr/lib/systemd/system/netns-placeholder.service mode=0644 owner=root group=root _original_basename=netns-placeholder-service follow=False checksum=8e9c6d5ce3a6e7f71c18780ec899f32f23de4c71 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:53:06 np0005625203.localdomain sudo[54470]: pam_unix(sudo:session): session closed for user root
Feb 20 07:53:06 np0005625203.localdomain sudo[54532]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pyrlgjqzqnrkawnovibqhugksxnxqjwc ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 07:53:06 np0005625203.localdomain sudo[54532]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:53:06 np0005625203.localdomain python3[54534]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:53:06 np0005625203.localdomain sudo[54532]: pam_unix(sudo:session): session closed for user root
Feb 20 07:53:06 np0005625203.localdomain sshd[54126]: Disconnecting invalid user ubuntuserver 185.246.128.171 port 47291: Change of username or service not allowed: (ubuntuserver,ssh-connection) -> (root,ssh-connection) [preauth]
Feb 20 07:53:06 np0005625203.localdomain sudo[54575]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ksrojfdjgkkqvuajwgkfrvweusmsgqct ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 07:53:06 np0005625203.localdomain sudo[54575]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:53:06 np0005625203.localdomain python3[54577]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573986.2139297-84744-227447322675392/source dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset mode=0644 owner=root group=root _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:53:06 np0005625203.localdomain sudo[54575]: pam_unix(sudo:session): session closed for user root
Feb 20 07:53:07 np0005625203.localdomain sudo[54605]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mljhywixkumcajacfyhypzlipfbjozzr ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 07:53:07 np0005625203.localdomain sudo[54605]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:53:07 np0005625203.localdomain python3[54607]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 07:53:07 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 07:53:07 np0005625203.localdomain systemd-rc-local-generator[54630]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 07:53:07 np0005625203.localdomain systemd-sysv-generator[54634]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 07:53:07 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 07:53:07 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 07:53:07 np0005625203.localdomain systemd-rc-local-generator[54669]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 07:53:07 np0005625203.localdomain systemd-sysv-generator[54674]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 07:53:07 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 07:53:08 np0005625203.localdomain systemd[1]: Starting Create netns directory...
Feb 20 07:53:08 np0005625203.localdomain systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Feb 20 07:53:08 np0005625203.localdomain systemd[1]: netns-placeholder.service: Deactivated successfully.
Feb 20 07:53:08 np0005625203.localdomain systemd[1]: Finished Create netns directory.
Feb 20 07:53:08 np0005625203.localdomain sudo[54605]: pam_unix(sudo:session): session closed for user root
Feb 20 07:53:08 np0005625203.localdomain sudo[54697]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bvrrnknmqwrorwtgtxzsadlkmitfcswz ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 07:53:08 np0005625203.localdomain sudo[54697]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:53:08 np0005625203.localdomain python3[54699]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6
Feb 20 07:53:08 np0005625203.localdomain python3[54699]: ansible-container_puppet_config [WARNING] Config change detected for metrics_qdr, new hash: 03a088c43b03e1e2de6da7d8c7c66191
Feb 20 07:53:08 np0005625203.localdomain python3[54699]: ansible-container_puppet_config [WARNING] Config change detected for collectd, new hash: 4767aaabc3de112d8791c290aa2b669d
Feb 20 07:53:08 np0005625203.localdomain python3[54699]: ansible-container_puppet_config [WARNING] Config change detected for iscsid, new hash: ea40a11d6c51260bfa854053d924f0d3
Feb 20 07:53:08 np0005625203.localdomain python3[54699]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtlogd_wrapper, new hash: 2eb7e8e9794eebaba92e1ff8facc8868
Feb 20 07:53:08 np0005625203.localdomain python3[54699]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtnodedevd, new hash: 2eb7e8e9794eebaba92e1ff8facc8868
Feb 20 07:53:08 np0005625203.localdomain python3[54699]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtproxyd, new hash: 2eb7e8e9794eebaba92e1ff8facc8868
Feb 20 07:53:08 np0005625203.localdomain python3[54699]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtqemud, new hash: 2eb7e8e9794eebaba92e1ff8facc8868
Feb 20 07:53:08 np0005625203.localdomain python3[54699]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtsecretd, new hash: 2eb7e8e9794eebaba92e1ff8facc8868
Feb 20 07:53:08 np0005625203.localdomain python3[54699]: ansible-container_puppet_config [WARNING] Config change detected for nova_virtstoraged, new hash: 2eb7e8e9794eebaba92e1ff8facc8868
Feb 20 07:53:08 np0005625203.localdomain python3[54699]: ansible-container_puppet_config [WARNING] Config change detected for rsyslog, new hash: 97414cfc893df553083a7f7bb1c65a4f
Feb 20 07:53:08 np0005625203.localdomain python3[54699]: ansible-container_puppet_config [WARNING] Config change detected for ceilometer_agent_compute, new hash: 201974126bd6c3f7e7b4f5296aea3207
Feb 20 07:53:08 np0005625203.localdomain python3[54699]: ansible-container_puppet_config [WARNING] Config change detected for ceilometer_agent_ipmi, new hash: 201974126bd6c3f7e7b4f5296aea3207
Feb 20 07:53:08 np0005625203.localdomain python3[54699]: ansible-container_puppet_config [WARNING] Config change detected for logrotate_crond, new hash: 53ed83bb0cae779ff95edb2002262c6f
Feb 20 07:53:08 np0005625203.localdomain python3[54699]: ansible-container_puppet_config [WARNING] Config change detected for nova_libvirt_init_secret, new hash: 2eb7e8e9794eebaba92e1ff8facc8868
Feb 20 07:53:08 np0005625203.localdomain python3[54699]: ansible-container_puppet_config [WARNING] Config change detected for nova_migration_target, new hash: 2eb7e8e9794eebaba92e1ff8facc8868
Feb 20 07:53:08 np0005625203.localdomain python3[54699]: ansible-container_puppet_config [WARNING] Config change detected for ovn_metadata_agent, new hash: ef7731a1bdeb8ee7875974b29f2e34e6
Feb 20 07:53:08 np0005625203.localdomain python3[54699]: ansible-container_puppet_config [WARNING] Config change detected for nova_compute, new hash: ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868
Feb 20 07:53:08 np0005625203.localdomain python3[54699]: ansible-container_puppet_config [WARNING] Config change detected for nova_wait_for_compute_service, new hash: 2eb7e8e9794eebaba92e1ff8facc8868
Feb 20 07:53:08 np0005625203.localdomain sudo[54697]: pam_unix(sudo:session): session closed for user root
Feb 20 07:53:08 np0005625203.localdomain sudo[54713]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tevlacrpujailoysfpyfwajbyxegbvvm ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 07:53:08 np0005625203.localdomain sudo[54713]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:53:09 np0005625203.localdomain sudo[54713]: pam_unix(sudo:session): session closed for user root
Feb 20 07:53:09 np0005625203.localdomain sudo[54755]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oksddyefrgalrvpvibrmudehhraflmjo ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 07:53:09 np0005625203.localdomain sudo[54755]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:53:10 np0005625203.localdomain python3[54757]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step1 config_dir=/var/lib/tripleo-config/container-startup-config/step_1 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False
Feb 20 07:53:10 np0005625203.localdomain podman[54795]: 2026-02-20 07:53:10.334318891 +0000 UTC m=+0.070205261 container create 4277dff6ab147a8c84334cc97c9451f5c9c754803a99e608c8e88f8a02809976 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-qdrouterd, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., architecture=x86_64, version=17.1.13, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=metrics_qdr_init_logs, org.opencontainers.image.created=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public)
Feb 20 07:53:10 np0005625203.localdomain systemd[1]: Started libpod-conmon-4277dff6ab147a8c84334cc97c9451f5c9c754803a99e608c8e88f8a02809976.scope.
Feb 20 07:53:10 np0005625203.localdomain podman[54795]: 2026-02-20 07:53:10.298470688 +0000 UTC m=+0.034357058 image pull  registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1
Feb 20 07:53:10 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 07:53:10 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18d3c180b04686ea7c65af6629ae8cca3da3fb24aab07e7a3010e7d9a717a915/merged/var/log/qdrouterd supports timestamps until 2038 (0x7fffffff)
Feb 20 07:53:10 np0005625203.localdomain podman[54795]: 2026-02-20 07:53:10.422085542 +0000 UTC m=+0.157971942 container init 4277dff6ab147a8c84334cc97c9451f5c9c754803a99e608c8e88f8a02809976 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, container_name=metrics_qdr_init_logs, org.opencontainers.image.created=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.13, com.redhat.component=openstack-qdrouterd-container, name=rhosp-rhel9/openstack-qdrouterd, url=https://www.redhat.com, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, tcib_managed=true, distribution-scope=public, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, vendor=Red Hat, Inc., io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd)
Feb 20 07:53:10 np0005625203.localdomain podman[54795]: 2026-02-20 07:53:10.433401984 +0000 UTC m=+0.169288384 container start 4277dff6ab147a8c84334cc97c9451f5c9c754803a99e608c8e88f8a02809976 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, tcib_managed=true, name=rhosp-rhel9/openstack-qdrouterd, url=https://www.redhat.com, build-date=2026-01-12T22:10:14Z, vcs-type=git, version=17.1.13, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr_init_logs, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee)
Feb 20 07:53:10 np0005625203.localdomain podman[54795]: 2026-02-20 07:53:10.433671672 +0000 UTC m=+0.169558062 container attach 4277dff6ab147a8c84334cc97c9451f5c9c754803a99e608c8e88f8a02809976 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, maintainer=OpenStack TripleO Team, version=17.1.13, config_id=tripleo_step1, vcs-type=git, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:14Z, container_name=metrics_qdr_init_logs, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, batch=17.1_20260112.1, architecture=x86_64, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, url=https://www.redhat.com, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1766032510, name=rhosp-rhel9/openstack-qdrouterd, managed_by=tripleo_ansible)
Feb 20 07:53:10 np0005625203.localdomain systemd[1]: libpod-4277dff6ab147a8c84334cc97c9451f5c9c754803a99e608c8e88f8a02809976.scope: Deactivated successfully.
Feb 20 07:53:10 np0005625203.localdomain podman[54795]: 2026-02-20 07:53:10.445097657 +0000 UTC m=+0.180984137 container died 4277dff6ab147a8c84334cc97c9451f5c9c754803a99e608c8e88f8a02809976 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, batch=17.1_20260112.1, build-date=2026-01-12T22:10:14Z, name=rhosp-rhel9/openstack-qdrouterd, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, container_name=metrics_qdr_init_logs, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, io.buildah.version=1.41.5, release=1766032510, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com)
Feb 20 07:53:10 np0005625203.localdomain sshd[54826]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:53:10 np0005625203.localdomain podman[54814]: 2026-02-20 07:53:10.526663171 +0000 UTC m=+0.066838780 container cleanup 4277dff6ab147a8c84334cc97c9451f5c9c754803a99e608c8e88f8a02809976 (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr_init_logs, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:14Z, url=https://www.redhat.com, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, build-date=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, version=17.1.13, config_id=tripleo_step1, distribution-scope=public, batch=17.1_20260112.1, vendor=Red Hat, Inc., container_name=metrics_qdr_init_logs)
Feb 20 07:53:10 np0005625203.localdomain systemd[1]: libpod-conmon-4277dff6ab147a8c84334cc97c9451f5c9c754803a99e608c8e88f8a02809976.scope: Deactivated successfully.
Feb 20 07:53:10 np0005625203.localdomain python3[54757]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name metrics_qdr_init_logs --conmon-pidfile /run/metrics_qdr_init_logs.pid --detach=False --label config_id=tripleo_step1 --label container_name=metrics_qdr_init_logs --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R qdrouterd:qdrouterd /var/log/qdrouterd'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'none', 'privileged': False, 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/metrics_qdr_init_logs.log --network none --privileged=False --user root --volume /var/log/containers/metrics_qdr:/var/log/qdrouterd:z registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1 /bin/bash -c chown -R qdrouterd:qdrouterd /var/log/qdrouterd
Feb 20 07:53:11 np0005625203.localdomain podman[54888]: 2026-02-20 07:53:11.011813546 +0000 UTC m=+0.088772323 container create a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_id=tripleo_step1, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-qdrouterd, version=17.1.13, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, build-date=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, managed_by=tripleo_ansible)
Feb 20 07:53:11 np0005625203.localdomain systemd[1]: Started libpod-conmon-a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.scope.
Feb 20 07:53:11 np0005625203.localdomain podman[54888]: 2026-02-20 07:53:10.969499798 +0000 UTC m=+0.046458565 image pull  registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1
Feb 20 07:53:11 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 07:53:11 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de09a3fcf798e8d6f765a1320359ed0f97a6a1d1a2a8fd17434f89e173a7556b/merged/var/log/qdrouterd supports timestamps until 2038 (0x7fffffff)
Feb 20 07:53:11 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de09a3fcf798e8d6f765a1320359ed0f97a6a1d1a2a8fd17434f89e173a7556b/merged/var/lib/qdrouterd supports timestamps until 2038 (0x7fffffff)
Feb 20 07:53:11 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 07:53:11 np0005625203.localdomain podman[54888]: 2026-02-20 07:53:11.11191961 +0000 UTC m=+0.188878417 container init a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.13, architecture=x86_64, vcs-type=git, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp-rhel9/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, 
build-date=2026-01-12T22:10:14Z, container_name=metrics_qdr, url=https://www.redhat.com, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, release=1766032510)
Feb 20 07:53:11 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 07:53:11 np0005625203.localdomain podman[54888]: 2026-02-20 07:53:11.140120981 +0000 UTC m=+0.217079758 container start a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, name=rhosp-rhel9/openstack-qdrouterd, build-date=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-qdrouterd-container, version=17.1.13, tcib_managed=true, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, container_name=metrics_qdr, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:14Z, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, config_id=tripleo_step1, io.buildah.version=1.41.5, vcs-type=git)
Feb 20 07:53:11 np0005625203.localdomain sudo[54909]: qdrouterd : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Feb 20 07:53:11 np0005625203.localdomain sudo[54909]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42465)
Feb 20 07:53:11 np0005625203.localdomain python3[54757]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name metrics_qdr --conmon-pidfile /run/metrics_qdr.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=03a088c43b03e1e2de6da7d8c7c66191 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step1 --label container_name=metrics_qdr --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/metrics_qdr.log --network host --privileged=False --user qdrouterd --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro --volume /var/lib/metrics_qdr:/var/lib/qdrouterd:z --volume /var/log/containers/metrics_qdr:/var/log/qdrouterd:z registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1
Feb 20 07:53:11 np0005625203.localdomain sudo[54909]: pam_unix(sudo:session): session closed for user root
Feb 20 07:53:11 np0005625203.localdomain podman[54910]: 2026-02-20 07:53:11.285023878 +0000 UTC m=+0.132999808 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=starting, batch=17.1_20260112.1, vcs-type=git, architecture=x86_64, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:14Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=metrics_qdr, version=17.1.13, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., 
description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:14Z, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_id=tripleo_step1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee)
Feb 20 07:53:11 np0005625203.localdomain sudo[54755]: pam_unix(sudo:session): session closed for user root
Feb 20 07:53:11 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-18d3c180b04686ea7c65af6629ae8cca3da3fb24aab07e7a3010e7d9a717a915-merged.mount: Deactivated successfully.
Feb 20 07:53:11 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4277dff6ab147a8c84334cc97c9451f5c9c754803a99e608c8e88f8a02809976-userdata-shm.mount: Deactivated successfully.
Feb 20 07:53:11 np0005625203.localdomain podman[54910]: 2026-02-20 07:53:11.489536726 +0000 UTC m=+0.337512686 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, architecture=x86_64, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, version=17.1.13, vendor=Red Hat, Inc., config_id=tripleo_step1, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com)
Feb 20 07:53:11 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 07:53:11 np0005625203.localdomain sudo[54979]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kmrnwhawfgshyorhsojlrzixuixufcza ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 07:53:11 np0005625203.localdomain sudo[54979]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:53:11 np0005625203.localdomain python3[54981]: ansible-file Invoked with path=/etc/systemd/system/tripleo_metrics_qdr.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:53:11 np0005625203.localdomain sudo[54979]: pam_unix(sudo:session): session closed for user root
Feb 20 07:53:11 np0005625203.localdomain sudo[54995]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vovyynctizbcqfrjcibpcyqgfygeaihp ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 07:53:11 np0005625203.localdomain sudo[54995]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:53:11 np0005625203.localdomain python3[54997]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_metrics_qdr_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 20 07:53:12 np0005625203.localdomain sudo[54995]: pam_unix(sudo:session): session closed for user root
Feb 20 07:53:12 np0005625203.localdomain sudo[55057]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sqrpbcruqnxsnucnrltzceqkkwaevqmd ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 07:53:12 np0005625203.localdomain sudo[55057]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:53:12 np0005625203.localdomain python3[55059]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771573992.0573015-84911-230283816016542/source dest=/etc/systemd/system/tripleo_metrics_qdr.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:53:12 np0005625203.localdomain sudo[55057]: pam_unix(sudo:session): session closed for user root
Feb 20 07:53:12 np0005625203.localdomain sudo[55073]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vvjhpqhmuujuowcmbiqpztwsmgkreawh ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 07:53:12 np0005625203.localdomain sudo[55073]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:53:12 np0005625203.localdomain python3[55075]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 20 07:53:12 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 07:53:13 np0005625203.localdomain systemd-rc-local-generator[55099]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 07:53:13 np0005625203.localdomain systemd-sysv-generator[55105]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 07:53:13 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 07:53:13 np0005625203.localdomain sudo[55073]: pam_unix(sudo:session): session closed for user root
Feb 20 07:53:13 np0005625203.localdomain sudo[55125]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lmkcxptmrsjdxaarjvcqhaprlqatqtgt ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 07:53:13 np0005625203.localdomain sudo[55125]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:53:13 np0005625203.localdomain python3[55127]: ansible-systemd Invoked with state=restarted name=tripleo_metrics_qdr.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 07:53:13 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 07:53:13 np0005625203.localdomain systemd-rc-local-generator[55153]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 07:53:13 np0005625203.localdomain systemd-sysv-generator[55158]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 07:53:14 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 07:53:14 np0005625203.localdomain systemd[1]: Starting metrics_qdr container...
Feb 20 07:53:14 np0005625203.localdomain systemd[1]: Started metrics_qdr container.
Feb 20 07:53:14 np0005625203.localdomain sudo[55125]: pam_unix(sudo:session): session closed for user root
Feb 20 07:53:14 np0005625203.localdomain sudo[55206]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-unitoxcsbglzcchhxtrppqzrwgbgdmfp ; /usr/bin/python3
Feb 20 07:53:14 np0005625203.localdomain sudo[55206]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:53:14 np0005625203.localdomain python3[55208]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks1.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:53:14 np0005625203.localdomain sudo[55206]: pam_unix(sudo:session): session closed for user root
Feb 20 07:53:15 np0005625203.localdomain sudo[55254]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-drbuobbaslngejmatmsxwrkfvjdkmyma ; /usr/bin/python3
Feb 20 07:53:15 np0005625203.localdomain sudo[55254]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:53:15 np0005625203.localdomain sudo[55254]: pam_unix(sudo:session): session closed for user root
Feb 20 07:53:15 np0005625203.localdomain sudo[55297]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ivbspolpamrdilojefyifysqwdfisikh ; /usr/bin/python3
Feb 20 07:53:15 np0005625203.localdomain sudo[55297]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:53:15 np0005625203.localdomain sudo[55297]: pam_unix(sudo:session): session closed for user root
Feb 20 07:53:15 np0005625203.localdomain sudo[55327]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rpwsakcawhvxxnsoaisooxenloyiadjb ; /usr/bin/python3
Feb 20 07:53:15 np0005625203.localdomain sudo[55327]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:53:16 np0005625203.localdomain python3[55329]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks1.json short_hostname=np0005625203 step=1 update_config_hash_only=False
Feb 20 07:53:16 np0005625203.localdomain sudo[55327]: pam_unix(sudo:session): session closed for user root
Feb 20 07:53:16 np0005625203.localdomain sudo[55343]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-acblupbncpaxnpjevpmmoivmqeocwwbg ; /usr/bin/python3
Feb 20 07:53:16 np0005625203.localdomain sudo[55343]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:53:16 np0005625203.localdomain python3[55345]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:53:16 np0005625203.localdomain sudo[55343]: pam_unix(sudo:session): session closed for user root
Feb 20 07:53:16 np0005625203.localdomain sudo[55359]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uqffedbspvngtvyawrvkvttbbgbhzmfk ; /usr/bin/python3
Feb 20 07:53:16 np0005625203.localdomain sudo[55359]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:53:16 np0005625203.localdomain python3[55361]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_1 config_pattern=container-puppet-*.json config_overrides={} debug=True
Feb 20 07:53:16 np0005625203.localdomain sudo[55359]: pam_unix(sudo:session): session closed for user root
Feb 20 07:53:20 np0005625203.localdomain sshd[54826]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 24471 ssh2 [preauth]
Feb 20 07:53:20 np0005625203.localdomain sshd[54826]: Disconnecting authenticating user root 185.246.128.171 port 24471: Too many authentication failures [preauth]
Feb 20 07:53:22 np0005625203.localdomain sshd[55362]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:53:31 np0005625203.localdomain sshd[55362]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 21117 ssh2 [preauth]
Feb 20 07:53:31 np0005625203.localdomain sshd[55362]: Disconnecting authenticating user root 185.246.128.171 port 21117: Too many authentication failures [preauth]
Feb 20 07:53:35 np0005625203.localdomain sshd[55364]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:53:39 np0005625203.localdomain sshd[55366]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:53:41 np0005625203.localdomain sshd[55366]: Invalid user n8n from 40.81.244.142 port 53664
Feb 20 07:53:41 np0005625203.localdomain sshd[55366]: Received disconnect from 40.81.244.142 port 53664:11: Bye Bye [preauth]
Feb 20 07:53:41 np0005625203.localdomain sshd[55366]: Disconnected from invalid user n8n 40.81.244.142 port 53664 [preauth]
Feb 20 07:53:41 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 07:53:41 np0005625203.localdomain systemd[1]: tmp-crun.V4pXv0.mount: Deactivated successfully.
Feb 20 07:53:41 np0005625203.localdomain podman[55368]: 2026-02-20 07:53:41.774903653 +0000 UTC m=+0.093702834 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, release=1766032510, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, version=17.1.13, config_id=tripleo_step1, build-date=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, name=rhosp-rhel9/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 07:53:41 np0005625203.localdomain podman[55368]: 2026-02-20 07:53:41.987115859 +0000 UTC m=+0.305914970 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:14Z, container_name=metrics_qdr, build-date=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510)
Feb 20 07:53:42 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 07:53:44 np0005625203.localdomain sshd[55364]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 27547 ssh2 [preauth]
Feb 20 07:53:44 np0005625203.localdomain sshd[55364]: Disconnecting authenticating user root 185.246.128.171 port 27547: Too many authentication failures [preauth]
Feb 20 07:53:46 np0005625203.localdomain sshd[55397]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:53:47 np0005625203.localdomain sshd[55399]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:53:47 np0005625203.localdomain sshd[55399]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 07:53:51 np0005625203.localdomain sudo[55401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 07:53:51 np0005625203.localdomain sudo[55401]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:53:51 np0005625203.localdomain sudo[55401]: pam_unix(sudo:session): session closed for user root
Feb 20 07:53:51 np0005625203.localdomain sudo[55416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 07:53:51 np0005625203.localdomain sudo[55416]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:53:51 np0005625203.localdomain sudo[55416]: pam_unix(sudo:session): session closed for user root
Feb 20 07:53:52 np0005625203.localdomain sudo[55462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 07:53:52 np0005625203.localdomain sudo[55462]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:53:52 np0005625203.localdomain sudo[55462]: pam_unix(sudo:session): session closed for user root
Feb 20 07:53:53 np0005625203.localdomain sshd[55397]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 20052 ssh2 [preauth]
Feb 20 07:53:53 np0005625203.localdomain sshd[55397]: Disconnecting authenticating user root 185.246.128.171 port 20052: Too many authentication failures [preauth]
Feb 20 07:53:55 np0005625203.localdomain sshd[55477]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:54:03 np0005625203.localdomain sshd[55477]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 6881 ssh2 [preauth]
Feb 20 07:54:03 np0005625203.localdomain sshd[55477]: Disconnecting authenticating user root 185.246.128.171 port 6881: Too many authentication failures [preauth]
Feb 20 07:54:07 np0005625203.localdomain sshd[55479]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:54:12 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 07:54:12 np0005625203.localdomain podman[55481]: 2026-02-20 07:54:12.760845156 +0000 UTC m=+0.076543277 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-type=git, release=1766032510, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, 
vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-qdrouterd, io.buildah.version=1.41.5, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, distribution-scope=public)
Feb 20 07:54:12 np0005625203.localdomain sshd[55497]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:54:12 np0005625203.localdomain podman[55481]: 2026-02-20 07:54:12.949275583 +0000 UTC m=+0.264973754 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, name=rhosp-rhel9/openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, org.opencontainers.image.created=2026-01-12T22:10:14Z, url=https://www.redhat.com, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 07:54:12 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 07:54:13 np0005625203.localdomain sshd[55497]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 07:54:17 np0005625203.localdomain sshd[55479]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 7018 ssh2 [preauth]
Feb 20 07:54:17 np0005625203.localdomain sshd[55479]: Disconnecting authenticating user root 185.246.128.171 port 7018: Too many authentication failures [preauth]
Feb 20 07:54:20 np0005625203.localdomain sshd[55513]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:54:29 np0005625203.localdomain sshd[55513]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 11711 ssh2 [preauth]
Feb 20 07:54:29 np0005625203.localdomain sshd[55513]: Disconnecting authenticating user root 185.246.128.171 port 11711: Too many authentication failures [preauth]
Feb 20 07:54:30 np0005625203.localdomain sshd[55515]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:54:35 np0005625203.localdomain sshd[55515]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 5996 ssh2 [preauth]
Feb 20 07:54:35 np0005625203.localdomain sshd[55515]: Disconnecting authenticating user root 185.246.128.171 port 5996: Too many authentication failures [preauth]
Feb 20 07:54:35 np0005625203.localdomain sshd[55517]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:54:35 np0005625203.localdomain sshd[55519]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:54:35 np0005625203.localdomain sshd[55517]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 07:54:43 np0005625203.localdomain sshd[55519]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 34469 ssh2 [preauth]
Feb 20 07:54:43 np0005625203.localdomain sshd[55519]: Disconnecting authenticating user root 185.246.128.171 port 34469: Too many authentication failures [preauth]
Feb 20 07:54:43 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 07:54:43 np0005625203.localdomain podman[55521]: 2026-02-20 07:54:43.219603465 +0000 UTC m=+0.071618465 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp-rhel9/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, build-date=2026-01-12T22:10:14Z, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, io.buildah.version=1.41.5, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:14Z)
Feb 20 07:54:43 np0005625203.localdomain podman[55521]: 2026-02-20 07:54:43.448651363 +0000 UTC m=+0.300666413 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, 
io.openshift.expose-services=, batch=17.1_20260112.1, build-date=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:14Z, name=rhosp-rhel9/openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd)
Feb 20 07:54:43 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 07:54:44 np0005625203.localdomain sshd[55550]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:54:50 np0005625203.localdomain sshd[55550]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 18127 ssh2 [preauth]
Feb 20 07:54:50 np0005625203.localdomain sshd[55550]: Disconnecting authenticating user root 185.246.128.171 port 18127: Too many authentication failures [preauth]
Feb 20 07:54:51 np0005625203.localdomain sshd[55552]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:54:52 np0005625203.localdomain sudo[55553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 07:54:52 np0005625203.localdomain sudo[55553]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:54:52 np0005625203.localdomain sudo[55553]: pam_unix(sudo:session): session closed for user root
Feb 20 07:54:52 np0005625203.localdomain sudo[55568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 07:54:52 np0005625203.localdomain sudo[55568]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:54:53 np0005625203.localdomain sudo[55568]: pam_unix(sudo:session): session closed for user root
Feb 20 07:54:53 np0005625203.localdomain sudo[55616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 07:54:53 np0005625203.localdomain sudo[55616]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:54:53 np0005625203.localdomain sudo[55616]: pam_unix(sudo:session): session closed for user root
Feb 20 07:55:05 np0005625203.localdomain sshd[55631]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:55:06 np0005625203.localdomain sshd[55631]: Invalid user n8n from 187.87.206.21 port 48020
Feb 20 07:55:06 np0005625203.localdomain sshd[55552]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 61448 ssh2 [preauth]
Feb 20 07:55:06 np0005625203.localdomain sshd[55552]: Disconnecting authenticating user root 185.246.128.171 port 61448: Too many authentication failures [preauth]
Feb 20 07:55:06 np0005625203.localdomain sshd[55631]: Received disconnect from 187.87.206.21 port 48020:11: Bye Bye [preauth]
Feb 20 07:55:06 np0005625203.localdomain sshd[55631]: Disconnected from invalid user n8n 187.87.206.21 port 48020 [preauth]
Feb 20 07:55:09 np0005625203.localdomain sshd[55633]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:55:13 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 07:55:13 np0005625203.localdomain podman[55635]: 2026-02-20 07:55:13.773982798 +0000 UTC m=+0.091735503 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, release=1766032510, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, config_id=tripleo_step1, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, url=https://www.redhat.com, container_name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:14Z, name=rhosp-rhel9/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd)
Feb 20 07:55:13 np0005625203.localdomain podman[55635]: 2026-02-20 07:55:13.967312794 +0000 UTC m=+0.285065459 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, tcib_managed=true, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.13, architecture=x86_64, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc.)
Feb 20 07:55:13 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 07:55:18 np0005625203.localdomain sshd[55633]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 35799 ssh2 [preauth]
Feb 20 07:55:18 np0005625203.localdomain sshd[55633]: Disconnecting authenticating user root 185.246.128.171 port 35799: Too many authentication failures [preauth]
Feb 20 07:55:20 np0005625203.localdomain sshd[55663]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:55:23 np0005625203.localdomain sshd[55665]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:55:24 np0005625203.localdomain sshd[55665]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 07:55:28 np0005625203.localdomain sshd[55663]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 33701 ssh2 [preauth]
Feb 20 07:55:28 np0005625203.localdomain sshd[55663]: Disconnecting authenticating user root 185.246.128.171 port 33701: Too many authentication failures [preauth]
Feb 20 07:55:31 np0005625203.localdomain sshd[55667]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:55:39 np0005625203.localdomain sshd[55669]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:55:39 np0005625203.localdomain sshd[55669]: Invalid user builder from 189.190.2.14 port 35978
Feb 20 07:55:39 np0005625203.localdomain sshd[55669]: Received disconnect from 189.190.2.14 port 35978:11: Bye Bye [preauth]
Feb 20 07:55:39 np0005625203.localdomain sshd[55669]: Disconnected from invalid user builder 189.190.2.14 port 35978 [preauth]
Feb 20 07:55:40 np0005625203.localdomain sshd[55667]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 35960 ssh2 [preauth]
Feb 20 07:55:40 np0005625203.localdomain sshd[55667]: Disconnecting authenticating user root 185.246.128.171 port 35960: Too many authentication failures [preauth]
Feb 20 07:55:43 np0005625203.localdomain sshd[55671]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:55:44 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 07:55:44 np0005625203.localdomain podman[55673]: 2026-02-20 07:55:44.757796231 +0000 UTC m=+0.078644098 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, release=1766032510, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, url=https://www.redhat.com, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2026-01-12T22:10:14Z, version=17.1.13, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, org.opencontainers.image.created=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd)
Feb 20 07:55:44 np0005625203.localdomain podman[55673]: 2026-02-20 07:55:44.957799023 +0000 UTC m=+0.278646860 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:14Z, release=1766032510, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.13, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, vcs-type=git, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd)
Feb 20 07:55:44 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 07:55:45 np0005625203.localdomain sshd[55702]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:55:46 np0005625203.localdomain sshd[55702]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 07:55:51 np0005625203.localdomain sshd[55704]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:55:53 np0005625203.localdomain sshd[55704]: Received disconnect from 123.204.132.127 port 40746:11: Bye Bye [preauth]
Feb 20 07:55:53 np0005625203.localdomain sshd[55704]: Disconnected from authenticating user root 123.204.132.127 port 40746 [preauth]
Feb 20 07:55:53 np0005625203.localdomain sudo[55706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 07:55:53 np0005625203.localdomain sudo[55706]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:55:53 np0005625203.localdomain sudo[55706]: pam_unix(sudo:session): session closed for user root
Feb 20 07:55:54 np0005625203.localdomain sudo[55721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 07:55:54 np0005625203.localdomain sudo[55721]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:55:54 np0005625203.localdomain sudo[55721]: pam_unix(sudo:session): session closed for user root
Feb 20 07:55:55 np0005625203.localdomain sudo[55768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 07:55:55 np0005625203.localdomain sudo[55768]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:55:55 np0005625203.localdomain sudo[55768]: pam_unix(sudo:session): session closed for user root
Feb 20 07:55:55 np0005625203.localdomain sshd[55671]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 41088 ssh2 [preauth]
Feb 20 07:55:55 np0005625203.localdomain sshd[55671]: Disconnecting authenticating user root 185.246.128.171 port 41088: Too many authentication failures [preauth]
Feb 20 07:55:57 np0005625203.localdomain sshd[55783]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:56:00 np0005625203.localdomain sshd[55783]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 4209 ssh2 [preauth]
Feb 20 07:56:00 np0005625203.localdomain sshd[55783]: Disconnecting authenticating user root 185.246.128.171 port 4209: Too many authentication failures [preauth]
Feb 20 07:56:02 np0005625203.localdomain sshd[55785]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:56:10 np0005625203.localdomain sshd[55785]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 30170 ssh2 [preauth]
Feb 20 07:56:10 np0005625203.localdomain sshd[55785]: Disconnecting authenticating user root 185.246.128.171 port 30170: Too many authentication failures [preauth]
Feb 20 07:56:11 np0005625203.localdomain sshd[55787]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:56:12 np0005625203.localdomain sshd[55789]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:56:12 np0005625203.localdomain sshd[55789]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 07:56:15 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 07:56:15 np0005625203.localdomain podman[55791]: 2026-02-20 07:56:15.762825307 +0000 UTC m=+0.081523888 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, release=1766032510, vendor=Red Hat, Inc., tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, build-date=2026-01-12T22:10:14Z, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_id=tripleo_step1, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 qdrouterd)
Feb 20 07:56:15 np0005625203.localdomain podman[55791]: 2026-02-20 07:56:15.957028958 +0000 UTC m=+0.275727579 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, distribution-scope=public, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, version=17.1.13, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.created=2026-01-12T22:10:14Z)
Feb 20 07:56:15 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 07:56:20 np0005625203.localdomain sshd[55787]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 22571 ssh2 [preauth]
Feb 20 07:56:20 np0005625203.localdomain sshd[55787]: Disconnecting authenticating user root 185.246.128.171 port 22571: Too many authentication failures [preauth]
Feb 20 07:56:23 np0005625203.localdomain sshd[55820]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:56:29 np0005625203.localdomain sshd[55820]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 35700 ssh2 [preauth]
Feb 20 07:56:29 np0005625203.localdomain sshd[55820]: Disconnecting authenticating user root 185.246.128.171 port 35700: Too many authentication failures [preauth]
Feb 20 07:56:30 np0005625203.localdomain sshd[55822]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:56:37 np0005625203.localdomain sshd[55822]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 12681 ssh2 [preauth]
Feb 20 07:56:37 np0005625203.localdomain sshd[55822]: Disconnecting authenticating user root 185.246.128.171 port 12681: Too many authentication failures [preauth]
Feb 20 07:56:38 np0005625203.localdomain sshd[55824]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:56:46 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 07:56:46 np0005625203.localdomain podman[55826]: 2026-02-20 07:56:46.780217216 +0000 UTC m=+0.101860321 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, name=rhosp-rhel9/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, 
org.opencontainers.image.created=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20260112.1)
Feb 20 07:56:46 np0005625203.localdomain podman[55826]: 2026-02-20 07:56:46.974322743 +0000 UTC m=+0.295965808 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, architecture=x86_64, name=rhosp-rhel9/openstack-qdrouterd, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:14Z, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, build-date=2026-01-12T22:10:14Z, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, distribution-scope=public, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step1, io.buildah.version=1.41.5, managed_by=tripleo_ansible, vendor=Red Hat, Inc., tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd)
Feb 20 07:56:47 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 07:56:48 np0005625203.localdomain sshd[55824]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 58979 ssh2 [preauth]
Feb 20 07:56:48 np0005625203.localdomain sshd[55824]: Disconnecting authenticating user root 185.246.128.171 port 58979: Too many authentication failures [preauth]
Feb 20 07:56:50 np0005625203.localdomain sshd[55856]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:56:55 np0005625203.localdomain sshd[55856]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 3749 ssh2 [preauth]
Feb 20 07:56:55 np0005625203.localdomain sshd[55856]: Disconnecting authenticating user root 185.246.128.171 port 3749: Too many authentication failures [preauth]
Feb 20 07:56:55 np0005625203.localdomain sudo[55858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 07:56:55 np0005625203.localdomain sudo[55858]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:56:55 np0005625203.localdomain sudo[55858]: pam_unix(sudo:session): session closed for user root
Feb 20 07:56:55 np0005625203.localdomain sudo[55873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 07:56:55 np0005625203.localdomain sudo[55873]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:56:56 np0005625203.localdomain sudo[55873]: pam_unix(sudo:session): session closed for user root
Feb 20 07:56:56 np0005625203.localdomain sshd[55919]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:56:56 np0005625203.localdomain sudo[55921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 07:56:56 np0005625203.localdomain sudo[55921]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:56:56 np0005625203.localdomain sudo[55921]: pam_unix(sudo:session): session closed for user root
Feb 20 07:56:59 np0005625203.localdomain sshd[55936]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:56:59 np0005625203.localdomain sshd[55936]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 07:56:59 np0005625203.localdomain sshd[55919]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 36120 ssh2 [preauth]
Feb 20 07:56:59 np0005625203.localdomain sshd[55919]: Disconnecting authenticating user root 185.246.128.171 port 36120: Too many authentication failures [preauth]
Feb 20 07:57:00 np0005625203.localdomain sshd[55938]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:57:01 np0005625203.localdomain sshd[55940]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:57:02 np0005625203.localdomain sshd[55938]: Invalid user student from 103.171.84.20 port 38364
Feb 20 07:57:02 np0005625203.localdomain sshd[55938]: Received disconnect from 103.171.84.20 port 38364:11: Bye Bye [preauth]
Feb 20 07:57:02 np0005625203.localdomain sshd[55938]: Disconnected from invalid user student 103.171.84.20 port 38364 [preauth]
Feb 20 07:57:07 np0005625203.localdomain sshd[55940]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 2994 ssh2 [preauth]
Feb 20 07:57:07 np0005625203.localdomain sshd[55940]: Disconnecting authenticating user root 185.246.128.171 port 2994: Too many authentication failures [preauth]
Feb 20 07:57:09 np0005625203.localdomain sshd[55942]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:57:13 np0005625203.localdomain sshd[55944]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:57:13 np0005625203.localdomain sshd[55944]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 07:57:17 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 07:57:17 np0005625203.localdomain podman[55946]: 2026-02-20 07:57:17.745108682 +0000 UTC m=+0.068438935 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, architecture=x86_64, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, io.buildah.version=1.41.5, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, 
org.opencontainers.image.created=2026-01-12T22:10:14Z, container_name=metrics_qdr, release=1766032510, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd)
Feb 20 07:57:17 np0005625203.localdomain podman[55946]: 2026-02-20 07:57:17.9433514 +0000 UTC m=+0.266681653 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-type=git, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, container_name=metrics_qdr, io.openshift.expose-services=, release=1766032510, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-qdrouterd)
Feb 20 07:57:17 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 07:57:18 np0005625203.localdomain sshd[55942]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 50833 ssh2 [preauth]
Feb 20 07:57:18 np0005625203.localdomain sshd[55942]: Disconnecting authenticating user root 185.246.128.171 port 50833: Too many authentication failures [preauth]
Feb 20 07:57:19 np0005625203.localdomain sshd[55976]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:57:23 np0005625203.localdomain sshd[55976]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 46293 ssh2 [preauth]
Feb 20 07:57:23 np0005625203.localdomain sshd[55976]: Disconnecting authenticating user root 185.246.128.171 port 46293: Too many authentication failures [preauth]
Feb 20 07:57:24 np0005625203.localdomain sshd[55978]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:57:27 np0005625203.localdomain sshd[55980]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:57:28 np0005625203.localdomain sshd[55980]: Invalid user student from 40.81.244.142 port 54780
Feb 20 07:57:28 np0005625203.localdomain sshd[55980]: Received disconnect from 40.81.244.142 port 54780:11: Bye Bye [preauth]
Feb 20 07:57:28 np0005625203.localdomain sshd[55980]: Disconnected from invalid user student 40.81.244.142 port 54780 [preauth]
Feb 20 07:57:28 np0005625203.localdomain sshd[55978]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 13103 ssh2 [preauth]
Feb 20 07:57:28 np0005625203.localdomain sshd[55978]: Disconnecting authenticating user root 185.246.128.171 port 13103: Too many authentication failures [preauth]
Feb 20 07:57:29 np0005625203.localdomain sshd[55982]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:57:35 np0005625203.localdomain sshd[55982]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 42670 ssh2 [preauth]
Feb 20 07:57:35 np0005625203.localdomain sshd[55982]: Disconnecting authenticating user root 185.246.128.171 port 42670: Too many authentication failures [preauth]
Feb 20 07:57:36 np0005625203.localdomain sshd[55985]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:57:43 np0005625203.localdomain sshd[55987]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:57:43 np0005625203.localdomain sshd[55987]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 07:57:44 np0005625203.localdomain sshd[55985]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 16609 ssh2 [preauth]
Feb 20 07:57:44 np0005625203.localdomain sshd[55985]: Disconnecting authenticating user root 185.246.128.171 port 16609: Too many authentication failures [preauth]
Feb 20 07:57:45 np0005625203.localdomain sshd[55989]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:57:47 np0005625203.localdomain sshd[55989]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 5846 ssh2 [preauth]
Feb 20 07:57:47 np0005625203.localdomain sshd[55989]: Disconnecting authenticating user root 185.246.128.171 port 5846: Too many authentication failures [preauth]
Feb 20 07:57:48 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 07:57:48 np0005625203.localdomain podman[55991]: 2026-02-20 07:57:48.7602175 +0000 UTC m=+0.079475632 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.41.5, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-qdrouterd, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vendor=Red Hat, Inc., config_id=tripleo_step1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 07:57:48 np0005625203.localdomain podman[55991]: 2026-02-20 07:57:48.950167509 +0000 UTC m=+0.269425661 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, distribution-scope=public, version=17.1.13, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, build-date=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, name=rhosp-rhel9/openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, url=https://www.redhat.com, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd)
Feb 20 07:57:48 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 07:57:49 np0005625203.localdomain sshd[56021]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:57:56 np0005625203.localdomain sudo[56023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 07:57:56 np0005625203.localdomain sudo[56023]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:57:56 np0005625203.localdomain sudo[56023]: pam_unix(sudo:session): session closed for user root
Feb 20 07:57:56 np0005625203.localdomain sudo[56038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 07:57:56 np0005625203.localdomain sudo[56038]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:57:57 np0005625203.localdomain sudo[56038]: pam_unix(sudo:session): session closed for user root
Feb 20 07:57:57 np0005625203.localdomain sudo[56085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 07:57:57 np0005625203.localdomain sudo[56085]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:57:57 np0005625203.localdomain sudo[56085]: pam_unix(sudo:session): session closed for user root
Feb 20 07:58:00 np0005625203.localdomain sshd[56021]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 29218 ssh2 [preauth]
Feb 20 07:58:00 np0005625203.localdomain sshd[56021]: Disconnecting authenticating user root 185.246.128.171 port 29218: Too many authentication failures [preauth]
Feb 20 07:58:02 np0005625203.localdomain anacron[19053]: Job `cron.weekly' started
Feb 20 07:58:02 np0005625203.localdomain anacron[19053]: Job `cron.weekly' terminated
Feb 20 07:58:02 np0005625203.localdomain sshd[56102]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:58:03 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 18 pg[2.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [4,5,3] r=0 lpr=18 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:04 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 19 pg[2.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [4,5,3] r=0 lpr=18 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:06 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 20 pg[3.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [5,4,0] r=1 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:07 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 21 pg[4.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [3,4,5] r=1 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:09 np0005625203.localdomain sshd[56102]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 40837 ssh2 [preauth]
Feb 20 07:58:09 np0005625203.localdomain sshd[56102]: Disconnecting authenticating user root 185.246.128.171 port 40837: Too many authentication failures [preauth]
Feb 20 07:58:09 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 23 pg[5.0( empty local-lis/les=0/0 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [2,3,4] r=2 lpr=23 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:11 np0005625203.localdomain sshd[56104]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:58:19 np0005625203.localdomain sshd[56104]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 27893 ssh2 [preauth]
Feb 20 07:58:19 np0005625203.localdomain sshd[56104]: Disconnecting authenticating user root 185.246.128.171 port 27893: Too many authentication failures [preauth]
Feb 20 07:58:19 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 07:58:19 np0005625203.localdomain podman[56106]: 2026-02-20 07:58:19.354283014 +0000 UTC m=+0.100626495 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, version=17.1.13, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:14Z, build-date=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., distribution-scope=public, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team)
Feb 20 07:58:19 np0005625203.localdomain podman[56106]: 2026-02-20 07:58:19.529387335 +0000 UTC m=+0.275730756 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.13, vendor=Red Hat, Inc., managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, container_name=metrics_qdr, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.5, release=1766032510, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, url=https://www.redhat.com, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z)
Feb 20 07:58:19 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 07:58:19 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 31 pg[3.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=10.857001305s) [5,4,0] r=1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 active pruub 1118.581420898s@ mbc={}] start_peering_interval up [5,4,0] -> [5,4,0], acting [5,4,0] -> [5,4,0], acting_primary 5 -> 5, up_primary 5 -> 5, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:19 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 31 pg[2.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=31 pruub=8.639783859s) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 active pruub 1116.364257812s@ mbc={}] start_peering_interval up [4,5,3] -> [4,5,3], acting [4,5,3] -> [4,5,3], acting_primary 4 -> 4, up_primary 4 -> 4, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:19 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 31 pg[3.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=31 pruub=10.853908539s) [5,4,0] r=1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1118.581420898s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:19 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 31 pg[2.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=31 pruub=8.639783859s) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown pruub 1116.364257812s@ mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[3.19( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.19( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[3.18( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.17( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[3.17( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[3.16( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.16( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.15( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.14( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[3.14( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.13( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[3.15( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.12( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[3.13( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[3.12( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[3.10( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.11( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.18( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[3.11( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.10( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.f( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[3.e( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.d( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[3.f( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.c( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.e( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.b( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[3.a( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[3.c( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[3.d( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.a( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[3.b( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[3.1( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.1( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.6( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[3.7( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.7( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[3.3( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.2( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[3.6( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.3( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[3.2( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.4( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[3.5( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[3.4( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.8( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.5( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[3.8( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.9( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[3.1b( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.1a( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[3.1a( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.1b( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[3.1d( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[3.1c( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.1c( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.1d( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[3.9( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.1e( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[3.1e( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[3.1f( empty local-lis/les=20/21 n=0 ec=31/20 lis/c=20/20 les/c/f=21/21/0 sis=31) [5,4,0] r=1 lpr=31 pi=[20,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.1f( empty local-lis/les=18/19 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.0( empty local-lis/les=31/32 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.1f( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.1c( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.9( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.1d( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.8( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.1b( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.4( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.1a( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.5( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.1e( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.7( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.6( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.2( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.b( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.3( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.1( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.a( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.c( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.d( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.e( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.11( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.f( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.13( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.12( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.14( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.10( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.15( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.17( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.16( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.18( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:20 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 32 pg[2.19( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=18/18 les/c/f=19/19/0 sis=31) [4,5,3] r=0 lpr=31 pi=[18,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:21 np0005625203.localdomain sshd[56136]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:58:21 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 33 pg[5.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=33 pruub=12.289926529s) [2,3,4] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 active pruub 1122.057617188s@ mbc={}] start_peering_interval up [2,3,4] -> [2,3,4], acting [2,3,4] -> [2,3,4], acting_primary 2 -> 2, up_primary 2 -> 2, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:21 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 33 pg[4.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=33 pruub=9.739995956s) [3,4,5] r=1 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 active pruub 1119.507690430s@ mbc={}] start_peering_interval up [3,4,5] -> [3,4,5], acting [3,4,5] -> [3,4,5], acting_primary 3 -> 3, up_primary 3 -> 3, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:21 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 33 pg[5.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=33 pruub=12.287155151s) [2,3,4] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1122.057617188s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:21 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 33 pg[4.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=33 pruub=9.736588478s) [3,4,5] r=1 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1119.507690430s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[5.18( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[4.18( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=1 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[5.19( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[4.19( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=1 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[5.1a( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[4.1a( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=1 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[4.1b( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=1 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[5.1b( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[5.1c( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[5.1d( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[4.1d( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=1 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[4.f( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=1 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[4.1c( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=1 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[4.e( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=1 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[5.f( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[4.3( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=1 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[5.e( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[5.2( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[4.2( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=1 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[5.3( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[4.5( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=1 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[5.4( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[4.4( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=1 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[5.5( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[4.1( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=1 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[4.7( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=1 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[5.6( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[5.1( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[4.6( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=1 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[5.7( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[5.d( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[4.c( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=1 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[5.c( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[4.a( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=1 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[5.b( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[4.b( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=1 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[5.a( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[4.8( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=1 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[5.8( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[5.9( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[4.9( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=1 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[4.d( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=1 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[4.16( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=1 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[5.17( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[4.17( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=1 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[5.16( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[5.15( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[4.15( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=1 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[4.14( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=1 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[5.14( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[4.12( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=1 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[5.13( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[5.12( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[4.10( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=1 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[4.13( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=1 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[4.11( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=1 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[5.1f( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[5.10( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[4.1e( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=1 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[4.1f( empty local-lis/les=21/22 n=0 ec=33/21 lis/c=21/21 les/c/f=22/22/0 sis=33) [3,4,5] r=1 lpr=33 pi=[21,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[5.1e( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 34 pg[5.11( empty local-lis/les=23/24 n=0 ec=33/23 lis/c=23/23 les/c/f=24/24/0 sis=33) [2,3,4] r=2 lpr=33 pi=[23,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:25 np0005625203.localdomain sshd[56138]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:58:26 np0005625203.localdomain sshd[56138]: Invalid user claude from 102.211.152.28 port 60392
Feb 20 07:58:26 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 35 pg[6.0( empty local-lis/les=0/0 n=0 ec=35/35 lis/c=0/0 les/c/f=0/0/0 sis=35) [0,4,2] r=1 lpr=35 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:26 np0005625203.localdomain sshd[56138]: Received disconnect from 102.211.152.28 port 60392:11: Bye Bye [preauth]
Feb 20 07:58:26 np0005625203.localdomain sshd[56138]: Disconnected from invalid user claude 102.211.152.28 port 60392 [preauth]
Feb 20 07:58:26 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 36 pg[7.0( empty local-lis/les=0/0 n=0 ec=36/36 lis/c=0/0 les/c/f=0/0/0 sis=36) [1,5,3] r=0 lpr=36 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:26 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Feb 20 07:58:26 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Feb 20 07:58:27 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 37 pg[7.0( empty local-lis/les=36/37 n=0 ec=36/36 lis/c=0/0 les/c/f=0/0/0 sis=36) [1,5,3] r=0 lpr=36 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:29 np0005625203.localdomain sshd[56140]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:58:29 np0005625203.localdomain sshd[56140]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 07:58:30 np0005625203.localdomain sshd[56136]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 24514 ssh2 [preauth]
Feb 20 07:58:30 np0005625203.localdomain sshd[56136]: Disconnecting authenticating user root 185.246.128.171 port 24514: Too many authentication failures [preauth]
Feb 20 07:58:30 np0005625203.localdomain sshd[56142]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[3.1e( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.194569588s) [3,2,4] r=2 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.770385742s@ mbc={}] start_peering_interval up [5,4,0] -> [3,2,4], acting [5,4,0] -> [3,2,4], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.1f( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.181655884s) [0,4,2] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.757568359s@ mbc={}] start_peering_interval up [4,5,3] -> [0,4,2], acting [4,5,3] -> [0,4,2], acting_primary 4 -> 0, up_primary 4 -> 0, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[3.1e( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.194468498s) [3,2,4] r=2 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.770385742s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.19( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.649108887s) [2,3,1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.225219727s@ mbc={}] start_peering_interval up [3,4,5] -> [2,3,1], acting [3,4,5] -> [2,3,1], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.1f( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.181522369s) [0,4,2] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.757568359s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.18( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.644793510s) [4,2,3] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.220825195s@ mbc={}] start_peering_interval up [2,3,4] -> [4,2,3], acting [2,3,4] -> [4,2,3], acting_primary 2 -> 4, up_primary 2 -> 4, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.19( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.649011612s) [2,3,1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.225219727s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.18( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.644793510s) [4,2,3] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown pruub 1128.220825195s@ mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.18( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.642694473s) [4,3,5] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.218994141s@ mbc={}] start_peering_interval up [3,4,5] -> [4,3,5], acting [3,4,5] -> [4,3,5], acting_primary 3 -> 4, up_primary 3 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[3.1f( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.193499565s) [0,1,5] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.769897461s@ mbc={}] start_peering_interval up [5,4,0] -> [0,1,5], acting [5,4,0] -> [0,1,5], acting_primary 5 -> 0, up_primary 5 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.1e( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.190934181s) [3,5,4] r=2 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.767333984s@ mbc={}] start_peering_interval up [4,5,3] -> [3,5,4], acting [4,5,3] -> [3,5,4], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.1d( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.189816475s) [2,4,0] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.766357422s@ mbc={}] start_peering_interval up [4,5,3] -> [2,4,0], acting [4,5,3] -> [2,4,0], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.1d( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.189766884s) [2,4,0] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.766357422s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.19( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.644563675s) [0,1,5] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.221191406s@ mbc={}] start_peering_interval up [2,3,4] -> [0,1,5], acting [2,3,4] -> [0,1,5], acting_primary 2 -> 0, up_primary 2 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[3.1c( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.187785149s) [1,3,2] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.764648438s@ mbc={}] start_peering_interval up [5,4,0] -> [1,3,2], acting [5,4,0] -> [1,3,2], acting_primary 5 -> 1, up_primary 5 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.1e( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.190800667s) [3,5,4] r=2 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.767333984s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[3.1f( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.193103790s) [0,1,5] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.769897461s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[3.1c( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.187748909s) [1,3,2] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.764648438s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.18( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.642694473s) [4,3,5] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown pruub 1128.218994141s@ mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.19( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.644521713s) [0,1,5] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.221191406s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.1b( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.647731781s) [4,3,5] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.225341797s@ mbc={}] start_peering_interval up [3,4,5] -> [4,3,5], acting [3,4,5] -> [4,3,5], acting_primary 3 -> 4, up_primary 3 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[3.1d( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.192208290s) [5,4,3] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.769653320s@ mbc={}] start_peering_interval up [5,4,0] -> [5,4,3], acting [5,4,0] -> [5,4,3], acting_primary 5 -> 5, up_primary 5 -> 5, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[3.1d( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.192071915s) [5,4,3] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.769653320s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.1b( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.647731781s) [4,3,5] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown pruub 1128.225341797s@ mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.1a( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.641180992s) [4,3,5] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.218994141s@ mbc={}] start_peering_interval up [3,4,5] -> [4,3,5], acting [3,4,5] -> [4,3,5], acting_primary 3 -> 4, up_primary 3 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.1a( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.642642975s) [2,4,3] r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.220458984s@ mbc={}] start_peering_interval up [2,3,4] -> [2,4,3], acting [2,3,4] -> [2,4,3], acting_primary 2 -> 2, up_primary 2 -> 2, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.1a( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.641180992s) [4,3,5] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown pruub 1128.218994141s@ mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.1a( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.642579079s) [2,4,3] r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.220458984s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.1c( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.188207626s) [2,1,0] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.766113281s@ mbc={}] start_peering_interval up [4,5,3] -> [2,1,0], acting [4,5,3] -> [2,1,0], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.1b( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.188536644s) [5,4,3] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.766723633s@ mbc={}] start_peering_interval up [4,5,3] -> [5,4,3], acting [4,5,3] -> [5,4,3], acting_primary 4 -> 5, up_primary 4 -> 5, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.1b( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.640793800s) [1,0,2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.218872070s@ mbc={}] start_peering_interval up [2,3,4] -> [1,0,2], acting [2,3,4] -> [1,0,2], acting_primary 2 -> 1, up_primary 2 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.1c( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.188001633s) [2,1,0] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.766113281s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.1b( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.188500404s) [5,4,3] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.766723633s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.1b( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.640569687s) [1,0,2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.218872070s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.1d( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.646961212s) [2,1,3] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.225341797s@ mbc={}] start_peering_interval up [3,4,5] -> [2,1,3], acting [3,4,5] -> [2,1,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.1d( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.646938324s) [2,1,3] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.225341797s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[3.1a( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.184556007s) [5,3,4] r=2 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.763061523s@ mbc={}] start_peering_interval up [5,4,0] -> [5,3,4], acting [5,4,0] -> [5,3,4], acting_primary 5 -> 5, up_primary 5 -> 5, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[3.1a( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.184508324s) [5,3,4] r=2 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.763061523s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.1c( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.642455101s) [4,2,0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.221191406s@ mbc={}] start_peering_interval up [2,3,4] -> [4,2,0], acting [2,3,4] -> [4,2,0], acting_primary 2 -> 4, up_primary 2 -> 4, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.1a( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.188042641s) [1,5,3] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.766967773s@ mbc={}] start_peering_interval up [4,5,3] -> [1,5,3], acting [4,5,3] -> [1,5,3], acting_primary 4 -> 1, up_primary 4 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.1c( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.640199661s) [2,3,4] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.219116211s@ mbc={}] start_peering_interval up [3,4,5] -> [2,3,4], acting [3,4,5] -> [2,3,4], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[3.1b( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.184652328s) [5,4,3] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.763549805s@ mbc={}] start_peering_interval up [5,4,0] -> [5,4,3], acting [5,4,0] -> [5,4,3], acting_primary 5 -> 5, up_primary 5 -> 5, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.1a( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.187980652s) [1,5,3] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.766967773s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.1d( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.641639709s) [3,1,5] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.220703125s@ mbc={}] start_peering_interval up [2,3,4] -> [3,1,5], acting [2,3,4] -> [3,1,5], acting_primary 2 -> 3, up_primary 2 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.1c( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.640155792s) [2,3,4] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.219116211s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.1d( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.641561508s) [3,1,5] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.220703125s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[3.8( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.189859390s) [2,0,4] r=2 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.769042969s@ mbc={}] start_peering_interval up [5,4,0] -> [2,0,4], acting [5,4,0] -> [2,0,4], acting_primary 5 -> 2, up_primary 5 -> 2, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[3.8( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.189832687s) [2,0,4] r=2 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.769042969s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[3.1b( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.184556961s) [5,4,3] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.763549805s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.e( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.641098976s) [2,0,4] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.220458984s@ mbc={}] start_peering_interval up [2,3,4] -> [2,0,4], acting [2,3,4] -> [2,0,4], acting_primary 2 -> 2, up_primary 2 -> 2, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.f( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.639822960s) [3,2,4] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.219116211s@ mbc={}] start_peering_interval up [3,4,5] -> [3,2,4], acting [3,4,5] -> [3,2,4], acting_primary 3 -> 3, up_primary 3 -> 3, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.e( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.641079903s) [2,0,4] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.220458984s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.1c( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.642455101s) [4,2,0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown pruub 1128.221191406s@ mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[3.9( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.190207481s) [5,1,3] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.769897461s@ mbc={}] start_peering_interval up [5,4,0] -> [5,1,3], acting [5,4,0] -> [5,1,3], acting_primary 5 -> 5, up_primary 5 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[3.9( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.190184593s) [5,1,3] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.769897461s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.9( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.186599731s) [3,4,5] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.766113281s@ mbc={}] start_peering_interval up [4,5,3] -> [3,4,5], acting [4,5,3] -> [3,4,5], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.f( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.639778137s) [3,2,4] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.219116211s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.e( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.645234108s) [4,5,0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.224975586s@ mbc={}] start_peering_interval up [3,4,5] -> [4,5,0], acting [3,4,5] -> [4,5,0], acting_primary 3 -> 4, up_primary 3 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.f( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.639004707s) [4,2,3] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.218750000s@ mbc={}] start_peering_interval up [2,3,4] -> [4,2,3], acting [2,3,4] -> [4,2,3], acting_primary 2 -> 4, up_primary 2 -> 4, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.9( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.186542511s) [3,4,5] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.766113281s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.8( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.186729431s) [1,2,0] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.766357422s@ mbc={}] start_peering_interval up [4,5,3] -> [1,2,0], acting [4,5,3] -> [1,2,0], acting_primary 4 -> 1, up_primary 4 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.f( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.639004707s) [4,2,3] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown pruub 1128.218750000s@ mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.e( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.645234108s) [4,5,0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown pruub 1128.224975586s@ mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[3.4( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.189088821s) [3,2,1] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.769042969s@ mbc={}] start_peering_interval up [5,4,0] -> [3,2,1], acting [5,4,0] -> [3,2,1], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.8( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.186485291s) [1,2,0] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.766357422s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[3.4( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.189046860s) [3,2,1] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.769042969s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.5( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.186900139s) [2,0,1] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.766967773s@ mbc={}] start_peering_interval up [4,5,3] -> [2,0,1], acting [4,5,3] -> [2,0,1], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.2( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.638749123s) [4,0,2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.218994141s@ mbc={}] start_peering_interval up [2,3,4] -> [4,0,2], acting [2,3,4] -> [4,0,2], acting_primary 2 -> 4, up_primary 2 -> 4, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.2( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.638749123s) [4,0,2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown pruub 1128.218994141s@ mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.5( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.186783791s) [2,0,1] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.766967773s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.4( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.186411858s) [3,2,1] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.766723633s@ mbc={}] start_peering_interval up [4,5,3] -> [3,2,1], acting [4,5,3] -> [3,2,1], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[3.5( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.188619614s) [4,3,5] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.769042969s@ mbc={}] start_peering_interval up [5,4,0] -> [4,3,5], acting [5,4,0] -> [4,3,5], acting_primary 5 -> 4, up_primary 5 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.3( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.644308090s) [2,4,3] r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.224243164s@ mbc={}] start_peering_interval up [3,4,5] -> [2,4,3], acting [3,4,5] -> [2,4,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.4( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.186331749s) [3,2,1] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.766723633s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.3( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.643892288s) [2,4,3] r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.224243164s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[3.5( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.188619614s) [4,3,5] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown pruub 1132.769042969s@ mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.2( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.644365311s) [2,1,3] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.224975586s@ mbc={}] start_peering_interval up [3,4,5] -> [2,1,3], acting [3,4,5] -> [2,1,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.2( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.644330025s) [2,1,3] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.224975586s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[3.2( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.188308716s) [3,5,1] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.768920898s@ mbc={}] start_peering_interval up [5,4,0] -> [3,5,1], acting [5,4,0] -> [3,5,1], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[3.2( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.188279152s) [3,5,1] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.768920898s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.3( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.638098717s) [0,1,2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.218750000s@ mbc={}] start_peering_interval up [2,3,4] -> [0,1,2], acting [2,3,4] -> [0,1,2], acting_primary 2 -> 0, up_primary 2 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.3( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.187048912s) [4,3,5] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.767822266s@ mbc={}] start_peering_interval up [4,5,3] -> [4,3,5], acting [4,5,3] -> [4,3,5], acting_primary 4 -> 4, up_primary 4 -> 4, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.3( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.187048912s) [4,3,5] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown pruub 1132.767822266s@ mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.5( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.645341873s) [1,5,0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.226196289s@ mbc={}] start_peering_interval up [3,4,5] -> [1,5,0], acting [3,4,5] -> [1,5,0], acting_primary 3 -> 1, up_primary 3 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.3( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.637910843s) [0,1,2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.218750000s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[3.3( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.187682152s) [4,0,5] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.768676758s@ mbc={}] start_peering_interval up [5,4,0] -> [4,0,5], acting [5,4,0] -> [4,0,5], acting_primary 5 -> 4, up_primary 5 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.4( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.636773109s) [5,3,4] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.217895508s@ mbc={}] start_peering_interval up [2,3,4] -> [5,3,4], acting [2,3,4] -> [5,3,4], acting_primary 2 -> 5, up_primary 2 -> 5, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.2( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.186655998s) [1,0,2] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.767700195s@ mbc={}] start_peering_interval up [4,5,3] -> [1,0,2], acting [4,5,3] -> [1,0,2], acting_primary 4 -> 1, up_primary 4 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.5( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.645301819s) [1,5,0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.226196289s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.4( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.643848419s) [0,5,1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.225097656s@ mbc={}] start_peering_interval up [3,4,5] -> [0,5,1], acting [3,4,5] -> [0,5,1], acting_primary 3 -> 0, up_primary 3 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[3.3( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.187682152s) [4,0,5] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown pruub 1132.768676758s@ mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.2( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.186430931s) [1,0,2] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.767700195s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[3.6( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.187658310s) [0,4,5] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.769042969s@ mbc={}] start_peering_interval up [5,4,0] -> [0,4,5], acting [5,4,0] -> [0,4,5], acting_primary 5 -> 0, up_primary 5 -> 0, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[3.6( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.187622070s) [0,4,5] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.769042969s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.7( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.186033249s) [4,2,3] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.767456055s@ mbc={}] start_peering_interval up [4,5,3] -> [4,2,3], acting [4,5,3] -> [4,2,3], acting_primary 4 -> 4, up_primary 4 -> 4, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.4( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.636336327s) [5,3,4] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.217895508s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.7( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.186033249s) [4,2,3] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown pruub 1132.767456055s@ mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.4( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.643515587s) [0,5,1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.225097656s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.1( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.643873215s) [2,1,0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.225585938s@ mbc={}] start_peering_interval up [3,4,5] -> [2,1,0], acting [3,4,5] -> [2,1,0], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.5( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.636807442s) [0,4,5] r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.218505859s@ mbc={}] start_peering_interval up [2,3,4] -> [0,4,5], acting [2,3,4] -> [0,4,5], acting_primary 2 -> 0, up_primary 2 -> 0, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.6( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.185752869s) [3,2,4] r=2 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.767456055s@ mbc={}] start_peering_interval up [4,5,3] -> [3,2,4], acting [4,5,3] -> [3,2,4], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.1( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.643794060s) [2,1,0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.225585938s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.1( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.639472961s) [4,3,5] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.221191406s@ mbc={}] start_peering_interval up [2,3,4] -> [4,3,5], acting [2,3,4] -> [4,3,5], acting_primary 2 -> 4, up_primary 2 -> 4, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.6( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.185686111s) [3,2,4] r=2 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.767456055s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.1( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.639472961s) [4,3,5] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown pruub 1128.221191406s@ mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.1( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.186185837s) [3,5,4] r=2 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.768188477s@ mbc={}] start_peering_interval up [4,5,3] -> [3,5,4], acting [4,5,3] -> [3,5,4], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.1( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.186135292s) [3,5,4] r=2 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.768188477s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.5( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.636739731s) [0,4,5] r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.218505859s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.7( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.643452644s) [0,5,4] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.225585938s@ mbc={}] start_peering_interval up [3,4,5] -> [0,5,4], acting [3,4,5] -> [0,5,4], acting_primary 3 -> 0, up_primary 3 -> 0, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.6( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.635712624s) [3,1,2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.218017578s@ mbc={}] start_peering_interval up [2,3,4] -> [3,1,2], acting [2,3,4] -> [3,1,2], acting_primary 2 -> 3, up_primary 2 -> 3, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.6( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.635690689s) [3,1,2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.218017578s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.7( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.643415451s) [0,5,4] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.225585938s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[3.1( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.182558060s) [0,4,2] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.765014648s@ mbc={}] start_peering_interval up [5,4,0] -> [0,4,2], acting [5,4,0] -> [0,4,2], acting_primary 5 -> 0, up_primary 5 -> 0, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[3.1( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.182532310s) [0,4,2] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.765014648s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[3.7( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.188015938s) [3,5,4] r=2 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.770507812s@ mbc={}] start_peering_interval up [5,4,0] -> [3,5,4], acting [5,4,0] -> [3,5,4], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.7( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.635990143s) [4,3,5] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.218505859s@ mbc={}] start_peering_interval up [2,3,4] -> [4,3,5], acting [2,3,4] -> [4,3,5], acting_primary 2 -> 4, up_primary 2 -> 4, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[3.b( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.187335968s) [3,4,5] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.770019531s@ mbc={}] start_peering_interval up [5,4,0] -> [3,4,5], acting [5,4,0] -> [3,4,5], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[3.7( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.187925339s) [3,5,4] r=2 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.770507812s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.7( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.635990143s) [4,3,5] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown pruub 1128.218505859s@ mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.a( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.185552597s) [2,3,1] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.768188477s@ mbc={}] start_peering_interval up [4,5,3] -> [2,3,1], acting [4,5,3] -> [2,3,1], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.a( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.185524940s) [2,3,1] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.768188477s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[3.b( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.187277794s) [3,4,5] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.770019531s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.d( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.634833336s) [2,4,0] r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.217651367s@ mbc={}] start_peering_interval up [2,3,4] -> [2,4,0], acting [2,3,4] -> [2,4,0], acting_primary 2 -> 2, up_primary 2 -> 2, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.6( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.642875671s) [5,3,4] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.225830078s@ mbc={}] start_peering_interval up [3,4,5] -> [5,3,4], acting [3,4,5] -> [5,3,4], acting_primary 3 -> 5, up_primary 3 -> 5, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.d( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.634805679s) [2,4,0] r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.217651367s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[3.a( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.186345100s) [4,3,5] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.769287109s@ mbc={}] start_peering_interval up [5,4,0] -> [4,3,5], acting [5,4,0] -> [4,3,5], acting_primary 5 -> 4, up_primary 5 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.6( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.642844200s) [5,3,4] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.225830078s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.c( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.634259224s) [3,4,2] r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.217407227s@ mbc={}] start_peering_interval up [2,3,4] -> [3,4,2], acting [2,3,4] -> [3,4,2], acting_primary 2 -> 3, up_primary 2 -> 3, role 2 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.b( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.184579849s) [5,1,0] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.767700195s@ mbc={}] start_peering_interval up [4,5,3] -> [5,1,0], acting [4,5,3] -> [5,1,0], acting_primary 4 -> 5, up_primary 4 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.d( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.642557144s) [4,2,3] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.225708008s@ mbc={}] start_peering_interval up [3,4,5] -> [4,2,3], acting [3,4,5] -> [4,2,3], acting_primary 3 -> 4, up_primary 3 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.c( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.634230614s) [3,4,2] r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.217407227s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.d( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.642557144s) [4,2,3] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown pruub 1128.225708008s@ mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.b( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.184516907s) [5,1,0] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.767700195s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 39 pg[4.5( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [1,5,0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 39 pg[3.1c( empty local-lis/les=0/0 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39) [1,3,2] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 39 pg[5.1b( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39) [1,0,2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 39 pg[2.1a( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39) [1,5,3] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 39 pg[2.8( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39) [1,2,0] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 39 pg[2.2( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39) [1,0,2] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 39 pg[5.9( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39) [1,5,0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.c( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.184884071s) [2,0,1] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.768310547s@ mbc={}] start_peering_interval up [4,5,3] -> [2,0,1], acting [4,5,3] -> [2,0,1], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.c( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.184843063s) [2,0,1] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.768310547s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.c( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.642359734s) [4,3,2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.225219727s@ mbc={}] start_peering_interval up [3,4,5] -> [4,3,2], acting [3,4,5] -> [4,3,2], acting_primary 3 -> 4, up_primary 3 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[3.a( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.186345100s) [4,3,5] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown pruub 1132.769287109s@ mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.c( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.642359734s) [4,3,2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown pruub 1128.225219727s@ mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.b( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.633872986s) [5,0,4] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.217407227s@ mbc={}] start_peering_interval up [2,3,4] -> [5,0,4], acting [2,3,4] -> [5,0,4], acting_primary 2 -> 5, up_primary 2 -> 5, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.a( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.641941071s) [1,0,2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.225585938s@ mbc={}] start_peering_interval up [3,4,5] -> [1,0,2], acting [3,4,5] -> [1,0,2], acting_primary 3 -> 1, up_primary 3 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.b( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.633849144s) [5,0,4] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.217407227s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.a( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.641887665s) [1,0,2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.225585938s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[3.c( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.180875778s) [4,3,5] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.764648438s@ mbc={}] start_peering_interval up [5,4,0] -> [4,3,5], acting [5,4,0] -> [4,3,5], acting_primary 5 -> 4, up_primary 5 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.b( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.642257690s) [0,1,5] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.226074219s@ mbc={}] start_peering_interval up [3,4,5] -> [0,1,5], acting [3,4,5] -> [0,1,5], acting_primary 3 -> 0, up_primary 3 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[3.c( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.180875778s) [4,3,5] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown pruub 1132.764648438s@ mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.d( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.184597015s) [5,1,3] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.768432617s@ mbc={}] start_peering_interval up [4,5,3] -> [5,1,3], acting [4,5,3] -> [5,1,3], acting_primary 4 -> 5, up_primary 4 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.b( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.642235756s) [0,1,5] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.226074219s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.d( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.184536934s) [5,1,3] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.768432617s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.a( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.633146286s) [0,2,4] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.217041016s@ mbc={}] start_peering_interval up [2,3,4] -> [0,2,4], acting [2,3,4] -> [0,2,4], acting_primary 2 -> 0, up_primary 2 -> 0, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[3.f( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.180260658s) [1,5,0] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.764282227s@ mbc={}] start_peering_interval up [5,4,0] -> [1,5,0], acting [5,4,0] -> [1,5,0], acting_primary 5 -> 1, up_primary 5 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.a( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.633117676s) [0,2,4] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.217041016s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[3.f( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.180211067s) [1,5,0] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.764282227s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[3.d( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.180075645s) [1,2,3] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.764282227s@ mbc={}] start_peering_interval up [5,4,0] -> [1,2,3], acting [5,4,0] -> [1,2,3], acting_primary 5 -> 1, up_primary 5 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[3.d( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.180044174s) [1,2,3] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.764282227s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.8( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.641438484s) [5,4,3] r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.225830078s@ mbc={}] start_peering_interval up [3,4,5] -> [5,4,3], acting [3,4,5] -> [5,4,3], acting_primary 3 -> 5, up_primary 3 -> 5, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.8( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.641391754s) [5,4,3] r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.225830078s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.9( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.634038925s) [1,5,0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.218505859s@ mbc={}] start_peering_interval up [2,3,4] -> [1,5,0], acting [2,3,4] -> [1,5,0], acting_primary 2 -> 1, up_primary 2 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.e( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.184100151s) [3,2,4] r=2 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.768676758s@ mbc={}] start_peering_interval up [4,5,3] -> [3,2,4], acting [4,5,3] -> [3,2,4], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.9( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.634014130s) [1,5,0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.218505859s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.e( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.184068680s) [3,2,4] r=2 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.768676758s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[3.e( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.179741859s) [2,4,0] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.764404297s@ mbc={}] start_peering_interval up [5,4,0] -> [2,4,0], acting [5,4,0] -> [2,4,0], acting_primary 5 -> 2, up_primary 5 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.f( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.184202194s) [2,4,0] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.768920898s@ mbc={}] start_peering_interval up [4,5,3] -> [2,4,0], acting [4,5,3] -> [2,4,0], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.9( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.640985489s) [5,0,1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.225708008s@ mbc={}] start_peering_interval up [3,4,5] -> [5,0,1], acting [3,4,5] -> [5,0,1], acting_primary 3 -> 5, up_primary 3 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[3.e( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.179696083s) [2,4,0] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.764404297s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.f( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.184167862s) [2,4,0] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.768920898s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.9( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.640958786s) [5,0,1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.225708008s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.8( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.632354736s) [2,0,1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.217285156s@ mbc={}] start_peering_interval up [2,3,4] -> [2,0,1], acting [2,3,4] -> [2,0,1], acting_primary 2 -> 2, up_primary 2 -> 2, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.10( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.184286118s) [2,0,4] r=2 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.769287109s@ mbc={}] start_peering_interval up [4,5,3] -> [2,0,4], acting [4,5,3] -> [2,0,4], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.16( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.640989304s) [0,4,5] r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.226074219s@ mbc={}] start_peering_interval up [3,4,5] -> [0,4,5], acting [3,4,5] -> [0,4,5], acting_primary 3 -> 0, up_primary 3 -> 0, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.10( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.184236526s) [2,0,4] r=2 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.769287109s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.16( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.640960693s) [0,4,5] r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.226074219s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.11( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.183555603s) [4,3,2] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.768920898s@ mbc={}] start_peering_interval up [4,5,3] -> [4,3,2], acting [4,5,3] -> [4,3,2], acting_primary 4 -> 4, up_primary 4 -> 4, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[3.10( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.178833961s) [1,5,3] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.764160156s@ mbc={}] start_peering_interval up [5,4,0] -> [1,5,3], acting [5,4,0] -> [1,5,3], acting_primary 5 -> 1, up_primary 5 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.17( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.631132126s) [3,5,4] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.216430664s@ mbc={}] start_peering_interval up [2,3,4] -> [3,5,4], acting [2,3,4] -> [3,5,4], acting_primary 2 -> 3, up_primary 2 -> 3, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.11( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.183555603s) [4,3,2] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown pruub 1132.768920898s@ mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.17( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.631014824s) [3,5,4] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.216430664s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[3.10( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.178782463s) [1,5,3] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.764160156s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.8( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.632329941s) [2,0,1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.217285156s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.17( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.640562057s) [3,1,5] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.226196289s@ mbc={}] start_peering_interval up [3,4,5] -> [3,1,5], acting [3,4,5] -> [3,1,5], acting_primary 3 -> 3, up_primary 3 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.12( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.183459282s) [5,3,1] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.769165039s@ mbc={}] start_peering_interval up [4,5,3] -> [5,3,1], acting [4,5,3] -> [5,3,1], acting_primary 4 -> 5, up_primary 4 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.17( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.640532494s) [3,1,5] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.226196289s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.12( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.183423042s) [5,3,1] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.769165039s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 39 pg[3.f( empty local-lis/les=0/0 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39) [1,5,0] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.16( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.631292343s) [1,3,2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.217041016s@ mbc={}] start_peering_interval up [2,3,4] -> [1,3,2], acting [2,3,4] -> [1,3,2], acting_primary 2 -> 1, up_primary 2 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.16( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.631268501s) [1,3,2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.217041016s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.15( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.630942345s) [4,3,5] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.216918945s@ mbc={}] start_peering_interval up [2,3,4] -> [4,3,5], acting [2,3,4] -> [4,3,5], acting_primary 2 -> 4, up_primary 2 -> 4, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.15( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.630942345s) [4,3,5] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown pruub 1128.216918945s@ mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.14( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.639879227s) [5,0,1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.225952148s@ mbc={}] start_peering_interval up [3,4,5] -> [5,0,1], acting [3,4,5] -> [5,0,1], acting_primary 3 -> 5, up_primary 3 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.14( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.639696121s) [5,0,1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.225952148s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.13( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.182800293s) [2,4,3] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.769165039s@ mbc={}] start_peering_interval up [4,5,3] -> [2,4,3], acting [4,5,3] -> [2,4,3], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.15( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.639708519s) [5,3,1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.226074219s@ mbc={}] start_peering_interval up [3,4,5] -> [5,3,1], acting [3,4,5] -> [5,3,1], acting_primary 3 -> 5, up_primary 3 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.13( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.182766914s) [2,4,3] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.769165039s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 39 pg[3.d( empty local-lis/les=0/0 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39) [1,2,3] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.15( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.639687538s) [5,3,1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.226074219s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[3.13( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.177671432s) [1,3,2] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.764038086s@ mbc={}] start_peering_interval up [5,4,0] -> [1,3,2], acting [5,4,0] -> [1,3,2], acting_primary 5 -> 1, up_primary 5 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.14( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.630213737s) [3,2,4] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.216796875s@ mbc={}] start_peering_interval up [2,3,4] -> [3,2,4], acting [2,3,4] -> [3,2,4], acting_primary 2 -> 3, up_primary 2 -> 3, role 2 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[3.13( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.177596092s) [1,3,2] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.764038086s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[3.12( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.178101540s) [0,4,5] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.764648438s@ mbc={}] start_peering_interval up [5,4,0] -> [0,4,5], acting [5,4,0] -> [0,4,5], acting_primary 5 -> 0, up_primary 5 -> 0, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.14( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.630190849s) [3,2,4] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.216796875s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[3.12( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.178071022s) [0,4,5] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.764648438s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.12( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.639580727s) [0,5,4] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.226318359s@ mbc={}] start_peering_interval up [3,4,5] -> [0,5,4], acting [3,4,5] -> [0,5,4], acting_primary 3 -> 0, up_primary 3 -> 0, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[3.15( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.176850319s) [2,1,0] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.763549805s@ mbc={}] start_peering_interval up [5,4,0] -> [2,1,0], acting [5,4,0] -> [2,1,0], acting_primary 5 -> 2, up_primary 5 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.12( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.639556885s) [0,5,4] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.226318359s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[3.15( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.176804543s) [2,1,0] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.763549805s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.13( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.629228592s) [5,0,1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.216186523s@ mbc={}] start_peering_interval up [2,3,4] -> [5,0,1], acting [2,3,4] -> [5,0,1], acting_primary 2 -> 5, up_primary 2 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 39 pg[3.10( empty local-lis/les=0/0 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39) [1,5,3] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.13( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.629201889s) [5,0,1] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.216186523s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[3.14( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.176167488s) [1,2,0] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.763183594s@ mbc={}] start_peering_interval up [5,4,0] -> [1,2,0], acting [5,4,0] -> [1,2,0], acting_primary 5 -> 1, up_primary 5 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.14( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.182219505s) [4,2,0] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.769287109s@ mbc={}] start_peering_interval up [4,5,3] -> [4,2,0], acting [4,5,3] -> [4,2,0], acting_primary 4 -> 4, up_primary 4 -> 4, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[3.14( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.176084518s) [1,2,0] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.763183594s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.15( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.182109833s) [5,0,4] r=2 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.769287109s@ mbc={}] start_peering_interval up [4,5,3] -> [5,0,4], acting [4,5,3] -> [5,0,4], acting_primary 4 -> 5, up_primary 4 -> 5, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.14( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.182219505s) [4,2,0] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown pruub 1132.769287109s@ mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.15( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.182072639s) [5,0,4] r=2 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.769287109s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.13( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.638875961s) [4,2,3] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.226196289s@ mbc={}] start_peering_interval up [3,4,5] -> [4,2,3], acting [3,4,5] -> [4,2,3], acting_primary 3 -> 4, up_primary 3 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.12( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.628442764s) [5,1,3] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.215698242s@ mbc={}] start_peering_interval up [2,3,4] -> [5,1,3], acting [2,3,4] -> [5,1,3], acting_primary 2 -> 5, up_primary 2 -> 5, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.13( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.638875961s) [4,2,3] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown pruub 1128.226196289s@ mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[3.17( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.175517082s) [0,4,5] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.763061523s@ mbc={}] start_peering_interval up [5,4,0] -> [0,4,5], acting [5,4,0] -> [0,4,5], acting_primary 5 -> 0, up_primary 5 -> 0, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[3.17( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.175490379s) [0,4,5] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.763061523s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.12( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.628163338s) [5,1,3] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.215698242s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.16( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.181369781s) [1,2,0] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.769287109s@ mbc={}] start_peering_interval up [4,5,3] -> [1,2,0], acting [4,5,3] -> [1,2,0], acting_primary 4 -> 1, up_primary 4 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.16( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.181311607s) [1,2,0] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.769287109s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.11( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.627820969s) [1,2,0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.215942383s@ mbc={}] start_peering_interval up [2,3,4] -> [1,2,0], acting [2,3,4] -> [1,2,0], acting_primary 2 -> 1, up_primary 2 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.10( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.638394356s) [3,2,4] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.226440430s@ mbc={}] start_peering_interval up [3,4,5] -> [3,2,4], acting [3,4,5] -> [3,2,4], acting_primary 3 -> 3, up_primary 3 -> 3, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.10( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.638133049s) [3,2,4] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.226440430s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.11( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.627635956s) [1,2,0] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.215942383s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[3.16( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.174831390s) [1,3,5] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.763427734s@ mbc={}] start_peering_interval up [5,4,0] -> [1,3,5], acting [5,4,0] -> [1,3,5], acting_primary 5 -> 1, up_primary 5 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[3.16( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.174745560s) [1,3,5] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.763427734s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 39 pg[4.a( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [1,0,2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 39 pg[5.16( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39) [1,3,2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.17( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.179963112s) [1,5,3] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.769287109s@ mbc={}] start_peering_interval up [4,5,3] -> [1,5,3], acting [4,5,3] -> [1,5,3], acting_primary 4 -> 1, up_primary 4 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.17( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.179833412s) [1,5,3] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.769287109s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 39 pg[3.13( empty local-lis/les=0/0 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39) [1,3,2] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.10( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.625532150s) [4,5,0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.215576172s@ mbc={}] start_peering_interval up [2,3,4] -> [4,5,0], acting [2,3,4] -> [4,5,0], acting_primary 2 -> 4, up_primary 2 -> 4, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.1f( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.625372887s) [4,5,3] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.215698242s@ mbc={}] start_peering_interval up [2,3,4] -> [4,5,3], acting [2,3,4] -> [4,5,3], acting_primary 2 -> 4, up_primary 2 -> 4, role 2 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.1f( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.625372887s) [4,5,3] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown pruub 1128.215698242s@ mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.18( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.179310799s) [5,1,3] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.769653320s@ mbc={}] start_peering_interval up [4,5,3] -> [5,1,3], acting [4,5,3] -> [5,1,3], acting_primary 4 -> 5, up_primary 4 -> 5, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.10( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.625532150s) [4,5,0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown pruub 1128.215576172s@ mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[3.19( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.172361374s) [0,1,2] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.763061523s@ mbc={}] start_peering_interval up [5,4,0] -> [0,1,2], acting [5,4,0] -> [0,1,2], acting_primary 5 -> 0, up_primary 5 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 39 pg[3.14( empty local-lis/les=0/0 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39) [1,2,0] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.11( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.635293007s) [3,4,2] r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.226318359s@ mbc={}] start_peering_interval up [3,4,5] -> [3,4,2], acting [3,4,5] -> [3,4,2], acting_primary 3 -> 3, up_primary 3 -> 3, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[3.19( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.172244072s) [0,1,2] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.763061523s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.18( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.178434372s) [5,1,3] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.769653320s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.1f( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.634658813s) [2,4,3] r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.226318359s@ mbc={}] start_peering_interval up [3,4,5] -> [2,4,3], acting [3,4,5] -> [2,4,3], acting_primary 3 -> 2, up_primary 3 -> 2, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[3.18( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.171387672s) [3,2,1] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.763061523s@ mbc={}] start_peering_interval up [5,4,0] -> [3,2,1], acting [5,4,0] -> [3,2,1], acting_primary 5 -> 3, up_primary 5 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.1e( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.634668350s) [0,4,5] r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.226318359s@ mbc={}] start_peering_interval up [3,4,5] -> [0,4,5], acting [3,4,5] -> [0,4,5], acting_primary 3 -> 0, up_primary 3 -> 0, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[3.18( empty local-lis/les=31/32 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.171321869s) [3,2,1] r=-1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.763061523s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.1e( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.634626389s) [0,4,5] r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.226318359s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 39 pg[5.11( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39) [1,2,0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.1e( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.624648094s) [0,1,2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active pruub 1128.216552734s@ mbc={}] start_peering_interval up [2,3,4] -> [0,1,2], acting [2,3,4] -> [0,1,2], acting_primary 2 -> 0, up_primary 2 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.19( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.177775383s) [3,4,2] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active pruub 1132.769897461s@ mbc={}] start_peering_interval up [4,5,3] -> [3,4,2], acting [4,5,3] -> [3,4,2], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.11( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.635244370s) [3,4,2] r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.226318359s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[4.1f( empty local-lis/les=33/34 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.634112358s) [2,4,3] r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.226318359s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 39 pg[2.16( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39) [1,2,0] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[2.19( empty local-lis/les=31/32 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39 pruub=13.177384377s) [3,4,2] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1132.769897461s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 39 pg[3.16( empty local-lis/les=0/0 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39) [1,3,5] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 39 pg[5.1e( empty local-lis/les=33/34 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39 pruub=8.624612808s) [0,1,2] r=-1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1128.216552734s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:31 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 39 pg[2.17( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39) [1,5,3] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 39 pg[5.1e( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39) [0,1,2] r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 39 pg[3.18( empty local-lis/les=0/0 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39) [3,2,1] r=2 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 39 pg[5.19( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39) [0,1,5] r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 39 pg[3.1f( empty local-lis/les=0/0 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39) [0,1,5] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 39 pg[5.3( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39) [0,1,2] r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 39 pg[2.4( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39) [3,2,1] r=2 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 39 pg[4.17( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [3,1,5] r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 39 pg[3.4( empty local-lis/les=0/0 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39) [3,2,1] r=2 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 39 pg[3.2( empty local-lis/les=0/0 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39) [3,5,1] r=2 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 39 pg[5.1d( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39) [3,1,5] r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 39 pg[4.4( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [0,5,1] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 39 pg[4.b( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [0,1,5] r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 39 pg[5.6( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39) [3,1,2] r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 39 pg[3.19( empty local-lis/les=0/0 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39) [0,1,2] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 39 pg[3.9( empty local-lis/les=0/0 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39) [5,1,3] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 39 pg[2.d( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39) [5,1,3] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 39 pg[2.b( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39) [5,1,0] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 39 pg[5.8( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39) [2,0,1] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 39 pg[4.9( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [5,0,1] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 39 pg[4.19( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [2,3,1] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 39 pg[2.c( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39) [2,0,1] r=2 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 39 pg[2.12( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39) [5,3,1] r=2 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 39 pg[2.1c( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39) [2,1,0] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 39 pg[4.14( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [5,0,1] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 39 pg[5.13( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39) [5,0,1] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 39 pg[4.15( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [5,3,1] r=2 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 39 pg[4.2( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [2,1,3] r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 39 pg[2.5( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39) [2,0,1] r=2 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 39 pg[5.12( empty local-lis/les=0/0 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39) [5,1,3] r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 39 pg[4.1( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [2,1,0] r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 39 pg[2.a( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39) [2,3,1] r=2 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 39 pg[2.18( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39) [5,1,3] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 39 pg[3.15( empty local-lis/les=0/0 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39) [2,1,0] r=1 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 39 pg[4.1d( empty local-lis/les=0/0 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [2,1,3] r=1 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 40 pg[4.18( empty local-lis/les=39/40 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [4,3,5] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 40 pg[4.1b( empty local-lis/les=39/40 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [4,3,5] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 40 pg[2.1a( empty local-lis/les=39/40 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39) [1,5,3] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 40 pg[3.5( empty local-lis/les=39/40 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39) [4,3,5] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 40 pg[3.3( empty local-lis/les=39/40 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39) [4,0,5] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 40 pg[4.1a( empty local-lis/les=39/40 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [4,3,5] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 40 pg[4.e( empty local-lis/les=39/40 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [4,5,0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 40 pg[5.1( empty local-lis/les=39/40 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39) [4,3,5] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 40 pg[5.7( empty local-lis/les=39/40 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39) [4,3,5] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 40 pg[4.5( empty local-lis/les=39/40 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [1,5,0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 40 pg[2.3( empty local-lis/les=39/40 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39) [4,3,5] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 40 pg[3.c( empty local-lis/les=39/40 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39) [4,3,5] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 40 pg[3.f( empty local-lis/les=39/40 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39) [1,5,0] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 40 pg[3.a( empty local-lis/les=39/40 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39) [4,3,5] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 40 pg[5.15( empty local-lis/les=39/40 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39) [4,3,5] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 40 pg[5.10( empty local-lis/les=39/40 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39) [4,5,0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 40 pg[5.9( empty local-lis/les=39/40 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39) [1,5,0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 40 pg[5.1f( empty local-lis/les=39/40 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39) [4,5,3] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 40 pg[3.10( empty local-lis/les=39/40 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39) [1,5,3] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 40 pg[3.16( empty local-lis/les=39/40 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39) [1,3,5] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 40 pg[2.17( empty local-lis/les=39/40 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39) [1,5,3] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 40 pg[3.1c( empty local-lis/les=39/40 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39) [1,3,2] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 40 pg[4.d( empty local-lis/les=39/40 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [4,2,3] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 40 pg[5.18( empty local-lis/les=39/40 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39) [4,2,3] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 40 pg[2.2( empty local-lis/les=39/40 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39) [1,0,2] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 40 pg[3.d( empty local-lis/les=39/40 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39) [1,2,3] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 40 pg[5.1b( empty local-lis/les=39/40 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39) [1,0,2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 40 pg[4.a( empty local-lis/les=39/40 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [1,0,2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 40 pg[2.8( empty local-lis/les=39/40 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39) [1,2,0] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 40 pg[3.14( empty local-lis/les=39/40 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39) [1,2,0] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 40 pg[5.11( empty local-lis/les=39/40 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39) [1,2,0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 40 pg[3.13( empty local-lis/les=39/40 n=0 ec=31/20 lis/c=31/31 les/c/f=32/32/0 sis=39) [1,3,2] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 40 pg[5.16( empty local-lis/les=39/40 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39) [1,3,2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 40 pg[2.16( empty local-lis/les=39/40 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39) [1,2,0] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 40 pg[5.2( empty local-lis/les=39/40 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39) [4,0,2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 40 pg[2.7( empty local-lis/les=39/40 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39) [4,2,3] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 40 pg[4.c( empty local-lis/les=39/40 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [4,3,2] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 40 pg[5.f( empty local-lis/les=39/40 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39) [4,2,3] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 40 pg[2.14( empty local-lis/les=39/40 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39) [4,2,0] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 40 pg[4.13( empty local-lis/les=39/40 n=0 ec=33/21 lis/c=33/33 les/c/f=34/34/0 sis=39) [4,2,3] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 40 pg[5.1c( empty local-lis/les=39/40 n=0 ec=33/23 lis/c=33/33 les/c/f=34/34/0 sis=39) [4,2,0] r=0 lpr=39 pi=[33,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 40 pg[2.11( empty local-lis/les=39/40 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=39) [4,3,2] r=0 lpr=39 pi=[31,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:58:32 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 4.18 deep-scrub starts
Feb 20 07:58:33 np0005625203.localdomain sudo[56145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 07:58:33 np0005625203.localdomain sudo[56145]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:58:33 np0005625203.localdomain sudo[56145]: pam_unix(sudo:session): session closed for user root
Feb 20 07:58:35 np0005625203.localdomain sudo[56160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 07:58:35 np0005625203.localdomain sudo[56160]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:58:35 np0005625203.localdomain sudo[56160]: pam_unix(sudo:session): session closed for user root
Feb 20 07:58:35 np0005625203.localdomain sudo[56175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 07:58:35 np0005625203.localdomain sudo[56175]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:58:35 np0005625203.localdomain sudo[56175]: pam_unix(sudo:session): session closed for user root
Feb 20 07:58:36 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Feb 20 07:58:36 np0005625203.localdomain sshd[56142]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 15873 ssh2 [preauth]
Feb 20 07:58:36 np0005625203.localdomain sshd[56142]: Disconnecting authenticating user root 185.246.128.171 port 15873: Too many authentication failures [preauth]
Feb 20 07:58:37 np0005625203.localdomain sshd[56190]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:58:38 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Feb 20 07:58:39 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Feb 20 07:58:40 np0005625203.localdomain sshd[56192]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:58:40 np0005625203.localdomain sshd[56190]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 58386 ssh2 [preauth]
Feb 20 07:58:40 np0005625203.localdomain sshd[56190]: Disconnecting authenticating user root 185.246.128.171 port 58386: Too many authentication failures [preauth]
Feb 20 07:58:41 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Feb 20 07:58:41 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Feb 20 07:58:41 np0005625203.localdomain sshd[56192]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 07:58:41 np0005625203.localdomain sshd[56194]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:58:42 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Feb 20 07:58:42 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Feb 20 07:58:44 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 5.18 deep-scrub starts
Feb 20 07:58:44 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 5.18 deep-scrub ok
Feb 20 07:58:45 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Feb 20 07:58:45 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Feb 20 07:58:45 np0005625203.localdomain sshd[56194]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 14569 ssh2 [preauth]
Feb 20 07:58:45 np0005625203.localdomain sshd[56194]: Disconnecting authenticating user root 185.246.128.171 port 14569: Too many authentication failures [preauth]
Feb 20 07:58:47 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Feb 20 07:58:47 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Feb 20 07:58:49 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Feb 20 07:58:49 np0005625203.localdomain sshd[56196]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:58:49 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 07:58:49 np0005625203.localdomain systemd[1]: tmp-crun.BaRXMv.mount: Deactivated successfully.
Feb 20 07:58:49 np0005625203.localdomain podman[56197]: 2026-02-20 07:58:49.772085089 +0000 UTC m=+0.092051330 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.13, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, architecture=x86_64, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd, release=1766032510, container_name=metrics_qdr, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, managed_by=tripleo_ansible, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 07:58:49 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Feb 20 07:58:49 np0005625203.localdomain podman[56197]: 2026-02-20 07:58:49.987334689 +0000 UTC m=+0.307300950 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, vendor=Red Hat, Inc., batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, version=17.1.13, io.buildah.version=1.41.5, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, release=1766032510, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd)
Feb 20 07:58:49 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 07:58:51 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 5.f scrub starts
Feb 20 07:58:51 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 5.f scrub ok
Feb 20 07:58:54 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Feb 20 07:58:54 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Feb 20 07:58:56 np0005625203.localdomain sshd[56196]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 63236 ssh2 [preauth]
Feb 20 07:58:56 np0005625203.localdomain sshd[56196]: Disconnecting authenticating user root 185.246.128.171 port 63236: Too many authentication failures [preauth]
Feb 20 07:58:56 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Feb 20 07:58:56 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Feb 20 07:58:57 np0005625203.localdomain sshd[56229]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:58:59 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 4.e scrub starts
Feb 20 07:58:59 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 4.e scrub ok
Feb 20 07:58:59 np0005625203.localdomain sudo[56244]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dlbpzonlkxxssvnnzfukrhvylhsmryrb ; /usr/bin/python3
Feb 20 07:58:59 np0005625203.localdomain sudo[56244]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:59:00 np0005625203.localdomain python3[56246]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:59:00 np0005625203.localdomain sudo[56244]: pam_unix(sudo:session): session closed for user root
Feb 20 07:59:00 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Feb 20 07:59:00 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Feb 20 07:59:01 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Feb 20 07:59:01 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Feb 20 07:59:01 np0005625203.localdomain sudo[56260]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cbxcreonxsbdhweeqmbuxbvfczaulwuz ; /usr/bin/python3
Feb 20 07:59:01 np0005625203.localdomain sudo[56260]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:59:02 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Feb 20 07:59:02 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Feb 20 07:59:02 np0005625203.localdomain python3[56262]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:59:02 np0005625203.localdomain sudo[56260]: pam_unix(sudo:session): session closed for user root
Feb 20 07:59:04 np0005625203.localdomain sudo[56276]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jpkwjedjhpntmfcjfujvyxrroiexonwl ; /usr/bin/python3
Feb 20 07:59:04 np0005625203.localdomain sudo[56276]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:59:04 np0005625203.localdomain python3[56278]: ansible-file Invoked with path=/var/lib/tripleo-config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:59:04 np0005625203.localdomain sudo[56276]: pam_unix(sudo:session): session closed for user root
Feb 20 07:59:04 np0005625203.localdomain sshd[56279]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:59:04 np0005625203.localdomain sshd[56229]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 49217 ssh2 [preauth]
Feb 20 07:59:04 np0005625203.localdomain sshd[56229]: Disconnecting authenticating user root 185.246.128.171 port 49217: Too many authentication failures [preauth]
Feb 20 07:59:05 np0005625203.localdomain sshd[56279]: Invalid user deployuser from 123.204.132.127 port 42704
Feb 20 07:59:06 np0005625203.localdomain sshd[56279]: Received disconnect from 123.204.132.127 port 42704:11: Bye Bye [preauth]
Feb 20 07:59:06 np0005625203.localdomain sshd[56279]: Disconnected from invalid user deployuser 123.204.132.127 port 42704 [preauth]
Feb 20 07:59:06 np0005625203.localdomain sshd[56281]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:59:06 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Feb 20 07:59:06 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Feb 20 07:59:06 np0005625203.localdomain sudo[56328]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ctspceaqrywwntwixyqueorofdpzzcuw ; /usr/bin/python3
Feb 20 07:59:06 np0005625203.localdomain sudo[56328]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:59:07 np0005625203.localdomain python3[56330]: ansible-ansible.legacy.stat Invoked with path=/var/lib/tripleo-config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:59:07 np0005625203.localdomain sudo[56328]: pam_unix(sudo:session): session closed for user root
Feb 20 07:59:07 np0005625203.localdomain sudo[56371]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fqlkkrvkxeulgxsruxvnaeymxbcuyspa ; /usr/bin/python3
Feb 20 07:59:07 np0005625203.localdomain sudo[56371]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:59:07 np0005625203.localdomain python3[56373]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771574346.7348402-92373-123084541566303/source dest=/var/lib/tripleo-config/ceph/ceph.client.openstack.keyring mode=600 _original_basename=ceph.client.openstack.keyring follow=False checksum=8e2004121a34320613d32710ae37702da8d027e6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:59:07 np0005625203.localdomain sudo[56371]: pam_unix(sudo:session): session closed for user root
Feb 20 07:59:09 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Feb 20 07:59:09 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Feb 20 07:59:10 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 3.f scrub starts
Feb 20 07:59:10 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 3.f scrub ok
Feb 20 07:59:10 np0005625203.localdomain sshd[56281]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 36974 ssh2 [preauth]
Feb 20 07:59:10 np0005625203.localdomain sshd[56281]: Disconnecting authenticating user root 185.246.128.171 port 36974: Too many authentication failures [preauth]
Feb 20 07:59:10 np0005625203.localdomain sshd[56388]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:59:11 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 3.d deep-scrub starts
Feb 20 07:59:11 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 3.d deep-scrub ok
Feb 20 07:59:11 np0005625203.localdomain sshd[56388]: Invalid user ubuntu from 187.87.206.21 port 37466
Feb 20 07:59:11 np0005625203.localdomain sshd[56388]: Received disconnect from 187.87.206.21 port 37466:11: Bye Bye [preauth]
Feb 20 07:59:11 np0005625203.localdomain sshd[56388]: Disconnected from invalid user ubuntu 187.87.206.21 port 37466 [preauth]
Feb 20 07:59:11 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Feb 20 07:59:11 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Feb 20 07:59:12 np0005625203.localdomain sudo[56435]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iaaqmoqskokbcxrfialnnuovhuswjbmh ; /usr/bin/python3
Feb 20 07:59:12 np0005625203.localdomain sudo[56435]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:59:12 np0005625203.localdomain python3[56437]: ansible-ansible.legacy.stat Invoked with path=/var/lib/tripleo-config/ceph/ceph.client.manila.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:59:12 np0005625203.localdomain sudo[56435]: pam_unix(sudo:session): session closed for user root
Feb 20 07:59:12 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Feb 20 07:59:12 np0005625203.localdomain sshd[56479]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:59:12 np0005625203.localdomain sudo[56478]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cwfsevvdemiapxelvoflamvcbfeldlbn ; /usr/bin/python3
Feb 20 07:59:12 np0005625203.localdomain sudo[56478]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:59:12 np0005625203.localdomain python3[56481]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771574351.8958402-92373-13756879537995/source dest=/var/lib/tripleo-config/ceph/ceph.client.manila.keyring mode=600 _original_basename=ceph.client.manila.keyring follow=False checksum=417007d20895a54571330144b727b714177f3d13 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:59:12 np0005625203.localdomain sudo[56478]: pam_unix(sudo:session): session closed for user root
Feb 20 07:59:12 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 4.c scrub starts
Feb 20 07:59:13 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 4.c scrub ok
Feb 20 07:59:13 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Feb 20 07:59:13 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Feb 20 07:59:14 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Feb 20 07:59:14 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Feb 20 07:59:15 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Feb 20 07:59:15 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Feb 20 07:59:15 np0005625203.localdomain sshd[56497]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:59:16 np0005625203.localdomain sshd[56497]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 07:59:17 np0005625203.localdomain sudo[56544]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hdfozvedbbbozxuvzstxkvdbufixngza ; /usr/bin/python3
Feb 20 07:59:17 np0005625203.localdomain sudo[56544]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:59:17 np0005625203.localdomain python3[56546]: ansible-ansible.legacy.stat Invoked with path=/var/lib/tripleo-config/ceph/ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:59:17 np0005625203.localdomain sudo[56544]: pam_unix(sudo:session): session closed for user root
Feb 20 07:59:17 np0005625203.localdomain sudo[56587]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pjfzldkxzhnlvakefwshhihpglcqkled ; /usr/bin/python3
Feb 20 07:59:17 np0005625203.localdomain sudo[56587]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:59:17 np0005625203.localdomain python3[56589]: ansible-ansible.legacy.copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771574357.0129983-92373-5269894360124/source dest=/var/lib/tripleo-config/ceph/ceph.conf mode=644 _original_basename=ceph.conf follow=False checksum=2a03ad5f1837679340274b70e67e768ad4c81335 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:59:17 np0005625203.localdomain sudo[56587]: pam_unix(sudo:session): session closed for user root
Feb 20 07:59:18 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Feb 20 07:59:18 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Feb 20 07:59:18 np0005625203.localdomain sshd[56479]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 9620 ssh2 [preauth]
Feb 20 07:59:18 np0005625203.localdomain sshd[56479]: Disconnecting authenticating user root 185.246.128.171 port 9620: Too many authentication failures [preauth]
Feb 20 07:59:19 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Feb 20 07:59:19 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Feb 20 07:59:20 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 2.2 deep-scrub starts
Feb 20 07:59:20 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 2.2 deep-scrub ok
Feb 20 07:59:20 np0005625203.localdomain sshd[56604]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:59:20 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 07:59:20 np0005625203.localdomain podman[56605]: 2026-02-20 07:59:20.767604765 +0000 UTC m=+0.081273968 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=metrics_qdr, name=rhosp-rhel9/openstack-qdrouterd, vendor=Red Hat, Inc., release=1766032510, version=17.1.13, architecture=x86_64, tcib_managed=true, vcs-type=git, build-date=2026-01-12T22:10:14Z, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee)
Feb 20 07:59:20 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 3.a deep-scrub starts
Feb 20 07:59:20 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 3.a deep-scrub ok
Feb 20 07:59:20 np0005625203.localdomain podman[56605]: 2026-02-20 07:59:20.955450069 +0000 UTC m=+0.269119262 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:14Z, tcib_managed=true, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, architecture=x86_64, container_name=metrics_qdr, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:14Z)
Feb 20 07:59:20 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 07:59:21 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Feb 20 07:59:21 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Feb 20 07:59:21 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 3.c scrub starts
Feb 20 07:59:21 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 3.c scrub ok
Feb 20 07:59:22 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Feb 20 07:59:22 np0005625203.localdomain sudo[56680]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ucblrcslhfzfvhkgjyqtwdaeocsxlrka ; /usr/bin/python3
Feb 20 07:59:22 np0005625203.localdomain sudo[56680]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:59:22 np0005625203.localdomain python3[56682]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:59:22 np0005625203.localdomain sudo[56680]: pam_unix(sudo:session): session closed for user root
Feb 20 07:59:22 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 4.d scrub starts
Feb 20 07:59:22 np0005625203.localdomain sudo[56725]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rfsdhmvvrhojsldegzrgdlcrbykhbfay ; /usr/bin/python3
Feb 20 07:59:22 np0005625203.localdomain sudo[56725]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:59:22 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 4.d scrub ok
Feb 20 07:59:23 np0005625203.localdomain python3[56727]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771574362.3596203-92794-56598495708650/source _original_basename=tmpgzuc1ulc follow=False checksum=f17091ee142621a3c8290c8c96b5b52d67b3a864 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:59:23 np0005625203.localdomain sudo[56725]: pam_unix(sudo:session): session closed for user root
Feb 20 07:59:23 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 43 pg[6.0( empty local-lis/les=35/36 n=0 ec=35/35 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=15.168952942s) [0,4,2] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 1186.569335938s@ mbc={}] start_peering_interval up [0,4,2] -> [0,4,2], acting [0,4,2] -> [0,4,2], acting_primary 0 -> 0, up_primary 0 -> 0, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:59:23 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 43 pg[7.0( v 40'39 (0'0,40'39] local-lis/les=36/37 n=22 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=43 pruub=8.505630493s) [1,5,3] r=0 lpr=43 pi=[36,43)/1 crt=40'39 lcod 38'38 mlcod 38'38 active pruub 1184.244873047s@ mbc={}] start_peering_interval up [1,5,3] -> [1,5,3], acting [1,5,3] -> [1,5,3], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:59:23 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 43 pg[6.0( empty local-lis/les=35/36 n=0 ec=35/35 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=15.166546822s) [0,4,2] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1186.569335938s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:23 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 43 pg[7.0( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=36/37 n=1 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=43 pruub=8.505630493s) [1,5,3] r=0 lpr=43 pi=[36,43)/1 crt=40'39 lcod 38'38 mlcod 0'0 unknown pruub 1184.244873047s@ mbc={}] state<Start>: transitioning to Primary
Feb 20 07:59:24 np0005625203.localdomain sudo[56787]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vqigizebvkpyqhaapnzakdsnatsbnyui ; /usr/bin/python3
Feb 20 07:59:24 np0005625203.localdomain sudo[56787]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:59:24 np0005625203.localdomain python3[56789]: ansible-ansible.legacy.stat Invoked with path=/usr/local/sbin/containers-tmpwatch follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:59:24 np0005625203.localdomain sudo[56787]: pam_unix(sudo:session): session closed for user root
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 44 pg[6.1b( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 44 pg[6.1a( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 44 pg[6.19( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 44 pg[6.18( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 44 pg[6.1f( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 44 pg[6.d( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 44 pg[6.c( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 44 pg[6.1( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 44 pg[6.7( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 44 pg[6.1e( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 44 pg[6.6( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 44 pg[6.3( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 44 pg[6.5( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 44 pg[6.2( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 44 pg[6.4( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 44 pg[6.e( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 44 pg[6.f( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 44 pg[6.8( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 44 pg[6.9( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 44 pg[6.a( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 44 pg[6.b( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 44 pg[6.15( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 44 pg[6.16( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 44 pg[6.17( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 44 pg[6.10( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 44 pg[6.11( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 44 pg[6.12( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 44 pg[6.13( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 44 pg[6.1c( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 44 pg[6.1d( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 44 pg[6.14( empty local-lis/les=35/36 n=0 ec=43/35 lis/c=35/35 les/c/f=36/36/0 sis=43) [0,4,2] r=1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 44 pg[7.5( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=36/37 n=2 ec=43/36 lis/c=36/36 les/c/f=37/37/0 sis=43) [1,5,3] r=0 lpr=43 pi=[36,43)/1 crt=40'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 44 pg[7.3( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=36/37 n=2 ec=43/36 lis/c=36/36 les/c/f=37/37/0 sis=43) [1,5,3] r=0 lpr=43 pi=[36,43)/1 crt=40'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 44 pg[7.4( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=36/37 n=2 ec=43/36 lis/c=36/36 les/c/f=37/37/0 sis=43) [1,5,3] r=0 lpr=43 pi=[36,43)/1 crt=40'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 44 pg[7.a( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=36/37 n=1 ec=43/36 lis/c=36/36 les/c/f=37/37/0 sis=43) [1,5,3] r=0 lpr=43 pi=[36,43)/1 crt=40'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 44 pg[7.f( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=36/37 n=1 ec=43/36 lis/c=36/36 les/c/f=37/37/0 sis=43) [1,5,3] r=0 lpr=43 pi=[36,43)/1 crt=40'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 44 pg[7.8( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=36/37 n=1 ec=43/36 lis/c=36/36 les/c/f=37/37/0 sis=43) [1,5,3] r=0 lpr=43 pi=[36,43)/1 crt=40'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 44 pg[7.7( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=36/37 n=1 ec=43/36 lis/c=36/36 les/c/f=37/37/0 sis=43) [1,5,3] r=0 lpr=43 pi=[36,43)/1 crt=40'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 44 pg[7.b( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=36/37 n=1 ec=43/36 lis/c=36/36 les/c/f=37/37/0 sis=43) [1,5,3] r=0 lpr=43 pi=[36,43)/1 crt=40'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 44 pg[7.9( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=36/37 n=1 ec=43/36 lis/c=36/36 les/c/f=37/37/0 sis=43) [1,5,3] r=0 lpr=43 pi=[36,43)/1 crt=40'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 44 pg[7.c( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=36/37 n=1 ec=43/36 lis/c=36/36 les/c/f=37/37/0 sis=43) [1,5,3] r=0 lpr=43 pi=[36,43)/1 crt=40'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 44 pg[7.6( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=36/37 n=2 ec=43/36 lis/c=36/36 les/c/f=37/37/0 sis=43) [1,5,3] r=0 lpr=43 pi=[36,43)/1 crt=40'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 44 pg[7.1( v 40'39 (0'0,40'39] local-lis/les=36/37 n=2 ec=43/36 lis/c=36/36 les/c/f=37/37/0 sis=43) [1,5,3] r=0 lpr=43 pi=[36,43)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 44 pg[7.2( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=36/37 n=2 ec=43/36 lis/c=36/36 les/c/f=37/37/0 sis=43) [1,5,3] r=0 lpr=43 pi=[36,43)/1 crt=40'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 44 pg[7.d( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=36/37 n=1 ec=43/36 lis/c=36/36 les/c/f=37/37/0 sis=43) [1,5,3] r=0 lpr=43 pi=[36,43)/1 crt=40'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 44 pg[7.e( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=36/37 n=1 ec=43/36 lis/c=36/36 les/c/f=37/37/0 sis=43) [1,5,3] r=0 lpr=43 pi=[36,43)/1 crt=40'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 44 pg[7.0( v 40'39 (0'0,40'39] local-lis/les=43/44 n=1 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=43) [1,5,3] r=0 lpr=43 pi=[36,43)/1 crt=40'39 lcod 38'38 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 44 pg[7.1( v 40'39 (0'0,40'39] local-lis/les=43/44 n=2 ec=43/36 lis/c=36/36 les/c/f=37/37/0 sis=43) [1,5,3] r=0 lpr=43 pi=[36,43)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 44 pg[7.c( v 40'39 (0'0,40'39] local-lis/les=43/44 n=1 ec=43/36 lis/c=36/36 les/c/f=37/37/0 sis=43) [1,5,3] r=0 lpr=43 pi=[36,43)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 44 pg[7.2( v 40'39 (0'0,40'39] local-lis/les=43/44 n=2 ec=43/36 lis/c=36/36 les/c/f=37/37/0 sis=43) [1,5,3] r=0 lpr=43 pi=[36,43)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 44 pg[7.d( v 40'39 (0'0,40'39] local-lis/les=43/44 n=1 ec=43/36 lis/c=36/36 les/c/f=37/37/0 sis=43) [1,5,3] r=0 lpr=43 pi=[36,43)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 44 pg[7.3( v 40'39 (0'0,40'39] local-lis/les=43/44 n=2 ec=43/36 lis/c=36/36 les/c/f=37/37/0 sis=43) [1,5,3] r=0 lpr=43 pi=[36,43)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 44 pg[7.5( v 40'39 (0'0,40'39] local-lis/les=43/44 n=2 ec=43/36 lis/c=36/36 les/c/f=37/37/0 sis=43) [1,5,3] r=0 lpr=43 pi=[36,43)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 44 pg[7.4( v 40'39 (0'0,40'39] local-lis/les=43/44 n=2 ec=43/36 lis/c=36/36 les/c/f=37/37/0 sis=43) [1,5,3] r=0 lpr=43 pi=[36,43)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 44 pg[7.b( v 40'39 (0'0,40'39] local-lis/les=43/44 n=1 ec=43/36 lis/c=36/36 les/c/f=37/37/0 sis=43) [1,5,3] r=0 lpr=43 pi=[36,43)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 44 pg[7.9( v 40'39 (0'0,40'39] local-lis/les=43/44 n=1 ec=43/36 lis/c=36/36 les/c/f=37/37/0 sis=43) [1,5,3] r=0 lpr=43 pi=[36,43)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 44 pg[7.7( v 40'39 (0'0,40'39] local-lis/les=43/44 n=1 ec=43/36 lis/c=36/36 les/c/f=37/37/0 sis=43) [1,5,3] r=0 lpr=43 pi=[36,43)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 44 pg[7.6( v 40'39 (0'0,40'39] local-lis/les=43/44 n=2 ec=43/36 lis/c=36/36 les/c/f=37/37/0 sis=43) [1,5,3] r=0 lpr=43 pi=[36,43)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 44 pg[7.8( v 40'39 (0'0,40'39] local-lis/les=43/44 n=1 ec=43/36 lis/c=36/36 les/c/f=37/37/0 sis=43) [1,5,3] r=0 lpr=43 pi=[36,43)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 44 pg[7.f( v 40'39 (0'0,40'39] local-lis/les=43/44 n=1 ec=43/36 lis/c=36/36 les/c/f=37/37/0 sis=43) [1,5,3] r=0 lpr=43 pi=[36,43)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 44 pg[7.e( v 40'39 (0'0,40'39] local-lis/les=43/44 n=1 ec=43/36 lis/c=36/36 les/c/f=37/37/0 sis=43) [1,5,3] r=0 lpr=43 pi=[36,43)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:59:24 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 44 pg[7.a( v 40'39 (0'0,40'39] local-lis/les=43/44 n=1 ec=43/36 lis/c=36/36 les/c/f=37/37/0 sis=43) [1,5,3] r=0 lpr=43 pi=[36,43)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:59:24 np0005625203.localdomain sudo[56830]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bcuqxumfzzufssyouclrhdtpxkxngukf ; /usr/bin/python3
Feb 20 07:59:24 np0005625203.localdomain sudo[56830]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:59:24 np0005625203.localdomain python3[56832]: ansible-ansible.legacy.copy Invoked with dest=/usr/local/sbin/containers-tmpwatch group=root mode=493 owner=root src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771574363.8912966-92881-228696910883264/source _original_basename=tmp7nkbwizd follow=False checksum=84397b037dad9813fed388c4bcdd4871f384cd22 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:59:24 np0005625203.localdomain sudo[56830]: pam_unix(sudo:session): session closed for user root
Feb 20 07:59:25 np0005625203.localdomain sudo[56860]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hwojofeqjyhgwdacjhdcmcshblimrrti ; /usr/bin/python3
Feb 20 07:59:25 np0005625203.localdomain sudo[56860]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:59:25 np0005625203.localdomain python3[56862]: ansible-cron Invoked with job=/usr/local/sbin/containers-tmpwatch name=Remove old logs special_time=daily user=root state=present backup=False minute=* hour=* day=* month=* weekday=* disabled=False env=False cron_file=None insertafter=None insertbefore=None
Feb 20 07:59:25 np0005625203.localdomain crontab[56863]: (root) LIST (root)
Feb 20 07:59:25 np0005625203.localdomain crontab[56864]: (root) REPLACE (root)
Feb 20 07:59:25 np0005625203.localdomain sudo[56860]: pam_unix(sudo:session): session closed for user root
Feb 20 07:59:25 np0005625203.localdomain sudo[56878]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fjzudzgagmbkguepizohjtadjtvoizul ; /usr/bin/python3
Feb 20 07:59:25 np0005625203.localdomain sudo[56878]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:59:25 np0005625203.localdomain python3[56880]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_2 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 20 07:59:25 np0005625203.localdomain sudo[56878]: pam_unix(sudo:session): session closed for user root
Feb 20 07:59:26 np0005625203.localdomain sudo[56928]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bruvqlpbfrrvoisuizttoleuphzwdhav ; /usr/bin/python3
Feb 20 07:59:26 np0005625203.localdomain sudo[56928]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:59:26 np0005625203.localdomain sudo[56928]: pam_unix(sudo:session): session closed for user root
Feb 20 07:59:26 np0005625203.localdomain sudo[56946]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eiwhreqagbyhzxfcfqnctzbhdvwvehya ; /usr/bin/python3
Feb 20 07:59:26 np0005625203.localdomain sudo[56946]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:59:26 np0005625203.localdomain sudo[56946]: pam_unix(sudo:session): session closed for user root
Feb 20 07:59:26 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Feb 20 07:59:26 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Feb 20 07:59:27 np0005625203.localdomain sudo[57050]: tripleo-admin : TTY=pts/0 ; PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bdoiupnhkdycmlljdjbsaebzeqnpbfrd ; ANSIBLE_ASYNC_DIR=/tmp/.ansible_async /usr/bin/python3 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1771574366.6808825-92969-268714390252107/async_wrapper.py 676949965119 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1771574366.6808825-92969-268714390252107/AnsiballZ_command.py _
Feb 20 07:59:27 np0005625203.localdomain sudo[57050]: pam_unix(sudo:session): session opened for user root(uid=0) by tripleo-admin(uid=1003)
Feb 20 07:59:27 np0005625203.localdomain ansible-async_wrapper.py[57052]: Invoked with 676949965119 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1771574366.6808825-92969-268714390252107/AnsiballZ_command.py _
Feb 20 07:59:27 np0005625203.localdomain ansible-async_wrapper.py[57055]: Starting module and watcher
Feb 20 07:59:27 np0005625203.localdomain ansible-async_wrapper.py[57055]: Start watching 57056 (3600)
Feb 20 07:59:27 np0005625203.localdomain ansible-async_wrapper.py[57056]: Start module (57056)
Feb 20 07:59:27 np0005625203.localdomain ansible-async_wrapper.py[57052]: Return async_wrapper task started.
Feb 20 07:59:27 np0005625203.localdomain sudo[57050]: pam_unix(sudo:session): session closed for user root
Feb 20 07:59:27 np0005625203.localdomain sudo[57074]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-klacmfjeojyodijyymxyinyuzuwxvwed ; /usr/bin/python3
Feb 20 07:59:27 np0005625203.localdomain sudo[57074]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:59:27 np0005625203.localdomain python3[57076]: ansible-ansible.legacy.async_status Invoked with jid=676949965119.57052 mode=status _async_dir=/tmp/.ansible_async
Feb 20 07:59:27 np0005625203.localdomain sudo[57074]: pam_unix(sudo:session): session closed for user root
Feb 20 07:59:27 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Feb 20 07:59:27 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Feb 20 07:59:27 np0005625203.localdomain sshd[56604]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 58797 ssh2 [preauth]
Feb 20 07:59:27 np0005625203.localdomain sshd[56604]: Disconnecting authenticating user root 185.246.128.171 port 58797: Too many authentication failures [preauth]
Feb 20 07:59:28 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 2.17 deep-scrub starts
Feb 20 07:59:28 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Feb 20 07:59:28 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.19( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.980849266s) [1,3,2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1188.448486328s@ mbc={}] start_peering_interval up [0,4,2] -> [1,3,2], acting [0,4,2] -> [1,3,2], acting_primary 0 -> 1, up_primary 0 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.1a( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.977431297s) [4,2,0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1188.445068359s@ mbc={}] start_peering_interval up [0,4,2] -> [4,2,0], acting [0,4,2] -> [4,2,0], acting_primary 0 -> 4, up_primary 0 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.1b( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.980465889s) [5,1,0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1188.448120117s@ mbc={}] start_peering_interval up [0,4,2] -> [5,1,0], acting [0,4,2] -> [5,1,0], acting_primary 0 -> 5, up_primary 0 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.18( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.980561256s) [0,1,2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1188.448364258s@ mbc={}] start_peering_interval up [0,4,2] -> [0,1,2], acting [0,4,2] -> [0,1,2], acting_primary 0 -> 0, up_primary 0 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.19( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.980728149s) [1,3,2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1188.448486328s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.1a( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.977431297s) [4,2,0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown pruub 1188.445068359s@ mbc={}] state<Start>: transitioning to Primary
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.1b( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.980325699s) [5,1,0] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1188.448120117s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.18( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.980443001s) [0,1,2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1188.448364258s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.1f( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.977424622s) [3,5,1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1188.445556641s@ mbc={}] start_peering_interval up [0,4,2] -> [3,5,1], acting [0,4,2] -> [3,5,1], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.1f( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.977329254s) [3,5,1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1188.445556641s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.d( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.980141640s) [1,3,2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1188.448608398s@ mbc={}] start_peering_interval up [0,4,2] -> [1,3,2], acting [0,4,2] -> [1,3,2], acting_primary 0 -> 1, up_primary 0 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.d( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.980078697s) [1,3,2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1188.448608398s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.1( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.979561806s) [2,1,3] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1188.448120117s@ mbc={}] start_peering_interval up [0,4,2] -> [2,1,3], acting [0,4,2] -> [2,1,3], acting_primary 0 -> 2, up_primary 0 -> 2, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.1( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.979502678s) [2,1,3] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1188.448120117s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.7( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.977829933s) [4,3,2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1188.446777344s@ mbc={}] start_peering_interval up [0,4,2] -> [4,3,2], acting [0,4,2] -> [4,3,2], acting_primary 0 -> 4, up_primary 0 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.1e( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.981099129s) [5,1,3] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1188.449951172s@ mbc={}] start_peering_interval up [0,4,2] -> [5,1,3], acting [0,4,2] -> [5,1,3], acting_primary 0 -> 5, up_primary 0 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.7( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.977829933s) [4,3,2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown pruub 1188.446777344s@ mbc={}] state<Start>: transitioning to Primary
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.1e( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.980995178s) [5,1,3] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1188.449951172s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.6( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.979160309s) [3,4,5] r=1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1188.448486328s@ mbc={}] start_peering_interval up [0,4,2] -> [3,4,5], acting [0,4,2] -> [3,4,5], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.c( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.982636452s) [3,1,5] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1188.451904297s@ mbc={}] start_peering_interval up [0,4,2] -> [3,1,5], acting [0,4,2] -> [3,1,5], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.6( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.979116440s) [3,4,5] r=1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1188.448486328s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.c( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.982452393s) [3,1,5] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1188.451904297s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.3( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.978756905s) [4,5,0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1188.448120117s@ mbc={}] start_peering_interval up [0,4,2] -> [4,5,0], acting [0,4,2] -> [4,5,0], acting_primary 0 -> 4, up_primary 0 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.3( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.978756905s) [4,5,0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown pruub 1188.448120117s@ mbc={}] state<Start>: transitioning to Primary
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.2( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.978347778s) [1,3,2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1188.447998047s@ mbc={}] start_peering_interval up [0,4,2] -> [1,3,2], acting [0,4,2] -> [1,3,2], acting_primary 0 -> 1, up_primary 0 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.2( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.978300095s) [1,3,2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1188.447998047s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.4( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.980558395s) [3,1,5] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1188.450561523s@ mbc={}] start_peering_interval up [0,4,2] -> [3,1,5], acting [0,4,2] -> [3,1,5], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.e( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.976988792s) [4,3,2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1188.447143555s@ mbc={}] start_peering_interval up [0,4,2] -> [4,3,2], acting [0,4,2] -> [4,3,2], acting_primary 0 -> 4, up_primary 0 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.4( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.980463982s) [3,1,5] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1188.450561523s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.f( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.977742195s) [3,5,1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1188.447998047s@ mbc={}] start_peering_interval up [0,4,2] -> [3,5,1], acting [0,4,2] -> [3,5,1], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.e( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.976988792s) [4,3,2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown pruub 1188.447143555s@ mbc={}] state<Start>: transitioning to Primary
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.f( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.977685928s) [3,5,1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1188.447998047s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.9( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.978763580s) [0,2,4] r=2 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1188.449462891s@ mbc={}] start_peering_interval up [0,4,2] -> [0,2,4], acting [0,4,2] -> [0,2,4], acting_primary 0 -> 0, up_primary 0 -> 0, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.a( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.978750229s) [4,0,2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1188.449462891s@ mbc={}] start_peering_interval up [0,4,2] -> [4,0,2], acting [0,4,2] -> [4,0,2], acting_primary 0 -> 4, up_primary 0 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.5( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.977958679s) [4,2,0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1188.448486328s@ mbc={}] start_peering_interval up [0,4,2] -> [4,2,0], acting [0,4,2] -> [4,2,0], acting_primary 0 -> 4, up_primary 0 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.8( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.977622986s) [1,2,3] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1188.448486328s@ mbc={}] start_peering_interval up [0,4,2] -> [1,2,3], acting [0,4,2] -> [1,2,3], acting_primary 0 -> 1, up_primary 0 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.a( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.978750229s) [4,0,2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown pruub 1188.449462891s@ mbc={}] state<Start>: transitioning to Primary
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.9( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.978693008s) [0,2,4] r=2 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1188.449462891s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.8( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.977455139s) [1,2,3] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1188.448486328s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.5( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.977958679s) [4,2,0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown pruub 1188.448486328s@ mbc={}] state<Start>: transitioning to Primary
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.14( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.980960846s) [3,4,5] r=1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1188.452270508s@ mbc={}] start_peering_interval up [0,4,2] -> [3,4,5], acting [0,4,2] -> [3,4,5], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.14( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.980885506s) [3,4,5] r=1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1188.452270508s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.16( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.975839615s) [0,1,5] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1188.447265625s@ mbc={}] start_peering_interval up [0,4,2] -> [0,1,5], acting [0,4,2] -> [0,1,5], acting_primary 0 -> 0, up_primary 0 -> 0, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.16( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.975781441s) [0,1,5] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1188.447265625s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.17( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.975026131s) [5,0,1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1188.446411133s@ mbc={}] start_peering_interval up [0,4,2] -> [5,0,1], acting [0,4,2] -> [5,0,1], acting_primary 0 -> 5, up_primary 0 -> 5, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.17( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.974949837s) [5,0,1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1188.446411133s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.15( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.973897934s) [4,5,0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1188.445556641s@ mbc={}] start_peering_interval up [0,4,2] -> [4,5,0], acting [0,4,2] -> [4,5,0], acting_primary 0 -> 4, up_primary 0 -> 4, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.b( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.978237152s) [3,1,2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1188.450195312s@ mbc={}] start_peering_interval up [0,4,2] -> [3,1,2], acting [0,4,2] -> [3,1,2], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.11( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.976093292s) [3,5,4] r=2 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1188.447875977s@ mbc={}] start_peering_interval up [0,4,2] -> [3,5,4], acting [0,4,2] -> [3,5,4], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.10( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.974378586s) [0,2,4] r=2 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1188.446166992s@ mbc={}] start_peering_interval up [0,4,2] -> [0,2,4], acting [0,4,2] -> [0,2,4], acting_primary 0 -> 0, up_primary 0 -> 0, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.12( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.974658012s) [5,4,0] r=1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1188.446655273s@ mbc={}] start_peering_interval up [0,4,2] -> [5,4,0], acting [0,4,2] -> [5,4,0], acting_primary 0 -> 5, up_primary 0 -> 5, role 1 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.11( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.976054192s) [3,5,4] r=2 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1188.447875977s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.b( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.978181839s) [3,1,2] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1188.450195312s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.10( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.974260330s) [0,2,4] r=2 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1188.446166992s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.12( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.974620819s) [5,4,0] r=1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1188.446655273s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.13( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.979071617s) [3,2,1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1188.451293945s@ mbc={}] start_peering_interval up [0,4,2] -> [3,2,1], acting [0,4,2] -> [3,2,1], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.13( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.979030609s) [3,2,1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1188.451293945s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.1c( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.978875160s) [5,3,4] r=2 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1188.451171875s@ mbc={}] start_peering_interval up [0,4,2] -> [5,3,4], acting [0,4,2] -> [5,3,4], acting_primary 0 -> 5, up_primary 0 -> 5, role 1 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.15( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.973897934s) [4,5,0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown pruub 1188.445556641s@ mbc={}] state<Start>: transitioning to Primary
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.1d( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.978340149s) [3,5,1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active pruub 1188.450805664s@ mbc={}] start_peering_interval up [0,4,2] -> [3,5,1], acting [0,4,2] -> [3,5,1], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.1d( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.978264809s) [3,5,1] r=-1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1188.450805664s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[6.1c( empty local-lis/les=43/44 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.978773117s) [5,3,4] r=2 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 1188.451171875s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[7.7( empty local-lis/les=0/0 n=0 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=45) [4,2,3] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[7.b( empty local-lis/les=0/0 n=0 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=45) [4,2,3] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[7.f( empty local-lis/les=0/0 n=0 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=45) [4,2,3] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 45 pg[6.2( empty local-lis/les=0/0 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45) [1,3,2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 45 pg[6.19( empty local-lis/les=0/0 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45) [1,3,2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 45 pg[6.8( empty local-lis/les=0/0 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45) [1,2,3] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 45 pg[6.d( empty local-lis/les=0/0 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45) [1,3,2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[7.9( empty local-lis/les=0/0 n=0 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=45) [4,2,3] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[7.5( empty local-lis/les=0/0 n=0 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=45) [4,2,3] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[7.1( empty local-lis/les=0/0 n=0 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=45) [4,2,3] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[7.3( empty local-lis/les=0/0 n=0 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=45) [4,2,3] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 45 pg[7.1( v 40'39 (0'0,40'39] local-lis/les=43/44 n=2 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.973889351s) [4,2,3] r=-1 lpr=45 pi=[43,45)/1 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1192.791992188s@ mbc={}] start_peering_interval up [1,5,3] -> [4,2,3], acting [1,5,3] -> [4,2,3], acting_primary 1 -> 4, up_primary 1 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 45 pg[7.d( empty local-lis/les=0/0 n=0 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=45) [4,2,3] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 45 pg[7.1( v 40'39 (0'0,40'39] local-lis/les=43/44 n=2 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.973760605s) [4,2,3] r=-1 lpr=45 pi=[43,45)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1192.791992188s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 45 pg[7.9( v 40'39 (0'0,40'39] local-lis/les=43/44 n=1 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.975070953s) [4,2,3] r=-1 lpr=45 pi=[43,45)/1 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1192.793457031s@ mbc={}] start_peering_interval up [1,5,3] -> [4,2,3], acting [1,5,3] -> [4,2,3], acting_primary 1 -> 4, up_primary 1 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 45 pg[7.9( v 40'39 (0'0,40'39] local-lis/les=43/44 n=1 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.974993706s) [4,2,3] r=-1 lpr=45 pi=[43,45)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1192.793457031s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 45 pg[7.b( v 40'39 (0'0,40'39] local-lis/les=43/44 n=1 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.974288940s) [4,2,3] r=-1 lpr=45 pi=[43,45)/1 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1192.793090820s@ mbc={}] start_peering_interval up [1,5,3] -> [4,2,3], acting [1,5,3] -> [4,2,3], acting_primary 1 -> 4, up_primary 1 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 45 pg[7.b( v 40'39 (0'0,40'39] local-lis/les=43/44 n=1 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.974230766s) [4,2,3] r=-1 lpr=45 pi=[43,45)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1192.793090820s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 45 pg[7.d( v 40'39 (0'0,40'39] local-lis/les=43/44 n=1 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.973721504s) [4,2,3] r=-1 lpr=45 pi=[43,45)/1 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1192.792724609s@ mbc={}] start_peering_interval up [1,5,3] -> [4,2,3], acting [1,5,3] -> [4,2,3], acting_primary 1 -> 4, up_primary 1 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 45 pg[7.7( v 40'39 (0'0,40'39] local-lis/les=43/44 n=1 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.974438667s) [4,2,3] r=-1 lpr=45 pi=[43,45)/1 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1192.793579102s@ mbc={}] start_peering_interval up [1,5,3] -> [4,2,3], acting [1,5,3] -> [4,2,3], acting_primary 1 -> 4, up_primary 1 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 45 pg[7.d( v 40'39 (0'0,40'39] local-lis/les=43/44 n=1 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.973656654s) [4,2,3] r=-1 lpr=45 pi=[43,45)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1192.792724609s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 45 pg[7.7( v 40'39 (0'0,40'39] local-lis/les=43/44 n=1 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.974384308s) [4,2,3] r=-1 lpr=45 pi=[43,45)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1192.793579102s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 45 pg[7.f( v 40'39 (0'0,40'39] local-lis/les=43/44 n=1 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.973982811s) [4,2,3] r=-1 lpr=45 pi=[43,45)/1 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1192.793701172s@ mbc={}] start_peering_interval up [1,5,3] -> [4,2,3], acting [1,5,3] -> [4,2,3], acting_primary 1 -> 4, up_primary 1 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 45 pg[7.3( v 40'39 (0'0,40'39] local-lis/les=43/44 n=2 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.972805977s) [4,2,3] r=-1 lpr=45 pi=[43,45)/1 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1192.792846680s@ mbc={}] start_peering_interval up [1,5,3] -> [4,2,3], acting [1,5,3] -> [4,2,3], acting_primary 1 -> 4, up_primary 1 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 45 pg[7.f( v 40'39 (0'0,40'39] local-lis/les=43/44 n=1 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.973896027s) [4,2,3] r=-1 lpr=45 pi=[43,45)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1192.793701172s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 45 pg[7.5( v 40'39 (0'0,40'39] local-lis/les=43/44 n=2 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.972621918s) [4,2,3] r=-1 lpr=45 pi=[43,45)/1 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1192.792846680s@ mbc={}] start_peering_interval up [1,5,3] -> [4,2,3], acting [1,5,3] -> [4,2,3], acting_primary 1 -> 4, up_primary 1 -> 4, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 45 pg[7.3( v 40'39 (0'0,40'39] local-lis/les=43/44 n=2 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.972737312s) [4,2,3] r=-1 lpr=45 pi=[43,45)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1192.792846680s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:29 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 45 pg[7.5( v 40'39 (0'0,40'39] local-lis/les=43/44 n=2 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=45 pruub=10.972545624s) [4,2,3] r=-1 lpr=45 pi=[43,45)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1192.792846680s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:29 np0005625203.localdomain sshd[57127]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:59:30 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 4.a scrub starts
Feb 20 07:59:30 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 45 pg[6.18( empty local-lis/les=0/0 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45) [0,1,2] r=1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:30 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 45 pg[6.1( empty local-lis/les=0/0 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45) [2,1,3] r=1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:30 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 45 pg[6.16( empty local-lis/les=0/0 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45) [0,1,5] r=1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:30 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 45 pg[6.1f( empty local-lis/les=0/0 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45) [3,5,1] r=2 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:30 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 45 pg[6.c( empty local-lis/les=0/0 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45) [3,1,5] r=1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:30 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 45 pg[6.1e( empty local-lis/les=0/0 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45) [5,1,3] r=1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:30 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 45 pg[6.17( empty local-lis/les=0/0 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45) [5,0,1] r=2 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:30 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 45 pg[6.4( empty local-lis/les=0/0 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45) [3,1,5] r=1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:30 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 45 pg[6.f( empty local-lis/les=0/0 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45) [3,5,1] r=2 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:30 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 45 pg[6.1b( empty local-lis/les=0/0 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45) [5,1,0] r=1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:30 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 45 pg[6.13( empty local-lis/les=0/0 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45) [3,2,1] r=2 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:30 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 45 pg[6.1d( empty local-lis/les=0/0 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45) [3,5,1] r=2 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:30 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 45 pg[6.b( empty local-lis/les=0/0 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45) [3,1,2] r=1 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:30 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 46 pg[6.1a( empty local-lis/les=45/46 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45) [4,2,0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:59:30 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 46 pg[6.a( empty local-lis/les=45/46 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45) [4,0,2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:59:30 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 46 pg[6.5( empty local-lis/les=45/46 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45) [4,2,0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:59:30 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 46 pg[7.3( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=45/46 n=2 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=45) [4,2,3] r=0 lpr=45 pi=[43,45)/1 crt=40'39 mlcod 0'0 active+degraded m=2 mbc={255={(1+2)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:59:30 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 46 pg[7.1( v 40'39 (0'0,40'39] local-lis/les=45/46 n=2 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=45) [4,2,3] r=0 lpr=45 pi=[43,45)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:59:30 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 46 pg[6.7( empty local-lis/les=45/46 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45) [4,3,2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:59:30 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 46 pg[6.e( empty local-lis/les=45/46 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45) [4,3,2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:59:30 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 46 pg[7.f( v 40'39 lc 38'1 (0'0,40'39] local-lis/les=45/46 n=1 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=45) [4,2,3] r=0 lpr=45 pi=[43,45)/1 crt=40'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(1+2)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:59:30 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 46 pg[7.7( v 40'39 lc 38'21 (0'0,40'39] local-lis/les=45/46 n=1 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=45) [4,2,3] r=0 lpr=45 pi=[43,45)/1 crt=40'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(1+2)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:59:30 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 46 pg[6.19( empty local-lis/les=45/46 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45) [1,3,2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:59:30 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 46 pg[6.2( empty local-lis/les=45/46 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45) [1,3,2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:59:30 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 46 pg[6.8( empty local-lis/les=45/46 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45) [1,2,3] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:59:30 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 46 pg[7.d( v 40'39 lc 38'13 (0'0,40'39] local-lis/les=45/46 n=1 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=45) [4,2,3] r=0 lpr=45 pi=[43,45)/1 crt=40'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(1+2)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:59:30 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 46 pg[7.5( v 40'39 lc 38'11 (0'0,40'39] local-lis/les=45/46 n=2 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=45) [4,2,3] r=0 lpr=45 pi=[43,45)/1 crt=40'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(1+2)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:59:30 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 4.a scrub ok
Feb 20 07:59:30 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 46 pg[6.d( empty local-lis/les=45/46 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45) [1,3,2] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:59:30 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 46 pg[6.3( empty local-lis/les=45/46 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45) [4,5,0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:59:30 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 46 pg[6.15( empty local-lis/les=45/46 n=0 ec=43/35 lis/c=43/43 les/c/f=44/44/0 sis=45) [4,5,0] r=0 lpr=45 pi=[43,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:59:30 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 46 pg[7.b( v 40'39 lc 0'0 (0'0,40'39] local-lis/les=45/46 n=1 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=45) [4,2,3] r=0 lpr=45 pi=[43,45)/1 crt=40'39 mlcod 0'0 active+degraded m=1 mbc={255={(1+2)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:59:30 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 46 pg[7.9( v 40'39 (0'0,40'39] local-lis/les=45/46 n=1 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=45) [4,2,3] r=0 lpr=45 pi=[43,45)/1 crt=40'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 07:59:31 np0005625203.localdomain puppet-user[57061]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Feb 20 07:59:31 np0005625203.localdomain puppet-user[57061]:    (file: /etc/puppet/hiera.yaml)
Feb 20 07:59:31 np0005625203.localdomain puppet-user[57061]: Warning: Undefined variable '::deploy_config_name';
Feb 20 07:59:31 np0005625203.localdomain puppet-user[57061]:    (file & line not available)
Feb 20 07:59:31 np0005625203.localdomain puppet-user[57061]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Feb 20 07:59:31 np0005625203.localdomain puppet-user[57061]:    (file & line not available)
Feb 20 07:59:31 np0005625203.localdomain puppet-user[57061]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8)
Feb 20 07:59:31 np0005625203.localdomain puppet-user[57061]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69)
Feb 20 07:59:31 np0005625203.localdomain puppet-user[57061]: Notice: Compiled catalog for np0005625203.localdomain in environment production in 0.19 seconds
Feb 20 07:59:31 np0005625203.localdomain puppet-user[57061]: Notice: Applied catalog in 0.04 seconds
Feb 20 07:59:31 np0005625203.localdomain puppet-user[57061]: Application:
Feb 20 07:59:31 np0005625203.localdomain puppet-user[57061]:    Initial environment: production
Feb 20 07:59:31 np0005625203.localdomain puppet-user[57061]:    Converged environment: production
Feb 20 07:59:31 np0005625203.localdomain puppet-user[57061]:          Run mode: user
Feb 20 07:59:31 np0005625203.localdomain puppet-user[57061]: Changes:
Feb 20 07:59:31 np0005625203.localdomain puppet-user[57061]: Events:
Feb 20 07:59:31 np0005625203.localdomain puppet-user[57061]: Resources:
Feb 20 07:59:31 np0005625203.localdomain puppet-user[57061]:             Total: 10
Feb 20 07:59:31 np0005625203.localdomain puppet-user[57061]: Time:
Feb 20 07:59:31 np0005625203.localdomain puppet-user[57061]:          Schedule: 0.00
Feb 20 07:59:31 np0005625203.localdomain puppet-user[57061]:              File: 0.00
Feb 20 07:59:31 np0005625203.localdomain puppet-user[57061]:            Augeas: 0.01
Feb 20 07:59:31 np0005625203.localdomain puppet-user[57061]:              Exec: 0.01
Feb 20 07:59:31 np0005625203.localdomain puppet-user[57061]:    Transaction evaluation: 0.03
Feb 20 07:59:31 np0005625203.localdomain puppet-user[57061]:    Catalog application: 0.04
Feb 20 07:59:31 np0005625203.localdomain puppet-user[57061]:    Config retrieval: 0.23
Feb 20 07:59:31 np0005625203.localdomain puppet-user[57061]:          Last run: 1771574371
Feb 20 07:59:31 np0005625203.localdomain puppet-user[57061]:        Filebucket: 0.00
Feb 20 07:59:31 np0005625203.localdomain puppet-user[57061]:             Total: 0.04
Feb 20 07:59:31 np0005625203.localdomain puppet-user[57061]: Version:
Feb 20 07:59:31 np0005625203.localdomain puppet-user[57061]:            Config: 1771574371
Feb 20 07:59:31 np0005625203.localdomain puppet-user[57061]:            Puppet: 7.10.0
Feb 20 07:59:31 np0005625203.localdomain ansible-async_wrapper.py[57056]: Module complete (57056)
Feb 20 07:59:32 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 7.0 deep-scrub starts
Feb 20 07:59:32 np0005625203.localdomain ansible-async_wrapper.py[57055]: Done in kid B.
Feb 20 07:59:34 np0005625203.localdomain sshd[57127]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 51114 ssh2 [preauth]
Feb 20 07:59:34 np0005625203.localdomain sshd[57127]: Disconnecting authenticating user root 185.246.128.171 port 51114: Too many authentication failures [preauth]
Feb 20 07:59:35 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 7.c deep-scrub starts
Feb 20 07:59:35 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 7.c deep-scrub ok
Feb 20 07:59:35 np0005625203.localdomain sshd[57191]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:59:35 np0005625203.localdomain sudo[57192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 07:59:35 np0005625203.localdomain sudo[57192]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:59:35 np0005625203.localdomain sudo[57192]: pam_unix(sudo:session): session closed for user root
Feb 20 07:59:36 np0005625203.localdomain sudo[57207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 check-host
Feb 20 07:59:36 np0005625203.localdomain sudo[57207]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:59:36 np0005625203.localdomain sudo[57207]: pam_unix(sudo:session): session closed for user root
Feb 20 07:59:36 np0005625203.localdomain sudo[57244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 07:59:36 np0005625203.localdomain sudo[57244]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:59:36 np0005625203.localdomain sudo[57244]: pam_unix(sudo:session): session closed for user root
Feb 20 07:59:36 np0005625203.localdomain sudo[57259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 07:59:36 np0005625203.localdomain sudo[57259]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:59:37 np0005625203.localdomain sudo[57259]: pam_unix(sudo:session): session closed for user root
Feb 20 07:59:37 np0005625203.localdomain sudo[57319]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iydjxoqcllaaeiazgesbmrptllkpsfqg ; /usr/bin/python3
Feb 20 07:59:37 np0005625203.localdomain sudo[57319]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:59:37 np0005625203.localdomain sudo[57321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 07:59:37 np0005625203.localdomain sudo[57321]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 07:59:37 np0005625203.localdomain sudo[57321]: pam_unix(sudo:session): session closed for user root
Feb 20 07:59:37 np0005625203.localdomain python3[57329]: ansible-ansible.legacy.async_status Invoked with jid=676949965119.57052 mode=status _async_dir=/tmp/.ansible_async
Feb 20 07:59:37 np0005625203.localdomain sudo[57319]: pam_unix(sudo:session): session closed for user root
Feb 20 07:59:38 np0005625203.localdomain sudo[57350]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kriucrqjvghficikharvdrqynphyfhzw ; /usr/bin/python3
Feb 20 07:59:38 np0005625203.localdomain sudo[57350]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:59:38 np0005625203.localdomain python3[57352]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Feb 20 07:59:38 np0005625203.localdomain sudo[57350]: pam_unix(sudo:session): session closed for user root
Feb 20 07:59:38 np0005625203.localdomain sudo[57366]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-shjdxznvtftqteqxxnhtbxydtbanzfcw ; /usr/bin/python3
Feb 20 07:59:38 np0005625203.localdomain sudo[57366]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:59:38 np0005625203.localdomain python3[57368]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 20 07:59:38 np0005625203.localdomain sudo[57366]: pam_unix(sudo:session): session closed for user root
Feb 20 07:59:39 np0005625203.localdomain sudo[57416]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rxkvwskhfqwqxxebfmxqevqngpfokcmr ; /usr/bin/python3
Feb 20 07:59:39 np0005625203.localdomain sudo[57416]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:59:39 np0005625203.localdomain python3[57418]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:59:39 np0005625203.localdomain sudo[57416]: pam_unix(sudo:session): session closed for user root
Feb 20 07:59:39 np0005625203.localdomain sudo[57434]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dfhhgnnzyluivczewylwpvmjobkgfgta ; /usr/bin/python3
Feb 20 07:59:39 np0005625203.localdomain sudo[57434]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:59:39 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 47 pg[7.6( v 40'39 (0'0,40'39] local-lis/les=43/44 n=2 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=8.792223930s) [3,5,1] r=2 lpr=47 pi=[43,47)/1 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1200.793823242s@ mbc={}] start_peering_interval up [1,5,3] -> [3,5,1], acting [1,5,3] -> [3,5,1], acting_primary 1 -> 3, up_primary 1 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:59:39 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 47 pg[7.2( v 40'39 (0'0,40'39] local-lis/les=43/44 n=2 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=8.791179657s) [3,5,1] r=2 lpr=47 pi=[43,47)/1 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1200.792968750s@ mbc={}] start_peering_interval up [1,5,3] -> [3,5,1], acting [1,5,3] -> [3,5,1], acting_primary 1 -> 3, up_primary 1 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:59:39 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 47 pg[7.6( v 40'39 (0'0,40'39] local-lis/les=43/44 n=2 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=8.792135239s) [3,5,1] r=2 lpr=47 pi=[43,47)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1200.793823242s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:39 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 47 pg[7.a( v 40'39 (0'0,40'39] local-lis/les=43/44 n=1 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=8.792162895s) [3,5,1] r=2 lpr=47 pi=[43,47)/1 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1200.793945312s@ mbc={}] start_peering_interval up [1,5,3] -> [3,5,1], acting [1,5,3] -> [3,5,1], acting_primary 1 -> 3, up_primary 1 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:59:39 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 47 pg[7.2( v 40'39 (0'0,40'39] local-lis/les=43/44 n=2 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=8.791073799s) [3,5,1] r=2 lpr=47 pi=[43,47)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1200.792968750s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:39 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 47 pg[7.a( v 40'39 (0'0,40'39] local-lis/les=43/44 n=1 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=8.792043686s) [3,5,1] r=2 lpr=47 pi=[43,47)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1200.793945312s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:39 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 47 pg[7.e( v 40'39 (0'0,40'39] local-lis/les=43/44 n=1 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=8.791642189s) [3,5,1] r=2 lpr=47 pi=[43,47)/1 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1200.793823242s@ mbc={}] start_peering_interval up [1,5,3] -> [3,5,1], acting [1,5,3] -> [3,5,1], acting_primary 1 -> 3, up_primary 1 -> 3, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:59:39 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 47 pg[7.e( v 40'39 (0'0,40'39] local-lis/les=43/44 n=1 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=8.791506767s) [3,5,1] r=2 lpr=47 pi=[43,47)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1200.793823242s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:39 np0005625203.localdomain python3[57436]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmpd2mlman0 recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Feb 20 07:59:39 np0005625203.localdomain sudo[57434]: pam_unix(sudo:session): session closed for user root
Feb 20 07:59:40 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Feb 20 07:59:40 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Feb 20 07:59:40 np0005625203.localdomain sudo[57464]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-blcufadlgygurzestnzhftbzxfecxvyk ; /usr/bin/python3
Feb 20 07:59:40 np0005625203.localdomain sudo[57464]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:59:40 np0005625203.localdomain python3[57466]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:59:40 np0005625203.localdomain sudo[57464]: pam_unix(sudo:session): session closed for user root
Feb 20 07:59:40 np0005625203.localdomain sudo[57480]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ofutbctxmxdztlzgznqdvrqrknkyktnb ; /usr/bin/python3
Feb 20 07:59:40 np0005625203.localdomain sudo[57480]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:59:40 np0005625203.localdomain sudo[57480]: pam_unix(sudo:session): session closed for user root
Feb 20 07:59:41 np0005625203.localdomain sudo[57567]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-raohyfyvggyhupujkqdgsgawqzoycqrf ; /usr/bin/python3
Feb 20 07:59:41 np0005625203.localdomain sudo[57567]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:59:41 np0005625203.localdomain python3[57569]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None
Feb 20 07:59:41 np0005625203.localdomain sudo[57567]: pam_unix(sudo:session): session closed for user root
Feb 20 07:59:41 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 49 pg[7.7( v 40'39 (0'0,40'39] local-lis/les=45/46 n=1 ec=43/36 lis/c=45/45 les/c/f=46/46/0 sis=49 pruub=12.808780670s) [3,4,2] r=1 lpr=49 pi=[45,49)/1 crt=40'39 mlcod 0'0 active pruub 1202.503784180s@ mbc={255={}}] start_peering_interval up [4,2,3] -> [3,4,2], acting [4,2,3] -> [3,4,2], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:59:41 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 49 pg[7.3( v 40'39 (0'0,40'39] local-lis/les=45/46 n=2 ec=43/36 lis/c=45/45 les/c/f=46/46/0 sis=49 pruub=12.807779312s) [3,4,2] r=1 lpr=49 pi=[45,49)/1 crt=40'39 mlcod 0'0 active pruub 1202.502929688s@ mbc={255={}}] start_peering_interval up [4,2,3] -> [3,4,2], acting [4,2,3] -> [3,4,2], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:59:41 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 49 pg[7.7( v 40'39 (0'0,40'39] local-lis/les=45/46 n=1 ec=43/36 lis/c=45/45 les/c/f=46/46/0 sis=49 pruub=12.808680534s) [3,4,2] r=1 lpr=49 pi=[45,49)/1 crt=40'39 mlcod 0'0 unknown NOTIFY pruub 1202.503784180s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:41 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 49 pg[7.3( v 40'39 (0'0,40'39] local-lis/les=45/46 n=2 ec=43/36 lis/c=45/45 les/c/f=46/46/0 sis=49 pruub=12.807701111s) [3,4,2] r=1 lpr=49 pi=[45,49)/1 crt=40'39 mlcod 0'0 unknown NOTIFY pruub 1202.502929688s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:41 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 49 pg[7.f( v 40'39 (0'0,40'39] local-lis/les=45/46 n=1 ec=43/36 lis/c=45/45 les/c/f=46/46/0 sis=49 pruub=12.807963371s) [3,4,2] r=1 lpr=49 pi=[45,49)/1 crt=40'39 mlcod 0'0 active pruub 1202.503540039s@ mbc={255={}}] start_peering_interval up [4,2,3] -> [3,4,2], acting [4,2,3] -> [3,4,2], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:59:41 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 49 pg[7.f( v 40'39 (0'0,40'39] local-lis/les=45/46 n=1 ec=43/36 lis/c=45/45 les/c/f=46/46/0 sis=49 pruub=12.807896614s) [3,4,2] r=1 lpr=49 pi=[45,49)/1 crt=40'39 mlcod 0'0 unknown NOTIFY pruub 1202.503540039s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:41 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 49 pg[7.b( v 40'39 (0'0,40'39] local-lis/les=45/46 n=1 ec=43/36 lis/c=45/45 les/c/f=46/46/0 sis=49 pruub=12.811567307s) [3,4,2] r=1 lpr=49 pi=[45,49)/1 crt=40'39 mlcod 0'0 active pruub 1202.507324219s@ mbc={255={}}] start_peering_interval up [4,2,3] -> [3,4,2], acting [4,2,3] -> [3,4,2], acting_primary 4 -> 3, up_primary 4 -> 3, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:59:41 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 49 pg[7.b( v 40'39 (0'0,40'39] local-lis/les=45/46 n=1 ec=43/36 lis/c=45/45 les/c/f=46/46/0 sis=49 pruub=12.811153412s) [3,4,2] r=1 lpr=49 pi=[45,49)/1 crt=40'39 mlcod 0'0 unknown NOTIFY pruub 1202.507324219s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:41 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Feb 20 07:59:41 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Feb 20 07:59:42 np0005625203.localdomain sudo[57586]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zhuyapmualfomrvxnufgjvgpokavsgsp ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 07:59:42 np0005625203.localdomain sudo[57586]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:59:42 np0005625203.localdomain python3[57588]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:59:42 np0005625203.localdomain sudo[57586]: pam_unix(sudo:session): session closed for user root
Feb 20 07:59:42 np0005625203.localdomain sudo[57602]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yueljnmviqwijcdbunvspjxzsezlvbwp ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 07:59:42 np0005625203.localdomain sudo[57602]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:59:42 np0005625203.localdomain sudo[57602]: pam_unix(sudo:session): session closed for user root
Feb 20 07:59:42 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Feb 20 07:59:42 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Feb 20 07:59:43 np0005625203.localdomain sudo[57618]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nrcydavlzjmxmxietpygpqlnersiqquo ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 07:59:43 np0005625203.localdomain sudo[57618]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:59:43 np0005625203.localdomain python3[57620]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 20 07:59:43 np0005625203.localdomain sudo[57618]: pam_unix(sudo:session): session closed for user root
Feb 20 07:59:43 np0005625203.localdomain sudo[57668]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gqlmfiqwvoiakihugknxxlujwkhhryqf ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 07:59:43 np0005625203.localdomain sudo[57668]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:59:43 np0005625203.localdomain python3[57670]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:59:43 np0005625203.localdomain sudo[57668]: pam_unix(sudo:session): session closed for user root
Feb 20 07:59:43 np0005625203.localdomain sudo[57686]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bfupnluhvtnobgjpcuasyjlxedzblrzl ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 07:59:43 np0005625203.localdomain sudo[57686]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:59:43 np0005625203.localdomain python3[57688]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:59:43 np0005625203.localdomain sudo[57686]: pam_unix(sudo:session): session closed for user root
Feb 20 07:59:44 np0005625203.localdomain sudo[57748]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-klsvzxqibtpzicosdrvxkaxagemxotkh ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 07:59:44 np0005625203.localdomain sudo[57748]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:59:44 np0005625203.localdomain python3[57750]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:59:44 np0005625203.localdomain sudo[57748]: pam_unix(sudo:session): session closed for user root
Feb 20 07:59:44 np0005625203.localdomain sudo[57766]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dlqxxreavolcwyyghrzvzienavgssias ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 07:59:44 np0005625203.localdomain sudo[57766]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:59:44 np0005625203.localdomain python3[57768]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:59:44 np0005625203.localdomain sudo[57766]: pam_unix(sudo:session): session closed for user root
Feb 20 07:59:45 np0005625203.localdomain sshd[57191]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 24521 ssh2 [preauth]
Feb 20 07:59:45 np0005625203.localdomain sshd[57191]: Disconnecting authenticating user root 185.246.128.171 port 24521: Too many authentication failures [preauth]
Feb 20 07:59:45 np0005625203.localdomain sudo[57828]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yfjayccdtdlzyemmoxczfmbsucqykgii ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 07:59:45 np0005625203.localdomain sudo[57828]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:59:45 np0005625203.localdomain python3[57830]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:59:45 np0005625203.localdomain sudo[57828]: pam_unix(sudo:session): session closed for user root
Feb 20 07:59:45 np0005625203.localdomain sudo[57846]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eedcobrxxegvdvadgfzdecfowwphahim ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 07:59:45 np0005625203.localdomain sudo[57846]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:59:45 np0005625203.localdomain python3[57848]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:59:45 np0005625203.localdomain sudo[57846]: pam_unix(sudo:session): session closed for user root
Feb 20 07:59:45 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Feb 20 07:59:45 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Feb 20 07:59:45 np0005625203.localdomain sudo[57908]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rviffjidraxiucvjlbjikktqzmcxwjfg ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 07:59:45 np0005625203.localdomain sudo[57908]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:59:46 np0005625203.localdomain python3[57910]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:59:46 np0005625203.localdomain sudo[57908]: pam_unix(sudo:session): session closed for user root
Feb 20 07:59:46 np0005625203.localdomain sudo[57926]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ajxhjxiaypxzkwmzofmrfbvejwomjqzb ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 07:59:46 np0005625203.localdomain sudo[57926]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:59:46 np0005625203.localdomain python3[57928]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:59:46 np0005625203.localdomain sudo[57926]: pam_unix(sudo:session): session closed for user root
Feb 20 07:59:46 np0005625203.localdomain sshd[57956]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:59:46 np0005625203.localdomain sudo[57957]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oanpibbxkwkekkrrmjhanueygsalvzmg ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 07:59:46 np0005625203.localdomain sudo[57957]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:59:46 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 6.1a deep-scrub starts
Feb 20 07:59:46 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 6.1a deep-scrub ok
Feb 20 07:59:46 np0005625203.localdomain python3[57959]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 07:59:46 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 07:59:46 np0005625203.localdomain systemd-rc-local-generator[57981]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 07:59:46 np0005625203.localdomain systemd-sysv-generator[57989]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 07:59:47 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 07:59:47 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Feb 20 07:59:47 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Feb 20 07:59:47 np0005625203.localdomain sudo[57957]: pam_unix(sudo:session): session closed for user root
Feb 20 07:59:47 np0005625203.localdomain sudo[58044]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qnkfqifcchfcaacbnfpfpolwmzvtazct ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 07:59:47 np0005625203.localdomain sudo[58044]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:59:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 20 07:59:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Cumulative writes: 4134 writes, 19K keys, 4134 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                                          Cumulative WAL: 4134 writes, 372 syncs, 11.11 writes per sync, written: 0.02 GB, 0.01 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 729 writes, 2626 keys, 729 commit groups, 1.0 writes per commit group, ingest: 1.30 MB, 0.00 MB/s
                                                          Interval WAL: 729 writes, 166 syncs, 4.39 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          
                                                          ** Compaction Stats [default] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      2/0    2.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                                           Sum      2/0    2.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [default] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55f8e79fa2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [default] **
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55f8e79fa2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-0] **
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55f8e79fa2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-1] **
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55f8e79fa2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-2] **
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.57 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Sum      1/0    1.57 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55f8e79fa2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-0] **
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55f8e79fa2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-1] **
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55f8e79fa2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-2] **
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55f8e79fb610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-0] **
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55f8e79fb610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-1] **
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.26 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                           Sum      1/0    1.26 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55f8e79fb610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-2] **
                                                          
                                                          ** Compaction Stats [L] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [L] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55f8e79fa2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [L] **
                                                          
                                                          ** Compaction Stats [P] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [P] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55f8e79fa2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [P] **
Feb 20 07:59:47 np0005625203.localdomain python3[58046]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:59:47 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 6.e scrub starts
Feb 20 07:59:47 np0005625203.localdomain sudo[58044]: pam_unix(sudo:session): session closed for user root
Feb 20 07:59:47 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 6.e scrub ok
Feb 20 07:59:47 np0005625203.localdomain sudo[58062]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qbbwscunxtiowrjtiliuhkspibdeqqms ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 07:59:47 np0005625203.localdomain sudo[58062]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:59:47 np0005625203.localdomain python3[58064]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:59:47 np0005625203.localdomain sudo[58062]: pam_unix(sudo:session): session closed for user root
Feb 20 07:59:48 np0005625203.localdomain sudo[58124]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zyucrlkaldmdsbkcqqhcwndctggpffmz ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 07:59:48 np0005625203.localdomain sudo[58124]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:59:48 np0005625203.localdomain python3[58126]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 07:59:48 np0005625203.localdomain sudo[58124]: pam_unix(sudo:session): session closed for user root
Feb 20 07:59:48 np0005625203.localdomain sudo[58142]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mbzfgvjnocbnlurauvhgmovprmpkqwny ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 07:59:48 np0005625203.localdomain sudo[58142]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:59:48 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Feb 20 07:59:48 np0005625203.localdomain python3[58144]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:59:48 np0005625203.localdomain sudo[58142]: pam_unix(sudo:session): session closed for user root
Feb 20 07:59:48 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Feb 20 07:59:49 np0005625203.localdomain sudo[58172]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jkoqotubaksufvyvotmzbncbgleeebrc ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 07:59:49 np0005625203.localdomain sudo[58172]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:59:49 np0005625203.localdomain python3[58174]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 07:59:49 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 07:59:49 np0005625203.localdomain systemd-rc-local-generator[58196]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 07:59:49 np0005625203.localdomain systemd-sysv-generator[58202]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 07:59:49 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 07:59:49 np0005625203.localdomain systemd[1]: Starting Create netns directory...
Feb 20 07:59:49 np0005625203.localdomain systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Feb 20 07:59:49 np0005625203.localdomain systemd[1]: netns-placeholder.service: Deactivated successfully.
Feb 20 07:59:49 np0005625203.localdomain systemd[1]: Finished Create netns directory.
Feb 20 07:59:49 np0005625203.localdomain sudo[58172]: pam_unix(sudo:session): session closed for user root
Feb 20 07:59:49 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 51 pg[7.4( v 40'39 (0'0,40'39] local-lis/les=43/44 n=4 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=14.584158897s) [0,1,2] r=1 lpr=51 pi=[43,51)/1 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1216.793457031s@ mbc={}] start_peering_interval up [1,5,3] -> [0,1,2], acting [1,5,3] -> [0,1,2], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:59:49 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 51 pg[7.4( v 40'39 (0'0,40'39] local-lis/les=43/44 n=4 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=14.584040642s) [0,1,2] r=1 lpr=51 pi=[43,51)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1216.793457031s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:49 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 51 pg[7.c( v 40'39 (0'0,40'39] local-lis/les=43/44 n=1 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=14.583452225s) [0,1,2] r=1 lpr=51 pi=[43,51)/1 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1216.792724609s@ mbc={}] start_peering_interval up [1,5,3] -> [0,1,2], acting [1,5,3] -> [0,1,2], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> 1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:59:49 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 51 pg[7.c( v 40'39 (0'0,40'39] local-lis/les=43/44 n=1 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=14.582402229s) [0,1,2] r=1 lpr=51 pi=[43,51)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1216.792724609s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:50 np0005625203.localdomain sudo[58228]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yjsrexryiqjfbtsvibupdqljhpdjrusl ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 07:59:50 np0005625203.localdomain sudo[58228]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:59:50 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 6.2 deep-scrub starts
Feb 20 07:59:50 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 6.2 deep-scrub ok
Feb 20 07:59:50 np0005625203.localdomain python3[58230]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6
Feb 20 07:59:50 np0005625203.localdomain sudo[58228]: pam_unix(sudo:session): session closed for user root
Feb 20 07:59:50 np0005625203.localdomain sudo[58244]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bgurpcwfzrwptlwpzlenddjhqcqeaymf ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 07:59:50 np0005625203.localdomain sudo[58244]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:59:50 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Feb 20 07:59:50 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Feb 20 07:59:50 np0005625203.localdomain sudo[58244]: pam_unix(sudo:session): session closed for user root
Feb 20 07:59:51 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 6.19 deep-scrub starts
Feb 20 07:59:51 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 6.19 deep-scrub ok
Feb 20 07:59:51 np0005625203.localdomain sudo[58286]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xxxxhqclfnaaigymaueqnegvddtinxbt ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 07:59:51 np0005625203.localdomain sudo[58286]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:59:51 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 07:59:51 np0005625203.localdomain podman[58289]: 2026-02-20 07:59:51.770662365 +0000 UTC m=+0.102396569 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, container_name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, architecture=x86_64, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, distribution-scope=public, url=https://www.redhat.com, release=1766032510, vendor=Red Hat, Inc., batch=17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, name=rhosp-rhel9/openstack-qdrouterd, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:14Z)
Feb 20 07:59:51 np0005625203.localdomain python3[58288]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step2 config_dir=/var/lib/tripleo-config/container-startup-config/step_2 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False
Feb 20 07:59:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 20 07:59:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Cumulative writes: 5083 writes, 22K keys, 5083 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                                          Cumulative WAL: 5083 writes, 468 syncs, 10.86 writes per sync, written: 0.02 GB, 0.01 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 1835 writes, 6411 keys, 1835 commit groups, 1.0 writes per commit group, ingest: 2.34 MB, 0.00 MB/s
                                                          Interval WAL: 1835 writes, 328 syncs, 5.59 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          
                                                          ** Compaction Stats [default] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      2/0    2.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                                           Sum      2/0    2.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [default] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x558a5e9c22d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [default] **
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x558a5e9c22d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-0] **
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x558a5e9c22d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-1] **
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x558a5e9c22d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-2] **
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.57 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                                           Sum      1/0    1.57 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x558a5e9c22d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-0] **
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x558a5e9c22d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-1] **
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x558a5e9c22d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-2] **
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x558a5e9c3610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-0] **
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x558a5e9c3610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-1] **
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.26 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                                           Sum      1/0    1.26 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x558a5e9c3610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-2] **
                                                          
                                                          ** Compaction Stats [L] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [L] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x558a5e9c22d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [L] **
                                                          
                                                          ** Compaction Stats [P] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [P] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 1200.1 total, 600.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x558a5e9c22d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [P] **
Feb 20 07:59:51 np0005625203.localdomain podman[58289]: 2026-02-20 07:59:51.959272593 +0000 UTC m=+0.291006757 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, architecture=x86_64, config_id=tripleo_step1, release=1766032510, name=rhosp-rhel9/openstack-qdrouterd, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.5, tcib_managed=true, vcs-type=git, version=17.1.13, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com)
Feb 20 07:59:51 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 07:59:52 np0005625203.localdomain podman[58394]: 2026-02-20 07:59:52.282274835 +0000 UTC m=+0.116041939 container create 157c12b4a8f7cf15867de8023292871ed5ec44f724f7385f7852ec2d2f7a5c92 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:31:49Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, container_name=nova_virtqemud_init_logs, url=https://www.redhat.com, version=17.1.13, maintainer=OpenStack TripleO Team, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.buildah.version=1.41.5, com.redhat.component=openstack-nova-libvirt-container, name=rhosp-rhel9/openstack-nova-libvirt, tcib_managed=true, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step2, build-date=2026-01-12T23:31:49Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, distribution-scope=public)
Feb 20 07:59:52 np0005625203.localdomain podman[58394]: 2026-02-20 07:59:52.206194609 +0000 UTC m=+0.039961733 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Feb 20 07:59:52 np0005625203.localdomain podman[58395]: 2026-02-20 07:59:52.21013686 +0000 UTC m=+0.039526030 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Feb 20 07:59:52 np0005625203.localdomain systemd[1]: Started libpod-conmon-157c12b4a8f7cf15867de8023292871ed5ec44f724f7385f7852ec2d2f7a5c92.scope.
Feb 20 07:59:52 np0005625203.localdomain podman[58395]: 2026-02-20 07:59:52.386382956 +0000 UTC m=+0.215772086 container create 47118da3eb23bc2b59d1007ffaae93663452e57df30a576084a760c744d744a2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, batch=17.1_20260112.1, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, config_id=tripleo_step2, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp-rhel9/openstack-nova-compute, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, container_name=nova_compute_init_log, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container)
Feb 20 07:59:52 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 07:59:52 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0db72f640c06816f89c40aafbb6106c7b45c3882ced51667574b7ebe0628b583/merged/var/log/swtpm supports timestamps until 2038 (0x7fffffff)
Feb 20 07:59:52 np0005625203.localdomain podman[58394]: 2026-02-20 07:59:52.408690075 +0000 UTC m=+0.242457169 container init 157c12b4a8f7cf15867de8023292871ed5ec44f724f7385f7852ec2d2f7a5c92 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, container_name=nova_virtqemud_init_logs, architecture=x86_64, managed_by=tripleo_ansible, version=17.1.13, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step2, batch=17.1_20260112.1, distribution-scope=public, name=rhosp-rhel9/openstack-nova-libvirt, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-libvirt-container, build-date=2026-01-12T23:31:49Z, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:31:49Z, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe)
Feb 20 07:59:52 np0005625203.localdomain podman[58394]: 2026-02-20 07:59:52.421923433 +0000 UTC m=+0.255690527 container start 157c12b4a8f7cf15867de8023292871ed5ec44f724f7385f7852ec2d2f7a5c92 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, io.openshift.expose-services=, com.redhat.component=openstack-nova-libvirt-container, distribution-scope=public, managed_by=tripleo_ansible, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step2, vcs-type=git, name=rhosp-rhel9/openstack-nova-libvirt, architecture=x86_64, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, build-date=2026-01-12T23:31:49Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_virtqemud_init_logs, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 07:59:52 np0005625203.localdomain systemd[1]: Started libpod-conmon-47118da3eb23bc2b59d1007ffaae93663452e57df30a576084a760c744d744a2.scope.
Feb 20 07:59:52 np0005625203.localdomain systemd[1]: libpod-157c12b4a8f7cf15867de8023292871ed5ec44f724f7385f7852ec2d2f7a5c92.scope: Deactivated successfully.
Feb 20 07:59:52 np0005625203.localdomain python3[58288]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtqemud_init_logs --conmon-pidfile /run/nova_virtqemud_init_logs.pid --detach=True --env TRIPLEO_DEPLOY_IDENTIFIER=1771573229 --label config_id=tripleo_step2 --label container_name=nova_virtqemud_init_logs --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtqemud_init_logs.log --network none --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --user root --volume /var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 /bin/bash -c chown -R tss:tss /var/log/swtpm
Feb 20 07:59:52 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 07:59:52 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52efa5d66e3554b905ead4ab4d91a04cce6d946ec83d9a5167ea71afe19dd150/merged/var/log/nova supports timestamps until 2038 (0x7fffffff)
Feb 20 07:59:52 np0005625203.localdomain podman[58395]: 2026-02-20 07:59:52.452706103 +0000 UTC m=+0.282095233 container init 47118da3eb23bc2b59d1007ffaae93663452e57df30a576084a760c744d744a2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, managed_by=tripleo_ansible, io.buildah.version=1.41.5, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step2, io.openshift.expose-services=, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, distribution-scope=public, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, version=17.1.13, maintainer=OpenStack TripleO Team, container_name=nova_compute_init_log, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 07:59:52 np0005625203.localdomain podman[58395]: 2026-02-20 07:59:52.463485915 +0000 UTC m=+0.292875045 container start 47118da3eb23bc2b59d1007ffaae93663452e57df30a576084a760c744d744a2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute_init_log, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step2, version=17.1.13, distribution-scope=public, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, vcs-type=git, architecture=x86_64, tcib_managed=true, build-date=2026-01-12T23:32:04Z, url=https://www.redhat.com)
Feb 20 07:59:52 np0005625203.localdomain python3[58288]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_compute_init_log --conmon-pidfile /run/nova_compute_init_log.pid --detach=True --env TRIPLEO_DEPLOY_IDENTIFIER=1771573229 --label config_id=tripleo_step2 --label container_name=nova_compute_init_log --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_compute_init_log.log --network none --privileged=False --user root --volume /var/log/containers/nova:/var/log/nova:z registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 /bin/bash -c chown -R nova:nova /var/log/nova
Feb 20 07:59:52 np0005625203.localdomain systemd[1]: libpod-47118da3eb23bc2b59d1007ffaae93663452e57df30a576084a760c744d744a2.scope: Deactivated successfully.
Feb 20 07:59:52 np0005625203.localdomain podman[58432]: 2026-02-20 07:59:52.512033852 +0000 UTC m=+0.069776963 container died 157c12b4a8f7cf15867de8023292871ed5ec44f724f7385f7852ec2d2f7a5c92 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, container_name=nova_virtqemud_init_logs, version=17.1.13, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-nova-libvirt, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:31:49Z, org.opencontainers.image.created=2026-01-12T23:31:49Z, managed_by=tripleo_ansible, config_id=tripleo_step2, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, tcib_managed=true, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., release=1766032510)
Feb 20 07:59:52 np0005625203.localdomain podman[58452]: 2026-02-20 07:59:52.543163092 +0000 UTC m=+0.058524696 container died 47118da3eb23bc2b59d1007ffaae93663452e57df30a576084a760c744d744a2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, vcs-type=git, distribution-scope=public, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, release=1766032510, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, maintainer=OpenStack TripleO Team, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute_init_log, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2026-01-12T23:32:04Z, tcib_managed=true, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64)
Feb 20 07:59:52 np0005625203.localdomain podman[58453]: 2026-02-20 07:59:52.665017001 +0000 UTC m=+0.184251534 container cleanup 47118da3eb23bc2b59d1007ffaae93663452e57df30a576084a760c744d744a2 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute_init_log, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'command': ['/bin/bash', '-c', 'chown -R nova:nova /var/log/nova'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'user': 'root', 'volumes': ['/var/log/containers/nova:/var/log/nova:z']}, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute_init_log, vendor=Red Hat, Inc., vcs-type=git, name=rhosp-rhel9/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step2, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, version=17.1.13, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Feb 20 07:59:52 np0005625203.localdomain systemd[1]: libpod-conmon-47118da3eb23bc2b59d1007ffaae93663452e57df30a576084a760c744d744a2.scope: Deactivated successfully.
Feb 20 07:59:52 np0005625203.localdomain podman[58431]: 2026-02-20 07:59:52.678335423 +0000 UTC m=+0.238083945 container cleanup 157c12b4a8f7cf15867de8023292871ed5ec44f724f7385f7852ec2d2f7a5c92 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud_init_logs, config_id=tripleo_step2, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, config_data={'command': ['/bin/bash', '-c', 'chown -R tss:tss /var/log/swtpm'], 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'none', 'privileged': True, 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'user': 'root', 'volumes': ['/var/log/containers/libvirt/swtpm:/var/log/swtpm:shared,z']}, com.redhat.component=openstack-nova-libvirt-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, version=17.1.13, build-date=2026-01-12T23:31:49Z, org.opencontainers.image.created=2026-01-12T23:31:49Z, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtqemud_init_logs, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., release=1766032510, tcib_managed=true, name=rhosp-rhel9/openstack-nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, managed_by=tripleo_ansible, architecture=x86_64)
Feb 20 07:59:52 np0005625203.localdomain systemd[1]: libpod-conmon-157c12b4a8f7cf15867de8023292871ed5ec44f724f7385f7852ec2d2f7a5c92.scope: Deactivated successfully.
Feb 20 07:59:52 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0db72f640c06816f89c40aafbb6106c7b45c3882ced51667574b7ebe0628b583-merged.mount: Deactivated successfully.
Feb 20 07:59:52 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-157c12b4a8f7cf15867de8023292871ed5ec44f724f7385f7852ec2d2f7a5c92-userdata-shm.mount: Deactivated successfully.
Feb 20 07:59:52 np0005625203.localdomain podman[58587]: 2026-02-20 07:59:52.941017623 +0000 UTC m=+0.070692370 container create ce9ce47574681de833e19482673b6e8cb14e615dbd0b3422e50e1ffd4675e094 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, vcs-type=git, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, config_id=tripleo_step2, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, batch=17.1_20260112.1, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, build-date=2026-01-12T22:56:19Z, tcib_managed=true, container_name=create_haproxy_wrapper, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 07:59:52 np0005625203.localdomain podman[58586]: 2026-02-20 07:59:52.971275434 +0000 UTC m=+0.106670349 container create 88cfd03f69d1aad87d6addfb99b25cf23654aa9cebf4d5c5e724672e5fa95a9c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:31:49Z, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=create_virtlogd_wrapper, build-date=2026-01-12T23:31:49Z, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, config_id=tripleo_step2, architecture=x86_64, vcs-type=git, io.openshift.expose-services=, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-nova-libvirt)
Feb 20 07:59:52 np0005625203.localdomain systemd[1]: Started libpod-conmon-ce9ce47574681de833e19482673b6e8cb14e615dbd0b3422e50e1ffd4675e094.scope.
Feb 20 07:59:52 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 07:59:52 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20741c15c03b3a0cca3141fc20b59df52b1003984d593ca69a540b93db33da69/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 07:59:53 np0005625203.localdomain systemd[1]: Started libpod-conmon-88cfd03f69d1aad87d6addfb99b25cf23654aa9cebf4d5c5e724672e5fa95a9c.scope.
Feb 20 07:59:53 np0005625203.localdomain podman[58587]: 2026-02-20 07:59:52.905013043 +0000 UTC m=+0.034687790 image pull  registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1
Feb 20 07:59:53 np0005625203.localdomain podman[58586]: 2026-02-20 07:59:52.910099831 +0000 UTC m=+0.045494776 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Feb 20 07:59:53 np0005625203.localdomain podman[58587]: 2026-02-20 07:59:53.011354501 +0000 UTC m=+0.141029238 container init ce9ce47574681de833e19482673b6e8cb14e615dbd0b3422e50e1ffd4675e094 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, container_name=create_haproxy_wrapper, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., build-date=2026-01-12T22:56:19Z, batch=17.1_20260112.1, config_id=tripleo_step2, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, url=https://www.redhat.com, architecture=x86_64, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.openshift.expose-services=, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']})
Feb 20 07:59:53 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 07:59:53 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86b495d7449130304be6335c2cc66bfe5517118781c16cde6afb1fb27ddd4c49/merged/var/lib/container-config-scripts supports timestamps until 2038 (0x7fffffff)
Feb 20 07:59:53 np0005625203.localdomain podman[58587]: 2026-02-20 07:59:53.021306791 +0000 UTC m=+0.150981528 container start ce9ce47574681de833e19482673b6e8cb14e615dbd0b3422e50e1ffd4675e094 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:56:19Z, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, managed_by=tripleo_ansible, vendor=Red Hat, Inc., version=17.1.13, architecture=x86_64, vcs-type=git, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, container_name=create_haproxy_wrapper, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step2, tcib_managed=true, distribution-scope=public, batch=17.1_20260112.1, release=1766032510)
Feb 20 07:59:53 np0005625203.localdomain podman[58587]: 2026-02-20 07:59:53.021621341 +0000 UTC m=+0.151296078 container attach ce9ce47574681de833e19482673b6e8cb14e615dbd0b3422e50e1ffd4675e094 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, version=17.1.13, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step2, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, build-date=2026-01-12T22:56:19Z, container_name=create_haproxy_wrapper, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.41.5, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 07:59:53 np0005625203.localdomain podman[58586]: 2026-02-20 07:59:53.028720992 +0000 UTC m=+0.164115927 container init 88cfd03f69d1aad87d6addfb99b25cf23654aa9cebf4d5c5e724672e5fa95a9c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, name=rhosp-rhel9/openstack-nova-libvirt, container_name=create_virtlogd_wrapper, tcib_managed=true, build-date=2026-01-12T23:31:49Z, com.redhat.component=openstack-nova-libvirt-container, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:31:49Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, batch=17.1_20260112.1, distribution-scope=public, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, version=17.1.13, io.openshift.expose-services=, io.buildah.version=1.41.5, config_id=tripleo_step2, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team)
Feb 20 07:59:53 np0005625203.localdomain podman[58586]: 2026-02-20 07:59:53.036755081 +0000 UTC m=+0.172150006 container start 88cfd03f69d1aad87d6addfb99b25cf23654aa9cebf4d5c5e724672e5fa95a9c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, container_name=create_virtlogd_wrapper, vcs-type=git, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:31:49Z, url=https://www.redhat.com, batch=17.1_20260112.1, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, name=rhosp-rhel9/openstack-nova-libvirt, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, release=1766032510, org.opencontainers.image.created=2026-01-12T23:31:49Z, maintainer=OpenStack TripleO Team, tcib_managed=true, config_id=tripleo_step2, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.5)
Feb 20 07:59:53 np0005625203.localdomain podman[58586]: 2026-02-20 07:59:53.037301039 +0000 UTC m=+0.172695964 container attach 88cfd03f69d1aad87d6addfb99b25cf23654aa9cebf4d5c5e724672e5fa95a9c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, release=1766032510, io.openshift.expose-services=, container_name=create_virtlogd_wrapper, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, io.buildah.version=1.41.5, com.redhat.component=openstack-nova-libvirt-container, batch=17.1_20260112.1, version=17.1.13, build-date=2026-01-12T23:31:49Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, config_id=tripleo_step2)
Feb 20 07:59:53 np0005625203.localdomain sshd[57956]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 27577 ssh2 [preauth]
Feb 20 07:59:53 np0005625203.localdomain sshd[57956]: Disconnecting authenticating user root 185.246.128.171 port 27577: Too many authentication failures [preauth]
Feb 20 07:59:53 np0005625203.localdomain sshd[58631]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 07:59:54 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 6.d scrub starts
Feb 20 07:59:54 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 6.d scrub ok
Feb 20 07:59:54 np0005625203.localdomain ovs-vsctl[58691]: ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
Feb 20 07:59:55 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Feb 20 07:59:55 np0005625203.localdomain systemd[1]: libpod-88cfd03f69d1aad87d6addfb99b25cf23654aa9cebf4d5c5e724672e5fa95a9c.scope: Deactivated successfully.
Feb 20 07:59:55 np0005625203.localdomain systemd[1]: libpod-88cfd03f69d1aad87d6addfb99b25cf23654aa9cebf4d5c5e724672e5fa95a9c.scope: Consumed 2.086s CPU time.
Feb 20 07:59:55 np0005625203.localdomain podman[58586]: 2026-02-20 07:59:55.151398637 +0000 UTC m=+2.286793612 container died 88cfd03f69d1aad87d6addfb99b25cf23654aa9cebf4d5c5e724672e5fa95a9c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., version=17.1.13, release=1766032510, org.opencontainers.image.created=2026-01-12T23:31:49Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-libvirt-container, url=https://www.redhat.com, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.buildah.version=1.41.5, config_id=tripleo_step2, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, container_name=create_virtlogd_wrapper, vcs-type=git, build-date=2026-01-12T23:31:49Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe)
Feb 20 07:59:55 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-88cfd03f69d1aad87d6addfb99b25cf23654aa9cebf4d5c5e724672e5fa95a9c-userdata-shm.mount: Deactivated successfully.
Feb 20 07:59:55 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-86b495d7449130304be6335c2cc66bfe5517118781c16cde6afb1fb27ddd4c49-merged.mount: Deactivated successfully.
Feb 20 07:59:55 np0005625203.localdomain podman[58840]: 2026-02-20 07:59:55.248654723 +0000 UTC m=+0.084434458 container cleanup 88cfd03f69d1aad87d6addfb99b25cf23654aa9cebf4d5c5e724672e5fa95a9c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=create_virtlogd_wrapper, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:31:49Z, container_name=create_virtlogd_wrapper, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, url=https://www.redhat.com, build-date=2026-01-12T23:31:49Z, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, batch=17.1_20260112.1, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step2, config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']}, com.redhat.component=openstack-nova-libvirt-container, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-libvirt, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 07:59:55 np0005625203.localdomain systemd[1]: libpod-conmon-88cfd03f69d1aad87d6addfb99b25cf23654aa9cebf4d5c5e724672e5fa95a9c.scope: Deactivated successfully.
Feb 20 07:59:55 np0005625203.localdomain python3[58288]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name create_virtlogd_wrapper --cgroupns=host --conmon-pidfile /run/create_virtlogd_wrapper.pid --detach=False --env TRIPLEO_DEPLOY_IDENTIFIER=1771573229 --label config_id=tripleo_step2 --label container_name=create_virtlogd_wrapper --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::nova::virtlogd_wrapper'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/create_virtlogd_wrapper.log --network host --pid host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /var/lib/container-config-scripts:/var/lib/container-config-scripts:shared,z registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 /container_puppet_apply.sh 4 file include ::tripleo::profile::base::nova::virtlogd_wrapper
Feb 20 07:59:55 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Feb 20 07:59:55 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 53 pg[7.d( v 40'39 (0'0,40'39] local-lis/les=45/46 n=1 ec=43/36 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=14.477966309s) [2,0,4] r=2 lpr=53 pi=[45,53)/1 crt=40'39 mlcod 0'0 active pruub 1218.504150391s@ mbc={255={}}] start_peering_interval up [4,2,3] -> [2,0,4], acting [4,2,3] -> [2,0,4], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:59:55 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 53 pg[7.d( v 40'39 (0'0,40'39] local-lis/les=45/46 n=1 ec=43/36 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=14.477663040s) [2,0,4] r=2 lpr=53 pi=[45,53)/1 crt=40'39 mlcod 0'0 unknown NOTIFY pruub 1218.504150391s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:55 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 53 pg[7.5( v 40'39 (0'0,40'39] local-lis/les=45/46 n=2 ec=43/36 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=14.476867676s) [2,0,4] r=2 lpr=53 pi=[45,53)/1 crt=40'39 mlcod 0'0 active pruub 1218.503540039s@ mbc={255={}}] start_peering_interval up [4,2,3] -> [2,0,4], acting [4,2,3] -> [2,0,4], acting_primary 4 -> 2, up_primary 4 -> 2, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:59:55 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 53 pg[7.5( v 40'39 (0'0,40'39] local-lis/les=45/46 n=2 ec=43/36 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=14.476719856s) [2,0,4] r=2 lpr=53 pi=[45,53)/1 crt=40'39 mlcod 0'0 unknown NOTIFY pruub 1218.503540039s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:56 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Feb 20 07:59:56 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Feb 20 07:59:56 np0005625203.localdomain systemd[1]: libpod-ce9ce47574681de833e19482673b6e8cb14e615dbd0b3422e50e1ffd4675e094.scope: Deactivated successfully.
Feb 20 07:59:56 np0005625203.localdomain systemd[1]: libpod-ce9ce47574681de833e19482673b6e8cb14e615dbd0b3422e50e1ffd4675e094.scope: Consumed 2.150s CPU time.
Feb 20 07:59:56 np0005625203.localdomain podman[58587]: 2026-02-20 07:59:56.462615569 +0000 UTC m=+3.592290306 container died ce9ce47574681de833e19482673b6e8cb14e615dbd0b3422e50e1ffd4675e094 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step2, io.openshift.expose-services=, build-date=2026-01-12T22:56:19Z, container_name=create_haproxy_wrapper, batch=17.1_20260112.1, version=17.1.13, architecture=x86_64, distribution-scope=public, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']}, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Feb 20 07:59:56 np0005625203.localdomain systemd[1]: tmp-crun.R4WBXP.mount: Deactivated successfully.
Feb 20 07:59:56 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ce9ce47574681de833e19482673b6e8cb14e615dbd0b3422e50e1ffd4675e094-userdata-shm.mount: Deactivated successfully.
Feb 20 07:59:56 np0005625203.localdomain podman[58882]: 2026-02-20 07:59:56.563131776 +0000 UTC m=+0.085870642 container cleanup ce9ce47574681de833e19482673b6e8cb14e615dbd0b3422e50e1ffd4675e094 (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=create_haproxy_wrapper, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, build-date=2026-01-12T22:56:19Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, release=1766032510, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, config_id=tripleo_step2, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=create_haproxy_wrapper, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']})
Feb 20 07:59:56 np0005625203.localdomain systemd[1]: libpod-conmon-ce9ce47574681de833e19482673b6e8cb14e615dbd0b3422e50e1ffd4675e094.scope: Deactivated successfully.
Feb 20 07:59:56 np0005625203.localdomain python3[58288]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name create_haproxy_wrapper --conmon-pidfile /run/create_haproxy_wrapper.pid --detach=False --label config_id=tripleo_step2 --label container_name=create_haproxy_wrapper --label managed_by=tripleo_ansible --label config_data={'command': ['/container_puppet_apply.sh', '4', 'file', 'include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers'], 'detach': False, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'start_order': 1, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/create_haproxy_wrapper.log --network host --pid host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /var/lib/neutron:/var/lib/neutron:shared,z registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 /container_puppet_apply.sh 4 file include ::tripleo::profile::base::neutron::ovn_metadata_agent_wrappers
Feb 20 07:59:56 np0005625203.localdomain sudo[58286]: pam_unix(sudo:session): session closed for user root
Feb 20 07:59:57 np0005625203.localdomain sudo[58936]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jvzygcvydzzvomhsmudqxhskyyduqnjt ; /usr/bin/python3
Feb 20 07:59:57 np0005625203.localdomain sudo[58936]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:59:57 np0005625203.localdomain python3[58938]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks2.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:59:57 np0005625203.localdomain sudo[58936]: pam_unix(sudo:session): session closed for user root
Feb 20 07:59:57 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-20741c15c03b3a0cca3141fc20b59df52b1003984d593ca69a540b93db33da69-merged.mount: Deactivated successfully.
Feb 20 07:59:57 np0005625203.localdomain sudo[58984]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mdwpczswcncldqvthrgxnnkusjnnpynq ; /usr/bin/python3
Feb 20 07:59:57 np0005625203.localdomain sudo[58984]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:59:57 np0005625203.localdomain sudo[58984]: pam_unix(sudo:session): session closed for user root
Feb 20 07:59:58 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 55 pg[7.6( v 40'39 (0'0,40'39] local-lis/les=47/48 n=2 ec=43/36 lis/c=47/47 les/c/f=48/48/0 sis=55 pruub=14.608527184s) [0,4,5] r=-1 lpr=55 pi=[47,55)/1 luod=0'0 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1225.046752930s@ mbc={}] start_peering_interval up [3,5,1] -> [0,4,5], acting [3,5,1] -> [0,4,5], acting_primary 3 -> 0, up_primary 3 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:59:58 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 55 pg[7.6( v 40'39 (0'0,40'39] local-lis/les=47/48 n=2 ec=43/36 lis/c=47/47 les/c/f=48/48/0 sis=55 pruub=14.608450890s) [0,4,5] r=-1 lpr=55 pi=[47,55)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1225.046752930s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:58 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 55 pg[7.e( v 40'39 (0'0,40'39] local-lis/les=47/48 n=1 ec=43/36 lis/c=47/47 les/c/f=48/48/0 sis=55 pruub=14.608015060s) [0,4,5] r=-1 lpr=55 pi=[47,55)/1 luod=0'0 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1225.046630859s@ mbc={}] start_peering_interval up [3,5,1] -> [0,4,5], acting [3,5,1] -> [0,4,5], acting_primary 3 -> 0, up_primary 3 -> 0, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 07:59:58 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 55 pg[7.e( v 40'39 (0'0,40'39] local-lis/les=47/48 n=1 ec=43/36 lis/c=47/47 les/c/f=48/48/0 sis=55 pruub=14.607971191s) [0,4,5] r=-1 lpr=55 pi=[47,55)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1225.046630859s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:58 np0005625203.localdomain sudo[59027]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-psvbculphcvbpvfbywmyaldvuacplhlu ; /usr/bin/python3
Feb 20 07:59:58 np0005625203.localdomain sudo[59027]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:59:58 np0005625203.localdomain sudo[59027]: pam_unix(sudo:session): session closed for user root
Feb 20 07:59:58 np0005625203.localdomain sudo[59057]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-spfrpxqgegkqqvsrjfdnhnxoghhcapwv ; /usr/bin/python3
Feb 20 07:59:58 np0005625203.localdomain sudo[59057]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:59:58 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 6.a scrub starts
Feb 20 07:59:58 np0005625203.localdomain python3[59059]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks2.json short_hostname=np0005625203 step=2 update_config_hash_only=False
Feb 20 07:59:58 np0005625203.localdomain sudo[59057]: pam_unix(sudo:session): session closed for user root
Feb 20 07:59:58 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 6.a scrub ok
Feb 20 07:59:59 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 55 pg[7.6( empty local-lis/les=0/0 n=0 ec=43/36 lis/c=47/47 les/c/f=48/48/0 sis=55) [0,4,5] r=1 lpr=55 pi=[47,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:59 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 55 pg[7.e( empty local-lis/les=0/0 n=0 ec=43/36 lis/c=47/47 les/c/f=48/48/0 sis=55) [0,4,5] r=1 lpr=55 pi=[47,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Feb 20 07:59:59 np0005625203.localdomain sudo[59073]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vawxctiticfuhdmnruqhqazzpmmwhdet ; /usr/bin/python3
Feb 20 07:59:59 np0005625203.localdomain sudo[59073]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:59:59 np0005625203.localdomain python3[59075]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 07:59:59 np0005625203.localdomain sudo[59073]: pam_unix(sudo:session): session closed for user root
Feb 20 07:59:59 np0005625203.localdomain sudo[59089]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ydpitaltkqiyiugpmqtugizgehcmhrwk ; /usr/bin/python3
Feb 20 07:59:59 np0005625203.localdomain sudo[59089]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 07:59:59 np0005625203.localdomain python3[59091]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_2 config_pattern=container-puppet-*.json config_overrides={} debug=True
Feb 20 07:59:59 np0005625203.localdomain sudo[59089]: pam_unix(sudo:session): session closed for user root
Feb 20 08:00:00 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 57 pg[7.7( v 40'39 (0'0,40'39] local-lis/les=49/50 n=1 ec=43/36 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=14.602295876s) [1,5,3] r=-1 lpr=57 pi=[49,57)/1 luod=0'0 crt=40'39 mlcod 0'0 active pruub 1222.738647461s@ mbc={}] start_peering_interval up [3,4,2] -> [1,5,3], acting [3,4,2] -> [1,5,3], acting_primary 3 -> 1, up_primary 3 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 08:00:00 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 57 pg[7.7( v 40'39 (0'0,40'39] local-lis/les=49/50 n=1 ec=43/36 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=14.602244377s) [1,5,3] r=-1 lpr=57 pi=[49,57)/1 crt=40'39 mlcod 0'0 unknown NOTIFY pruub 1222.738647461s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 08:00:00 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 57 pg[7.f( v 40'39 (0'0,40'39] local-lis/les=49/50 n=1 ec=43/36 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=14.602050781s) [1,5,3] r=-1 lpr=57 pi=[49,57)/1 luod=0'0 crt=40'39 mlcod 0'0 active pruub 1222.738647461s@ mbc={}] start_peering_interval up [3,4,2] -> [1,5,3], acting [3,4,2] -> [1,5,3], acting_primary 3 -> 1, up_primary 3 -> 1, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 08:00:00 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 57 pg[7.f( v 40'39 (0'0,40'39] local-lis/les=49/50 n=1 ec=43/36 lis/c=49/49 les/c/f=50/50/0 sis=57 pruub=14.602003098s) [1,5,3] r=-1 lpr=57 pi=[49,57)/1 crt=40'39 mlcod 0'0 unknown NOTIFY pruub 1222.738647461s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 08:00:00 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 57 pg[7.f( empty local-lis/les=0/0 n=0 ec=43/36 lis/c=49/49 les/c/f=50/50/0 sis=57) [1,5,3] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 08:00:00 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 57 pg[7.7( empty local-lis/les=0/0 n=0 ec=43/36 lis/c=49/49 les/c/f=50/50/0 sis=57) [1,5,3] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 08:00:00 np0005625203.localdomain sshd[58631]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 7221 ssh2 [preauth]
Feb 20 08:00:00 np0005625203.localdomain sshd[58631]: Disconnecting authenticating user root 185.246.128.171 port 7221: Too many authentication failures [preauth]
Feb 20 08:00:01 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 58 pg[7.7( v 40'39 lc 38'21 (0'0,40'39] local-lis/les=57/58 n=1 ec=43/36 lis/c=49/49 les/c/f=50/50/0 sis=57) [1,5,3] r=0 lpr=57 pi=[49,57)/1 crt=40'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(1+2)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 08:00:01 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 58 pg[7.f( v 40'39 lc 38'1 (0'0,40'39] local-lis/les=57/58 n=1 ec=43/36 lis/c=49/49 les/c/f=50/50/0 sis=57) [1,5,3] r=0 lpr=57 pi=[49,57)/1 crt=40'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(1+2)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 08:00:02 np0005625203.localdomain sshd[59092]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:00:02 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 59 pg[7.8( v 40'39 (0'0,40'39] local-lis/les=43/44 n=0 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=59 pruub=10.261906624s) [3,4,5] r=-1 lpr=59 pi=[43,59)/1 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1224.794433594s@ mbc={}] start_peering_interval up [1,5,3] -> [3,4,5], acting [1,5,3] -> [3,4,5], acting_primary 1 -> 3, up_primary 1 -> 3, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 08:00:02 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 59 pg[7.8( v 40'39 (0'0,40'39] local-lis/les=43/44 n=0 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=59 pruub=10.261727333s) [3,4,5] r=-1 lpr=59 pi=[43,59)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1224.794433594s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 08:00:02 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Feb 20 08:00:02 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Feb 20 08:00:03 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 59 pg[7.8( empty local-lis/les=0/0 n=0 ec=43/36 lis/c=43/43 les/c/f=44/44/0 sis=59) [3,4,5] r=1 lpr=59 pi=[43,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Feb 20 08:00:03 np0005625203.localdomain sshd[59094]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:00:04 np0005625203.localdomain sshd[59094]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:00:04 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Feb 20 08:00:04 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Feb 20 08:00:05 np0005625203.localdomain sshd[59096]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:00:05 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Feb 20 08:00:05 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Feb 20 08:00:06 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Feb 20 08:00:06 np0005625203.localdomain sshd[59096]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:00:06 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Feb 20 08:00:07 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Feb 20 08:00:07 np0005625203.localdomain sshd[59092]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 59624 ssh2 [preauth]
Feb 20 08:00:07 np0005625203.localdomain sshd[59092]: Disconnecting authenticating user root 185.246.128.171 port 59624: Too many authentication failures [preauth]
Feb 20 08:00:07 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Feb 20 08:00:07 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 6.15 scrub starts
Feb 20 08:00:07 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 6.15 scrub ok
Feb 20 08:00:08 np0005625203.localdomain sshd[59098]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:00:08 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Feb 20 08:00:08 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Feb 20 08:00:09 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 61 pg[7.9( v 40'39 (0'0,40'39] local-lis/les=45/46 n=1 ec=43/36 lis/c=45/45 les/c/f=46/46/0 sis=61 pruub=8.982342720s) [0,2,4] r=2 lpr=61 pi=[45,61)/1 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1226.507934570s@ mbc={}] start_peering_interval up [4,2,3] -> [0,2,4], acting [4,2,3] -> [0,2,4], acting_primary 4 -> 0, up_primary 4 -> 0, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 08:00:09 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 61 pg[7.9( v 40'39 (0'0,40'39] local-lis/les=45/46 n=1 ec=43/36 lis/c=45/45 les/c/f=46/46/0 sis=61 pruub=8.982144356s) [0,2,4] r=2 lpr=61 pi=[45,61)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1226.507934570s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 08:00:10 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 4.18 deep-scrub starts
Feb 20 08:00:10 np0005625203.localdomain ceph-osd[32924]: log_channel(cluster) log [DBG] : 4.18 deep-scrub ok
Feb 20 08:00:13 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Feb 20 08:00:13 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Feb 20 08:00:14 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 7.f scrub starts
Feb 20 08:00:14 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 7.f scrub ok
Feb 20 08:00:15 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 63 pg[7.a( v 40'39 (0'0,40'39] local-lis/les=47/48 n=1 ec=43/36 lis/c=47/47 les/c/f=48/48/0 sis=63 pruub=13.062070847s) [2,0,4] r=-1 lpr=63 pi=[47,63)/1 luod=0'0 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1241.047241211s@ mbc={}] start_peering_interval up [3,5,1] -> [2,0,4], acting [3,5,1] -> [2,0,4], acting_primary 3 -> 2, up_primary 3 -> 2, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 08:00:15 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 63 pg[7.a( v 40'39 (0'0,40'39] local-lis/les=47/48 n=1 ec=43/36 lis/c=47/47 les/c/f=48/48/0 sis=63 pruub=13.061990738s) [2,0,4] r=-1 lpr=63 pi=[47,63)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 1241.047241211s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 08:00:16 np0005625203.localdomain sshd[59098]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 34765 ssh2 [preauth]
Feb 20 08:00:16 np0005625203.localdomain sshd[59098]: Disconnecting authenticating user root 185.246.128.171 port 34765: Too many authentication failures [preauth]
Feb 20 08:00:16 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 63 pg[7.a( empty local-lis/les=0/0 n=0 ec=43/36 lis/c=47/47 les/c/f=48/48/0 sis=63) [2,0,4] r=2 lpr=63 pi=[47,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Feb 20 08:00:17 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 65 pg[7.b( v 40'39 (0'0,40'39] local-lis/les=49/50 n=1 ec=43/36 lis/c=49/49 les/c/f=50/50/0 sis=65 pruub=13.057934761s) [3,1,2] r=-1 lpr=65 pi=[49,65)/1 luod=0'0 crt=40'39 mlcod 0'0 active pruub 1238.744140625s@ mbc={}] start_peering_interval up [3,4,2] -> [3,1,2], acting [3,4,2] -> [3,1,2], acting_primary 3 -> 3, up_primary 3 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 08:00:17 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 65 pg[7.b( v 40'39 (0'0,40'39] local-lis/les=49/50 n=1 ec=43/36 lis/c=49/49 les/c/f=50/50/0 sis=65 pruub=13.057872772s) [3,1,2] r=-1 lpr=65 pi=[49,65)/1 crt=40'39 mlcod 0'0 unknown NOTIFY pruub 1238.744140625s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 08:00:18 np0005625203.localdomain sshd[59100]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:00:19 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 65 pg[7.b( empty local-lis/les=0/0 n=0 ec=43/36 lis/c=49/49 les/c/f=50/50/0 sis=65) [3,1,2] r=1 lpr=65 pi=[49,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Feb 20 08:00:19 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 67 pg[7.c( v 40'39 (0'0,40'39] local-lis/les=51/52 n=1 ec=43/36 lis/c=51/51 les/c/f=52/52/0 sis=67 pruub=11.165537834s) [1,3,2] r=0 lpr=67 pi=[51,67)/1 luod=0'0 crt=40'39 lcod 0'0 mlcod 0'0 active pruub 1243.253540039s@ mbc={}] start_peering_interval up [0,1,2] -> [1,3,2], acting [0,1,2] -> [1,3,2], acting_primary 0 -> 1, up_primary 0 -> 1, role 1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 08:00:19 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 67 pg[7.c( v 40'39 (0'0,40'39] local-lis/les=51/52 n=1 ec=43/36 lis/c=51/51 les/c/f=52/52/0 sis=67 pruub=11.165537834s) [1,3,2] r=0 lpr=67 pi=[51,67)/1 crt=40'39 lcod 0'0 mlcod 0'0 unknown pruub 1243.253540039s@ mbc={}] state<Start>: transitioning to Primary
Feb 20 08:00:20 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 68 pg[7.c( v 40'39 (0'0,40'39] local-lis/les=67/68 n=1 ec=43/36 lis/c=51/51 les/c/f=52/52/0 sis=67) [1,3,2] r=0 lpr=67 pi=[51,67)/1 crt=40'39 lcod 0'0 mlcod 0'0 active+degraded mbc={255={(2+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 08:00:21 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 69 pg[7.d( v 40'39 (0'0,40'39] local-lis/les=53/54 n=1 ec=43/36 lis/c=53/53 les/c/f=54/54/0 sis=69 pruub=15.305843353s) [1,3,5] r=-1 lpr=69 pi=[53,69)/1 luod=0'0 crt=40'39 mlcod 0'0 active pruub 1245.101562500s@ mbc={}] start_peering_interval up [2,0,4] -> [1,3,5], acting [2,0,4] -> [1,3,5], acting_primary 2 -> 1, up_primary 2 -> 1, role 2 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 08:00:21 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 69 pg[7.d( v 40'39 (0'0,40'39] local-lis/les=53/54 n=1 ec=43/36 lis/c=53/53 les/c/f=54/54/0 sis=69 pruub=15.305624008s) [1,3,5] r=-1 lpr=69 pi=[53,69)/1 crt=40'39 mlcod 0'0 unknown NOTIFY pruub 1245.101562500s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 08:00:21 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 69 pg[7.d( empty local-lis/les=0/0 n=0 ec=43/36 lis/c=53/53 les/c/f=54/54/0 sis=69) [1,3,5] r=0 lpr=69 pi=[53,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Feb 20 08:00:22 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:00:22 np0005625203.localdomain podman[59102]: 2026-02-20 08:00:22.781952695 +0000 UTC m=+0.092375244 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:14Z, batch=17.1_20260112.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, release=1766032510, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, name=rhosp-rhel9/openstack-qdrouterd, version=17.1.13, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:00:23 np0005625203.localdomain sshd[59100]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 32585 ssh2 [preauth]
Feb 20 08:00:23 np0005625203.localdomain podman[59102]: 2026-02-20 08:00:23.002423064 +0000 UTC m=+0.312845583 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, build-date=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vcs-type=git, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, vendor=Red Hat, Inc., release=1766032510)
Feb 20 08:00:23 np0005625203.localdomain sshd[59100]: Disconnecting authenticating user root 185.246.128.171 port 32585: Too many authentication failures [preauth]
Feb 20 08:00:23 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:00:23 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 70 pg[7.d( v 40'39 lc 38'13 (0'0,40'39] local-lis/les=69/70 n=1 ec=43/36 lis/c=53/53 les/c/f=54/54/0 sis=69) [1,3,5] r=0 lpr=69 pi=[53,69)/1 crt=40'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+3)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Feb 20 08:00:24 np0005625203.localdomain sshd[59133]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:00:24 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 7.d scrub starts
Feb 20 08:00:24 np0005625203.localdomain ceph-osd[31970]: log_channel(cluster) log [DBG] : 7.d scrub ok
Feb 20 08:00:29 np0005625203.localdomain sshd[59133]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 4462 ssh2 [preauth]
Feb 20 08:00:29 np0005625203.localdomain sshd[59133]: Disconnecting authenticating user root 185.246.128.171 port 4462: Too many authentication failures [preauth]
Feb 20 08:00:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 71 pg[7.e( v 40'39 (0'0,40'39] local-lis/les=55/56 n=1 ec=43/36 lis/c=55/55 les/c/f=56/56/0 sis=71 pruub=9.151802063s) [3,5,1] r=-1 lpr=71 pi=[55,71)/1 luod=0'0 crt=40'39 mlcod 0'0 active pruub 1247.130371094s@ mbc={}] start_peering_interval up [0,4,5] -> [3,5,1], acting [0,4,5] -> [3,5,1], acting_primary 0 -> 3, up_primary 0 -> 3, role 1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 08:00:29 np0005625203.localdomain ceph-osd[32924]: osd.4 pg_epoch: 71 pg[7.e( v 40'39 (0'0,40'39] local-lis/les=55/56 n=1 ec=43/36 lis/c=55/55 les/c/f=56/56/0 sis=71 pruub=9.151257515s) [3,5,1] r=-1 lpr=71 pi=[55,71)/1 crt=40'39 mlcod 0'0 unknown NOTIFY pruub 1247.130371094s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 08:00:30 np0005625203.localdomain sshd[59135]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:00:31 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 71 pg[7.e( empty local-lis/les=0/0 n=0 ec=43/36 lis/c=55/55 les/c/f=56/56/0 sis=71) [3,5,1] r=2 lpr=71 pi=[55,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Stray
Feb 20 08:00:31 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 73 pg[7.f( v 40'39 (0'0,40'39] local-lis/les=57/58 n=3 ec=43/36 lis/c=57/57 les/c/f=58/58/0 sis=73 pruub=9.194403648s) [0,5,1] r=2 lpr=73 pi=[57,73)/1 crt=40'39 mlcod 0'0 active pruub 1253.554931641s@ mbc={255={}}] start_peering_interval up [1,5,3] -> [0,5,1], acting [1,5,3] -> [0,5,1], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> 2, features acting 4540138322906710015 upacting 4540138322906710015
Feb 20 08:00:31 np0005625203.localdomain ceph-osd[31970]: osd.1 pg_epoch: 73 pg[7.f( v 40'39 (0'0,40'39] local-lis/les=57/58 n=3 ec=43/36 lis/c=57/57 les/c/f=58/58/0 sis=73 pruub=9.194298744s) [0,5,1] r=2 lpr=73 pi=[57,73)/1 crt=40'39 mlcod 0'0 unknown NOTIFY pruub 1253.554931641s@ mbc={}] state<Start>: transitioning to Stray
Feb 20 08:00:35 np0005625203.localdomain sshd[59137]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:00:35 np0005625203.localdomain sshd[59137]: Invalid user ubuntu from 102.211.152.28 port 34932
Feb 20 08:00:35 np0005625203.localdomain sshd[59137]: Received disconnect from 102.211.152.28 port 34932:11: Bye Bye [preauth]
Feb 20 08:00:35 np0005625203.localdomain sshd[59137]: Disconnected from invalid user ubuntu 102.211.152.28 port 34932 [preauth]
Feb 20 08:00:37 np0005625203.localdomain sshd[59135]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 43427 ssh2 [preauth]
Feb 20 08:00:37 np0005625203.localdomain sshd[59135]: Disconnecting authenticating user root 185.246.128.171 port 43427: Too many authentication failures [preauth]
Feb 20 08:00:37 np0005625203.localdomain sudo[59139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:00:37 np0005625203.localdomain sudo[59139]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:00:37 np0005625203.localdomain sudo[59139]: pam_unix(sudo:session): session closed for user root
Feb 20 08:00:38 np0005625203.localdomain sudo[59154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 08:00:38 np0005625203.localdomain sudo[59154]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:00:38 np0005625203.localdomain sudo[59154]: pam_unix(sudo:session): session closed for user root
Feb 20 08:00:39 np0005625203.localdomain sudo[59201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 08:00:39 np0005625203.localdomain sudo[59201]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:00:39 np0005625203.localdomain sudo[59201]: pam_unix(sudo:session): session closed for user root
Feb 20 08:00:40 np0005625203.localdomain sshd[59216]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:00:43 np0005625203.localdomain sshd[59216]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 41849 ssh2 [preauth]
Feb 20 08:00:43 np0005625203.localdomain sshd[59216]: Disconnecting authenticating user root 185.246.128.171 port 41849: Too many authentication failures [preauth]
Feb 20 08:00:43 np0005625203.localdomain sshd[59218]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:00:45 np0005625203.localdomain sshd[59218]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 61277 ssh2 [preauth]
Feb 20 08:00:45 np0005625203.localdomain sshd[59218]: Disconnecting authenticating user root 185.246.128.171 port 61277: Too many authentication failures [preauth]
Feb 20 08:00:45 np0005625203.localdomain sshd[59220]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:00:48 np0005625203.localdomain sshd[59222]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:00:49 np0005625203.localdomain sshd[59222]: Invalid user claude from 189.190.2.14 port 47194
Feb 20 08:00:49 np0005625203.localdomain sshd[59222]: Received disconnect from 189.190.2.14 port 47194:11: Bye Bye [preauth]
Feb 20 08:00:49 np0005625203.localdomain sshd[59222]: Disconnected from invalid user claude 189.190.2.14 port 47194 [preauth]
Feb 20 08:00:52 np0005625203.localdomain sshd[59224]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:00:52 np0005625203.localdomain sshd[59224]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:00:53 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:00:53 np0005625203.localdomain podman[59226]: 2026-02-20 08:00:53.77278262 +0000 UTC m=+0.090643701 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, name=rhosp-rhel9/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=metrics_qdr, build-date=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, maintainer=OpenStack TripleO Team, distribution-scope=public, version=17.1.13)
Feb 20 08:00:53 np0005625203.localdomain podman[59226]: 2026-02-20 08:00:53.963788732 +0000 UTC m=+0.281649793 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, io.buildah.version=1.41.5, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step1, org.opencontainers.image.created=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc.)
Feb 20 08:00:53 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:00:55 np0005625203.localdomain sshd[59220]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 14057 ssh2 [preauth]
Feb 20 08:00:55 np0005625203.localdomain sshd[59220]: Disconnecting authenticating user root 185.246.128.171 port 14057: Too many authentication failures [preauth]
Feb 20 08:00:55 np0005625203.localdomain sshd[59256]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:00:58 np0005625203.localdomain sshd[59256]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 13310 ssh2 [preauth]
Feb 20 08:00:58 np0005625203.localdomain sshd[59256]: Disconnecting authenticating user root 185.246.128.171 port 13310: Too many authentication failures [preauth]
Feb 20 08:00:59 np0005625203.localdomain sshd[59258]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:01:01 np0005625203.localdomain CROND[59261]: (root) CMD (run-parts /etc/cron.hourly)
Feb 20 08:01:01 np0005625203.localdomain run-parts[59264]: (/etc/cron.hourly) starting 0anacron
Feb 20 08:01:01 np0005625203.localdomain run-parts[59270]: (/etc/cron.hourly) finished 0anacron
Feb 20 08:01:01 np0005625203.localdomain CROND[59260]: (root) CMDEND (run-parts /etc/cron.hourly)
Feb 20 08:01:05 np0005625203.localdomain sshd[59258]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 35710 ssh2 [preauth]
Feb 20 08:01:05 np0005625203.localdomain sshd[59258]: Disconnecting authenticating user root 185.246.128.171 port 35710: Too many authentication failures [preauth]
Feb 20 08:01:06 np0005625203.localdomain sshd[59271]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:01:10 np0005625203.localdomain sshd[59271]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 17620 ssh2 [preauth]
Feb 20 08:01:10 np0005625203.localdomain sshd[59271]: Disconnecting authenticating user root 185.246.128.171 port 17620: Too many authentication failures [preauth]
Feb 20 08:01:11 np0005625203.localdomain sshd[59273]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:01:15 np0005625203.localdomain sshd[59273]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 49231 ssh2 [preauth]
Feb 20 08:01:15 np0005625203.localdomain sshd[59273]: Disconnecting authenticating user root 185.246.128.171 port 49231: Too many authentication failures [preauth]
Feb 20 08:01:16 np0005625203.localdomain sshd[59275]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:01:17 np0005625203.localdomain sshd[59275]: Invalid user deployuser from 40.81.244.142 port 58904
Feb 20 08:01:17 np0005625203.localdomain sshd[59275]: Received disconnect from 40.81.244.142 port 58904:11: Bye Bye [preauth]
Feb 20 08:01:17 np0005625203.localdomain sshd[59275]: Disconnected from invalid user deployuser 40.81.244.142 port 58904 [preauth]
Feb 20 08:01:17 np0005625203.localdomain sshd[59277]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:01:22 np0005625203.localdomain sshd[59279]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:01:23 np0005625203.localdomain sshd[59279]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:01:24 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:01:24 np0005625203.localdomain podman[59281]: 2026-02-20 08:01:24.773504342 +0000 UTC m=+0.089417083 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.5, release=1766032510, config_id=tripleo_step1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, build-date=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:01:24 np0005625203.localdomain podman[59281]: 2026-02-20 08:01:24.972219344 +0000 UTC m=+0.288131985 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.41.5, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1766032510, build-date=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, config_id=tripleo_step1, io.openshift.expose-services=, container_name=metrics_qdr, url=https://www.redhat.com)
Feb 20 08:01:24 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:01:26 np0005625203.localdomain sshd[59277]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 28621 ssh2 [preauth]
Feb 20 08:01:26 np0005625203.localdomain sshd[59277]: Disconnecting authenticating user root 185.246.128.171 port 28621: Too many authentication failures [preauth]
Feb 20 08:01:27 np0005625203.localdomain sshd[59312]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:01:32 np0005625203.localdomain sshd[59312]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 29785 ssh2 [preauth]
Feb 20 08:01:32 np0005625203.localdomain sshd[59312]: Disconnecting authenticating user root 185.246.128.171 port 29785: Too many authentication failures [preauth]
Feb 20 08:01:36 np0005625203.localdomain sshd[59314]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:01:39 np0005625203.localdomain sudo[59316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:01:39 np0005625203.localdomain sudo[59316]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:01:39 np0005625203.localdomain sudo[59316]: pam_unix(sudo:session): session closed for user root
Feb 20 08:01:39 np0005625203.localdomain sudo[59331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Feb 20 08:01:39 np0005625203.localdomain sudo[59331]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:01:40 np0005625203.localdomain systemd[1]: tmp-crun.9W5l2U.mount: Deactivated successfully.
Feb 20 08:01:40 np0005625203.localdomain podman[59419]: 2026-02-20 08:01:40.425079027 +0000 UTC m=+0.103915783 container exec f6bf94ad66e54cdd610286b233c639418eb2b995978f1a145d6cfa77a016071d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203, GIT_BRANCH=main, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2026-02-09T10:25:24Z, GIT_CLEAN=True, ceph=True, io.buildah.version=1.42.2, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, distribution-scope=public, version=7, release=1770267347, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Feb 20 08:01:40 np0005625203.localdomain podman[59419]: 2026-02-20 08:01:40.531991674 +0000 UTC m=+0.210828480 container exec_died f6bf94ad66e54cdd610286b233c639418eb2b995978f1a145d6cfa77a016071d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203, vcs-type=git, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, RELEASE=main, GIT_CLEAN=True, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, ceph=True, distribution-scope=public, GIT_BRANCH=main, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., io.buildah.version=1.42.2, release=1770267347, version=7, io.openshift.expose-services=, build-date=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git)
Feb 20 08:01:40 np0005625203.localdomain sshd[59451]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:01:40 np0005625203.localdomain sshd[59451]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:01:40 np0005625203.localdomain sudo[59331]: pam_unix(sudo:session): session closed for user root
Feb 20 08:01:40 np0005625203.localdomain sudo[59487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:01:40 np0005625203.localdomain sudo[59487]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:01:40 np0005625203.localdomain sudo[59487]: pam_unix(sudo:session): session closed for user root
Feb 20 08:01:40 np0005625203.localdomain sudo[59502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 08:01:40 np0005625203.localdomain sudo[59502]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:01:41 np0005625203.localdomain sudo[59502]: pam_unix(sudo:session): session closed for user root
Feb 20 08:01:41 np0005625203.localdomain sshd[59314]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 23662 ssh2 [preauth]
Feb 20 08:01:41 np0005625203.localdomain sshd[59314]: Disconnecting authenticating user root 185.246.128.171 port 23662: Too many authentication failures [preauth]
Feb 20 08:01:42 np0005625203.localdomain sudo[59549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 08:01:42 np0005625203.localdomain sudo[59549]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:01:42 np0005625203.localdomain sudo[59549]: pam_unix(sudo:session): session closed for user root
Feb 20 08:01:43 np0005625203.localdomain sshd[59564]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:01:48 np0005625203.localdomain sshd[59564]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 6613 ssh2 [preauth]
Feb 20 08:01:48 np0005625203.localdomain sshd[59564]: Disconnecting authenticating user root 185.246.128.171 port 6613: Too many authentication failures [preauth]
Feb 20 08:01:50 np0005625203.localdomain sshd[59566]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:01:55 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:01:55 np0005625203.localdomain podman[59568]: 2026-02-20 08:01:55.772647014 +0000 UTC m=+0.085324066 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step1, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, io.openshift.expose-services=, release=1766032510, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:14Z, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:14Z, container_name=metrics_qdr, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd)
Feb 20 08:01:55 np0005625203.localdomain sshd[59566]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 59507 ssh2 [preauth]
Feb 20 08:01:55 np0005625203.localdomain sshd[59566]: Disconnecting authenticating user root 185.246.128.171 port 59507: Too many authentication failures [preauth]
Feb 20 08:01:56 np0005625203.localdomain podman[59568]: 2026-02-20 08:01:56.014374715 +0000 UTC m=+0.327051727 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, batch=17.1_20260112.1, vendor=Red Hat, Inc., container_name=metrics_qdr, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, name=rhosp-rhel9/openstack-qdrouterd, io.buildah.version=1.41.5, tcib_managed=true, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2026-01-12T22:10:14Z, version=17.1.13, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, release=1766032510)
Feb 20 08:01:56 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:01:56 np0005625203.localdomain sshd[59597]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:02:00 np0005625203.localdomain sshd[59597]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 33987 ssh2 [preauth]
Feb 20 08:02:00 np0005625203.localdomain sshd[59597]: Disconnecting authenticating user root 185.246.128.171 port 33987: Too many authentication failures [preauth]
Feb 20 08:02:03 np0005625203.localdomain sshd[59599]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:02:08 np0005625203.localdomain sshd[59599]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 15456 ssh2 [preauth]
Feb 20 08:02:08 np0005625203.localdomain sshd[59599]: Disconnecting authenticating user root 185.246.128.171 port 15456: Too many authentication failures [preauth]
Feb 20 08:02:08 np0005625203.localdomain sshd[59601]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:02:15 np0005625203.localdomain sshd[59601]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 54177 ssh2 [preauth]
Feb 20 08:02:15 np0005625203.localdomain sshd[59601]: Disconnecting authenticating user root 185.246.128.171 port 54177: Too many authentication failures [preauth]
Feb 20 08:02:17 np0005625203.localdomain sshd[59603]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:02:19 np0005625203.localdomain sshd[59605]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:02:20 np0005625203.localdomain sshd[59605]: Invalid user ubuntu from 123.204.132.127 port 52522
Feb 20 08:02:20 np0005625203.localdomain sshd[59605]: Received disconnect from 123.204.132.127 port 52522:11: Bye Bye [preauth]
Feb 20 08:02:20 np0005625203.localdomain sshd[59605]: Disconnected from invalid user ubuntu 123.204.132.127 port 52522 [preauth]
Feb 20 08:02:22 np0005625203.localdomain sshd[59603]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 45892 ssh2 [preauth]
Feb 20 08:02:22 np0005625203.localdomain sshd[59603]: Disconnecting authenticating user root 185.246.128.171 port 45892: Too many authentication failures [preauth]
Feb 20 08:02:22 np0005625203.localdomain sshd[59607]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:02:23 np0005625203.localdomain sshd[59608]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:02:24 np0005625203.localdomain sshd[59608]: Invalid user builduser from 103.171.84.20 port 43400
Feb 20 08:02:24 np0005625203.localdomain sshd[59608]: Received disconnect from 103.171.84.20 port 43400:11: Bye Bye [preauth]
Feb 20 08:02:24 np0005625203.localdomain sshd[59608]: Disconnected from invalid user builduser 103.171.84.20 port 43400 [preauth]
Feb 20 08:02:26 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:02:26 np0005625203.localdomain systemd[1]: tmp-crun.kYnxoW.mount: Deactivated successfully.
Feb 20 08:02:26 np0005625203.localdomain podman[59611]: 2026-02-20 08:02:26.770316102 +0000 UTC m=+0.088500765 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, architecture=x86_64, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.5, batch=17.1_20260112.1, release=1766032510, build-date=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, container_name=metrics_qdr, org.opencontainers.image.created=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc.)
Feb 20 08:02:26 np0005625203.localdomain podman[59611]: 2026-02-20 08:02:26.958024659 +0000 UTC m=+0.276209332 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.13, container_name=metrics_qdr, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, build-date=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, release=1766032510, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.created=2026-01-12T22:10:14Z, batch=17.1_20260112.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:02:26 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:02:27 np0005625203.localdomain sshd[59641]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:02:27 np0005625203.localdomain sshd[59641]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:02:29 np0005625203.localdomain sshd[59607]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 22611 ssh2 [preauth]
Feb 20 08:02:29 np0005625203.localdomain sshd[59607]: Disconnecting authenticating user root 185.246.128.171 port 22611: Too many authentication failures [preauth]
Feb 20 08:02:30 np0005625203.localdomain sshd[59643]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:02:33 np0005625203.localdomain sshd[59643]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 11177 ssh2 [preauth]
Feb 20 08:02:33 np0005625203.localdomain sshd[59643]: Disconnecting authenticating user root 185.246.128.171 port 11177: Too many authentication failures [preauth]
Feb 20 08:02:36 np0005625203.localdomain sshd[59645]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:02:37 np0005625203.localdomain sshd[59647]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:02:38 np0005625203.localdomain sshd[59647]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:02:42 np0005625203.localdomain sudo[59649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:02:42 np0005625203.localdomain sudo[59649]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:02:42 np0005625203.localdomain sudo[59649]: pam_unix(sudo:session): session closed for user root
Feb 20 08:02:42 np0005625203.localdomain sudo[59664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 08:02:42 np0005625203.localdomain sudo[59664]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:02:42 np0005625203.localdomain sudo[59664]: pam_unix(sudo:session): session closed for user root
Feb 20 08:02:43 np0005625203.localdomain sshd[59645]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 56476 ssh2 [preauth]
Feb 20 08:02:43 np0005625203.localdomain sshd[59645]: Disconnecting authenticating user root 185.246.128.171 port 56476: Too many authentication failures [preauth]
Feb 20 08:02:43 np0005625203.localdomain sudo[59711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 08:02:43 np0005625203.localdomain sudo[59711]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:02:43 np0005625203.localdomain sudo[59711]: pam_unix(sudo:session): session closed for user root
Feb 20 08:02:45 np0005625203.localdomain sshd[59726]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:02:47 np0005625203.localdomain sshd[59728]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:02:47 np0005625203.localdomain sshd[59728]: Invalid user deployuser from 102.211.152.28 port 51690
Feb 20 08:02:48 np0005625203.localdomain sshd[59728]: Received disconnect from 102.211.152.28 port 51690:11: Bye Bye [preauth]
Feb 20 08:02:48 np0005625203.localdomain sshd[59728]: Disconnected from invalid user deployuser 102.211.152.28 port 51690 [preauth]
Feb 20 08:02:52 np0005625203.localdomain sshd[59726]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 55058 ssh2 [preauth]
Feb 20 08:02:52 np0005625203.localdomain sshd[59726]: Disconnecting authenticating user root 185.246.128.171 port 55058: Too many authentication failures [preauth]
Feb 20 08:02:53 np0005625203.localdomain sshd[59730]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:02:57 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:02:57 np0005625203.localdomain podman[59732]: 2026-02-20 08:02:57.775265398 +0000 UTC m=+0.090365935 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, batch=17.1_20260112.1, vcs-type=git, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, name=rhosp-rhel9/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, config_id=tripleo_step1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2026-01-12T22:10:14Z)
Feb 20 08:02:57 np0005625203.localdomain podman[59732]: 2026-02-20 08:02:57.995574095 +0000 UTC m=+0.310674572 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:14Z, version=17.1.13, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, name=rhosp-rhel9/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=metrics_qdr, vcs-type=git, tcib_managed=true, config_id=tripleo_step1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd)
Feb 20 08:02:58 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:02:58 np0005625203.localdomain sshd[59730]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 46500 ssh2 [preauth]
Feb 20 08:02:58 np0005625203.localdomain sshd[59730]: Disconnecting authenticating user root 185.246.128.171 port 46500: Too many authentication failures [preauth]
Feb 20 08:03:00 np0005625203.localdomain sshd[59762]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:03:02 np0005625203.localdomain sshd[59762]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 30289 ssh2 [preauth]
Feb 20 08:03:02 np0005625203.localdomain sshd[59762]: Disconnecting authenticating user root 185.246.128.171 port 30289: Too many authentication failures [preauth]
Feb 20 08:03:03 np0005625203.localdomain sshd[59764]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:03:11 np0005625203.localdomain sshd[59764]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 48774 ssh2 [preauth]
Feb 20 08:03:11 np0005625203.localdomain sshd[59764]: Disconnecting authenticating user root 185.246.128.171 port 48774: Too many authentication failures [preauth]
Feb 20 08:03:12 np0005625203.localdomain sshd[59766]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:03:12 np0005625203.localdomain sshd[59768]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:03:12 np0005625203.localdomain sshd[59768]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:03:13 np0005625203.localdomain sshd[59770]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:03:13 np0005625203.localdomain sshd[59766]: Invalid user ftptest from 187.87.206.21 port 50534
Feb 20 08:03:13 np0005625203.localdomain sshd[59766]: Received disconnect from 187.87.206.21 port 50534:11: Bye Bye [preauth]
Feb 20 08:03:13 np0005625203.localdomain sshd[59766]: Disconnected from invalid user ftptest 187.87.206.21 port 50534 [preauth]
Feb 20 08:03:18 np0005625203.localdomain sshd[59770]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 56573 ssh2 [preauth]
Feb 20 08:03:18 np0005625203.localdomain sshd[59770]: Disconnecting authenticating user root 185.246.128.171 port 56573: Too many authentication failures [preauth]
Feb 20 08:03:20 np0005625203.localdomain sshd[59772]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:03:24 np0005625203.localdomain sshd[59772]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 42786 ssh2 [preauth]
Feb 20 08:03:24 np0005625203.localdomain sshd[59772]: Disconnecting authenticating user root 185.246.128.171 port 42786: Too many authentication failures [preauth]
Feb 20 08:03:25 np0005625203.localdomain sshd[59774]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:03:28 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:03:28 np0005625203.localdomain podman[59776]: 2026-02-20 08:03:28.761628362 +0000 UTC m=+0.081955606 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.41.5, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., version=17.1.13, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, name=rhosp-rhel9/openstack-qdrouterd, vcs-type=git, build-date=2026-01-12T22:10:14Z, io.openshift.expose-services=, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team)
Feb 20 08:03:28 np0005625203.localdomain podman[59776]: 2026-02-20 08:03:28.968923814 +0000 UTC m=+0.289251038 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-qdrouterd, build-date=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, org.opencontainers.image.created=2026-01-12T22:10:14Z, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, version=17.1.13, release=1766032510, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, distribution-scope=public, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee)
Feb 20 08:03:28 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:03:29 np0005625203.localdomain sshd[59774]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 13303 ssh2 [preauth]
Feb 20 08:03:29 np0005625203.localdomain sshd[59774]: Disconnecting authenticating user root 185.246.128.171 port 13303: Too many authentication failures [preauth]
Feb 20 08:03:30 np0005625203.localdomain sshd[59805]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:03:36 np0005625203.localdomain sshd[59805]: error: maximum authentication attempts exceeded for root from 185.246.128.171 port 47251 ssh2 [preauth]
Feb 20 08:03:36 np0005625203.localdomain sshd[59805]: Disconnecting authenticating user root 185.246.128.171 port 47251: Too many authentication failures [preauth]
Feb 20 08:03:37 np0005625203.localdomain sshd[59807]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:03:37 np0005625203.localdomain sshd[59807]: Invalid user bitrix from 147.135.114.8 port 57640
Feb 20 08:03:37 np0005625203.localdomain sshd[59807]: Received disconnect from 147.135.114.8 port 57640:11: Bye Bye [preauth]
Feb 20 08:03:37 np0005625203.localdomain sshd[59807]: Disconnected from invalid user bitrix 147.135.114.8 port 57640 [preauth]
Feb 20 08:03:39 np0005625203.localdomain sshd[59809]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:03:43 np0005625203.localdomain sudo[59811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:03:43 np0005625203.localdomain sudo[59811]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:03:43 np0005625203.localdomain sudo[59811]: pam_unix(sudo:session): session closed for user root
Feb 20 08:03:43 np0005625203.localdomain sudo[59826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 08:03:43 np0005625203.localdomain sudo[59826]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:03:44 np0005625203.localdomain sudo[59826]: pam_unix(sudo:session): session closed for user root
Feb 20 08:03:45 np0005625203.localdomain sudo[59872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 08:03:45 np0005625203.localdomain sudo[59872]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:03:45 np0005625203.localdomain sudo[59872]: pam_unix(sudo:session): session closed for user root
Feb 20 08:03:45 np0005625203.localdomain sshd[59809]: Disconnecting authenticating user root 185.246.128.171 port 45808: Change of username or service not allowed: (root,ssh-connection) -> (httpadmin,ssh-connection) [preauth]
Feb 20 08:03:47 np0005625203.localdomain sshd[59887]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:03:49 np0005625203.localdomain sshd[59887]: Invalid user httpadmin from 185.246.128.171 port 37917
Feb 20 08:03:50 np0005625203.localdomain sshd[59887]: Disconnecting invalid user httpadmin 185.246.128.171 port 37917: Change of username or service not allowed: (httpadmin,ssh-connection) -> (huawei,ssh-connection) [preauth]
Feb 20 08:03:52 np0005625203.localdomain sshd[59889]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:03:53 np0005625203.localdomain sshd[59891]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:03:54 np0005625203.localdomain sshd[59891]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:03:55 np0005625203.localdomain sshd[59889]: Invalid user huawei from 185.246.128.171 port 9164
Feb 20 08:03:58 np0005625203.localdomain sshd[59893]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:03:58 np0005625203.localdomain sshd[59893]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:03:59 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:03:59 np0005625203.localdomain podman[59895]: 2026-02-20 08:03:59.77825612 +0000 UTC m=+0.089231226 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-qdrouterd, batch=17.1_20260112.1, container_name=metrics_qdr, config_id=tripleo_step1, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:03:59 np0005625203.localdomain sshd[59889]: Disconnecting invalid user huawei 185.246.128.171 port 9164: Change of username or service not allowed: (huawei,ssh-connection) -> (nexus,ssh-connection) [preauth]
Feb 20 08:03:59 np0005625203.localdomain podman[59895]: 2026-02-20 08:03:59.9801117 +0000 UTC m=+0.291086786 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, distribution-scope=public, config_id=tripleo_step1, release=1766032510, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd)
Feb 20 08:03:59 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:04:02 np0005625203.localdomain sshd[59926]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:04:05 np0005625203.localdomain sshd[59926]: Invalid user nexus from 185.246.128.171 port 16664
Feb 20 08:04:07 np0005625203.localdomain sshd[59926]: Disconnecting invalid user nexus 185.246.128.171 port 16664: Change of username or service not allowed: (nexus,ssh-connection) -> (alan,ssh-connection) [preauth]
Feb 20 08:04:08 np0005625203.localdomain sshd[59928]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:04:10 np0005625203.localdomain sshd[59928]: Invalid user alan from 185.246.128.171 port 59661
Feb 20 08:04:11 np0005625203.localdomain sshd[59928]: Disconnecting invalid user alan 185.246.128.171 port 59661: Change of username or service not allowed: (alan,ssh-connection) -> (db2admin,ssh-connection) [preauth]
Feb 20 08:04:12 np0005625203.localdomain sshd[59930]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:04:15 np0005625203.localdomain sshd[59930]: Invalid user db2admin from 185.246.128.171 port 22314
Feb 20 08:04:15 np0005625203.localdomain sshd[59930]: Disconnecting invalid user db2admin 185.246.128.171 port 22314: Change of username or service not allowed: (db2admin,ssh-connection) -> (btf,ssh-connection) [preauth]
Feb 20 08:04:17 np0005625203.localdomain sshd[59932]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:04:19 np0005625203.localdomain sshd[59932]: Invalid user btf from 185.246.128.171 port 50451
Feb 20 08:04:20 np0005625203.localdomain sshd[59932]: Disconnecting invalid user btf 185.246.128.171 port 50451: Change of username or service not allowed: (btf,ssh-connection) -> (ftpuser,ssh-connection) [preauth]
Feb 20 08:04:21 np0005625203.localdomain sshd[59934]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:04:24 np0005625203.localdomain sshd[59934]: Invalid user ftpuser from 185.246.128.171 port 16033
Feb 20 08:04:29 np0005625203.localdomain sshd[59934]: error: maximum authentication attempts exceeded for invalid user ftpuser from 185.246.128.171 port 16033 ssh2 [preauth]
Feb 20 08:04:29 np0005625203.localdomain sshd[59934]: Disconnecting invalid user ftpuser 185.246.128.171 port 16033: Too many authentication failures [preauth]
Feb 20 08:04:30 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:04:30 np0005625203.localdomain systemd[1]: tmp-crun.jqdXtk.mount: Deactivated successfully.
Feb 20 08:04:30 np0005625203.localdomain podman[59936]: 2026-02-20 08:04:30.095468605 +0000 UTC m=+0.076636002 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, managed_by=tripleo_ansible, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:14Z, tcib_managed=true, distribution-scope=public, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, config_id=tripleo_step1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, name=rhosp-rhel9/openstack-qdrouterd, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, architecture=x86_64, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd)
Feb 20 08:04:30 np0005625203.localdomain podman[59936]: 2026-02-20 08:04:30.283182156 +0000 UTC m=+0.264349523 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, managed_by=tripleo_ansible, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, build-date=2026-01-12T22:10:14Z, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.13, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step1)
Feb 20 08:04:30 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:04:30 np0005625203.localdomain sudo[60010]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wrakuvbdzplnoesgtwnfuqkxfsitsaib ; /usr/bin/python3
Feb 20 08:04:30 np0005625203.localdomain sudo[60010]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:04:30 np0005625203.localdomain python3[60012]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 08:04:30 np0005625203.localdomain sudo[60010]: pam_unix(sudo:session): session closed for user root
Feb 20 08:04:30 np0005625203.localdomain sudo[60055]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zdblblldxssgaixodueqwxuxholkjjpd ; /usr/bin/python3
Feb 20 08:04:30 np0005625203.localdomain sudo[60055]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:04:31 np0005625203.localdomain python3[60057]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771574670.3822527-99239-249394465667068/source _original_basename=tmpd7zk8ylj follow=False checksum=62439dd24dde40c90e7a39f6a1b31cc6061fe59b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:04:31 np0005625203.localdomain sudo[60055]: pam_unix(sudo:session): session closed for user root
Feb 20 08:04:31 np0005625203.localdomain sshd[60072]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:04:32 np0005625203.localdomain sudo[60087]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ofoskrpqapoxvagkkyspzhbrqzcvtllb ; /usr/bin/python3
Feb 20 08:04:32 np0005625203.localdomain sudo[60087]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:04:32 np0005625203.localdomain python3[60089]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 20 08:04:32 np0005625203.localdomain sudo[60087]: pam_unix(sudo:session): session closed for user root
Feb 20 08:04:32 np0005625203.localdomain sshd[60072]: Invalid user ftpuser from 185.246.128.171 port 18415
Feb 20 08:04:32 np0005625203.localdomain sudo[60137]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tvgxqdayvfjhgsixionvymbrfhqsygyu ; /usr/bin/python3
Feb 20 08:04:32 np0005625203.localdomain sudo[60137]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:04:32 np0005625203.localdomain sudo[60137]: pam_unix(sudo:session): session closed for user root
Feb 20 08:04:32 np0005625203.localdomain sudo[60155]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qvxfuviieowtitjqjlslnmqatvzohwmm ; /usr/bin/python3
Feb 20 08:04:32 np0005625203.localdomain sudo[60155]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:04:33 np0005625203.localdomain sudo[60155]: pam_unix(sudo:session): session closed for user root
Feb 20 08:04:33 np0005625203.localdomain sshd[60072]: error: maximum authentication attempts exceeded for invalid user ftpuser from 185.246.128.171 port 18415 ssh2 [preauth]
Feb 20 08:04:33 np0005625203.localdomain sshd[60072]: Disconnecting invalid user ftpuser 185.246.128.171 port 18415: Too many authentication failures [preauth]
Feb 20 08:04:33 np0005625203.localdomain sudo[60259]: tripleo-admin : TTY=pts/0 ; PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ektnzkoihjnfstxvjqspjhbzcowvaefu ; ANSIBLE_ASYNC_DIR=/tmp/.ansible_async /usr/bin/python3 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1771574673.399741-99453-69064109680794/async_wrapper.py 311665111685 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1771574673.399741-99453-69064109680794/AnsiballZ_command.py _
Feb 20 08:04:33 np0005625203.localdomain sudo[60259]: pam_unix(sudo:session): session opened for user root(uid=0) by tripleo-admin(uid=1003)
Feb 20 08:04:33 np0005625203.localdomain ansible-async_wrapper.py[60261]: Invoked with 311665111685 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1771574673.399741-99453-69064109680794/AnsiballZ_command.py _
Feb 20 08:04:33 np0005625203.localdomain ansible-async_wrapper.py[60264]: Starting module and watcher
Feb 20 08:04:33 np0005625203.localdomain ansible-async_wrapper.py[60264]: Start watching 60265 (3600)
Feb 20 08:04:33 np0005625203.localdomain ansible-async_wrapper.py[60265]: Start module (60265)
Feb 20 08:04:33 np0005625203.localdomain ansible-async_wrapper.py[60261]: Return async_wrapper task started.
Feb 20 08:04:33 np0005625203.localdomain sudo[60259]: pam_unix(sudo:session): session closed for user root
Feb 20 08:04:34 np0005625203.localdomain sudo[60280]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wtykhpvrkpiftipypxiuzklzfimtnnfz ; /usr/bin/python3
Feb 20 08:04:34 np0005625203.localdomain sudo[60280]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:04:34 np0005625203.localdomain python3[60282]: ansible-ansible.legacy.async_status Invoked with jid=311665111685.60261 mode=status _async_dir=/tmp/.ansible_async
Feb 20 08:04:34 np0005625203.localdomain sudo[60280]: pam_unix(sudo:session): session closed for user root
Feb 20 08:04:35 np0005625203.localdomain sshd[60299]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:04:36 np0005625203.localdomain sshd[60299]: Invalid user ftpuser from 185.246.128.171 port 45088
Feb 20 08:04:37 np0005625203.localdomain puppet-user[60285]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Feb 20 08:04:37 np0005625203.localdomain puppet-user[60285]:    (file: /etc/puppet/hiera.yaml)
Feb 20 08:04:37 np0005625203.localdomain puppet-user[60285]: Warning: Undefined variable '::deploy_config_name';
Feb 20 08:04:37 np0005625203.localdomain puppet-user[60285]:    (file & line not available)
Feb 20 08:04:37 np0005625203.localdomain puppet-user[60285]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Feb 20 08:04:37 np0005625203.localdomain puppet-user[60285]:    (file & line not available)
Feb 20 08:04:37 np0005625203.localdomain puppet-user[60285]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8)
Feb 20 08:04:37 np0005625203.localdomain puppet-user[60285]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69)
Feb 20 08:04:37 np0005625203.localdomain puppet-user[60285]: Notice: Compiled catalog for np0005625203.localdomain in environment production in 0.11 seconds
Feb 20 08:04:38 np0005625203.localdomain puppet-user[60285]: Notice: Applied catalog in 0.04 seconds
Feb 20 08:04:38 np0005625203.localdomain puppet-user[60285]: Application:
Feb 20 08:04:38 np0005625203.localdomain puppet-user[60285]:    Initial environment: production
Feb 20 08:04:38 np0005625203.localdomain puppet-user[60285]:    Converged environment: production
Feb 20 08:04:38 np0005625203.localdomain puppet-user[60285]:          Run mode: user
Feb 20 08:04:38 np0005625203.localdomain puppet-user[60285]: Changes:
Feb 20 08:04:38 np0005625203.localdomain puppet-user[60285]: Events:
Feb 20 08:04:38 np0005625203.localdomain puppet-user[60285]: Resources:
Feb 20 08:04:38 np0005625203.localdomain puppet-user[60285]:             Total: 10
Feb 20 08:04:38 np0005625203.localdomain puppet-user[60285]: Time:
Feb 20 08:04:38 np0005625203.localdomain puppet-user[60285]:          Schedule: 0.00
Feb 20 08:04:38 np0005625203.localdomain puppet-user[60285]:              File: 0.00
Feb 20 08:04:38 np0005625203.localdomain puppet-user[60285]:              Exec: 0.01
Feb 20 08:04:38 np0005625203.localdomain puppet-user[60285]:            Augeas: 0.01
Feb 20 08:04:38 np0005625203.localdomain puppet-user[60285]:    Transaction evaluation: 0.03
Feb 20 08:04:38 np0005625203.localdomain puppet-user[60285]:    Catalog application: 0.04
Feb 20 08:04:38 np0005625203.localdomain puppet-user[60285]:    Config retrieval: 0.15
Feb 20 08:04:38 np0005625203.localdomain puppet-user[60285]:          Last run: 1771574678
Feb 20 08:04:38 np0005625203.localdomain puppet-user[60285]:        Filebucket: 0.00
Feb 20 08:04:38 np0005625203.localdomain puppet-user[60285]:             Total: 0.05
Feb 20 08:04:38 np0005625203.localdomain puppet-user[60285]: Version:
Feb 20 08:04:38 np0005625203.localdomain puppet-user[60285]:            Config: 1771574677
Feb 20 08:04:38 np0005625203.localdomain puppet-user[60285]:            Puppet: 7.10.0
Feb 20 08:04:38 np0005625203.localdomain ansible-async_wrapper.py[60265]: Module complete (60265)
Feb 20 08:04:38 np0005625203.localdomain sshd[60299]: Disconnecting invalid user ftpuser 185.246.128.171 port 45088: Change of username or service not allowed: (ftpuser,ssh-connection) -> (t128,ssh-connection) [preauth]
Feb 20 08:04:38 np0005625203.localdomain ansible-async_wrapper.py[60264]: Done in kid B.
Feb 20 08:04:41 np0005625203.localdomain sshd[60398]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:04:43 np0005625203.localdomain sshd[60400]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:04:43 np0005625203.localdomain sshd[60400]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:04:44 np0005625203.localdomain sshd[60398]: Invalid user t128 from 185.246.128.171 port 24072
Feb 20 08:04:44 np0005625203.localdomain sudo[60415]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ebnslqbnsescwfyjfvzphjkcywgvppku ; /usr/bin/python3
Feb 20 08:04:44 np0005625203.localdomain sudo[60415]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:04:44 np0005625203.localdomain python3[60417]: ansible-ansible.legacy.async_status Invoked with jid=311665111685.60261 mode=status _async_dir=/tmp/.ansible_async
Feb 20 08:04:44 np0005625203.localdomain sudo[60415]: pam_unix(sudo:session): session closed for user root
Feb 20 08:04:44 np0005625203.localdomain sshd[60398]: Disconnecting invalid user t128 185.246.128.171 port 24072: Change of username or service not allowed: (t128,ssh-connection) -> (draytek,ssh-connection) [preauth]
Feb 20 08:04:45 np0005625203.localdomain sudo[60444]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-defxmqkzklpzghpxafqmcodpawftvvlq ; /usr/bin/python3
Feb 20 08:04:45 np0005625203.localdomain sudo[60422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:04:45 np0005625203.localdomain sudo[60444]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:04:45 np0005625203.localdomain sudo[60422]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:04:45 np0005625203.localdomain sudo[60422]: pam_unix(sudo:session): session closed for user root
Feb 20 08:04:45 np0005625203.localdomain sudo[60449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 08:04:45 np0005625203.localdomain sudo[60449]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:04:45 np0005625203.localdomain python3[60447]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Feb 20 08:04:45 np0005625203.localdomain sudo[60444]: pam_unix(sudo:session): session closed for user root
Feb 20 08:04:45 np0005625203.localdomain sudo[60477]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aoficpgyfonbcngvlxbvxmoueskllgsc ; /usr/bin/python3
Feb 20 08:04:45 np0005625203.localdomain sudo[60477]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:04:45 np0005625203.localdomain python3[60479]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 20 08:04:45 np0005625203.localdomain sudo[60477]: pam_unix(sudo:session): session closed for user root
Feb 20 08:04:45 np0005625203.localdomain sshd[60502]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:04:45 np0005625203.localdomain sudo[60449]: pam_unix(sudo:session): session closed for user root
Feb 20 08:04:46 np0005625203.localdomain sudo[60560]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lctplumrzlvytdvcyudzjmepbddgoyyt ; /usr/bin/python3
Feb 20 08:04:46 np0005625203.localdomain sudo[60560]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:04:46 np0005625203.localdomain python3[60562]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 08:04:46 np0005625203.localdomain sudo[60560]: pam_unix(sudo:session): session closed for user root
Feb 20 08:04:46 np0005625203.localdomain sudo[60579]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-welhvmmyofaluocfbueahrmnkfpfnrza ; /usr/bin/python3
Feb 20 08:04:46 np0005625203.localdomain sudo[60579]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:04:46 np0005625203.localdomain sudo[60582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 08:04:46 np0005625203.localdomain sudo[60582]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:04:46 np0005625203.localdomain sudo[60582]: pam_unix(sudo:session): session closed for user root
Feb 20 08:04:46 np0005625203.localdomain python3[60581]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmp2jywctl9 recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Feb 20 08:04:46 np0005625203.localdomain sudo[60579]: pam_unix(sudo:session): session closed for user root
Feb 20 08:04:46 np0005625203.localdomain sudo[60624]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fxvzmkrhdzghqxfthxgnmwlkpswaljry ; /usr/bin/python3
Feb 20 08:04:46 np0005625203.localdomain sudo[60624]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:04:46 np0005625203.localdomain python3[60626]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:04:46 np0005625203.localdomain sudo[60624]: pam_unix(sudo:session): session closed for user root
Feb 20 08:04:47 np0005625203.localdomain sudo[60640]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gqojmddubrijnaebnndupnflwyzojfhp ; /usr/bin/python3
Feb 20 08:04:47 np0005625203.localdomain sudo[60640]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:04:47 np0005625203.localdomain sshd[60502]: Invalid user draytek from 185.246.128.171 port 52635
Feb 20 08:04:47 np0005625203.localdomain sshd[60502]: Disconnecting invalid user draytek 185.246.128.171 port 52635: Change of username or service not allowed: (draytek,ssh-connection) -> (telecomadmin,ssh-connection) [preauth]
Feb 20 08:04:47 np0005625203.localdomain sudo[60640]: pam_unix(sudo:session): session closed for user root
Feb 20 08:04:48 np0005625203.localdomain sudo[60727]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-blfsrejzwdagshbakwjwmfxohhkxkfko ; /usr/bin/python3
Feb 20 08:04:48 np0005625203.localdomain sudo[60727]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:04:48 np0005625203.localdomain python3[60729]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None
Feb 20 08:04:48 np0005625203.localdomain sudo[60727]: pam_unix(sudo:session): session closed for user root
Feb 20 08:04:49 np0005625203.localdomain sudo[60746]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aduoegyhawfxhnmpsxsoxikzdavgldsb ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:04:49 np0005625203.localdomain sudo[60746]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:04:49 np0005625203.localdomain python3[60748]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:04:49 np0005625203.localdomain sudo[60746]: pam_unix(sudo:session): session closed for user root
Feb 20 08:04:49 np0005625203.localdomain sudo[60762]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zkshvofiqltwqwqycvjdigepqtctuawv ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:04:49 np0005625203.localdomain sudo[60762]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:04:49 np0005625203.localdomain sudo[60762]: pam_unix(sudo:session): session closed for user root
Feb 20 08:04:49 np0005625203.localdomain sshd[60765]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:04:50 np0005625203.localdomain sudo[60779]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hgcqhwyrrrljswqbwkkppspvxhufcpqk ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:04:50 np0005625203.localdomain sudo[60779]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:04:50 np0005625203.localdomain python3[60781]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 20 08:04:50 np0005625203.localdomain sudo[60779]: pam_unix(sudo:session): session closed for user root
Feb 20 08:04:50 np0005625203.localdomain sshd[60804]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:04:50 np0005625203.localdomain sudo[60832]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fthwtadbnnriwrraduymexuikkiafhze ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:04:50 np0005625203.localdomain sudo[60832]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:04:50 np0005625203.localdomain python3[60834]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 08:04:50 np0005625203.localdomain sudo[60832]: pam_unix(sudo:session): session closed for user root
Feb 20 08:04:51 np0005625203.localdomain sudo[60850]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zvdrrseqbiwisjpzfwqxmyjkdirjuocv ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:04:51 np0005625203.localdomain sudo[60850]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:04:51 np0005625203.localdomain python3[60852]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:04:51 np0005625203.localdomain sudo[60850]: pam_unix(sudo:session): session closed for user root
Feb 20 08:04:51 np0005625203.localdomain sshd[60804]: Invalid user student from 102.211.152.28 port 47068
Feb 20 08:04:51 np0005625203.localdomain sshd[60804]: Received disconnect from 102.211.152.28 port 47068:11: Bye Bye [preauth]
Feb 20 08:04:51 np0005625203.localdomain sshd[60804]: Disconnected from invalid user student 102.211.152.28 port 47068 [preauth]
Feb 20 08:04:51 np0005625203.localdomain sudo[60912]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vhipdkycnxvvvgfeiqqtwslqlsxyjjjz ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:04:51 np0005625203.localdomain sudo[60912]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:04:51 np0005625203.localdomain python3[60914]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 08:04:51 np0005625203.localdomain sudo[60912]: pam_unix(sudo:session): session closed for user root
Feb 20 08:04:51 np0005625203.localdomain sudo[60930]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-orrwfhvfikzkfvfmlvmaafhebchlvfwc ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:04:51 np0005625203.localdomain sudo[60930]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:04:51 np0005625203.localdomain python3[60932]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:04:51 np0005625203.localdomain sudo[60930]: pam_unix(sudo:session): session closed for user root
Feb 20 08:04:52 np0005625203.localdomain sudo[60992]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eikorgomxiechrrqvhwbgmzybqunrrqs ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:04:52 np0005625203.localdomain sudo[60992]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:04:52 np0005625203.localdomain python3[60994]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 08:04:52 np0005625203.localdomain sudo[60992]: pam_unix(sudo:session): session closed for user root
Feb 20 08:04:52 np0005625203.localdomain sudo[61010]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rzniloikulguhipijkgbqcqkiavjqarb ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:04:52 np0005625203.localdomain sudo[61010]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:04:52 np0005625203.localdomain python3[61012]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:04:52 np0005625203.localdomain sudo[61010]: pam_unix(sudo:session): session closed for user root
Feb 20 08:04:53 np0005625203.localdomain sudo[61072]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fwppfmuwadzsathnrreuxlgrhzrpplxo ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:04:53 np0005625203.localdomain sudo[61072]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:04:53 np0005625203.localdomain sshd[60765]: Invalid user telecomadmin from 185.246.128.171 port 17234
Feb 20 08:04:53 np0005625203.localdomain python3[61074]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 08:04:53 np0005625203.localdomain sudo[61072]: pam_unix(sudo:session): session closed for user root
Feb 20 08:04:53 np0005625203.localdomain sudo[61090]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mqyiejveohhuchmwbqeenmijiokkjzbu ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:04:53 np0005625203.localdomain sudo[61090]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:04:53 np0005625203.localdomain python3[61092]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:04:53 np0005625203.localdomain sudo[61090]: pam_unix(sudo:session): session closed for user root
Feb 20 08:04:53 np0005625203.localdomain sshd[60765]: Disconnecting invalid user telecomadmin 185.246.128.171 port 17234: Change of username or service not allowed: (telecomadmin,ssh-connection) -> (sonos,ssh-connection) [preauth]
Feb 20 08:04:53 np0005625203.localdomain sudo[61120]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jxinvywmqeswvijyvpbykowauvhmmzqv ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:04:53 np0005625203.localdomain sudo[61120]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:04:54 np0005625203.localdomain python3[61122]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 08:04:54 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 08:04:54 np0005625203.localdomain systemd-sysv-generator[61148]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 08:04:54 np0005625203.localdomain systemd-rc-local-generator[61144]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 08:04:54 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 08:04:54 np0005625203.localdomain sudo[61120]: pam_unix(sudo:session): session closed for user root
Feb 20 08:04:54 np0005625203.localdomain sudo[61206]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ypbzhddnqtxsirzghlhdeujnkhqywuww ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:04:54 np0005625203.localdomain sudo[61206]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:04:54 np0005625203.localdomain python3[61208]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 08:04:54 np0005625203.localdomain sudo[61206]: pam_unix(sudo:session): session closed for user root
Feb 20 08:04:54 np0005625203.localdomain sshd[61211]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:04:55 np0005625203.localdomain sudo[61225]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xrgpfwuullvwbyrwjoogtsfdkeminjgk ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:04:55 np0005625203.localdomain sudo[61225]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:04:55 np0005625203.localdomain python3[61227]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:04:55 np0005625203.localdomain sudo[61225]: pam_unix(sudo:session): session closed for user root
Feb 20 08:04:55 np0005625203.localdomain sudo[61287]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fwfqbebvxukymkkmbxasrgszavppqdlc ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:04:55 np0005625203.localdomain sudo[61287]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:04:55 np0005625203.localdomain python3[61289]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 08:04:55 np0005625203.localdomain sudo[61287]: pam_unix(sudo:session): session closed for user root
Feb 20 08:04:55 np0005625203.localdomain sudo[61305]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lwgnlyoquomfnytuaymadvbbtswoplkf ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:04:55 np0005625203.localdomain sudo[61305]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:04:56 np0005625203.localdomain python3[61307]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:04:56 np0005625203.localdomain sudo[61305]: pam_unix(sudo:session): session closed for user root
Feb 20 08:04:56 np0005625203.localdomain sudo[61336]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zrhdgtfuulnrlsosyceydvwpdrqivqvm ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:04:56 np0005625203.localdomain sudo[61336]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:04:56 np0005625203.localdomain python3[61338]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 08:04:56 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 08:04:56 np0005625203.localdomain systemd-rc-local-generator[61361]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 08:04:56 np0005625203.localdomain systemd-sysv-generator[61364]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 08:04:56 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 08:04:56 np0005625203.localdomain systemd[1]: Starting Create netns directory...
Feb 20 08:04:56 np0005625203.localdomain systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Feb 20 08:04:56 np0005625203.localdomain systemd[1]: netns-placeholder.service: Deactivated successfully.
Feb 20 08:04:56 np0005625203.localdomain systemd[1]: Finished Create netns directory.
Feb 20 08:04:56 np0005625203.localdomain sudo[61336]: pam_unix(sudo:session): session closed for user root
Feb 20 08:04:57 np0005625203.localdomain sudo[61394]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-khnekcadimcqnyhngmpjxefqykgnuekh ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:04:57 np0005625203.localdomain sudo[61394]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:04:57 np0005625203.localdomain python3[61396]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6
Feb 20 08:04:57 np0005625203.localdomain sudo[61394]: pam_unix(sudo:session): session closed for user root
Feb 20 08:04:57 np0005625203.localdomain sshd[61211]: Invalid user sonos from 185.246.128.171 port 50537
Feb 20 08:04:57 np0005625203.localdomain sudo[61410]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-faiyhagylirxstesxucmsbfixemzjfsg ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:04:58 np0005625203.localdomain sudo[61410]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:04:58 np0005625203.localdomain sudo[61410]: pam_unix(sudo:session): session closed for user root
Feb 20 08:04:58 np0005625203.localdomain sshd[61440]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:04:59 np0005625203.localdomain sudo[61455]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wshhpxrqwsuwrikervpnrmoifoaziwbn ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:04:59 np0005625203.localdomain sudo[61455]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:04:59 np0005625203.localdomain sshd[61211]: Disconnecting invalid user sonos 185.246.128.171 port 50537: Change of username or service not allowed: (sonos,ssh-connection) -> (alberto,ssh-connection) [preauth]
Feb 20 08:04:59 np0005625203.localdomain python3[61457]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step3 config_dir=/var/lib/tripleo-config/container-startup-config/step_3 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False
Feb 20 08:04:59 np0005625203.localdomain podman[61628]: 2026-02-20 08:04:59.648068141 +0000 UTC m=+0.075751404 container create 63a5f0af8a13ca44c84f40643a2c176ce000ee4e14c5b640cded40546e9f0038 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, build-date=2026-01-12T23:32:04Z, url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, container_name=nova_statedir_owner, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, release=1766032510, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., io.buildah.version=1.41.5, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, version=17.1.13, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Feb 20 08:04:59 np0005625203.localdomain podman[61625]: 2026-02-20 08:04:59.655169711 +0000 UTC m=+0.089065177 container create 43574e7f2e8aeca074fdd4e8544dcd5b03a668e63dbdf8da8ca806e1ad7100b1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, vcs-type=git, tcib_managed=true, build-date=2026-01-12T23:31:49Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp-rhel9/openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, version=17.1.13, com.redhat.component=openstack-nova-libvirt-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, architecture=x86_64, container_name=nova_virtlogd_wrapper, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, distribution-scope=public, io.buildah.version=1.41.5, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-libvirt)
Feb 20 08:04:59 np0005625203.localdomain podman[61640]: 2026-02-20 08:04:59.682370676 +0000 UTC m=+0.090599775 container create 03d98b0f72d456561b1719b9ea7cca397fa2a6eee36072d42dad24f1b64a0a8f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.created=2026-01-12T22:10:09Z, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, vcs-type=git, container_name=rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97414cfc893df553083a7f7bb1c65a4f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, config_id=tripleo_step3, build-date=2026-01-12T22:10:09Z, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp-rhel9/openstack-rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, release=1766032510, version=17.1.13)
Feb 20 08:04:59 np0005625203.localdomain systemd[1]: Started libpod-conmon-63a5f0af8a13ca44c84f40643a2c176ce000ee4e14c5b640cded40546e9f0038.scope.
Feb 20 08:04:59 np0005625203.localdomain podman[61648]: 2026-02-20 08:04:59.700980554 +0000 UTC m=+0.107131698 container create d02fe483b6f3e5ff491881b9ef70d6362ccfaf280cc77a03c601608f05d72756 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_init_log, batch=17.1_20260112.1, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:07:30Z, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.buildah.version=1.41.5, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:07:30Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.13, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06)
Feb 20 08:04:59 np0005625203.localdomain podman[61628]: 2026-02-20 08:04:59.60296459 +0000 UTC m=+0.030647933 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Feb 20 08:04:59 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 08:04:59 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/399537d55839d97f33c8f4bb32cbd0a91de930b46800cfbfe38988d47bef0997/merged/container-config-scripts supports timestamps until 2038 (0x7fffffff)
Feb 20 08:04:59 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/399537d55839d97f33c8f4bb32cbd0a91de930b46800cfbfe38988d47bef0997/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Feb 20 08:04:59 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/399537d55839d97f33c8f4bb32cbd0a91de930b46800cfbfe38988d47bef0997/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Feb 20 08:04:59 np0005625203.localdomain systemd[1]: Started libpod-conmon-03d98b0f72d456561b1719b9ea7cca397fa2a6eee36072d42dad24f1b64a0a8f.scope.
Feb 20 08:04:59 np0005625203.localdomain podman[61625]: 2026-02-20 08:04:59.612061492 +0000 UTC m=+0.045956998 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Feb 20 08:04:59 np0005625203.localdomain podman[61633]: 2026-02-20 08:04:59.714434062 +0000 UTC m=+0.134126827 container create 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2026-01-12T22:10:15Z, container_name=collectd, batch=17.1_20260112.1, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.expose-services=)
Feb 20 08:04:59 np0005625203.localdomain podman[61633]: 2026-02-20 08:04:59.619940237 +0000 UTC m=+0.039633012 image pull  registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1
Feb 20 08:04:59 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 08:04:59 np0005625203.localdomain systemd[1]: Started libpod-conmon-d02fe483b6f3e5ff491881b9ef70d6362ccfaf280cc77a03c601608f05d72756.scope.
Feb 20 08:04:59 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d62367a3a75d40fe308c9a1ba2227d9d21236494de6c00d0d8f446eab81f58cb/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff)
Feb 20 08:04:59 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d62367a3a75d40fe308c9a1ba2227d9d21236494de6c00d0d8f446eab81f58cb/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff)
Feb 20 08:04:59 np0005625203.localdomain podman[61640]: 2026-02-20 08:04:59.732927526 +0000 UTC m=+0.141156615 container init 03d98b0f72d456561b1719b9ea7cca397fa2a6eee36072d42dad24f1b64a0a8f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, build-date=2026-01-12T22:10:09Z, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:09Z, name=rhosp-rhel9/openstack-rsyslog, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=rsyslog, com.redhat.component=openstack-rsyslog-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97414cfc893df553083a7f7bb1c65a4f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', 
'/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, config_id=tripleo_step3, vendor=Red Hat, Inc., version=17.1.13, io.openshift.expose-services=, release=1766032510, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, distribution-scope=public, batch=17.1_20260112.1, vcs-type=git)
Feb 20 08:04:59 np0005625203.localdomain podman[61648]: 2026-02-20 08:04:59.635696576 +0000 UTC m=+0.041847720 image pull  registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1
Feb 20 08:04:59 np0005625203.localdomain podman[61640]: 2026-02-20 08:04:59.740504892 +0000 UTC m=+0.148733991 container start 03d98b0f72d456561b1719b9ea7cca397fa2a6eee36072d42dad24f1b64a0a8f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 rsyslog, container_name=rsyslog, managed_by=tripleo_ansible, batch=17.1_20260112.1, io.buildah.version=1.41.5, config_id=tripleo_step3, distribution-scope=public, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97414cfc893df553083a7f7bb1c65a4f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', 
'/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, name=rhosp-rhel9/openstack-rsyslog, build-date=2026-01-12T22:10:09Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, architecture=x86_64, com.redhat.component=openstack-rsyslog-container, org.opencontainers.image.created=2026-01-12T22:10:09Z, release=1766032510)
Feb 20 08:04:59 np0005625203.localdomain podman[61640]: 2026-02-20 08:04:59.647950897 +0000 UTC m=+0.056180046 image pull  registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1
Feb 20 08:04:59 np0005625203.localdomain python3[61457]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name rsyslog --conmon-pidfile /run/rsyslog.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=97414cfc893df553083a7f7bb1c65a4f --label config_id=tripleo_step3 --label container_name=rsyslog --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97414cfc893df553083a7f7bb1c65a4f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/rsyslog.log --network host --privileged=True --security-opt label=disable --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro --volume /var/log/containers:/var/log/containers:ro --volume /var/log/containers/rsyslog:/var/log/rsyslog:rw,z --volume /var/log:/var/log/host:ro --volume /var/lib/rsyslog.container:/var/lib/rsyslog:rw,z registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1
Feb 20 08:04:59 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 08:04:59 np0005625203.localdomain systemd[1]: Started libpod-conmon-3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.scope.
Feb 20 08:04:59 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/882ca1ee31e7e66703911e849aa154cc24abce007ac0cf03d820cf958d55c0d1/merged/var/log/ceilometer supports timestamps until 2038 (0x7fffffff)
Feb 20 08:04:59 np0005625203.localdomain sudo[61705]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Feb 20 08:04:59 np0005625203.localdomain sudo[61705]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Feb 20 08:04:59 np0005625203.localdomain podman[61648]: 2026-02-20 08:04:59.759643087 +0000 UTC m=+0.165794261 container init d02fe483b6f3e5ff491881b9ef70d6362ccfaf280cc77a03c601608f05d72756 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, name=rhosp-rhel9/openstack-ceilometer-ipmi, release=1766032510, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_init_log, io.buildah.version=1.41.5, vendor=Red Hat, Inc., tcib_managed=true, build-date=2026-01-12T23:07:30Z, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:04:59 np0005625203.localdomain podman[61628]: 2026-02-20 08:04:59.762275168 +0000 UTC m=+0.189958451 container init 63a5f0af8a13ca44c84f40643a2c176ce000ee4e14c5b640cded40546e9f0038 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, url=https://www.redhat.com, tcib_managed=true, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, build-date=2026-01-12T23:32:04Z, release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=nova_statedir_owner, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp 
osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:32:04Z)
Feb 20 08:04:59 np0005625203.localdomain podman[61628]: 2026-02-20 08:04:59.773823497 +0000 UTC m=+0.201506760 container start 63a5f0af8a13ca44c84f40643a2c176ce000ee4e14c5b640cded40546e9f0038 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, distribution-scope=public, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., container_name=nova_statedir_owner, version=17.1.13, io.openshift.expose-services=, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1766032510, 
summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible)
Feb 20 08:04:59 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 08:04:59 np0005625203.localdomain podman[61628]: 2026-02-20 08:04:59.774374244 +0000 UTC m=+0.202057537 container attach 63a5f0af8a13ca44c84f40643a2c176ce000ee4e14c5b640cded40546e9f0038 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=nova_statedir_owner, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, managed_by=tripleo_ansible, io.buildah.version=1.41.5, 
com.redhat.component=openstack-nova-compute-container, batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Feb 20 08:04:59 np0005625203.localdomain systemd[1]: libpod-d02fe483b6f3e5ff491881b9ef70d6362ccfaf280cc77a03c601608f05d72756.scope: Deactivated successfully.
Feb 20 08:04:59 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4853d8cbf1b20eda152efab24d0e4cb43146df568657aa0bf8852ddc75389e64/merged/scripts supports timestamps until 2038 (0x7fffffff)
Feb 20 08:04:59 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4853d8cbf1b20eda152efab24d0e4cb43146df568657aa0bf8852ddc75389e64/merged/var/log/collectd supports timestamps until 2038 (0x7fffffff)
Feb 20 08:04:59 np0005625203.localdomain systemd[1]: Started libpod-conmon-43574e7f2e8aeca074fdd4e8544dcd5b03a668e63dbdf8da8ca806e1ad7100b1.scope.
Feb 20 08:04:59 np0005625203.localdomain sudo[61705]: pam_unix(sudo:session): session closed for user root
Feb 20 08:04:59 np0005625203.localdomain podman[61648]: 2026-02-20 08:04:59.824552773 +0000 UTC m=+0.230703907 container start d02fe483b6f3e5ff491881b9ef70d6362ccfaf280cc77a03c601608f05d72756 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:30Z, io.buildah.version=1.41.5, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=rhosp-rhel9/openstack-ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, container_name=ceilometer_init_log, vcs-type=git, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc.)
Feb 20 08:04:59 np0005625203.localdomain python3[61457]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ceilometer_init_log --conmon-pidfile /run/ceilometer_init_log.pid --detach=True --label config_id=tripleo_step3 --label container_name=ceilometer_init_log --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ceilometer_init_log.log --network none --user root --volume /var/log/containers/ceilometer:/var/log/ceilometer:z registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1 /bin/bash -c chown -R ceilometer:ceilometer /var/log/ceilometer
Feb 20 08:04:59 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 08:04:59 np0005625203.localdomain systemd[1]: libpod-03d98b0f72d456561b1719b9ea7cca397fa2a6eee36072d42dad24f1b64a0a8f.scope: Deactivated successfully.
Feb 20 08:04:59 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28cf91af479bb1e20b302e6f839411cf8d51c7bb716b7360ee3fc14275286500/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff)
Feb 20 08:04:59 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:04:59 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28cf91af479bb1e20b302e6f839411cf8d51c7bb716b7360ee3fc14275286500/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Feb 20 08:04:59 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28cf91af479bb1e20b302e6f839411cf8d51c7bb716b7360ee3fc14275286500/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff)
Feb 20 08:04:59 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28cf91af479bb1e20b302e6f839411cf8d51c7bb716b7360ee3fc14275286500/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff)
Feb 20 08:04:59 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28cf91af479bb1e20b302e6f839411cf8d51c7bb716b7360ee3fc14275286500/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Feb 20 08:04:59 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28cf91af479bb1e20b302e6f839411cf8d51c7bb716b7360ee3fc14275286500/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff)
Feb 20 08:04:59 np0005625203.localdomain podman[61633]: 2026-02-20 08:04:59.835154072 +0000 UTC m=+0.254846837 container init 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, version=17.1.13, build-date=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, distribution-scope=public, name=rhosp-rhel9/openstack-collectd, com.redhat.component=openstack-collectd-container)
Feb 20 08:04:59 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28cf91af479bb1e20b302e6f839411cf8d51c7bb716b7360ee3fc14275286500/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff)
Feb 20 08:04:59 np0005625203.localdomain systemd[1]: libpod-63a5f0af8a13ca44c84f40643a2c176ce000ee4e14c5b640cded40546e9f0038.scope: Deactivated successfully.
Feb 20 08:04:59 np0005625203.localdomain podman[61625]: 2026-02-20 08:04:59.842452778 +0000 UTC m=+0.276348244 container init 43574e7f2e8aeca074fdd4e8544dcd5b03a668e63dbdf8da8ca806e1ad7100b1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step3, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_virtlogd_wrapper, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', 
'/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, vcs-type=git, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-libvirt-container, build-date=2026-01-12T23:31:49Z, org.opencontainers.image.created=2026-01-12T23:31:49Z, batch=17.1_20260112.1, url=https://www.redhat.com, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, distribution-scope=public, io.buildah.version=1.41.5, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt)
Feb 20 08:04:59 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:04:59 np0005625203.localdomain sudo[61765]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Feb 20 08:04:59 np0005625203.localdomain podman[61633]: 2026-02-20 08:04:59.853177041 +0000 UTC m=+0.272869796 container start 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.13, build-date=2026-01-12T22:10:15Z, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
release=1766032510, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, container_name=collectd, vendor=Red Hat, Inc., managed_by=tripleo_ansible)
Feb 20 08:04:59 np0005625203.localdomain systemd-logind[759]: Existing logind session ID 28 used by new audit session, ignoring.
Feb 20 08:04:59 np0005625203.localdomain python3[61457]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name collectd --cap-add IPC_LOCK --conmon-pidfile /run/collectd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=4767aaabc3de112d8791c290aa2b669d --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step3 --label container_name=collectd --label managed_by=tripleo_ansible --label config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/collectd.log --memory 512m --network host --pid host --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume 
/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro --volume /var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/collectd:/var/log/collectd:rw,z --volume /var/lib/container-config-scripts:/config-scripts:ro --volume /var/lib/container-user-scripts:/scripts:z --volume /run:/run:rw --volume /sys/fs/cgroup:/sys/fs/cgroup:ro registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1
Feb 20 08:04:59 np0005625203.localdomain systemd[1]: Created slice User Slice of UID 0.
Feb 20 08:04:59 np0005625203.localdomain systemd[1]: Starting User Runtime Directory /run/user/0...
Feb 20 08:04:59 np0005625203.localdomain podman[61730]: 2026-02-20 08:04:59.88080812 +0000 UTC m=+0.089511231 container died d02fe483b6f3e5ff491881b9ef70d6362ccfaf280cc77a03c601608f05d72756 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:30Z, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_init_log, release=1766032510, url=https://www.redhat.com, io.openshift.expose-services=, build-date=2026-01-12T23:07:30Z, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-type=git, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true)
Feb 20 08:04:59 np0005625203.localdomain sudo[61780]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Feb 20 08:04:59 np0005625203.localdomain systemd[1]: Finished User Runtime Directory /run/user/0.
Feb 20 08:04:59 np0005625203.localdomain podman[61628]: 2026-02-20 08:04:59.893485584 +0000 UTC m=+0.321168847 container died 63a5f0af8a13ca44c84f40643a2c176ce000ee4e14c5b640cded40546e9f0038 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step3, io.openshift.expose-services=, io.buildah.version=1.41.5, vcs-type=git, build-date=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_statedir_owner, url=https://www.redhat.com, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team)
Feb 20 08:04:59 np0005625203.localdomain systemd[1]: Starting User Manager for UID 0...
Feb 20 08:04:59 np0005625203.localdomain systemd-logind[759]: Existing logind session ID 28 used by new audit session, ignoring.
Feb 20 08:04:59 np0005625203.localdomain podman[61625]: 2026-02-20 08:04:59.901832723 +0000 UTC m=+0.335728189 container start 43574e7f2e8aeca074fdd4e8544dcd5b03a668e63dbdf8da8ca806e1ad7100b1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, name=rhosp-rhel9/openstack-nova-libvirt, vcs-type=git, build-date=2026-01-12T23:31:49Z, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., container_name=nova_virtlogd_wrapper, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, architecture=x86_64, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, batch=17.1_20260112.1, com.redhat.component=openstack-nova-libvirt-container, url=https://www.redhat.com, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, config_id=tripleo_step3, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13)
Feb 20 08:04:59 np0005625203.localdomain python3[61457]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtlogd_wrapper --cgroupns=host --conmon-pidfile /run/nova_virtlogd_wrapper.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=2eb7e8e9794eebaba92e1ff8facc8868 --label config_id=tripleo_step3 --label container_name=nova_virtlogd_wrapper --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtlogd_wrapper.log --network host --pid host --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro --volume 
/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Feb 20 08:04:59 np0005625203.localdomain podman[61730]: 2026-02-20 08:04:59.919694448 +0000 UTC m=+0.128397539 container cleanup d02fe483b6f3e5ff491881b9ef70d6362ccfaf280cc77a03c601608f05d72756 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_init_log, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:07:30Z, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step3, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, config_data={'command': ['/bin/bash', '-c', 'chown -R ceilometer:ceilometer /var/log/ceilometer'], 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'none', 'start_order': 0, 'user': 'root', 'volumes': ['/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_init_log, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, release=1766032510, distribution-scope=public, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp-rhel9/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible)
Feb 20 08:04:59 np0005625203.localdomain systemd[1]: libpod-conmon-d02fe483b6f3e5ff491881b9ef70d6362ccfaf280cc77a03c601608f05d72756.scope: Deactivated successfully.
Feb 20 08:04:59 np0005625203.localdomain systemd[61806]: pam_unix(systemd-user:session): session opened for user root(uid=0) by (uid=0)
Feb 20 08:04:59 np0005625203.localdomain podman[61751]: 2026-02-20 08:04:59.936584963 +0000 UTC m=+0.096609743 container died 03d98b0f72d456561b1719b9ea7cca397fa2a6eee36072d42dad24f1b64a0a8f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, version=17.1.13, io.openshift.expose-services=, name=rhosp-rhel9/openstack-rsyslog, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:09Z, architecture=x86_64, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=rsyslog, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-rsyslog-container, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:09Z, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 rsyslog, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, tcib_managed=true, url=https://www.redhat.com, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97414cfc893df553083a7f7bb1c65a4f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog)
Feb 20 08:04:59 np0005625203.localdomain podman[61751]: 2026-02-20 08:04:59.96932646 +0000 UTC m=+0.129351200 container cleanup 03d98b0f72d456561b1719b9ea7cca397fa2a6eee36072d42dad24f1b64a0a8f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, container_name=rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, version=17.1.13, com.redhat.component=openstack-rsyslog-container, build-date=2026-01-12T22:10:09Z, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp-rhel9/openstack-rsyslog, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97414cfc893df553083a7f7bb1c65a4f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, 
vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, release=1766032510, io.openshift.expose-services=, distribution-scope=public, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, org.opencontainers.image.created=2026-01-12T22:10:09Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 rsyslog)
Feb 20 08:04:59 np0005625203.localdomain systemd[1]: libpod-conmon-03d98b0f72d456561b1719b9ea7cca397fa2a6eee36072d42dad24f1b64a0a8f.scope: Deactivated successfully.
Feb 20 08:05:00 np0005625203.localdomain systemd[61806]: Queued start job for default target Main User Target.
Feb 20 08:05:00 np0005625203.localdomain systemd[61806]: Created slice User Application Slice.
Feb 20 08:05:00 np0005625203.localdomain systemd[61806]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Feb 20 08:05:00 np0005625203.localdomain systemd[61806]: Started Daily Cleanup of User's Temporary Directories.
Feb 20 08:05:00 np0005625203.localdomain systemd[61806]: Reached target Paths.
Feb 20 08:05:00 np0005625203.localdomain systemd[61806]: Reached target Timers.
Feb 20 08:05:00 np0005625203.localdomain systemd[61806]: Starting D-Bus User Message Bus Socket...
Feb 20 08:05:00 np0005625203.localdomain systemd[61806]: Starting Create User's Volatile Files and Directories...
Feb 20 08:05:00 np0005625203.localdomain systemd[61806]: Finished Create User's Volatile Files and Directories.
Feb 20 08:05:00 np0005625203.localdomain systemd[61806]: Listening on D-Bus User Message Bus Socket.
Feb 20 08:05:00 np0005625203.localdomain systemd[61806]: Reached target Sockets.
Feb 20 08:05:00 np0005625203.localdomain systemd[61806]: Reached target Basic System.
Feb 20 08:05:00 np0005625203.localdomain systemd[61806]: Reached target Main User Target.
Feb 20 08:05:00 np0005625203.localdomain systemd[61806]: Startup finished in 110ms.
Feb 20 08:05:00 np0005625203.localdomain systemd[1]: Started User Manager for UID 0.
Feb 20 08:05:00 np0005625203.localdomain systemd[1]: Started Session c1 of User root.
Feb 20 08:05:00 np0005625203.localdomain systemd[1]: Started Session c2 of User root.
Feb 20 08:05:00 np0005625203.localdomain sudo[61765]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Feb 20 08:05:00 np0005625203.localdomain sudo[61780]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Feb 20 08:05:00 np0005625203.localdomain podman[61767]: 2026-02-20 08:05:00.07816274 +0000 UTC m=+0.225836546 container cleanup 63a5f0af8a13ca44c84f40643a2c176ce000ee4e14c5b640cded40546e9f0038 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_statedir_owner, vendor=Red Hat, Inc., managed_by=tripleo_ansible, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']}, org.opencontainers.image.created=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, config_id=tripleo_step3, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_statedir_owner, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, 
distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, tcib_managed=true, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Feb 20 08:05:00 np0005625203.localdomain systemd[1]: libpod-conmon-63a5f0af8a13ca44c84f40643a2c176ce000ee4e14c5b640cded40546e9f0038.scope: Deactivated successfully.
Feb 20 08:05:00 np0005625203.localdomain python3[61457]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_statedir_owner --conmon-pidfile /run/nova_statedir_owner.pid --detach=False --env NOVA_STATEDIR_OWNERSHIP_SKIP=triliovault-mounts --env TRIPLEO_DEPLOY_IDENTIFIER=1771573229 --env __OS_DEBUG=true --label config_id=tripleo_step3 --label container_name=nova_statedir_owner --label managed_by=tripleo_ansible --label config_data={'command': '/container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': 'triliovault-mounts', 'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229', '__OS_DEBUG': 'true'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'none', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/container-config-scripts:/container-config-scripts:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_statedir_owner.log --network none --privileged=False --security-opt label=disable --user root --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/container-config-scripts:/container-config-scripts:z registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1 /container-config-scripts/pyshim.sh /container-config-scripts/nova_statedir_ownership.py
Feb 20 08:05:00 np0005625203.localdomain sshd[61440]: Received disconnect from 40.81.244.142 port 56758:11: Bye Bye [preauth]
Feb 20 08:05:00 np0005625203.localdomain sshd[61440]: Disconnected from authenticating user root 40.81.244.142 port 56758 [preauth]
Feb 20 08:05:00 np0005625203.localdomain sudo[61780]: pam_unix(sudo:session): session closed for user root
Feb 20 08:05:00 np0005625203.localdomain systemd[1]: session-c2.scope: Deactivated successfully.
Feb 20 08:05:00 np0005625203.localdomain sudo[61765]: pam_unix(sudo:session): session closed for user root
Feb 20 08:05:00 np0005625203.localdomain systemd[1]: session-c1.scope: Deactivated successfully.
Feb 20 08:05:00 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:05:00 np0005625203.localdomain podman[61768]: 2026-02-20 08:05:00.287048029 +0000 UTC m=+0.431143914 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=starting, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-collectd, architecture=x86_64, container_name=collectd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', 
'/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, com.redhat.component=openstack-collectd-container, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, release=1766032510, config_id=tripleo_step3)
Feb 20 08:05:00 np0005625203.localdomain podman[61990]: 2026-02-20 08:05:00.400159082 +0000 UTC m=+0.072739620 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, url=https://www.redhat.com, vcs-type=git, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.5, 
org.opencontainers.image.created=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, name=rhosp-rhel9/openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container)
Feb 20 08:05:00 np0005625203.localdomain podman[61991]: 2026-02-20 08:05:00.467126642 +0000 UTC m=+0.137620675 container create e44638810041aae6c932c01f4e4fb5e56658e5232a39c0d9fa7699f76e050660 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, version=17.1.13, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, release=1766032510, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, build-date=2026-01-12T23:31:49Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, name=rhosp-rhel9/openstack-nova-libvirt)
Feb 20 08:05:00 np0005625203.localdomain podman[61768]: 2026-02-20 08:05:00.47444902 +0000 UTC m=+0.618544855 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, batch=17.1_20260112.1, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, container_name=collectd, vcs-type=git, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:15Z)
Feb 20 08:05:00 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:05:00 np0005625203.localdomain systemd[1]: Started libpod-conmon-e44638810041aae6c932c01f4e4fb5e56658e5232a39c0d9fa7699f76e050660.scope.
Feb 20 08:05:00 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 08:05:00 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f7dd65b6e00d2007afbafb03dbcb7c5d21b53f2ae8aaceb2ca2bcb487184bc6/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff)
Feb 20 08:05:00 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f7dd65b6e00d2007afbafb03dbcb7c5d21b53f2ae8aaceb2ca2bcb487184bc6/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Feb 20 08:05:00 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f7dd65b6e00d2007afbafb03dbcb7c5d21b53f2ae8aaceb2ca2bcb487184bc6/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Feb 20 08:05:00 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f7dd65b6e00d2007afbafb03dbcb7c5d21b53f2ae8aaceb2ca2bcb487184bc6/merged/var/log/swtpm/libvirt supports timestamps until 2038 (0x7fffffff)
Feb 20 08:05:00 np0005625203.localdomain podman[61991]: 2026-02-20 08:05:00.429399831 +0000 UTC m=+0.099893864 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Feb 20 08:05:00 np0005625203.localdomain podman[61991]: 2026-02-20 08:05:00.531818132 +0000 UTC m=+0.202312175 container init e44638810041aae6c932c01f4e4fb5e56658e5232a39c0d9fa7699f76e050660 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:31:49Z, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-libvirt-container, version=17.1.13, distribution-scope=public, name=rhosp-rhel9/openstack-nova-libvirt, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:31:49Z, batch=17.1_20260112.1, tcib_managed=true, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com)
Feb 20 08:05:00 np0005625203.localdomain podman[61991]: 2026-02-20 08:05:00.54014107 +0000 UTC m=+0.210635103 container start e44638810041aae6c932c01f4e4fb5e56658e5232a39c0d9fa7699f76e050660 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, name=rhosp-rhel9/openstack-nova-libvirt, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-libvirt-container, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:31:49Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, tcib_managed=true, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:31:49Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, distribution-scope=public, version=17.1.13, batch=17.1_20260112.1, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt)
Feb 20 08:05:00 np0005625203.localdomain podman[61990]: 2026-02-20 08:05:00.594276973 +0000 UTC m=+0.266857511 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, 
org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.expose-services=, distribution-scope=public, batch=17.1_20260112.1, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, version=17.1.13, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd)
Feb 20 08:05:00 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:05:00 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-d62367a3a75d40fe308c9a1ba2227d9d21236494de6c00d0d8f446eab81f58cb-merged.mount: Deactivated successfully.
Feb 20 08:05:00 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-03d98b0f72d456561b1719b9ea7cca397fa2a6eee36072d42dad24f1b64a0a8f-userdata-shm.mount: Deactivated successfully.
Feb 20 08:05:00 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-399537d55839d97f33c8f4bb32cbd0a91de930b46800cfbfe38988d47bef0997-merged.mount: Deactivated successfully.
Feb 20 08:05:00 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-63a5f0af8a13ca44c84f40643a2c176ce000ee4e14c5b640cded40546e9f0038-userdata-shm.mount: Deactivated successfully.
Feb 20 08:05:00 np0005625203.localdomain podman[62073]: 2026-02-20 08:05:00.733333142 +0000 UTC m=+0.077007573 container create d7cabd49781e8804ef140d29dc29d560de1c00c597492a71091452772fd383e9 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, batch=17.1_20260112.1, container_name=nova_virtsecretd, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:31:49Z, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-libvirt, distribution-scope=public, managed_by=tripleo_ansible, version=17.1.13, vcs-type=git, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc.)
Feb 20 08:05:00 np0005625203.localdomain systemd[1]: Started libpod-conmon-d7cabd49781e8804ef140d29dc29d560de1c00c597492a71091452772fd383e9.scope.
Feb 20 08:05:00 np0005625203.localdomain podman[62073]: 2026-02-20 08:05:00.689399588 +0000 UTC m=+0.033074069 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Feb 20 08:05:00 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 08:05:00 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cd727fde2209c79d9821216422c7a3f4abac9c93918abc294ef4cb9196199ef/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff)
Feb 20 08:05:00 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cd727fde2209c79d9821216422c7a3f4abac9c93918abc294ef4cb9196199ef/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff)
Feb 20 08:05:00 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cd727fde2209c79d9821216422c7a3f4abac9c93918abc294ef4cb9196199ef/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Feb 20 08:05:00 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cd727fde2209c79d9821216422c7a3f4abac9c93918abc294ef4cb9196199ef/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff)
Feb 20 08:05:00 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cd727fde2209c79d9821216422c7a3f4abac9c93918abc294ef4cb9196199ef/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff)
Feb 20 08:05:00 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cd727fde2209c79d9821216422c7a3f4abac9c93918abc294ef4cb9196199ef/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Feb 20 08:05:00 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cd727fde2209c79d9821216422c7a3f4abac9c93918abc294ef4cb9196199ef/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff)
Feb 20 08:05:00 np0005625203.localdomain podman[62073]: 2026-02-20 08:05:00.806829855 +0000 UTC m=+0.150504266 container init d7cabd49781e8804ef140d29dc29d560de1c00c597492a71091452772fd383e9 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, release=1766032510, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, config_id=tripleo_step3, architecture=x86_64, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:31:49Z, container_name=nova_virtsecretd, distribution-scope=public, com.redhat.component=openstack-nova-libvirt-container, name=rhosp-rhel9/openstack-nova-libvirt, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:31:49Z, batch=17.1_20260112.1, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:05:00 np0005625203.localdomain podman[62073]: 2026-02-20 08:05:00.816781964 +0000 UTC m=+0.160456375 container start d7cabd49781e8804ef140d29dc29d560de1c00c597492a71091452772fd383e9 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, io.buildah.version=1.41.5, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., url=https://www.redhat.com, tcib_managed=true, build-date=2026-01-12T23:31:49Z, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, name=rhosp-rhel9/openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, distribution-scope=public, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, container_name=nova_virtsecretd, managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, org.opencontainers.image.created=2026-01-12T23:31:49Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510)
Feb 20 08:05:00 np0005625203.localdomain python3[61457]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtsecretd --cgroupns=host --conmon-pidfile /run/nova_virtsecretd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=2eb7e8e9794eebaba92e1ff8facc8868 --label config_id=tripleo_step3 --label container_name=nova_virtsecretd --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtsecretd.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro 
registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Feb 20 08:05:00 np0005625203.localdomain sudo[62092]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Feb 20 08:05:00 np0005625203.localdomain systemd-logind[759]: Existing logind session ID 28 used by new audit session, ignoring.
Feb 20 08:05:00 np0005625203.localdomain systemd[1]: Started Session c3 of User root.
Feb 20 08:05:00 np0005625203.localdomain sudo[62092]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Feb 20 08:05:00 np0005625203.localdomain sudo[62092]: pam_unix(sudo:session): session closed for user root
Feb 20 08:05:00 np0005625203.localdomain systemd[1]: session-c3.scope: Deactivated successfully.
Feb 20 08:05:01 np0005625203.localdomain podman[62208]: 2026-02-20 08:05:01.251196708 +0000 UTC m=+0.064238027 container create 5ce0d3670662f8b4e9768bce15c2e0f242710ad4849921266d1bb819bab350c9 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:31:49Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_virtnodedevd, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, version=17.1.13, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, build-date=2026-01-12T23:31:49Z, release=1766032510, io.buildah.version=1.41.5, vcs-type=git, architecture=x86_64, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-libvirt-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-libvirt, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, vendor=Red Hat, Inc.)
Feb 20 08:05:01 np0005625203.localdomain podman[62220]: 2026-02-20 08:05:01.278447415 +0000 UTC m=+0.068748567 container create 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-iscsid, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.5, io.openshift.expose-services=, release=1766032510, config_id=tripleo_step3, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, build-date=2026-01-12T22:34:43Z, vcs-ref=705339545363fec600102567c4e923938e0f43b3, managed_by=tripleo_ansible, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20260112.1)
Feb 20 08:05:01 np0005625203.localdomain systemd[1]: Started libpod-conmon-5ce0d3670662f8b4e9768bce15c2e0f242710ad4849921266d1bb819bab350c9.scope.
Feb 20 08:05:01 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 08:05:01 np0005625203.localdomain systemd[1]: Started libpod-conmon-54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.scope.
Feb 20 08:05:01 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c44b9ee3b74198541ae59b815b4be56ebc6ae69bec4e8dadcdc16fb5e4a48b77/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff)
Feb 20 08:05:01 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c44b9ee3b74198541ae59b815b4be56ebc6ae69bec4e8dadcdc16fb5e4a48b77/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff)
Feb 20 08:05:01 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c44b9ee3b74198541ae59b815b4be56ebc6ae69bec4e8dadcdc16fb5e4a48b77/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Feb 20 08:05:01 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c44b9ee3b74198541ae59b815b4be56ebc6ae69bec4e8dadcdc16fb5e4a48b77/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Feb 20 08:05:01 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c44b9ee3b74198541ae59b815b4be56ebc6ae69bec4e8dadcdc16fb5e4a48b77/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff)
Feb 20 08:05:01 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c44b9ee3b74198541ae59b815b4be56ebc6ae69bec4e8dadcdc16fb5e4a48b77/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff)
Feb 20 08:05:01 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c44b9ee3b74198541ae59b815b4be56ebc6ae69bec4e8dadcdc16fb5e4a48b77/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff)
Feb 20 08:05:01 np0005625203.localdomain podman[62208]: 2026-02-20 08:05:01.217006756 +0000 UTC m=+0.030048065 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Feb 20 08:05:01 np0005625203.localdomain podman[62208]: 2026-02-20 08:05:01.320488101 +0000 UTC m=+0.133529420 container init 5ce0d3670662f8b4e9768bce15c2e0f242710ad4849921266d1bb819bab350c9 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, vendor=Red Hat, Inc., build-date=2026-01-12T23:31:49Z, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step3, com.redhat.component=openstack-nova-libvirt-container, org.opencontainers.image.created=2026-01-12T23:31:49Z, io.openshift.expose-services=, architecture=x86_64, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-libvirt, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, container_name=nova_virtnodedevd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.5)
Feb 20 08:05:01 np0005625203.localdomain podman[62208]: 2026-02-20 08:05:01.329902573 +0000 UTC m=+0.142943902 container start 5ce0d3670662f8b4e9768bce15c2e0f242710ad4849921266d1bb819bab350c9 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, batch=17.1_20260112.1, container_name=nova_virtnodedevd, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:31:49Z, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', 
'/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.5, vcs-type=git, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, io.openshift.expose-services=, tcib_managed=true, url=https://www.redhat.com, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, name=rhosp-rhel9/openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, org.opencontainers.image.created=2026-01-12T23:31:49Z, managed_by=tripleo_ansible, architecture=x86_64)
Feb 20 08:05:01 np0005625203.localdomain python3[61457]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtnodedevd --cgroupns=host --conmon-pidfile /run/nova_virtnodedevd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=2eb7e8e9794eebaba92e1ff8facc8868 --label config_id=tripleo_step3 --label container_name=nova_virtnodedevd --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtnodedevd.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro 
registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Feb 20 08:05:01 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 08:05:01 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfd7902c54f5e5aa9d28d09659c7eace41e35d954c014d763e6c73387e43d5dd/merged/etc/target supports timestamps until 2038 (0x7fffffff)
Feb 20 08:05:01 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfd7902c54f5e5aa9d28d09659c7eace41e35d954c014d763e6c73387e43d5dd/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Feb 20 08:05:01 np0005625203.localdomain podman[62220]: 2026-02-20 08:05:01.242193089 +0000 UTC m=+0.032494221 image pull  registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1
Feb 20 08:05:01 np0005625203.localdomain sudo[62246]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Feb 20 08:05:01 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:05:01 np0005625203.localdomain podman[62220]: 2026-02-20 08:05:01.371434243 +0000 UTC m=+0.161735385 container init 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, vendor=Red Hat, Inc., release=1766032510, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:34:43Z, managed_by=tripleo_ansible, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, tcib_managed=true, build-date=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container)
Feb 20 08:05:01 np0005625203.localdomain systemd-logind[759]: Existing logind session ID 28 used by new audit session, ignoring.
Feb 20 08:05:01 np0005625203.localdomain sshd[62259]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:05:01 np0005625203.localdomain systemd[1]: Started Session c4 of User root.
Feb 20 08:05:01 np0005625203.localdomain sudo[62263]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Feb 20 08:05:01 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:05:01 np0005625203.localdomain systemd-logind[759]: Existing logind session ID 28 used by new audit session, ignoring.
Feb 20 08:05:01 np0005625203.localdomain sudo[62246]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Feb 20 08:05:01 np0005625203.localdomain systemd[1]: Started Session c5 of User root.
Feb 20 08:05:01 np0005625203.localdomain sudo[62263]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Feb 20 08:05:01 np0005625203.localdomain podman[62220]: 2026-02-20 08:05:01.465191476 +0000 UTC m=+0.255492598 container start 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, build-date=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:34:43Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-iscsid, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, batch=17.1_20260112.1, url=https://www.redhat.com, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.5, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public)
Feb 20 08:05:01 np0005625203.localdomain python3[61457]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name iscsid --conmon-pidfile /run/iscsid.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=ea40a11d6c51260bfa854053d924f0d3 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step3 --label container_name=iscsid --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/iscsid.log --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run:/run --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro --volume /etc/target:/etc/target:z --volume /var/lib/iscsi:/var/lib/iscsi:z registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1
Feb 20 08:05:01 np0005625203.localdomain podman[62264]: 2026-02-20 08:05:01.488110198 +0000 UTC m=+0.066655592 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=starting, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step3, 
maintainer=OpenStack TripleO Team, version=17.1.13, container_name=iscsid, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, com.redhat.component=openstack-iscsid-container, release=1766032510, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2026-01-12T22:34:43Z, architecture=x86_64, managed_by=tripleo_ansible, url=https://www.redhat.com, distribution-scope=public)
Feb 20 08:05:01 np0005625203.localdomain podman[62264]: 2026-02-20 08:05:01.494290659 +0000 UTC m=+0.072836073 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:34:43Z, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, container_name=iscsid, vcs-type=git, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2026-01-12T22:34:43Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-iscsid, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-iscsid-container)
Feb 20 08:05:01 np0005625203.localdomain sudo[62263]: pam_unix(sudo:session): session closed for user root
Feb 20 08:05:01 np0005625203.localdomain systemd[1]: session-c5.scope: Deactivated successfully.
Feb 20 08:05:01 np0005625203.localdomain podman[62264]: unhealthy
Feb 20 08:05:01 np0005625203.localdomain sudo[62246]: pam_unix(sudo:session): session closed for user root
Feb 20 08:05:01 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:05:01 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Failed with result 'exit-code'.
Feb 20 08:05:01 np0005625203.localdomain systemd[1]: session-c4.scope: Deactivated successfully.
Feb 20 08:05:01 np0005625203.localdomain kernel: Loading iSCSI transport class v2.0-870.
Feb 20 08:05:02 np0005625203.localdomain podman[62384]: 2026-02-20 08:05:02.038082721 +0000 UTC m=+0.090037868 container create 73f1f38f88063860c7fee4ea71387b5375dfe69b576a163f03d7bd06870e2e1c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, vcs-type=git, container_name=nova_virtstoraged, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', 
'/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, name=rhosp-rhel9/openstack-nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-nova-libvirt-container, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:31:49Z, io.buildah.version=1.41.5, version=17.1.13, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:31:49Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:05:02 np0005625203.localdomain podman[62384]: 2026-02-20 08:05:01.98880902 +0000 UTC m=+0.040764257 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Feb 20 08:05:02 np0005625203.localdomain systemd[1]: Started libpod-conmon-73f1f38f88063860c7fee4ea71387b5375dfe69b576a163f03d7bd06870e2e1c.scope.
Feb 20 08:05:02 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 08:05:02 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/126ec7093b4409981b35de203497fb4c932b8ea0a58e787934a1a228394ab4e1/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff)
Feb 20 08:05:02 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/126ec7093b4409981b35de203497fb4c932b8ea0a58e787934a1a228394ab4e1/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff)
Feb 20 08:05:02 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/126ec7093b4409981b35de203497fb4c932b8ea0a58e787934a1a228394ab4e1/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Feb 20 08:05:02 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/126ec7093b4409981b35de203497fb4c932b8ea0a58e787934a1a228394ab4e1/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff)
Feb 20 08:05:02 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/126ec7093b4409981b35de203497fb4c932b8ea0a58e787934a1a228394ab4e1/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff)
Feb 20 08:05:02 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/126ec7093b4409981b35de203497fb4c932b8ea0a58e787934a1a228394ab4e1/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Feb 20 08:05:02 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/126ec7093b4409981b35de203497fb4c932b8ea0a58e787934a1a228394ab4e1/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff)
Feb 20 08:05:02 np0005625203.localdomain podman[62384]: 2026-02-20 08:05:02.118374265 +0000 UTC m=+0.170329402 container init 73f1f38f88063860c7fee4ea71387b5375dfe69b576a163f03d7bd06870e2e1c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.13, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, managed_by=tripleo_ansible, container_name=nova_virtstoraged, io.openshift.expose-services=, release=1766032510, build-date=2026-01-12T23:31:49Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, com.redhat.component=openstack-nova-libvirt-container, name=rhosp-rhel9/openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, vcs-type=git)
Feb 20 08:05:02 np0005625203.localdomain podman[62384]: 2026-02-20 08:05:02.127637233 +0000 UTC m=+0.179592370 container start 73f1f38f88063860c7fee4ea71387b5375dfe69b576a163f03d7bd06870e2e1c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, distribution-scope=public, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:31:49Z, batch=17.1_20260112.1, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', 
'/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, build-date=2026-01-12T23:31:49Z, architecture=x86_64, com.redhat.component=openstack-nova-libvirt-container, vendor=Red Hat, Inc., tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_virtstoraged, release=1766032510, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:05:02 np0005625203.localdomain python3[61457]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtstoraged --cgroupns=host --conmon-pidfile /run/nova_virtstoraged.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=2eb7e8e9794eebaba92e1ff8facc8868 --label config_id=tripleo_step3 --label container_name=nova_virtstoraged --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtstoraged.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro 
registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Feb 20 08:05:02 np0005625203.localdomain sudo[62403]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Feb 20 08:05:02 np0005625203.localdomain systemd-logind[759]: Existing logind session ID 28 used by new audit session, ignoring.
Feb 20 08:05:02 np0005625203.localdomain systemd[1]: Started Session c6 of User root.
Feb 20 08:05:02 np0005625203.localdomain sudo[62403]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Feb 20 08:05:02 np0005625203.localdomain sudo[62403]: pam_unix(sudo:session): session closed for user root
Feb 20 08:05:02 np0005625203.localdomain systemd[1]: session-c6.scope: Deactivated successfully.
Feb 20 08:05:02 np0005625203.localdomain podman[62486]: 2026-02-20 08:05:02.57792402 +0000 UTC m=+0.085628201 container create 441ab58fa4410c6147d50fcd6a4ba2af0491038916b3138d0f4ccccaa5c6e26e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, managed_by=tripleo_ansible, container_name=nova_virtqemud, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2026-01-12T23:31:49Z, url=https://www.redhat.com, architecture=x86_64, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:31:49Z, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, maintainer=OpenStack TripleO Team, distribution-scope=public, release=1766032510, version=17.1.13)
Feb 20 08:05:02 np0005625203.localdomain systemd[1]: Started libpod-conmon-441ab58fa4410c6147d50fcd6a4ba2af0491038916b3138d0f4ccccaa5c6e26e.scope.
Feb 20 08:05:02 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 08:05:02 np0005625203.localdomain podman[62486]: 2026-02-20 08:05:02.534611174 +0000 UTC m=+0.042315375 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Feb 20 08:05:02 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca430835467936ee65e9cfbaab428c832bc1d43f5f133ae27dcd34c9dca062b5/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff)
Feb 20 08:05:02 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca430835467936ee65e9cfbaab428c832bc1d43f5f133ae27dcd34c9dca062b5/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff)
Feb 20 08:05:02 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca430835467936ee65e9cfbaab428c832bc1d43f5f133ae27dcd34c9dca062b5/merged/var/log/swtpm supports timestamps until 2038 (0x7fffffff)
Feb 20 08:05:02 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca430835467936ee65e9cfbaab428c832bc1d43f5f133ae27dcd34c9dca062b5/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Feb 20 08:05:02 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca430835467936ee65e9cfbaab428c832bc1d43f5f133ae27dcd34c9dca062b5/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff)
Feb 20 08:05:02 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca430835467936ee65e9cfbaab428c832bc1d43f5f133ae27dcd34c9dca062b5/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff)
Feb 20 08:05:02 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca430835467936ee65e9cfbaab428c832bc1d43f5f133ae27dcd34c9dca062b5/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Feb 20 08:05:02 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca430835467936ee65e9cfbaab428c832bc1d43f5f133ae27dcd34c9dca062b5/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff)
Feb 20 08:05:02 np0005625203.localdomain podman[62486]: 2026-02-20 08:05:02.644001762 +0000 UTC m=+0.151705943 container init 441ab58fa4410c6147d50fcd6a4ba2af0491038916b3138d0f4ccccaa5c6e26e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=nova_virtqemud, org.opencontainers.image.created=2026-01-12T23:31:49Z, version=17.1.13, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, config_id=tripleo_step3, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', 
'/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2026-01-12T23:31:49Z, io.buildah.version=1.41.5, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.component=openstack-nova-libvirt-container, maintainer=OpenStack TripleO Team, vcs-type=git, name=rhosp-rhel9/openstack-nova-libvirt)
Feb 20 08:05:02 np0005625203.localdomain podman[62486]: 2026-02-20 08:05:02.657469721 +0000 UTC m=+0.165173902 container start 441ab58fa4410c6147d50fcd6a4ba2af0491038916b3138d0f4ccccaa5c6e26e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, vcs-type=git, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:31:49Z, com.redhat.component=openstack-nova-libvirt-container, maintainer=OpenStack TripleO Team, container_name=nova_virtqemud, batch=17.1_20260112.1, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, config_id=tripleo_step3, build-date=2026-01-12T23:31:49Z, managed_by=tripleo_ansible, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., release=1766032510)
Feb 20 08:05:02 np0005625203.localdomain python3[61457]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtqemud --cgroupns=host --conmon-pidfile /run/nova_virtqemud.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=2eb7e8e9794eebaba92e1ff8facc8868 --label config_id=tripleo_step3 --label container_name=nova_virtqemud --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtqemud.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume 
/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro --volume /var/log/containers/libvirt/swtpm:/var/log/swtpm:z registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Feb 20 08:05:02 np0005625203.localdomain sudo[62506]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Feb 20 08:05:02 np0005625203.localdomain systemd-logind[759]: Existing logind session ID 28 used by new audit session, ignoring.
Feb 20 08:05:02 np0005625203.localdomain systemd[1]: Started Session c7 of User root.
Feb 20 08:05:02 np0005625203.localdomain sudo[62506]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Feb 20 08:05:02 np0005625203.localdomain sudo[62506]: pam_unix(sudo:session): session closed for user root
Feb 20 08:05:02 np0005625203.localdomain systemd[1]: session-c7.scope: Deactivated successfully.
Feb 20 08:05:03 np0005625203.localdomain podman[62595]: 2026-02-20 08:05:03.112800315 +0000 UTC m=+0.076592481 container create 2883491975f4d07a87ab6f68c2456ff531ed53703b8e61bf27c807c46acce03e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_virtproxyd, version=17.1.13, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, config_id=tripleo_step3, batch=17.1_20260112.1, build-date=2026-01-12T23:31:49Z, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1766032510, architecture=x86_64, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:31:49Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc.)
Feb 20 08:05:03 np0005625203.localdomain systemd[1]: Started libpod-conmon-2883491975f4d07a87ab6f68c2456ff531ed53703b8e61bf27c807c46acce03e.scope.
Feb 20 08:05:03 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 08:05:03 np0005625203.localdomain podman[62595]: 2026-02-20 08:05:03.07110381 +0000 UTC m=+0.034896016 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Feb 20 08:05:03 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5839c92e4849940a9cba2db411bd09f73de0be67a96e976da2027adb67ab877e/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff)
Feb 20 08:05:03 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5839c92e4849940a9cba2db411bd09f73de0be67a96e976da2027adb67ab877e/merged/var/lib/vhost_sockets supports timestamps until 2038 (0x7fffffff)
Feb 20 08:05:03 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5839c92e4849940a9cba2db411bd09f73de0be67a96e976da2027adb67ab877e/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Feb 20 08:05:03 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5839c92e4849940a9cba2db411bd09f73de0be67a96e976da2027adb67ab877e/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Feb 20 08:05:03 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5839c92e4849940a9cba2db411bd09f73de0be67a96e976da2027adb67ab877e/merged/var/log/libvirt supports timestamps until 2038 (0x7fffffff)
Feb 20 08:05:03 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5839c92e4849940a9cba2db411bd09f73de0be67a96e976da2027adb67ab877e/merged/var/cache/libvirt supports timestamps until 2038 (0x7fffffff)
Feb 20 08:05:03 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5839c92e4849940a9cba2db411bd09f73de0be67a96e976da2027adb67ab877e/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff)
Feb 20 08:05:03 np0005625203.localdomain podman[62595]: 2026-02-20 08:05:03.182305054 +0000 UTC m=+0.146097220 container init 2883491975f4d07a87ab6f68c2456ff531ed53703b8e61bf27c807c46acce03e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:31:49Z, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.5, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', 
'/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-nova-libvirt, container_name=nova_virtproxyd, com.redhat.component=openstack-nova-libvirt-container, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2026-01-12T23:31:49Z, config_id=tripleo_step3, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=)
Feb 20 08:05:03 np0005625203.localdomain podman[62595]: 2026-02-20 08:05:03.193390508 +0000 UTC m=+0.157182684 container start 2883491975f4d07a87ab6f68c2456ff531ed53703b8e61bf27c807c46acce03e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, distribution-scope=public, build-date=2026-01-12T23:31:49Z, tcib_managed=true, name=rhosp-rhel9/openstack-nova-libvirt, architecture=x86_64, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:31:49Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtproxyd, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step3, com.redhat.component=openstack-nova-libvirt-container, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': 
['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc.)
Feb 20 08:05:03 np0005625203.localdomain python3[61457]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_virtproxyd --cgroupns=host --conmon-pidfile /run/nova_virtproxyd.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=2eb7e8e9794eebaba92e1ff8facc8868 --label config_id=tripleo_step3 --label container_name=nova_virtproxyd --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_virtproxyd.log --network host --pid host --pids-limit 65536 --privileged=True --security-opt label=level:s0 --security-opt label=type:spc_t --security-opt label=filetype:container_file_t --ulimit nofile=131072 --ulimit nproc=126960 --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/libvirt:/var/log/libvirt:shared,z --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /run:/run --volume /sys/fs/cgroup:/sys/fs/cgroup --volume /sys/fs/selinux:/sys/fs/selinux --volume /etc/selinux/config:/etc/selinux/config:ro --volume /etc/libvirt:/etc/libvirt:shared --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/cache/libvirt:/var/cache/libvirt:shared --volume /var/lib/vhost_sockets:/var/lib/vhost_sockets --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro 
registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Feb 20 08:05:03 np0005625203.localdomain sudo[62614]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Feb 20 08:05:03 np0005625203.localdomain systemd-logind[759]: Existing logind session ID 28 used by new audit session, ignoring.
Feb 20 08:05:03 np0005625203.localdomain systemd[1]: Started Session c8 of User root.
Feb 20 08:05:03 np0005625203.localdomain sudo[62614]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Feb 20 08:05:03 np0005625203.localdomain sudo[62614]: pam_unix(sudo:session): session closed for user root
Feb 20 08:05:03 np0005625203.localdomain systemd[1]: session-c8.scope: Deactivated successfully.
Feb 20 08:05:03 np0005625203.localdomain sudo[61455]: pam_unix(sudo:session): session closed for user root
Feb 20 08:05:03 np0005625203.localdomain sudo[62673]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-udfqjpzogwdyvbqdjeahiiiovugfqqqi ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:05:03 np0005625203.localdomain sudo[62673]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:05:03 np0005625203.localdomain python3[62675]: ansible-file Invoked with path=/etc/systemd/system/tripleo_collectd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:05:03 np0005625203.localdomain sudo[62673]: pam_unix(sudo:session): session closed for user root
Feb 20 08:05:03 np0005625203.localdomain sudo[62689]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wqassyvihxrlhvtmjntcfjnjpgmczfvt ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:05:03 np0005625203.localdomain sudo[62689]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:05:04 np0005625203.localdomain python3[62691]: ansible-file Invoked with path=/etc/systemd/system/tripleo_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:05:04 np0005625203.localdomain sudo[62689]: pam_unix(sudo:session): session closed for user root
Feb 20 08:05:04 np0005625203.localdomain sudo[62705]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ktpjmlkzgifkntsosslktmkroptrzqne ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:05:04 np0005625203.localdomain sudo[62705]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:05:04 np0005625203.localdomain python3[62707]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:05:04 np0005625203.localdomain sudo[62705]: pam_unix(sudo:session): session closed for user root
Feb 20 08:05:04 np0005625203.localdomain sudo[62721]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bqkqfbfzaarsjmedwlewzxlrqdnounqz ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:05:04 np0005625203.localdomain sudo[62721]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:05:04 np0005625203.localdomain sshd[62259]: Invalid user alberto from 185.246.128.171 port 30708
Feb 20 08:05:04 np0005625203.localdomain python3[62723]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:05:04 np0005625203.localdomain sudo[62721]: pam_unix(sudo:session): session closed for user root
Feb 20 08:05:04 np0005625203.localdomain sudo[62737]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-labtdewwxmdiuyubvxuztexocnxatzwu ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:05:04 np0005625203.localdomain sudo[62737]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:05:04 np0005625203.localdomain python3[62739]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:05:04 np0005625203.localdomain sudo[62737]: pam_unix(sudo:session): session closed for user root
Feb 20 08:05:04 np0005625203.localdomain sudo[62753]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lwswnxvwjoxsxjiuxdtgunqmdkgkcueh ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:05:04 np0005625203.localdomain sudo[62753]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:05:05 np0005625203.localdomain python3[62755]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:05:05 np0005625203.localdomain sudo[62753]: pam_unix(sudo:session): session closed for user root
Feb 20 08:05:05 np0005625203.localdomain sudo[62769]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wwfblaczbqlsukmvxufdtwheoxggyiwg ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:05:05 np0005625203.localdomain sudo[62769]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:05:05 np0005625203.localdomain sshd[62259]: Disconnecting invalid user alberto 185.246.128.171 port 30708: Change of username or service not allowed: (alberto,ssh-connection) -> (deployer,ssh-connection) [preauth]
Feb 20 08:05:05 np0005625203.localdomain python3[62771]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:05:05 np0005625203.localdomain sudo[62769]: pam_unix(sudo:session): session closed for user root
Feb 20 08:05:05 np0005625203.localdomain sudo[62785]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-buqjrkksfqrcmobwtzgqqepbxvvnuwit ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:05:05 np0005625203.localdomain sudo[62785]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:05:05 np0005625203.localdomain python3[62787]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:05:05 np0005625203.localdomain sudo[62785]: pam_unix(sudo:session): session closed for user root
Feb 20 08:05:05 np0005625203.localdomain sudo[62801]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fbbenmgayhqbrfkhffanmbmhrviihdfb ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:05:05 np0005625203.localdomain sudo[62801]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:05:05 np0005625203.localdomain python3[62803]: ansible-file Invoked with path=/etc/systemd/system/tripleo_rsyslog.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:05:05 np0005625203.localdomain sudo[62801]: pam_unix(sudo:session): session closed for user root
Feb 20 08:05:05 np0005625203.localdomain sudo[62817]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lgkfsarackrisnzubztsdedpzfbhenta ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:05:05 np0005625203.localdomain sudo[62817]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:05:06 np0005625203.localdomain python3[62819]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_collectd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 20 08:05:06 np0005625203.localdomain sudo[62817]: pam_unix(sudo:session): session closed for user root
Feb 20 08:05:06 np0005625203.localdomain sudo[62833]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kmmwyfouaitnlmxcdlmkswjsknthslrd ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:05:06 np0005625203.localdomain sudo[62833]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:05:06 np0005625203.localdomain python3[62835]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_iscsid_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 20 08:05:06 np0005625203.localdomain sudo[62833]: pam_unix(sudo:session): session closed for user root
Feb 20 08:05:06 np0005625203.localdomain sudo[62849]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-poexptwndjvllzwuckbnulhryptdmbcl ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:05:06 np0005625203.localdomain sudo[62849]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:05:06 np0005625203.localdomain python3[62851]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 20 08:05:06 np0005625203.localdomain sudo[62849]: pam_unix(sudo:session): session closed for user root
Feb 20 08:05:06 np0005625203.localdomain sudo[62865]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-plwakwhzipmzeuldjygidbykpdbawfln ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:05:06 np0005625203.localdomain sudo[62865]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:05:06 np0005625203.localdomain python3[62867]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 20 08:05:06 np0005625203.localdomain sudo[62865]: pam_unix(sudo:session): session closed for user root
Feb 20 08:05:07 np0005625203.localdomain sudo[62881]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-novmpfiytzehxxdvahwchkebwnwsmmkj ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:05:07 np0005625203.localdomain sudo[62881]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:05:07 np0005625203.localdomain sshd[62884]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:05:07 np0005625203.localdomain python3[62883]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 20 08:05:07 np0005625203.localdomain sudo[62881]: pam_unix(sudo:session): session closed for user root
Feb 20 08:05:07 np0005625203.localdomain sudo[62899]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tdobvuxnsjnvtyrbbugygnainxstzous ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:05:07 np0005625203.localdomain sudo[62899]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:05:07 np0005625203.localdomain python3[62901]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 20 08:05:07 np0005625203.localdomain sudo[62899]: pam_unix(sudo:session): session closed for user root
Feb 20 08:05:07 np0005625203.localdomain sudo[62915]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uwqbwdhdguizhvkspyurbxlkztnztplk ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:05:07 np0005625203.localdomain sudo[62915]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:05:07 np0005625203.localdomain python3[62917]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 20 08:05:07 np0005625203.localdomain sudo[62915]: pam_unix(sudo:session): session closed for user root
Feb 20 08:05:07 np0005625203.localdomain sudo[62931]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qwgpkskiagunlpzzhwpxnzjtbpmgtwjr ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:05:07 np0005625203.localdomain sudo[62931]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:05:07 np0005625203.localdomain python3[62933]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 20 08:05:07 np0005625203.localdomain sudo[62931]: pam_unix(sudo:session): session closed for user root
Feb 20 08:05:08 np0005625203.localdomain sudo[62947]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qfgapvuxmegobepzbvfmvoocwjcptsep ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:05:08 np0005625203.localdomain sudo[62947]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:05:08 np0005625203.localdomain python3[62949]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_rsyslog_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 20 08:05:08 np0005625203.localdomain sudo[62947]: pam_unix(sudo:session): session closed for user root
Feb 20 08:05:08 np0005625203.localdomain sshd[62982]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:05:08 np0005625203.localdomain sudo[63010]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zhfammxornthehsticeslmgvgamrzhhm ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:05:08 np0005625203.localdomain sudo[63010]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:05:08 np0005625203.localdomain sshd[62982]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:05:08 np0005625203.localdomain python3[63012]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771574708.2483356-100705-140034622997730/source dest=/etc/systemd/system/tripleo_collectd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:05:08 np0005625203.localdomain sudo[63010]: pam_unix(sudo:session): session closed for user root
Feb 20 08:05:09 np0005625203.localdomain sudo[63039]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ragvmxgshcendgdlbmnuzvzcvmbpifmn ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:05:09 np0005625203.localdomain sudo[63039]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:05:09 np0005625203.localdomain sshd[62884]: Invalid user deployer from 185.246.128.171 port 4755
Feb 20 08:05:09 np0005625203.localdomain python3[63041]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771574708.2483356-100705-140034622997730/source dest=/etc/systemd/system/tripleo_iscsid.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:05:09 np0005625203.localdomain sudo[63039]: pam_unix(sudo:session): session closed for user root
Feb 20 08:05:09 np0005625203.localdomain sudo[63068]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hrjzhmrdlpiqnkgyipbfpzlmytkwrtmq ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:05:09 np0005625203.localdomain sudo[63068]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:05:09 np0005625203.localdomain python3[63070]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771574708.2483356-100705-140034622997730/source dest=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:05:09 np0005625203.localdomain sudo[63068]: pam_unix(sudo:session): session closed for user root
Feb 20 08:05:10 np0005625203.localdomain sudo[63097]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sqchhakssnflxwihxdporqaymhelebre ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:05:10 np0005625203.localdomain sudo[63097]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:05:10 np0005625203.localdomain sshd[62884]: Disconnecting invalid user deployer 185.246.128.171 port 4755: Change of username or service not allowed: (deployer,ssh-connection) -> (luis,ssh-connection) [preauth]
Feb 20 08:05:10 np0005625203.localdomain python3[63099]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771574708.2483356-100705-140034622997730/source dest=/etc/systemd/system/tripleo_nova_virtnodedevd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:05:10 np0005625203.localdomain sudo[63097]: pam_unix(sudo:session): session closed for user root
Feb 20 08:05:10 np0005625203.localdomain sudo[63126]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-freuywpweezxeztzcpabozxgwkmsiwjs ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:05:10 np0005625203.localdomain sudo[63126]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:05:10 np0005625203.localdomain python3[63128]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771574708.2483356-100705-140034622997730/source dest=/etc/systemd/system/tripleo_nova_virtproxyd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:05:10 np0005625203.localdomain sudo[63126]: pam_unix(sudo:session): session closed for user root
Feb 20 08:05:11 np0005625203.localdomain sudo[63155]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xtufzmzvjzhdlxytsdghnondszghffvn ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:05:11 np0005625203.localdomain sudo[63155]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:05:11 np0005625203.localdomain python3[63157]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771574708.2483356-100705-140034622997730/source dest=/etc/systemd/system/tripleo_nova_virtqemud.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:05:11 np0005625203.localdomain sudo[63155]: pam_unix(sudo:session): session closed for user root
Feb 20 08:05:11 np0005625203.localdomain sudo[63184]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gozaaqjxxauwgdfyidjshfcyflqkwxsl ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:05:11 np0005625203.localdomain sudo[63184]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:05:11 np0005625203.localdomain python3[63186]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771574708.2483356-100705-140034622997730/source dest=/etc/systemd/system/tripleo_nova_virtsecretd.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:05:11 np0005625203.localdomain sudo[63184]: pam_unix(sudo:session): session closed for user root
Feb 20 08:05:12 np0005625203.localdomain sshd[63187]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:05:12 np0005625203.localdomain sudo[63215]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-epjqdtstazyrahpqmedvvkdcywcmxvon ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:05:12 np0005625203.localdomain sudo[63215]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:05:12 np0005625203.localdomain python3[63217]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771574708.2483356-100705-140034622997730/source dest=/etc/systemd/system/tripleo_nova_virtstoraged.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:05:12 np0005625203.localdomain sudo[63215]: pam_unix(sudo:session): session closed for user root
Feb 20 08:05:12 np0005625203.localdomain sudo[63244]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zyvjamsfblyxijacdbespugukaeenkmm ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:05:12 np0005625203.localdomain sudo[63244]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:05:12 np0005625203.localdomain python3[63246]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771574708.2483356-100705-140034622997730/source dest=/etc/systemd/system/tripleo_rsyslog.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:05:12 np0005625203.localdomain sudo[63244]: pam_unix(sudo:session): session closed for user root
Feb 20 08:05:12 np0005625203.localdomain sudo[63260]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tpwqsuthodmgjhlggyueovbqxqyyiaob ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:05:12 np0005625203.localdomain sudo[63260]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:05:13 np0005625203.localdomain python3[63262]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 20 08:05:13 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 08:05:13 np0005625203.localdomain systemd-rc-local-generator[63285]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 08:05:13 np0005625203.localdomain systemd-sysv-generator[63288]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 08:05:13 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 08:05:13 np0005625203.localdomain systemd[1]: Stopping User Manager for UID 0...
Feb 20 08:05:13 np0005625203.localdomain systemd[61806]: Activating special unit Exit the Session...
Feb 20 08:05:13 np0005625203.localdomain systemd[61806]: Stopped target Main User Target.
Feb 20 08:05:13 np0005625203.localdomain systemd[61806]: Stopped target Basic System.
Feb 20 08:05:13 np0005625203.localdomain systemd[61806]: Stopped target Paths.
Feb 20 08:05:13 np0005625203.localdomain systemd[61806]: Stopped target Sockets.
Feb 20 08:05:13 np0005625203.localdomain systemd[61806]: Stopped target Timers.
Feb 20 08:05:13 np0005625203.localdomain systemd[61806]: Stopped Daily Cleanup of User's Temporary Directories.
Feb 20 08:05:13 np0005625203.localdomain systemd[61806]: Closed D-Bus User Message Bus Socket.
Feb 20 08:05:13 np0005625203.localdomain systemd[61806]: Stopped Create User's Volatile Files and Directories.
Feb 20 08:05:13 np0005625203.localdomain systemd[61806]: Removed slice User Application Slice.
Feb 20 08:05:13 np0005625203.localdomain systemd[61806]: Reached target Shutdown.
Feb 20 08:05:13 np0005625203.localdomain systemd[61806]: Finished Exit the Session.
Feb 20 08:05:13 np0005625203.localdomain systemd[61806]: Reached target Exit the Session.
Feb 20 08:05:13 np0005625203.localdomain systemd[1]: user@0.service: Deactivated successfully.
Feb 20 08:05:13 np0005625203.localdomain systemd[1]: Stopped User Manager for UID 0.
Feb 20 08:05:13 np0005625203.localdomain systemd[1]: Stopping User Runtime Directory /run/user/0...
Feb 20 08:05:13 np0005625203.localdomain sudo[63260]: pam_unix(sudo:session): session closed for user root
Feb 20 08:05:13 np0005625203.localdomain systemd[1]: run-user-0.mount: Deactivated successfully.
Feb 20 08:05:13 np0005625203.localdomain systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Feb 20 08:05:13 np0005625203.localdomain systemd[1]: Stopped User Runtime Directory /run/user/0.
Feb 20 08:05:13 np0005625203.localdomain systemd[1]: Removed slice User Slice of UID 0.
Feb 20 08:05:13 np0005625203.localdomain sshd[63187]: Invalid user luis from 185.246.128.171 port 39150
Feb 20 08:05:13 np0005625203.localdomain sudo[63313]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aipemzuwjnymhiavhdeeffbcdadwgabw ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:05:13 np0005625203.localdomain sudo[63313]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:05:14 np0005625203.localdomain python3[63315]: ansible-systemd Invoked with state=restarted name=tripleo_collectd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 08:05:14 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 08:05:14 np0005625203.localdomain systemd-sysv-generator[63344]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 08:05:14 np0005625203.localdomain systemd-rc-local-generator[63341]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 08:05:14 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 08:05:14 np0005625203.localdomain sshd[63187]: Disconnecting invalid user luis 185.246.128.171 port 39150: Change of username or service not allowed: (luis,ssh-connection) -> (cirros,ssh-connection) [preauth]
Feb 20 08:05:14 np0005625203.localdomain systemd[1]: Starting collectd container...
Feb 20 08:05:14 np0005625203.localdomain systemd[1]: Started collectd container.
Feb 20 08:05:14 np0005625203.localdomain sudo[63313]: pam_unix(sudo:session): session closed for user root
Feb 20 08:05:14 np0005625203.localdomain sudo[63379]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-esroecaudffjnbjfqmpkotawlvadvcvj ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:05:14 np0005625203.localdomain sudo[63379]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:05:15 np0005625203.localdomain python3[63381]: ansible-systemd Invoked with state=restarted name=tripleo_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 08:05:15 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 08:05:15 np0005625203.localdomain systemd-rc-local-generator[63405]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 08:05:15 np0005625203.localdomain systemd-sysv-generator[63411]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 08:05:15 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 08:05:15 np0005625203.localdomain systemd[1]: Starting iscsid container...
Feb 20 08:05:15 np0005625203.localdomain systemd[1]: Started iscsid container.
Feb 20 08:05:15 np0005625203.localdomain sudo[63379]: pam_unix(sudo:session): session closed for user root
Feb 20 08:05:15 np0005625203.localdomain sshd[63437]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:05:15 np0005625203.localdomain sudo[63445]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-phwzyvyjulqhaqiwhngkyickonkajawx ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:05:15 np0005625203.localdomain sudo[63445]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:05:16 np0005625203.localdomain python3[63447]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtlogd_wrapper.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 08:05:16 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 08:05:16 np0005625203.localdomain systemd-rc-local-generator[63473]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 08:05:16 np0005625203.localdomain systemd-sysv-generator[63477]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 08:05:16 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 08:05:16 np0005625203.localdomain systemd[1]: Starting nova_virtlogd_wrapper container...
Feb 20 08:05:16 np0005625203.localdomain systemd[1]: Started nova_virtlogd_wrapper container.
Feb 20 08:05:16 np0005625203.localdomain sudo[63445]: pam_unix(sudo:session): session closed for user root
Feb 20 08:05:16 np0005625203.localdomain sudo[63512]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kywocgldrftcisgcnppswyikelnlddid ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:05:16 np0005625203.localdomain sudo[63512]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:05:17 np0005625203.localdomain python3[63514]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtnodedevd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 08:05:17 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 08:05:17 np0005625203.localdomain systemd-rc-local-generator[63538]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 08:05:17 np0005625203.localdomain systemd-sysv-generator[63542]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 08:05:17 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 08:05:17 np0005625203.localdomain systemd[1]: Starting nova_virtnodedevd container...
Feb 20 08:05:17 np0005625203.localdomain tripleo-start-podman-container[63554]: Creating additional drop-in dependency for "nova_virtnodedevd" (5ce0d3670662f8b4e9768bce15c2e0f242710ad4849921266d1bb819bab350c9)
Feb 20 08:05:17 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 08:05:17 np0005625203.localdomain systemd-sysv-generator[63617]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 08:05:17 np0005625203.localdomain systemd-rc-local-generator[63614]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 08:05:17 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 08:05:17 np0005625203.localdomain systemd[1]: Started nova_virtnodedevd container.
Feb 20 08:05:18 np0005625203.localdomain sudo[63512]: pam_unix(sudo:session): session closed for user root
Feb 20 08:05:18 np0005625203.localdomain sudo[63636]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kwcwcoibnkocingpnbbkbuzvonxaqtiq ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:05:18 np0005625203.localdomain sudo[63636]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:05:18 np0005625203.localdomain python3[63638]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtproxyd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 08:05:18 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 08:05:18 np0005625203.localdomain systemd-rc-local-generator[63663]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 08:05:18 np0005625203.localdomain systemd-sysv-generator[63668]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 08:05:18 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 08:05:18 np0005625203.localdomain systemd[1]: Starting nova_virtproxyd container...
Feb 20 08:05:19 np0005625203.localdomain tripleo-start-podman-container[63678]: Creating additional drop-in dependency for "nova_virtproxyd" (2883491975f4d07a87ab6f68c2456ff531ed53703b8e61bf27c807c46acce03e)
Feb 20 08:05:19 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 08:05:19 np0005625203.localdomain systemd-sysv-generator[63742]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 08:05:19 np0005625203.localdomain systemd-rc-local-generator[63737]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 08:05:19 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 08:05:19 np0005625203.localdomain systemd[1]: Started nova_virtproxyd container.
Feb 20 08:05:19 np0005625203.localdomain sudo[63636]: pam_unix(sudo:session): session closed for user root
Feb 20 08:05:19 np0005625203.localdomain sshd[63437]: Invalid user cirros from 185.246.128.171 port 2984
Feb 20 08:05:19 np0005625203.localdomain sudo[63760]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uqmgrekcujmvumkwpwjpqkaxnqbmmkng ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:05:19 np0005625203.localdomain sudo[63760]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:05:20 np0005625203.localdomain python3[63762]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtqemud.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 08:05:20 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 08:05:20 np0005625203.localdomain systemd-rc-local-generator[63789]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 08:05:20 np0005625203.localdomain systemd-sysv-generator[63794]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 08:05:20 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 08:05:20 np0005625203.localdomain systemd[1]: Starting nova_virtqemud container...
Feb 20 08:05:20 np0005625203.localdomain tripleo-start-podman-container[63802]: Creating additional drop-in dependency for "nova_virtqemud" (441ab58fa4410c6147d50fcd6a4ba2af0491038916b3138d0f4ccccaa5c6e26e)
Feb 20 08:05:20 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 08:05:20 np0005625203.localdomain systemd-sysv-generator[63866]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 08:05:20 np0005625203.localdomain systemd-rc-local-generator[63863]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 08:05:20 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 08:05:20 np0005625203.localdomain systemd[1]: Started nova_virtqemud container.
Feb 20 08:05:20 np0005625203.localdomain sudo[63760]: pam_unix(sudo:session): session closed for user root
Feb 20 08:05:21 np0005625203.localdomain sudo[63885]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bscsjunsubgdcerdokozocjjzahkuhcj ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:05:21 np0005625203.localdomain sudo[63885]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:05:21 np0005625203.localdomain python3[63887]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtsecretd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 08:05:21 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 08:05:21 np0005625203.localdomain systemd-sysv-generator[63918]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 08:05:21 np0005625203.localdomain systemd-rc-local-generator[63914]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 08:05:21 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 08:05:21 np0005625203.localdomain systemd[1]: Starting nova_virtsecretd container...
Feb 20 08:05:22 np0005625203.localdomain sshd[63437]: Disconnecting invalid user cirros 185.246.128.171 port 2984: Change of username or service not allowed: (cirros,ssh-connection) -> (avax,ssh-connection) [preauth]
Feb 20 08:05:22 np0005625203.localdomain tripleo-start-podman-container[63928]: Creating additional drop-in dependency for "nova_virtsecretd" (d7cabd49781e8804ef140d29dc29d560de1c00c597492a71091452772fd383e9)
Feb 20 08:05:22 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 08:05:22 np0005625203.localdomain systemd-rc-local-generator[63980]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 08:05:22 np0005625203.localdomain systemd-sysv-generator[63983]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 08:05:22 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 08:05:22 np0005625203.localdomain systemd[1]: Started nova_virtsecretd container.
Feb 20 08:05:22 np0005625203.localdomain sudo[63885]: pam_unix(sudo:session): session closed for user root
Feb 20 08:05:22 np0005625203.localdomain sudo[64012]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xdfftklrjconynjlobwaoouhtxprdase ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:05:22 np0005625203.localdomain sudo[64012]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:05:22 np0005625203.localdomain python3[64014]: ansible-systemd Invoked with state=restarted name=tripleo_nova_virtstoraged.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 08:05:22 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 08:05:23 np0005625203.localdomain systemd-sysv-generator[64043]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 08:05:23 np0005625203.localdomain systemd-rc-local-generator[64040]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 08:05:23 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 08:05:23 np0005625203.localdomain systemd[1]: Starting nova_virtstoraged container...
Feb 20 08:05:23 np0005625203.localdomain tripleo-start-podman-container[64054]: Creating additional drop-in dependency for "nova_virtstoraged" (73f1f38f88063860c7fee4ea71387b5375dfe69b576a163f03d7bd06870e2e1c)
Feb 20 08:05:23 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 08:05:23 np0005625203.localdomain systemd-sysv-generator[64111]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 08:05:23 np0005625203.localdomain systemd-rc-local-generator[64107]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 08:05:23 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 08:05:23 np0005625203.localdomain systemd[1]: Started nova_virtstoraged container.
Feb 20 08:05:23 np0005625203.localdomain sudo[64012]: pam_unix(sudo:session): session closed for user root
Feb 20 08:05:23 np0005625203.localdomain sshd[64123]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:05:24 np0005625203.localdomain sudo[64138]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kkrcutasbrjlhcaqkshxsqrodmlxfhzj ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:05:24 np0005625203.localdomain sudo[64138]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:05:24 np0005625203.localdomain python3[64140]: ansible-systemd Invoked with state=restarted name=tripleo_rsyslog.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 08:05:24 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 08:05:24 np0005625203.localdomain systemd-rc-local-generator[64168]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 08:05:24 np0005625203.localdomain systemd-sysv-generator[64171]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 08:05:24 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 08:05:24 np0005625203.localdomain systemd[1]: Starting rsyslog container...
Feb 20 08:05:24 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 08:05:24 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d62367a3a75d40fe308c9a1ba2227d9d21236494de6c00d0d8f446eab81f58cb/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff)
Feb 20 08:05:24 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d62367a3a75d40fe308c9a1ba2227d9d21236494de6c00d0d8f446eab81f58cb/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff)
Feb 20 08:05:24 np0005625203.localdomain podman[64181]: 2026-02-20 08:05:24.874369568 +0000 UTC m=+0.136574454 container init 03d98b0f72d456561b1719b9ea7cca397fa2a6eee36072d42dad24f1b64a0a8f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, name=rhosp-rhel9/openstack-rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97414cfc893df553083a7f7bb1c65a4f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.component=openstack-rsyslog-container, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
build-date=2026-01-12T22:10:09Z, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:09Z, description=Red Hat OpenStack Platform 17.1 rsyslog, container_name=rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.5, version=17.1.13)
Feb 20 08:05:24 np0005625203.localdomain podman[64181]: 2026-02-20 08:05:24.886425712 +0000 UTC m=+0.148630618 container start 03d98b0f72d456561b1719b9ea7cca397fa2a6eee36072d42dad24f1b64a0a8f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:09Z, com.redhat.component=openstack-rsyslog-container, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-rsyslog, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, org.opencontainers.image.created=2026-01-12T22:10:09Z, summary=Red Hat OpenStack Platform 17.1 rsyslog, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, vcs-type=git, architecture=x86_64, container_name=rsyslog, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97414cfc893df553083a7f7bb1c65a4f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:05:24 np0005625203.localdomain podman[64181]: rsyslog
Feb 20 08:05:24 np0005625203.localdomain systemd[1]: Started rsyslog container.
Feb 20 08:05:24 np0005625203.localdomain sudo[64201]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Feb 20 08:05:24 np0005625203.localdomain sudo[64201]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Feb 20 08:05:24 np0005625203.localdomain sudo[64138]: pam_unix(sudo:session): session closed for user root
Feb 20 08:05:24 np0005625203.localdomain sudo[64201]: pam_unix(sudo:session): session closed for user root
Feb 20 08:05:25 np0005625203.localdomain systemd[1]: libpod-03d98b0f72d456561b1719b9ea7cca397fa2a6eee36072d42dad24f1b64a0a8f.scope: Deactivated successfully.
Feb 20 08:05:25 np0005625203.localdomain sshd[64123]: Invalid user avax from 185.246.128.171 port 55962
Feb 20 08:05:25 np0005625203.localdomain podman[64219]: 2026-02-20 08:05:25.063683498 +0000 UTC m=+0.048559399 container died 03d98b0f72d456561b1719b9ea7cca397fa2a6eee36072d42dad24f1b64a0a8f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, com.redhat.component=openstack-rsyslog-container, config_id=tripleo_step3, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-rsyslog, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97414cfc893df553083a7f7bb1c65a4f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.buildah.version=1.41.5, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:09Z, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.created=2026-01-12T22:10:09Z, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, container_name=rsyslog)
Feb 20 08:05:25 np0005625203.localdomain podman[64219]: 2026-02-20 08:05:25.089067626 +0000 UTC m=+0.073943457 container cleanup 03d98b0f72d456561b1719b9ea7cca397fa2a6eee36072d42dad24f1b64a0a8f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.buildah.version=1.41.5, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, maintainer=OpenStack TripleO Team, container_name=rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:09Z, build-date=2026-01-12T22:10:09Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, distribution-scope=public, config_id=tripleo_step3, vcs-type=git, com.redhat.component=openstack-rsyslog-container, name=rhosp-rhel9/openstack-rsyslog, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.13, tcib_managed=true, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97414cfc893df553083a7f7bb1c65a4f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee)
Feb 20 08:05:25 np0005625203.localdomain systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:05:25 np0005625203.localdomain sudo[64254]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bjgcfifugevxbeiuxrrttmhbgfntiisn ; /usr/bin/python3
Feb 20 08:05:25 np0005625203.localdomain podman[64233]: 2026-02-20 08:05:25.181072694 +0000 UTC m=+0.059852579 container cleanup 03d98b0f72d456561b1719b9ea7cca397fa2a6eee36072d42dad24f1b64a0a8f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, name=rhosp-rhel9/openstack-rsyslog, build-date=2026-01-12T22:10:09Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., release=1766032510, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.13, com.redhat.component=openstack-rsyslog-container, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, org.opencontainers.image.created=2026-01-12T22:10:09Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97414cfc893df553083a7f7bb1c65a4f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.5, container_name=rsyslog, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:05:25 np0005625203.localdomain podman[64233]: rsyslog
Feb 20 08:05:25 np0005625203.localdomain sudo[64254]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:05:25 np0005625203.localdomain systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'.
Feb 20 08:05:25 np0005625203.localdomain python3[64259]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks3.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:05:25 np0005625203.localdomain sshd[64123]: Disconnecting invalid user avax 185.246.128.171 port 55962: Change of username or service not allowed: (avax,ssh-connection) -> (suresh,ssh-connection) [preauth]
Feb 20 08:05:25 np0005625203.localdomain systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 1.
Feb 20 08:05:25 np0005625203.localdomain systemd[1]: Stopped rsyslog container.
Feb 20 08:05:25 np0005625203.localdomain sudo[64254]: pam_unix(sudo:session): session closed for user root
Feb 20 08:05:25 np0005625203.localdomain systemd[1]: Starting rsyslog container...
Feb 20 08:05:25 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 08:05:25 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d62367a3a75d40fe308c9a1ba2227d9d21236494de6c00d0d8f446eab81f58cb/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff)
Feb 20 08:05:25 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d62367a3a75d40fe308c9a1ba2227d9d21236494de6c00d0d8f446eab81f58cb/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff)
Feb 20 08:05:25 np0005625203.localdomain podman[64260]: 2026-02-20 08:05:25.495118299 +0000 UTC m=+0.122125004 container init 03d98b0f72d456561b1719b9ea7cca397fa2a6eee36072d42dad24f1b64a0a8f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, container_name=rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp-rhel9/openstack-rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.created=2026-01-12T22:10:09Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97414cfc893df553083a7f7bb1c65a4f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.buildah.version=1.41.5, distribution-scope=public, 
konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, managed_by=tripleo_ansible, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.component=openstack-rsyslog-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, tcib_managed=true, build-date=2026-01-12T22:10:09Z, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, release=1766032510)
Feb 20 08:05:25 np0005625203.localdomain podman[64260]: 2026-02-20 08:05:25.505361228 +0000 UTC m=+0.132367933 container start 03d98b0f72d456561b1719b9ea7cca397fa2a6eee36072d42dad24f1b64a0a8f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, config_id=tripleo_step3, vendor=Red Hat, Inc., com.redhat.component=openstack-rsyslog-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.buildah.version=1.41.5, managed_by=tripleo_ansible, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:09Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, container_name=rsyslog, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-rsyslog, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97414cfc893df553083a7f7bb1c65a4f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, build-date=2026-01-12T22:10:09Z, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public)
Feb 20 08:05:25 np0005625203.localdomain podman[64260]: rsyslog
Feb 20 08:05:25 np0005625203.localdomain systemd[1]: Started rsyslog container.
Feb 20 08:05:25 np0005625203.localdomain sudo[64280]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Feb 20 08:05:25 np0005625203.localdomain sudo[64280]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Feb 20 08:05:25 np0005625203.localdomain sudo[64280]: pam_unix(sudo:session): session closed for user root
Feb 20 08:05:25 np0005625203.localdomain systemd[1]: libpod-03d98b0f72d456561b1719b9ea7cca397fa2a6eee36072d42dad24f1b64a0a8f.scope: Deactivated successfully.
Feb 20 08:05:25 np0005625203.localdomain podman[64283]: 2026-02-20 08:05:25.668497105 +0000 UTC m=+0.050339865 container died 03d98b0f72d456561b1719b9ea7cca397fa2a6eee36072d42dad24f1b64a0a8f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:09Z, com.redhat.component=openstack-rsyslog-container, name=rhosp-rhel9/openstack-rsyslog, container_name=rsyslog, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:09Z, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20260112.1, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.13, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, release=1766032510, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97414cfc893df553083a7f7bb1c65a4f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public)
Feb 20 08:05:25 np0005625203.localdomain podman[64283]: 2026-02-20 08:05:25.691380726 +0000 UTC m=+0.073223436 container cleanup 03d98b0f72d456561b1719b9ea7cca397fa2a6eee36072d42dad24f1b64a0a8f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97414cfc893df553083a7f7bb1c65a4f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:09Z, maintainer=OpenStack TripleO Team, release=1766032510, tcib_managed=true, url=https://www.redhat.com, batch=17.1_20260112.1, 
architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, com.redhat.component=openstack-rsyslog-container, summary=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:09Z, name=rhosp-rhel9/openstack-rsyslog, container_name=rsyslog)
Feb 20 08:05:25 np0005625203.localdomain systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:05:25 np0005625203.localdomain sshd[64295]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:05:25 np0005625203.localdomain podman[64296]: 2026-02-20 08:05:25.775168618 +0000 UTC m=+0.056435183 container cleanup 03d98b0f72d456561b1719b9ea7cca397fa2a6eee36072d42dad24f1b64a0a8f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, release=1766032510, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97414cfc893df553083a7f7bb1c65a4f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, container_name=rsyslog, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, url=https://www.redhat.com, tcib_managed=true, version=17.1.13, io.buildah.version=1.41.5, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, distribution-scope=public, name=rhosp-rhel9/openstack-rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.expose-services=, vcs-type=git, batch=17.1_20260112.1, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:10:09Z, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:09Z)
Feb 20 08:05:25 np0005625203.localdomain podman[64296]: rsyslog
Feb 20 08:05:25 np0005625203.localdomain systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'.
Feb 20 08:05:25 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-d62367a3a75d40fe308c9a1ba2227d9d21236494de6c00d0d8f446eab81f58cb-merged.mount: Deactivated successfully.
Feb 20 08:05:25 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-03d98b0f72d456561b1719b9ea7cca397fa2a6eee36072d42dad24f1b64a0a8f-userdata-shm.mount: Deactivated successfully.
Feb 20 08:05:25 np0005625203.localdomain systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 2.
Feb 20 08:05:25 np0005625203.localdomain systemd[1]: Stopped rsyslog container.
Feb 20 08:05:25 np0005625203.localdomain systemd[1]: Starting rsyslog container...
Feb 20 08:05:25 np0005625203.localdomain sudo[64361]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dpxebmzjwvnlbikrabjcghxqdzhcuohd ; /usr/bin/python3
Feb 20 08:05:25 np0005625203.localdomain sudo[64361]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:05:26 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 08:05:26 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d62367a3a75d40fe308c9a1ba2227d9d21236494de6c00d0d8f446eab81f58cb/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff)
Feb 20 08:05:26 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d62367a3a75d40fe308c9a1ba2227d9d21236494de6c00d0d8f446eab81f58cb/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff)
Feb 20 08:05:26 np0005625203.localdomain podman[64348]: 2026-02-20 08:05:26.055951171 +0000 UTC m=+0.121659491 container init 03d98b0f72d456561b1719b9ea7cca397fa2a6eee36072d42dad24f1b64a0a8f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, managed_by=tripleo_ansible, com.redhat.component=openstack-rsyslog-container, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.13, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97414cfc893df553083a7f7bb1c65a4f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, config_id=tripleo_step3, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-rsyslog, vcs-type=git, container_name=rsyslog, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, batch=17.1_20260112.1, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, build-date=2026-01-12T22:10:09Z, org.opencontainers.image.created=2026-01-12T22:10:09Z)
Feb 20 08:05:26 np0005625203.localdomain podman[64348]: 2026-02-20 08:05:26.064102124 +0000 UTC m=+0.129810444 container start 03d98b0f72d456561b1719b9ea7cca397fa2a6eee36072d42dad24f1b64a0a8f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, container_name=rsyslog, com.redhat.component=openstack-rsyslog-container, summary=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp-rhel9/openstack-rsyslog, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_id=tripleo_step3, vendor=Red Hat, Inc., architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, release=1766032510, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:09Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97414cfc893df553083a7f7bb1c65a4f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, vcs-type=git, batch=17.1_20260112.1, build-date=2026-01-12T22:10:09Z, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.13, managed_by=tripleo_ansible)
Feb 20 08:05:26 np0005625203.localdomain podman[64348]: rsyslog
Feb 20 08:05:26 np0005625203.localdomain systemd[1]: Started rsyslog container.
Feb 20 08:05:26 np0005625203.localdomain sudo[64378]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Feb 20 08:05:26 np0005625203.localdomain sudo[64378]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Feb 20 08:05:26 np0005625203.localdomain sudo[64361]: pam_unix(sudo:session): session closed for user root
Feb 20 08:05:26 np0005625203.localdomain sudo[64378]: pam_unix(sudo:session): session closed for user root
Feb 20 08:05:26 np0005625203.localdomain systemd[1]: libpod-03d98b0f72d456561b1719b9ea7cca397fa2a6eee36072d42dad24f1b64a0a8f.scope: Deactivated successfully.
Feb 20 08:05:26 np0005625203.localdomain podman[64381]: 2026-02-20 08:05:26.197620521 +0000 UTC m=+0.029052943 container died 03d98b0f72d456561b1719b9ea7cca397fa2a6eee36072d42dad24f1b64a0a8f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-rsyslog-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:09Z, tcib_managed=true, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, url=https://www.redhat.com, release=1766032510, container_name=rsyslog, name=rhosp-rhel9/openstack-rsyslog, batch=17.1_20260112.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97414cfc893df553083a7f7bb1c65a4f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, description=Red Hat OpenStack Platform 17.1 rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 rsyslog, distribution-scope=public, build-date=2026-01-12T22:10:09Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog)
Feb 20 08:05:26 np0005625203.localdomain podman[64381]: 2026-02-20 08:05:26.217992954 +0000 UTC m=+0.049425356 container cleanup 03d98b0f72d456561b1719b9ea7cca397fa2a6eee36072d42dad24f1b64a0a8f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, com.redhat.component=openstack-rsyslog-container, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.13, maintainer=OpenStack TripleO Team, container_name=rsyslog, io.openshift.expose-services=, distribution-scope=public, config_id=tripleo_step3, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp-rhel9/openstack-rsyslog, build-date=2026-01-12T22:10:09Z, url=https://www.redhat.com, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97414cfc893df553083a7f7bb1c65a4f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:09Z)
Feb 20 08:05:26 np0005625203.localdomain systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:05:26 np0005625203.localdomain podman[64407]: 2026-02-20 08:05:26.286249435 +0000 UTC m=+0.044776182 container cleanup 03d98b0f72d456561b1719b9ea7cca397fa2a6eee36072d42dad24f1b64a0a8f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.created=2026-01-12T22:10:09Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, name=rhosp-rhel9/openstack-rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, release=1766032510, batch=17.1_20260112.1, vcs-type=git, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:09Z, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=rsyslog, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97414cfc893df553083a7f7bb1c65a4f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, config_id=tripleo_step3, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=)
Feb 20 08:05:26 np0005625203.localdomain podman[64407]: rsyslog
Feb 20 08:05:26 np0005625203.localdomain systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'.
Feb 20 08:05:26 np0005625203.localdomain sudo[64447]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cakgrwykvzilkflcopaotdgkswvszemb ; /usr/bin/python3
Feb 20 08:05:26 np0005625203.localdomain sudo[64447]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:05:26 np0005625203.localdomain systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 3.
Feb 20 08:05:26 np0005625203.localdomain systemd[1]: Stopped rsyslog container.
Feb 20 08:05:26 np0005625203.localdomain systemd[1]: Starting rsyslog container...
Feb 20 08:05:26 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 08:05:26 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d62367a3a75d40fe308c9a1ba2227d9d21236494de6c00d0d8f446eab81f58cb/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff)
Feb 20 08:05:26 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d62367a3a75d40fe308c9a1ba2227d9d21236494de6c00d0d8f446eab81f58cb/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff)
Feb 20 08:05:26 np0005625203.localdomain podman[64450]: 2026-02-20 08:05:26.547064906 +0000 UTC m=+0.112204656 container init 03d98b0f72d456561b1719b9ea7cca397fa2a6eee36072d42dad24f1b64a0a8f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, container_name=rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 rsyslog, name=rhosp-rhel9/openstack-rsyslog, com.redhat.component=openstack-rsyslog-container, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:09Z, version=17.1.13, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:09Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, vendor=Red Hat, Inc., tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97414cfc893df553083a7f7bb1c65a4f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, distribution-scope=public, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee)
Feb 20 08:05:26 np0005625203.localdomain sudo[64447]: pam_unix(sudo:session): session closed for user root
Feb 20 08:05:26 np0005625203.localdomain podman[64450]: 2026-02-20 08:05:26.555501518 +0000 UTC m=+0.120641268 container start 03d98b0f72d456561b1719b9ea7cca397fa2a6eee36072d42dad24f1b64a0a8f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-rsyslog-container, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-rsyslog, build-date=2026-01-12T22:10:09Z, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, url=https://www.redhat.com, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:09Z, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, batch=17.1_20260112.1, io.buildah.version=1.41.5, config_id=tripleo_step3, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97414cfc893df553083a7f7bb1c65a4f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.13)
Feb 20 08:05:26 np0005625203.localdomain podman[64450]: rsyslog
Feb 20 08:05:26 np0005625203.localdomain systemd[1]: Started rsyslog container.
Feb 20 08:05:26 np0005625203.localdomain sudo[64469]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Feb 20 08:05:26 np0005625203.localdomain sudo[64469]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Feb 20 08:05:26 np0005625203.localdomain sudo[64469]: pam_unix(sudo:session): session closed for user root
Feb 20 08:05:26 np0005625203.localdomain systemd[1]: libpod-03d98b0f72d456561b1719b9ea7cca397fa2a6eee36072d42dad24f1b64a0a8f.scope: Deactivated successfully.
Feb 20 08:05:26 np0005625203.localdomain podman[64486]: 2026-02-20 08:05:26.679413647 +0000 UTC m=+0.031906542 container died 03d98b0f72d456561b1719b9ea7cca397fa2a6eee36072d42dad24f1b64a0a8f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.buildah.version=1.41.5, container_name=rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97414cfc893df553083a7f7bb1c65a4f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 
rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.created=2026-01-12T22:10:09Z, batch=17.1_20260112.1, build-date=2026-01-12T22:10:09Z, tcib_managed=true, release=1766032510, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-rsyslog-container, name=rhosp-rhel9/openstack-rsyslog, version=17.1.13, description=Red Hat OpenStack Platform 17.1 rsyslog, managed_by=tripleo_ansible, vcs-type=git)
Feb 20 08:05:26 np0005625203.localdomain podman[64486]: 2026-02-20 08:05:26.699973455 +0000 UTC m=+0.052466330 container cleanup 03d98b0f72d456561b1719b9ea7cca397fa2a6eee36072d42dad24f1b64a0a8f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, description=Red Hat OpenStack Platform 17.1 rsyslog, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-rsyslog, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:09Z, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97414cfc893df553083a7f7bb1c65a4f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, 
io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, tcib_managed=true, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:09Z, url=https://www.redhat.com, vcs-type=git, distribution-scope=public, com.redhat.component=openstack-rsyslog-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, architecture=x86_64, release=1766032510, container_name=rsyslog, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team)
Feb 20 08:05:26 np0005625203.localdomain systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:05:26 np0005625203.localdomain sshd[64295]: Invalid user suresh from 185.246.128.171 port 5348
Feb 20 08:05:26 np0005625203.localdomain sudo[64518]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cjegntmbxhwzlklrpqztokahrzwbbldr ; /usr/bin/python3
Feb 20 08:05:26 np0005625203.localdomain sudo[64518]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:05:26 np0005625203.localdomain podman[64499]: 2026-02-20 08:05:26.787794674 +0000 UTC m=+0.056418993 container cleanup 03d98b0f72d456561b1719b9ea7cca397fa2a6eee36072d42dad24f1b64a0a8f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:09Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97414cfc893df553083a7f7bb1c65a4f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, name=rhosp-rhel9/openstack-rsyslog, com.redhat.component=openstack-rsyslog-container, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step3, batch=17.1_20260112.1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, url=https://www.redhat.com, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:09Z, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, container_name=rsyslog, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, io.buildah.version=1.41.5, release=1766032510, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 rsyslog)
Feb 20 08:05:26 np0005625203.localdomain podman[64499]: rsyslog
Feb 20 08:05:26 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-d62367a3a75d40fe308c9a1ba2227d9d21236494de6c00d0d8f446eab81f58cb-merged.mount: Deactivated successfully.
Feb 20 08:05:26 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-03d98b0f72d456561b1719b9ea7cca397fa2a6eee36072d42dad24f1b64a0a8f-userdata-shm.mount: Deactivated successfully.
Feb 20 08:05:26 np0005625203.localdomain systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'.
Feb 20 08:05:26 np0005625203.localdomain systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 4.
Feb 20 08:05:26 np0005625203.localdomain systemd[1]: Stopped rsyslog container.
Feb 20 08:05:26 np0005625203.localdomain systemd[1]: Starting rsyslog container...
Feb 20 08:05:26 np0005625203.localdomain python3[64525]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks3.json short_hostname=np0005625203 step=3 update_config_hash_only=False
Feb 20 08:05:26 np0005625203.localdomain sudo[64518]: pam_unix(sudo:session): session closed for user root
Feb 20 08:05:27 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 08:05:27 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d62367a3a75d40fe308c9a1ba2227d9d21236494de6c00d0d8f446eab81f58cb/merged/var/lib/rsyslog supports timestamps until 2038 (0x7fffffff)
Feb 20 08:05:27 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d62367a3a75d40fe308c9a1ba2227d9d21236494de6c00d0d8f446eab81f58cb/merged/var/log/rsyslog supports timestamps until 2038 (0x7fffffff)
Feb 20 08:05:27 np0005625203.localdomain podman[64527]: 2026-02-20 08:05:27.041520375 +0000 UTC m=+0.110679049 container init 03d98b0f72d456561b1719b9ea7cca397fa2a6eee36072d42dad24f1b64a0a8f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, config_id=tripleo_step3, release=1766032510, description=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97414cfc893df553083a7f7bb1c65a4f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, managed_by=tripleo_ansible, 
vcs-type=git, com.redhat.component=openstack-rsyslog-container, container_name=rsyslog, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, org.opencontainers.image.created=2026-01-12T22:10:09Z, tcib_managed=true, build-date=2026-01-12T22:10:09Z, io.buildah.version=1.41.5, batch=17.1_20260112.1, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog)
Feb 20 08:05:27 np0005625203.localdomain podman[64527]: 2026-02-20 08:05:27.051303059 +0000 UTC m=+0.120461763 container start 03d98b0f72d456561b1719b9ea7cca397fa2a6eee36072d42dad24f1b64a0a8f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97414cfc893df553083a7f7bb1c65a4f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, batch=17.1_20260112.1, tcib_managed=true, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-rsyslog, release=1766032510, vcs-type=git, version=17.1.13, build-date=2026-01-12T22:10:09Z, 
io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:09Z, container_name=rsyslog, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 rsyslog, com.redhat.component=openstack-rsyslog-container, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible)
Feb 20 08:05:27 np0005625203.localdomain podman[64527]: rsyslog
Feb 20 08:05:27 np0005625203.localdomain systemd[1]: Started rsyslog container.
Feb 20 08:05:27 np0005625203.localdomain sudo[64545]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Feb 20 08:05:27 np0005625203.localdomain sudo[64545]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Feb 20 08:05:27 np0005625203.localdomain sudo[64545]: pam_unix(sudo:session): session closed for user root
Feb 20 08:05:27 np0005625203.localdomain systemd[1]: libpod-03d98b0f72d456561b1719b9ea7cca397fa2a6eee36072d42dad24f1b64a0a8f.scope: Deactivated successfully.
Feb 20 08:05:27 np0005625203.localdomain podman[64548]: 2026-02-20 08:05:27.199703089 +0000 UTC m=+0.049571391 container died 03d98b0f72d456561b1719b9ea7cca397fa2a6eee36072d42dad24f1b64a0a8f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 rsyslog, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-rsyslog, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 rsyslog, container_name=rsyslog, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, version=17.1.13, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, vcs-type=git, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step3, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:09Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, build-date=2026-01-12T22:10:09Z, url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.component=openstack-rsyslog-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97414cfc893df553083a7f7bb1c65a4f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:05:27 np0005625203.localdomain podman[64548]: 2026-02-20 08:05:27.221757454 +0000 UTC m=+0.071625716 container cleanup 03d98b0f72d456561b1719b9ea7cca397fa2a6eee36072d42dad24f1b64a0a8f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, org.opencontainers.image.created=2026-01-12T22:10:09Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:09Z, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97414cfc893df553083a7f7bb1c65a4f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, architecture=x86_64, release=1766032510, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, io.buildah.version=1.41.5, tcib_managed=true, name=rhosp-rhel9/openstack-rsyslog, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., version=17.1.13, description=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, container_name=rsyslog, distribution-scope=public, batch=17.1_20260112.1, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-rsyslog-container, config_id=tripleo_step3)
Feb 20 08:05:27 np0005625203.localdomain systemd[1]: tripleo_rsyslog.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:05:27 np0005625203.localdomain podman[64561]: 2026-02-20 08:05:27.293315236 +0000 UTC m=+0.043625595 container cleanup 03d98b0f72d456561b1719b9ea7cca397fa2a6eee36072d42dad24f1b64a0a8f (image=registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1, name=rsyslog, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '97414cfc893df553083a7f7bb1c65a4f'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-rsyslog:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/rsyslog.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/rsyslog:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:ro', '/var/log/containers/rsyslog:/var/log/rsyslog:rw,z', '/var/log:/var/log/host:ro', '/var/lib/rsyslog.container:/var/lib/rsyslog:rw,z']}, version=17.1.13, architecture=x86_64, io.openshift.expose-services=, url=https://www.redhat.com, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 rsyslog, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-rsyslog, release=1766032510, description=Red Hat OpenStack Platform 17.1 rsyslog, summary=Red Hat OpenStack Platform 17.1 rsyslog, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, tcib_managed=true, name=rhosp-rhel9/openstack-rsyslog, batch=17.1_20260112.1, 
org.opencontainers.image.created=2026-01-12T22:10:09Z, container_name=rsyslog, io.k8s.description=Red Hat OpenStack Platform 17.1 rsyslog, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-rsyslog-container, build-date=2026-01-12T22:10:09Z, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:05:27 np0005625203.localdomain podman[64561]: rsyslog
Feb 20 08:05:27 np0005625203.localdomain systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'.
Feb 20 08:05:27 np0005625203.localdomain systemd[1]: tripleo_rsyslog.service: Scheduled restart job, restart counter is at 5.
Feb 20 08:05:27 np0005625203.localdomain systemd[1]: Stopped rsyslog container.
Feb 20 08:05:27 np0005625203.localdomain systemd[1]: tripleo_rsyslog.service: Start request repeated too quickly.
Feb 20 08:05:27 np0005625203.localdomain systemd[1]: tripleo_rsyslog.service: Failed with result 'exit-code'.
Feb 20 08:05:27 np0005625203.localdomain systemd[1]: Failed to start rsyslog container.
Feb 20 08:05:27 np0005625203.localdomain sshd[64295]: Disconnecting invalid user suresh 185.246.128.171 port 5348: Change of username or service not allowed: (suresh,ssh-connection) -> (master,ssh-connection) [preauth]
Feb 20 08:05:27 np0005625203.localdomain sudo[64586]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zcruqlkfwtbdqyrwldjmnhwgnwnboqql ; /usr/bin/python3
Feb 20 08:05:27 np0005625203.localdomain sudo[64586]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:05:27 np0005625203.localdomain python3[64588]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:05:27 np0005625203.localdomain sudo[64586]: pam_unix(sudo:session): session closed for user root
Feb 20 08:05:27 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-d62367a3a75d40fe308c9a1ba2227d9d21236494de6c00d0d8f446eab81f58cb-merged.mount: Deactivated successfully.
Feb 20 08:05:27 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-03d98b0f72d456561b1719b9ea7cca397fa2a6eee36072d42dad24f1b64a0a8f-userdata-shm.mount: Deactivated successfully.
Feb 20 08:05:27 np0005625203.localdomain sudo[64602]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-akxhlhmlwdcvvvhclrqijbziexlrkkkj ; /usr/bin/python3
Feb 20 08:05:27 np0005625203.localdomain sudo[64602]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:05:28 np0005625203.localdomain python3[64604]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_3 config_pattern=container-puppet-*.json config_overrides={} debug=True
Feb 20 08:05:28 np0005625203.localdomain sudo[64602]: pam_unix(sudo:session): session closed for user root
Feb 20 08:05:28 np0005625203.localdomain sshd[64605]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:05:29 np0005625203.localdomain sshd[64606]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:05:30 np0005625203.localdomain sshd[64606]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:05:30 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:05:30 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:05:30 np0005625203.localdomain podman[64608]: 2026-02-20 08:05:30.778107203 +0000 UTC m=+0.090527083 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp-rhel9/openstack-collectd, url=https://www.redhat.com, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, release=1766032510, batch=17.1_20260112.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, tcib_managed=true, container_name=collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, com.redhat.component=openstack-collectd-container)
Feb 20 08:05:30 np0005625203.localdomain podman[64608]: 2026-02-20 08:05:30.790460537 +0000 UTC m=+0.102880467 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, build-date=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, architecture=x86_64, container_name=collectd, vendor=Red Hat, Inc., version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']})
Feb 20 08:05:30 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:05:30 np0005625203.localdomain podman[64609]: 2026-02-20 08:05:30.882232337 +0000 UTC m=+0.190105265 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:10:14Z, url=https://www.redhat.com, vendor=Red Hat, Inc., batch=17.1_20260112.1, version=17.1.13, container_name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd)
Feb 20 08:05:31 np0005625203.localdomain podman[64609]: 2026-02-20 08:05:31.055358385 +0000 UTC m=+0.363231323 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, vendor=Red Hat, Inc., tcib_managed=true, version=17.1.13, io.buildah.version=1.41.5, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, container_name=metrics_qdr, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd)
Feb 20 08:05:31 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:05:31 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:05:31 np0005625203.localdomain podman[64658]: 2026-02-20 08:05:31.775864246 +0000 UTC m=+0.091481632 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=starting, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, batch=17.1_20260112.1, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.buildah.version=1.41.5, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, url=https://www.redhat.com, release=1766032510, name=rhosp-rhel9/openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, vendor=Red Hat, Inc., io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, architecture=x86_64)
Feb 20 08:05:31 np0005625203.localdomain podman[64658]: 2026-02-20 08:05:31.815777507 +0000 UTC m=+0.131394923 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, io.openshift.expose-services=, url=https://www.redhat.com, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, name=rhosp-rhel9/openstack-iscsid, architecture=x86_64, managed_by=tripleo_ansible, io.buildah.version=1.41.5, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, release=1766032510, tcib_managed=true, batch=17.1_20260112.1, vendor=Red Hat, Inc.)
Feb 20 08:05:31 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:05:33 np0005625203.localdomain sshd[64605]: Invalid user master from 185.246.128.171 port 27254
Feb 20 08:05:33 np0005625203.localdomain sshd[64605]: Disconnecting invalid user master 185.246.128.171 port 27254: Change of username or service not allowed: (master,ssh-connection) -> (fatima,ssh-connection) [preauth]
Feb 20 08:05:35 np0005625203.localdomain sshd[64677]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:05:37 np0005625203.localdomain sshd[64677]: Invalid user fatima from 185.246.128.171 port 9106
Feb 20 08:05:39 np0005625203.localdomain sshd[64677]: Disconnecting invalid user fatima 185.246.128.171 port 9106: Change of username or service not allowed: (fatima,ssh-connection) -> (max,ssh-connection) [preauth]
Feb 20 08:05:42 np0005625203.localdomain sshd[64679]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:05:43 np0005625203.localdomain sshd[64679]: Invalid user max from 185.246.128.171 port 52830
Feb 20 08:05:44 np0005625203.localdomain sshd[64679]: Disconnecting invalid user max 185.246.128.171 port 52830: Change of username or service not allowed: (max,ssh-connection) -> (tempadmin,ssh-connection) [preauth]
Feb 20 08:05:46 np0005625203.localdomain sudo[64681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:05:46 np0005625203.localdomain sudo[64681]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:05:46 np0005625203.localdomain sudo[64681]: pam_unix(sudo:session): session closed for user root
Feb 20 08:05:46 np0005625203.localdomain sudo[64696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 08:05:46 np0005625203.localdomain sudo[64696]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:05:46 np0005625203.localdomain sshd[64711]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:05:47 np0005625203.localdomain sudo[64696]: pam_unix(sudo:session): session closed for user root
Feb 20 08:05:48 np0005625203.localdomain sudo[64745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 08:05:48 np0005625203.localdomain sudo[64745]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:05:48 np0005625203.localdomain sudo[64745]: pam_unix(sudo:session): session closed for user root
Feb 20 08:05:48 np0005625203.localdomain sshd[64711]: Invalid user tempadmin from 185.246.128.171 port 20986
Feb 20 08:05:48 np0005625203.localdomain sshd[64711]: Disconnecting invalid user tempadmin 185.246.128.171 port 20986: Change of username or service not allowed: (tempadmin,ssh-connection) -> (riad,ssh-connection) [preauth]
Feb 20 08:05:49 np0005625203.localdomain sshd[64760]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:05:53 np0005625203.localdomain sshd[64760]: Invalid user riad from 185.246.128.171 port 38842
Feb 20 08:05:54 np0005625203.localdomain sshd[64760]: Disconnecting invalid user riad 185.246.128.171 port 38842: Change of username or service not allowed: (riad,ssh-connection) -> (alarm,ssh-connection) [preauth]
Feb 20 08:05:56 np0005625203.localdomain sshd[64762]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:05:58 np0005625203.localdomain sshd[64762]: Invalid user alarm from 185.246.128.171 port 31483
Feb 20 08:05:59 np0005625203.localdomain sshd[64762]: Disconnecting invalid user alarm 185.246.128.171 port 31483: Change of username or service not allowed: (alarm,ssh-connection) -> (user20,ssh-connection) [preauth]
Feb 20 08:06:01 np0005625203.localdomain sshd[64764]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:06:01 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:06:01 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:06:02 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:06:02 np0005625203.localdomain podman[64767]: 2026-02-20 08:06:02.398194044 +0000 UTC m=+0.712676108 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:14Z, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, io.openshift.expose-services=, version=17.1.13, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, architecture=x86_64, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1766032510, config_id=tripleo_step1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1)
Feb 20 08:06:02 np0005625203.localdomain podman[64788]: 2026-02-20 08:06:02.463224775 +0000 UTC m=+0.084613469 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., release=1766032510, build-date=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vcs-ref=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.5, 
org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, container_name=iscsid, io.openshift.expose-services=, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-iscsid, managed_by=tripleo_ansible)
Feb 20 08:06:02 np0005625203.localdomain podman[64766]: 2026-02-20 08:06:02.443398649 +0000 UTC m=+0.757887873 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, name=rhosp-rhel9/openstack-collectd, container_name=collectd, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., tcib_managed=true, config_id=tripleo_step3, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, com.redhat.component=openstack-collectd-container, distribution-scope=public, build-date=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, version=17.1.13, release=1766032510, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible)
Feb 20 08:06:02 np0005625203.localdomain podman[64788]: 2026-02-20 08:06:02.501370169 +0000 UTC m=+0.122758853 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1766032510, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, url=https://www.redhat.com, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, container_name=iscsid, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, vcs-type=git, tcib_managed=true, name=rhosp-rhel9/openstack-iscsid, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team)
Feb 20 08:06:02 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:06:02 np0005625203.localdomain podman[64766]: 2026-02-20 08:06:02.525621373 +0000 UTC m=+0.840110647 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, config_id=tripleo_step3, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-collectd, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, vcs-type=git)
Feb 20 08:06:02 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:06:02 np0005625203.localdomain podman[64767]: 2026-02-20 08:06:02.623819763 +0000 UTC m=+0.938301747 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:14Z, batch=17.1_20260112.1, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, 
org.opencontainers.image.created=2026-01-12T22:10:14Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-qdrouterd, release=1766032510, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee)
Feb 20 08:06:02 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:06:03 np0005625203.localdomain sshd[64831]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:06:03 np0005625203.localdomain sshd[64831]: Invalid user ubuntu from 189.190.2.14 port 33286
Feb 20 08:06:03 np0005625203.localdomain sshd[64831]: Received disconnect from 189.190.2.14 port 33286:11: Bye Bye [preauth]
Feb 20 08:06:03 np0005625203.localdomain sshd[64831]: Disconnected from invalid user ubuntu 189.190.2.14 port 33286 [preauth]
Feb 20 08:06:03 np0005625203.localdomain sshd[64764]: Invalid user user20 from 185.246.128.171 port 59835
Feb 20 08:06:04 np0005625203.localdomain sshd[64764]: Disconnecting invalid user user20 185.246.128.171 port 59835: Change of username or service not allowed: (user20,ssh-connection) -> (accounting,ssh-connection) [preauth]
Feb 20 08:06:05 np0005625203.localdomain sshd[64833]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:06:09 np0005625203.localdomain sshd[64833]: Invalid user accounting from 185.246.128.171 port 28451
Feb 20 08:06:11 np0005625203.localdomain sshd[64833]: Disconnecting invalid user accounting 185.246.128.171 port 28451: Change of username or service not allowed: (accounting,ssh-connection) -> (anon,ssh-connection) [preauth]
Feb 20 08:06:15 np0005625203.localdomain sshd[64835]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:06:16 np0005625203.localdomain sshd[64837]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:06:16 np0005625203.localdomain sshd[64837]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:06:17 np0005625203.localdomain sshd[64835]: Invalid user anon from 185.246.128.171 port 36284
Feb 20 08:06:17 np0005625203.localdomain sshd[64835]: Disconnecting invalid user anon 185.246.128.171 port 36284: Change of username or service not allowed: (anon,ssh-connection) -> (wilson,ssh-connection) [preauth]
Feb 20 08:06:18 np0005625203.localdomain sshd[64839]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:06:19 np0005625203.localdomain sshd[64839]: Invalid user wilson from 185.246.128.171 port 57645
Feb 20 08:06:20 np0005625203.localdomain sshd[64839]: Disconnecting invalid user wilson 185.246.128.171 port 57645: Change of username or service not allowed: (wilson,ssh-connection) -> (sybase,ssh-connection) [preauth]
Feb 20 08:06:22 np0005625203.localdomain sshd[64841]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:06:23 np0005625203.localdomain sshd[64843]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:06:23 np0005625203.localdomain sshd[64843]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:06:25 np0005625203.localdomain sshd[64841]: Invalid user sybase from 185.246.128.171 port 26067
Feb 20 08:06:25 np0005625203.localdomain sshd[64841]: Disconnecting invalid user sybase 185.246.128.171 port 26067: Change of username or service not allowed: (sybase,ssh-connection) -> (management,ssh-connection) [preauth]
Feb 20 08:06:27 np0005625203.localdomain sshd[64845]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:06:29 np0005625203.localdomain sshd[64845]: Invalid user management from 185.246.128.171 port 63388
Feb 20 08:06:30 np0005625203.localdomain sshd[64845]: Disconnecting invalid user management 185.246.128.171 port 63388: Change of username or service not allowed: (management,ssh-connection) -> (dqi,ssh-connection) [preauth]
Feb 20 08:06:30 np0005625203.localdomain sshd[64847]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:06:32 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:06:32 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:06:32 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:06:32 np0005625203.localdomain podman[64850]: 2026-02-20 08:06:32.767202557 +0000 UTC m=+0.078950169 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:34:43Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, container_name=iscsid, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, url=https://www.redhat.com, managed_by=tripleo_ansible, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-iscsid, release=1766032510, vcs-ref=705339545363fec600102567c4e923938e0f43b3, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3)
Feb 20 08:06:32 np0005625203.localdomain podman[64850]: 2026-02-20 08:06:32.779260586 +0000 UTC m=+0.091008248 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, maintainer=OpenStack TripleO Team, distribution-scope=public, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, config_id=tripleo_step3, io.buildah.version=1.41.5, build-date=2026-01-12T22:34:43Z, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=iscsid)
Feb 20 08:06:32 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:06:32 np0005625203.localdomain systemd[1]: tmp-crun.OvE0Op.mount: Deactivated successfully.
Feb 20 08:06:32 np0005625203.localdomain podman[64851]: 2026-02-20 08:06:32.833835607 +0000 UTC m=+0.139437451 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, managed_by=tripleo_ansible, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, io.openshift.expose-services=, release=1766032510, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, architecture=x86_64, container_name=metrics_qdr, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, build-date=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container)
Feb 20 08:06:32 np0005625203.localdomain podman[64849]: 2026-02-20 08:06:32.8717509 +0000 UTC m=+0.184974120 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, architecture=x86_64, managed_by=tripleo_ansible, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, vcs-type=git, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, version=17.1.13, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:15Z, batch=17.1_20260112.1, tcib_managed=true, config_id=tripleo_step3)
Feb 20 08:06:32 np0005625203.localdomain podman[64849]: 2026-02-20 08:06:32.910308055 +0000 UTC m=+0.223531305 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, name=rhosp-rhel9/openstack-collectd, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, vcs-type=git, managed_by=tripleo_ansible, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, version=17.1.13, tcib_managed=true, config_id=tripleo_step3, io.buildah.version=1.41.5, url=https://www.redhat.com, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:06:32 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:06:33 np0005625203.localdomain podman[64851]: 2026-02-20 08:06:33.052314239 +0000 UTC m=+0.357916103 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, release=1766032510, config_id=tripleo_step1, batch=17.1_20260112.1, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:06:33 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:06:33 np0005625203.localdomain sshd[64847]: Invalid user dqi from 185.246.128.171 port 28031
Feb 20 08:06:33 np0005625203.localdomain sshd[64847]: Disconnecting invalid user dqi 185.246.128.171 port 28031: Change of username or service not allowed: (dqi,ssh-connection) -> (frappe,ssh-connection) [preauth]
Feb 20 08:06:33 np0005625203.localdomain sshd[64914]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:06:34 np0005625203.localdomain sshd[64914]: Invalid user frappe from 185.246.128.171 port 49838
Feb 20 08:06:36 np0005625203.localdomain sshd[64914]: Disconnecting invalid user frappe 185.246.128.171 port 49838: Change of username or service not allowed: (frappe,ssh-connection) -> (rob,ssh-connection) [preauth]
Feb 20 08:06:36 np0005625203.localdomain sshd[64916]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:06:38 np0005625203.localdomain sshd[64916]: Invalid user rob from 185.246.128.171 port 9050
Feb 20 08:06:39 np0005625203.localdomain sshd[64916]: Disconnecting invalid user rob 185.246.128.171 port 9050: Change of username or service not allowed: (rob,ssh-connection) -> (zxcloudsetup,ssh-connection) [preauth]
Feb 20 08:06:41 np0005625203.localdomain sshd[64918]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:06:44 np0005625203.localdomain sshd[64918]: Invalid user zxcloudsetup from 185.246.128.171 port 45934
Feb 20 08:06:44 np0005625203.localdomain sshd[64918]: Disconnecting invalid user zxcloudsetup 185.246.128.171 port 45934: Change of username or service not allowed: (zxcloudsetup,ssh-connection) -> (dbadmin,ssh-connection) [preauth]
Feb 20 08:06:46 np0005625203.localdomain sshd[64920]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:06:48 np0005625203.localdomain sudo[64922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:06:48 np0005625203.localdomain sudo[64922]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:06:48 np0005625203.localdomain sudo[64922]: pam_unix(sudo:session): session closed for user root
Feb 20 08:06:48 np0005625203.localdomain sshd[64920]: Invalid user dbadmin from 185.246.128.171 port 24027
Feb 20 08:06:48 np0005625203.localdomain sudo[64937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 08:06:48 np0005625203.localdomain sudo[64937]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:06:48 np0005625203.localdomain sudo[64937]: pam_unix(sudo:session): session closed for user root
Feb 20 08:06:49 np0005625203.localdomain sshd[64920]: Disconnecting invalid user dbadmin 185.246.128.171 port 24027: Change of username or service not allowed: (dbadmin,ssh-connection) -> (wso2,ssh-connection) [preauth]
Feb 20 08:06:49 np0005625203.localdomain sudo[64984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 08:06:49 np0005625203.localdomain sudo[64984]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:06:49 np0005625203.localdomain sudo[64984]: pam_unix(sudo:session): session closed for user root
Feb 20 08:06:50 np0005625203.localdomain sshd[64999]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:06:53 np0005625203.localdomain sshd[64999]: Invalid user wso2 from 185.246.128.171 port 56133
Feb 20 08:06:53 np0005625203.localdomain sshd[64999]: Disconnecting invalid user wso2 185.246.128.171 port 56133: Change of username or service not allowed: (wso2,ssh-connection) -> (natalie,ssh-connection) [preauth]
Feb 20 08:06:53 np0005625203.localdomain sshd[65001]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:06:55 np0005625203.localdomain sshd[65001]: Invalid user natalie from 185.246.128.171 port 13947
Feb 20 08:06:55 np0005625203.localdomain sshd[65001]: Disconnecting invalid user natalie 185.246.128.171 port 13947: Change of username or service not allowed: (natalie,ssh-connection) -> (mail,ssh-connection) [preauth]
Feb 20 08:06:57 np0005625203.localdomain sshd[65003]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:07:00 np0005625203.localdomain sshd[65003]: Disconnecting authenticating user mail 185.246.128.171 port 50950: Change of username or service not allowed: (mail,ssh-connection) -> (chain,ssh-connection) [preauth]
Feb 20 08:07:02 np0005625203.localdomain sshd[65005]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:07:03 np0005625203.localdomain sshd[65007]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:07:03 np0005625203.localdomain sshd[65007]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:07:03 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:07:03 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:07:03 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:07:03 np0005625203.localdomain systemd[1]: tmp-crun.3496pm.mount: Deactivated successfully.
Feb 20 08:07:03 np0005625203.localdomain podman[65011]: 2026-02-20 08:07:03.41479988 +0000 UTC m=+0.094311614 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, version=17.1.13, config_id=tripleo_step1, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-qdrouterd, 
cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, batch=17.1_20260112.1, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:07:03 np0005625203.localdomain systemd[1]: tmp-crun.Yq3VQH.mount: Deactivated successfully.
Feb 20 08:07:03 np0005625203.localdomain podman[65009]: 2026-02-20 08:07:03.493983647 +0000 UTC m=+0.173909335 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, url=https://www.redhat.com, io.openshift.expose-services=, vendor=Red Hat, Inc., tcib_managed=true, name=rhosp-rhel9/openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, build-date=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, batch=17.1_20260112.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, container_name=collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team)
Feb 20 08:07:03 np0005625203.localdomain podman[65009]: 2026-02-20 08:07:03.508199505 +0000 UTC m=+0.188125183 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, url=https://www.redhat.com, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.5, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp-rhel9/openstack-collectd, version=17.1.13, batch=17.1_20260112.1, config_id=tripleo_step3, container_name=collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd)
Feb 20 08:07:03 np0005625203.localdomain podman[65010]: 2026-02-20 08:07:03.517791315 +0000 UTC m=+0.196806693 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, container_name=iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, name=rhosp-rhel9/openstack-iscsid, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, url=https://www.redhat.com, managed_by=tripleo_ansible, build-date=2026-01-12T22:34:43Z, io.openshift.expose-services=, architecture=x86_64, tcib_managed=true, io.buildah.version=1.41.5, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid)
Feb 20 08:07:03 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:07:03 np0005625203.localdomain podman[65010]: 2026-02-20 08:07:03.526842366 +0000 UTC m=+0.205857754 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, release=1766032510, container_name=iscsid, 
io.openshift.expose-services=, name=rhosp-rhel9/openstack-iscsid, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step3, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, batch=17.1_20260112.1)
Feb 20 08:07:03 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:07:03 np0005625203.localdomain podman[65011]: 2026-02-20 08:07:03.663238999 +0000 UTC m=+0.342750743 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, managed_by=tripleo_ansible, batch=17.1_20260112.1, 
maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1766032510)
Feb 20 08:07:03 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:07:04 np0005625203.localdomain sshd[65005]: Invalid user chain from 185.246.128.171 port 28780
Feb 20 08:07:06 np0005625203.localdomain sshd[65005]: Disconnecting invalid user chain 185.246.128.171 port 28780: Change of username or service not allowed: (chain,ssh-connection) -> (qbtuser,ssh-connection) [preauth]
Feb 20 08:07:08 np0005625203.localdomain sshd[65077]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:07:09 np0005625203.localdomain sshd[65077]: Invalid user qbtuser from 185.246.128.171 port 32631
Feb 20 08:07:10 np0005625203.localdomain sshd[65077]: Disconnecting invalid user qbtuser 185.246.128.171 port 32631: Change of username or service not allowed: (qbtuser,ssh-connection) -> (ai,ssh-connection) [preauth]
Feb 20 08:07:10 np0005625203.localdomain sshd[65079]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:07:12 np0005625203.localdomain sshd[65079]: Invalid user ai from 185.246.128.171 port 57763
Feb 20 08:07:12 np0005625203.localdomain sshd[65079]: Disconnecting invalid user ai 185.246.128.171 port 57763: Change of username or service not allowed: (ai,ssh-connection) -> (upload,ssh-connection) [preauth]
Feb 20 08:07:13 np0005625203.localdomain sshd[65081]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:07:13 np0005625203.localdomain sshd[65083]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:07:14 np0005625203.localdomain sshd[65081]: Invalid user upload from 185.246.128.171 port 18987
Feb 20 08:07:14 np0005625203.localdomain sshd[65083]: Invalid user c from 187.87.206.21 port 44078
Feb 20 08:07:14 np0005625203.localdomain sshd[65083]: Received disconnect from 187.87.206.21 port 44078:11: Bye Bye [preauth]
Feb 20 08:07:14 np0005625203.localdomain sshd[65083]: Disconnected from invalid user c 187.87.206.21 port 44078 [preauth]
Feb 20 08:07:14 np0005625203.localdomain sshd[65081]: Disconnecting invalid user upload 185.246.128.171 port 18987: Change of username or service not allowed: (upload,ssh-connection) -> (ftptest,ssh-connection) [preauth]
Feb 20 08:07:16 np0005625203.localdomain sshd[65085]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:07:18 np0005625203.localdomain sshd[65085]: Invalid user ftptest from 185.246.128.171 port 50766
Feb 20 08:07:18 np0005625203.localdomain sshd[65085]: Disconnecting invalid user ftptest 185.246.128.171 port 50766: Change of username or service not allowed: (ftptest,ssh-connection) -> (osmc,ssh-connection) [preauth]
Feb 20 08:07:21 np0005625203.localdomain sshd[65087]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:07:24 np0005625203.localdomain sshd[65087]: Invalid user osmc from 185.246.128.171 port 43495
Feb 20 08:07:25 np0005625203.localdomain sshd[65087]: Disconnecting invalid user osmc 185.246.128.171 port 43495: Change of username or service not allowed: (osmc,ssh-connection) -> (sysadmin,ssh-connection) [preauth]
Feb 20 08:07:27 np0005625203.localdomain sshd[65089]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:07:29 np0005625203.localdomain sshd[65089]: Invalid user sysadmin from 185.246.128.171 port 39330
Feb 20 08:07:30 np0005625203.localdomain sshd[65089]: Disconnecting invalid user sysadmin 185.246.128.171 port 39330: Change of username or service not allowed: (sysadmin,ssh-connection) -> (log,ssh-connection) [preauth]
Feb 20 08:07:31 np0005625203.localdomain sshd[65091]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:07:32 np0005625203.localdomain sshd[65091]: Invalid user log from 185.246.128.171 port 18292
Feb 20 08:07:32 np0005625203.localdomain sshd[65091]: Disconnecting invalid user log 185.246.128.171 port 18292: Change of username or service not allowed: (log,ssh-connection) -> (Samuel,ssh-connection) [preauth]
Feb 20 08:07:33 np0005625203.localdomain sshd[65093]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:07:33 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:07:33 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:07:33 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:07:33 np0005625203.localdomain systemd[1]: tmp-crun.RBidlF.mount: Deactivated successfully.
Feb 20 08:07:33 np0005625203.localdomain podman[65095]: 2026-02-20 08:07:33.772379733 +0000 UTC m=+0.073833274 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, release=1766032510, vendor=Red Hat, Inc., build-date=2026-01-12T22:34:43Z, vcs-ref=705339545363fec600102567c4e923938e0f43b3, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.13, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-iscsid, 
io.buildah.version=1.41.5, distribution-scope=public, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64)
Feb 20 08:07:33 np0005625203.localdomain podman[65095]: 2026-02-20 08:07:33.781692634 +0000 UTC m=+0.083146165 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, version=17.1.13, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:34:43Z, release=1766032510, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-iscsid, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, build-date=2026-01-12T22:34:43Z, io.buildah.version=1.41.5, distribution-scope=public, tcib_managed=true, container_name=iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, vendor=Red Hat, Inc.)
Feb 20 08:07:33 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:07:33 np0005625203.localdomain systemd[1]: tmp-crun.9HkQ1B.mount: Deactivated successfully.
Feb 20 08:07:33 np0005625203.localdomain podman[65096]: 2026-02-20 08:07:33.838444765 +0000 UTC m=+0.132800607 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, version=17.1.13, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, 
org.opencontainers.image.created=2026-01-12T22:10:14Z, config_id=tripleo_step1, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.5, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, release=1766032510, name=rhosp-rhel9/openstack-qdrouterd, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=metrics_qdr)
Feb 20 08:07:33 np0005625203.localdomain podman[65094]: 2026-02-20 08:07:33.882293901 +0000 UTC m=+0.182263484 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, com.redhat.component=openstack-collectd-container, vcs-type=git, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, container_name=collectd, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z, tcib_managed=true, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-collectd, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1766032510, description=Red Hat OpenStack Platform 17.1 collectd)
Feb 20 08:07:33 np0005625203.localdomain podman[65094]: 2026-02-20 08:07:33.894248126 +0000 UTC m=+0.194217699 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-collectd, release=1766032510, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, container_name=collectd, vcs-type=git)
Feb 20 08:07:33 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:07:34 np0005625203.localdomain podman[65096]: 2026-02-20 08:07:34.051358117 +0000 UTC m=+0.345713959 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-qdrouterd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2026-01-12T22:10:14Z, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, release=1766032510, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public)
Feb 20 08:07:34 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:07:35 np0005625203.localdomain sshd[65093]: Invalid user Samuel from 185.246.128.171 port 40989
Feb 20 08:07:36 np0005625203.localdomain sshd[65164]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:07:36 np0005625203.localdomain sshd[65165]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:07:36 np0005625203.localdomain sshd[65165]: Invalid user n8n from 147.135.114.8 port 41144
Feb 20 08:07:36 np0005625203.localdomain sshd[65165]: Received disconnect from 147.135.114.8 port 41144:11: Bye Bye [preauth]
Feb 20 08:07:36 np0005625203.localdomain sshd[65165]: Disconnected from invalid user n8n 147.135.114.8 port 41144 [preauth]
Feb 20 08:07:36 np0005625203.localdomain sshd[65164]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:07:37 np0005625203.localdomain sshd[65093]: Disconnecting invalid user Samuel 185.246.128.171 port 40989: Change of username or service not allowed: (Samuel,ssh-connection) -> (vali,ssh-connection) [preauth]
Feb 20 08:07:40 np0005625203.localdomain sshd[65168]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:07:42 np0005625203.localdomain sshd[65168]: Invalid user vali from 185.246.128.171 port 54875
Feb 20 08:07:42 np0005625203.localdomain sshd[65168]: Disconnecting invalid user vali 185.246.128.171 port 54875: Change of username or service not allowed: (vali,ssh-connection) -> (ui,ssh-connection) [preauth]
Feb 20 08:07:44 np0005625203.localdomain sshd[65170]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:07:46 np0005625203.localdomain sshd[65172]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:07:47 np0005625203.localdomain sshd[65170]: Invalid user ui from 185.246.128.171 port 37722
Feb 20 08:07:47 np0005625203.localdomain sshd[65170]: Disconnecting invalid user ui 185.246.128.171 port 37722: Change of username or service not allowed: (ui,ssh-connection) -> (morteza,ssh-connection) [preauth]
Feb 20 08:07:47 np0005625203.localdomain sshd[65172]: Received disconnect from 103.171.84.20 port 40462:11: Bye Bye [preauth]
Feb 20 08:07:47 np0005625203.localdomain sshd[65172]: Disconnected from authenticating user root 103.171.84.20 port 40462 [preauth]
Feb 20 08:07:49 np0005625203.localdomain sudo[65174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:07:49 np0005625203.localdomain sudo[65174]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:07:49 np0005625203.localdomain sudo[65174]: pam_unix(sudo:session): session closed for user root
Feb 20 08:07:49 np0005625203.localdomain sudo[65189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 08:07:49 np0005625203.localdomain sudo[65189]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:07:50 np0005625203.localdomain sshd[65221]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:07:50 np0005625203.localdomain sshd[65225]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:07:50 np0005625203.localdomain sudo[65189]: pam_unix(sudo:session): session closed for user root
Feb 20 08:07:50 np0005625203.localdomain sshd[65221]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:07:53 np0005625203.localdomain sshd[65225]: Invalid user morteza from 185.246.128.171 port 43636
Feb 20 08:07:54 np0005625203.localdomain sshd[65225]: Disconnecting invalid user morteza 185.246.128.171 port 43636: Change of username or service not allowed: (morteza,ssh-connection) -> (odoo18,ssh-connection) [preauth]
Feb 20 08:07:54 np0005625203.localdomain sudo[65240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 08:07:54 np0005625203.localdomain sudo[65240]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:07:54 np0005625203.localdomain sudo[65240]: pam_unix(sudo:session): session closed for user root
Feb 20 08:07:56 np0005625203.localdomain sshd[65255]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:08:00 np0005625203.localdomain sshd[65255]: Invalid user odoo18 from 185.246.128.171 port 48163
Feb 20 08:08:00 np0005625203.localdomain sshd[65255]: Disconnecting invalid user odoo18 185.246.128.171 port 48163: Change of username or service not allowed: (odoo18,ssh-connection) -> (vboxuser,ssh-connection) [preauth]
Feb 20 08:08:02 np0005625203.localdomain sshd[65257]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:08:04 np0005625203.localdomain sshd[65257]: Invalid user vboxuser from 185.246.128.171 port 51619
Feb 20 08:08:04 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:08:04 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:08:04 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:08:04 np0005625203.localdomain podman[65259]: 2026-02-20 08:08:04.412855339 +0000 UTC m=+0.063686485 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp-rhel9/openstack-collectd, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, batch=17.1_20260112.1, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, release=1766032510, architecture=x86_64, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, tcib_managed=true)
Feb 20 08:08:04 np0005625203.localdomain podman[65260]: 2026-02-20 08:08:04.473671012 +0000 UTC m=+0.121857163 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp-rhel9/openstack-iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, tcib_managed=true, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, version=17.1.13, distribution-scope=public, batch=17.1_20260112.1, io.openshift.expose-services=)
Feb 20 08:08:04 np0005625203.localdomain podman[65259]: 2026-02-20 08:08:04.493844143 +0000 UTC m=+0.144675289 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=1766032510, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, batch=17.1_20260112.1, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp-rhel9/openstack-collectd, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team)
Feb 20 08:08:04 np0005625203.localdomain podman[65261]: 2026-02-20 08:08:04.443572821 +0000 UTC m=+0.085766069 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:14Z, container_name=metrics_qdr, tcib_managed=true, name=rhosp-rhel9/openstack-qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, config_id=tripleo_step1, build-date=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, release=1766032510, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64)
Feb 20 08:08:04 np0005625203.localdomain podman[65260]: 2026-02-20 08:08:04.508867409 +0000 UTC m=+0.157053560 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, tcib_managed=true, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:34:43Z, architecture=x86_64, vcs-type=git, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, config_id=tripleo_step3, batch=17.1_20260112.1, managed_by=tripleo_ansible, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., container_name=iscsid)
Feb 20 08:08:04 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:08:04 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:08:04 np0005625203.localdomain podman[65261]: 2026-02-20 08:08:04.621141492 +0000 UTC m=+0.263334720 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, vcs-type=git, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, build-date=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, config_id=tripleo_step1, org.opencontainers.image.created=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']})
Feb 20 08:08:04 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:08:04 np0005625203.localdomain sshd[65257]: Disconnecting invalid user vboxuser 185.246.128.171 port 51619: Change of username or service not allowed: (vboxuser,ssh-connection) -> (wang,ssh-connection) [preauth]
Feb 20 08:08:05 np0005625203.localdomain systemd[1]: tmp-crun.fQ5ZKP.mount: Deactivated successfully.
Feb 20 08:08:07 np0005625203.localdomain sshd[65326]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:08:09 np0005625203.localdomain sshd[65326]: Invalid user wang from 185.246.128.171 port 38245
Feb 20 08:08:09 np0005625203.localdomain sshd[65326]: Disconnecting invalid user wang 185.246.128.171 port 38245: Change of username or service not allowed: (wang,ssh-connection) -> (su,ssh-connection) [preauth]
Feb 20 08:08:12 np0005625203.localdomain sshd[65328]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:08:14 np0005625203.localdomain sshd[65328]: Invalid user su from 185.246.128.171 port 41854
Feb 20 08:08:14 np0005625203.localdomain sshd[65328]: Disconnecting invalid user su 185.246.128.171 port 41854: Change of username or service not allowed: (su,ssh-connection) -> (wms,ssh-connection) [preauth]
Feb 20 08:08:16 np0005625203.localdomain sshd[65330]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:08:19 np0005625203.localdomain sshd[65330]: Invalid user wms from 185.246.128.171 port 19869
Feb 20 08:08:20 np0005625203.localdomain sshd[65330]: Disconnecting invalid user wms 185.246.128.171 port 19869: Change of username or service not allowed: (wms,ssh-connection) -> (timothy,ssh-connection) [preauth]
Feb 20 08:08:21 np0005625203.localdomain sshd[65332]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:08:22 np0005625203.localdomain sshd[65332]: Invalid user timothy from 185.246.128.171 port 10985
Feb 20 08:08:22 np0005625203.localdomain sshd[65332]: Disconnecting invalid user timothy 185.246.128.171 port 10985: Change of username or service not allowed: (timothy,ssh-connection) -> (helpdesk,ssh-connection) [preauth]
Feb 20 08:08:22 np0005625203.localdomain sshd[65334]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:08:24 np0005625203.localdomain sshd[65334]: Invalid user helpdesk from 185.246.128.171 port 32617
Feb 20 08:08:25 np0005625203.localdomain sshd[65334]: Disconnecting invalid user helpdesk 185.246.128.171 port 32617: Change of username or service not allowed: (helpdesk,ssh-connection) -> (momoru,ssh-connection) [preauth]
Feb 20 08:08:26 np0005625203.localdomain sshd[65336]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:08:32 np0005625203.localdomain sshd[65336]: Invalid user momoru from 185.246.128.171 port 13447
Feb 20 08:08:33 np0005625203.localdomain sshd[65336]: Disconnecting invalid user momoru 185.246.128.171 port 13447: Change of username or service not allowed: (momoru,ssh-connection) -> (vncuser,ssh-connection) [preauth]
Feb 20 08:08:34 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:08:34 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:08:34 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:08:34 np0005625203.localdomain systemd[1]: tmp-crun.XDho69.mount: Deactivated successfully.
Feb 20 08:08:34 np0005625203.localdomain podman[65339]: 2026-02-20 08:08:34.771455557 +0000 UTC m=+0.086999030 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, container_name=iscsid, io.openshift.expose-services=, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.5, build-date=2026-01-12T22:34:43Z, batch=17.1_20260112.1, url=https://www.redhat.com, vcs-type=git, name=rhosp-rhel9/openstack-iscsid)
Feb 20 08:08:34 np0005625203.localdomain podman[65338]: 2026-02-20 08:08:34.805916929 +0000 UTC m=+0.121396570 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, com.redhat.component=openstack-collectd-container, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, url=https://www.redhat.com, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp-rhel9/openstack-collectd, release=1766032510, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, batch=17.1_20260112.1, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true)
Feb 20 08:08:34 np0005625203.localdomain podman[65339]: 2026-02-20 08:08:34.80934706 +0000 UTC m=+0.124890513 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, batch=17.1_20260112.1, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, architecture=x86_64, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.buildah.version=1.41.5, tcib_managed=true, vendor=Red Hat, Inc., release=1766032510, url=https://www.redhat.com, vcs-type=git, name=rhosp-rhel9/openstack-iscsid)
Feb 20 08:08:34 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:08:34 np0005625203.localdomain podman[65340]: 2026-02-20 08:08:34.858099073 +0000 UTC m=+0.171030221 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, version=17.1.13, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, distribution-scope=public, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, release=1766032510, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=)
Feb 20 08:08:34 np0005625203.localdomain podman[65338]: 2026-02-20 08:08:34.863968932 +0000 UTC m=+0.179448593 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, release=1766032510, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, container_name=collectd, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-collectd, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z)
Feb 20 08:08:34 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:08:35 np0005625203.localdomain podman[65340]: 2026-02-20 08:08:35.058651766 +0000 UTC m=+0.371583004 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.41.5, url=https://www.redhat.com, tcib_managed=true, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, version=17.1.13, distribution-scope=public, release=1766032510, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, io.openshift.expose-services=, name=rhosp-rhel9/openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee)
Feb 20 08:08:35 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:08:36 np0005625203.localdomain sshd[65407]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:08:38 np0005625203.localdomain sshd[65409]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:08:38 np0005625203.localdomain sshd[65409]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:08:39 np0005625203.localdomain sshd[65407]: Invalid user vncuser from 185.246.128.171 port 62340
Feb 20 08:08:42 np0005625203.localdomain sshd[65407]: Disconnecting invalid user vncuser 185.246.128.171 port 62340: Change of username or service not allowed: (vncuser,ssh-connection) -> (tester,ssh-connection) [preauth]
Feb 20 08:08:43 np0005625203.localdomain sshd[65411]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:08:44 np0005625203.localdomain sshd[65411]: Invalid user tester from 185.246.128.171 port 21734
Feb 20 08:08:46 np0005625203.localdomain sshd[65411]: Disconnecting invalid user tester 185.246.128.171 port 21734: Change of username or service not allowed: (tester,ssh-connection) -> (vodafone,ssh-connection) [preauth]
Feb 20 08:08:46 np0005625203.localdomain sshd[65413]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:08:48 np0005625203.localdomain sshd[65413]: Invalid user vodafone from 185.246.128.171 port 60382
Feb 20 08:08:48 np0005625203.localdomain sshd[65413]: Disconnecting invalid user vodafone 185.246.128.171 port 60382: Change of username or service not allowed: (vodafone,ssh-connection) -> (ayush,ssh-connection) [preauth]
Feb 20 08:08:49 np0005625203.localdomain sshd[65415]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:08:50 np0005625203.localdomain sshd[65415]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:08:50 np0005625203.localdomain sshd[65417]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:08:51 np0005625203.localdomain sshd[65417]: Invalid user ayush from 185.246.128.171 port 39511
Feb 20 08:08:51 np0005625203.localdomain sshd[65417]: Disconnecting invalid user ayush 185.246.128.171 port 39511: Change of username or service not allowed: (ayush,ssh-connection) -> (docker,ssh-connection) [preauth]
Feb 20 08:08:53 np0005625203.localdomain sshd[65419]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:08:54 np0005625203.localdomain sudo[65421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:08:54 np0005625203.localdomain sudo[65421]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:08:54 np0005625203.localdomain sudo[65421]: pam_unix(sudo:session): session closed for user root
Feb 20 08:08:54 np0005625203.localdomain sudo[65436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 08:08:54 np0005625203.localdomain sudo[65436]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:08:55 np0005625203.localdomain sudo[65436]: pam_unix(sudo:session): session closed for user root
Feb 20 08:08:55 np0005625203.localdomain sshd[65419]: Invalid user docker from 185.246.128.171 port 8395
Feb 20 08:08:55 np0005625203.localdomain sudo[65482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:08:55 np0005625203.localdomain sudo[65482]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:08:55 np0005625203.localdomain sudo[65482]: pam_unix(sudo:session): session closed for user root
Feb 20 08:08:55 np0005625203.localdomain sudo[65497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 list-networks
Feb 20 08:08:55 np0005625203.localdomain sudo[65497]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:08:55 np0005625203.localdomain sudo[65497]: pam_unix(sudo:session): session closed for user root
Feb 20 08:08:57 np0005625203.localdomain sshd[65419]: Disconnecting invalid user docker 185.246.128.171 port 8395: Change of username or service not allowed: (docker,ssh-connection) -> (ftpuser1,ssh-connection) [preauth]
Feb 20 08:09:00 np0005625203.localdomain sshd[65533]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:09:00 np0005625203.localdomain sudo[65534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 08:09:00 np0005625203.localdomain sudo[65534]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:09:00 np0005625203.localdomain sudo[65534]: pam_unix(sudo:session): session closed for user root
Feb 20 08:09:03 np0005625203.localdomain sshd[65533]: Invalid user ftpuser1 from 185.246.128.171 port 34269
Feb 20 08:09:04 np0005625203.localdomain sshd[65533]: Disconnecting invalid user ftpuser1 185.246.128.171 port 34269: Change of username or service not allowed: (ftpuser1,ssh-connection) -> (wss,ssh-connection) [preauth]
Feb 20 08:09:05 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:09:05 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:09:05 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:09:05 np0005625203.localdomain podman[65552]: 2026-02-20 08:09:05.780065452 +0000 UTC m=+0.095647648 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, vcs-type=git, url=https://www.redhat.com, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., architecture=x86_64, name=rhosp-rhel9/openstack-qdrouterd, release=1766032510, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, build-date=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd)
Feb 20 08:09:05 np0005625203.localdomain systemd[1]: tmp-crun.s4Ex77.mount: Deactivated successfully.
Feb 20 08:09:05 np0005625203.localdomain podman[65551]: 2026-02-20 08:09:05.83794959 +0000 UTC m=+0.153683151 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=iscsid, release=1766032510, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, io.buildah.version=1.41.5, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, name=rhosp-rhel9/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20260112.1, vcs-type=git, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid)
Feb 20 08:09:05 np0005625203.localdomain podman[65550]: 2026-02-20 08:09:05.874527271 +0000 UTC m=+0.190332734 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-collectd, config_id=tripleo_step3, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, build-date=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, url=https://www.redhat.com, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team)
Feb 20 08:09:05 np0005625203.localdomain podman[65551]: 2026-02-20 08:09:05.898721192 +0000 UTC m=+0.214454783 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, vcs-type=git, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, version=17.1.13, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.created=2026-01-12T22:34:43Z, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=705339545363fec600102567c4e923938e0f43b3, batch=17.1_20260112.1, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:09:05 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:09:05 np0005625203.localdomain podman[65550]: 2026-02-20 08:09:05.914293274 +0000 UTC m=+0.230098757 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, batch=17.1_20260112.1, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, tcib_managed=true, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., url=https://www.redhat.com, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z)
Feb 20 08:09:05 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:09:05 np0005625203.localdomain podman[65552]: 2026-02-20 08:09:05.972403899 +0000 UTC m=+0.287986095 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, container_name=metrics_qdr, config_id=tripleo_step1, org.opencontainers.image.created=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, release=1766032510, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, version=17.1.13, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']})
Feb 20 08:09:05 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:09:08 np0005625203.localdomain sshd[65617]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:09:10 np0005625203.localdomain sshd[65617]: Invalid user wss from 185.246.128.171 port 55460
Feb 20 08:09:10 np0005625203.localdomain sshd[65617]: Disconnecting invalid user wss 185.246.128.171 port 55460: Change of username or service not allowed: (wss,ssh-connection) -> (itadmin,ssh-connection) [preauth]
Feb 20 08:09:11 np0005625203.localdomain sshd[65619]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:09:12 np0005625203.localdomain sshd[65619]: Invalid user itadmin from 185.246.128.171 port 36371
Feb 20 08:09:13 np0005625203.localdomain sshd[65619]: Disconnecting invalid user itadmin 185.246.128.171 port 36371: Change of username or service not allowed: (itadmin,ssh-connection) -> (opc,ssh-connection) [preauth]
Feb 20 08:09:14 np0005625203.localdomain sshd[65621]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:09:16 np0005625203.localdomain sshd[65621]: Invalid user opc from 185.246.128.171 port 13809
Feb 20 08:09:16 np0005625203.localdomain sshd[65621]: Disconnecting invalid user opc 185.246.128.171 port 13809: Change of username or service not allowed: (opc,ssh-connection) -> (squid,ssh-connection) [preauth]
Feb 20 08:09:17 np0005625203.localdomain sshd[65623]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:09:18 np0005625203.localdomain sudo[65670]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uefmwxuexvueearnlxdhfrtdhowgwqrr ; /usr/bin/python3
Feb 20 08:09:18 np0005625203.localdomain sudo[65670]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:09:18 np0005625203.localdomain python3[65672]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 08:09:18 np0005625203.localdomain sudo[65670]: pam_unix(sudo:session): session closed for user root
Feb 20 08:09:18 np0005625203.localdomain sshd[65623]: Invalid user squid from 185.246.128.171 port 44287
Feb 20 08:09:18 np0005625203.localdomain sudo[65715]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lvlxfwpyczxloryrhwwauyhrpushcsvj ; /usr/bin/python3
Feb 20 08:09:18 np0005625203.localdomain sudo[65715]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:09:18 np0005625203.localdomain sshd[65623]: Disconnecting invalid user squid 185.246.128.171 port 44287: Change of username or service not allowed: (squid,ssh-connection) -> (kernel,ssh-connection) [preauth]
Feb 20 08:09:19 np0005625203.localdomain python3[65717]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771574958.256579-107659-12976023247580/source _original_basename=tmpyrz7o2f1 follow=False checksum=ee48fb03297eb703b1954c8852d0f67fab51dac1 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:09:19 np0005625203.localdomain sudo[65715]: pam_unix(sudo:session): session closed for user root
Feb 20 08:09:19 np0005625203.localdomain sudo[65777]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wcachevosfmlqrmzidirnztetirkcteh ; /usr/bin/python3
Feb 20 08:09:19 np0005625203.localdomain sudo[65777]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:09:20 np0005625203.localdomain python3[65779]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/recover_tripleo_nova_virtqemud.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 08:09:20 np0005625203.localdomain sudo[65777]: pam_unix(sudo:session): session closed for user root
Feb 20 08:09:20 np0005625203.localdomain sudo[65820]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rzxyahqprohhjfjiwcktodytthdacmmq ; /usr/bin/python3
Feb 20 08:09:20 np0005625203.localdomain sudo[65820]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:09:20 np0005625203.localdomain sshd[65822]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:09:20 np0005625203.localdomain python3[65823]: ansible-ansible.legacy.copy Invoked with dest=/usr/libexec/recover_tripleo_nova_virtqemud.sh mode=0755 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771574959.7129164-107741-260491992411545/source _original_basename=tmpmlsmyhok follow=False checksum=922b8aa8342176110bffc2e39abdccc2b39e53a9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:09:20 np0005625203.localdomain sudo[65820]: pam_unix(sudo:session): session closed for user root
Feb 20 08:09:21 np0005625203.localdomain sudo[65884]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-udaxvmtwqbeujiqlgcmnzgosxajkgidj ; /usr/bin/python3
Feb 20 08:09:21 np0005625203.localdomain sudo[65884]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:09:21 np0005625203.localdomain python3[65886]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_recover.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 08:09:21 np0005625203.localdomain sudo[65884]: pam_unix(sudo:session): session closed for user root
Feb 20 08:09:21 np0005625203.localdomain sudo[65927]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-quvwwvogrfwmgvmmzjgggetiibiygnwn ; /usr/bin/python3
Feb 20 08:09:21 np0005625203.localdomain sudo[65927]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:09:21 np0005625203.localdomain python3[65929]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/tripleo_nova_virtqemud_recover.service mode=0644 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771574960.9803486-107804-221465392800358/source _original_basename=tmp679qcraf follow=False checksum=92f73544b703afc85885fa63ab07bdf8f8671554 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:09:21 np0005625203.localdomain sudo[65927]: pam_unix(sudo:session): session closed for user root
Feb 20 08:09:22 np0005625203.localdomain sudo[65989]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-grchiovigapkbeuyxdbbxsizpcaepjbb ; /usr/bin/python3
Feb 20 08:09:22 np0005625203.localdomain sudo[65989]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:09:22 np0005625203.localdomain python3[65991]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_recover.timer follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 08:09:22 np0005625203.localdomain sudo[65989]: pam_unix(sudo:session): session closed for user root
Feb 20 08:09:22 np0005625203.localdomain sudo[66032]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-llvxhbcywvrrhbnedybhgzscdpmjyzoi ; /usr/bin/python3
Feb 20 08:09:22 np0005625203.localdomain sudo[66032]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:09:22 np0005625203.localdomain python3[66034]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/tripleo_nova_virtqemud_recover.timer mode=0644 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771574961.865542-108012-33151048703538/source _original_basename=tmp63pwxvlf follow=False checksum=c6e5f76a53c0d6ccaf46c4b48d813dc2891ad8e9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:09:22 np0005625203.localdomain sudo[66032]: pam_unix(sudo:session): session closed for user root
Feb 20 08:09:22 np0005625203.localdomain sudo[66062]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tmsjilkzkvihmtofxbaktcblpogrlggw ; /usr/bin/python3
Feb 20 08:09:22 np0005625203.localdomain sudo[66062]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:09:23 np0005625203.localdomain python3[66064]: ansible-systemd Invoked with daemon_reload=True enabled=True name=tripleo_nova_virtqemud_recover.service daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Feb 20 08:09:23 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 08:09:23 np0005625203.localdomain systemd-sysv-generator[66091]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 08:09:23 np0005625203.localdomain systemd-rc-local-generator[66087]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 08:09:23 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 08:09:23 np0005625203.localdomain sshd[65822]: Invalid user kernel from 185.246.128.171 port 14462
Feb 20 08:09:23 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 08:09:23 np0005625203.localdomain systemd-sysv-generator[66132]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 08:09:23 np0005625203.localdomain systemd-rc-local-generator[66129]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 08:09:23 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 08:09:23 np0005625203.localdomain sshd[65822]: Disconnecting invalid user kernel 185.246.128.171 port 14462: Change of username or service not allowed: (kernel,ssh-connection) -> (mark,ssh-connection) [preauth]
Feb 20 08:09:23 np0005625203.localdomain sudo[66062]: pam_unix(sudo:session): session closed for user root
Feb 20 08:09:23 np0005625203.localdomain sudo[66151]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ylfuhgowthcflvckxdksdinsypyxyoju ; /usr/bin/python3
Feb 20 08:09:23 np0005625203.localdomain sudo[66151]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:09:24 np0005625203.localdomain sshd[66154]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:09:24 np0005625203.localdomain python3[66153]: ansible-systemd Invoked with daemon_reload=True enabled=True name=tripleo_nova_virtqemud_recover.timer state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 08:09:24 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 08:09:24 np0005625203.localdomain systemd-rc-local-generator[66176]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 08:09:24 np0005625203.localdomain systemd-sysv-generator[66181]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 08:09:24 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 08:09:24 np0005625203.localdomain sshd[66191]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:09:24 np0005625203.localdomain sshd[66191]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:09:25 np0005625203.localdomain sshd[66154]: Invalid user mark from 185.246.128.171 port 59965
Feb 20 08:09:25 np0005625203.localdomain sshd[66154]: Disconnecting invalid user mark 185.246.128.171 port 59965: Change of username or service not allowed: (mark,ssh-connection) -> (xiaoxiao,ssh-connection) [preauth]
Feb 20 08:09:25 np0005625203.localdomain sshd[66194]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:09:25 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 08:09:25 np0005625203.localdomain systemd-rc-local-generator[66221]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 08:09:25 np0005625203.localdomain systemd-sysv-generator[66226]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 08:09:25 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 08:09:25 np0005625203.localdomain systemd[1]: Started Check and recover tripleo_nova_virtqemud every 10m.
Feb 20 08:09:25 np0005625203.localdomain sudo[66151]: pam_unix(sudo:session): session closed for user root
Feb 20 08:09:26 np0005625203.localdomain sudo[66247]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vxdvoybpmsuibxguetxplhdybiehqqkc ; /usr/bin/python3
Feb 20 08:09:26 np0005625203.localdomain sudo[66247]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:09:26 np0005625203.localdomain python3[66249]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl enable --now tripleo_nova_virtqemud_recover.timer _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 08:09:26 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 08:09:26 np0005625203.localdomain systemd-rc-local-generator[66271]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 08:09:26 np0005625203.localdomain systemd-sysv-generator[66275]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 08:09:26 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 08:09:26 np0005625203.localdomain sudo[66247]: pam_unix(sudo:session): session closed for user root
Feb 20 08:09:27 np0005625203.localdomain sudo[66330]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wpxoboafmpogxzpyvdksfuijkzhzjpfq ; /usr/bin/python3
Feb 20 08:09:27 np0005625203.localdomain sudo[66330]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:09:27 np0005625203.localdomain sshd[66194]: Invalid user xiaoxiao from 185.246.128.171 port 13498
Feb 20 08:09:27 np0005625203.localdomain python3[66332]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 08:09:27 np0005625203.localdomain sudo[66330]: pam_unix(sudo:session): session closed for user root
Feb 20 08:09:27 np0005625203.localdomain sudo[66373]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qodrgyjefbagfrmejmlshksebkeminyu ; /usr/bin/python3
Feb 20 08:09:27 np0005625203.localdomain sudo[66373]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:09:27 np0005625203.localdomain sshd[66194]: Disconnecting invalid user xiaoxiao 185.246.128.171 port 13498: Change of username or service not allowed: (xiaoxiao,ssh-connection) -> (abe,ssh-connection) [preauth]
Feb 20 08:09:27 np0005625203.localdomain python3[66375]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/tripleo_nova_libvirt.target group=root mode=0644 owner=root src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771574966.8250003-108312-126726418537450/source _original_basename=tmpw3axfjkv follow=False checksum=c064b4a8e7d3d1d7c62d1f80a09e350659996afd backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:09:27 np0005625203.localdomain sudo[66373]: pam_unix(sudo:session): session closed for user root
Feb 20 08:09:27 np0005625203.localdomain sudo[66403]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xebmdjeisszkjptvwlcfnjyaoctitykn ; /usr/bin/python3
Feb 20 08:09:27 np0005625203.localdomain sudo[66403]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:09:28 np0005625203.localdomain python3[66405]: ansible-systemd Invoked with daemon_reload=True enabled=True name=tripleo_nova_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 08:09:28 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 08:09:28 np0005625203.localdomain systemd-rc-local-generator[66430]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 08:09:28 np0005625203.localdomain systemd-sysv-generator[66435]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 08:09:28 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 08:09:28 np0005625203.localdomain systemd[1]: Reached target tripleo_nova_libvirt.target.
Feb 20 08:09:28 np0005625203.localdomain sudo[66403]: pam_unix(sudo:session): session closed for user root
Feb 20 08:09:28 np0005625203.localdomain sudo[66458]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jllmrxeztjypquvdmbljcmnkaintqlyr ; /usr/bin/python3
Feb 20 08:09:28 np0005625203.localdomain sudo[66458]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:09:28 np0005625203.localdomain python3[66460]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 20 08:09:28 np0005625203.localdomain sudo[66458]: pam_unix(sudo:session): session closed for user root
Feb 20 08:09:29 np0005625203.localdomain sshd[66463]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:09:29 np0005625203.localdomain sudo[66509]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jffgtegciegsqpxakxcnogbusynauvng ; /usr/bin/python3
Feb 20 08:09:29 np0005625203.localdomain sudo[66509]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:09:29 np0005625203.localdomain sudo[66509]: pam_unix(sudo:session): session closed for user root
Feb 20 08:09:29 np0005625203.localdomain sudo[66527]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kctksylqfclbcufulgvooqugqopvhyop ; /usr/bin/python3
Feb 20 08:09:29 np0005625203.localdomain sudo[66527]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:09:29 np0005625203.localdomain sudo[66527]: pam_unix(sudo:session): session closed for user root
Feb 20 08:09:30 np0005625203.localdomain sudo[66631]: tripleo-admin : TTY=pts/0 ; PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xqqrsdcwyhljtmsfwesmhrzqgehwmfvu ; ANSIBLE_ASYNC_DIR=/tmp/.ansible_async /usr/bin/python3 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1771574970.1236308-108433-41664042957599/async_wrapper.py 435037392731 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1771574970.1236308-108433-41664042957599/AnsiballZ_command.py _
Feb 20 08:09:30 np0005625203.localdomain sudo[66631]: pam_unix(sudo:session): session opened for user root(uid=0) by tripleo-admin(uid=1003)
Feb 20 08:09:30 np0005625203.localdomain ansible-async_wrapper.py[66633]: Invoked with 435037392731 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1771574970.1236308-108433-41664042957599/AnsiballZ_command.py _
Feb 20 08:09:30 np0005625203.localdomain ansible-async_wrapper.py[66636]: Starting module and watcher
Feb 20 08:09:30 np0005625203.localdomain ansible-async_wrapper.py[66636]: Start watching 66637 (3600)
Feb 20 08:09:30 np0005625203.localdomain ansible-async_wrapper.py[66637]: Start module (66637)
Feb 20 08:09:30 np0005625203.localdomain ansible-async_wrapper.py[66633]: Return async_wrapper task started.
Feb 20 08:09:30 np0005625203.localdomain sudo[66631]: pam_unix(sudo:session): session closed for user root
Feb 20 08:09:30 np0005625203.localdomain sudo[66652]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-scawdckrwympccikkocqcfrjsrbhbwbc ; /usr/bin/python3
Feb 20 08:09:30 np0005625203.localdomain sudo[66652]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:09:30 np0005625203.localdomain python3[66657]: ansible-ansible.legacy.async_status Invoked with jid=435037392731.66633 mode=status _async_dir=/tmp/.ansible_async
Feb 20 08:09:30 np0005625203.localdomain sudo[66652]: pam_unix(sudo:session): session closed for user root
Feb 20 08:09:33 np0005625203.localdomain sshd[66463]: Invalid user abe from 185.246.128.171 port 57068
Feb 20 08:09:34 np0005625203.localdomain puppet-user[66656]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Feb 20 08:09:34 np0005625203.localdomain puppet-user[66656]:    (file: /etc/puppet/hiera.yaml)
Feb 20 08:09:34 np0005625203.localdomain puppet-user[66656]: Warning: Undefined variable '::deploy_config_name';
Feb 20 08:09:34 np0005625203.localdomain puppet-user[66656]:    (file & line not available)
Feb 20 08:09:34 np0005625203.localdomain puppet-user[66656]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Feb 20 08:09:34 np0005625203.localdomain puppet-user[66656]:    (file & line not available)
Feb 20 08:09:34 np0005625203.localdomain puppet-user[66656]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8)
Feb 20 08:09:34 np0005625203.localdomain sshd[66463]: Disconnecting invalid user abe 185.246.128.171 port 57068: Change of username or service not allowed: (abe,ssh-connection) -> (factory,ssh-connection) [preauth]
Feb 20 08:09:34 np0005625203.localdomain puppet-user[66656]: Warning: This method is deprecated, please use match expressions with Stdlib::Compat::String instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at ["/etc/puppet/modules/snmp/manifests/params.pp", 310]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Feb 20 08:09:34 np0005625203.localdomain puppet-user[66656]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Feb 20 08:09:34 np0005625203.localdomain puppet-user[66656]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Feb 20 08:09:34 np0005625203.localdomain puppet-user[66656]:                     with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 358]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Feb 20 08:09:34 np0005625203.localdomain puppet-user[66656]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Feb 20 08:09:34 np0005625203.localdomain puppet-user[66656]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Feb 20 08:09:34 np0005625203.localdomain puppet-user[66656]:                     with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 367]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Feb 20 08:09:34 np0005625203.localdomain puppet-user[66656]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Feb 20 08:09:34 np0005625203.localdomain puppet-user[66656]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Feb 20 08:09:34 np0005625203.localdomain puppet-user[66656]:                     with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 382]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Feb 20 08:09:34 np0005625203.localdomain puppet-user[66656]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Feb 20 08:09:34 np0005625203.localdomain puppet-user[66656]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Feb 20 08:09:34 np0005625203.localdomain puppet-user[66656]:                     with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 388]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Feb 20 08:09:34 np0005625203.localdomain puppet-user[66656]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Feb 20 08:09:34 np0005625203.localdomain puppet-user[66656]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Feb 20 08:09:34 np0005625203.localdomain puppet-user[66656]:                     with Pattern[]. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 393]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Feb 20 08:09:34 np0005625203.localdomain puppet-user[66656]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Feb 20 08:09:34 np0005625203.localdomain puppet-user[66656]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69)
Feb 20 08:09:34 np0005625203.localdomain puppet-user[66656]: Notice: Compiled catalog for np0005625203.localdomain in environment production in 0.24 seconds
Feb 20 08:09:35 np0005625203.localdomain ansible-async_wrapper.py[66636]: 66637 still running (3600)
Feb 20 08:09:36 np0005625203.localdomain sshd[66776]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:09:36 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:09:36 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:09:36 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:09:36 np0005625203.localdomain podman[66780]: 2026-02-20 08:09:36.782982944 +0000 UTC m=+0.093500905 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp-rhel9/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, build-date=2026-01-12T22:10:14Z, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr)
Feb 20 08:09:36 np0005625203.localdomain podman[66778]: 2026-02-20 08:09:36.819989774 +0000 UTC m=+0.132802747 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, name=rhosp-rhel9/openstack-collectd, tcib_managed=true, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, distribution-scope=public, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.5, architecture=x86_64, vendor=Red Hat, Inc., batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, build-date=2026-01-12T22:10:15Z)
Feb 20 08:09:36 np0005625203.localdomain podman[66778]: 2026-02-20 08:09:36.864436275 +0000 UTC m=+0.177249258 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20260112.1, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2026-01-12T22:10:15Z, vcs-type=git, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, name=rhosp-rhel9/openstack-collectd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd)
Feb 20 08:09:36 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:09:36 np0005625203.localdomain podman[66779]: 2026-02-20 08:09:36.879982418 +0000 UTC m=+0.190370505 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, distribution-scope=public, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack 
Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2026-01-12T22:34:43Z, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, org.opencontainers.image.created=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, io.openshift.expose-services=, vcs-type=git, container_name=iscsid, batch=17.1_20260112.1, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, io.buildah.version=1.41.5, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com)
Feb 20 08:09:36 np0005625203.localdomain podman[66779]: 2026-02-20 08:09:36.966282089 +0000 UTC m=+0.276670206 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.41.5, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, container_name=iscsid, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, distribution-scope=public, architecture=x86_64, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-type=git, build-date=2026-01-12T22:34:43Z, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, release=1766032510, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid)
Feb 20 08:09:36 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:09:36 np0005625203.localdomain podman[66780]: 2026-02-20 08:09:36.981246984 +0000 UTC m=+0.291764975 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-type=git, batch=17.1_20260112.1, config_id=tripleo_step1, name=rhosp-rhel9/openstack-qdrouterd, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:14Z, url=https://www.redhat.com, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, architecture=x86_64, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:09:37 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:09:37 np0005625203.localdomain systemd[1]: tmp-crun.s3kZsX.mount: Deactivated successfully.
Feb 20 08:09:37 np0005625203.localdomain sshd[66776]: Invalid user factory from 185.246.128.171 port 13254
Feb 20 08:09:38 np0005625203.localdomain sshd[66776]: Disconnecting invalid user factory 185.246.128.171 port 13254: Change of username or service not allowed: (factory,ssh-connection) -> (ss,ssh-connection) [preauth]
Feb 20 08:09:39 np0005625203.localdomain sshd[66889]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:09:40 np0005625203.localdomain ansible-async_wrapper.py[66636]: 66637 still running (3595)
Feb 20 08:09:41 np0005625203.localdomain sudo[66930]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gltnwccbbpdrszwgpeskvxaimanakiwo ; /usr/bin/python3
Feb 20 08:09:41 np0005625203.localdomain sudo[66930]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:09:41 np0005625203.localdomain sshd[66889]: Invalid user ss from 185.246.128.171 port 50744
Feb 20 08:09:41 np0005625203.localdomain python3[66932]: ansible-ansible.legacy.async_status Invoked with jid=435037392731.66633 mode=status _async_dir=/tmp/.ansible_async
Feb 20 08:09:41 np0005625203.localdomain sudo[66930]: pam_unix(sudo:session): session closed for user root
Feb 20 08:09:41 np0005625203.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 20 08:09:41 np0005625203.localdomain systemd[1]: Starting man-db-cache-update.service...
Feb 20 08:09:41 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 08:09:41 np0005625203.localdomain systemd-rc-local-generator[66992]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 08:09:41 np0005625203.localdomain systemd-sysv-generator[67003]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 08:09:41 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 08:09:42 np0005625203.localdomain systemd[1]: Queuing reload/restart jobs for marked units…
Feb 20 08:09:42 np0005625203.localdomain sshd[66889]: Disconnecting invalid user ss 185.246.128.171 port 50744: Change of username or service not allowed: (ss,ssh-connection) -> (antminermonitor,ssh-connection) [preauth]
Feb 20 08:09:42 np0005625203.localdomain systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 20 08:09:42 np0005625203.localdomain systemd[1]: Finished man-db-cache-update.service.
Feb 20 08:09:42 np0005625203.localdomain systemd[1]: man-db-cache-update.service: Consumed 1.319s CPU time.
Feb 20 08:09:42 np0005625203.localdomain systemd[1]: run-ra538d91769bf451b8ea9348fae4e597e.service: Deactivated successfully.
Feb 20 08:09:43 np0005625203.localdomain puppet-user[66656]: Notice: /Stage[main]/Snmp/Package[snmpd]/ensure: created
Feb 20 08:09:43 np0005625203.localdomain puppet-user[66656]: Notice: /Stage[main]/Snmp/File[snmpd.conf]/content: content changed '{sha256}2b743f970e80e2150759bfc66f2d8d0fbd8b31624f79e2991248d1a5ac57494e' to '{sha256}667247452609cbf9a20024b8462a0c98ed2e16756ec831e28854c5f5c8479c15'
Feb 20 08:09:43 np0005625203.localdomain puppet-user[66656]: Notice: /Stage[main]/Snmp/File[snmpd.sysconfig]/content: content changed '{sha256}b63afb2dee7419b6834471f88581d981c8ae5c8b27b9d329ba67a02f3ddd8221' to '{sha256}3917ee8bbc680ad50d77186ad4a1d2705c2025c32fc32f823abbda7f2328dfbd'
Feb 20 08:09:43 np0005625203.localdomain puppet-user[66656]: Notice: /Stage[main]/Snmp/File[snmptrapd.conf]/content: content changed '{sha256}2e1ca894d609ef337b6243909bf5623c87fd5df98ecbd00c7d4c12cf12f03c4e' to '{sha256}3ecf18da1ba84ea3932607f2b903ee6a038b6f9ac4e1e371e48f3ef61c5052ea'
Feb 20 08:09:43 np0005625203.localdomain puppet-user[66656]: Notice: /Stage[main]/Snmp/File[snmptrapd.sysconfig]/content: content changed '{sha256}86ee5797ad10cb1ea0f631e9dfa6ae278ecf4f4d16f4c80f831cdde45601b23c' to '{sha256}2244553364afcca151958f8e2003e4c182f5e2ecfbe55405cec73fd818581e97'
Feb 20 08:09:43 np0005625203.localdomain puppet-user[66656]: Notice: /Stage[main]/Snmp/Service[snmptrapd]: Triggered 'refresh' from 2 events
Feb 20 08:09:45 np0005625203.localdomain sshd[68033]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:09:45 np0005625203.localdomain ansible-async_wrapper.py[66636]: 66637 still running (3590)
Feb 20 08:09:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 20 08:09:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 1800.1 total, 600.0 interval
                                                          Cumulative writes: 4396 writes, 20K keys, 4396 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                                          Cumulative WAL: 4396 writes, 502 syncs, 8.76 writes per sync, written: 0.02 GB, 0.01 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 262 writes, 697 keys, 262 commit groups, 1.0 writes per commit group, ingest: 0.63 MB, 0.00 MB/s
                                                          Interval WAL: 262 writes, 130 syncs, 2.02 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 20 08:09:48 np0005625203.localdomain puppet-user[66656]: Notice: /Stage[main]/Tripleo::Profile::Base::Snmp/Snmp::Snmpv3_user[ro_snmp_user]/Exec[create-snmpv3-user-ro_snmp_user]/returns: executed successfully
Feb 20 08:09:48 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 08:09:48 np0005625203.localdomain systemd-sysv-generator[68063]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 08:09:48 np0005625203.localdomain systemd-rc-local-generator[68059]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 08:09:48 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 08:09:49 np0005625203.localdomain sshd[68033]: Invalid user antminermonitor from 185.246.128.171 port 54167
Feb 20 08:09:49 np0005625203.localdomain systemd[1]: Starting Simple Network Management Protocol (SNMP) Daemon....
Feb 20 08:09:49 np0005625203.localdomain snmpd[68076]: Can't find directory of RPM packages
Feb 20 08:09:49 np0005625203.localdomain snmpd[68076]: Duplicate IPv4 address detected, some interfaces may not be visible in IP-MIB
Feb 20 08:09:49 np0005625203.localdomain systemd[1]: Started Simple Network Management Protocol (SNMP) Daemon..
Feb 20 08:09:49 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 08:09:49 np0005625203.localdomain systemd-sysv-generator[68105]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 08:09:49 np0005625203.localdomain systemd-rc-local-generator[68101]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 08:09:49 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 08:09:49 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 08:09:49 np0005625203.localdomain systemd-sysv-generator[68141]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 08:09:49 np0005625203.localdomain systemd-rc-local-generator[68135]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 08:09:49 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 08:09:49 np0005625203.localdomain puppet-user[66656]: Notice: /Stage[main]/Snmp/Service[snmpd]/ensure: ensure changed 'stopped' to 'running'
Feb 20 08:09:49 np0005625203.localdomain puppet-user[66656]: Notice: Applied catalog in 15.53 seconds
Feb 20 08:09:49 np0005625203.localdomain puppet-user[66656]: Application:
Feb 20 08:09:49 np0005625203.localdomain puppet-user[66656]:    Initial environment: production
Feb 20 08:09:49 np0005625203.localdomain puppet-user[66656]:    Converged environment: production
Feb 20 08:09:49 np0005625203.localdomain puppet-user[66656]:          Run mode: user
Feb 20 08:09:49 np0005625203.localdomain puppet-user[66656]: Changes:
Feb 20 08:09:49 np0005625203.localdomain puppet-user[66656]:             Total: 8
Feb 20 08:09:49 np0005625203.localdomain puppet-user[66656]: Events:
Feb 20 08:09:49 np0005625203.localdomain puppet-user[66656]:           Success: 8
Feb 20 08:09:49 np0005625203.localdomain puppet-user[66656]:             Total: 8
Feb 20 08:09:49 np0005625203.localdomain puppet-user[66656]: Resources:
Feb 20 08:09:49 np0005625203.localdomain puppet-user[66656]:         Restarted: 1
Feb 20 08:09:49 np0005625203.localdomain puppet-user[66656]:           Changed: 8
Feb 20 08:09:49 np0005625203.localdomain puppet-user[66656]:       Out of sync: 8
Feb 20 08:09:49 np0005625203.localdomain puppet-user[66656]:             Total: 19
Feb 20 08:09:49 np0005625203.localdomain puppet-user[66656]: Time:
Feb 20 08:09:49 np0005625203.localdomain puppet-user[66656]:        Filebucket: 0.00
Feb 20 08:09:49 np0005625203.localdomain puppet-user[66656]:          Schedule: 0.00
Feb 20 08:09:49 np0005625203.localdomain puppet-user[66656]:            Augeas: 0.01
Feb 20 08:09:49 np0005625203.localdomain puppet-user[66656]:              File: 0.10
Feb 20 08:09:49 np0005625203.localdomain puppet-user[66656]:    Config retrieval: 0.31
Feb 20 08:09:49 np0005625203.localdomain puppet-user[66656]:           Service: 1.32
Feb 20 08:09:49 np0005625203.localdomain puppet-user[66656]:    Transaction evaluation: 15.52
Feb 20 08:09:49 np0005625203.localdomain puppet-user[66656]:    Catalog application: 15.53
Feb 20 08:09:49 np0005625203.localdomain puppet-user[66656]:          Last run: 1771574989
Feb 20 08:09:49 np0005625203.localdomain puppet-user[66656]:              Exec: 5.06
Feb 20 08:09:49 np0005625203.localdomain puppet-user[66656]:           Package: 8.84
Feb 20 08:09:49 np0005625203.localdomain puppet-user[66656]:             Total: 15.54
Feb 20 08:09:49 np0005625203.localdomain puppet-user[66656]: Version:
Feb 20 08:09:49 np0005625203.localdomain puppet-user[66656]:            Config: 1771574974
Feb 20 08:09:49 np0005625203.localdomain puppet-user[66656]:            Puppet: 7.10.0
Feb 20 08:09:50 np0005625203.localdomain ansible-async_wrapper.py[66637]: Module complete (66637)
Feb 20 08:09:50 np0005625203.localdomain sshd[68033]: Disconnecting invalid user antminermonitor 185.246.128.171 port 54167: Change of username or service not allowed: (antminermonitor,ssh-connection) -> (giovanni,ssh-connect [preauth]
Feb 20 08:09:50 np0005625203.localdomain ansible-async_wrapper.py[66636]: Done in kid B.
Feb 20 08:09:51 np0005625203.localdomain sudo[68163]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-efptqiutlbqjmhohnrvlfsbqzxkwnysf ; /usr/bin/python3
Feb 20 08:09:51 np0005625203.localdomain sudo[68163]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:09:51 np0005625203.localdomain python3[68165]: ansible-ansible.legacy.async_status Invoked with jid=435037392731.66633 mode=status _async_dir=/tmp/.ansible_async
Feb 20 08:09:51 np0005625203.localdomain sudo[68163]: pam_unix(sudo:session): session closed for user root
Feb 20 08:09:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 20 08:09:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 1800.1 total, 600.0 interval
                                                          Cumulative writes: 5293 writes, 23K keys, 5293 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                                          Cumulative WAL: 5293 writes, 571 syncs, 9.27 writes per sync, written: 0.02 GB, 0.01 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 210 writes, 565 keys, 210 commit groups, 1.0 writes per commit group, ingest: 0.54 MB, 0.00 MB/s
                                                          Interval WAL: 210 writes, 103 syncs, 2.04 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 20 08:09:52 np0005625203.localdomain sudo[68179]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ftcjsstpljtmlewjdyaymlitntptywdg ; /usr/bin/python3
Feb 20 08:09:52 np0005625203.localdomain sudo[68179]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:09:52 np0005625203.localdomain python3[68181]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Feb 20 08:09:52 np0005625203.localdomain sudo[68179]: pam_unix(sudo:session): session closed for user root
Feb 20 08:09:52 np0005625203.localdomain sudo[68195]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-grpbkvznvudcyfadjnjbqyqegogjjflz ; /usr/bin/python3
Feb 20 08:09:52 np0005625203.localdomain sudo[68195]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:09:52 np0005625203.localdomain python3[68197]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 20 08:09:52 np0005625203.localdomain sudo[68195]: pam_unix(sudo:session): session closed for user root
Feb 20 08:09:53 np0005625203.localdomain sudo[68245]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oxvwlklvpkwkvyudrxhkdwwixsqzevsf ; /usr/bin/python3
Feb 20 08:09:53 np0005625203.localdomain sudo[68245]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:09:53 np0005625203.localdomain python3[68247]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 08:09:53 np0005625203.localdomain sudo[68245]: pam_unix(sudo:session): session closed for user root
Feb 20 08:09:53 np0005625203.localdomain sudo[68263]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zxesdnmqoapzedcokswdqvvtilxjbgns ; /usr/bin/python3
Feb 20 08:09:53 np0005625203.localdomain sudo[68263]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:09:53 np0005625203.localdomain sshd[68266]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:09:53 np0005625203.localdomain python3[68265]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmpr2qio3sv recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Feb 20 08:09:53 np0005625203.localdomain sudo[68263]: pam_unix(sudo:session): session closed for user root
Feb 20 08:09:53 np0005625203.localdomain sudo[68294]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tsngpjowduzlboaqenusrturxwnaoqqk ; /usr/bin/python3
Feb 20 08:09:53 np0005625203.localdomain sudo[68294]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:09:53 np0005625203.localdomain python3[68296]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:09:53 np0005625203.localdomain sudo[68294]: pam_unix(sudo:session): session closed for user root
Feb 20 08:09:54 np0005625203.localdomain sudo[68310]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ksfpslcwwvgfcfsmnyltocjnzgqhbkcc ; /usr/bin/python3
Feb 20 08:09:54 np0005625203.localdomain sudo[68310]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:09:54 np0005625203.localdomain sudo[68310]: pam_unix(sudo:session): session closed for user root
Feb 20 08:09:54 np0005625203.localdomain sudo[68398]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xoapdtgngaozchuinivswmjykimdquuh ; /usr/bin/python3
Feb 20 08:09:54 np0005625203.localdomain sudo[68398]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:09:55 np0005625203.localdomain python3[68400]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None
Feb 20 08:09:55 np0005625203.localdomain sudo[68398]: pam_unix(sudo:session): session closed for user root
Feb 20 08:09:55 np0005625203.localdomain sudo[68417]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ztwbcaygemjwjmobtcvemlvhvatjeeuq ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:09:55 np0005625203.localdomain sudo[68417]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:09:55 np0005625203.localdomain python3[68419]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:09:55 np0005625203.localdomain sudo[68417]: pam_unix(sudo:session): session closed for user root
Feb 20 08:09:56 np0005625203.localdomain sudo[68433]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gwzbyuywutzusqedpvrdapuegbgwsqsk ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:09:56 np0005625203.localdomain sudo[68433]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:09:56 np0005625203.localdomain sshd[68266]: Invalid user giovanni from 185.246.128.171 port 15483
Feb 20 08:09:56 np0005625203.localdomain sudo[68433]: pam_unix(sudo:session): session closed for user root
Feb 20 08:09:56 np0005625203.localdomain sudo[68449]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rzpanroqrvitbnktoaikbfmnvydxpmbf ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:09:56 np0005625203.localdomain sudo[68449]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:09:56 np0005625203.localdomain python3[68451]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 20 08:09:56 np0005625203.localdomain sudo[68449]: pam_unix(sudo:session): session closed for user root
Feb 20 08:09:56 np0005625203.localdomain sshd[68266]: Disconnecting invalid user giovanni 185.246.128.171 port 15483: Change of username or service not allowed: (giovanni,ssh-connection) -> (emby,ssh-connection) [preauth]
Feb 20 08:09:57 np0005625203.localdomain sudo[68499]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bbmzrkcwmupeuhcpmhafydoaqhomecjx ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:09:57 np0005625203.localdomain sudo[68499]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:09:57 np0005625203.localdomain python3[68501]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 08:09:57 np0005625203.localdomain sudo[68499]: pam_unix(sudo:session): session closed for user root
Feb 20 08:09:57 np0005625203.localdomain sudo[68517]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-obdwdzkkpurwkzysclyomtcsrxxjzagc ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:09:57 np0005625203.localdomain sudo[68517]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:09:57 np0005625203.localdomain python3[68519]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:09:57 np0005625203.localdomain sudo[68517]: pam_unix(sudo:session): session closed for user root
Feb 20 08:09:58 np0005625203.localdomain sudo[68579]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-frjsqlfzznqaoeehpkdsktokzwcrdcxz ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:09:58 np0005625203.localdomain sudo[68579]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:09:58 np0005625203.localdomain python3[68581]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 08:09:58 np0005625203.localdomain sudo[68579]: pam_unix(sudo:session): session closed for user root
Feb 20 08:09:58 np0005625203.localdomain sudo[68597]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nsvunkwqoldnvbvtxthafxgttxabkcmo ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:09:58 np0005625203.localdomain sudo[68597]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:09:58 np0005625203.localdomain python3[68599]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:09:58 np0005625203.localdomain sudo[68597]: pam_unix(sudo:session): session closed for user root
Feb 20 08:09:59 np0005625203.localdomain sudo[68659]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-snrimflfznzhhfkelwvxcrzxtfpajods ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:09:59 np0005625203.localdomain sudo[68659]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:09:59 np0005625203.localdomain python3[68661]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 08:09:59 np0005625203.localdomain sudo[68659]: pam_unix(sudo:session): session closed for user root
Feb 20 08:09:59 np0005625203.localdomain sudo[68677]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sfiezkanlwyvpyhsluklfohjmapozzqu ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:09:59 np0005625203.localdomain sudo[68677]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:09:59 np0005625203.localdomain python3[68679]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:09:59 np0005625203.localdomain sudo[68677]: pam_unix(sudo:session): session closed for user root
Feb 20 08:09:59 np0005625203.localdomain sshd[68694]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:09:59 np0005625203.localdomain sudo[68741]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xfbzejrqisltusjoduozuxfekspbcdti ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:09:59 np0005625203.localdomain sudo[68741]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:10:00 np0005625203.localdomain python3[68743]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 08:10:00 np0005625203.localdomain sudo[68741]: pam_unix(sudo:session): session closed for user root
Feb 20 08:10:00 np0005625203.localdomain sudo[68759]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kkjgywvcjiuyypzxphncrsyxuigwrjmc ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:10:00 np0005625203.localdomain sudo[68759]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:10:00 np0005625203.localdomain sshd[68762]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:10:00 np0005625203.localdomain python3[68761]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:10:00 np0005625203.localdomain sudo[68759]: pam_unix(sudo:session): session closed for user root
Feb 20 08:10:00 np0005625203.localdomain sudo[68791]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dllnlhqqxxirkkjeimuexrrlqphyjvsz ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:10:00 np0005625203.localdomain sudo[68791]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:10:00 np0005625203.localdomain sshd[68762]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:10:00 np0005625203.localdomain python3[68793]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 08:10:00 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 08:10:00 np0005625203.localdomain systemd-rc-local-generator[68816]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 08:10:00 np0005625203.localdomain systemd-sysv-generator[68820]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 08:10:00 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 08:10:01 np0005625203.localdomain sudo[68830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:10:01 np0005625203.localdomain sudo[68830]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:10:01 np0005625203.localdomain sudo[68830]: pam_unix(sudo:session): session closed for user root
Feb 20 08:10:01 np0005625203.localdomain sudo[68791]: pam_unix(sudo:session): session closed for user root
Feb 20 08:10:01 np0005625203.localdomain sudo[68847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 check-host
Feb 20 08:10:01 np0005625203.localdomain sudo[68847]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:10:01 np0005625203.localdomain sudo[68907]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gsbjcrlogamkzymfnhyuyxyhapqahwuq ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:10:01 np0005625203.localdomain sudo[68907]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:10:01 np0005625203.localdomain sudo[68847]: pam_unix(sudo:session): session closed for user root
Feb 20 08:10:01 np0005625203.localdomain python3[68911]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 08:10:01 np0005625203.localdomain sudo[68907]: pam_unix(sudo:session): session closed for user root
Feb 20 08:10:01 np0005625203.localdomain sudo[68950]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aqemwmwwkmcusprvgloezkvswbcuruzq ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:10:01 np0005625203.localdomain sudo[68950]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:10:01 np0005625203.localdomain sudo[68944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:10:01 np0005625203.localdomain sudo[68944]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:10:01 np0005625203.localdomain sudo[68944]: pam_unix(sudo:session): session closed for user root
Feb 20 08:10:01 np0005625203.localdomain sudo[68964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 08:10:01 np0005625203.localdomain sudo[68964]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:10:01 np0005625203.localdomain python3[68961]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:10:01 np0005625203.localdomain sudo[68950]: pam_unix(sudo:session): session closed for user root
Feb 20 08:10:02 np0005625203.localdomain sudo[69052]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gsjmhreqrfkgrslejnltrzahinyqfprb ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:10:02 np0005625203.localdomain sshd[68694]: Invalid user emby from 185.246.128.171 port 22059
Feb 20 08:10:02 np0005625203.localdomain sudo[69052]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:10:02 np0005625203.localdomain python3[69054]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 08:10:03 np0005625203.localdomain sudo[69052]: pam_unix(sudo:session): session closed for user root
Feb 20 08:10:03 np0005625203.localdomain sudo[68964]: pam_unix(sudo:session): session closed for user root
Feb 20 08:10:03 np0005625203.localdomain sudo[69087]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hlwalubcclpngrppwazidyfumqbaycxd ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:10:03 np0005625203.localdomain sudo[69087]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:10:03 np0005625203.localdomain sudo[69090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:10:03 np0005625203.localdomain sudo[69090]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:10:03 np0005625203.localdomain sudo[69090]: pam_unix(sudo:session): session closed for user root
Feb 20 08:10:03 np0005625203.localdomain sudo[69105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ceph-volume --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8 -- inventory --format=json-pretty --filter-for-batch
Feb 20 08:10:03 np0005625203.localdomain sudo[69105]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:10:03 np0005625203.localdomain python3[69089]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:10:03 np0005625203.localdomain sudo[69087]: pam_unix(sudo:session): session closed for user root
Feb 20 08:10:03 np0005625203.localdomain sshd[68694]: Disconnecting invalid user emby 185.246.128.171 port 22059: Change of username or service not allowed: (emby,ssh-connection) -> (vtiger,ssh-connection) [preauth]
Feb 20 08:10:03 np0005625203.localdomain podman[69176]: 
Feb 20 08:10:03 np0005625203.localdomain podman[69176]: 2026-02-20 08:10:03.865387115 +0000 UTC m=+0.078609663 container create 085afee3464e63006baa32ca0d915b3eef1fa98160cf7844740ffc85a5a6f3c8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_hopper, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-09T10:25:24Z, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, io.openshift.tags=rhceph ceph, architecture=x86_64, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, ceph=True, CEPH_POINT_RELEASE=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, GIT_BRANCH=main, release=1770267347, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Feb 20 08:10:03 np0005625203.localdomain sudo[69203]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hajxedsqtkdeowozrjoztlyelabbuixx ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:10:03 np0005625203.localdomain systemd[1]: Started libpod-conmon-085afee3464e63006baa32ca0d915b3eef1fa98160cf7844740ffc85a5a6f3c8.scope.
Feb 20 08:10:03 np0005625203.localdomain sudo[69203]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:10:03 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 08:10:03 np0005625203.localdomain podman[69176]: 2026-02-20 08:10:03.832496773 +0000 UTC m=+0.045719331 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 08:10:03 np0005625203.localdomain podman[69176]: 2026-02-20 08:10:03.947483906 +0000 UTC m=+0.160706444 container init 085afee3464e63006baa32ca0d915b3eef1fa98160cf7844740ffc85a5a6f3c8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_hopper, io.openshift.tags=rhceph ceph, distribution-scope=public, description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, architecture=x86_64, release=1770267347, build-date=2026-02-09T10:25:24Z, ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, com.redhat.component=rhceph-container, version=7, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc.)
Feb 20 08:10:03 np0005625203.localdomain podman[69176]: 2026-02-20 08:10:03.958752446 +0000 UTC m=+0.171974964 container start 085afee3464e63006baa32ca0d915b3eef1fa98160cf7844740ffc85a5a6f3c8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_hopper, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, GIT_CLEAN=True, build-date=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.42.2, com.redhat.component=rhceph-container, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, distribution-scope=public, org.opencontainers.image.created=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, architecture=x86_64, GIT_BRANCH=main, release=1770267347, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph)
Feb 20 08:10:03 np0005625203.localdomain podman[69176]: 2026-02-20 08:10:03.959030774 +0000 UTC m=+0.172253372 container attach 085afee3464e63006baa32ca0d915b3eef1fa98160cf7844740ffc85a5a6f3c8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_hopper, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, architecture=x86_64, GIT_BRANCH=main, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Feb 20 08:10:03 np0005625203.localdomain sleepy_hopper[69207]: 167 167
Feb 20 08:10:03 np0005625203.localdomain systemd[1]: libpod-085afee3464e63006baa32ca0d915b3eef1fa98160cf7844740ffc85a5a6f3c8.scope: Deactivated successfully.
Feb 20 08:10:03 np0005625203.localdomain podman[69176]: 2026-02-20 08:10:03.963449502 +0000 UTC m=+0.176672050 container died 085afee3464e63006baa32ca0d915b3eef1fa98160cf7844740ffc85a5a6f3c8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_hopper, io.openshift.expose-services=, RELEASE=main, version=7, architecture=x86_64, release=1770267347, build-date=2026-02-09T10:25:24Z, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, org.opencontainers.image.created=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, ceph=True, description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Feb 20 08:10:04 np0005625203.localdomain podman[69213]: 2026-02-20 08:10:04.062340344 +0000 UTC m=+0.088045176 container remove 085afee3464e63006baa32ca0d915b3eef1fa98160cf7844740ffc85a5a6f3c8 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sleepy_hopper, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, distribution-scope=public, GIT_BRANCH=main, build-date=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, GIT_CLEAN=True, io.buildah.version=1.42.2, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, architecture=x86_64, version=7, name=rhceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347)
Feb 20 08:10:04 np0005625203.localdomain systemd[1]: libpod-conmon-085afee3464e63006baa32ca0d915b3eef1fa98160cf7844740ffc85a5a6f3c8.scope: Deactivated successfully.
Feb 20 08:10:04 np0005625203.localdomain python3[69209]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 08:10:04 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 08:10:04 np0005625203.localdomain podman[69234]: 
Feb 20 08:10:04 np0005625203.localdomain podman[69234]: 2026-02-20 08:10:04.250640233 +0000 UTC m=+0.071112510 container create 5c1d56c94099ae17081cd7051e038a79201defe1f7a7d533d352457ff635ed6b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_wilbur, CEPH_POINT_RELEASE=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.42.2, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1770267347, build-date=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, io.openshift.tags=rhceph ceph, distribution-scope=public, RELEASE=main, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.expose-services=, name=rhceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main)
Feb 20 08:10:04 np0005625203.localdomain systemd-rc-local-generator[69266]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 08:10:04 np0005625203.localdomain systemd-sysv-generator[69272]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 08:10:04 np0005625203.localdomain podman[69234]: 2026-02-20 08:10:04.223565043 +0000 UTC m=+0.044037370 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 08:10:04 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 08:10:04 np0005625203.localdomain sshd[69284]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:10:04 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-91ec02f3afd972a358f9ca26cdd192b0f2c721c2297c3926be70a49d62c2b579-merged.mount: Deactivated successfully.
Feb 20 08:10:04 np0005625203.localdomain systemd[1]: Started libpod-conmon-5c1d56c94099ae17081cd7051e038a79201defe1f7a7d533d352457ff635ed6b.scope.
Feb 20 08:10:04 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 08:10:04 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a54e09f1f878b5bba78e2dfebab2d6ec853b06377f9de5b7ec84538b5068b67c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 20 08:10:04 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a54e09f1f878b5bba78e2dfebab2d6ec853b06377f9de5b7ec84538b5068b67c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 20 08:10:04 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a54e09f1f878b5bba78e2dfebab2d6ec853b06377f9de5b7ec84538b5068b67c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 20 08:10:04 np0005625203.localdomain podman[69234]: 2026-02-20 08:10:04.560261413 +0000 UTC m=+0.380733720 container init 5c1d56c94099ae17081cd7051e038a79201defe1f7a7d533d352457ff635ed6b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_wilbur, com.redhat.component=rhceph-container, architecture=x86_64, vcs-type=git, GIT_CLEAN=True, build-date=2026-02-09T10:25:24Z, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.42.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, distribution-scope=public, io.openshift.tags=rhceph ceph, release=1770267347, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, description=Red Hat Ceph Storage 7)
Feb 20 08:10:04 np0005625203.localdomain systemd[1]: Starting Create netns directory...
Feb 20 08:10:04 np0005625203.localdomain systemd[1]: netns-placeholder.service: Deactivated successfully.
Feb 20 08:10:04 np0005625203.localdomain systemd[1]: Finished Create netns directory.
Feb 20 08:10:04 np0005625203.localdomain podman[69234]: 2026-02-20 08:10:04.571787651 +0000 UTC m=+0.392259928 container start 5c1d56c94099ae17081cd7051e038a79201defe1f7a7d533d352457ff635ed6b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_wilbur, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, release=1770267347, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, build-date=2026-02-09T10:25:24Z, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., name=rhceph, io.buildah.version=1.42.2, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container)
Feb 20 08:10:04 np0005625203.localdomain podman[69234]: 2026-02-20 08:10:04.572061449 +0000 UTC m=+0.392533806 container attach 5c1d56c94099ae17081cd7051e038a79201defe1f7a7d533d352457ff635ed6b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_wilbur, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, architecture=x86_64, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, name=rhceph, release=1770267347, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, io.openshift.tags=rhceph ceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, GIT_CLEAN=True, CEPH_POINT_RELEASE=)
Feb 20 08:10:04 np0005625203.localdomain sudo[69203]: pam_unix(sudo:session): session closed for user root
Feb 20 08:10:04 np0005625203.localdomain sudo[69310]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ogdmtgothrndiqmobzkygtpcxexzdtum ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:10:04 np0005625203.localdomain sudo[69310]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:10:05 np0005625203.localdomain python3[69313]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6
Feb 20 08:10:05 np0005625203.localdomain sudo[69310]: pam_unix(sudo:session): session closed for user root
Feb 20 08:10:05 np0005625203.localdomain sudo[69812]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rximgbgwnsflihadzfhodpxjewszrkwl ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:10:05 np0005625203.localdomain sudo[69812]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:10:05 np0005625203.localdomain great_wilbur[69286]: [
Feb 20 08:10:05 np0005625203.localdomain great_wilbur[69286]:     {
Feb 20 08:10:05 np0005625203.localdomain great_wilbur[69286]:         "available": false,
Feb 20 08:10:05 np0005625203.localdomain great_wilbur[69286]:         "ceph_device": false,
Feb 20 08:10:05 np0005625203.localdomain great_wilbur[69286]:         "device_id": "QEMU_DVD-ROM_QM00001",
Feb 20 08:10:05 np0005625203.localdomain great_wilbur[69286]:         "lsm_data": {},
Feb 20 08:10:05 np0005625203.localdomain great_wilbur[69286]:         "lvs": [],
Feb 20 08:10:05 np0005625203.localdomain great_wilbur[69286]:         "path": "/dev/sr0",
Feb 20 08:10:05 np0005625203.localdomain great_wilbur[69286]:         "rejected_reasons": [
Feb 20 08:10:05 np0005625203.localdomain great_wilbur[69286]:             "Has a FileSystem",
Feb 20 08:10:05 np0005625203.localdomain great_wilbur[69286]:             "Insufficient space (<5GB)"
Feb 20 08:10:05 np0005625203.localdomain great_wilbur[69286]:         ],
Feb 20 08:10:05 np0005625203.localdomain great_wilbur[69286]:         "sys_api": {
Feb 20 08:10:05 np0005625203.localdomain great_wilbur[69286]:             "actuators": null,
Feb 20 08:10:05 np0005625203.localdomain great_wilbur[69286]:             "device_nodes": "sr0",
Feb 20 08:10:05 np0005625203.localdomain great_wilbur[69286]:             "human_readable_size": "482.00 KB",
Feb 20 08:10:05 np0005625203.localdomain great_wilbur[69286]:             "id_bus": "ata",
Feb 20 08:10:05 np0005625203.localdomain great_wilbur[69286]:             "model": "QEMU DVD-ROM",
Feb 20 08:10:05 np0005625203.localdomain great_wilbur[69286]:             "nr_requests": "2",
Feb 20 08:10:05 np0005625203.localdomain great_wilbur[69286]:             "partitions": {},
Feb 20 08:10:05 np0005625203.localdomain great_wilbur[69286]:             "path": "/dev/sr0",
Feb 20 08:10:05 np0005625203.localdomain great_wilbur[69286]:             "removable": "1",
Feb 20 08:10:05 np0005625203.localdomain great_wilbur[69286]:             "rev": "2.5+",
Feb 20 08:10:05 np0005625203.localdomain great_wilbur[69286]:             "ro": "0",
Feb 20 08:10:05 np0005625203.localdomain great_wilbur[69286]:             "rotational": "1",
Feb 20 08:10:05 np0005625203.localdomain great_wilbur[69286]:             "sas_address": "",
Feb 20 08:10:05 np0005625203.localdomain great_wilbur[69286]:             "sas_device_handle": "",
Feb 20 08:10:05 np0005625203.localdomain great_wilbur[69286]:             "scheduler_mode": "mq-deadline",
Feb 20 08:10:05 np0005625203.localdomain great_wilbur[69286]:             "sectors": 0,
Feb 20 08:10:05 np0005625203.localdomain great_wilbur[69286]:             "sectorsize": "2048",
Feb 20 08:10:05 np0005625203.localdomain great_wilbur[69286]:             "size": 493568.0,
Feb 20 08:10:05 np0005625203.localdomain great_wilbur[69286]:             "support_discard": "0",
Feb 20 08:10:05 np0005625203.localdomain great_wilbur[69286]:             "type": "disk",
Feb 20 08:10:05 np0005625203.localdomain great_wilbur[69286]:             "vendor": "QEMU"
Feb 20 08:10:05 np0005625203.localdomain great_wilbur[69286]:         }
Feb 20 08:10:05 np0005625203.localdomain great_wilbur[69286]:     }
Feb 20 08:10:05 np0005625203.localdomain great_wilbur[69286]: ]
Feb 20 08:10:05 np0005625203.localdomain systemd[1]: libpod-5c1d56c94099ae17081cd7051e038a79201defe1f7a7d533d352457ff635ed6b.scope: Deactivated successfully.
Feb 20 08:10:05 np0005625203.localdomain podman[70903]: 2026-02-20 08:10:05.573409537 +0000 UTC m=+0.040912322 container died 5c1d56c94099ae17081cd7051e038a79201defe1f7a7d533d352457ff635ed6b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_wilbur, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, architecture=x86_64, version=7, description=Red Hat Ceph Storage 7, distribution-scope=public, name=rhceph, release=1770267347, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, io.buildah.version=1.42.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, build-date=2026-02-09T10:25:24Z, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container)
Feb 20 08:10:05 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-a54e09f1f878b5bba78e2dfebab2d6ec853b06377f9de5b7ec84538b5068b67c-merged.mount: Deactivated successfully.
Feb 20 08:10:05 np0005625203.localdomain podman[70903]: 2026-02-20 08:10:05.614648688 +0000 UTC m=+0.082151423 container remove 5c1d56c94099ae17081cd7051e038a79201defe1f7a7d533d352457ff635ed6b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=great_wilbur, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, build-date=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_CLEAN=True, release=1770267347, distribution-scope=public, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, io.buildah.version=1.42.2, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.openshift.tags=rhceph ceph)
Feb 20 08:10:05 np0005625203.localdomain systemd[1]: libpod-conmon-5c1d56c94099ae17081cd7051e038a79201defe1f7a7d533d352457ff635ed6b.scope: Deactivated successfully.
Feb 20 08:10:05 np0005625203.localdomain sudo[69105]: pam_unix(sudo:session): session closed for user root
Feb 20 08:10:05 np0005625203.localdomain sshd[69284]: Invalid user vtiger from 185.246.128.171 port 9777
Feb 20 08:10:05 np0005625203.localdomain sshd[69284]: Disconnecting invalid user vtiger 185.246.128.171 port 9777: Change of username or service not allowed: (vtiger,ssh-connection) -> (support,ssh-connection) [preauth]
Feb 20 08:10:06 np0005625203.localdomain sudo[71041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 08:10:06 np0005625203.localdomain sudo[71041]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:10:06 np0005625203.localdomain sudo[71041]: pam_unix(sudo:session): session closed for user root
Feb 20 08:10:06 np0005625203.localdomain sshd[71118]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:10:07 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:10:07 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:10:07 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:10:07 np0005625203.localdomain podman[71264]: 2026-02-20 08:10:07.797363787 +0000 UTC m=+0.095141817 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, url=https://www.redhat.com, release=1766032510, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, container_name=iscsid, version=17.1.13, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, name=rhosp-rhel9/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.openshift.expose-services=, build-date=2026-01-12T22:34:43Z, org.opencontainers.image.created=2026-01-12T22:34:43Z, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid)
Feb 20 08:10:07 np0005625203.localdomain podman[71264]: 2026-02-20 08:10:07.810227657 +0000 UTC m=+0.108005657 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., io.openshift.expose-services=, batch=17.1_20260112.1, release=1766032510, com.redhat.component=openstack-iscsid-container, name=rhosp-rhel9/openstack-iscsid, container_name=iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, vcs-type=git, vcs-ref=705339545363fec600102567c4e923938e0f43b3, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, config_id=tripleo_step3, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid)
Feb 20 08:10:07 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:10:07 np0005625203.localdomain systemd[1]: tmp-crun.gXdx3W.mount: Deactivated successfully.
Feb 20 08:10:07 np0005625203.localdomain podman[71263]: 2026-02-20 08:10:07.887044753 +0000 UTC m=+0.184203873 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-type=git, batch=17.1_20260112.1, io.openshift.expose-services=, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:15Z, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-collectd, distribution-scope=public, io.buildah.version=1.41.5)
Feb 20 08:10:07 np0005625203.localdomain podman[71263]: 2026-02-20 08:10:07.902260796 +0000 UTC m=+0.199419916 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, io.openshift.expose-services=, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, container_name=collectd, io.buildah.version=1.41.5, vcs-type=git, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, tcib_managed=true, release=1766032510)
Feb 20 08:10:07 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:10:07 np0005625203.localdomain podman[71265]: 2026-02-20 08:10:07.95324607 +0000 UTC m=+0.246498929 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, batch=17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vendor=Red Hat, Inc., release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:14Z, tcib_managed=true, config_id=tripleo_step1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public)
Feb 20 08:10:08 np0005625203.localdomain podman[71265]: 2026-02-20 08:10:08.179423836 +0000 UTC m=+0.472676645 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., architecture=x86_64, release=1766032510, name=rhosp-rhel9/openstack-qdrouterd, io.openshift.expose-services=, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, container_name=metrics_qdr, org.opencontainers.image.created=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd)
Feb 20 08:10:08 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:10:08 np0005625203.localdomain sudo[69812]: pam_unix(sudo:session): session closed for user root
Feb 20 08:10:08 np0005625203.localdomain sshd[71393]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:10:08 np0005625203.localdomain sshd[71118]: Invalid user support from 185.246.128.171 port 37162
Feb 20 08:10:08 np0005625203.localdomain systemd[1]: tmp-crun.zpgsUd.mount: Deactivated successfully.
Feb 20 08:10:08 np0005625203.localdomain sshd[71393]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:10:09 np0005625203.localdomain sudo[71408]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lgirfmrpxlwcayckabpgrjymuqaumqft ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:10:09 np0005625203.localdomain sudo[71408]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:10:09 np0005625203.localdomain python3[71410]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step4 config_dir=/var/lib/tripleo-config/container-startup-config/step_4 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False
Feb 20 08:10:10 np0005625203.localdomain podman[71568]: 2026-02-20 08:10:10.106697439 +0000 UTC m=+0.090240075 container create 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.buildah.version=1.41.5, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1766032510, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vcs-type=git, name=rhosp-rhel9/openstack-ceilometer-compute, batch=17.1_20260112.1, version=17.1.13, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:07:47Z, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64)
Feb 20 08:10:10 np0005625203.localdomain podman[71578]: 2026-02-20 08:10:10.11474956 +0000 UTC m=+0.093770015 container create 4b42ee3a2e822bf9b006ccb3380a1ce6668571aa159c9078ef5462a2c0361259 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, container_name=configure_cms_options, vcs-type=git, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml);  if [ X"$CMS_OPTS" !=  X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, com.redhat.component=openstack-ovn-controller-container, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, 
build-date=2026-01-12T22:36:40Z, name=rhosp-rhel9/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, tcib_managed=true, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, io.buildah.version=1.41.5, config_id=tripleo_step4, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, batch=17.1_20260112.1)
Feb 20 08:10:10 np0005625203.localdomain podman[71591]: 2026-02-20 08:10:10.119837907 +0000 UTC m=+0.086856489 container create 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_ipmi, vcs-type=git, batch=17.1_20260112.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:07:30Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, build-date=2026-01-12T23:07:30Z, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, tcib_managed=true, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06)
Feb 20 08:10:10 np0005625203.localdomain podman[71557]: 2026-02-20 08:10:10.141220952 +0000 UTC m=+0.127946286 container create 141f4e68b3d8516bf514613aa4ecd1ae9fd4dff3bd009e05351a4a23c7a81e35 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:31:49Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp-rhel9/openstack-nova-libvirt, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 
nova-libvirt, config_id=tripleo_step4, architecture=x86_64, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-nova-libvirt-container, container_name=nova_libvirt_init_secret, url=https://www.redhat.com, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-type=git, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, version=17.1.13, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5)
Feb 20 08:10:10 np0005625203.localdomain podman[71557]: 2026-02-20 08:10:10.044030652 +0000 UTC m=+0.030756006 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1
Feb 20 08:10:10 np0005625203.localdomain podman[71578]: 2026-02-20 08:10:10.054529428 +0000 UTC m=+0.033549883 image pull  registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1
Feb 20 08:10:10 np0005625203.localdomain podman[71568]: 2026-02-20 08:10:10.055586341 +0000 UTC m=+0.039128987 image pull  registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1
Feb 20 08:10:10 np0005625203.localdomain podman[71591]: 2026-02-20 08:10:10.063192037 +0000 UTC m=+0.030210639 image pull  registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1
Feb 20 08:10:10 np0005625203.localdomain systemd[1]: Started libpod-conmon-5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.scope.
Feb 20 08:10:10 np0005625203.localdomain podman[71617]: 2026-02-20 08:10:10.164241077 +0000 UTC m=+0.105635643 container create 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1766032510, tcib_managed=true, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, container_name=logrotate_crond, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:15Z)
Feb 20 08:10:10 np0005625203.localdomain systemd[1]: Started libpod-conmon-141f4e68b3d8516bf514613aa4ecd1ae9fd4dff3bd009e05351a4a23c7a81e35.scope.
Feb 20 08:10:10 np0005625203.localdomain systemd[1]: Started libpod-conmon-8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.scope.
Feb 20 08:10:10 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 08:10:10 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 08:10:10 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/787c8f875f9698c41ffef92771343c3a6a6dfa6fc78d05b953e695d5e698036f/merged/etc/nova supports timestamps until 2038 (0x7fffffff)
Feb 20 08:10:10 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/787c8f875f9698c41ffef92771343c3a6a6dfa6fc78d05b953e695d5e698036f/merged/etc/libvirt supports timestamps until 2038 (0x7fffffff)
Feb 20 08:10:10 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/787c8f875f9698c41ffef92771343c3a6a6dfa6fc78d05b953e695d5e698036f/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Feb 20 08:10:10 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 08:10:10 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b6de8b10d609c1a823e14e242768f43904250823435fbc3f960804bb5b5ac65/merged/var/log/ceilometer supports timestamps until 2038 (0x7fffffff)
Feb 20 08:10:10 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cebad060ab405d52d2b962638c8f57ff1b2ca4462868ac6b7d7f52e12ed3e0a/merged/var/log/ceilometer supports timestamps until 2038 (0x7fffffff)
Feb 20 08:10:10 np0005625203.localdomain systemd[1]: Started libpod-conmon-0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.scope.
Feb 20 08:10:10 np0005625203.localdomain podman[71617]: 2026-02-20 08:10:10.093082476 +0000 UTC m=+0.034477042 image pull  registry.redhat.io/rhosp-rhel9/openstack-cron:17.1
Feb 20 08:10:10 np0005625203.localdomain podman[71557]: 2026-02-20 08:10:10.194497367 +0000 UTC m=+0.181222701 container init 141f4e68b3d8516bf514613aa4ecd1ae9fd4dff3bd009e05351a4a23c7a81e35 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, managed_by=tripleo_ansible, config_id=tripleo_step4, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:31:49Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, name=rhosp-rhel9/openstack-nova-libvirt, vcs-type=git, build-date=2026-01-12T23:31:49Z, com.redhat.component=openstack-nova-libvirt-container, release=1766032510, tcib_managed=true, io.buildah.version=1.41.5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=nova_libvirt_init_secret, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, version=17.1.13, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20260112.1)
Feb 20 08:10:10 np0005625203.localdomain systemd[1]: Started libpod-conmon-4b42ee3a2e822bf9b006ccb3380a1ce6668571aa159c9078ef5462a2c0361259.scope.
Feb 20 08:10:10 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 08:10:10 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0b96efd9497779e5b269af474ffc26b10023ac5cdee3873df81b8013dae2e66/merged/var/log/containers supports timestamps until 2038 (0x7fffffff)
Feb 20 08:10:10 np0005625203.localdomain podman[71557]: 2026-02-20 08:10:10.210789013 +0000 UTC m=+0.197514377 container start 141f4e68b3d8516bf514613aa4ecd1ae9fd4dff3bd009e05351a4a23c7a81e35 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, name=rhosp-rhel9/openstack-nova-libvirt, architecture=x86_64, release=1766032510, distribution-scope=public, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-libvirt-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_libvirt_init_secret, tcib_managed=true, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2026-01-12T23:31:49Z, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:31:49Z, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt)
Feb 20 08:10:10 np0005625203.localdomain podman[71557]: 2026-02-20 08:10:10.211177615 +0000 UTC m=+0.197902979 container attach 141f4e68b3d8516bf514613aa4ecd1ae9fd4dff3bd009e05351a4a23c7a81e35 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, com.redhat.component=openstack-nova-libvirt-container, vcs-type=git, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.41.5, build-date=2026-01-12T23:31:49Z, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.13, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-libvirt, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, container_name=nova_libvirt_init_secret, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, io.openshift.expose-services=)
Feb 20 08:10:10 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 08:10:10 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:10:10 np0005625203.localdomain podman[71591]: 2026-02-20 08:10:10.216287874 +0000 UTC m=+0.183306486 container init 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://www.redhat.com, vcs-type=git, config_id=tripleo_step4, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.13, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, release=1766032510, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:30Z, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:10:10 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:10:10 np0005625203.localdomain podman[71617]: 2026-02-20 08:10:10.232109295 +0000 UTC m=+0.173503861 container init 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, io.buildah.version=1.41.5, version=17.1.13, url=https://www.redhat.com, io.k8s.description=Red Hat 
OpenStack Platform 17.1 cron, config_id=tripleo_step4, vcs-type=git, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, batch=17.1_20260112.1, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, distribution-scope=public, vendor=Red Hat, Inc.)
Feb 20 08:10:10 np0005625203.localdomain sudo[71672]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Feb 20 08:10:10 np0005625203.localdomain sudo[71672]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Feb 20 08:10:10 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:10:10 np0005625203.localdomain sudo[71675]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Feb 20 08:10:10 np0005625203.localdomain sudo[71675]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Feb 20 08:10:10 np0005625203.localdomain podman[71591]: 2026-02-20 08:10:10.254072697 +0000 UTC m=+0.221091279 container start 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, vendor=Red Hat, Inc., batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:30Z, release=1766032510, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, vcs-type=git, konflux.additional-tags=17.1.13 
17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, build-date=2026-01-12T23:07:30Z, io.buildah.version=1.41.5, tcib_managed=true, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container)
Feb 20 08:10:10 np0005625203.localdomain python3[71410]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ceilometer_agent_ipmi --conmon-pidfile /run/ceilometer_agent_ipmi.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=201974126bd6c3f7e7b4f5296aea3207 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=ceilometer_agent_ipmi --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ceilometer_agent_ipmi.log --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/ceilometer:/var/log/ceilometer:z registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1
Feb 20 08:10:10 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:10:10 np0005625203.localdomain podman[71568]: 2026-02-20 08:10:10.263175081 +0000 UTC m=+0.246717707 container init 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1766032510, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, 
url=https://www.redhat.com, architecture=x86_64, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:07:47Z, version=17.1.13, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, build-date=2026-01-12T23:07:47Z, config_id=tripleo_step4, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, distribution-scope=public, batch=17.1_20260112.1)
Feb 20 08:10:10 np0005625203.localdomain podman[71578]: 2026-02-20 08:10:10.276166844 +0000 UTC m=+0.255187319 container init 4b42ee3a2e822bf9b006ccb3380a1ce6668571aa159c9078ef5462a2c0361259 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ovn-controller, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, vcs-type=git, distribution-scope=public, io.buildah.version=1.41.5, container_name=configure_cms_options, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20260112.1, vendor=Red Hat, Inc., config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:36:40Z, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml);  if [ X"$CMS_OPTS" !=  X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller)
Feb 20 08:10:10 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:10:10 np0005625203.localdomain sudo[71696]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Feb 20 08:10:10 np0005625203.localdomain podman[71617]: 2026-02-20 08:10:10.283734509 +0000 UTC m=+0.225129065 container start 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2026-01-12T22:10:15Z, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, com.redhat.component=openstack-cron-container, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, architecture=x86_64, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 cron)
Feb 20 08:10:10 np0005625203.localdomain sudo[71696]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Feb 20 08:10:10 np0005625203.localdomain python3[71410]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name logrotate_crond --conmon-pidfile /run/logrotate_crond.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=53ed83bb0cae779ff95edb2002262c6f --healthcheck-command /usr/share/openstack-tripleo-common/healthcheck/cron --label config_id=tripleo_step4 --label container_name=logrotate_crond --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/logrotate_crond.log --network none --pid host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro --volume /var/log/containers:/var/log/containers:z registry.redhat.io/rhosp-rhel9/openstack-cron:17.1
Feb 20 08:10:10 np0005625203.localdomain sudo[71675]: pam_unix(sudo:session): session closed for user root
Feb 20 08:10:10 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:10:10 np0005625203.localdomain sudo[71672]: pam_unix(sudo:session): session closed for user root
Feb 20 08:10:10 np0005625203.localdomain crond[71674]: (CRON) STARTUP (1.5.7)
Feb 20 08:10:10 np0005625203.localdomain crond[71674]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 69% if used.)
Feb 20 08:10:10 np0005625203.localdomain crond[71674]: (CRON) INFO (running with inotify support)
Feb 20 08:10:10 np0005625203.localdomain podman[71568]: 2026-02-20 08:10:10.326465176 +0000 UTC m=+0.310007802 container start 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, build-date=2026-01-12T23:07:47Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, batch=17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, url=https://www.redhat.com, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.5)
Feb 20 08:10:10 np0005625203.localdomain python3[71410]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=201974126bd6c3f7e7b4f5296aea3207 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=ceilometer_agent_compute --label managed_by=tripleo_ansible --label config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ceilometer_agent_compute.log --network host --privileged=False --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume 
/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/log/containers/ceilometer:/var/log/ceilometer:z registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1
Feb 20 08:10:10 np0005625203.localdomain sudo[71696]: pam_unix(sudo:session): session closed for user root
Feb 20 08:10:10 np0005625203.localdomain podman[71578]: 2026-02-20 08:10:10.340184723 +0000 UTC m=+0.319205168 container start 4b42ee3a2e822bf9b006ccb3380a1ce6668571aa159c9078ef5462a2c0361259 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, release=1766032510, vendor=Red Hat, Inc., version=17.1.13, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=configure_cms_options, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.41.5, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.created=2026-01-12T22:36:40Z, tcib_managed=true, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml);  if [ X"$CMS_OPTS" !=  X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2026-01-12T22:36:40Z, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git)
Feb 20 08:10:10 np0005625203.localdomain podman[71578]: 2026-02-20 08:10:10.340359588 +0000 UTC m=+0.319380033 container attach 4b42ee3a2e822bf9b006ccb3380a1ce6668571aa159c9078ef5462a2c0361259 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, tcib_managed=true, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:36:40Z, container_name=configure_cms_options, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, name=rhosp-rhel9/openstack-ovn-controller, url=https://www.redhat.com, build-date=2026-01-12T22:36:40Z, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.openshift.expose-services=, batch=17.1_20260112.1, vcs-type=git, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml);  if [ X"$CMS_OPTS" !=  X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64)
Feb 20 08:10:10 np0005625203.localdomain systemd[1]: libpod-141f4e68b3d8516bf514613aa4ecd1ae9fd4dff3bd009e05351a4a23c7a81e35.scope: Deactivated successfully.
Feb 20 08:10:10 np0005625203.localdomain podman[71557]: 2026-02-20 08:10:10.381135725 +0000 UTC m=+0.367861059 container died 141f4e68b3d8516bf514613aa4ecd1ae9fd4dff3bd009e05351a4a23c7a81e35 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, vcs-type=git, managed_by=tripleo_ansible, distribution-scope=public, name=rhosp-rhel9/openstack-nova-libvirt, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, config_id=tripleo_step4, version=17.1.13, io.openshift.expose-services=, com.redhat.component=openstack-nova-libvirt-container, org.opencontainers.image.created=2026-01-12T23:31:49Z, description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, 
container_name=nova_libvirt_init_secret, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20260112.1, build-date=2026-01-12T23:31:49Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe)
Feb 20 08:10:10 np0005625203.localdomain ovs-vsctl[71786]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . external_ids ovn-cms-options
Feb 20 08:10:10 np0005625203.localdomain podman[71678]: 2026-02-20 08:10:10.458391405 +0000 UTC m=+0.203354278 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=starting, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:07:30Z, batch=17.1_20260112.1, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, build-date=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:10:10 np0005625203.localdomain systemd[1]: libpod-4b42ee3a2e822bf9b006ccb3380a1ce6668571aa159c9078ef5462a2c0361259.scope: Deactivated successfully.
Feb 20 08:10:10 np0005625203.localdomain podman[71700]: 2026-02-20 08:10:10.481532783 +0000 UTC m=+0.197860727 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=starting, architecture=x86_64, io.buildah.version=1.41.5, distribution-scope=public, release=1766032510, io.openshift.expose-services=, url=https://www.redhat.com, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-cron, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=logrotate_crond, build-date=2026-01-12T22:10:15Z, vcs-type=git)
Feb 20 08:10:10 np0005625203.localdomain podman[71761]: 2026-02-20 08:10:10.502811705 +0000 UTC m=+0.115610452 container cleanup 141f4e68b3d8516bf514613aa4ecd1ae9fd4dff3bd009e05351a4a23c7a81e35 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_libvirt_init_secret, com.redhat.component=openstack-nova-libvirt-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']}, maintainer=OpenStack TripleO Team, container_name=nova_libvirt_init_secret, description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, io.openshift.expose-services=, url=https://www.redhat.com, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, 
cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, distribution-scope=public, version=17.1.13, build-date=2026-01-12T23:31:49Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-libvirt, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.5, config_id=tripleo_step4)
Feb 20 08:10:10 np0005625203.localdomain systemd[1]: libpod-conmon-141f4e68b3d8516bf514613aa4ecd1ae9fd4dff3bd009e05351a4a23c7a81e35.scope: Deactivated successfully.
Feb 20 08:10:10 np0005625203.localdomain podman[71700]: 2026-02-20 08:10:10.541413204 +0000 UTC m=+0.257741168 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:15Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, vcs-type=git, 
config_id=tripleo_step4, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:15Z, batch=17.1_20260112.1, io.buildah.version=1.41.5, release=1766032510, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:10:10 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:10:10 np0005625203.localdomain python3[71410]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_libvirt_init_secret --cgroupns=host --conmon-pidfile /run/nova_libvirt_init_secret.pid --detach=False --env LIBVIRT_DEFAULT_URI=qemu:///system --env TRIPLEO_CONFIG_HASH=2eb7e8e9794eebaba92e1ff8facc8868 --label config_id=tripleo_step4 --label container_name=nova_libvirt_init_secret --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'command': '/nova_libvirt_init_secret.sh ceph:openstack', 'detach': False, 'environment': {'LIBVIRT_DEFAULT_URI': 'qemu:///system', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'privileged': False, 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova', '/etc/libvirt:/etc/libvirt', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro', '/var/lib/tripleo-config/ceph:/etc/ceph:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_libvirt_init_secret.log --network host --privileged=False --security-opt label=disable --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume 
/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova:/etc/nova --volume /etc/libvirt:/etc/libvirt --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /var/lib/container-config-scripts/nova_libvirt_init_secret.sh:/nova_libvirt_init_secret.sh:ro --volume /var/lib/tripleo-config/ceph:/etc/ceph:ro registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1 /nova_libvirt_init_secret.sh ceph:openstack
Feb 20 08:10:10 np0005625203.localdomain podman[71578]: 2026-02-20 08:10:10.568234097 +0000 UTC m=+0.547254632 container died 4b42ee3a2e822bf9b006ccb3380a1ce6668571aa159c9078ef5462a2c0361259 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.expose-services=, distribution-scope=public, managed_by=tripleo_ansible, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml);  if [ X"$CMS_OPTS" !=  X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, org.opencontainers.image.created=2026-01-12T22:36:40Z, version=17.1.13, name=rhosp-rhel9/openstack-ovn-controller, vcs-type=git, url=https://www.redhat.com, container_name=configure_cms_options, release=1766032510, build-date=2026-01-12T22:36:40Z, 
description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:10:10 np0005625203.localdomain podman[71720]: 2026-02-20 08:10:10.467008412 +0000 UTC m=+0.150664341 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=starting, config_id=tripleo_step4, version=17.1.13, build-date=2026-01-12T23:07:47Z, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:07:47Z, distribution-scope=public, io.buildah.version=1.41.5, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com)
Feb 20 08:10:10 np0005625203.localdomain podman[71678]: 2026-02-20 08:10:10.57571325 +0000 UTC m=+0.320676173 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_ipmi, distribution-scope=public, 
org.opencontainers.image.created=2026-01-12T23:07:30Z, com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, version=17.1.13, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64)
Feb 20 08:10:10 np0005625203.localdomain podman[71678]: unhealthy
Feb 20 08:10:10 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:10:10 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Failed with result 'exit-code'.
Feb 20 08:10:10 np0005625203.localdomain podman[71720]: 2026-02-20 08:10:10.602162002 +0000 UTC m=+0.285817941 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, build-date=2026-01-12T23:07:47Z, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, config_id=tripleo_step4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Feb 20 08:10:10 np0005625203.localdomain podman[71720]: unhealthy
Feb 20 08:10:10 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:10:10 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Failed with result 'exit-code'.
Feb 20 08:10:10 np0005625203.localdomain podman[71807]: 2026-02-20 08:10:10.647737398 +0000 UTC m=+0.171890962 container cleanup 4b42ee3a2e822bf9b006ccb3380a1ce6668571aa159c9078ef5462a2c0361259 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=configure_cms_options, io.buildah.version=1.41.5, release=1766032510, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, version=17.1.13, build-date=2026-01-12T22:36:40Z, url=https://www.redhat.com, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml);  if [ X"$CMS_OPTS" !=  X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . 
external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, config_id=tripleo_step4, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, container_name=configure_cms_options, architecture=x86_64, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:36:40Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller)
Feb 20 08:10:10 np0005625203.localdomain systemd[1]: libpod-conmon-4b42ee3a2e822bf9b006ccb3380a1ce6668571aa159c9078ef5462a2c0361259.scope: Deactivated successfully.
Feb 20 08:10:10 np0005625203.localdomain python3[71410]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name configure_cms_options --conmon-pidfile /run/configure_cms_options.pid --detach=False --env TRIPLEO_DEPLOY_IDENTIFIER=1771573229 --label config_id=tripleo_step4 --label container_name=configure_cms_options --label managed_by=tripleo_ansible --label config_data={'command': ['/bin/bash', '-c', 'CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml);  if [ X"$CMS_OPTS" !=  X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/configure_cms_options.log --network host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume 
/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1 /bin/bash -c CMS_OPTS=$(hiera ovn::controller::ovn_cms_options -c /etc/puppet/hiera.yaml);  if [ X"$CMS_OPTS" !=  X ]; then ovs-vsctl set open . external_ids:ovn-cms-options=$CMS_OPTS;else ovs-vsctl remove open . external_ids ovn-cms-options; fi
Feb 20 08:10:10 np0005625203.localdomain podman[71935]: 2026-02-20 08:10:10.780321236 +0000 UTC m=+0.063561786 container create d93a0b3d6d3f9611a8cce5a7ef59cca4608466101b17666506129bfd18de1a8f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, config_id=tripleo_step4, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, 
cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, container_name=setup_ovs_manager, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Feb 20 08:10:10 np0005625203.localdomain systemd[1]: Started libpod-conmon-d93a0b3d6d3f9611a8cce5a7ef59cca4608466101b17666506129bfd18de1a8f.scope.
Feb 20 08:10:10 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 08:10:10 np0005625203.localdomain podman[71967]: 2026-02-20 08:10:10.838071521 +0000 UTC m=+0.075434575 container create 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, build-date=2026-01-12T23:32:04Z, version=17.1.13, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, tcib_managed=true, distribution-scope=public, architecture=x86_64)
Feb 20 08:10:10 np0005625203.localdomain podman[71935]: 2026-02-20 08:10:10.847058069 +0000 UTC m=+0.130298629 container init d93a0b3d6d3f9611a8cce5a7ef59cca4608466101b17666506129bfd18de1a8f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, container_name=setup_ovs_manager, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_id=tripleo_step4, build-date=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z']}, url=https://www.redhat.com, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:10:10 np0005625203.localdomain podman[71935]: 2026-02-20 08:10:10.75469794 +0000 UTC m=+0.037938500 image pull  registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1
Feb 20 08:10:10 np0005625203.localdomain podman[71935]: 2026-02-20 08:10:10.857775403 +0000 UTC m=+0.141015943 container start d93a0b3d6d3f9611a8cce5a7ef59cca4608466101b17666506129bfd18de1a8f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:56:19Z, build-date=2026-01-12T22:56:19Z, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, container_name=setup_ovs_manager, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vcs-type=git, io.buildah.version=1.41.5, batch=17.1_20260112.1, version=17.1.13, io.openshift.expose-services=)
Feb 20 08:10:10 np0005625203.localdomain podman[71935]: 2026-02-20 08:10:10.858088912 +0000 UTC m=+0.141329542 container attach d93a0b3d6d3f9611a8cce5a7ef59cca4608466101b17666506129bfd18de1a8f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, tcib_managed=true, config_id=tripleo_step4, build-date=2026-01-12T22:56:19Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vendor=Red Hat, Inc., release=1766032510, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, 
io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, container_name=setup_ovs_manager, architecture=x86_64, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f)
Feb 20 08:10:10 np0005625203.localdomain systemd[1]: Started libpod-conmon-1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.scope.
Feb 20 08:10:10 np0005625203.localdomain podman[71967]: 2026-02-20 08:10:10.795444906 +0000 UTC m=+0.032807990 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Feb 20 08:10:10 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 08:10:10 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37d662a0623c7987d6654fd890eea9e2ed325a6255b9d568db582e750601fc64/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Feb 20 08:10:10 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:10:10 np0005625203.localdomain podman[71967]: 2026-02-20 08:10:10.946992694 +0000 UTC m=+0.184355778 container init 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, name=rhosp-rhel9/openstack-nova-compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, tcib_managed=true, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.41.5, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_migration_target, build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13)
Feb 20 08:10:10 np0005625203.localdomain sudo[71998]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Feb 20 08:10:10 np0005625203.localdomain sudo[71998]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Feb 20 08:10:10 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:10:10 np0005625203.localdomain podman[71967]: 2026-02-20 08:10:10.98162248 +0000 UTC m=+0.218985554 container start 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, build-date=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, architecture=x86_64, version=17.1.13, tcib_managed=true, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, url=https://www.redhat.com, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.5, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container)
Feb 20 08:10:10 np0005625203.localdomain python3[71410]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_migration_target --conmon-pidfile /run/nova_migration_target.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=2eb7e8e9794eebaba92e1ff8facc8868 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=nova_migration_target --label managed_by=tripleo_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_migration_target.log --network host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro 
--volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /etc/ssh:/host-ssh:ro --volume /run/libvirt:/run/libvirt:shared,z --volume /var/lib/nova:/var/lib/nova:shared registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Feb 20 08:10:11 np0005625203.localdomain sudo[71998]: pam_unix(sudo:session): session closed for user root
Feb 20 08:10:11 np0005625203.localdomain sshd[72022]: Server listening on 0.0.0.0 port 2022.
Feb 20 08:10:11 np0005625203.localdomain sshd[72022]: Server listening on :: port 2022.
Feb 20 08:10:11 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-e7b2b5ec277db0a8768ab2ccd5edd825c3b965080651b007ffb0861f2c0ca9b0-merged.mount: Deactivated successfully.
Feb 20 08:10:11 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4b42ee3a2e822bf9b006ccb3380a1ce6668571aa159c9078ef5462a2c0361259-userdata-shm.mount: Deactivated successfully.
Feb 20 08:10:11 np0005625203.localdomain sshd[71118]: error: maximum authentication attempts exceeded for invalid user support from 185.246.128.171 port 37162 ssh2 [preauth]
Feb 20 08:10:11 np0005625203.localdomain sshd[71118]: Disconnecting invalid user support 185.246.128.171 port 37162: Too many authentication failures [preauth]
Feb 20 08:10:11 np0005625203.localdomain podman[71999]: 2026-02-20 08:10:11.128796222 +0000 UTC m=+0.137529134 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=starting, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, io.openshift.expose-services=, build-date=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container)
Feb 20 08:10:11 np0005625203.localdomain sudo[72047]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/bin/ceilometer-rootwrap /etc/ceilometer/rootwrap.conf privsep-helper --privsep_context ceilometer.privsep.sys_admin_pctxt --privsep_sock_path /tmp/tmpddmfc14g/privsep.sock
Feb 20 08:10:11 np0005625203.localdomain sudo[72047]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Feb 20 08:10:11 np0005625203.localdomain podman[71999]: 2026-02-20 08:10:11.466628047 +0000 UTC m=+0.475361019 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, release=1766032510, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-nova-compute, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.13, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, config_id=tripleo_step4, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:32:04Z)
Feb 20 08:10:11 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:10:11 np0005625203.localdomain sshd[72068]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:10:11 np0005625203.localdomain kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Feb 20 08:10:11 np0005625203.localdomain sudo[72047]: pam_unix(sudo:session): session closed for user root
Feb 20 08:10:13 np0005625203.localdomain sshd[72068]: Invalid user support from 185.246.128.171 port 24010
Feb 20 08:10:13 np0005625203.localdomain ovs-vsctl[72174]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Feb 20 08:10:13 np0005625203.localdomain systemd[1]: libpod-d93a0b3d6d3f9611a8cce5a7ef59cca4608466101b17666506129bfd18de1a8f.scope: Deactivated successfully.
Feb 20 08:10:13 np0005625203.localdomain systemd[1]: libpod-d93a0b3d6d3f9611a8cce5a7ef59cca4608466101b17666506129bfd18de1a8f.scope: Consumed 2.764s CPU time.
Feb 20 08:10:13 np0005625203.localdomain podman[72175]: 2026-02-20 08:10:13.711203817 +0000 UTC m=+0.051519751 container died d93a0b3d6d3f9611a8cce5a7ef59cca4608466101b17666506129bfd18de1a8f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, url=https://www.redhat.com, container_name=setup_ovs_manager, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., version=17.1.13, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z, build-date=2026-01-12T22:56:19Z, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', 
'/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, release=1766032510, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Feb 20 08:10:13 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d93a0b3d6d3f9611a8cce5a7ef59cca4608466101b17666506129bfd18de1a8f-userdata-shm.mount: Deactivated successfully.
Feb 20 08:10:13 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-350c54d5408c0d2bf89ea259b7106c227b04e0a20775f530019e2aa403fcd90a-merged.mount: Deactivated successfully.
Feb 20 08:10:13 np0005625203.localdomain podman[72175]: 2026-02-20 08:10:13.752294274 +0000 UTC m=+0.092610178 container cleanup d93a0b3d6d3f9611a8cce5a7ef59cca4608466101b17666506129bfd18de1a8f (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=setup_ovs_manager, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:56:19Z, build-date=2026-01-12T22:56:19Z, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=setup_ovs_manager, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, version=17.1.13, config_id=tripleo_step4, vcs-type=git, distribution-scope=public, release=1766032510, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., tcib_managed=true, io.buildah.version=1.41.5, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f)
Feb 20 08:10:13 np0005625203.localdomain systemd[1]: libpod-conmon-d93a0b3d6d3f9611a8cce5a7ef59cca4608466101b17666506129bfd18de1a8f.scope: Deactivated successfully.
Feb 20 08:10:13 np0005625203.localdomain python3[71410]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name setup_ovs_manager --conmon-pidfile /run/setup_ovs_manager.pid --detach=False --env TRIPLEO_DEPLOY_IDENTIFIER=1771573229 --label config_id=tripleo_step4 --label container_name=setup_ovs_manager --label managed_by=tripleo_ansible --label config_data={'command': ['/container_puppet_apply.sh', '4', 'exec', 'include tripleo::profile::base::neutron::ovn_metadata'], 'detach': False, 'environment': {'TRIPLEO_DEPLOY_IDENTIFIER': '1771573229'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'privileged': True, 'start_order': 0, 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro', '/etc/puppet:/tmp/puppet-etc:ro', '/usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/setup_ovs_manager.log --network host --privileged=True --user root --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume 
/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/container-config-scripts/container_puppet_apply.sh:/container_puppet_apply.sh:ro --volume /etc/puppet:/tmp/puppet-etc:ro --volume /usr/share/openstack-puppet/modules:/usr/share/openstack-puppet/modules:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1 /container_puppet_apply.sh 4 exec include tripleo::profile::base::neutron::ovn_metadata
Feb 20 08:10:13 np0005625203.localdomain sshd[72068]: Disconnecting invalid user support 185.246.128.171 port 24010: Change of username or service not allowed: (support,ssh-connection) -> (student,ssh-connection) [preauth]
Feb 20 08:10:14 np0005625203.localdomain podman[72285]: 2026-02-20 08:10:14.194312036 +0000 UTC m=+0.077513989 container create d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20260112.1, config_id=tripleo_step4, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:36:40Z, distribution-scope=public, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, io.openshift.expose-services=, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, io.buildah.version=1.41.5, architecture=x86_64, build-date=2026-01-12T22:36:40Z, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true)
Feb 20 08:10:14 np0005625203.localdomain systemd[1]: Started libpod-conmon-d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.scope.
Feb 20 08:10:14 np0005625203.localdomain podman[72285]: 2026-02-20 08:10:14.14872288 +0000 UTC m=+0.031924833 image pull  registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1
Feb 20 08:10:14 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 08:10:14 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8d390892008df4878068c7165ea27144a38c2b77b5a7d1d8c8cfe57dd3f055d/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Feb 20 08:10:14 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8d390892008df4878068c7165ea27144a38c2b77b5a7d1d8c8cfe57dd3f055d/merged/var/log/openvswitch supports timestamps until 2038 (0x7fffffff)
Feb 20 08:10:14 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8d390892008df4878068c7165ea27144a38c2b77b5a7d1d8c8cfe57dd3f055d/merged/var/log/ovn supports timestamps until 2038 (0x7fffffff)
Feb 20 08:10:14 np0005625203.localdomain podman[72298]: 2026-02-20 08:10:14.292221718 +0000 UTC m=+0.145133360 container create be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, release=1766032510, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:10:14 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:10:14 np0005625203.localdomain podman[72285]: 2026-02-20 08:10:14.295956134 +0000 UTC m=+0.179158137 container init d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, config_id=tripleo_step4, tcib_managed=true, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, io.buildah.version=1.41.5, version=17.1.13, vendor=Red Hat, Inc., url=https://www.redhat.com, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:10:14 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:10:14 np0005625203.localdomain systemd[1]: Started libpod-conmon-be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.scope.
Feb 20 08:10:14 np0005625203.localdomain systemd-logind[759]: Existing logind session ID 28 used by new audit session, ignoring.
Feb 20 08:10:14 np0005625203.localdomain podman[72285]: 2026-02-20 08:10:14.34477496 +0000 UTC m=+0.227976913 container start d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ovn-controller, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:36:40Z, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., url=https://www.redhat.com, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.13, vcs-type=git)
Feb 20 08:10:14 np0005625203.localdomain podman[72298]: 2026-02-20 08:10:14.246770826 +0000 UTC m=+0.099682498 image pull  registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1
Feb 20 08:10:14 np0005625203.localdomain python3[71410]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck 6642 --label config_id=tripleo_step4 --label container_name=ovn_controller --label managed_by=tripleo_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ovn_controller.log --network host --privileged=True --user root --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/log/containers/openvswitch:/var/log/openvswitch:z --volume /var/log/containers/openvswitch:/var/log/ovn:z registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1
Feb 20 08:10:14 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 08:10:14 np0005625203.localdomain systemd[1]: Created slice User Slice of UID 0.
Feb 20 08:10:14 np0005625203.localdomain systemd[1]: Starting User Runtime Directory /run/user/0...
Feb 20 08:10:14 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7d6143fe0f43bd2b55aaa133ca6222921bb092a5876f130467bb0649b23807e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 08:10:14 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7d6143fe0f43bd2b55aaa133ca6222921bb092a5876f130467bb0649b23807e/merged/etc/neutron/kill_scripts supports timestamps until 2038 (0x7fffffff)
Feb 20 08:10:14 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7d6143fe0f43bd2b55aaa133ca6222921bb092a5876f130467bb0649b23807e/merged/var/log/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 08:10:14 np0005625203.localdomain systemd[1]: Finished User Runtime Directory /run/user/0.
Feb 20 08:10:14 np0005625203.localdomain systemd[1]: Starting User Manager for UID 0...
Feb 20 08:10:14 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:10:14 np0005625203.localdomain podman[72298]: 2026-02-20 08:10:14.439914195 +0000 UTC m=+0.292825837 container init be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, batch=17.1_20260112.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, container_name=ovn_metadata_agent, org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.13, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Feb 20 08:10:14 np0005625203.localdomain systemd[72343]: pam_unix(systemd-user:session): session opened for user root(uid=0) by (uid=0)
Feb 20 08:10:14 np0005625203.localdomain sudo[72359]:  neutron : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Feb 20 08:10:14 np0005625203.localdomain sudo[72359]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42435)
Feb 20 08:10:14 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:10:14 np0005625203.localdomain podman[72298]: 2026-02-20 08:10:14.474522401 +0000 UTC m=+0.327434053 container start be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, release=1766032510, batch=17.1_20260112.1, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:56:19Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=tripleo_step4, vendor=Red Hat, Inc., version=17.1.13, io.buildah.version=1.41.5, managed_by=tripleo_ansible, io.openshift.expose-services=, build-date=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Feb 20 08:10:14 np0005625203.localdomain python3[71410]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env TRIPLEO_CONFIG_HASH=ef7731a1bdeb8ee7875974b29f2e34e6 --healthcheck-command /openstack/healthcheck --label config_id=tripleo_step4 --label container_name=ovn_metadata_agent --label managed_by=tripleo_ansible --label config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/ovn_metadata_agent.log --network host --pid host --privileged=True 
--volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/neutron:/var/log/neutron:z --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro --volume /lib/modules:/lib/modules:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /run/netns:/run/netns:shared --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1
Feb 20 08:10:14 np0005625203.localdomain podman[72322]: 2026-02-20 08:10:14.453076815 +0000 UTC m=+0.108757590 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=starting, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, batch=17.1_20260112.1, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp-rhel9/openstack-ovn-controller, version=17.1.13, tcib_managed=true, distribution-scope=public, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, architecture=x86_64, build-date=2026-01-12T22:36:40Z, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, release=1766032510, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller)
Feb 20 08:10:14 np0005625203.localdomain sudo[72359]: pam_unix(sudo:session): session closed for user root
Feb 20 08:10:14 np0005625203.localdomain systemd[72343]: Queued start job for default target Main User Target.
Feb 20 08:10:14 np0005625203.localdomain systemd[72343]: Created slice User Application Slice.
Feb 20 08:10:14 np0005625203.localdomain podman[72322]: 2026-02-20 08:10:14.594757867 +0000 UTC m=+0.250438642 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:36:40Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:36:40Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=tripleo_ansible, container_name=ovn_controller, vcs-type=git, name=rhosp-rhel9/openstack-ovn-controller, url=https://www.redhat.com, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, release=1766032510, vendor=Red Hat, Inc., architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20260112.1, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c)
Feb 20 08:10:14 np0005625203.localdomain systemd[72343]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Feb 20 08:10:14 np0005625203.localdomain systemd[72343]: Started Daily Cleanup of User's Temporary Directories.
Feb 20 08:10:14 np0005625203.localdomain systemd[72343]: Reached target Paths.
Feb 20 08:10:14 np0005625203.localdomain systemd[72343]: Reached target Timers.
Feb 20 08:10:14 np0005625203.localdomain systemd[72343]: Starting D-Bus User Message Bus Socket...
Feb 20 08:10:14 np0005625203.localdomain systemd[72343]: Starting Create User's Volatile Files and Directories...
Feb 20 08:10:14 np0005625203.localdomain podman[72322]: unhealthy
Feb 20 08:10:14 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:10:14 np0005625203.localdomain systemd[72343]: Listening on D-Bus User Message Bus Socket.
Feb 20 08:10:14 np0005625203.localdomain systemd[72343]: Finished Create User's Volatile Files and Directories.
Feb 20 08:10:14 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Failed with result 'exit-code'.
Feb 20 08:10:14 np0005625203.localdomain systemd[72343]: Reached target Sockets.
Feb 20 08:10:14 np0005625203.localdomain systemd[72343]: Reached target Basic System.
Feb 20 08:10:14 np0005625203.localdomain systemd[72343]: Reached target Main User Target.
Feb 20 08:10:14 np0005625203.localdomain systemd[72343]: Startup finished in 141ms.
Feb 20 08:10:14 np0005625203.localdomain systemd[1]: Started User Manager for UID 0.
Feb 20 08:10:14 np0005625203.localdomain systemd[1]: Started Session c9 of User root.
Feb 20 08:10:14 np0005625203.localdomain podman[72364]: 2026-02-20 08:10:14.590250446 +0000 UTC m=+0.107806159 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=starting, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2026-01-12T22:56:19Z, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.created=2026-01-12T22:56:19Z, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, version=17.1.13, vendor=Red Hat, Inc., org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, config_id=tripleo_step4, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, io.buildah.version=1.41.5, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn)
Feb 20 08:10:14 np0005625203.localdomain podman[72364]: 2026-02-20 08:10:14.677377353 +0000 UTC m=+0.194933066 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, version=17.1.13, architecture=x86_64, batch=17.1_20260112.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, tcib_managed=true, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, build-date=2026-01-12T22:56:19Z, managed_by=tripleo_ansible)
Feb 20 08:10:14 np0005625203.localdomain systemd[1]: session-c9.scope: Deactivated successfully.
Feb 20 08:10:14 np0005625203.localdomain podman[72364]: unhealthy
Feb 20 08:10:14 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:10:14 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Failed with result 'exit-code'.
Feb 20 08:10:14 np0005625203.localdomain kernel: device br-int entered promiscuous mode
Feb 20 08:10:14 np0005625203.localdomain NetworkManager[5968]: <info>  [1771575014.7056] manager: (br-int): new Generic device (/org/freedesktop/NetworkManager/Devices/11)
Feb 20 08:10:14 np0005625203.localdomain systemd-udevd[72439]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 08:10:14 np0005625203.localdomain kernel: device genev_sys_6081 entered promiscuous mode
Feb 20 08:10:14 np0005625203.localdomain NetworkManager[5968]: <info>  [1771575014.7353] device (genev_sys_6081): carrier: link connected
Feb 20 08:10:14 np0005625203.localdomain NetworkManager[5968]: <info>  [1771575014.7356] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/12)
Feb 20 08:10:14 np0005625203.localdomain systemd-udevd[72442]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 08:10:14 np0005625203.localdomain sshd[72445]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:10:14 np0005625203.localdomain sudo[71408]: pam_unix(sudo:session): session closed for user root
Feb 20 08:10:14 np0005625203.localdomain sudo[72459]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-brdoijfcyplbqpillezfamwffzwfltbh ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:10:14 np0005625203.localdomain sudo[72459]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:10:15 np0005625203.localdomain python3[72463]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:10:15 np0005625203.localdomain sudo[72459]: pam_unix(sudo:session): session closed for user root
Feb 20 08:10:15 np0005625203.localdomain sudo[72478]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pdznppyrqqylncwapuoazkrhadhtndjx ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:10:15 np0005625203.localdomain sudo[72478]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:10:15 np0005625203.localdomain python3[72480]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:10:15 np0005625203.localdomain sudo[72478]: pam_unix(sudo:session): session closed for user root
Feb 20 08:10:15 np0005625203.localdomain sudo[72494]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cmbbsfyenlvchvtghztizbeswsvmwqrb ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:10:15 np0005625203.localdomain sudo[72494]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:10:15 np0005625203.localdomain python3[72496]: ansible-file Invoked with path=/etc/systemd/system/tripleo_logrotate_crond.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:10:15 np0005625203.localdomain sudo[72494]: pam_unix(sudo:session): session closed for user root
Feb 20 08:10:15 np0005625203.localdomain sudo[72510]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-skzrojnhcetlmvfoiujjiwkqxchdnzsl ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:10:15 np0005625203.localdomain sudo[72510]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:10:15 np0005625203.localdomain python3[72512]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:10:15 np0005625203.localdomain sudo[72510]: pam_unix(sudo:session): session closed for user root
Feb 20 08:10:15 np0005625203.localdomain sudo[72526]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nnrstgkqaubmxmzztfqyzbipnnhhdtid ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:10:16 np0005625203.localdomain sudo[72526]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:10:16 np0005625203.localdomain python3[72528]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:10:16 np0005625203.localdomain sudo[72530]:  neutron : PWD=/ ; USER=root ; COMMAND=/usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf privsep-helper --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/networking-ovn/networking-ovn-metadata-agent.ini --privsep_context neutron.privileged.default --privsep_sock_path /tmp/tmp49kdz4u3/privsep.sock
Feb 20 08:10:16 np0005625203.localdomain sudo[72530]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42435)
Feb 20 08:10:16 np0005625203.localdomain sudo[72526]: pam_unix(sudo:session): session closed for user root
Feb 20 08:10:16 np0005625203.localdomain sudo[72545]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yvvjxhcmkqsbncfkgisjndfmhdartboj ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:10:16 np0005625203.localdomain sudo[72545]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:10:16 np0005625203.localdomain sshd[72445]: Invalid user student from 185.246.128.171 port 55419
Feb 20 08:10:16 np0005625203.localdomain python3[72547]: ansible-file Invoked with path=/etc/systemd/system/tripleo_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:10:16 np0005625203.localdomain sudo[72545]: pam_unix(sudo:session): session closed for user root
Feb 20 08:10:16 np0005625203.localdomain sudo[72562]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ydsafrxuthpmigzdeqgitbwujenchvcf ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:10:16 np0005625203.localdomain sudo[72562]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:10:16 np0005625203.localdomain python3[72564]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 20 08:10:16 np0005625203.localdomain sudo[72562]: pam_unix(sudo:session): session closed for user root
Feb 20 08:10:16 np0005625203.localdomain sudo[72578]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hgkufherkcdosvypeeaprqvntokhkyyn ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:10:16 np0005625203.localdomain sudo[72578]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:10:16 np0005625203.localdomain sudo[72530]: pam_unix(sudo:session): session closed for user root
Feb 20 08:10:16 np0005625203.localdomain python3[72580]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 20 08:10:16 np0005625203.localdomain sudo[72578]: pam_unix(sudo:session): session closed for user root
Feb 20 08:10:16 np0005625203.localdomain sudo[72596]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jmfxqqkrbdptqckmvaqzyoicoerhivgw ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:10:16 np0005625203.localdomain sudo[72596]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:10:17 np0005625203.localdomain sshd[72445]: Disconnecting invalid user student 185.246.128.171 port 55419: Change of username or service not allowed: (student,ssh-connection) -> (splunk,ssh-connection) [preauth]
Feb 20 08:10:17 np0005625203.localdomain python3[72598]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_logrotate_crond_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 20 08:10:17 np0005625203.localdomain sudo[72596]: pam_unix(sudo:session): session closed for user root
Feb 20 08:10:17 np0005625203.localdomain sudo[72614]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vjwuexgmxvtoroxtzmsnlfjgoxtandzg ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:10:17 np0005625203.localdomain sudo[72614]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:10:17 np0005625203.localdomain python3[72616]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_migration_target_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 20 08:10:17 np0005625203.localdomain sudo[72614]: pam_unix(sudo:session): session closed for user root
Feb 20 08:10:17 np0005625203.localdomain sudo[72630]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lulbxqrrjycrygqyeccxayvmcqyhkqol ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:10:17 np0005625203.localdomain sudo[72630]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:10:17 np0005625203.localdomain python3[72632]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ovn_controller_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 20 08:10:17 np0005625203.localdomain sudo[72630]: pam_unix(sudo:session): session closed for user root
Feb 20 08:10:17 np0005625203.localdomain sudo[72646]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gatkyqujbtvdvfsniauejgdkjpraoacx ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:10:17 np0005625203.localdomain sudo[72646]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:10:17 np0005625203.localdomain python3[72648]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_ovn_metadata_agent_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 20 08:10:17 np0005625203.localdomain sudo[72646]: pam_unix(sudo:session): session closed for user root
Feb 20 08:10:18 np0005625203.localdomain sudo[72707]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-juxqthuizndniyfimdrofzrswvdprpxj ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:10:18 np0005625203.localdomain sudo[72707]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:10:18 np0005625203.localdomain python3[72709]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771575018.037369-109840-93335972727073/source dest=/etc/systemd/system/tripleo_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:10:18 np0005625203.localdomain sudo[72707]: pam_unix(sudo:session): session closed for user root
Feb 20 08:10:18 np0005625203.localdomain sudo[72737]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-djzboslgneyeflfkmyatnmtkxmoqtijq ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:10:18 np0005625203.localdomain sudo[72737]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:10:19 np0005625203.localdomain python3[72739]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771575018.037369-109840-93335972727073/source dest=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:10:19 np0005625203.localdomain sudo[72737]: pam_unix(sudo:session): session closed for user root
Feb 20 08:10:19 np0005625203.localdomain sshd[72753]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:10:19 np0005625203.localdomain sudo[72767]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sgfwwkobtxbemveefhefpnmdejgrirme ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:10:19 np0005625203.localdomain sudo[72767]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:10:19 np0005625203.localdomain python3[72769]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771575018.037369-109840-93335972727073/source dest=/etc/systemd/system/tripleo_logrotate_crond.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:10:19 np0005625203.localdomain sudo[72767]: pam_unix(sudo:session): session closed for user root
Feb 20 08:10:20 np0005625203.localdomain sudo[72797]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xsozujbzxmlgrktsfjwobleqazmrthck ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:10:20 np0005625203.localdomain sudo[72797]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:10:20 np0005625203.localdomain python3[72799]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771575018.037369-109840-93335972727073/source dest=/etc/systemd/system/tripleo_nova_migration_target.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:10:20 np0005625203.localdomain sudo[72797]: pam_unix(sudo:session): session closed for user root
Feb 20 08:10:20 np0005625203.localdomain sudo[72826]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kfmprgzbmtwjyzcxqxxzljpveqxdalle ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:10:20 np0005625203.localdomain sudo[72826]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:10:20 np0005625203.localdomain python3[72828]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771575018.037369-109840-93335972727073/source dest=/etc/systemd/system/tripleo_ovn_controller.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:10:20 np0005625203.localdomain sudo[72826]: pam_unix(sudo:session): session closed for user root
Feb 20 08:10:21 np0005625203.localdomain sudo[72856]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-blalblrqfomoeiaijvlqadeiztulnbvd ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:10:21 np0005625203.localdomain sudo[72856]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:10:21 np0005625203.localdomain python3[72858]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771575018.037369-109840-93335972727073/source dest=/etc/systemd/system/tripleo_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:10:21 np0005625203.localdomain sudo[72856]: pam_unix(sudo:session): session closed for user root
Feb 20 08:10:21 np0005625203.localdomain sudo[72872]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qcixunnawhijhsukkbfdpukiniuuknjw ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:10:21 np0005625203.localdomain sudo[72872]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:10:21 np0005625203.localdomain python3[72874]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 20 08:10:21 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 08:10:21 np0005625203.localdomain systemd-rc-local-generator[72900]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 08:10:21 np0005625203.localdomain systemd-sysv-generator[72904]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 08:10:21 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 08:10:21 np0005625203.localdomain sudo[72872]: pam_unix(sudo:session): session closed for user root
Feb 20 08:10:22 np0005625203.localdomain sshd[72753]: Invalid user splunk from 185.246.128.171 port 43248
Feb 20 08:10:22 np0005625203.localdomain sudo[72923]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ztkjuiudhjtzfiheputegzxsjcecdirm ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:10:22 np0005625203.localdomain sudo[72923]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:10:22 np0005625203.localdomain python3[72925]: ansible-systemd Invoked with state=restarted name=tripleo_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 08:10:22 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 08:10:22 np0005625203.localdomain systemd-rc-local-generator[72950]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 08:10:22 np0005625203.localdomain systemd-sysv-generator[72954]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 08:10:22 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 08:10:22 np0005625203.localdomain systemd[1]: Starting ceilometer_agent_compute container...
Feb 20 08:10:23 np0005625203.localdomain sshd[72753]: Disconnecting invalid user splunk 185.246.128.171 port 43248: Change of username or service not allowed: (splunk,ssh-connection) -> (mongod,ssh-connection) [preauth]
Feb 20 08:10:23 np0005625203.localdomain tripleo-start-podman-container[72965]: Creating additional drop-in dependency for "ceilometer_agent_compute" (5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29)
Feb 20 08:10:23 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 08:10:23 np0005625203.localdomain systemd-rc-local-generator[73024]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 08:10:23 np0005625203.localdomain systemd-sysv-generator[73028]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 08:10:23 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 08:10:23 np0005625203.localdomain systemd[1]: Started ceilometer_agent_compute container.
Feb 20 08:10:23 np0005625203.localdomain sudo[72923]: pam_unix(sudo:session): session closed for user root
Feb 20 08:10:23 np0005625203.localdomain sudo[73048]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qsohaipuusxxqfojjranhyreakuaumpc ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:10:23 np0005625203.localdomain sudo[73048]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:10:23 np0005625203.localdomain python3[73050]: ansible-systemd Invoked with state=restarted name=tripleo_ceilometer_agent_ipmi.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 08:10:24 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 08:10:24 np0005625203.localdomain systemd-rc-local-generator[73073]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 08:10:24 np0005625203.localdomain systemd-sysv-generator[73077]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 08:10:24 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 08:10:24 np0005625203.localdomain systemd[1]: Starting ceilometer_agent_ipmi container...
Feb 20 08:10:24 np0005625203.localdomain systemd[1]: Started ceilometer_agent_ipmi container.
Feb 20 08:10:24 np0005625203.localdomain sudo[73048]: pam_unix(sudo:session): session closed for user root
Feb 20 08:10:24 np0005625203.localdomain systemd[1]: Stopping User Manager for UID 0...
Feb 20 08:10:24 np0005625203.localdomain systemd[72343]: Activating special unit Exit the Session...
Feb 20 08:10:24 np0005625203.localdomain systemd[72343]: Stopped target Main User Target.
Feb 20 08:10:24 np0005625203.localdomain systemd[72343]: Stopped target Basic System.
Feb 20 08:10:24 np0005625203.localdomain systemd[72343]: Stopped target Paths.
Feb 20 08:10:24 np0005625203.localdomain systemd[72343]: Stopped target Sockets.
Feb 20 08:10:24 np0005625203.localdomain systemd[72343]: Stopped target Timers.
Feb 20 08:10:24 np0005625203.localdomain systemd[72343]: Stopped Daily Cleanup of User's Temporary Directories.
Feb 20 08:10:24 np0005625203.localdomain systemd[72343]: Closed D-Bus User Message Bus Socket.
Feb 20 08:10:24 np0005625203.localdomain systemd[72343]: Stopped Create User's Volatile Files and Directories.
Feb 20 08:10:24 np0005625203.localdomain systemd[72343]: Removed slice User Application Slice.
Feb 20 08:10:24 np0005625203.localdomain systemd[72343]: Reached target Shutdown.
Feb 20 08:10:24 np0005625203.localdomain systemd[72343]: Finished Exit the Session.
Feb 20 08:10:24 np0005625203.localdomain systemd[72343]: Reached target Exit the Session.
Feb 20 08:10:24 np0005625203.localdomain systemd[1]: user@0.service: Deactivated successfully.
Feb 20 08:10:24 np0005625203.localdomain systemd[1]: Stopped User Manager for UID 0.
Feb 20 08:10:24 np0005625203.localdomain systemd[1]: Stopping User Runtime Directory /run/user/0...
Feb 20 08:10:24 np0005625203.localdomain systemd[1]: run-user-0.mount: Deactivated successfully.
Feb 20 08:10:24 np0005625203.localdomain systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Feb 20 08:10:24 np0005625203.localdomain systemd[1]: Stopped User Runtime Directory /run/user/0.
Feb 20 08:10:24 np0005625203.localdomain systemd[1]: Removed slice User Slice of UID 0.
Feb 20 08:10:25 np0005625203.localdomain sudo[73118]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zlbwfzbsmrnhbpynpalumsyxnczlirbr ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:10:25 np0005625203.localdomain sudo[73118]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:10:25 np0005625203.localdomain python3[73120]: ansible-systemd Invoked with state=restarted name=tripleo_logrotate_crond.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 08:10:25 np0005625203.localdomain sshd[73123]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:10:26 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 08:10:26 np0005625203.localdomain systemd-rc-local-generator[73148]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 08:10:26 np0005625203.localdomain systemd-sysv-generator[73151]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 08:10:26 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 08:10:26 np0005625203.localdomain systemd[1]: Starting logrotate_crond container...
Feb 20 08:10:26 np0005625203.localdomain systemd[1]: Started logrotate_crond container.
Feb 20 08:10:26 np0005625203.localdomain sudo[73118]: pam_unix(sudo:session): session closed for user root
Feb 20 08:10:27 np0005625203.localdomain sudo[73188]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ycexmpdbduzexykvpkvilvzrjipfqhoa ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:10:27 np0005625203.localdomain sudo[73188]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:10:27 np0005625203.localdomain python3[73190]: ansible-systemd Invoked with state=restarted name=tripleo_nova_migration_target.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 08:10:27 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 08:10:27 np0005625203.localdomain systemd-rc-local-generator[73214]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 08:10:27 np0005625203.localdomain systemd-sysv-generator[73219]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 08:10:27 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 08:10:27 np0005625203.localdomain sshd[73123]: Invalid user mongod from 185.246.128.171 port 45730
Feb 20 08:10:27 np0005625203.localdomain systemd[1]: Starting nova_migration_target container...
Feb 20 08:10:27 np0005625203.localdomain systemd[1]: Started nova_migration_target container.
Feb 20 08:10:27 np0005625203.localdomain sudo[73188]: pam_unix(sudo:session): session closed for user root
Feb 20 08:10:28 np0005625203.localdomain sudo[73255]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ivyratjucxyiwbtjlgdppseedhuujwnc ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:10:28 np0005625203.localdomain sudo[73255]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:10:28 np0005625203.localdomain sshd[73123]: Disconnecting invalid user mongod 185.246.128.171 port 45730: Change of username or service not allowed: (mongod,ssh-connection) -> (Administrator,ssh-connection) [preauth]
Feb 20 08:10:28 np0005625203.localdomain python3[73257]: ansible-systemd Invoked with state=restarted name=tripleo_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 08:10:28 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 08:10:28 np0005625203.localdomain systemd-rc-local-generator[73286]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 08:10:28 np0005625203.localdomain systemd-sysv-generator[73290]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 08:10:28 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 08:10:28 np0005625203.localdomain systemd[1]: Starting ovn_controller container...
Feb 20 08:10:28 np0005625203.localdomain tripleo-start-podman-container[73297]: Creating additional drop-in dependency for "ovn_controller" (d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933)
Feb 20 08:10:28 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 08:10:29 np0005625203.localdomain systemd-rc-local-generator[73350]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 08:10:29 np0005625203.localdomain systemd-sysv-generator[73354]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 08:10:29 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 08:10:29 np0005625203.localdomain systemd[1]: Started ovn_controller container.
Feb 20 08:10:29 np0005625203.localdomain sudo[73255]: pam_unix(sudo:session): session closed for user root
Feb 20 08:10:29 np0005625203.localdomain sudo[73378]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-izawbkmawugbdldhbbyiooaqtyqrvryh ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:10:29 np0005625203.localdomain sudo[73378]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:10:29 np0005625203.localdomain python3[73380]: ansible-systemd Invoked with state=restarted name=tripleo_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 08:10:29 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 08:10:29 np0005625203.localdomain sshd[73383]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:10:29 np0005625203.localdomain systemd-sysv-generator[73407]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 08:10:29 np0005625203.localdomain systemd-rc-local-generator[73401]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 08:10:30 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 08:10:30 np0005625203.localdomain systemd[1]: Starting ovn_metadata_agent container...
Feb 20 08:10:30 np0005625203.localdomain systemd[1]: Started ovn_metadata_agent container.
Feb 20 08:10:30 np0005625203.localdomain sudo[73378]: pam_unix(sudo:session): session closed for user root
Feb 20 08:10:30 np0005625203.localdomain sudo[73459]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cwalclkxtekzoppfeeqeiilkwshrtdlz ; /usr/bin/python3
Feb 20 08:10:30 np0005625203.localdomain sudo[73459]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:10:30 np0005625203.localdomain python3[73461]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks4.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:10:30 np0005625203.localdomain sudo[73459]: pam_unix(sudo:session): session closed for user root
Feb 20 08:10:31 np0005625203.localdomain sudo[73507]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mxqandkviowguqikojeimpzabjgapgrl ; /usr/bin/python3
Feb 20 08:10:31 np0005625203.localdomain sudo[73507]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:10:31 np0005625203.localdomain sudo[73507]: pam_unix(sudo:session): session closed for user root
Feb 20 08:10:31 np0005625203.localdomain sshd[73383]: Invalid user Administrator from 185.246.128.171 port 27192
Feb 20 08:10:31 np0005625203.localdomain sudo[73550]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-namcyvbgbefsbpndyisppoxpyoyzazde ; /usr/bin/python3
Feb 20 08:10:31 np0005625203.localdomain sudo[73550]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:10:31 np0005625203.localdomain sudo[73550]: pam_unix(sudo:session): session closed for user root
Feb 20 08:10:31 np0005625203.localdomain sudo[73580]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-snnjlmlbmmodytihuimihdcfabghjzda ; /usr/bin/python3
Feb 20 08:10:31 np0005625203.localdomain sudo[73580]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:10:32 np0005625203.localdomain python3[73582]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks4.json short_hostname=np0005625203 step=4 update_config_hash_only=False
Feb 20 08:10:32 np0005625203.localdomain sudo[73580]: pam_unix(sudo:session): session closed for user root
Feb 20 08:10:32 np0005625203.localdomain sudo[73596]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qylgfyzstcizuqaoolqzcvbgwqtbfobc ; /usr/bin/python3
Feb 20 08:10:32 np0005625203.localdomain sudo[73596]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:10:32 np0005625203.localdomain python3[73598]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:10:32 np0005625203.localdomain sudo[73596]: pam_unix(sudo:session): session closed for user root
Feb 20 08:10:32 np0005625203.localdomain sudo[73613]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ackchmvchhedjwgbcevpashhthqkrkzm ; /usr/bin/python3
Feb 20 08:10:32 np0005625203.localdomain sudo[73613]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:10:33 np0005625203.localdomain python3[73615]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_4 config_pattern=container-puppet-*.json config_overrides={} debug=True
Feb 20 08:10:33 np0005625203.localdomain sudo[73613]: pam_unix(sudo:session): session closed for user root
Feb 20 08:10:36 np0005625203.localdomain sshd[73383]: Disconnecting invalid user Administrator 185.246.128.171 port 27192: Change of username or service not allowed: (Administrator,ssh-connection) -> (Test,ssh-connection) [preauth]
Feb 20 08:10:38 np0005625203.localdomain sshd[73617]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:10:38 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:10:38 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:10:38 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:10:38 np0005625203.localdomain podman[73620]: 2026-02-20 08:10:38.786040033 +0000 UTC m=+0.088062473 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.5, build-date=2026-01-12T22:34:43Z, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:34:43Z, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, io.openshift.expose-services=, release=1766032510, version=17.1.13, name=rhosp-rhel9/openstack-iscsid, vcs-type=git, managed_by=tripleo_ansible, vcs-ref=705339545363fec600102567c4e923938e0f43b3, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., container_name=iscsid, url=https://www.redhat.com)
Feb 20 08:10:38 np0005625203.localdomain podman[73621]: 2026-02-20 08:10:38.819676698 +0000 UTC m=+0.121268094 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, managed_by=tripleo_ansible, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., url=https://www.redhat.com, tcib_managed=true, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-qdrouterd, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee)
Feb 20 08:10:38 np0005625203.localdomain podman[73620]: 2026-02-20 08:10:38.845479347 +0000 UTC m=+0.147501747 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, distribution-scope=public, batch=17.1_20260112.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, name=rhosp-rhel9/openstack-iscsid, io.openshift.expose-services=, vcs-type=git, url=https://www.redhat.com, managed_by=tripleo_ansible, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:34:43Z, version=17.1.13, io.buildah.version=1.41.5, vcs-ref=705339545363fec600102567c4e923938e0f43b3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:10:38 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:10:38 np0005625203.localdomain podman[73618]: 2026-02-20 08:10:38.933640663 +0000 UTC m=+0.233941969 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.13, container_name=collectd, config_id=tripleo_step3, distribution-scope=public, release=1766032510, architecture=x86_64, build-date=2026-01-12T22:10:15Z, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:10:38 np0005625203.localdomain podman[73618]: 2026-02-20 08:10:38.942765188 +0000 UTC m=+0.243066464 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.13, container_name=collectd, release=1766032510, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20260112.1, distribution-scope=public, config_id=tripleo_step3, architecture=x86_64, build-date=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:10:38 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:10:38 np0005625203.localdomain podman[73621]: 2026-02-20 08:10:38.999500048 +0000 UTC m=+0.301091444 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, container_name=metrics_qdr, org.opencontainers.image.created=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.13, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd, release=1766032510, url=https://www.redhat.com, io.buildah.version=1.41.5, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, tcib_managed=true, maintainer=OpenStack TripleO Team, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, vendor=Red Hat, Inc.)
Feb 20 08:10:39 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:10:40 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:10:40 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:10:40 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:10:40 np0005625203.localdomain podman[73688]: 2026-02-20 08:10:40.768022647 +0000 UTC m=+0.083587062 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, name=rhosp-rhel9/openstack-cron, build-date=2026-01-12T22:10:15Z, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, vcs-type=git)
Feb 20 08:10:40 np0005625203.localdomain podman[73689]: 2026-02-20 08:10:40.831401275 +0000 UTC m=+0.137620808 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=starting, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, name=rhosp-rhel9/openstack-ceilometer-compute, release=1766032510, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, url=https://www.redhat.com, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, batch=17.1_20260112.1, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:07:47Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc.)
Feb 20 08:10:40 np0005625203.localdomain podman[73690]: 2026-02-20 08:10:40.881867387 +0000 UTC m=+0.190018291 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=starting, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, version=17.1.13, build-date=2026-01-12T23:07:30Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1766032510, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, distribution-scope=public, container_name=ceilometer_agent_ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container)
Feb 20 08:10:40 np0005625203.localdomain podman[73688]: 2026-02-20 08:10:40.907418499 +0000 UTC m=+0.222982974 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, url=https://www.redhat.com, 
build-date=2026-01-12T22:10:15Z, release=1766032510, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.13, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:10:40 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:10:40 np0005625203.localdomain podman[73690]: 2026-02-20 08:10:40.936242103 +0000 UTC m=+0.244393027 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, version=17.1.13, architecture=x86_64, io.buildah.version=1.41.5, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:07:30Z, release=1766032510, build-date=2026-01-12T23:07:30Z, managed_by=tripleo_ansible, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, name=rhosp-rhel9/openstack-ceilometer-ipmi, batch=17.1_20260112.1)
Feb 20 08:10:40 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:10:40 np0005625203.localdomain podman[73689]: 2026-02-20 08:10:40.962682242 +0000 UTC m=+0.268901745 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, build-date=2026-01-12T23:07:47Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, managed_by=tripleo_ansible, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20260112.1, config_id=tripleo_step4, maintainer=OpenStack TripleO Team)
Feb 20 08:10:40 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:10:41 np0005625203.localdomain sshd[73617]: Invalid user Test from 185.246.128.171 port 51468
Feb 20 08:10:41 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:10:41 np0005625203.localdomain podman[73761]: 2026-02-20 08:10:41.753991062 +0000 UTC m=+0.075801088 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, tcib_managed=true, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, build-date=2026-01-12T23:32:04Z, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, config_id=tripleo_step4, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp-rhel9/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Feb 20 08:10:42 np0005625203.localdomain podman[73761]: 2026-02-20 08:10:42.191206225 +0000 UTC m=+0.513016241 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, url=https://www.redhat.com, tcib_managed=true, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, build-date=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, architecture=x86_64, config_id=tripleo_step4, version=17.1.13, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, release=1766032510, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=)
Feb 20 08:10:42 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:10:42 np0005625203.localdomain sshd[73783]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:10:42 np0005625203.localdomain sshd[73783]: Invalid user otp from 147.135.114.8 port 37068
Feb 20 08:10:42 np0005625203.localdomain sshd[73783]: Received disconnect from 147.135.114.8 port 37068:11: Bye Bye [preauth]
Feb 20 08:10:42 np0005625203.localdomain sshd[73783]: Disconnected from invalid user otp 147.135.114.8 port 37068 [preauth]
Feb 20 08:10:43 np0005625203.localdomain sshd[73617]: Disconnecting invalid user Test 185.246.128.171 port 51468: Change of username or service not allowed: (Test,ssh-connection) -> (peertube,ssh-connection) [preauth]
Feb 20 08:10:44 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:10:44 np0005625203.localdomain podman[73785]: 2026-02-20 08:10:44.755591665 +0000 UTC m=+0.076505260 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=starting, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, config_id=tripleo_step4, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, batch=17.1_20260112.1, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, managed_by=tripleo_ansible, build-date=2026-01-12T22:36:40Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., io.buildah.version=1.41.5, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:36:40Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510)
Feb 20 08:10:44 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:10:44 np0005625203.localdomain podman[73785]: 2026-02-20 08:10:44.78027314 +0000 UTC m=+0.101186695 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vendor=Red Hat, Inc., io.buildah.version=1.41.5, build-date=2026-01-12T22:36:40Z, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ovn-controller, architecture=x86_64, release=1766032510, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, 
org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:10:44 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Deactivated successfully.
Feb 20 08:10:44 np0005625203.localdomain podman[73804]: 2026-02-20 08:10:44.857730709 +0000 UTC m=+0.079664590 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=starting, vcs-type=git, io.buildah.version=1.41.5, architecture=x86_64, build-date=2026-01-12T22:56:19Z, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, release=1766032510, url=https://www.redhat.com, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:56:19Z, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20260112.1, container_name=ovn_metadata_agent, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:10:44 np0005625203.localdomain podman[73804]: 2026-02-20 08:10:44.928260501 +0000 UTC m=+0.150194352 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, build-date=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step4, managed_by=tripleo_ansible, io.buildah.version=1.41.5, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 
'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']})
Feb 20 08:10:44 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Deactivated successfully.
Feb 20 08:10:46 np0005625203.localdomain sshd[73832]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:10:49 np0005625203.localdomain snmpd[68076]: empty variable list in _query
Feb 20 08:10:49 np0005625203.localdomain snmpd[68076]: empty variable list in _query
Feb 20 08:10:49 np0005625203.localdomain sshd[73832]: Invalid user peertube from 185.246.128.171 port 10363
Feb 20 08:10:50 np0005625203.localdomain sshd[73832]: Disconnecting invalid user peertube 185.246.128.171 port 10363: Change of username or service not allowed: (peertube,ssh-connection) -> (craft,ssh-connection) [preauth]
Feb 20 08:10:51 np0005625203.localdomain sshd[73834]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:10:54 np0005625203.localdomain sshd[73836]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:10:54 np0005625203.localdomain sshd[73836]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:10:55 np0005625203.localdomain sshd[73834]: Invalid user craft from 185.246.128.171 port 62777
Feb 20 08:10:55 np0005625203.localdomain sshd[73834]: Disconnecting invalid user craft 185.246.128.171 port 62777: Change of username or service not allowed: (craft,ssh-connection) -> (anonymous,ssh-connection) [preauth]
Feb 20 08:10:57 np0005625203.localdomain sshd[73838]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:10:58 np0005625203.localdomain sshd[73838]: Invalid user anonymous from 185.246.128.171 port 59867
Feb 20 08:10:59 np0005625203.localdomain sshd[73838]: error: maximum authentication attempts exceeded for invalid user anonymous from 185.246.128.171 port 59867 ssh2 [preauth]
Feb 20 08:10:59 np0005625203.localdomain sshd[73838]: Disconnecting invalid user anonymous 185.246.128.171 port 59867: Too many authentication failures [preauth]
Feb 20 08:10:59 np0005625203.localdomain sshd[73840]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:11:00 np0005625203.localdomain sshd[73840]: Invalid user anonymous from 185.246.128.171 port 18207
Feb 20 08:11:02 np0005625203.localdomain sshd[73840]: Disconnecting invalid user anonymous 185.246.128.171 port 18207: Change of username or service not allowed: (anonymous,ssh-connection) -> (aman,ssh-connection) [preauth]
Feb 20 08:11:05 np0005625203.localdomain sshd[73842]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:11:06 np0005625203.localdomain sudo[73844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:11:06 np0005625203.localdomain sudo[73844]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:11:06 np0005625203.localdomain sudo[73844]: pam_unix(sudo:session): session closed for user root
Feb 20 08:11:06 np0005625203.localdomain sudo[73859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 08:11:06 np0005625203.localdomain sudo[73859]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:11:07 np0005625203.localdomain sshd[73842]: Invalid user aman from 185.246.128.171 port 13275
Feb 20 08:11:07 np0005625203.localdomain sudo[73859]: pam_unix(sudo:session): session closed for user root
Feb 20 08:11:08 np0005625203.localdomain sudo[73905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 08:11:08 np0005625203.localdomain sudo[73905]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:11:08 np0005625203.localdomain sudo[73905]: pam_unix(sudo:session): session closed for user root
Feb 20 08:11:08 np0005625203.localdomain sshd[73842]: Disconnecting invalid user aman 185.246.128.171 port 13275: Change of username or service not allowed: (aman,ssh-connection) -> (ceshi,ssh-connection) [preauth]
Feb 20 08:11:09 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:11:09 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:11:09 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:11:09 np0005625203.localdomain systemd[1]: tmp-crun.jwP8wT.mount: Deactivated successfully.
Feb 20 08:11:09 np0005625203.localdomain podman[73921]: 2026-02-20 08:11:09.776486026 +0000 UTC m=+0.084852993 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, batch=17.1_20260112.1, tcib_managed=true, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:34:43Z, distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=iscsid, build-date=2026-01-12T22:34:43Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=705339545363fec600102567c4e923938e0f43b3, release=1766032510, url=https://www.redhat.com, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-iscsid, config_id=tripleo_step3)
Feb 20 08:11:09 np0005625203.localdomain podman[73920]: 2026-02-20 08:11:09.817413479 +0000 UTC m=+0.131074692 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, distribution-scope=public, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, vcs-type=git, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, architecture=x86_64, version=17.1.13, io.openshift.expose-services=, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd)
Feb 20 08:11:09 np0005625203.localdomain podman[73921]: 2026-02-20 08:11:09.820340161 +0000 UTC m=+0.128707088 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.5, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.13, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2026-01-12T22:34:43Z, 
vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, name=rhosp-rhel9/openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-ref=705339545363fec600102567c4e923938e0f43b3)
Feb 20 08:11:09 np0005625203.localdomain podman[73920]: 2026-02-20 08:11:09.830487849 +0000 UTC m=+0.144149092 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20260112.1, vcs-type=git, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=collectd, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, config_id=tripleo_step3, distribution-scope=public, build-date=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64)
Feb 20 08:11:09 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:11:09 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:11:09 np0005625203.localdomain podman[73922]: 2026-02-20 08:11:09.873512848 +0000 UTC m=+0.179075957 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, config_id=tripleo_step1, build-date=2026-01-12T22:10:14Z, distribution-scope=public, tcib_managed=true, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, vcs-type=git, batch=17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, org.opencontainers.image.created=2026-01-12T22:10:14Z)
Feb 20 08:11:10 np0005625203.localdomain podman[73922]: 2026-02-20 08:11:10.113463035 +0000 UTC m=+0.419026214 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-qdrouterd, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20260112.1, build-date=2026-01-12T22:10:14Z, container_name=metrics_qdr, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, architecture=x86_64, description=Red 
Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, distribution-scope=public, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:11:10 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:11:10 np0005625203.localdomain sshd[73989]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:11:11 np0005625203.localdomain sshd[73989]: Invalid user ceshi from 185.246.128.171 port 6522
Feb 20 08:11:11 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:11:11 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:11:11 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:11:11 np0005625203.localdomain systemd[1]: tmp-crun.orKZdU.mount: Deactivated successfully.
Feb 20 08:11:11 np0005625203.localdomain podman[73991]: 2026-02-20 08:11:11.452158972 +0000 UTC m=+0.079317048 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, distribution-scope=public, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, architecture=x86_64, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-cron, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, vcs-type=git, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, release=1766032510)
Feb 20 08:11:11 np0005625203.localdomain podman[73991]: 2026-02-20 08:11:11.485465547 +0000 UTC m=+0.112623583 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.41.5, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:15Z, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, io.openshift.expose-services=, vcs-type=git, url=https://www.redhat.com, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1766032510, config_id=tripleo_step4, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:11:11 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:11:11 np0005625203.localdomain podman[73992]: 2026-02-20 08:11:11.561459561 +0000 UTC m=+0.183646032 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, version=17.1.13, build-date=2026-01-12T23:07:47Z, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
io.buildah.version=1.41.5, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, distribution-scope=public, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:07:47Z)
Feb 20 08:11:11 np0005625203.localdomain podman[73992]: 2026-02-20 08:11:11.592170044 +0000 UTC m=+0.214356525 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, batch=17.1_20260112.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, build-date=2026-01-12T23:07:47Z, version=17.1.13, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.5, config_id=tripleo_step4, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:47Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 
'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Feb 20 08:11:11 np0005625203.localdomain sshd[73989]: Disconnecting invalid user ceshi 185.246.128.171 port 6522: Change of username or service not allowed: (ceshi,ssh-connection) -> (testuser,ssh-connection) [preauth]
Feb 20 08:11:11 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:11:11 np0005625203.localdomain podman[73993]: 2026-02-20 08:11:11.613083989 +0000 UTC m=+0.233035110 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, distribution-scope=public, io.buildah.version=1.41.5, release=1766032510, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, build-date=2026-01-12T23:07:30Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:07:30Z, architecture=x86_64)
Feb 20 08:11:11 np0005625203.localdomain podman[73993]: 2026-02-20 08:11:11.643853754 +0000 UTC m=+0.263804865 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.expose-services=, managed_by=tripleo_ansible, io.buildah.version=1.41.5, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., release=1766032510, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ceilometer_agent_ipmi, tcib_managed=true, name=rhosp-rhel9/openstack-ceilometer-ipmi, architecture=x86_64, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:07:30Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, maintainer=OpenStack TripleO Team)
Feb 20 08:11:11 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:11:12 np0005625203.localdomain sshd[74066]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:11:12 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:11:12 np0005625203.localdomain podman[74068]: 2026-02-20 08:11:12.756692678 +0000 UTC m=+0.074504637 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step4, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-compute, version=17.1.13, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, url=https://www.redhat.com, architecture=x86_64, vcs-type=git, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team)
Feb 20 08:11:13 np0005625203.localdomain podman[74068]: 2026-02-20 08:11:13.128278713 +0000 UTC m=+0.446090672 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-type=git, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp-rhel9/openstack-nova-compute, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, url=https://www.redhat.com, release=1766032510, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, distribution-scope=public, build-date=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, io.buildah.version=1.41.5)
Feb 20 08:11:13 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:11:14 np0005625203.localdomain sshd[74066]: Invalid user testuser from 185.246.128.171 port 28967
Feb 20 08:11:15 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:11:15 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:11:15 np0005625203.localdomain podman[74092]: 2026-02-20 08:11:15.78855418 +0000 UTC m=+0.103876278 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, tcib_managed=true, version=17.1.13, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git, maintainer=OpenStack TripleO Team, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.buildah.version=1.41.5, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, vendor=Red Hat, Inc., build-date=2026-01-12T22:36:40Z)
Feb 20 08:11:15 np0005625203.localdomain podman[74092]: 2026-02-20 08:11:15.810295983 +0000 UTC m=+0.125618061 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, config_id=tripleo_step4, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, name=rhosp-rhel9/openstack-ovn-controller, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:36:40Z, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1766032510, container_name=ovn_controller)
Feb 20 08:11:15 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Deactivated successfully.
Feb 20 08:11:15 np0005625203.localdomain systemd[1]: tmp-crun.Xv3cPS.mount: Deactivated successfully.
Feb 20 08:11:15 np0005625203.localdomain podman[74091]: 2026-02-20 08:11:15.83796822 +0000 UTC m=+0.156214410 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., managed_by=tripleo_ansible, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.5, version=17.1.13, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, build-date=2026-01-12T22:56:19Z, maintainer=OpenStack TripleO Team, release=1766032510, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public)
Feb 20 08:11:15 np0005625203.localdomain podman[74091]: 2026-02-20 08:11:15.908262184 +0000 UTC m=+0.226508384 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, build-date=2026-01-12T22:56:19Z, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.5, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, vendor=Red Hat, Inc.)
Feb 20 08:11:15 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Deactivated successfully.
Feb 20 08:11:16 np0005625203.localdomain sshd[74066]: error: maximum authentication attempts exceeded for invalid user testuser from 185.246.128.171 port 28967 ssh2 [preauth]
Feb 20 08:11:16 np0005625203.localdomain sshd[74066]: Disconnecting invalid user testuser 185.246.128.171 port 28967: Too many authentication failures [preauth]
Feb 20 08:11:16 np0005625203.localdomain sshd[74140]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:11:18 np0005625203.localdomain sshd[74140]: Invalid user testuser from 185.246.128.171 port 14630
Feb 20 08:11:18 np0005625203.localdomain sshd[74140]: Disconnecting invalid user testuser 185.246.128.171 port 14630: Change of username or service not allowed: (testuser,ssh-connection) -> (server,ssh-connection) [preauth]
Feb 20 08:11:20 np0005625203.localdomain sshd[74142]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:11:21 np0005625203.localdomain sshd[74143]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:11:22 np0005625203.localdomain sshd[74143]: Invalid user oracle from 189.190.2.14 port 45314
Feb 20 08:11:22 np0005625203.localdomain sshd[74143]: Received disconnect from 189.190.2.14 port 45314:11: Bye Bye [preauth]
Feb 20 08:11:22 np0005625203.localdomain sshd[74143]: Disconnected from invalid user oracle 189.190.2.14 port 45314 [preauth]
Feb 20 08:11:24 np0005625203.localdomain sshd[74142]: Invalid user server from 185.246.128.171 port 59371
Feb 20 08:11:24 np0005625203.localdomain sshd[74142]: Disconnecting invalid user server 185.246.128.171 port 59371: Change of username or service not allowed: (server,ssh-connection) -> (tech,ssh-connection) [preauth]
Feb 20 08:11:25 np0005625203.localdomain sshd[74146]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:11:27 np0005625203.localdomain sshd[74146]: Invalid user tech from 185.246.128.171 port 49383
Feb 20 08:11:28 np0005625203.localdomain sshd[74146]: Disconnecting invalid user tech 185.246.128.171 port 49383: Change of username or service not allowed: (tech,ssh-connection) -> (adfexc,ssh-connection) [preauth]
Feb 20 08:11:31 np0005625203.localdomain sshd[74148]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:11:34 np0005625203.localdomain sshd[74151]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:11:34 np0005625203.localdomain sshd[74148]: Invalid user adfexc from 185.246.128.171 port 43418
Feb 20 08:11:34 np0005625203.localdomain sshd[74148]: Disconnecting invalid user adfexc 185.246.128.171 port 43418: Change of username or service not allowed: (adfexc,ssh-connection) -> (sapadm,ssh-connection) [preauth]
Feb 20 08:11:35 np0005625203.localdomain sshd[74151]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:11:35 np0005625203.localdomain sshd[74153]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:11:36 np0005625203.localdomain sshd[74153]: Invalid user sapadm from 185.246.128.171 port 25940
Feb 20 08:11:37 np0005625203.localdomain sshd[74153]: Disconnecting invalid user sapadm 185.246.128.171 port 25940: Change of username or service not allowed: (sapadm,ssh-connection) -> (USER3,ssh-connection) [preauth]
Feb 20 08:11:39 np0005625203.localdomain sshd[74155]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:11:39 np0005625203.localdomain sshd[74156]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:11:39 np0005625203.localdomain sshd[74156]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:11:40 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:11:40 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:11:40 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:11:40 np0005625203.localdomain systemd[1]: tmp-crun.gkEYaN.mount: Deactivated successfully.
Feb 20 08:11:40 np0005625203.localdomain podman[74159]: 2026-02-20 08:11:40.785696304 +0000 UTC m=+0.100083880 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, release=1766032510, batch=17.1_20260112.1, url=https://www.redhat.com, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:15Z)
Feb 20 08:11:40 np0005625203.localdomain podman[74160]: 2026-02-20 08:11:40.824250364 +0000 UTC m=+0.136585235 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.buildah.version=1.41.5, build-date=2026-01-12T22:34:43Z, release=1766032510, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, vcs-ref=705339545363fec600102567c4e923938e0f43b3, maintainer=OpenStack TripleO Team, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, config_id=tripleo_step3, version=17.1.13, name=rhosp-rhel9/openstack-iscsid, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com)
Feb 20 08:11:40 np0005625203.localdomain podman[74160]: 2026-02-20 08:11:40.837258751 +0000 UTC m=+0.149593602 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.41.5, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, url=https://www.redhat.com, vendor=Red Hat, Inc., batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-iscsid, tcib_managed=true, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64)
Feb 20 08:11:40 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:11:40 np0005625203.localdomain podman[74161]: 2026-02-20 08:11:40.928850164 +0000 UTC m=+0.237891832 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, name=rhosp-rhel9/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, batch=17.1_20260112.1, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=metrics_qdr, build-date=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, distribution-scope=public)
Feb 20 08:11:40 np0005625203.localdomain podman[74159]: 2026-02-20 08:11:40.965454872 +0000 UTC m=+0.279842418 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, io.openshift.expose-services=, architecture=x86_64, tcib_managed=true, vcs-type=git, name=rhosp-rhel9/openstack-collectd, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, build-date=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, distribution-scope=public, config_id=tripleo_step3)
Feb 20 08:11:40 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:11:41 np0005625203.localdomain podman[74161]: 2026-02-20 08:11:41.145339364 +0000 UTC m=+0.454380972 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.41.5, vcs-type=git, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:14Z, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step1, name=rhosp-rhel9/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, vendor=Red Hat, Inc., version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 qdrouterd, 
konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1766032510, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:14Z, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd)
Feb 20 08:11:41 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:11:41 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:11:41 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:11:41 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:11:41 np0005625203.localdomain podman[74223]: 2026-02-20 08:11:41.773454124 +0000 UTC m=+0.088023950 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:10:15Z, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, release=1766032510, name=rhosp-rhel9/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, container_name=logrotate_crond, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, maintainer=OpenStack TripleO Team)
Feb 20 08:11:41 np0005625203.localdomain podman[74223]: 2026-02-20 08:11:41.812316153 +0000 UTC m=+0.126885969 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, version=17.1.13, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 
vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, io.buildah.version=1.41.5, vcs-type=git, container_name=logrotate_crond, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, managed_by=tripleo_ansible, batch=17.1_20260112.1, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z)
Feb 20 08:11:41 np0005625203.localdomain podman[74224]: 2026-02-20 08:11:41.823569546 +0000 UTC m=+0.137454841 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, release=1766032510, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ceilometer-compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, container_name=ceilometer_agent_compute, build-date=2026-01-12T23:07:47Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.13)
Feb 20 08:11:41 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:11:41 np0005625203.localdomain podman[74224]: 2026-02-20 08:11:41.883273 +0000 UTC m=+0.197158265 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, distribution-scope=public, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.created=2026-01-12T23:07:47Z, version=17.1.13, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:07:47Z, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, 
url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, vendor=Red Hat, Inc., config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, release=1766032510, tcib_managed=true, name=rhosp-rhel9/openstack-ceilometer-compute)
Feb 20 08:11:41 np0005625203.localdomain podman[74225]: 2026-02-20 08:11:41.891450586 +0000 UTC m=+0.202053328 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, version=17.1.13, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ceilometer_agent_ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, release=1766032510, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, url=https://www.redhat.com, io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, distribution-scope=public)
Feb 20 08:11:41 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:11:41 np0005625203.localdomain podman[74225]: 2026-02-20 08:11:41.9473552 +0000 UTC m=+0.257957902 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, tcib_managed=true, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, 
org.opencontainers.image.created=2026-01-12T23:07:30Z, io.openshift.expose-services=, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, release=1766032510, version=17.1.13, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:11:41 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:11:42 np0005625203.localdomain sshd[74155]: Invalid user USER3 from 185.246.128.171 port 64699
Feb 20 08:11:42 np0005625203.localdomain systemd[1]: tmp-crun.mK9j9c.mount: Deactivated successfully.
Feb 20 08:11:42 np0005625203.localdomain sshd[74155]: Disconnecting invalid user USER3 185.246.128.171 port 64699: Change of username or service not allowed: (USER3,ssh-connection) -> (administrator,ssh-connection) [preauth]
Feb 20 08:11:43 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:11:43 np0005625203.localdomain systemd[1]: tmp-crun.4a68Pd.mount: Deactivated successfully.
Feb 20 08:11:43 np0005625203.localdomain podman[74295]: 2026-02-20 08:11:43.763302996 +0000 UTC m=+0.084169211 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vendor=Red Hat, Inc., config_id=tripleo_step4, build-date=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, version=17.1.13, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Feb 20 08:11:44 np0005625203.localdomain podman[74295]: 2026-02-20 08:11:44.154449263 +0000 UTC m=+0.475315438 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, build-date=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., architecture=x86_64, release=1766032510, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com, io.openshift.expose-services=, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, distribution-scope=public)
Feb 20 08:11:44 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:11:44 np0005625203.localdomain sshd[74319]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:11:45 np0005625203.localdomain sshd[74319]: Invalid user administrator from 185.246.128.171 port 56992
Feb 20 08:11:46 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:11:46 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:11:46 np0005625203.localdomain sshd[74319]: error: maximum authentication attempts exceeded for invalid user administrator from 185.246.128.171 port 56992 ssh2 [preauth]
Feb 20 08:11:46 np0005625203.localdomain sshd[74319]: Disconnecting invalid user administrator 185.246.128.171 port 56992: Too many authentication failures [preauth]
Feb 20 08:11:46 np0005625203.localdomain podman[74321]: 2026-02-20 08:11:46.758763056 +0000 UTC m=+0.078830623 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:56:19Z, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, url=https://www.redhat.com, config_id=tripleo_step4, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.buildah.version=1.41.5, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, tcib_managed=true, release=1766032510, maintainer=OpenStack TripleO Team)
Feb 20 08:11:46 np0005625203.localdomain podman[74321]: 2026-02-20 08:11:46.804314675 +0000 UTC m=+0.124382312 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:56:19Z, tcib_managed=true, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, build-date=2026-01-12T22:56:19Z, architecture=x86_64, url=https://www.redhat.com, io.buildah.version=1.41.5, version=17.1.13, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public)
Feb 20 08:11:46 np0005625203.localdomain systemd[1]: tmp-crun.68fOSA.mount: Deactivated successfully.
Feb 20 08:11:46 np0005625203.localdomain podman[74322]: 2026-02-20 08:11:46.820227764 +0000 UTC m=+0.134647744 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, name=rhosp-rhel9/openstack-ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, container_name=ovn_controller, tcib_managed=true, url=https://www.redhat.com, io.buildah.version=1.41.5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:36:40Z, release=1766032510, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, vcs-type=git, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public)
Feb 20 08:11:46 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Deactivated successfully.
Feb 20 08:11:46 np0005625203.localdomain podman[74322]: 2026-02-20 08:11:46.872293908 +0000 UTC m=+0.186713908 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, build-date=2026-01-12T22:36:40Z, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, url=https://www.redhat.com, managed_by=tripleo_ansible, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, maintainer=OpenStack TripleO Team, container_name=ovn_controller, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, io.openshift.expose-services=, version=17.1.13, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller)
Feb 20 08:11:46 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Deactivated successfully.
Feb 20 08:11:46 np0005625203.localdomain sshd[74369]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:11:49 np0005625203.localdomain sshd[74369]: Invalid user administrator from 185.246.128.171 port 21132
Feb 20 08:11:50 np0005625203.localdomain sshd[74369]: Disconnecting invalid user administrator 185.246.128.171 port 21132: Change of username or service not allowed: (administrator,ssh-connection) -> (vyos,ssh-connection) [preauth]
Feb 20 08:11:51 np0005625203.localdomain sshd[74371]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:11:52 np0005625203.localdomain sshd[74371]: Invalid user vyos from 185.246.128.171 port 3173
Feb 20 08:11:53 np0005625203.localdomain sshd[74371]: Disconnecting invalid user vyos 185.246.128.171 port 3173: Change of username or service not allowed: (vyos,ssh-connection) -> (rahul,ssh-connection) [preauth]
Feb 20 08:11:55 np0005625203.localdomain sshd[74373]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:11:58 np0005625203.localdomain sshd[74373]: Invalid user rahul from 185.246.128.171 port 46548
Feb 20 08:11:59 np0005625203.localdomain sshd[74373]: Disconnecting invalid user rahul 185.246.128.171 port 46548: Change of username or service not allowed: (rahul,ssh-connection) -> (zomboid,ssh-connection) [preauth]
Feb 20 08:12:00 np0005625203.localdomain sshd[74375]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:12:01 np0005625203.localdomain sshd[74375]: Invalid user zomboid from 185.246.128.171 port 38955
Feb 20 08:12:01 np0005625203.localdomain sshd[74375]: Disconnecting invalid user zomboid 185.246.128.171 port 38955: Change of username or service not allowed: (zomboid,ssh-connection) -> (keycloak,ssh-connection) [preauth]
Feb 20 08:12:02 np0005625203.localdomain sshd[74377]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:12:04 np0005625203.localdomain sshd[74377]: Invalid user keycloak from 185.246.128.171 port 7208
Feb 20 08:12:04 np0005625203.localdomain sshd[74377]: Disconnecting invalid user keycloak 185.246.128.171 port 7208: Change of username or service not allowed: (keycloak,ssh-connection) -> (cat,ssh-connection) [preauth]
Feb 20 08:12:05 np0005625203.localdomain sshd[74379]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:12:06 np0005625203.localdomain sshd[74379]: Invalid user cat from 185.246.128.171 port 31762
Feb 20 08:12:06 np0005625203.localdomain sshd[74379]: Disconnecting invalid user cat 185.246.128.171 port 31762: Change of username or service not allowed: (cat,ssh-connection) -> (william,ssh-connection) [preauth]
Feb 20 08:12:08 np0005625203.localdomain sudo[74381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:12:08 np0005625203.localdomain sudo[74381]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:12:08 np0005625203.localdomain sudo[74381]: pam_unix(sudo:session): session closed for user root
Feb 20 08:12:08 np0005625203.localdomain sudo[74396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Feb 20 08:12:08 np0005625203.localdomain sudo[74396]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:12:09 np0005625203.localdomain systemd[1]: tmp-crun.zeVGJX.mount: Deactivated successfully.
Feb 20 08:12:09 np0005625203.localdomain podman[74483]: 2026-02-20 08:12:09.234864198 +0000 UTC m=+0.088808545 container exec f6bf94ad66e54cdd610286b233c639418eb2b995978f1a145d6cfa77a016071d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.buildah.version=1.42.2, GIT_CLEAN=True, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, io.openshift.tags=rhceph ceph, release=1770267347, vendor=Red Hat, Inc., version=7, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, ceph=True, com.redhat.component=rhceph-container, architecture=x86_64, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git)
Feb 20 08:12:09 np0005625203.localdomain sshd[74501]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:12:09 np0005625203.localdomain podman[74483]: 2026-02-20 08:12:09.365813086 +0000 UTC m=+0.219757403 container exec_died f6bf94ad66e54cdd610286b233c639418eb2b995978f1a145d6cfa77a016071d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203, org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vcs-type=git, vendor=Red Hat, Inc., build-date=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, architecture=x86_64, distribution-scope=public, com.redhat.component=rhceph-container, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, name=rhceph)
Feb 20 08:12:09 np0005625203.localdomain sudo[74396]: pam_unix(sudo:session): session closed for user root
Feb 20 08:12:09 np0005625203.localdomain sudo[74550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:12:09 np0005625203.localdomain sudo[74550]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:12:09 np0005625203.localdomain sudo[74550]: pam_unix(sudo:session): session closed for user root
Feb 20 08:12:09 np0005625203.localdomain sudo[74565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 08:12:09 np0005625203.localdomain sudo[74565]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:12:10 np0005625203.localdomain sudo[74565]: pam_unix(sudo:session): session closed for user root
Feb 20 08:12:11 np0005625203.localdomain sudo[74612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 08:12:11 np0005625203.localdomain sudo[74612]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:12:11 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:12:11 np0005625203.localdomain sudo[74612]: pam_unix(sudo:session): session closed for user root
Feb 20 08:12:11 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:12:11 np0005625203.localdomain podman[74628]: 2026-02-20 08:12:11.136151122 +0000 UTC m=+0.083131859 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, url=https://www.redhat.com, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, architecture=x86_64, config_id=tripleo_step3, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, distribution-scope=public, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, tcib_managed=true, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., batch=17.1_20260112.1, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:12:11 np0005625203.localdomain podman[74628]: 2026-02-20 08:12:11.147545459 +0000 UTC m=+0.094526186 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, version=17.1.13, container_name=iscsid, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, com.redhat.component=openstack-iscsid-container, vcs-ref=705339545363fec600102567c4e923938e0f43b3, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, batch=17.1_20260112.1, vendor=Red Hat, Inc., vcs-type=git, release=1766032510, name=rhosp-rhel9/openstack-iscsid, build-date=2026-01-12T22:34:43Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 iscsid)
Feb 20 08:12:11 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:12:11 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:12:11 np0005625203.localdomain systemd[1]: tmp-crun.jOyh7O.mount: Deactivated successfully.
Feb 20 08:12:11 np0005625203.localdomain podman[74656]: 2026-02-20 08:12:11.264613791 +0000 UTC m=+0.080063013 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:14Z, release=1766032510, build-date=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=metrics_qdr, batch=17.1_20260112.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.5, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_id=tripleo_step1, url=https://www.redhat.com)
Feb 20 08:12:11 np0005625203.localdomain podman[74627]: 2026-02-20 08:12:11.237579133 +0000 UTC m=+0.184558140 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, container_name=collectd, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, config_id=tripleo_step3, release=1766032510, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., version=17.1.13, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, com.redhat.component=openstack-collectd-container, tcib_managed=true)
Feb 20 08:12:11 np0005625203.localdomain podman[74627]: 2026-02-20 08:12:11.32135851 +0000 UTC m=+0.268337517 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, container_name=collectd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, url=https://www.redhat.com, tcib_managed=true)
Feb 20 08:12:11 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:12:11 np0005625203.localdomain sshd[74501]: Invalid user william from 185.246.128.171 port 15878
Feb 20 08:12:11 np0005625203.localdomain podman[74656]: 2026-02-20 08:12:11.496232995 +0000 UTC m=+0.311682197 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., io.buildah.version=1.41.5, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_id=tripleo_step1, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, build-date=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, maintainer=OpenStack TripleO Team)
Feb 20 08:12:11 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:12:12 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:12:12 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:12:12 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:12:12 np0005625203.localdomain podman[74695]: 2026-02-20 08:12:12.765453014 +0000 UTC m=+0.081718795 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.13, 
build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, vcs-type=git, url=https://www.redhat.com, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-cron, container_name=logrotate_crond, managed_by=tripleo_ansible, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc.)
Feb 20 08:12:12 np0005625203.localdomain podman[74695]: 2026-02-20 08:12:12.777260134 +0000 UTC m=+0.093525975 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, container_name=logrotate_crond, vendor=Red Hat, Inc., io.buildah.version=1.41.5, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, version=17.1.13, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-cron, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4)
Feb 20 08:12:12 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:12:12 np0005625203.localdomain podman[74696]: 2026-02-20 08:12:12.82591042 +0000 UTC m=+0.139240099 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, build-date=2026-01-12T23:07:47Z, name=rhosp-rhel9/openstack-ceilometer-compute, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:07:47Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, architecture=x86_64, version=17.1.13, release=1766032510, container_name=ceilometer_agent_compute, vcs-type=git, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute)
Feb 20 08:12:12 np0005625203.localdomain sshd[74501]: Disconnecting invalid user william 185.246.128.171 port 15878: Change of username or service not allowed: (william,ssh-connection) -> (ospite,ssh-connection) [preauth]
Feb 20 08:12:12 np0005625203.localdomain podman[74697]: 2026-02-20 08:12:12.87407401 +0000 UTC m=+0.185664374 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:30Z, name=rhosp-rhel9/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.13, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step4, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, io.openshift.expose-services=, distribution-scope=public, tcib_managed=true, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:07:30Z)
Feb 20 08:12:12 np0005625203.localdomain podman[74696]: 2026-02-20 08:12:12.877230809 +0000 UTC m=+0.190560498 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2026-01-12T23:07:47Z, io.buildah.version=1.41.5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.13, io.openshift.expose-services=, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:47Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-compute, url=https://www.redhat.com, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, container_name=ceilometer_agent_compute, vcs-type=git, distribution-scope=public)
Feb 20 08:12:12 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:12:12 np0005625203.localdomain podman[74697]: 2026-02-20 08:12:12.903523214 +0000 UTC m=+0.215113528 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.expose-services=, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20260112.1, config_id=tripleo_step4, vendor=Red Hat, Inc., build-date=2026-01-12T23:07:30Z, distribution-scope=public, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1766032510, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, container_name=ceilometer_agent_ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:30Z, com.redhat.component=openstack-ceilometer-ipmi-container, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:12:12 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:12:14 np0005625203.localdomain sshd[74769]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:12:14 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:12:14 np0005625203.localdomain systemd[1]: tmp-crun.8eBUqM.mount: Deactivated successfully.
Feb 20 08:12:14 np0005625203.localdomain podman[74771]: 2026-02-20 08:12:14.761097976 +0000 UTC m=+0.082164457 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, distribution-scope=public, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step4, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, 
release=1766032510, tcib_managed=true, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute)
Feb 20 08:12:15 np0005625203.localdomain podman[74771]: 2026-02-20 08:12:15.127140068 +0000 UTC m=+0.448206479 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-compute-container, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack 
Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, url=https://www.redhat.com, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, build-date=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, name=rhosp-rhel9/openstack-nova-compute, batch=17.1_20260112.1)
Feb 20 08:12:15 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:12:15 np0005625203.localdomain sshd[74769]: Invalid user ospite from 185.246.128.171 port 11415
Feb 20 08:12:15 np0005625203.localdomain sshd[74769]: Disconnecting invalid user ospite 185.246.128.171 port 11415: Change of username or service not allowed: (ospite,ssh-connection) -> (user03,ssh-connection) [preauth]
Feb 20 08:12:16 np0005625203.localdomain sshd[74793]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:12:17 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:12:17 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:12:17 np0005625203.localdomain podman[74796]: 2026-02-20 08:12:17.766045556 +0000 UTC m=+0.080845527 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, tcib_managed=true, name=rhosp-rhel9/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:36:40Z, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, distribution-scope=public, version=17.1.13, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ovn_controller, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
build-date=2026-01-12T22:36:40Z, release=1766032510, io.openshift.expose-services=, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:12:17 np0005625203.localdomain podman[74795]: 2026-02-20 08:12:17.814086022 +0000 UTC m=+0.130559926 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, 
container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z, batch=17.1_20260112.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.13, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, build-date=2026-01-12T22:56:19Z, distribution-scope=public, vcs-type=git, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, release=1766032510, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:12:17 np0005625203.localdomain podman[74796]: 2026-02-20 08:12:17.820955827 +0000 UTC m=+0.135755758 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, vcs-type=git, release=1766032510, org.opencontainers.image.created=2026-01-12T22:36:40Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, container_name=ovn_controller, io.openshift.expose-services=, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ovn-controller, 
cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team)
Feb 20 08:12:17 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Deactivated successfully.
Feb 20 08:12:17 np0005625203.localdomain podman[74795]: 2026-02-20 08:12:17.882964672 +0000 UTC m=+0.199438586 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.expose-services=, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., config_id=tripleo_step4, distribution-scope=public, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, architecture=x86_64, build-date=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, batch=17.1_20260112.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510)
Feb 20 08:12:17 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Deactivated successfully.
Feb 20 08:12:18 np0005625203.localdomain sshd[74793]: Invalid user user03 from 185.246.128.171 port 30899
Feb 20 08:12:18 np0005625203.localdomain sshd[74793]: Disconnecting invalid user user03 185.246.128.171 port 30899: Change of username or service not allowed: (user03,ssh-connection) -> (client,ssh-connection) [preauth]
Feb 20 08:12:19 np0005625203.localdomain sshd[74845]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:12:21 np0005625203.localdomain sshd[74845]: Invalid user client from 185.246.128.171 port 9347
Feb 20 08:12:24 np0005625203.localdomain sshd[74845]: Disconnecting invalid user client 185.246.128.171 port 9347: Change of username or service not allowed: (client,ssh-connection) -> (admin1,ssh-connection) [preauth]
Feb 20 08:12:24 np0005625203.localdomain sshd[74847]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:12:25 np0005625203.localdomain sshd[74849]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:12:25 np0005625203.localdomain sshd[74849]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:12:26 np0005625203.localdomain sshd[74847]: Invalid user admin1 from 185.246.128.171 port 3290
Feb 20 08:12:28 np0005625203.localdomain sshd[74847]: error: maximum authentication attempts exceeded for invalid user admin1 from 185.246.128.171 port 3290 ssh2 [preauth]
Feb 20 08:12:28 np0005625203.localdomain sshd[74847]: Disconnecting invalid user admin1 185.246.128.171 port 3290: Too many authentication failures [preauth]
Feb 20 08:12:29 np0005625203.localdomain sshd[74851]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:12:30 np0005625203.localdomain sshd[74851]: Invalid user admin1 from 185.246.128.171 port 60812
Feb 20 08:12:32 np0005625203.localdomain sshd[74851]: Disconnecting invalid user admin1 185.246.128.171 port 60812: Change of username or service not allowed: (admin1,ssh-connection) -> (centos,ssh-connection) [preauth]
Feb 20 08:12:34 np0005625203.localdomain sshd[74853]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:12:36 np0005625203.localdomain sshd[74853]: Invalid user centos from 185.246.128.171 port 64621
Feb 20 08:12:37 np0005625203.localdomain sshd[74853]: Disconnecting invalid user centos 185.246.128.171 port 64621: Change of username or service not allowed: (centos,ssh-connection) -> (alex,ssh-connection) [preauth]
Feb 20 08:12:40 np0005625203.localdomain sshd[74855]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:12:41 np0005625203.localdomain sshd[74855]: Invalid user alex from 185.246.128.171 port 5345
Feb 20 08:12:41 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:12:41 np0005625203.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Feb 20 08:12:41 np0005625203.localdomain recover_tripleo_nova_virtqemud[74864]: 62505
Feb 20 08:12:41 np0005625203.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Feb 20 08:12:41 np0005625203.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Feb 20 08:12:41 np0005625203.localdomain systemd[1]: tmp-crun.hCbawN.mount: Deactivated successfully.
Feb 20 08:12:41 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:12:41 np0005625203.localdomain podman[74857]: 2026-02-20 08:12:41.375146043 +0000 UTC m=+0.096502127 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.buildah.version=1.41.5, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, vcs-ref=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, 
name=rhosp-rhel9/openstack-iscsid, container_name=iscsid, architecture=x86_64, build-date=2026-01-12T22:34:43Z, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, version=17.1.13, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-type=git)
Feb 20 08:12:41 np0005625203.localdomain podman[74857]: 2026-02-20 08:12:41.412410322 +0000 UTC m=+0.133766416 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, vcs-ref=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, container_name=iscsid, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, config_id=tripleo_step3, tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.openshift.expose-services=, name=rhosp-rhel9/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 17.1 iscsid)
Feb 20 08:12:41 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:12:41 np0005625203.localdomain podman[74875]: 2026-02-20 08:12:41.472630581 +0000 UTC m=+0.086706700 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, vcs-type=git, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, maintainer=OpenStack TripleO Team, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container)
Feb 20 08:12:41 np0005625203.localdomain podman[74875]: 2026-02-20 08:12:41.48629729 +0000 UTC m=+0.100373459 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp-rhel9/openstack-collectd, io.buildah.version=1.41.5, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.13, io.openshift.expose-services=, tcib_managed=true, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd)
Feb 20 08:12:41 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:12:41 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:12:41 np0005625203.localdomain podman[74898]: 2026-02-20 08:12:41.608001897 +0000 UTC m=+0.060324893 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, container_name=metrics_qdr, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
qdrouterd, build-date=2026-01-12T22:10:14Z, version=17.1.13, distribution-scope=public, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step1, name=rhosp-rhel9/openstack-qdrouterd)
Feb 20 08:12:41 np0005625203.localdomain podman[74898]: 2026-02-20 08:12:41.820346367 +0000 UTC m=+0.272669293 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, version=17.1.13, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, config_id=tripleo_step1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5)
Feb 20 08:12:41 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:12:42 np0005625203.localdomain sshd[74855]: Disconnecting invalid user alex 185.246.128.171 port 5345: Change of username or service not allowed: (alex,ssh-connection) -> (VYOS,ssh-connection) [preauth]
Feb 20 08:12:42 np0005625203.localdomain sshd[74927]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:12:43 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:12:43 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:12:43 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:12:43 np0005625203.localdomain sshd[74927]: Invalid user VYOS from 185.246.128.171 port 33043
Feb 20 08:12:43 np0005625203.localdomain podman[74931]: 2026-02-20 08:12:43.769307936 +0000 UTC m=+0.079782614 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, architecture=x86_64, build-date=2026-01-12T23:07:30Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, tcib_managed=true, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., vcs-type=git, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, version=17.1.13, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:30Z, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:12:43 np0005625203.localdomain podman[74931]: 2026-02-20 08:12:43.801318909 +0000 UTC m=+0.111793577 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, build-date=2026-01-12T23:07:30Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, vendor=Red Hat, Inc., version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1766032510, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git)
Feb 20 08:12:43 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:12:43 np0005625203.localdomain podman[74929]: 2026-02-20 08:12:43.823143744 +0000 UTC m=+0.138744063 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-cron, name=rhosp-rhel9/openstack-cron, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-cron-container, batch=17.1_20260112.1, distribution-scope=public, tcib_managed=true)
Feb 20 08:12:43 np0005625203.localdomain podman[74930]: 2026-02-20 08:12:43.885640224 +0000 UTC m=+0.198937191 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-compute, release=1766032510, batch=17.1_20260112.1, io.openshift.expose-services=, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:07:47Z, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, io.buildah.version=1.41.5)
Feb 20 08:12:43 np0005625203.localdomain podman[74929]: 2026-02-20 08:12:43.910282157 +0000 UTC m=+0.225882506 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.41.5, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-cron-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, architecture=x86_64, release=1766032510, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true)
Feb 20 08:12:43 np0005625203.localdomain podman[74930]: 2026-02-20 08:12:43.921357624 +0000 UTC m=+0.234654641 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, build-date=2026-01-12T23:07:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:07:47Z, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
container_name=ceilometer_agent_compute, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, release=1766032510, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:12:43 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:12:43 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:12:44 np0005625203.localdomain sshd[74927]: Disconnecting invalid user VYOS 185.246.128.171 port 33043: Change of username or service not allowed: (VYOS,ssh-connection) -> (vhserver,ssh-connection) [preauth]
Feb 20 08:12:45 np0005625203.localdomain sshd[75001]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:12:45 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:12:45 np0005625203.localdomain systemd[1]: tmp-crun.La8Ydy.mount: Deactivated successfully.
Feb 20 08:12:45 np0005625203.localdomain podman[75003]: 2026-02-20 08:12:45.764243916 +0000 UTC m=+0.079289238 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, io.buildah.version=1.41.5, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, build-date=2026-01-12T23:32:04Z, container_name=nova_migration_target, tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']})
Feb 20 08:12:46 np0005625203.localdomain podman[75003]: 2026-02-20 08:12:46.128699917 +0000 UTC m=+0.443745229 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, container_name=nova_migration_target, name=rhosp-rhel9/openstack-nova-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, io.buildah.version=1.41.5, release=1766032510, build-date=2026-01-12T23:32:04Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, tcib_managed=true, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, config_id=tripleo_step4)
Feb 20 08:12:46 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:12:46 np0005625203.localdomain sshd[75001]: Invalid user vhserver from 185.246.128.171 port 8677
Feb 20 08:12:46 np0005625203.localdomain sshd[75001]: Disconnecting invalid user vhserver 185.246.128.171 port 8677: Change of username or service not allowed: (vhserver,ssh-connection) -> (caja,ssh-connection) [preauth]
Feb 20 08:12:48 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:12:48 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:12:48 np0005625203.localdomain systemd[1]: tmp-crun.czGLHE.mount: Deactivated successfully.
Feb 20 08:12:48 np0005625203.localdomain podman[75027]: 2026-02-20 08:12:48.769623548 +0000 UTC m=+0.084756080 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:56:19Z, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, managed_by=tripleo_ansible, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., build-date=2026-01-12T22:56:19Z, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1766032510, io.buildah.version=1.41.5, url=https://www.redhat.com)
Feb 20 08:12:48 np0005625203.localdomain podman[75028]: 2026-02-20 08:12:48.820581916 +0000 UTC m=+0.132411724 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=ovn_controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:36:40Z, architecture=x86_64, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, version=17.1.13, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, release=1766032510, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp-rhel9/openstack-ovn-controller, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com)
Feb 20 08:12:48 np0005625203.localdomain podman[75027]: 2026-02-20 08:12:48.840534012 +0000 UTC m=+0.155666704 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, url=https://www.redhat.com, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, version=17.1.13, build-date=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, distribution-scope=public, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.created=2026-01-12T22:56:19Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.5)
Feb 20 08:12:48 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Deactivated successfully.
Feb 20 08:12:48 np0005625203.localdomain podman[75028]: 2026-02-20 08:12:48.869025195 +0000 UTC m=+0.180855013 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, name=rhosp-rhel9/openstack-ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2026-01-12T22:36:40Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, batch=17.1_20260112.1, container_name=ovn_controller, io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_step4, io.buildah.version=1.41.5, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, release=1766032510, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller)
Feb 20 08:12:48 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Deactivated successfully.
Feb 20 08:12:49 np0005625203.localdomain sshd[75074]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:12:52 np0005625203.localdomain sshd[75074]: Invalid user caja from 185.246.128.171 port 10063
Feb 20 08:12:52 np0005625203.localdomain sshd[75074]: Disconnecting invalid user caja 185.246.128.171 port 10063: Change of username or service not allowed: (caja,ssh-connection) -> (test1,ssh-connection) [preauth]
Feb 20 08:12:53 np0005625203.localdomain sshd[75076]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:12:55 np0005625203.localdomain sshd[75076]: Invalid user test1 from 185.246.128.171 port 14091
Feb 20 08:12:56 np0005625203.localdomain sshd[75076]: Disconnecting invalid user test1 185.246.128.171 port 14091: Change of username or service not allowed: (test1,ssh-connection) -> (redmine,ssh-connection) [preauth]
Feb 20 08:12:57 np0005625203.localdomain sshd[75078]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:13:00 np0005625203.localdomain sshd[75078]: Invalid user redmine from 185.246.128.171 port 7614
Feb 20 08:13:00 np0005625203.localdomain sshd[75078]: Disconnecting invalid user redmine 185.246.128.171 port 7614: Change of username or service not allowed: (redmine,ssh-connection) -> (TEST,ssh-connection) [preauth]
Feb 20 08:13:00 np0005625203.localdomain sshd[75080]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:13:02 np0005625203.localdomain sshd[75080]: Invalid user TEST from 185.246.128.171 port 51117
Feb 20 08:13:03 np0005625203.localdomain sshd[75080]: Disconnecting invalid user TEST 185.246.128.171 port 51117: Change of username or service not allowed: (TEST,ssh-connection) -> (tmp,ssh-connection) [preauth]
Feb 20 08:13:04 np0005625203.localdomain sshd[75082]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:13:05 np0005625203.localdomain sshd[75082]: Invalid user tmp from 185.246.128.171 port 43210
Feb 20 08:13:05 np0005625203.localdomain sshd[75082]: Disconnecting invalid user tmp 185.246.128.171 port 43210: Change of username or service not allowed: (tmp,ssh-connection) -> (12345,ssh-connection) [preauth]
Feb 20 08:13:06 np0005625203.localdomain sshd[75084]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:13:06 np0005625203.localdomain sshd[75085]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:13:07 np0005625203.localdomain sshd[75084]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:13:08 np0005625203.localdomain sshd[75085]: Invalid user 12345 from 185.246.128.171 port 18206
Feb 20 08:13:09 np0005625203.localdomain sshd[75085]: Disconnecting invalid user 12345 185.246.128.171 port 18206: Change of username or service not allowed: (12345,ssh-connection) -> (sshd,ssh-connection) [preauth]
Feb 20 08:13:11 np0005625203.localdomain sudo[75088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:13:11 np0005625203.localdomain sudo[75088]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:13:11 np0005625203.localdomain sudo[75088]: pam_unix(sudo:session): session closed for user root
Feb 20 08:13:11 np0005625203.localdomain sudo[75103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 08:13:11 np0005625203.localdomain sudo[75103]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:13:11 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:13:11 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:13:11 np0005625203.localdomain podman[75133]: 2026-02-20 08:13:11.78523822 +0000 UTC m=+0.080292279 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, name=rhosp-rhel9/openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, managed_by=tripleo_ansible, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee)
Feb 20 08:13:11 np0005625203.localdomain podman[75133]: 2026-02-20 08:13:11.826384501 +0000 UTC m=+0.121438580 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vendor=Red Hat, Inc., container_name=collectd, io.buildah.version=1.41.5, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, io.openshift.expose-services=, name=rhosp-rhel9/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, config_id=tripleo_step3, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.13)
Feb 20 08:13:11 np0005625203.localdomain podman[75134]: 2026-02-20 08:13:11.842179457 +0000 UTC m=+0.135581164 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-iscsid, distribution-scope=public, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20260112.1, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, release=1766032510, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-type=git, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2026-01-12T22:34:43Z, managed_by=tripleo_ansible, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, vcs-ref=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid)
Feb 20 08:13:11 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:13:11 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:13:11 np0005625203.localdomain podman[75134]: 2026-02-20 08:13:11.85695632 +0000 UTC m=+0.150357987 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.5, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20260112.1, container_name=iscsid, distribution-scope=public, config_id=tripleo_step3, 
vcs-ref=705339545363fec600102567c4e923938e0f43b3, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., build-date=2026-01-12T22:34:43Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, tcib_managed=true, name=rhosp-rhel9/openstack-iscsid, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, url=https://www.redhat.com)
Feb 20 08:13:11 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:13:11 np0005625203.localdomain podman[75173]: 2026-02-20 08:13:11.940834231 +0000 UTC m=+0.079202265 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, version=17.1.13, distribution-scope=public, name=rhosp-rhel9/openstack-qdrouterd, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, batch=17.1_20260112.1, build-date=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, config_id=tripleo_step1, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, architecture=x86_64, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:13:12 np0005625203.localdomain sudo[75103]: pam_unix(sudo:session): session closed for user root
Feb 20 08:13:12 np0005625203.localdomain sshd[75217]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:13:12 np0005625203.localdomain podman[75173]: 2026-02-20 08:13:12.157380522 +0000 UTC m=+0.295748546 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, version=17.1.13, build-date=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 
17.1_20260112.1, config_id=tripleo_step1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, release=1766032510, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-qdrouterd, vendor=Red Hat, Inc., architecture=x86_64, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:13:12 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:13:12 np0005625203.localdomain sshd[75219]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:13:12 np0005625203.localdomain sshd[75219]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:13:12 np0005625203.localdomain sudo[75221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 08:13:12 np0005625203.localdomain sudo[75221]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:13:12 np0005625203.localdomain sudo[75221]: pam_unix(sudo:session): session closed for user root
Feb 20 08:13:12 np0005625203.localdomain sshd[75236]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:13:13 np0005625203.localdomain sshd[75217]: Disconnecting authenticating user sshd 185.246.128.171 port 38506: Change of username or service not allowed: (sshd,ssh-connection) -> (sepehr,ssh-connection) [preauth]
Feb 20 08:13:14 np0005625203.localdomain sshd[75236]: Invalid user ubuntu from 103.171.84.20 port 48684
Feb 20 08:13:14 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:13:14 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:13:14 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:13:14 np0005625203.localdomain systemd[1]: tmp-crun.ZBJidA.mount: Deactivated successfully.
Feb 20 08:13:14 np0005625203.localdomain podman[75238]: 2026-02-20 08:13:14.39676296 +0000 UTC m=+0.085138412 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:15Z, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp-rhel9/openstack-cron, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible)
Feb 20 08:13:14 np0005625203.localdomain systemd[1]: tmp-crun.h3aXQF.mount: Deactivated successfully.
Feb 20 08:13:14 np0005625203.localdomain podman[75240]: 2026-02-20 08:13:14.50359008 +0000 UTC m=+0.184651582 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.expose-services=, batch=17.1_20260112.1, architecture=x86_64, version=17.1.13, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, vcs-type=git, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:07:30Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, name=rhosp-rhel9/openstack-ceilometer-ipmi, build-date=2026-01-12T23:07:30Z)
Feb 20 08:13:14 np0005625203.localdomain podman[75238]: 2026-02-20 08:13:14.533595632 +0000 UTC m=+0.221971084 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, url=https://www.redhat.com, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-cron, vendor=Red Hat, Inc., container_name=logrotate_crond, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, batch=17.1_20260112.1, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, release=1766032510)
Feb 20 08:13:14 np0005625203.localdomain sshd[75236]: Received disconnect from 103.171.84.20 port 48684:11: Bye Bye [preauth]
Feb 20 08:13:14 np0005625203.localdomain sshd[75236]: Disconnected from invalid user ubuntu 103.171.84.20 port 48684 [preauth]
Feb 20 08:13:14 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:13:14 np0005625203.localdomain podman[75240]: 2026-02-20 08:13:14.558190542 +0000 UTC m=+0.239252044 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, build-date=2026-01-12T23:07:30Z, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, managed_by=tripleo_ansible, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:07:30Z, config_id=tripleo_step4, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, tcib_managed=true, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, io.buildah.version=1.41.5, distribution-scope=public, name=rhosp-rhel9/openstack-ceilometer-ipmi)
Feb 20 08:13:14 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:13:14 np0005625203.localdomain podman[75239]: 2026-02-20 08:13:14.4828657 +0000 UTC m=+0.166628317 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:07:47Z, org.opencontainers.image.created=2026-01-12T23:07:47Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1766032510, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-type=git, io.openshift.expose-services=, tcib_managed=true)
Feb 20 08:13:14 np0005625203.localdomain podman[75239]: 2026-02-20 08:13:14.617341027 +0000 UTC m=+0.301103634 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:07:47Z, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, 
Inc., version=17.1.13, name=rhosp-rhel9/openstack-ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, url=https://www.redhat.com, container_name=ceilometer_agent_compute, distribution-scope=public, tcib_managed=true, io.buildah.version=1.41.5, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, architecture=x86_64, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:47Z)
Feb 20 08:13:14 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:13:14 np0005625203.localdomain sshd[75308]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:13:16 np0005625203.localdomain sudo[75355]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-avmusojyuugmbxwlyuulhalldnabdrmg ; /usr/bin/python3
Feb 20 08:13:16 np0005625203.localdomain sudo[75355]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:13:16 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:13:16 np0005625203.localdomain podman[75358]: 2026-02-20 08:13:16.49756337 +0000 UTC m=+0.082445388 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.13, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, config_id=tripleo_step4, container_name=nova_migration_target, name=rhosp-rhel9/openstack-nova-compute, 
org.opencontainers.image.created=2026-01-12T23:32:04Z, build-date=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:13:16 np0005625203.localdomain sshd[75308]: Invalid user sepehr from 185.246.128.171 port 19404
Feb 20 08:13:16 np0005625203.localdomain python3[75357]: ansible-ansible.legacy.stat Invoked with path=/etc/puppet/hieradata/config_step.json follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 08:13:16 np0005625203.localdomain sudo[75355]: pam_unix(sudo:session): session closed for user root
Feb 20 08:13:16 np0005625203.localdomain sshd[75308]: Disconnecting invalid user sepehr 185.246.128.171 port 19404: Change of username or service not allowed: (sepehr,ssh-connection) -> (manish,ssh-connection) [preauth]
Feb 20 08:13:16 np0005625203.localdomain sudo[75422]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lzkzvmgelgasmyoyyxffiwafkvgmwfsy ; /usr/bin/python3
Feb 20 08:13:16 np0005625203.localdomain sudo[75422]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:13:16 np0005625203.localdomain podman[75358]: 2026-02-20 08:13:16.844927835 +0000 UTC m=+0.429809843 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.buildah.version=1.41.5, version=17.1.13, config_id=tripleo_step4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, 
architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=nova_migration_target, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:13:16 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:13:16 np0005625203.localdomain python3[75424]: ansible-ansible.legacy.copy Invoked with dest=/etc/puppet/hieradata/config_step.json force=True mode=0600 src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771575196.2274818-114046-75717394985510/source _original_basename=tmpfngllzcz follow=False checksum=039e0b234f00fbd1242930f0d5dc67e8b4c067fe backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:13:16 np0005625203.localdomain sudo[75422]: pam_unix(sudo:session): session closed for user root
Feb 20 08:13:17 np0005625203.localdomain sshd[75439]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:13:17 np0005625203.localdomain sudo[75454]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-thddupnkyrvecynclvryhcvjizhgpssm ; /usr/bin/python3
Feb 20 08:13:17 np0005625203.localdomain sudo[75454]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:13:17 np0005625203.localdomain python3[75456]: ansible-stat Invoked with path=/var/lib/tripleo-config/container-startup-config/step_5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 20 08:13:17 np0005625203.localdomain sudo[75454]: pam_unix(sudo:session): session closed for user root
Feb 20 08:13:18 np0005625203.localdomain sudo[75504]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wckpujkoorjtksptndybxdiwsthsbxni ; /usr/bin/python3
Feb 20 08:13:18 np0005625203.localdomain sudo[75504]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:13:18 np0005625203.localdomain sudo[75504]: pam_unix(sudo:session): session closed for user root
Feb 20 08:13:18 np0005625203.localdomain sudo[75522]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gpfwsjifwwuvixgphdbgosqpnaxuyjwg ; /usr/bin/python3
Feb 20 08:13:18 np0005625203.localdomain sudo[75522]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:13:18 np0005625203.localdomain sudo[75522]: pam_unix(sudo:session): session closed for user root
Feb 20 08:13:18 np0005625203.localdomain sshd[75439]: Invalid user manish from 185.246.128.171 port 53157
Feb 20 08:13:18 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:13:18 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:13:19 np0005625203.localdomain systemd[1]: tmp-crun.LZCoK0.mount: Deactivated successfully.
Feb 20 08:13:19 np0005625203.localdomain podman[75552]: 2026-02-20 08:13:19.032033842 +0000 UTC m=+0.088583449 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, version=17.1.13, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1766032510, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, url=https://www.redhat.com)
Feb 20 08:13:19 np0005625203.localdomain systemd[1]: tmp-crun.BXBQcV.mount: Deactivated successfully.
Feb 20 08:13:19 np0005625203.localdomain podman[75553]: 2026-02-20 08:13:19.095524593 +0000 UTC m=+0.147047122 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, name=rhosp-rhel9/openstack-ovn-controller, build-date=2026-01-12T22:36:40Z, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.buildah.version=1.41.5, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, config_id=tripleo_step4, batch=17.1_20260112.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:36:40Z, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:13:19 np0005625203.localdomain podman[75553]: 2026-02-20 08:13:19.116123139 +0000 UTC m=+0.167645628 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, name=rhosp-rhel9/openstack-ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, batch=17.1_20260112.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, tcib_managed=true, io.buildah.version=1.41.5, vcs-type=git, architecture=x86_64, 
managed_by=tripleo_ansible, build-date=2026-01-12T22:36:40Z, version=17.1.13, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, release=1766032510)
Feb 20 08:13:19 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Deactivated successfully.
Feb 20 08:13:19 np0005625203.localdomain podman[75552]: 2026-02-20 08:13:19.151018314 +0000 UTC m=+0.207567931 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:56:19Z, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1766032510, version=17.1.13)
Feb 20 08:13:19 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Deactivated successfully.
Feb 20 08:13:19 np0005625203.localdomain sudo[75675]: tripleo-admin : TTY=pts/0 ; PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-upzubspqtxqdfgtnhthnandsdorkcpxd ; ANSIBLE_ASYNC_DIR=/tmp/.ansible_async /usr/bin/python3 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1771575198.9176161-114213-41898743972870/async_wrapper.py 572308869732 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1771575198.9176161-114213-41898743972870/AnsiballZ_command.py _
Feb 20 08:13:19 np0005625203.localdomain sudo[75675]: pam_unix(sudo:session): session opened for user root(uid=0) by tripleo-admin(uid=1003)
Feb 20 08:13:19 np0005625203.localdomain sshd[75439]: Disconnecting invalid user manish 185.246.128.171 port 53157: Change of username or service not allowed: (manish,ssh-connection) -> (dasusr1,ssh-connection) [preauth]
Feb 20 08:13:19 np0005625203.localdomain ansible-async_wrapper.py[75677]: Invoked with 572308869732 3600 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1771575198.9176161-114213-41898743972870/AnsiballZ_command.py _
Feb 20 08:13:19 np0005625203.localdomain ansible-async_wrapper.py[75680]: Starting module and watcher
Feb 20 08:13:19 np0005625203.localdomain ansible-async_wrapper.py[75680]: Start watching 75681 (3600)
Feb 20 08:13:19 np0005625203.localdomain ansible-async_wrapper.py[75681]: Start module (75681)
Feb 20 08:13:19 np0005625203.localdomain ansible-async_wrapper.py[75677]: Return async_wrapper task started.
Feb 20 08:13:19 np0005625203.localdomain sudo[75675]: pam_unix(sudo:session): session closed for user root
Feb 20 08:13:19 np0005625203.localdomain sudo[75696]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fgldnklpxoldniyjylysberhvzfuqtqf ; /usr/bin/python3
Feb 20 08:13:19 np0005625203.localdomain sudo[75696]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:13:19 np0005625203.localdomain python3[75701]: ansible-ansible.legacy.async_status Invoked with jid=572308869732.75677 mode=status _async_dir=/tmp/.ansible_async
Feb 20 08:13:19 np0005625203.localdomain sudo[75696]: pam_unix(sudo:session): session closed for user root
Feb 20 08:13:20 np0005625203.localdomain sshd[75702]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:13:23 np0005625203.localdomain sshd[75702]: Invalid user dasusr1 from 185.246.128.171 port 36080
Feb 20 08:13:23 np0005625203.localdomain sshd[75702]: Disconnecting invalid user dasusr1 185.246.128.171 port 36080: Change of username or service not allowed: (dasusr1,ssh-connection) -> (device,ssh-connection) [preauth]
Feb 20 08:13:23 np0005625203.localdomain puppet-user[75700]: Warning: /etc/puppet/hiera.yaml: Use of 'hiera.yaml' version 3 is deprecated. It should be converted to version 5
Feb 20 08:13:23 np0005625203.localdomain puppet-user[75700]:    (file: /etc/puppet/hiera.yaml)
Feb 20 08:13:23 np0005625203.localdomain puppet-user[75700]: Warning: Undefined variable '::deploy_config_name';
Feb 20 08:13:23 np0005625203.localdomain puppet-user[75700]:    (file & line not available)
Feb 20 08:13:23 np0005625203.localdomain puppet-user[75700]: Warning: The function 'hiera' is deprecated in favor of using 'lookup'. See https://puppet.com/docs/puppet/7.10/deprecated_language.html
Feb 20 08:13:23 np0005625203.localdomain puppet-user[75700]:    (file & line not available)
Feb 20 08:13:23 np0005625203.localdomain puppet-user[75700]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/profile/base/database/mysql/client.pp, line: 89, column: 8)
Feb 20 08:13:23 np0005625203.localdomain puppet-user[75700]: Warning: This method is deprecated, please use match expressions with Stdlib::Compat::String instead. They are described at https://docs.puppet.com/puppet/latest/reference/lang_data_type.html#match-expressions. at ["/etc/puppet/modules/snmp/manifests/params.pp", 310]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Feb 20 08:13:23 np0005625203.localdomain puppet-user[75700]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Feb 20 08:13:23 np0005625203.localdomain puppet-user[75700]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Feb 20 08:13:23 np0005625203.localdomain puppet-user[75700]:                     with Stdlib::Compat::Bool. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 358]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Feb 20 08:13:23 np0005625203.localdomain puppet-user[75700]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Feb 20 08:13:23 np0005625203.localdomain puppet-user[75700]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Feb 20 08:13:23 np0005625203.localdomain puppet-user[75700]:                     with Stdlib::Compat::Array. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 367]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Feb 20 08:13:23 np0005625203.localdomain puppet-user[75700]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Feb 20 08:13:23 np0005625203.localdomain puppet-user[75700]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Feb 20 08:13:23 np0005625203.localdomain puppet-user[75700]:                     with Stdlib::Compat::String. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 382]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Feb 20 08:13:23 np0005625203.localdomain puppet-user[75700]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Feb 20 08:13:23 np0005625203.localdomain puppet-user[75700]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Feb 20 08:13:23 np0005625203.localdomain puppet-user[75700]:                     with Stdlib::Compat::Numeric. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 388]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Feb 20 08:13:23 np0005625203.localdomain puppet-user[75700]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Feb 20 08:13:23 np0005625203.localdomain puppet-user[75700]: Warning: This method is deprecated, please use the stdlib validate_legacy function,
Feb 20 08:13:23 np0005625203.localdomain puppet-user[75700]:                     with Pattern[]. There is further documentation for validate_legacy function in the README. at ["/etc/puppet/modules/snmp/manifests/init.pp", 393]:["/var/lib/tripleo-config/puppet_step_config.pp", 4]
Feb 20 08:13:23 np0005625203.localdomain puppet-user[75700]:    (location: /etc/puppet/modules/stdlib/lib/puppet/functions/deprecation.rb:34:in `deprecation')
Feb 20 08:13:23 np0005625203.localdomain puppet-user[75700]: Warning: Unknown variable: '::deployment_type'. (file: /etc/puppet/modules/tripleo/manifests/packages.pp, line: 39, column: 69)
Feb 20 08:13:23 np0005625203.localdomain puppet-user[75700]: Notice: Compiled catalog for np0005625203.localdomain in environment production in 0.20 seconds
Feb 20 08:13:23 np0005625203.localdomain puppet-user[75700]: Notice: Applied catalog in 0.30 seconds
Feb 20 08:13:23 np0005625203.localdomain puppet-user[75700]: Application:
Feb 20 08:13:23 np0005625203.localdomain puppet-user[75700]:    Initial environment: production
Feb 20 08:13:23 np0005625203.localdomain puppet-user[75700]:    Converged environment: production
Feb 20 08:13:23 np0005625203.localdomain puppet-user[75700]:          Run mode: user
Feb 20 08:13:23 np0005625203.localdomain puppet-user[75700]: Changes:
Feb 20 08:13:23 np0005625203.localdomain puppet-user[75700]: Events:
Feb 20 08:13:23 np0005625203.localdomain puppet-user[75700]: Resources:
Feb 20 08:13:23 np0005625203.localdomain puppet-user[75700]:             Total: 19
Feb 20 08:13:23 np0005625203.localdomain puppet-user[75700]: Time:
Feb 20 08:13:23 np0005625203.localdomain puppet-user[75700]:          Schedule: 0.00
Feb 20 08:13:23 np0005625203.localdomain puppet-user[75700]:           Package: 0.00
Feb 20 08:13:23 np0005625203.localdomain puppet-user[75700]:              Exec: 0.01
Feb 20 08:13:23 np0005625203.localdomain puppet-user[75700]:            Augeas: 0.01
Feb 20 08:13:23 np0005625203.localdomain puppet-user[75700]:              File: 0.02
Feb 20 08:13:23 np0005625203.localdomain puppet-user[75700]:           Service: 0.08
Feb 20 08:13:23 np0005625203.localdomain puppet-user[75700]:    Config retrieval: 0.26
Feb 20 08:13:23 np0005625203.localdomain puppet-user[75700]:    Transaction evaluation: 0.29
Feb 20 08:13:23 np0005625203.localdomain puppet-user[75700]:    Catalog application: 0.30
Feb 20 08:13:23 np0005625203.localdomain puppet-user[75700]:          Last run: 1771575203
Feb 20 08:13:23 np0005625203.localdomain puppet-user[75700]:        Filebucket: 0.00
Feb 20 08:13:23 np0005625203.localdomain puppet-user[75700]:             Total: 0.30
Feb 20 08:13:23 np0005625203.localdomain puppet-user[75700]: Version:
Feb 20 08:13:23 np0005625203.localdomain puppet-user[75700]:            Config: 1771575203
Feb 20 08:13:23 np0005625203.localdomain puppet-user[75700]:            Puppet: 7.10.0
Feb 20 08:13:24 np0005625203.localdomain ansible-async_wrapper.py[75681]: Module complete (75681)
Feb 20 08:13:24 np0005625203.localdomain sshd[75826]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:13:24 np0005625203.localdomain ansible-async_wrapper.py[75680]: Done in kid B.
Feb 20 08:13:26 np0005625203.localdomain sshd[75826]: Invalid user device from 185.246.128.171 port 41514
Feb 20 08:13:27 np0005625203.localdomain sshd[75826]: Disconnecting invalid user device 185.246.128.171 port 41514: Change of username or service not allowed: (device,ssh-connection) -> (finance,ssh-connection) [preauth]
Feb 20 08:13:29 np0005625203.localdomain sshd[75828]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:13:29 np0005625203.localdomain sudo[75842]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ojkjrfmsqdotlvfeigdwwhryhsjuhnlj ; /usr/bin/python3
Feb 20 08:13:29 np0005625203.localdomain sudo[75842]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:13:30 np0005625203.localdomain python3[75844]: ansible-ansible.legacy.async_status Invoked with jid=572308869732.75677 mode=status _async_dir=/tmp/.ansible_async
Feb 20 08:13:30 np0005625203.localdomain sudo[75842]: pam_unix(sudo:session): session closed for user root
Feb 20 08:13:30 np0005625203.localdomain sudo[75859]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tcreiikmsgmwcrzrxvpgnmsztkwczvvp ; /usr/bin/python3
Feb 20 08:13:30 np0005625203.localdomain sudo[75859]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:13:30 np0005625203.localdomain python3[75861]: ansible-file Invoked with path=/var/lib/container-puppet/puppetlabs state=directory setype=svirt_sandbox_file_t selevel=s0 recurse=True force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Feb 20 08:13:30 np0005625203.localdomain sudo[75859]: pam_unix(sudo:session): session closed for user root
Feb 20 08:13:30 np0005625203.localdomain sudo[75875]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-waizgikjmjmwbfqpeyazknlzbnsbrzna ; /usr/bin/python3
Feb 20 08:13:30 np0005625203.localdomain sudo[75875]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:13:31 np0005625203.localdomain python3[75877]: ansible-stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 20 08:13:31 np0005625203.localdomain sudo[75875]: pam_unix(sudo:session): session closed for user root
Feb 20 08:13:31 np0005625203.localdomain sshd[75828]: Invalid user finance from 185.246.128.171 port 53238
Feb 20 08:13:31 np0005625203.localdomain sudo[75925]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jwndbqseagkcvvxmjrawarudpnwtqrtl ; /usr/bin/python3
Feb 20 08:13:31 np0005625203.localdomain sudo[75925]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:13:31 np0005625203.localdomain python3[75927]: ansible-ansible.legacy.stat Invoked with path=/var/lib/container-puppet/puppetlabs/facter.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 08:13:31 np0005625203.localdomain sudo[75925]: pam_unix(sudo:session): session closed for user root
Feb 20 08:13:31 np0005625203.localdomain sudo[75943]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-khzjxukjzsjsuwkysakewelotulbcmdk ; /usr/bin/python3
Feb 20 08:13:31 np0005625203.localdomain sudo[75943]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:13:31 np0005625203.localdomain python3[75945]: ansible-ansible.legacy.file Invoked with setype=svirt_sandbox_file_t selevel=s0 dest=/var/lib/container-puppet/puppetlabs/facter.conf _original_basename=tmpkvrw1942 recurse=False state=file path=/var/lib/container-puppet/puppetlabs/facter.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None attributes=None
Feb 20 08:13:31 np0005625203.localdomain sudo[75943]: pam_unix(sudo:session): session closed for user root
Feb 20 08:13:32 np0005625203.localdomain sshd[75828]: Disconnecting invalid user finance 185.246.128.171 port 53238: Change of username or service not allowed: (finance,ssh-connection) -> (install,ssh-connection) [preauth]
Feb 20 08:13:32 np0005625203.localdomain sudo[75973]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cbdikqsgolwsjxculzkmlssueubuoyvx ; /usr/bin/python3
Feb 20 08:13:32 np0005625203.localdomain sudo[75973]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:13:32 np0005625203.localdomain python3[75975]: ansible-file Invoked with path=/opt/puppetlabs/facter state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:13:32 np0005625203.localdomain sudo[75973]: pam_unix(sudo:session): session closed for user root
Feb 20 08:13:32 np0005625203.localdomain sudo[75989]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-phliyphffrwciihznjcvppyfzlahdhec ; /usr/bin/python3
Feb 20 08:13:32 np0005625203.localdomain sudo[75989]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:13:33 np0005625203.localdomain sudo[75989]: pam_unix(sudo:session): session closed for user root
Feb 20 08:13:33 np0005625203.localdomain sudo[76078]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gapxhornlppzcxlysukzfzpxxxpilohd ; /usr/bin/python3
Feb 20 08:13:33 np0005625203.localdomain sudo[76078]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:13:33 np0005625203.localdomain python3[76080]: ansible-ansible.posix.synchronize Invoked with src=/opt/puppetlabs/ dest=/var/lib/container-puppet/puppetlabs/ _local_rsync_path=rsync _local_rsync_password=NOT_LOGGING_PARAMETER rsync_path=None delete=False _substitute_controller=False archive=True checksum=False compress=True existing_only=False dirs=False copy_links=False set_remote_user=True rsync_timeout=0 rsync_opts=[] ssh_connection_multiplexing=False partial=False verify_host=False mode=push dest_port=None private_key=None recursive=None links=None perms=None times=None owner=None group=None ssh_args=None link_dest=None
Feb 20 08:13:33 np0005625203.localdomain sudo[76078]: pam_unix(sudo:session): session closed for user root
Feb 20 08:13:33 np0005625203.localdomain sudo[76097]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eowzyiobyvyfvyvhnbwmjfxzsnbphpim ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:13:33 np0005625203.localdomain sudo[76097]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:13:34 np0005625203.localdomain python3[76099]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:13:34 np0005625203.localdomain sudo[76097]: pam_unix(sudo:session): session closed for user root
Feb 20 08:13:34 np0005625203.localdomain sudo[76113]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mocoizsyynyxdnnndbjqmssqiyrhcrxj ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:13:34 np0005625203.localdomain sudo[76113]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:13:34 np0005625203.localdomain sudo[76113]: pam_unix(sudo:session): session closed for user root
Feb 20 08:13:34 np0005625203.localdomain sudo[76129]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xsxkcsbrvuuonyvvdpoqqokraabmbtsr ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:13:34 np0005625203.localdomain sudo[76129]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:13:34 np0005625203.localdomain sshd[76132]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:13:34 np0005625203.localdomain python3[76131]: ansible-stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 20 08:13:35 np0005625203.localdomain sudo[76129]: pam_unix(sudo:session): session closed for user root
Feb 20 08:13:35 np0005625203.localdomain sudo[76180]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-udvamymgtciaitrtifpndnmzbvukexhr ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:13:35 np0005625203.localdomain sudo[76180]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:13:35 np0005625203.localdomain python3[76182]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-container-shutdown follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 08:13:35 np0005625203.localdomain sudo[76180]: pam_unix(sudo:session): session closed for user root
Feb 20 08:13:35 np0005625203.localdomain sudo[76198]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tsuemiwrfvymfhkshfbwdfxmlszzsnlr ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:13:35 np0005625203.localdomain sudo[76198]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:13:35 np0005625203.localdomain python3[76200]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-container-shutdown _original_basename=tripleo-container-shutdown recurse=False state=file path=/usr/libexec/tripleo-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:13:35 np0005625203.localdomain sudo[76198]: pam_unix(sudo:session): session closed for user root
Feb 20 08:13:36 np0005625203.localdomain sudo[76261]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uwlkpbwgyolyyfvkoxjhzcurycmziwdk ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:13:36 np0005625203.localdomain sudo[76261]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:13:36 np0005625203.localdomain python3[76263]: ansible-ansible.legacy.stat Invoked with path=/usr/libexec/tripleo-start-podman-container follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 08:13:36 np0005625203.localdomain sudo[76261]: pam_unix(sudo:session): session closed for user root
Feb 20 08:13:36 np0005625203.localdomain sudo[76279]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nsysdrwiqdjilvlncfwoxgpsltluyyeg ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:13:36 np0005625203.localdomain sudo[76279]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:13:36 np0005625203.localdomain python3[76281]: ansible-ansible.legacy.file Invoked with mode=0700 owner=root group=root dest=/usr/libexec/tripleo-start-podman-container _original_basename=tripleo-start-podman-container recurse=False state=file path=/usr/libexec/tripleo-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:13:36 np0005625203.localdomain sudo[76279]: pam_unix(sudo:session): session closed for user root
Feb 20 08:13:36 np0005625203.localdomain sudo[76341]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vdsppcdlczofnwyaqngqonngwybolxqn ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:13:36 np0005625203.localdomain sudo[76341]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:13:37 np0005625203.localdomain python3[76343]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/tripleo-container-shutdown.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 08:13:37 np0005625203.localdomain sudo[76341]: pam_unix(sudo:session): session closed for user root
Feb 20 08:13:37 np0005625203.localdomain sudo[76359]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fhwcjfntcvtviuqbtaxmmobwntyigket ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:13:37 np0005625203.localdomain sudo[76359]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:13:37 np0005625203.localdomain python3[76361]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/tripleo-container-shutdown.service _original_basename=tripleo-container-shutdown-service recurse=False state=file path=/usr/lib/systemd/system/tripleo-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:13:37 np0005625203.localdomain sudo[76359]: pam_unix(sudo:session): session closed for user root
Feb 20 08:13:37 np0005625203.localdomain sudo[76421]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nfpiliensiqymwjmvwuvaxctadezcwvk ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:13:37 np0005625203.localdomain sudo[76421]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:13:37 np0005625203.localdomain sshd[76132]: Invalid user install from 185.246.128.171 port 21909
Feb 20 08:13:37 np0005625203.localdomain python3[76423]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 08:13:37 np0005625203.localdomain sudo[76421]: pam_unix(sudo:session): session closed for user root
Feb 20 08:13:38 np0005625203.localdomain sudo[76439]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uhjmdxzekkgliyxribxyovyedhwjhpcm ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:13:38 np0005625203.localdomain sudo[76439]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:13:38 np0005625203.localdomain sshd[76132]: Disconnecting invalid user install 185.246.128.171 port 21909: Change of username or service not allowed: (install,ssh-connection) -> (eagle,ssh-connection) [preauth]
Feb 20 08:13:38 np0005625203.localdomain python3[76441]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset _original_basename=91-tripleo-container-shutdown-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-tripleo-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:13:38 np0005625203.localdomain sudo[76439]: pam_unix(sudo:session): session closed for user root
Feb 20 08:13:38 np0005625203.localdomain sudo[76469]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ymevmvzdgxuqwwbdtnbprnkhkueonrox ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:13:38 np0005625203.localdomain sudo[76469]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:13:38 np0005625203.localdomain python3[76471]: ansible-systemd Invoked with name=tripleo-container-shutdown state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 08:13:38 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 08:13:38 np0005625203.localdomain systemd-rc-local-generator[76495]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 08:13:38 np0005625203.localdomain systemd-sysv-generator[76500]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 08:13:38 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 08:13:38 np0005625203.localdomain sudo[76469]: pam_unix(sudo:session): session closed for user root
Feb 20 08:13:39 np0005625203.localdomain sudo[76556]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qbcurdmxivxfroconvdvurgvuxrgtyrp ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:13:39 np0005625203.localdomain sudo[76556]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:13:39 np0005625203.localdomain python3[76558]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system/netns-placeholder.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 08:13:39 np0005625203.localdomain sudo[76556]: pam_unix(sudo:session): session closed for user root
Feb 20 08:13:39 np0005625203.localdomain sudo[76574]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jdoyxxelcoeranwutmazvxevplcsvbsj ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:13:39 np0005625203.localdomain sudo[76574]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:13:39 np0005625203.localdomain python3[76576]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/usr/lib/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:13:39 np0005625203.localdomain sudo[76574]: pam_unix(sudo:session): session closed for user root
Feb 20 08:13:40 np0005625203.localdomain sudo[76636]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uenjkpkesewrjwneiouamnsevboxssav ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:13:40 np0005625203.localdomain sudo[76636]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:13:40 np0005625203.localdomain python3[76638]: ansible-ansible.legacy.stat Invoked with path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Feb 20 08:13:40 np0005625203.localdomain sudo[76636]: pam_unix(sudo:session): session closed for user root
Feb 20 08:13:40 np0005625203.localdomain sudo[76654]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hgelujcptutzpfekqlfymctdwaoysixl ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:13:40 np0005625203.localdomain sudo[76654]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:13:40 np0005625203.localdomain sshd[76657]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:13:40 np0005625203.localdomain python3[76656]: ansible-ansible.legacy.file Invoked with mode=0644 owner=root group=root dest=/usr/lib/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/usr/lib/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:13:40 np0005625203.localdomain sudo[76654]: pam_unix(sudo:session): session closed for user root
Feb 20 08:13:40 np0005625203.localdomain sudo[76685]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-twsqtbbzjpzvjbswlmxmnzndvefihkne ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:13:40 np0005625203.localdomain sudo[76685]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:13:41 np0005625203.localdomain python3[76687]: ansible-systemd Invoked with name=netns-placeholder state=started enabled=True daemon_reload=True daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 08:13:41 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 08:13:41 np0005625203.localdomain systemd-sysv-generator[76716]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 08:13:41 np0005625203.localdomain systemd-rc-local-generator[76713]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 08:13:41 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 08:13:41 np0005625203.localdomain systemd[1]: Starting Create netns directory...
Feb 20 08:13:41 np0005625203.localdomain systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Feb 20 08:13:41 np0005625203.localdomain systemd[1]: netns-placeholder.service: Deactivated successfully.
Feb 20 08:13:41 np0005625203.localdomain systemd[1]: Finished Create netns directory.
Feb 20 08:13:41 np0005625203.localdomain sudo[76685]: pam_unix(sudo:session): session closed for user root
Feb 20 08:13:41 np0005625203.localdomain sudo[76743]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rcmraogidocssvggsjbibwfkjklgqkol ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:13:41 np0005625203.localdomain sudo[76743]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:13:41 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:13:41 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:13:42 np0005625203.localdomain systemd[1]: tmp-crun.WkB2F0.mount: Deactivated successfully.
Feb 20 08:13:42 np0005625203.localdomain podman[76747]: 2026-02-20 08:13:42.038572382 +0000 UTC m=+0.084887884 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, container_name=iscsid, url=https://www.redhat.com, 
maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, build-date=2026-01-12T22:34:43Z, distribution-scope=public, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, name=rhosp-rhel9/openstack-iscsid, tcib_managed=true, version=17.1.13, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:34:43Z)
Feb 20 08:13:42 np0005625203.localdomain podman[76745]: 2026-02-20 08:13:42.047498661 +0000 UTC m=+0.090580131 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.buildah.version=1.41.5, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, build-date=2026-01-12T22:10:15Z, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1766032510, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-collectd-container, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git)
Feb 20 08:13:42 np0005625203.localdomain podman[76747]: 2026-02-20 08:13:42.072027181 +0000 UTC m=+0.118342663 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-iscsid, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, 
org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:34:43Z, version=17.1.13, batch=17.1_20260112.1, release=1766032510, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, container_name=iscsid)
Feb 20 08:13:42 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:13:42 np0005625203.localdomain podman[76745]: 2026-02-20 08:13:42.086811174 +0000 UTC m=+0.129892654 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, name=rhosp-rhel9/openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.13)
Feb 20 08:13:42 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:13:42 np0005625203.localdomain python3[76746]: ansible-container_puppet_config Invoked with update_config_hash_only=True no_archive=True check_mode=False config_vol_prefix=/var/lib/config-data debug=False net_host=True puppet_config= short_hostname= step=6
Feb 20 08:13:42 np0005625203.localdomain sudo[76743]: pam_unix(sudo:session): session closed for user root
Feb 20 08:13:42 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:13:42 np0005625203.localdomain sudo[76797]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-agkzkdpzzchciuvivvhlmhxxnzkkiwnx ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:13:42 np0005625203.localdomain sudo[76797]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:13:42 np0005625203.localdomain podman[76799]: 2026-02-20 08:13:42.528835128 +0000 UTC m=+0.082429257 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-qdrouterd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, container_name=metrics_qdr, tcib_managed=true, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, config_id=tripleo_step1, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, build-date=2026-01-12T22:10:14Z)
Feb 20 08:13:42 np0005625203.localdomain podman[76799]: 2026-02-20 08:13:42.752650878 +0000 UTC m=+0.306245007 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.13, vcs-type=git, tcib_managed=true, container_name=metrics_qdr, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
name=rhosp-rhel9/openstack-qdrouterd, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.created=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, release=1766032510, batch=17.1_20260112.1, distribution-scope=public, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, architecture=x86_64)
Feb 20 08:13:42 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:13:43 np0005625203.localdomain sudo[76797]: pam_unix(sudo:session): session closed for user root
Feb 20 08:13:43 np0005625203.localdomain sudo[76869]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mzadwtvtwdrllwiiiohqaughbgizysfg ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:13:43 np0005625203.localdomain sudo[76869]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:13:43 np0005625203.localdomain sshd[76657]: Invalid user eagle from 185.246.128.171 port 11001
Feb 20 08:13:43 np0005625203.localdomain sshd[76872]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:13:43 np0005625203.localdomain python3[76871]: ansible-tripleo_container_manage Invoked with config_id=tripleo_step5 config_dir=/var/lib/tripleo-config/container-startup-config/step_5 config_patterns=*.json config_overrides={} concurrency=5 log_base_path=/var/log/containers/stdouts debug=False
Feb 20 08:13:44 np0005625203.localdomain sshd[76872]: Received disconnect from 147.135.114.8 port 54402:11: Bye Bye [preauth]
Feb 20 08:13:44 np0005625203.localdomain sshd[76872]: Disconnected from authenticating user root 147.135.114.8 port 54402 [preauth]
Feb 20 08:13:44 np0005625203.localdomain podman[76912]: 2026-02-20 08:13:44.297267475 +0000 UTC m=+0.073145396 container create 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.41.5, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step5, tcib_managed=true, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, name=rhosp-rhel9/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute)
Feb 20 08:13:44 np0005625203.localdomain systemd[1]: Started libpod-conmon-31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.scope.
Feb 20 08:13:44 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 08:13:44 np0005625203.localdomain podman[76912]: 2026-02-20 08:13:44.257664953 +0000 UTC m=+0.033542854 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Feb 20 08:13:44 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f79d573e93a443ab224a39aaf10649c6676b1bd955bb2a70f97bd5d3d23c519b/merged/var/log/nova supports timestamps until 2038 (0x7fffffff)
Feb 20 08:13:44 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f79d573e93a443ab224a39aaf10649c6676b1bd955bb2a70f97bd5d3d23c519b/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Feb 20 08:13:44 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f79d573e93a443ab224a39aaf10649c6676b1bd955bb2a70f97bd5d3d23c519b/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Feb 20 08:13:44 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f79d573e93a443ab224a39aaf10649c6676b1bd955bb2a70f97bd5d3d23c519b/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Feb 20 08:13:44 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f79d573e93a443ab224a39aaf10649c6676b1bd955bb2a70f97bd5d3d23c519b/merged/var/lib/kolla/config_files/src-ceph supports timestamps until 2038 (0x7fffffff)
Feb 20 08:13:44 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:13:44 np0005625203.localdomain podman[76912]: 2026-02-20 08:13:44.395180546 +0000 UTC m=+0.171058487 container init 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, release=1766032510, io.buildah.version=1.41.5, vcs-type=git, build-date=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-nova-compute, tcib_managed=true, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z)
Feb 20 08:13:44 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:13:44 np0005625203.localdomain sudo[76933]:     nova : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Feb 20 08:13:44 np0005625203.localdomain systemd-logind[759]: Existing logind session ID 28 used by new audit session, ignoring.
Feb 20 08:13:44 np0005625203.localdomain podman[76912]: 2026-02-20 08:13:44.435619495 +0000 UTC m=+0.211497376 container start 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, container_name=nova_compute, release=1766032510, config_id=tripleo_step5, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, version=17.1.13, tcib_managed=true)
Feb 20 08:13:44 np0005625203.localdomain python3[76871]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_compute --conmon-pidfile /run/nova_compute.pid --detach=True --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env LIBGUESTFS_BACKEND=direct --env TRIPLEO_CONFIG_HASH=ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868 --healthcheck-command /openstack/healthcheck 5672 --ipc host --label config_id=tripleo_step5 --label container_name=nova_compute --label managed_by=tripleo_ansible --label config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_compute.log --network host --privileged=True --ulimit nofile=131072 --ulimit memlock=67108864 --user nova --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/log/containers/nova:/var/log/nova --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro --volume /var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro --volume /var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z --volume /dev:/dev --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /run/nova:/run/nova:z --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /var/lib/libvirt:/var/lib/libvirt:shared --volume /sys/class/net:/sys/class/net --volume /sys/bus/pci:/sys/bus/pci --volume /boot:/boot:ro --volume /var/lib/nova:/var/lib/nova:shared registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Feb 20 08:13:44 np0005625203.localdomain systemd[1]: Created slice User Slice of UID 0.
Feb 20 08:13:44 np0005625203.localdomain systemd[1]: Starting User Runtime Directory /run/user/0...
Feb 20 08:13:44 np0005625203.localdomain systemd[1]: Finished User Runtime Directory /run/user/0.
Feb 20 08:13:44 np0005625203.localdomain systemd[1]: Starting User Manager for UID 0...
Feb 20 08:13:44 np0005625203.localdomain systemd[76947]: pam_unix(systemd-user:session): session opened for user root(uid=0) by (uid=0)
Feb 20 08:13:44 np0005625203.localdomain podman[76934]: 2026-02-20 08:13:44.540609917 +0000 UTC m=+0.093635767 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=starting, url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, architecture=x86_64, batch=17.1_20260112.1, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, container_name=nova_compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, release=1766032510, io.buildah.version=1.41.5, config_id=tripleo_step5, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:13:44 np0005625203.localdomain podman[76934]: 2026-02-20 08:13:44.59936521 +0000 UTC m=+0.152391100 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-compute, distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, container_name=nova_compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, tcib_managed=true, version=17.1.13)
Feb 20 08:13:44 np0005625203.localdomain podman[76934]: unhealthy
Feb 20 08:13:44 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:13:44 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:13:44 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:13:44 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Failed with result 'exit-code'.
Feb 20 08:13:44 np0005625203.localdomain systemd[76947]: Queued start job for default target Main User Target.
Feb 20 08:13:44 np0005625203.localdomain systemd[76947]: Created slice User Application Slice.
Feb 20 08:13:44 np0005625203.localdomain systemd[76947]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Feb 20 08:13:44 np0005625203.localdomain systemd[76947]: Started Daily Cleanup of User's Temporary Directories.
Feb 20 08:13:44 np0005625203.localdomain systemd[76947]: Reached target Paths.
Feb 20 08:13:44 np0005625203.localdomain systemd[76947]: Reached target Timers.
Feb 20 08:13:44 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:13:44 np0005625203.localdomain systemd[76947]: Starting D-Bus User Message Bus Socket...
Feb 20 08:13:44 np0005625203.localdomain systemd[76947]: Starting Create User's Volatile Files and Directories...
Feb 20 08:13:44 np0005625203.localdomain systemd[76947]: Finished Create User's Volatile Files and Directories.
Feb 20 08:13:44 np0005625203.localdomain systemd[76947]: Listening on D-Bus User Message Bus Socket.
Feb 20 08:13:44 np0005625203.localdomain systemd[76947]: Reached target Sockets.
Feb 20 08:13:44 np0005625203.localdomain systemd[76947]: Reached target Basic System.
Feb 20 08:13:44 np0005625203.localdomain systemd[76947]: Reached target Main User Target.
Feb 20 08:13:44 np0005625203.localdomain systemd[76947]: Startup finished in 198ms.
Feb 20 08:13:44 np0005625203.localdomain systemd[1]: Started User Manager for UID 0.
Feb 20 08:13:44 np0005625203.localdomain systemd[1]: Started Session c10 of User root.
Feb 20 08:13:44 np0005625203.localdomain sudo[76933]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42436)
Feb 20 08:13:44 np0005625203.localdomain podman[76988]: 2026-02-20 08:13:44.742390396 +0000 UTC m=+0.114777161 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:15Z, build-date=2026-01-12T22:10:15Z, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, version=17.1.13, distribution-scope=public, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, 
maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, config_id=tripleo_step4, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=logrotate_crond, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, tcib_managed=true)
Feb 20 08:13:44 np0005625203.localdomain podman[76989]: 2026-02-20 08:13:44.769726473 +0000 UTC m=+0.143039967 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.13, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:07:30Z, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., release=1766032510, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi)
Feb 20 08:13:44 np0005625203.localdomain podman[76988]: 2026-02-20 08:13:44.804256216 +0000 UTC m=+0.176642981 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, name=rhosp-rhel9/openstack-cron, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, release=1766032510, url=https://www.redhat.com, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, distribution-scope=public, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z)
Feb 20 08:13:44 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:13:44 np0005625203.localdomain podman[76989]: 2026-02-20 08:13:44.815388805 +0000 UTC m=+0.188702289 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, release=1766032510, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:07:30Z, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:13:44 np0005625203.localdomain sudo[76933]: pam_unix(sudo:session): session closed for user root
Feb 20 08:13:44 np0005625203.localdomain sshd[76657]: Disconnecting invalid user eagle 185.246.128.171 port 11001: Change of username or service not allowed: (eagle,ssh-connection) -> (off,ssh-connection) [preauth]
Feb 20 08:13:44 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:13:44 np0005625203.localdomain systemd[1]: session-c10.scope: Deactivated successfully.
Feb 20 08:13:44 np0005625203.localdomain podman[77007]: 2026-02-20 08:13:44.896934783 +0000 UTC m=+0.190404433 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, 
org.opencontainers.image.created=2026-01-12T23:07:47Z, vendor=Red Hat, Inc., vcs-type=git, io.buildah.version=1.41.5, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:07:47Z, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, url=https://www.redhat.com, version=17.1.13, maintainer=OpenStack TripleO Team, distribution-scope=public, architecture=x86_64, name=rhosp-rhel9/openstack-ceilometer-compute)
Feb 20 08:13:44 np0005625203.localdomain podman[77007]: 2026-02-20 08:13:44.954398356 +0000 UTC m=+0.247868006 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, vcs-type=git, url=https://www.redhat.com, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp-rhel9/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, vendor=Red Hat, Inc., io.openshift.expose-services=, batch=17.1_20260112.1, build-date=2026-01-12T23:07:47Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:13:44 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:13:44 np0005625203.localdomain podman[77084]: 2026-02-20 08:13:44.977671705 +0000 UTC m=+0.089865469 container create a18e75c13e54ab0acc4d393db54123059f2764f2c6d4403946462f60d189712f (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, container_name=nova_wait_for_compute_service, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, build-date=2026-01-12T23:32:04Z, release=1766032510, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, distribution-scope=public, vcs-type=git, io.buildah.version=1.41.5, com.redhat.component=openstack-nova-compute-container)
Feb 20 08:13:45 np0005625203.localdomain systemd[1]: Started libpod-conmon-a18e75c13e54ab0acc4d393db54123059f2764f2c6d4403946462f60d189712f.scope.
Feb 20 08:13:45 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 08:13:45 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5d417fdd363e485b6316b065395adc69531d816fbea7fef0eb296ef13ea44a2/merged/container-config-scripts supports timestamps until 2038 (0x7fffffff)
Feb 20 08:13:45 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5d417fdd363e485b6316b065395adc69531d816fbea7fef0eb296ef13ea44a2/merged/var/log/nova supports timestamps until 2038 (0x7fffffff)
Feb 20 08:13:45 np0005625203.localdomain podman[77084]: 2026-02-20 08:13:45.032623459 +0000 UTC m=+0.144817253 container init a18e75c13e54ab0acc4d393db54123059f2764f2c6d4403946462f60d189712f (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:32:04Z, tcib_managed=true, io.buildah.version=1.41.5, release=1766032510, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:32:04Z, io.openshift.expose-services=, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, url=https://www.redhat.com, container_name=nova_wait_for_compute_service, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, config_id=tripleo_step5, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe)
Feb 20 08:13:45 np0005625203.localdomain podman[77084]: 2026-02-20 08:13:44.934867803 +0000 UTC m=+0.047061617 image pull  registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Feb 20 08:13:45 np0005625203.localdomain podman[77084]: 2026-02-20 08:13:45.043507211 +0000 UTC m=+0.155700995 container start a18e75c13e54ab0acc4d393db54123059f2764f2c6d4403946462f60d189712f (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, name=rhosp-rhel9/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, architecture=x86_64, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, io.openshift.expose-services=, version=17.1.13, config_id=tripleo_step5, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', 
'/var/lib/container-config-scripts:/container-config-scripts']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, tcib_managed=true, container_name=nova_wait_for_compute_service, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:32:04Z)
Feb 20 08:13:45 np0005625203.localdomain podman[77084]: 2026-02-20 08:13:45.043829391 +0000 UTC m=+0.156023215 container attach a18e75c13e54ab0acc4d393db54123059f2764f2c6d4403946462f60d189712f (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_wait_for_compute_service, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, architecture=x86_64, config_id=tripleo_step5, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, url=https://www.redhat.com, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe)
Feb 20 08:13:45 np0005625203.localdomain sudo[77118]:     nova : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Feb 20 08:13:45 np0005625203.localdomain sudo[77118]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42436)
Feb 20 08:13:45 np0005625203.localdomain sudo[77118]: pam_unix(sudo:session): session closed for user root
Feb 20 08:13:45 np0005625203.localdomain sshd[77122]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:13:46 np0005625203.localdomain sshd[77122]: Invalid user off from 185.246.128.171 port 48694
Feb 20 08:13:47 np0005625203.localdomain sshd[77122]: Disconnecting invalid user off 185.246.128.171 port 48694: Change of username or service not allowed: (off,ssh-connection) -> (hduser,ssh-connection) [preauth]
Feb 20 08:13:47 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:13:47 np0005625203.localdomain podman[77124]: 2026-02-20 08:13:47.111559204 +0000 UTC m=+0.077070559 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, config_id=tripleo_step4, build-date=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, architecture=x86_64, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Feb 20 08:13:47 np0005625203.localdomain sshd[77146]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:13:47 np0005625203.localdomain podman[77124]: 2026-02-20 08:13:47.511315491 +0000 UTC m=+0.476826816 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, io.openshift.expose-services=, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., release=1766032510)
Feb 20 08:13:47 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:13:48 np0005625203.localdomain sshd[77146]: Invalid user hduser from 185.246.128.171 port 26109
Feb 20 08:13:48 np0005625203.localdomain sshd[77146]: Disconnecting invalid user hduser 185.246.128.171 port 26109: Change of username or service not allowed: (hduser,ssh-connection) -> (Y,ssh-connection) [preauth]
Feb 20 08:13:49 np0005625203.localdomain sshd[77149]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:13:49 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:13:49 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:13:49 np0005625203.localdomain podman[77150]: 2026-02-20 08:13:49.801731979 +0000 UTC m=+0.077523783 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.13 
17.1_20260112.1, config_id=tripleo_step4, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:56:19Z, tcib_managed=true, container_name=ovn_metadata_agent, version=17.1.13, vcs-type=git, build-date=2026-01-12T22:56:19Z, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, vendor=Red Hat, Inc., release=1766032510)
Feb 20 08:13:49 np0005625203.localdomain podman[77150]: 2026-02-20 08:13:49.854631179 +0000 UTC m=+0.130422992 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1766032510, config_id=tripleo_step4, url=https://www.redhat.com, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, tcib_managed=true, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20260112.1, container_name=ovn_metadata_agent, build-date=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vcs-type=git, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:56:19Z, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team)
Feb 20 08:13:49 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Deactivated successfully.
Feb 20 08:13:49 np0005625203.localdomain podman[77151]: 2026-02-20 08:13:49.859319336 +0000 UTC m=+0.136888284 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, container_name=ovn_controller, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, build-date=2026-01-12T22:36:40Z, name=rhosp-rhel9/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-type=git, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1766032510, batch=17.1_20260112.1, tcib_managed=true, 
config_id=tripleo_step4, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., io.buildah.version=1.41.5)
Feb 20 08:13:49 np0005625203.localdomain podman[77151]: 2026-02-20 08:13:49.945375475 +0000 UTC m=+0.222944413 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20260112.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_id=tripleo_step4, io.buildah.version=1.41.5, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, managed_by=tripleo_ansible, release=1766032510, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.created=2026-01-12T22:36:40Z, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, container_name=ovn_controller, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ovn-controller, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller)
Feb 20 08:13:49 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Deactivated successfully.
Feb 20 08:13:52 np0005625203.localdomain sshd[77149]: Invalid user Y from 185.246.128.171 port 62450
Feb 20 08:13:52 np0005625203.localdomain sshd[77149]: Disconnecting invalid user Y 185.246.128.171 port 62450: Change of username or service not allowed: (Y,ssh-connection) -> (czr,ssh-connection) [preauth]
Feb 20 08:13:54 np0005625203.localdomain sshd[77199]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:13:54 np0005625203.localdomain systemd[1]: Stopping User Manager for UID 0...
Feb 20 08:13:54 np0005625203.localdomain systemd[76947]: Activating special unit Exit the Session...
Feb 20 08:13:54 np0005625203.localdomain systemd[76947]: Stopped target Main User Target.
Feb 20 08:13:54 np0005625203.localdomain systemd[76947]: Stopped target Basic System.
Feb 20 08:13:54 np0005625203.localdomain systemd[76947]: Stopped target Paths.
Feb 20 08:13:54 np0005625203.localdomain systemd[76947]: Stopped target Sockets.
Feb 20 08:13:54 np0005625203.localdomain systemd[76947]: Stopped target Timers.
Feb 20 08:13:54 np0005625203.localdomain systemd[76947]: Stopped Daily Cleanup of User's Temporary Directories.
Feb 20 08:13:54 np0005625203.localdomain systemd[76947]: Closed D-Bus User Message Bus Socket.
Feb 20 08:13:54 np0005625203.localdomain systemd[76947]: Stopped Create User's Volatile Files and Directories.
Feb 20 08:13:54 np0005625203.localdomain systemd[76947]: Removed slice User Application Slice.
Feb 20 08:13:54 np0005625203.localdomain systemd[76947]: Reached target Shutdown.
Feb 20 08:13:54 np0005625203.localdomain systemd[76947]: Finished Exit the Session.
Feb 20 08:13:54 np0005625203.localdomain systemd[76947]: Reached target Exit the Session.
Feb 20 08:13:54 np0005625203.localdomain systemd[1]: user@0.service: Deactivated successfully.
Feb 20 08:13:54 np0005625203.localdomain systemd[1]: Stopped User Manager for UID 0.
Feb 20 08:13:54 np0005625203.localdomain systemd[1]: Stopping User Runtime Directory /run/user/0...
Feb 20 08:13:54 np0005625203.localdomain systemd[1]: run-user-0.mount: Deactivated successfully.
Feb 20 08:13:54 np0005625203.localdomain systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Feb 20 08:13:54 np0005625203.localdomain systemd[1]: Stopped User Runtime Directory /run/user/0.
Feb 20 08:13:54 np0005625203.localdomain systemd[1]: Removed slice User Slice of UID 0.
Feb 20 08:13:56 np0005625203.localdomain sshd[77199]: Invalid user czr from 185.246.128.171 port 29040
Feb 20 08:13:57 np0005625203.localdomain sshd[77199]: Disconnecting invalid user czr 185.246.128.171 port 29040: Change of username or service not allowed: (czr,ssh-connection) -> (nagios,ssh-connection) [preauth]
Feb 20 08:13:58 np0005625203.localdomain sshd[77202]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:13:59 np0005625203.localdomain sshd[77204]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:13:59 np0005625203.localdomain sshd[77204]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:14:00 np0005625203.localdomain sshd[77202]: Invalid user nagios from 185.246.128.171 port 49835
Feb 20 08:14:00 np0005625203.localdomain sshd[77202]: Disconnecting invalid user nagios 185.246.128.171 port 49835: Change of username or service not allowed: (nagios,ssh-connection) -> (gits,ssh-connection) [preauth]
Feb 20 08:14:01 np0005625203.localdomain sshd[77206]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:14:02 np0005625203.localdomain sshd[77206]: Invalid user gits from 185.246.128.171 port 53973
Feb 20 08:14:03 np0005625203.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Feb 20 08:14:03 np0005625203.localdomain recover_tripleo_nova_virtqemud[77209]: 62505
Feb 20 08:14:03 np0005625203.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Feb 20 08:14:03 np0005625203.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Feb 20 08:14:03 np0005625203.localdomain sshd[77206]: Disconnecting invalid user gits 185.246.128.171 port 53973: Change of username or service not allowed: (gits,ssh-connection) -> (mohammad,ssh-connection) [preauth]
Feb 20 08:14:04 np0005625203.localdomain sshd[77210]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:14:05 np0005625203.localdomain sshd[77210]: Invalid user mohammad from 185.246.128.171 port 47081
Feb 20 08:14:06 np0005625203.localdomain sshd[77210]: Disconnecting invalid user mohammad 185.246.128.171 port 47081: Change of username or service not allowed: (mohammad,ssh-connection) -> (theta,ssh-connection) [preauth]
Feb 20 08:14:08 np0005625203.localdomain sshd[77212]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:14:10 np0005625203.localdomain sshd[77212]: Invalid user theta from 185.246.128.171 port 23302
Feb 20 08:14:10 np0005625203.localdomain sshd[77212]: Disconnecting invalid user theta 185.246.128.171 port 23302: Change of username or service not allowed: (theta,ssh-connection) -> (hasan,ssh-connection) [preauth]
Feb 20 08:14:11 np0005625203.localdomain sshd[77214]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:14:12 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:14:12 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:14:12 np0005625203.localdomain systemd[1]: tmp-crun.hSEJtN.mount: Deactivated successfully.
Feb 20 08:14:12 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:14:12 np0005625203.localdomain sudo[77237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:14:12 np0005625203.localdomain sudo[77237]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:14:12 np0005625203.localdomain sudo[77237]: pam_unix(sudo:session): session closed for user root
Feb 20 08:14:12 np0005625203.localdomain podman[77216]: 2026-02-20 08:14:12.829010447 +0000 UTC m=+0.146297519 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, io.openshift.expose-services=, release=1766032510, vcs-type=git, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, name=rhosp-rhel9/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com)
Feb 20 08:14:12 np0005625203.localdomain podman[77217]: 2026-02-20 08:14:12.801351169 +0000 UTC m=+0.119027543 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:34:43Z, vendor=Red Hat, Inc., batch=17.1_20260112.1, io.buildah.version=1.41.5, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:34:43Z, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, architecture=x86_64, config_id=tripleo_step3, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, vcs-ref=705339545363fec600102567c4e923938e0f43b3, name=rhosp-rhel9/openstack-iscsid, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:14:12 np0005625203.localdomain sudo[77265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 08:14:12 np0005625203.localdomain sudo[77265]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:14:12 np0005625203.localdomain podman[77216]: 2026-02-20 08:14:12.867240406 +0000 UTC m=+0.184527428 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., 
cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:15Z, tcib_managed=true, version=17.1.13, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-collectd, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:14:12 np0005625203.localdomain podman[77217]: 2026-02-20 08:14:12.880344737 +0000 UTC m=+0.198021121 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.5, io.openshift.expose-services=, architecture=x86_64, build-date=2026-01-12T22:34:43Z, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, 
vcs-ref=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, name=rhosp-rhel9/openstack-iscsid, container_name=iscsid, batch=17.1_20260112.1, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:34:43Z, release=1766032510)
Feb 20 08:14:12 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:14:12 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:14:12 np0005625203.localdomain podman[77256]: 2026-02-20 08:14:12.939008117 +0000 UTC m=+0.136565845 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, build-date=2026-01-12T22:10:14Z, config_id=tripleo_step1, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, io.buildah.version=1.41.5, release=1766032510, tcib_managed=true, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 
qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.created=2026-01-12T22:10:14Z, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd)
Feb 20 08:14:13 np0005625203.localdomain podman[77256]: 2026-02-20 08:14:13.140973711 +0000 UTC m=+0.338531429 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, batch=17.1_20260112.1, vcs-type=git, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container)
Feb 20 08:14:13 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:14:13 np0005625203.localdomain sshd[77214]: Invalid user hasan from 185.246.128.171 port 30365
Feb 20 08:14:13 np0005625203.localdomain sshd[77214]: Disconnecting invalid user hasan 185.246.128.171 port 30365: Change of username or service not allowed: (hasan,ssh-connection) -> (siapbot,ssh-connection) [preauth]
Feb 20 08:14:13 np0005625203.localdomain sudo[77265]: pam_unix(sudo:session): session closed for user root
Feb 20 08:14:13 np0005625203.localdomain sshd[77342]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:14:14 np0005625203.localdomain sudo[77343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 08:14:14 np0005625203.localdomain sudo[77343]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:14:14 np0005625203.localdomain sudo[77343]: pam_unix(sudo:session): session closed for user root
Feb 20 08:14:14 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:14:14 np0005625203.localdomain podman[77359]: 2026-02-20 08:14:14.769392076 +0000 UTC m=+0.087225077 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=starting, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, distribution-scope=public, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step5, managed_by=tripleo_ansible, vendor=Red Hat, Inc.)
Feb 20 08:14:14 np0005625203.localdomain podman[77359]: 2026-02-20 08:14:14.824071131 +0000 UTC m=+0.141904132 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_id=tripleo_step5, vcs-type=git, build-date=2026-01-12T23:32:04Z, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, container_name=nova_compute, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container)
Feb 20 08:14:14 np0005625203.localdomain podman[77359]: unhealthy
Feb 20 08:14:14 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:14:14 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:14:14 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:14:14 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Failed with result 'exit-code'.
Feb 20 08:14:14 np0005625203.localdomain podman[77380]: 2026-02-20 08:14:14.941511974 +0000 UTC m=+0.089549969 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, io.openshift.expose-services=, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, version=17.1.13, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, release=1766032510, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64)
Feb 20 08:14:14 np0005625203.localdomain podman[77380]: 2026-02-20 08:14:14.949757663 +0000 UTC m=+0.097795678 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, io.buildah.version=1.41.5, architecture=x86_64, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, release=1766032510, io.openshift.expose-services=, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, konflux.additional-tags=17.1.13 
17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-cron-container, distribution-scope=public, version=17.1.13, name=rhosp-rhel9/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 cron)
Feb 20 08:14:14 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:14:14 np0005625203.localdomain systemd[1]: tmp-crun.cNXBr0.mount: Deactivated successfully.
Feb 20 08:14:14 np0005625203.localdomain podman[77381]: 2026-02-20 08:14:14.997458529 +0000 UTC m=+0.140785786 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp-rhel9/openstack-ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20260112.1, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2026-01-12T23:07:30Z, version=17.1.13, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:14:15 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:14:15 np0005625203.localdomain podman[77381]: 2026-02-20 08:14:15.03129364 +0000 UTC m=+0.174620867 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20260112.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.5, io.openshift.expose-services=, 
vcs-type=git, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, config_id=tripleo_step4)
Feb 20 08:14:15 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:14:15 np0005625203.localdomain podman[77418]: 2026-02-20 08:14:15.101156081 +0000 UTC m=+0.081838647 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, architecture=x86_64, name=rhosp-rhel9/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, io.buildah.version=1.41.5, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:47Z, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:07:47Z, tcib_managed=true, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20260112.1, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Feb 20 08:14:15 np0005625203.localdomain podman[77418]: 2026-02-20 08:14:15.134307131 +0000 UTC m=+0.114989717 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:07:47Z, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-type=git, version=17.1.13, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2026-01-12T23:07:47Z, distribution-scope=public, tcib_managed=true, url=https://www.redhat.com, container_name=ceilometer_agent_compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06)
Feb 20 08:14:15 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:14:15 np0005625203.localdomain sshd[77342]: Invalid user siapbot from 185.246.128.171 port 25121
Feb 20 08:14:16 np0005625203.localdomain sshd[77342]: Disconnecting invalid user siapbot 185.246.128.171 port 25121: Change of username or service not allowed: (siapbot,ssh-connection) -> (ruoyi,ssh-connection) [preauth]
Feb 20 08:14:16 np0005625203.localdomain sshd[77454]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:14:17 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:14:17 np0005625203.localdomain podman[77456]: 2026-02-20 08:14:17.775088478 +0000 UTC m=+0.094486455 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.created=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 
nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, maintainer=OpenStack TripleO Team)
Feb 20 08:14:18 np0005625203.localdomain podman[77456]: 2026-02-20 08:14:18.138375482 +0000 UTC m=+0.457773409 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2026-01-12T23:32:04Z, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.buildah.version=1.41.5, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
version=17.1.13, managed_by=tripleo_ansible, batch=17.1_20260112.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, container_name=nova_migration_target, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-nova-compute, maintainer=OpenStack TripleO Team, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Feb 20 08:14:18 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:14:18 np0005625203.localdomain sshd[77454]: Invalid user ruoyi from 185.246.128.171 port 28941
Feb 20 08:14:18 np0005625203.localdomain sshd[77454]: Disconnecting invalid user ruoyi 185.246.128.171 port 28941: Change of username or service not allowed: (ruoyi,ssh-connection) -> (tmax,ssh-connection) [preauth]
Feb 20 08:14:19 np0005625203.localdomain sshd[77479]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:14:20 np0005625203.localdomain sshd[77479]: Invalid user tmax from 185.246.128.171 port 21283
Feb 20 08:14:20 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:14:20 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:14:20 np0005625203.localdomain systemd[1]: tmp-crun.ULT76l.mount: Deactivated successfully.
Feb 20 08:14:20 np0005625203.localdomain podman[77481]: 2026-02-20 08:14:20.367229369 +0000 UTC m=+0.097475719 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:56:19Z, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, container_name=ovn_metadata_agent, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.5, vendor=Red Hat, Inc., config_id=tripleo_step4, build-date=2026-01-12T22:56:19Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, url=https://www.redhat.com, vcs-type=git, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible)
Feb 20 08:14:20 np0005625203.localdomain systemd[1]: tmp-crun.tLNs39.mount: Deactivated successfully.
Feb 20 08:14:20 np0005625203.localdomain podman[77482]: 2026-02-20 08:14:20.415782641 +0000 UTC m=+0.144430340 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, distribution-scope=public, architecture=x86_64, io.buildah.version=1.41.5, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1766032510, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:36:40Z, url=https://www.redhat.com, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc., build-date=2026-01-12T22:36:40Z, maintainer=OpenStack TripleO Team, 
managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true)
Feb 20 08:14:20 np0005625203.localdomain podman[77481]: 2026-02-20 08:14:20.435991915 +0000 UTC m=+0.166238195 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, release=1766032510, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.openshift.expose-services=, managed_by=tripleo_ansible, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, batch=17.1_20260112.1, build-date=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, container_name=ovn_metadata_agent, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn)
Feb 20 08:14:20 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Deactivated successfully.
Feb 20 08:14:20 np0005625203.localdomain podman[77482]: 2026-02-20 08:14:20.463275491 +0000 UTC m=+0.191923170 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.openshift.expose-services=, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, managed_by=tripleo_ansible, build-date=2026-01-12T22:36:40Z, maintainer=OpenStack TripleO Team, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, name=rhosp-rhel9/openstack-ovn-controller, architecture=x86_64, batch=17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, url=https://www.redhat.com, io.buildah.version=1.41.5, distribution-scope=public, config_id=tripleo_step4)
Feb 20 08:14:20 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Deactivated successfully.
Feb 20 08:14:21 np0005625203.localdomain sshd[77479]: Disconnecting invalid user tmax 185.246.128.171 port 21283: Change of username or service not allowed: (tmax,ssh-connection) -> (pablo,ssh-connection) [preauth]
Feb 20 08:14:21 np0005625203.localdomain sshd[77529]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:14:23 np0005625203.localdomain sshd[77529]: Invalid user pablo from 185.246.128.171 port 17245
Feb 20 08:14:25 np0005625203.localdomain sshd[77529]: Disconnecting invalid user pablo 185.246.128.171 port 17245: Change of username or service not allowed: (pablo,ssh-connection) -> (bkp,ssh-connection) [preauth]
Feb 20 08:14:25 np0005625203.localdomain sshd[77531]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:14:28 np0005625203.localdomain sshd[77531]: Invalid user bkp from 185.246.128.171 port 49233
Feb 20 08:14:28 np0005625203.localdomain sshd[77531]: Disconnecting invalid user bkp 185.246.128.171 port 49233: Change of username or service not allowed: (bkp,ssh-connection) -> (charles,ssh-connection) [preauth]
Feb 20 08:14:29 np0005625203.localdomain sshd[77533]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:14:31 np0005625203.localdomain sshd[77533]: Invalid user charles from 185.246.128.171 port 8066
Feb 20 08:14:32 np0005625203.localdomain sshd[77533]: Disconnecting invalid user charles 185.246.128.171 port 8066: Change of username or service not allowed: (charles,ssh-connection) -> (Nova,ssh-connection) [preauth]
Feb 20 08:14:32 np0005625203.localdomain sshd[77535]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:14:33 np0005625203.localdomain sshd[77535]: Invalid user Nova from 185.246.128.171 port 26402
Feb 20 08:14:34 np0005625203.localdomain sshd[77535]: Disconnecting invalid user Nova 185.246.128.171 port 26402: Change of username or service not allowed: (Nova,ssh-connection) -> (rafael,ssh-connection) [preauth]
Feb 20 08:14:35 np0005625203.localdomain sshd[77537]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:14:39 np0005625203.localdomain sshd[77537]: Invalid user rafael from 185.246.128.171 port 27560
Feb 20 08:14:39 np0005625203.localdomain sshd[77537]: Disconnecting invalid user rafael 185.246.128.171 port 27560: Change of username or service not allowed: (rafael,ssh-connection) -> (user01,ssh-connection) [preauth]
Feb 20 08:14:40 np0005625203.localdomain sshd[77539]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:14:41 np0005625203.localdomain sshd[77539]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:14:41 np0005625203.localdomain sshd[77541]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:14:43 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:14:43 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:14:43 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:14:43 np0005625203.localdomain podman[77544]: 2026-02-20 08:14:43.758861928 +0000 UTC m=+0.070343437 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1766032510, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, container_name=iscsid, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:34:43Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.expose-services=, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, name=rhosp-rhel9/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, vcs-type=git, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step3, version=17.1.13)
Feb 20 08:14:43 np0005625203.localdomain podman[77544]: 2026-02-20 08:14:43.793976818 +0000 UTC m=+0.105458357 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://www.redhat.com, io.openshift.expose-services=, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, maintainer=OpenStack TripleO Team, tcib_managed=true, name=rhosp-rhel9/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, batch=17.1_20260112.1, version=17.1.13, vcs-ref=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, org.opencontainers.image.created=2026-01-12T22:34:43Z, config_id=tripleo_step3)
Feb 20 08:14:43 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:14:43 np0005625203.localdomain podman[77543]: 2026-02-20 08:14:43.882896788 +0000 UTC m=+0.190621160 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, vcs-type=git, config_id=tripleo_step3, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp-rhel9/openstack-collectd, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee)
Feb 20 08:14:43 np0005625203.localdomain systemd[1]: tmp-crun.W2ADGH.mount: Deactivated successfully.
Feb 20 08:14:43 np0005625203.localdomain podman[77543]: 2026-02-20 08:14:43.932122802 +0000 UTC m=+0.239847174 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, vcs-type=git, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp-rhel9/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, version=17.1.13, container_name=collectd, managed_by=tripleo_ansible, config_id=tripleo_step3, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public)
Feb 20 08:14:43 np0005625203.localdomain podman[77545]: 2026-02-20 08:14:43.939625087 +0000 UTC m=+0.245835871 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, architecture=x86_64, container_name=metrics_qdr, org.opencontainers.image.created=2026-01-12T22:10:14Z, release=1766032510, vcs-type=git, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, url=https://www.redhat.com, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-qdrouterd, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']})
Feb 20 08:14:43 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:14:44 np0005625203.localdomain podman[77545]: 2026-02-20 08:14:44.134431037 +0000 UTC m=+0.440641851 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, config_id=tripleo_step1, container_name=metrics_qdr, org.opencontainers.image.created=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, distribution-scope=public, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, build-date=2026-01-12T22:10:14Z, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, managed_by=tripleo_ansible)
Feb 20 08:14:44 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:14:44 np0005625203.localdomain sshd[77541]: Invalid user user01 from 185.246.128.171 port 1060
Feb 20 08:14:45 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:14:45 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:14:45 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:14:45 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:14:45 np0005625203.localdomain podman[77612]: 2026-02-20 08:14:45.746558921 +0000 UTC m=+0.062967276 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.expose-services=, release=1766032510, tcib_managed=true, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, 
summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, io.buildah.version=1.41.5, batch=17.1_20260112.1, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-ipmi, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Feb 20 08:14:45 np0005625203.localdomain sshd[77541]: Disconnecting invalid user user01 185.246.128.171 port 1060: Change of username or service not allowed: (user01,ssh-connection) -> (postgres,ssh-connection) [preauth]
Feb 20 08:14:45 np0005625203.localdomain podman[77609]: 2026-02-20 08:14:45.818299721 +0000 UTC m=+0.138762314 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, name=rhosp-rhel9/openstack-cron, batch=17.1_20260112.1, release=1766032510, io.buildah.version=1.41.5, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step4, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']})
Feb 20 08:14:45 np0005625203.localdomain podman[77609]: 2026-02-20 08:14:45.849350445 +0000 UTC m=+0.169813028 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.13, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, vcs-type=git, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20260112.1, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.5)
Feb 20 08:14:45 np0005625203.localdomain podman[77611]: 2026-02-20 08:14:45.859040349 +0000 UTC m=+0.176451485 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_id=tripleo_step4, release=1766032510, io.buildah.version=1.41.5, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 
ceilometer-compute, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-type=git, tcib_managed=true, build-date=2026-01-12T23:07:47Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc.)
Feb 20 08:14:45 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:14:45 np0005625203.localdomain podman[77610]: 2026-02-20 08:14:45.777116429 +0000 UTC m=+0.094252586 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=starting, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=17.1.13, io.buildah.version=1.41.5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, architecture=x86_64, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1766032510, config_id=tripleo_step5, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible)
Feb 20 08:14:45 np0005625203.localdomain podman[77612]: 2026-02-20 08:14:45.889580466 +0000 UTC m=+0.205988781 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, distribution-scope=public, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, release=1766032510, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, 
io.buildah.version=1.41.5, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:14:45 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:14:45 np0005625203.localdomain podman[77610]: 2026-02-20 08:14:45.906664083 +0000 UTC m=+0.223800260 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, batch=17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, config_id=tripleo_step5, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, vendor=Red Hat, Inc., vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, managed_by=tripleo_ansible, version=17.1.13, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:14:45 np0005625203.localdomain podman[77610]: unhealthy
Feb 20 08:14:45 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:14:45 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Failed with result 'exit-code'.
Feb 20 08:14:45 np0005625203.localdomain podman[77611]: 2026-02-20 08:14:45.960504221 +0000 UTC m=+0.277915357 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, version=17.1.13, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., build-date=2026-01-12T23:07:47Z, org.opencontainers.image.created=2026-01-12T23:07:47Z, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, distribution-scope=public, name=rhosp-rhel9/openstack-ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 
'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Feb 20 08:14:45 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:14:46 np0005625203.localdomain sshd[77701]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:14:46 np0005625203.localdomain systemd[1]: tmp-crun.NJS9vK.mount: Deactivated successfully.
Feb 20 08:14:46 np0005625203.localdomain sshd[77703]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:14:47 np0005625203.localdomain sshd[77703]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:14:47 np0005625203.localdomain sshd[77701]: Invalid user postgres from 185.246.128.171 port 22807
Feb 20 08:14:48 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:14:48 np0005625203.localdomain podman[77705]: 2026-02-20 08:14:48.772979874 +0000 UTC m=+0.089804788 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, io.openshift.expose-services=, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, vcs-type=git, build-date=2026-01-12T23:32:04Z, container_name=nova_migration_target)
Feb 20 08:14:48 np0005625203.localdomain sshd[77701]: error: maximum authentication attempts exceeded for invalid user postgres from 185.246.128.171 port 22807 ssh2 [preauth]
Feb 20 08:14:48 np0005625203.localdomain sshd[77701]: Disconnecting invalid user postgres 185.246.128.171 port 22807: Too many authentication failures [preauth]
Feb 20 08:14:49 np0005625203.localdomain podman[77705]: 2026-02-20 08:14:49.100232138 +0000 UTC m=+0.417057012 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, io.openshift.expose-services=, url=https://www.redhat.com, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, managed_by=tripleo_ansible, container_name=nova_migration_target, vendor=Red Hat, Inc., tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, build-date=2026-01-12T23:32:04Z)
Feb 20 08:14:49 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:14:50 np0005625203.localdomain sshd[77726]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:14:50 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:14:50 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:14:50 np0005625203.localdomain podman[77727]: 2026-02-20 08:14:50.776477152 +0000 UTC m=+0.085743400 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., version=17.1.13, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, config_id=tripleo_step4, 
io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, build-date=2026-01-12T22:56:19Z, architecture=x86_64, managed_by=tripleo_ansible, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:56:19Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20260112.1, io.buildah.version=1.41.5)
Feb 20 08:14:50 np0005625203.localdomain podman[77727]: 2026-02-20 08:14:50.826060168 +0000 UTC m=+0.135326396 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:56:19Z, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, managed_by=tripleo_ansible, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, tcib_managed=true, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, architecture=x86_64, vcs-type=git, container_name=ovn_metadata_agent, config_id=tripleo_step4, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.5, version=17.1.13)
Feb 20 08:14:50 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Deactivated successfully.
Feb 20 08:14:50 np0005625203.localdomain podman[77728]: 2026-02-20 08:14:50.837364672 +0000 UTC m=+0.146055493 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, container_name=ovn_controller, architecture=x86_64, build-date=2026-01-12T22:36:40Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ovn-controller, managed_by=tripleo_ansible, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:36:40Z, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, io.openshift.expose-services=, io.buildah.version=1.41.5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:14:50 np0005625203.localdomain podman[77728]: 2026-02-20 08:14:50.884601303 +0000 UTC m=+0.193292114 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, name=rhosp-rhel9/openstack-ovn-controller, config_id=tripleo_step4, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, build-date=2026-01-12T22:36:40Z, version=17.1.13, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.expose-services=, managed_by=tripleo_ansible, container_name=ovn_controller, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1)
Feb 20 08:14:50 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Deactivated successfully.
Feb 20 08:14:52 np0005625203.localdomain sshd[77726]: Invalid user postgres from 185.246.128.171 port 22098
Feb 20 08:14:54 np0005625203.localdomain sshd[77726]: Disconnecting invalid user postgres 185.246.128.171 port 22098: Change of username or service not allowed: (postgres,ssh-connection) -> (qaz,ssh-connection) [preauth]
Feb 20 08:14:55 np0005625203.localdomain sshd[77775]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:14:55 np0005625203.localdomain sshd[36145]: Received disconnect from 192.168.122.100 port 38160:11: disconnected by user
Feb 20 08:14:55 np0005625203.localdomain sshd[36145]: Disconnected from user zuul 192.168.122.100 port 38160
Feb 20 08:14:55 np0005625203.localdomain sshd[36142]: pam_unix(sshd:session): session closed for user zuul
Feb 20 08:14:55 np0005625203.localdomain systemd[1]: session-27.scope: Deactivated successfully.
Feb 20 08:14:55 np0005625203.localdomain systemd[1]: session-27.scope: Consumed 2.924s CPU time.
Feb 20 08:14:55 np0005625203.localdomain systemd-logind[759]: Session 27 logged out. Waiting for processes to exit.
Feb 20 08:14:55 np0005625203.localdomain systemd-logind[759]: Removed session 27.
Feb 20 08:14:56 np0005625203.localdomain sshd[77775]: Invalid user qaz from 185.246.128.171 port 28155
Feb 20 08:14:57 np0005625203.localdomain sshd[77775]: Disconnecting invalid user qaz 185.246.128.171 port 28155: Change of username or service not allowed: (qaz,ssh-connection) -> (astra,ssh-connection) [preauth]
Feb 20 08:14:59 np0005625203.localdomain sshd[77777]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:15:03 np0005625203.localdomain sshd[77777]: Invalid user astra from 185.246.128.171 port 63370
Feb 20 08:15:04 np0005625203.localdomain sshd[77777]: Disconnecting invalid user astra 185.246.128.171 port 63370: Change of username or service not allowed: (astra,ssh-connection) -> (samba,ssh-connection) [preauth]
Feb 20 08:15:07 np0005625203.localdomain sshd[77779]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:15:09 np0005625203.localdomain sshd[77779]: Invalid user samba from 185.246.128.171 port 21651
Feb 20 08:15:10 np0005625203.localdomain sshd[77779]: Disconnecting invalid user samba 185.246.128.171 port 21651: Change of username or service not allowed: (samba,ssh-connection) -> (abigail,ssh-connection) [preauth]
Feb 20 08:15:10 np0005625203.localdomain sshd[77781]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:15:11 np0005625203.localdomain sshd[77781]: Invalid user abigail from 185.246.128.171 port 63607
Feb 20 08:15:12 np0005625203.localdomain sshd[77781]: Disconnecting invalid user abigail 185.246.128.171 port 63607: Change of username or service not allowed: (abigail,ssh-connection) -> (loginuser,ssh-connection) [preauth]
Feb 20 08:15:12 np0005625203.localdomain sshd[77783]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:15:13 np0005625203.localdomain sshd[77783]: Invalid user loginuser from 185.246.128.171 port 49742
Feb 20 08:15:13 np0005625203.localdomain sshd[77783]: Disconnecting invalid user loginuser 185.246.128.171 port 49742: Change of username or service not allowed: (loginuser,ssh-connection) -> (Grace,ssh-connection) [preauth]
Feb 20 08:15:13 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:15:13 np0005625203.localdomain podman[77786]: 2026-02-20 08:15:13.993495997 +0000 UTC m=+0.092568493 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, release=1766032510, io.buildah.version=1.41.5, config_id=tripleo_step3, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp-rhel9/openstack-iscsid, io.openshift.expose-services=, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-iscsid, architecture=x86_64, vcs-ref=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2026-01-12T22:34:43Z, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=iscsid, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:15:14 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:15:14 np0005625203.localdomain podman[77786]: 2026-02-20 08:15:14.034779352 +0000 UTC m=+0.133851798 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp-rhel9/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, managed_by=tripleo_ansible, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:34:43Z, build-date=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, release=1766032510, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public)
Feb 20 08:15:14 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:15:14 np0005625203.localdomain podman[77805]: 2026-02-20 08:15:14.10798393 +0000 UTC m=+0.087135273 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:15Z, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:15Z, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510)
Feb 20 08:15:14 np0005625203.localdomain podman[77805]: 2026-02-20 08:15:14.145580621 +0000 UTC m=+0.124731964 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, name=rhosp-rhel9/openstack-collectd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20260112.1, build-date=2026-01-12T22:10:15Z, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:15Z, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:15:14 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:15:14 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:15:14 np0005625203.localdomain sshd[77836]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:15:14 np0005625203.localdomain podman[77826]: 2026-02-20 08:15:14.268560599 +0000 UTC m=+0.084091248 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:14Z, build-date=2026-01-12T22:10:14Z, container_name=metrics_qdr, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, version=17.1.13, io.openshift.expose-services=, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-qdrouterd, release=1766032510, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, tcib_managed=true, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, distribution-scope=public, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64)
Feb 20 08:15:14 np0005625203.localdomain sudo[77851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:15:14 np0005625203.localdomain sudo[77851]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:15:14 np0005625203.localdomain sudo[77851]: pam_unix(sudo:session): session closed for user root
Feb 20 08:15:14 np0005625203.localdomain sudo[77873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 08:15:14 np0005625203.localdomain sudo[77873]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:15:14 np0005625203.localdomain podman[77826]: 2026-02-20 08:15:14.451659798 +0000 UTC m=+0.267190427 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, name=rhosp-rhel9/openstack-qdrouterd, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:14Z, url=https://www.redhat.com, io.openshift.expose-services=, config_id=tripleo_step1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, distribution-scope=public, io.buildah.version=1.41.5, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:14Z)
Feb 20 08:15:14 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:15:15 np0005625203.localdomain sudo[77873]: pam_unix(sudo:session): session closed for user root
Feb 20 08:15:15 np0005625203.localdomain sudo[77920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 08:15:15 np0005625203.localdomain sudo[77920]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:15:15 np0005625203.localdomain sudo[77920]: pam_unix(sudo:session): session closed for user root
Feb 20 08:15:16 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:15:16 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:15:16 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:15:16 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:15:16 np0005625203.localdomain systemd[1]: tmp-crun.uv6oaO.mount: Deactivated successfully.
Feb 20 08:15:16 np0005625203.localdomain podman[77937]: 2026-02-20 08:15:16.760241536 +0000 UTC m=+0.075884522 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.5, vcs-type=git, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:47Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-compute, tcib_managed=true, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:07:47Z, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06)
Feb 20 08:15:16 np0005625203.localdomain systemd[1]: tmp-crun.AIdQfj.mount: Deactivated successfully.
Feb 20 08:15:16 np0005625203.localdomain podman[77936]: 2026-02-20 08:15:16.811489601 +0000 UTC m=+0.123494195 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., managed_by=tripleo_ansible, distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, release=1766032510, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Feb 20 08:15:16 np0005625203.localdomain podman[77937]: 2026-02-20 08:15:16.843513208 +0000 UTC m=+0.159156224 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, name=rhosp-rhel9/openstack-ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:07:47Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-type=git, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1766032510, vendor=Red Hat, Inc., version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com)
Feb 20 08:15:16 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:15:16 np0005625203.localdomain podman[77935]: 2026-02-20 08:15:16.858370791 +0000 UTC m=+0.175705440 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.buildah.version=1.41.5, tcib_managed=true, build-date=2026-01-12T22:10:15Z, vcs-type=git, container_name=logrotate_crond, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.13, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1766032510, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp-rhel9/openstack-cron)
Feb 20 08:15:16 np0005625203.localdomain podman[77936]: 2026-02-20 08:15:16.88116002 +0000 UTC m=+0.193164594 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, vcs-type=git, config_id=tripleo_step5, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, architecture=x86_64, batch=17.1_20260112.1)
Feb 20 08:15:16 np0005625203.localdomain podman[77936]: unhealthy
Feb 20 08:15:16 np0005625203.localdomain podman[77938]: 2026-02-20 08:15:16.787102543 +0000 UTC m=+0.093936125 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, distribution-scope=public, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Feb 20 08:15:16 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:15:16 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Failed with result 'exit-code'.
Feb 20 08:15:16 np0005625203.localdomain podman[77938]: 2026-02-20 08:15:16.917514711 +0000 UTC m=+0.224348313 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2026-01-12T23:07:30Z, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, io.buildah.version=1.41.5, version=17.1.13, io.openshift.expose-services=, 
distribution-scope=public, vcs-type=git, tcib_managed=true, managed_by=tripleo_ansible, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:07:30Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, architecture=x86_64, config_id=tripleo_step4)
Feb 20 08:15:16 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:15:16 np0005625203.localdomain podman[77935]: 2026-02-20 08:15:16.941139807 +0000 UTC m=+0.258474476 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-cron, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, config_id=tripleo_step4, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, managed_by=tripleo_ansible, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:15Z, distribution-scope=public, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, release=1766032510, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:15:16 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:15:19 np0005625203.localdomain sshd[77836]: Invalid user Grace from 185.246.128.171 port 41868
Feb 20 08:15:19 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:15:19 np0005625203.localdomain systemd[1]: tmp-crun.OsVL8V.mount: Deactivated successfully.
Feb 20 08:15:19 np0005625203.localdomain podman[78027]: 2026-02-20 08:15:19.327627521 +0000 UTC m=+0.096966430 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, vcs-type=git, version=17.1.13, tcib_managed=true, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, distribution-scope=public, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 
nova-compute, batch=17.1_20260112.1, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, build-date=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, url=https://www.redhat.com, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Feb 20 08:15:19 np0005625203.localdomain sshd[77836]: Disconnecting invalid user Grace 185.246.128.171 port 41868: Change of username or service not allowed: (Grace,ssh-connection) -> (aa,ssh-connection) [preauth]
Feb 20 08:15:19 np0005625203.localdomain podman[78027]: 2026-02-20 08:15:19.694376417 +0000 UTC m=+0.463715406 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, config_id=tripleo_step4, vendor=Red Hat, Inc., tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1766032510, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.buildah.version=1.41.5, com.redhat.component=openstack-nova-compute-container, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute)
Feb 20 08:15:19 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:15:21 np0005625203.localdomain sshd[78051]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:15:21 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:15:21 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:15:21 np0005625203.localdomain podman[78054]: 2026-02-20 08:15:21.752381255 +0000 UTC m=+0.066407588 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:36:40Z, managed_by=tripleo_ansible, distribution-scope=public, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, build-date=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, vcs-type=git, 
container_name=ovn_controller, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.41.5, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:15:21 np0005625203.localdomain podman[78054]: 2026-02-20 08:15:21.773190104 +0000 UTC m=+0.087216527 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, container_name=ovn_controller, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:36:40Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, 
io.openshift.expose-services=, vcs-type=git, release=1766032510, url=https://www.redhat.com, version=17.1.13, build-date=2026-01-12T22:36:40Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:15:21 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Deactivated successfully.
Feb 20 08:15:21 np0005625203.localdomain podman[78053]: 2026-02-20 08:15:21.875049664 +0000 UTC m=+0.191100319 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:56:19Z, version=17.1.13, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, release=1766032510, config_id=tripleo_step4, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, vcs-type=git, url=https://www.redhat.com, architecture=x86_64, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z)
Feb 20 08:15:21 np0005625203.localdomain podman[78053]: 2026-02-20 08:15:21.924156692 +0000 UTC m=+0.240207377 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, 
io.openshift.expose-services=, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, release=1766032510, version=17.1.13, build-date=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, tcib_managed=true, vendor=Red Hat, Inc.)
Feb 20 08:15:21 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Deactivated successfully.
Feb 20 08:15:23 np0005625203.localdomain sshd[78051]: Invalid user aa from 185.246.128.171 port 43140
Feb 20 08:15:23 np0005625203.localdomain sshd[78051]: Disconnecting invalid user aa 185.246.128.171 port 43140: Change of username or service not allowed: (aa,ssh-connection) -> (openvswitch,ssh-connection) [preauth]
Feb 20 08:15:25 np0005625203.localdomain sshd[78101]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:15:27 np0005625203.localdomain sshd[78103]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:15:28 np0005625203.localdomain sshd[78103]: Invalid user evapro from 103.200.25.162 port 49008
Feb 20 08:15:28 np0005625203.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Feb 20 08:15:29 np0005625203.localdomain recover_tripleo_nova_virtqemud[78106]: 62505
Feb 20 08:15:29 np0005625203.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Feb 20 08:15:29 np0005625203.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Feb 20 08:15:29 np0005625203.localdomain sshd[78103]: Received disconnect from 103.200.25.162 port 49008:11: Bye Bye [preauth]
Feb 20 08:15:29 np0005625203.localdomain sshd[78103]: Disconnected from invalid user evapro 103.200.25.162 port 49008 [preauth]
Feb 20 08:15:29 np0005625203.localdomain sshd[78101]: Disconnecting authenticating user openvswitch 185.246.128.171 port 45914: Change of username or service not allowed: (openvswitch,ssh-connection) -> (vr,ssh-connection) [preauth]
Feb 20 08:15:30 np0005625203.localdomain sshd[78107]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:15:31 np0005625203.localdomain sshd[78107]: Invalid user vr from 185.246.128.171 port 43287
Feb 20 08:15:31 np0005625203.localdomain sshd[78107]: Disconnecting invalid user vr 185.246.128.171 port 43287: Change of username or service not allowed: (vr,ssh-connection) -> (admin,ssh-connection) [preauth]
Feb 20 08:15:32 np0005625203.localdomain sshd[78109]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:15:32 np0005625203.localdomain sshd[78109]: Invalid user admin from 185.246.128.171 port 19820
Feb 20 08:15:33 np0005625203.localdomain sshd[78111]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:15:33 np0005625203.localdomain sshd[78111]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:15:33 np0005625203.localdomain sshd[78109]: error: maximum authentication attempts exceeded for invalid user admin from 185.246.128.171 port 19820 ssh2 [preauth]
Feb 20 08:15:33 np0005625203.localdomain sshd[78109]: Disconnecting invalid user admin 185.246.128.171 port 19820: Too many authentication failures [preauth]
Feb 20 08:15:34 np0005625203.localdomain sshd[78113]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:15:35 np0005625203.localdomain sshd[78113]: Invalid user admin from 185.246.128.171 port 18423
Feb 20 08:15:38 np0005625203.localdomain sshd[78113]: error: maximum authentication attempts exceeded for invalid user admin from 185.246.128.171 port 18423 ssh2 [preauth]
Feb 20 08:15:38 np0005625203.localdomain sshd[78113]: Disconnecting invalid user admin 185.246.128.171 port 18423: Too many authentication failures [preauth]
Feb 20 08:15:40 np0005625203.localdomain sshd[78115]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:15:42 np0005625203.localdomain sshd[78115]: Invalid user admin from 185.246.128.171 port 40550
Feb 20 08:15:44 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:15:44 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:15:44 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:15:44 np0005625203.localdomain systemd[1]: tmp-crun.Kkw72H.mount: Deactivated successfully.
Feb 20 08:15:44 np0005625203.localdomain podman[78118]: 2026-02-20 08:15:44.793717517 +0000 UTC m=+0.100507739 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, url=https://www.redhat.com, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-iscsid, maintainer=OpenStack TripleO Team, version=17.1.13, tcib_managed=true, com.redhat.component=openstack-iscsid-container, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:34:43Z, io.openshift.expose-services=, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid)
Feb 20 08:15:44 np0005625203.localdomain podman[78118]: 2026-02-20 08:15:44.80600128 +0000 UTC m=+0.112791562 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1766032510, build-date=2026-01-12T22:34:43Z, name=rhosp-rhel9/openstack-iscsid, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step3, vcs-type=git, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:15:44 np0005625203.localdomain systemd[1]: tmp-crun.jTvtxA.mount: Deactivated successfully.
Feb 20 08:15:44 np0005625203.localdomain podman[78119]: 2026-02-20 08:15:44.839716669 +0000 UTC m=+0.142293490 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, 
vendor=Red Hat, Inc., container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, config_id=tripleo_step1, release=1766032510, batch=17.1_20260112.1, build-date=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:15:44 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:15:44 np0005625203.localdomain podman[78117]: 2026-02-20 08:15:44.93902914 +0000 UTC m=+0.246986798 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20260112.1, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.created=2026-01-12T22:10:15Z, 
konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, name=rhosp-rhel9/openstack-collectd, url=https://www.redhat.com, version=17.1.13, container_name=collectd, release=1766032510, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=)
Feb 20 08:15:44 np0005625203.localdomain podman[78117]: 2026-02-20 08:15:44.955033269 +0000 UTC m=+0.262990957 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, url=https://www.redhat.com, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:15Z, batch=17.1_20260112.1, config_id=tripleo_step3, io.openshift.expose-services=, name=rhosp-rhel9/openstack-collectd, container_name=collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, build-date=2026-01-12T22:10:15Z)
Feb 20 08:15:44 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:15:45 np0005625203.localdomain podman[78119]: 2026-02-20 08:15:45.019350981 +0000 UTC m=+0.321927802 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, tcib_managed=true, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-type=git, 
io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, release=1766032510, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, architecture=x86_64, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z)
Feb 20 08:15:45 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:15:47 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:15:47 np0005625203.localdomain sshd[78115]: error: maximum authentication attempts exceeded for invalid user admin from 185.246.128.171 port 40550 ssh2 [preauth]
Feb 20 08:15:47 np0005625203.localdomain sshd[78115]: Disconnecting invalid user admin 185.246.128.171 port 40550: Too many authentication failures [preauth]
Feb 20 08:15:47 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:15:47 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:15:47 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:15:47 np0005625203.localdomain systemd[1]: tmp-crun.dtfbuA.mount: Deactivated successfully.
Feb 20 08:15:47 np0005625203.localdomain podman[78181]: 2026-02-20 08:15:47.766154089 +0000 UTC m=+0.083943314 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:15Z, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=logrotate_crond, io.openshift.expose-services=, vcs-type=git, version=17.1.13, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:15:47 np0005625203.localdomain podman[78181]: 2026-02-20 08:15:47.779210516 +0000 UTC m=+0.096999731 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=logrotate_crond, tcib_managed=true, distribution-scope=public, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, managed_by=tripleo_ansible, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, name=rhosp-rhel9/openstack-cron, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron)
Feb 20 08:15:47 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:15:47 np0005625203.localdomain podman[78182]: 2026-02-20 08:15:47.858534974 +0000 UTC m=+0.167980309 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:32:04Z, build-date=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, vcs-type=git, version=17.1.13, io.openshift.expose-services=)
Feb 20 08:15:47 np0005625203.localdomain podman[78183]: 2026-02-20 08:15:47.873977796 +0000 UTC m=+0.180318735 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, url=https://www.redhat.com, release=1766032510, build-date=2026-01-12T23:07:47Z, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:07:47Z, vcs-type=git, name=rhosp-rhel9/openstack-ceilometer-compute, io.buildah.version=1.41.5, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, managed_by=tripleo_ansible, config_id=tripleo_step4, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:15:47 np0005625203.localdomain podman[78182]: 2026-02-20 08:15:47.926575372 +0000 UTC m=+0.236020727 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1766032510, org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-compute-container, batch=17.1_20260112.1, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 
'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64)
Feb 20 08:15:47 np0005625203.localdomain podman[78182]: unhealthy
Feb 20 08:15:47 np0005625203.localdomain podman[78189]: 2026-02-20 08:15:47.935683136 +0000 UTC m=+0.240275070 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, release=1766032510, distribution-scope=public, vcs-type=git, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, tcib_managed=true, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:30Z, url=https://www.redhat.com, architecture=x86_64, build-date=2026-01-12T23:07:30Z, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:15:47 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:15:47 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Failed with result 'exit-code'.
Feb 20 08:15:47 np0005625203.localdomain podman[78183]: 2026-02-20 08:15:47.955232345 +0000 UTC m=+0.261573334 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, build-date=2026-01-12T23:07:47Z, org.opencontainers.image.created=2026-01-12T23:07:47Z, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp-rhel9/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, batch=17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, version=17.1.13, tcib_managed=true, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute)
Feb 20 08:15:47 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:15:47 np0005625203.localdomain podman[78189]: 2026-02-20 08:15:47.994366453 +0000 UTC m=+0.298958387 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, io.buildah.version=1.41.5, io.openshift.expose-services=, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.created=2026-01-12T23:07:30Z, tcib_managed=true, build-date=2026-01-12T23:07:30Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp-rhel9/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, distribution-scope=public, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, url=https://www.redhat.com, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510)
Feb 20 08:15:48 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:15:50 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:15:50 np0005625203.localdomain podman[78273]: 2026-02-20 08:15:50.768716549 +0000 UTC m=+0.084029376 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-nova-compute, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1766032510, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, 
managed_by=tripleo_ansible, batch=17.1_20260112.1, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, tcib_managed=true, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, maintainer=OpenStack TripleO Team)
Feb 20 08:15:50 np0005625203.localdomain sshd[78294]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:15:51 np0005625203.localdomain podman[78273]: 2026-02-20 08:15:51.177278336 +0000 UTC m=+0.492591143 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., batch=17.1_20260112.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, 
konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.5, managed_by=tripleo_ansible, container_name=nova_migration_target, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute)
Feb 20 08:15:51 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:15:52 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:15:52 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:15:52 np0005625203.localdomain systemd[1]: tmp-crun.XDa5L4.mount: Deactivated successfully.
Feb 20 08:15:52 np0005625203.localdomain podman[78297]: 2026-02-20 08:15:52.776553186 +0000 UTC m=+0.093554373 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.openshift.expose-services=, tcib_managed=true, release=1766032510, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, version=17.1.13, container_name=ovn_metadata_agent, distribution-scope=public, architecture=x86_64, vendor=Red Hat, Inc., build-date=2026-01-12T22:56:19Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:15:52 np0005625203.localdomain podman[78298]: 2026-02-20 08:15:52.817468819 +0000 UTC m=+0.131082060 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, name=rhosp-rhel9/openstack-ovn-controller, version=17.1.13, config_id=tripleo_step4, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:36:40Z, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2026-01-12T22:36:40Z, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:15:52 np0005625203.localdomain podman[78298]: 2026-02-20 08:15:52.843408817 +0000 UTC m=+0.157022098 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, config_id=tripleo_step4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, build-date=2026-01-12T22:36:40Z, io.buildah.version=1.41.5, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ovn_controller, 
org.opencontainers.image.created=2026-01-12T22:36:40Z, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1)
Feb 20 08:15:52 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Deactivated successfully.
Feb 20 08:15:52 np0005625203.localdomain podman[78297]: 2026-02-20 08:15:52.925852653 +0000 UTC m=+0.242853860 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z, managed_by=tripleo_ansible, io.openshift.expose-services=, build-date=2026-01-12T22:56:19Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.13, vendor=Red Hat, Inc., vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, distribution-scope=public, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, release=1766032510)
Feb 20 08:15:52 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Deactivated successfully.
Feb 20 08:15:53 np0005625203.localdomain sshd[78294]: Invalid user admin from 185.246.128.171 port 51109
Feb 20 08:15:56 np0005625203.localdomain sshd[78294]: error: maximum authentication attempts exceeded for invalid user admin from 185.246.128.171 port 51109 ssh2 [preauth]
Feb 20 08:15:56 np0005625203.localdomain sshd[78294]: Disconnecting invalid user admin 185.246.128.171 port 51109: Too many authentication failures [preauth]
Feb 20 08:15:57 np0005625203.localdomain sshd[78344]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:15:59 np0005625203.localdomain sshd[78344]: Invalid user admin from 185.246.128.171 port 63404
Feb 20 08:16:00 np0005625203.localdomain sshd[78344]: error: maximum authentication attempts exceeded for invalid user admin from 185.246.128.171 port 63404 ssh2 [preauth]
Feb 20 08:16:00 np0005625203.localdomain sshd[78344]: Disconnecting invalid user admin 185.246.128.171 port 63404: Too many authentication failures [preauth]
Feb 20 08:16:01 np0005625203.localdomain sshd[78346]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:16:02 np0005625203.localdomain sshd[78346]: Invalid user admin from 185.246.128.171 port 30708
Feb 20 08:16:06 np0005625203.localdomain sshd[78346]: error: maximum authentication attempts exceeded for invalid user admin from 185.246.128.171 port 30708 ssh2 [preauth]
Feb 20 08:16:06 np0005625203.localdomain sshd[78346]: Disconnecting invalid user admin 185.246.128.171 port 30708: Too many authentication failures [preauth]
Feb 20 08:16:07 np0005625203.localdomain sshd[78348]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:16:09 np0005625203.localdomain sshd[78348]: Invalid user admin from 185.246.128.171 port 42115
Feb 20 08:16:12 np0005625203.localdomain sshd[78348]: error: maximum authentication attempts exceeded for invalid user admin from 185.246.128.171 port 42115 ssh2 [preauth]
Feb 20 08:16:12 np0005625203.localdomain sshd[78348]: Disconnecting invalid user admin 185.246.128.171 port 42115: Too many authentication failures [preauth]
Feb 20 08:16:12 np0005625203.localdomain sshd[78350]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:16:13 np0005625203.localdomain sshd[78350]: Invalid user admin from 185.246.128.171 port 46628
Feb 20 08:16:15 np0005625203.localdomain sshd[78352]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:16:15 np0005625203.localdomain sshd[78352]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:16:15 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:16:15 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:16:15 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:16:15 np0005625203.localdomain podman[78354]: 2026-02-20 08:16:15.778001744 +0000 UTC m=+0.091798138 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, distribution-scope=public, vcs-type=git, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, io.openshift.expose-services=, version=17.1.13, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:16:15 np0005625203.localdomain podman[78354]: 2026-02-20 08:16:15.787344196 +0000 UTC m=+0.101140600 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, architecture=x86_64, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:15Z, container_name=collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee)
Feb 20 08:16:15 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:16:15 np0005625203.localdomain podman[78355]: 2026-02-20 08:16:15.878535433 +0000 UTC m=+0.188094995 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, build-date=2026-01-12T22:34:43Z, vendor=Red Hat, Inc., 
org.opencontainers.image.created=2026-01-12T22:34:43Z, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, vcs-ref=705339545363fec600102567c4e923938e0f43b3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, batch=17.1_20260112.1, release=1766032510, config_id=tripleo_step3, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid)
Feb 20 08:16:15 np0005625203.localdomain podman[78355]: 2026-02-20 08:16:15.887046068 +0000 UTC m=+0.196605630 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, release=1766032510, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, distribution-scope=public, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, managed_by=tripleo_ansible, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, batch=17.1_20260112.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, tcib_managed=true, container_name=iscsid, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:16:15 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:16:15 np0005625203.localdomain sudo[78405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:16:15 np0005625203.localdomain sudo[78405]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:16:15 np0005625203.localdomain sudo[78405]: pam_unix(sudo:session): session closed for user root
Feb 20 08:16:15 np0005625203.localdomain podman[78356]: 2026-02-20 08:16:15.985145792 +0000 UTC m=+0.292487265 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=metrics_qdr, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, version=17.1.13, batch=17.1_20260112.1, build-date=2026-01-12T22:10:14Z, name=rhosp-rhel9/openstack-qdrouterd, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=)
Feb 20 08:16:16 np0005625203.localdomain sudo[78429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 08:16:16 np0005625203.localdomain sudo[78429]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:16:16 np0005625203.localdomain podman[78356]: 2026-02-20 08:16:16.173049781 +0000 UTC m=+0.480391334 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=metrics_qdr, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.13, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2026-01-12T22:10:14Z)
Feb 20 08:16:16 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:16:16 np0005625203.localdomain sshd[78350]: error: maximum authentication attempts exceeded for invalid user admin from 185.246.128.171 port 46628 ssh2 [preauth]
Feb 20 08:16:16 np0005625203.localdomain sshd[78350]: Disconnecting invalid user admin 185.246.128.171 port 46628: Too many authentication failures [preauth]
Feb 20 08:16:16 np0005625203.localdomain sudo[78429]: pam_unix(sudo:session): session closed for user root
Feb 20 08:16:16 np0005625203.localdomain systemd[1]: tmp-crun.IIsVsE.mount: Deactivated successfully.
Feb 20 08:16:17 np0005625203.localdomain sudo[78483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 08:16:17 np0005625203.localdomain sudo[78483]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:16:17 np0005625203.localdomain sudo[78483]: pam_unix(sudo:session): session closed for user root
Feb 20 08:16:17 np0005625203.localdomain sshd[78498]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:16:18 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:16:18 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:16:18 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:16:18 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:16:18 np0005625203.localdomain systemd[1]: tmp-crun.KyPp9X.mount: Deactivated successfully.
Feb 20 08:16:18 np0005625203.localdomain podman[78501]: 2026-02-20 08:16:18.824483001 +0000 UTC m=+0.138922104 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, com.redhat.component=openstack-nova-compute-container, build-date=2026-01-12T23:32:04Z, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, vcs-type=git, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, managed_by=tripleo_ansible, container_name=nova_compute, name=rhosp-rhel9/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:32:04Z)
Feb 20 08:16:18 np0005625203.localdomain podman[78502]: 2026-02-20 08:16:18.874272241 +0000 UTC m=+0.188291522 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, distribution-scope=public, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, release=1766032510, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, io.buildah.version=1.41.5, url=https://www.redhat.com, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.13, vcs-type=git, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06)
Feb 20 08:16:18 np0005625203.localdomain podman[78501]: 2026-02-20 08:16:18.883167108 +0000 UTC m=+0.197606211 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, container_name=nova_compute, name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, tcib_managed=true, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, release=1766032510)
Feb 20 08:16:18 np0005625203.localdomain podman[78501]: unhealthy
Feb 20 08:16:18 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:16:18 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Failed with result 'exit-code'.
Feb 20 08:16:18 np0005625203.localdomain podman[78502]: 2026-02-20 08:16:18.906239086 +0000 UTC m=+0.220258417 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, release=1766032510, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, build-date=2026-01-12T23:07:47Z, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp-rhel9/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1)
Feb 20 08:16:18 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:16:18 np0005625203.localdomain podman[78500]: 2026-02-20 08:16:18.955065706 +0000 UTC m=+0.274035850 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, io.openshift.expose-services=, vcs-type=git, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red 
Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-cron, version=17.1.13, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, container_name=logrotate_crond)
Feb 20 08:16:18 np0005625203.localdomain podman[78500]: 2026-02-20 08:16:18.965758698 +0000 UTC m=+0.284728882 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, release=1766032510, build-date=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=logrotate_crond, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, io.openshift.expose-services=, url=https://www.redhat.com, name=rhosp-rhel9/openstack-cron, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron)
Feb 20 08:16:18 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:16:18 np0005625203.localdomain podman[78503]: 2026-02-20 08:16:18.785103936 +0000 UTC m=+0.095681649 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2026-01-12T23:07:30Z, managed_by=tripleo_ansible, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat 
OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, vendor=Red Hat, Inc., release=1766032510, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_ipmi)
Feb 20 08:16:19 np0005625203.localdomain podman[78503]: 2026-02-20 08:16:19.017324804 +0000 UTC m=+0.327902517 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, tcib_managed=true, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:07:30Z, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, version=17.1.13, build-date=2026-01-12T23:07:30Z, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Feb 20 08:16:19 np0005625203.localdomain sshd[78498]: Invalid user admin from 185.246.128.171 port 26758
Feb 20 08:16:19 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:16:19 np0005625203.localdomain sshd[78590]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:16:19 np0005625203.localdomain systemd[1]: tmp-crun.dp7mpN.mount: Deactivated successfully.
Feb 20 08:16:19 np0005625203.localdomain sshd[78590]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:16:21 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:16:21 np0005625203.localdomain podman[78592]: 2026-02-20 08:16:21.768175999 +0000 UTC m=+0.087629710 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, container_name=nova_migration_target, batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, io.buildah.version=1.41.5, managed_by=tripleo_ansible)
Feb 20 08:16:22 np0005625203.localdomain podman[78592]: 2026-02-20 08:16:22.163823764 +0000 UTC m=+0.483277425 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, name=rhosp-rhel9/openstack-nova-compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.buildah.version=1.41.5, io.openshift.expose-services=, container_name=nova_migration_target, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2026-01-12T23:32:04Z, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_id=tripleo_step4, distribution-scope=public, vcs-type=git, release=1766032510, com.redhat.component=openstack-nova-compute-container, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20260112.1)
Feb 20 08:16:22 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:16:23 np0005625203.localdomain sshd[78498]: error: maximum authentication attempts exceeded for invalid user admin from 185.246.128.171 port 26758 ssh2 [preauth]
Feb 20 08:16:23 np0005625203.localdomain sshd[78498]: Disconnecting invalid user admin 185.246.128.171 port 26758: Too many authentication failures [preauth]
Feb 20 08:16:23 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:16:23 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:16:23 np0005625203.localdomain systemd[1]: tmp-crun.VfY5DE.mount: Deactivated successfully.
Feb 20 08:16:23 np0005625203.localdomain podman[78618]: 2026-02-20 08:16:23.444534878 +0000 UTC m=+0.094109590 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, release=1766032510, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, tcib_managed=true, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:36:40Z, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, vendor=Red Hat, Inc., vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 
17.1 ovn-controller, io.openshift.expose-services=, url=https://www.redhat.com, vcs-type=git, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ovn-controller, build-date=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller)
Feb 20 08:16:23 np0005625203.localdomain podman[78618]: 2026-02-20 08:16:23.471175077 +0000 UTC m=+0.120749819 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.buildah.version=1.41.5, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, tcib_managed=true, name=rhosp-rhel9/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, version=17.1.13, build-date=2026-01-12T22:36:40Z, release=1766032510, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:36:40Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, url=https://www.redhat.com, 
org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:16:23 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Deactivated successfully.
Feb 20 08:16:23 np0005625203.localdomain podman[78617]: 2026-02-20 08:16:23.494522454 +0000 UTC m=+0.144531110 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, build-date=2026-01-12T22:56:19Z, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, url=https://www.redhat.com, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.5, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, org.opencontainers.image.created=2026-01-12T22:56:19Z, vendor=Red Hat, Inc., version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:16:23 np0005625203.localdomain podman[78617]: 2026-02-20 08:16:23.561412156 +0000 UTC m=+0.211420792 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, build-date=2026-01-12T22:56:19Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.created=2026-01-12T22:56:19Z, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.buildah.version=1.41.5, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, tcib_managed=true, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, batch=17.1_20260112.1, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git)
Feb 20 08:16:23 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Deactivated successfully.
Feb 20 08:16:23 np0005625203.localdomain sshd[78667]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:16:24 np0005625203.localdomain systemd[1]: tmp-crun.0dVuAL.mount: Deactivated successfully.
Feb 20 08:16:25 np0005625203.localdomain sshd[78667]: Invalid user admin from 185.246.128.171 port 53172
Feb 20 08:16:28 np0005625203.localdomain sshd[78667]: error: maximum authentication attempts exceeded for invalid user admin from 185.246.128.171 port 53172 ssh2 [preauth]
Feb 20 08:16:28 np0005625203.localdomain sshd[78667]: Disconnecting invalid user admin 185.246.128.171 port 53172: Too many authentication failures [preauth]
Feb 20 08:16:29 np0005625203.localdomain sshd[78669]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:16:31 np0005625203.localdomain sshd[78669]: Invalid user admin from 185.246.128.171 port 12459
Feb 20 08:16:32 np0005625203.localdomain sshd[78671]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:16:33 np0005625203.localdomain sshd[78671]: Invalid user cod4server from 189.190.2.14 port 55776
Feb 20 08:16:33 np0005625203.localdomain sshd[78671]: Received disconnect from 189.190.2.14 port 55776:11: Bye Bye [preauth]
Feb 20 08:16:33 np0005625203.localdomain sshd[78671]: Disconnected from invalid user cod4server 189.190.2.14 port 55776 [preauth]
Feb 20 08:16:35 np0005625203.localdomain sshd[78669]: error: maximum authentication attempts exceeded for invalid user admin from 185.246.128.171 port 12459 ssh2 [preauth]
Feb 20 08:16:35 np0005625203.localdomain sshd[78669]: Disconnecting invalid user admin 185.246.128.171 port 12459: Too many authentication failures [preauth]
Feb 20 08:16:36 np0005625203.localdomain sshd[78673]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:16:37 np0005625203.localdomain sshd[78673]: Invalid user admin from 185.246.128.171 port 40089
Feb 20 08:16:39 np0005625203.localdomain sshd[78675]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:16:40 np0005625203.localdomain sshd[78675]: Received disconnect from 147.135.114.8 port 38200:11: Bye Bye [preauth]
Feb 20 08:16:40 np0005625203.localdomain sshd[78675]: Disconnected from authenticating user root 147.135.114.8 port 38200 [preauth]
Feb 20 08:16:43 np0005625203.localdomain sshd[78673]: error: maximum authentication attempts exceeded for invalid user admin from 185.246.128.171 port 40089 ssh2 [preauth]
Feb 20 08:16:43 np0005625203.localdomain sshd[78673]: Disconnecting invalid user admin 185.246.128.171 port 40089: Too many authentication failures [preauth]
Feb 20 08:16:43 np0005625203.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Feb 20 08:16:43 np0005625203.localdomain recover_tripleo_nova_virtqemud[78678]: 62505
Feb 20 08:16:43 np0005625203.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Feb 20 08:16:43 np0005625203.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Feb 20 08:16:44 np0005625203.localdomain sshd[78679]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:16:45 np0005625203.localdomain sshd[78679]: Invalid user admin from 185.246.128.171 port 29879
Feb 20 08:16:45 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:16:45 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:16:45 np0005625203.localdomain systemd[1]: tmp-crun.2gbPJr.mount: Deactivated successfully.
Feb 20 08:16:45 np0005625203.localdomain podman[78681]: 2026-02-20 08:16:45.984562051 +0000 UTC m=+0.110845821 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, name=rhosp-rhel9/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, container_name=collectd, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 
17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, version=17.1.13, io.buildah.version=1.41.5, tcib_managed=true, config_id=tripleo_step3, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container)
Feb 20 08:16:46 np0005625203.localdomain podman[78681]: 2026-02-20 08:16:46.019289413 +0000 UTC m=+0.145573153 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, vendor=Red Hat, Inc., version=17.1.13, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, build-date=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 collectd, 
container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, distribution-scope=public, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-collectd, config_id=tripleo_step3, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, summary=Red Hat OpenStack Platform 17.1 collectd)
Feb 20 08:16:46 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:16:46 np0005625203.localdomain systemd[1]: tmp-crun.oeWZp3.mount: Deactivated successfully.
Feb 20 08:16:46 np0005625203.localdomain podman[78700]: 2026-02-20 08:16:46.12815367 +0000 UTC m=+0.142943590 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1766032510, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.13 
17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, batch=17.1_20260112.1, config_id=tripleo_step3, tcib_managed=true, version=17.1.13, distribution-scope=public, name=rhosp-rhel9/openstack-iscsid, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, container_name=iscsid, io.openshift.expose-services=, architecture=x86_64, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid)
Feb 20 08:16:46 np0005625203.localdomain podman[78700]: 2026-02-20 08:16:46.162799 +0000 UTC m=+0.177588920 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, name=rhosp-rhel9/openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:34:43Z, batch=17.1_20260112.1, url=https://www.redhat.com, io.buildah.version=1.41.5, version=17.1.13, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', 
'/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, distribution-scope=public, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, managed_by=tripleo_ansible, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid)
Feb 20 08:16:46 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:16:46 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:16:46 np0005625203.localdomain podman[78723]: 2026-02-20 08:16:46.758742919 +0000 UTC m=+0.072941262 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, release=1766032510, container_name=metrics_qdr, build-date=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, config_id=tripleo_step1, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp-rhel9/openstack-qdrouterd, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, distribution-scope=public)
Feb 20 08:16:46 np0005625203.localdomain podman[78723]: 2026-02-20 08:16:46.948026431 +0000 UTC m=+0.262224804 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2026-01-12T22:10:14Z, 
com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, distribution-scope=public, vcs-type=git, architecture=x86_64, io.buildah.version=1.41.5, container_name=metrics_qdr, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510)
Feb 20 08:16:46 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:16:49 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:16:49 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:16:49 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:16:49 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:16:49 np0005625203.localdomain podman[78820]: 2026-02-20 08:16:49.775135329 +0000 UTC m=+0.090486927 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.13, com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.buildah.version=1.41.5, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, architecture=x86_64, batch=17.1_20260112.1, io.openshift.expose-services=, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe)
Feb 20 08:16:49 np0005625203.localdomain systemd[1]: tmp-crun.Ko8OCE.mount: Deactivated successfully.
Feb 20 08:16:49 np0005625203.localdomain podman[78819]: 2026-02-20 08:16:49.829749379 +0000 UTC m=+0.144792598 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-cron, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, io.buildah.version=1.41.5, io.openshift.expose-services=, 
config_id=tripleo_step4, version=17.1.13, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, org.opencontainers.image.created=2026-01-12T22:10:15Z, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z)
Feb 20 08:16:49 np0005625203.localdomain podman[78820]: 2026-02-20 08:16:49.833179486 +0000 UTC m=+0.148531014 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, tcib_managed=true, vcs-type=git, distribution-scope=public, release=1766032510, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, managed_by=tripleo_ansible, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=)
Feb 20 08:16:49 np0005625203.localdomain podman[78819]: 2026-02-20 08:16:49.838493311 +0000 UTC m=+0.153536490 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, name=rhosp-rhel9/openstack-cron, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, architecture=x86_64, io.k8s.description=Red 
Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, release=1766032510, tcib_managed=true, container_name=logrotate_crond, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, batch=17.1_20260112.1, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vcs-type=git)
Feb 20 08:16:49 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:16:49 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:16:49 np0005625203.localdomain podman[78821]: 2026-02-20 08:16:49.930805455 +0000 UTC m=+0.241829319 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20260112.1, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., build-date=2026-01-12T23:07:47Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:07:47Z, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-compute, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1766032510)
Feb 20 08:16:49 np0005625203.localdomain podman[78822]: 2026-02-20 08:16:49.980184781 +0000 UTC m=+0.287598152 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, maintainer=OpenStack TripleO Team, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, build-date=2026-01-12T23:07:30Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, 
release=1766032510, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.buildah.version=1.41.5, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, managed_by=tripleo_ansible, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc.)
Feb 20 08:16:50 np0005625203.localdomain podman[78822]: 2026-02-20 08:16:50.006239853 +0000 UTC m=+0.313653234 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:07:30Z, release=1766032510, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, description=Red Hat 
OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, tcib_managed=true, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.41.5, config_id=tripleo_step4, io.openshift.expose-services=)
Feb 20 08:16:50 np0005625203.localdomain podman[78821]: 2026-02-20 08:16:50.006581983 +0000 UTC m=+0.317605807 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, distribution-scope=public, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 
17.1_20260112.1, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, batch=17.1_20260112.1, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, build-date=2026-01-12T23:07:47Z, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:07:47Z, config_id=tripleo_step4, architecture=x86_64, name=rhosp-rhel9/openstack-ceilometer-compute)
Feb 20 08:16:50 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:16:50 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:16:50 np0005625203.localdomain sshd[78679]: error: maximum authentication attempts exceeded for invalid user admin from 185.246.128.171 port 29879 ssh2 [preauth]
Feb 20 08:16:50 np0005625203.localdomain sshd[78679]: Disconnecting invalid user admin 185.246.128.171 port 29879: Too many authentication failures [preauth]
Feb 20 08:16:52 np0005625203.localdomain sshd[78939]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:16:52 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:16:52 np0005625203.localdomain systemd[1]: tmp-crun.wS2G4X.mount: Deactivated successfully.
Feb 20 08:16:52 np0005625203.localdomain podman[78940]: 2026-02-20 08:16:52.767569222 +0000 UTC m=+0.087061940 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, build-date=2026-01-12T23:32:04Z, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., batch=17.1_20260112.1, managed_by=tripleo_ansible, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:16:53 np0005625203.localdomain podman[78940]: 2026-02-20 08:16:53.140302314 +0000 UTC m=+0.459794972 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2026-01-12T23:32:04Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, 
org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, distribution-scope=public, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, batch=17.1_20260112.1)
Feb 20 08:16:53 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:16:53 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:16:53 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:16:53 np0005625203.localdomain systemd[1]: tmp-crun.aLlInl.mount: Deactivated successfully.
Feb 20 08:16:53 np0005625203.localdomain podman[78963]: 2026-02-20 08:16:53.773518474 +0000 UTC m=+0.091524319 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, build-date=2026-01-12T22:56:19Z, org.opencontainers.image.created=2026-01-12T22:56:19Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, vendor=Red Hat, Inc., org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, version=17.1.13, managed_by=tripleo_ansible, distribution-scope=public, container_name=ovn_metadata_agent, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, config_id=tripleo_step4, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:16:53 np0005625203.localdomain podman[78964]: 2026-02-20 08:16:53.831019174 +0000 UTC m=+0.143969332 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., container_name=ovn_controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-type=git, io.buildah.version=1.41.5, url=https://www.redhat.com, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20260112.1, distribution-scope=public, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, release=1766032510, name=rhosp-rhel9/openstack-ovn-controller, build-date=2026-01-12T22:36:40Z, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller)
Feb 20 08:16:53 np0005625203.localdomain podman[78963]: 2026-02-20 08:16:53.844352739 +0000 UTC m=+0.162358584 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, 
distribution-scope=public, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, maintainer=OpenStack TripleO Team, release=1766032510, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, version=17.1.13, managed_by=tripleo_ansible, architecture=x86_64, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:56:19Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com)
Feb 20 08:16:53 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Deactivated successfully.
Feb 20 08:16:53 np0005625203.localdomain podman[78964]: 2026-02-20 08:16:53.877688867 +0000 UTC m=+0.190638965 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, container_name=ovn_controller, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, name=rhosp-rhel9/openstack-ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, vcs-type=git, vendor=Red Hat, Inc., 
org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, maintainer=OpenStack TripleO Team, distribution-scope=public, batch=17.1_20260112.1, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container)
Feb 20 08:16:53 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Deactivated successfully.
Feb 20 08:16:55 np0005625203.localdomain sshd[78939]: Invalid user admin from 185.246.128.171 port 23684
Feb 20 08:16:57 np0005625203.localdomain systemd[1]: libpod-a18e75c13e54ab0acc4d393db54123059f2764f2c6d4403946462f60d189712f.scope: Deactivated successfully.
Feb 20 08:16:57 np0005625203.localdomain podman[77084]: 2026-02-20 08:16:57.521469535 +0000 UTC m=+192.633663349 container died a18e75c13e54ab0acc4d393db54123059f2764f2c6d4403946462f60d189712f (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, version=17.1.13, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, batch=17.1_20260112.1, managed_by=tripleo_ansible, build-date=2026-01-12T23:32:04Z, container_name=nova_wait_for_compute_service, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe)
Feb 20 08:16:57 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a18e75c13e54ab0acc4d393db54123059f2764f2c6d4403946462f60d189712f-userdata-shm.mount: Deactivated successfully.
Feb 20 08:16:57 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-d5d417fdd363e485b6316b065395adc69531d816fbea7fef0eb296ef13ea44a2-merged.mount: Deactivated successfully.
Feb 20 08:16:57 np0005625203.localdomain podman[79010]: 2026-02-20 08:16:57.612284262 +0000 UTC m=+0.075553072 container cleanup a18e75c13e54ab0acc4d393db54123059f2764f2c6d4403946462f60d189712f (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_wait_for_compute_service, container_name=nova_wait_for_compute_service, managed_by=tripleo_ansible, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, version=17.1.13, config_id=tripleo_step5, architecture=x86_64, tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']}, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Feb 20 08:16:57 np0005625203.localdomain systemd[1]: libpod-conmon-a18e75c13e54ab0acc4d393db54123059f2764f2c6d4403946462f60d189712f.scope: Deactivated successfully.
Feb 20 08:16:57 np0005625203.localdomain python3[76871]: ansible-tripleo_container_manage PODMAN-CONTAINER-DEBUG: podman run --name nova_wait_for_compute_service --conmon-pidfile /run/nova_wait_for_compute_service.pid --detach=False --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env __OS_DEBUG=true --env TRIPLEO_CONFIG_HASH=2eb7e8e9794eebaba92e1ff8facc8868 --label config_id=tripleo_step5 --label container_name=nova_wait_for_compute_service --label managed_by=tripleo_ansible --label config_data={'detach': False, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', '__OS_DEBUG': 'true', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'start_order': 4, 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/log/containers/nova:/var/log/nova', '/var/lib/container-config-scripts:/container-config-scripts']} --log-driver k8s-file --log-opt path=/var/log/containers/stdouts/nova_wait_for_compute_service.log --network host --user nova --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /etc/puppet:/etc/puppet:ro --volume /var/lib/kolla/config_files/nova_compute_wait_for_compute_service.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro --volume /var/log/containers/nova:/var/log/nova --volume /var/lib/container-config-scripts:/container-config-scripts registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1
Feb 20 08:16:57 np0005625203.localdomain sudo[76869]: pam_unix(sudo:session): session closed for user root
Feb 20 08:16:58 np0005625203.localdomain sudo[79060]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lajeylcjeljptvcfzzdiruwtbwepydxb ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:16:58 np0005625203.localdomain sudo[79060]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:16:58 np0005625203.localdomain python3[79062]: ansible-file Invoked with path=/etc/systemd/system/tripleo_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:16:58 np0005625203.localdomain sudo[79060]: pam_unix(sudo:session): session closed for user root
Feb 20 08:16:58 np0005625203.localdomain sudo[79076]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lehhlpqcsdbngwqeqwhiheobuuyvxyvj ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:16:58 np0005625203.localdomain sudo[79076]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:16:58 np0005625203.localdomain sshd[78939]: error: maximum authentication attempts exceeded for invalid user admin from 185.246.128.171 port 23684 ssh2 [preauth]
Feb 20 08:16:58 np0005625203.localdomain sshd[78939]: Disconnecting invalid user admin 185.246.128.171 port 23684: Too many authentication failures [preauth]
Feb 20 08:16:58 np0005625203.localdomain python3[79078]: ansible-stat Invoked with path=/etc/systemd/system/tripleo_nova_compute_healthcheck.timer follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Feb 20 08:16:58 np0005625203.localdomain sudo[79076]: pam_unix(sudo:session): session closed for user root
Feb 20 08:16:59 np0005625203.localdomain sudo[79137]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ednfrayckvmoqamybhackizbkgkxbkin ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:16:59 np0005625203.localdomain sudo[79137]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:16:59 np0005625203.localdomain python3[79139]: ansible-copy Invoked with src=/home/tripleo-admin/.ansible/tmp/ansible-tmp-1771575418.6316233-118842-164818572388159/source dest=/etc/systemd/system/tripleo_nova_compute.service mode=0644 owner=root group=root backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:16:59 np0005625203.localdomain sudo[79137]: pam_unix(sudo:session): session closed for user root
Feb 20 08:16:59 np0005625203.localdomain sudo[79153]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tgqsqdmdxlwrhiojlrxvihevfnnafdcp ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:16:59 np0005625203.localdomain sudo[79153]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:16:59 np0005625203.localdomain python3[79155]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 20 08:16:59 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 08:16:59 np0005625203.localdomain systemd-rc-local-generator[79177]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 08:16:59 np0005625203.localdomain systemd-sysv-generator[79181]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 08:16:59 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 08:16:59 np0005625203.localdomain sudo[79153]: pam_unix(sudo:session): session closed for user root
Feb 20 08:17:00 np0005625203.localdomain sshd[79207]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:17:00 np0005625203.localdomain sudo[79205]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-equqggdeqzlhhcplzeophagdfsvloacp ; TRIPLEO_MINOR_UPDATE=False /usr/bin/python3
Feb 20 08:17:00 np0005625203.localdomain sudo[79205]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:17:00 np0005625203.localdomain python3[79208]: ansible-systemd Invoked with state=restarted name=tripleo_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 08:17:00 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 08:17:00 np0005625203.localdomain systemd-rc-local-generator[79237]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 08:17:00 np0005625203.localdomain systemd-sysv-generator[79241]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 08:17:00 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 08:17:01 np0005625203.localdomain systemd[1]: Starting nova_compute container...
Feb 20 08:17:01 np0005625203.localdomain tripleo-start-podman-container[79248]: Creating additional drop-in dependency for "nova_compute" (31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4)
Feb 20 08:17:01 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 08:17:01 np0005625203.localdomain systemd-rc-local-generator[79306]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 08:17:01 np0005625203.localdomain systemd-sysv-generator[79309]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 08:17:01 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 08:17:01 np0005625203.localdomain systemd[1]: Started nova_compute container.
Feb 20 08:17:01 np0005625203.localdomain sudo[79205]: pam_unix(sudo:session): session closed for user root
Feb 20 08:17:01 np0005625203.localdomain sudo[79345]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wxqligvxfcnaqkinmofztyrngdznjsjz ; /usr/bin/python3
Feb 20 08:17:01 np0005625203.localdomain sudo[79345]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:17:02 np0005625203.localdomain python3[79347]: ansible-file Invoked with path=/var/lib/container-puppet/container-puppet-tasks5.json state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:17:02 np0005625203.localdomain sudo[79345]: pam_unix(sudo:session): session closed for user root
Feb 20 08:17:02 np0005625203.localdomain sudo[79393]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eclleoenqbxbodvnutdjautvmuqcubwt ; /usr/bin/python3
Feb 20 08:17:02 np0005625203.localdomain sudo[79393]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:17:02 np0005625203.localdomain sudo[79393]: pam_unix(sudo:session): session closed for user root
Feb 20 08:17:03 np0005625203.localdomain sshd[79207]: Invalid user admin from 185.246.128.171 port 6337
Feb 20 08:17:03 np0005625203.localdomain sudo[79436]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iksgtikinucjxbaeyaapyoxeajsopxhn ; /usr/bin/python3
Feb 20 08:17:03 np0005625203.localdomain sudo[79436]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:17:03 np0005625203.localdomain sshd[79439]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:17:03 np0005625203.localdomain sudo[79436]: pam_unix(sudo:session): session closed for user root
Feb 20 08:17:03 np0005625203.localdomain sshd[79439]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:17:03 np0005625203.localdomain sudo[79468]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-thmzsfqrxwhrlesariqusmaonyzohvse ; /usr/bin/python3
Feb 20 08:17:03 np0005625203.localdomain sudo[79468]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:17:03 np0005625203.localdomain python3[79470]: ansible-container_puppet_config Invoked with check_mode=False config_vol_prefix=/var/lib/config-data debug=True net_host=True no_archive=True puppet_config=/var/lib/container-puppet/container-puppet-tasks5.json short_hostname=np0005625203 step=5 update_config_hash_only=False
Feb 20 08:17:03 np0005625203.localdomain sudo[79468]: pam_unix(sudo:session): session closed for user root
Feb 20 08:17:04 np0005625203.localdomain sudo[79484]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gnhisuhlcluwmxvawqubbyrokkxkesat ; /usr/bin/python3
Feb 20 08:17:04 np0005625203.localdomain sudo[79484]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:17:04 np0005625203.localdomain python3[79486]: ansible-file Invoked with path=/var/log/containers/stdouts state=directory owner=root group=root recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 08:17:04 np0005625203.localdomain sudo[79484]: pam_unix(sudo:session): session closed for user root
Feb 20 08:17:04 np0005625203.localdomain sudo[79500]: tripleo-admin : PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cvhspkhhoasoabokepizfjdhibqkyhbc ; /usr/bin/python3
Feb 20 08:17:04 np0005625203.localdomain sudo[79500]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1003)
Feb 20 08:17:04 np0005625203.localdomain python3[79502]: ansible-container_config_data Invoked with config_path=/var/lib/tripleo-config/container-puppet-config/step_5 config_pattern=container-puppet-*.json config_overrides={} debug=True
Feb 20 08:17:04 np0005625203.localdomain sudo[79500]: pam_unix(sudo:session): session closed for user root
Feb 20 08:17:04 np0005625203.localdomain sshd[79207]: error: maximum authentication attempts exceeded for invalid user admin from 185.246.128.171 port 6337 ssh2 [preauth]
Feb 20 08:17:04 np0005625203.localdomain sshd[79207]: Disconnecting invalid user admin 185.246.128.171 port 6337: Too many authentication failures [preauth]
Feb 20 08:17:05 np0005625203.localdomain sshd[79503]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:17:07 np0005625203.localdomain sshd[79503]: Invalid user admin from 185.246.128.171 port 50062
Feb 20 08:17:12 np0005625203.localdomain sshd[79503]: error: maximum authentication attempts exceeded for invalid user admin from 185.246.128.171 port 50062 ssh2 [preauth]
Feb 20 08:17:12 np0005625203.localdomain sshd[79503]: Disconnecting invalid user admin 185.246.128.171 port 50062: Too many authentication failures [preauth]
Feb 20 08:17:14 np0005625203.localdomain sshd[79505]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:17:16 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:17:16 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:17:16 np0005625203.localdomain podman[79507]: 2026-02-20 08:17:16.780632727 +0000 UTC m=+0.094835212 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, architecture=x86_64, batch=17.1_20260112.1, vendor=Red Hat, Inc., container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee)
Feb 20 08:17:16 np0005625203.localdomain systemd[1]: tmp-crun.Oqm78z.mount: Deactivated successfully.
Feb 20 08:17:16 np0005625203.localdomain podman[79508]: 2026-02-20 08:17:16.836994772 +0000 UTC m=+0.149287338 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, managed_by=tripleo_ansible, release=1766032510, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, architecture=x86_64, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, build-date=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, url=https://www.redhat.com, name=rhosp-rhel9/openstack-iscsid, io.openshift.expose-services=)
Feb 20 08:17:16 np0005625203.localdomain podman[79507]: 2026-02-20 08:17:16.841512573 +0000 UTC m=+0.155715128 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, com.redhat.component=openstack-collectd-container, tcib_managed=true, name=rhosp-rhel9/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.buildah.version=1.41.5, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, batch=17.1_20260112.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:17:16 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:17:16 np0005625203.localdomain podman[79508]: 2026-02-20 08:17:16.877313607 +0000 UTC m=+0.189606133 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, io.openshift.expose-services=, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., 
org.opencontainers.image.created=2026-01-12T22:34:43Z, architecture=x86_64, release=1766032510, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:17:16 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:17:16 np0005625203.localdomain sshd[79505]: Invalid user admin from 185.246.128.171 port 61281
Feb 20 08:17:17 np0005625203.localdomain sudo[79544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:17:17 np0005625203.localdomain sudo[79544]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:17:17 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:17:17 np0005625203.localdomain sudo[79544]: pam_unix(sudo:session): session closed for user root
Feb 20 08:17:17 np0005625203.localdomain sudo[79568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 08:17:17 np0005625203.localdomain sudo[79568]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:17:17 np0005625203.localdomain podman[79559]: 2026-02-20 08:17:17.623063849 +0000 UTC m=+0.085064168 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1766032510, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, build-date=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, vcs-type=git, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:14Z, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public)
Feb 20 08:17:17 np0005625203.localdomain podman[79559]: 2026-02-20 08:17:17.809246064 +0000 UTC m=+0.271246353 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20260112.1, io.buildah.version=1.41.5, managed_by=tripleo_ansible, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, architecture=x86_64, build-date=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, url=https://www.redhat.com, 
vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_id=tripleo_step1, container_name=metrics_qdr, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1766032510, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13)
Feb 20 08:17:17 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:17:18 np0005625203.localdomain sudo[79568]: pam_unix(sudo:session): session closed for user root
Feb 20 08:17:18 np0005625203.localdomain sudo[79634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 08:17:18 np0005625203.localdomain sudo[79634]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:17:18 np0005625203.localdomain sudo[79634]: pam_unix(sudo:session): session closed for user root
Feb 20 08:17:20 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:17:20 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:17:20 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:17:20 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:17:20 np0005625203.localdomain systemd[1]: tmp-crun.CZuJeI.mount: Deactivated successfully.
Feb 20 08:17:20 np0005625203.localdomain podman[79652]: 2026-02-20 08:17:20.780925693 +0000 UTC m=+0.089930700 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_id=tripleo_step4, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:07:30Z, distribution-scope=public, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.5, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, release=1766032510, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, vendor=Red Hat, Inc., build-date=2026-01-12T23:07:30Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:17:20 np0005625203.localdomain podman[79652]: 2026-02-20 08:17:20.804850558 +0000 UTC m=+0.113855585 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, 
name=rhosp-rhel9/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., config_id=tripleo_step4, managed_by=tripleo_ansible, architecture=x86_64, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, tcib_managed=true, version=17.1.13, io.buildah.version=1.41.5, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:30Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, release=1766032510, build-date=2026-01-12T23:07:30Z, url=https://www.redhat.com)
Feb 20 08:17:20 np0005625203.localdomain podman[79649]: 2026-02-20 08:17:20.830042632 +0000 UTC m=+0.146640826 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, config_id=tripleo_step4, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, batch=17.1_20260112.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, 
name=rhosp-rhel9/openstack-cron, architecture=x86_64, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, com.redhat.component=openstack-cron-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team)
Feb 20 08:17:20 np0005625203.localdomain podman[79649]: 2026-02-20 08:17:20.842313484 +0000 UTC m=+0.158911698 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, version=17.1.13, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, vcs-type=git, vendor=Red Hat, Inc., release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, name=rhosp-rhel9/openstack-cron, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.5)
Feb 20 08:17:20 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:17:20 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:17:20 np0005625203.localdomain podman[79651]: 2026-02-20 08:17:20.936143745 +0000 UTC m=+0.245443991 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, org.opencontainers.image.created=2026-01-12T23:07:47Z, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp-rhel9/openstack-ceilometer-compute, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, version=17.1.13, batch=17.1_20260112.1, 
org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., release=1766032510, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, io.buildah.version=1.41.5)
Feb 20 08:17:20 np0005625203.localdomain podman[79650]: 2026-02-20 08:17:20.991021593 +0000 UTC m=+0.301830107 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, version=17.1.13, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, distribution-scope=public, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z)
Feb 20 08:17:20 np0005625203.localdomain podman[79651]: 2026-02-20 08:17:20.996308427 +0000 UTC m=+0.305608653 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, vcs-type=git, name=rhosp-rhel9/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, config_id=tripleo_step4, 
vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:07:47Z, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:17:21 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:17:21 np0005625203.localdomain podman[79650]: 2026-02-20 08:17:21.050361379 +0000 UTC m=+0.361169823 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, config_id=tripleo_step5, release=1766032510, version=17.1.13, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Feb 20 08:17:21 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:17:22 np0005625203.localdomain sshd[79505]: error: maximum authentication attempts exceeded for invalid user admin from 185.246.128.171 port 61281 ssh2 [preauth]
Feb 20 08:17:22 np0005625203.localdomain sshd[79505]: Disconnecting invalid user admin 185.246.128.171 port 61281: Too many authentication failures [preauth]
Feb 20 08:17:23 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:17:23 np0005625203.localdomain podman[79748]: 2026-02-20 08:17:23.755627406 +0000 UTC m=+0.075395898 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, release=1766032510, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, batch=17.1_20260112.1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, container_name=nova_migration_target, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute)
Feb 20 08:17:24 np0005625203.localdomain podman[79748]: 2026-02-20 08:17:24.12940911 +0000 UTC m=+0.449177622 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, name=rhosp-rhel9/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, container_name=nova_migration_target, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z)
Feb 20 08:17:24 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:17:24 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:17:24 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:17:24 np0005625203.localdomain podman[79772]: 2026-02-20 08:17:24.2395765 +0000 UTC m=+0.083245073 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-type=git, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, architecture=x86_64, 
io.openshift.expose-services=, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, url=https://www.redhat.com, container_name=ovn_metadata_agent, tcib_managed=true, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.5, managed_by=tripleo_ansible, release=1766032510, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Feb 20 08:17:24 np0005625203.localdomain podman[79773]: 2026-02-20 08:17:24.288712449 +0000 UTC m=+0.129050508 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, container_name=ovn_controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, batch=17.1_20260112.1, architecture=x86_64, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ovn-controller, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, io.openshift.expose-services=, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4)
Feb 20 08:17:24 np0005625203.localdomain podman[79772]: 2026-02-20 08:17:24.306957007 +0000 UTC m=+0.150625640 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1766032510, distribution-scope=public, build-date=2026-01-12T22:56:19Z, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ovn_metadata_agent, url=https://www.redhat.com, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.5, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.expose-services=, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team)
Feb 20 08:17:24 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Deactivated successfully.
Feb 20 08:17:24 np0005625203.localdomain podman[79773]: 2026-02-20 08:17:24.360975478 +0000 UTC m=+0.201313517 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20260112.1, tcib_managed=true, url=https://www.redhat.com, vcs-type=git, release=1766032510, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, name=rhosp-rhel9/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, managed_by=tripleo_ansible, build-date=2026-01-12T22:36:40Z, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, version=17.1.13, io.openshift.expose-services=, io.buildah.version=1.41.5, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64)
Feb 20 08:17:24 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Deactivated successfully.
Feb 20 08:17:25 np0005625203.localdomain sshd[79818]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:17:27 np0005625203.localdomain sshd[79818]: Invalid user admin from 185.246.128.171 port 57634
Feb 20 08:17:29 np0005625203.localdomain sshd[79820]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:17:30 np0005625203.localdomain sshd[79820]: Accepted publickey for zuul from 192.168.122.100 port 33206 ssh2: RSA SHA256:uaxCfdsB9dLyL4znjsLCPDTT6O+6xZzjHRjczJFOA6k
Feb 20 08:17:30 np0005625203.localdomain systemd-logind[759]: New session 33 of user zuul.
Feb 20 08:17:30 np0005625203.localdomain systemd[1]: Started Session 33 of User zuul.
Feb 20 08:17:30 np0005625203.localdomain sshd[79820]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Feb 20 08:17:30 np0005625203.localdomain sudo[79927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ossmksvukngsublenxyeffxxgootcwxo ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771575450.179262-42409-180222413708875/AnsiballZ_setup.py
Feb 20 08:17:30 np0005625203.localdomain sudo[79927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 08:17:30 np0005625203.localdomain sshd[79818]: error: maximum authentication attempts exceeded for invalid user admin from 185.246.128.171 port 57634 ssh2 [preauth]
Feb 20 08:17:30 np0005625203.localdomain sshd[79818]: Disconnecting invalid user admin 185.246.128.171 port 57634: Too many authentication failures [preauth]
Feb 20 08:17:30 np0005625203.localdomain python3[79929]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 20 08:17:31 np0005625203.localdomain sshd[79963]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:17:33 np0005625203.localdomain sshd[79963]: Invalid user admin from 185.246.128.171 port 62071
Feb 20 08:17:33 np0005625203.localdomain sudo[79927]: pam_unix(sudo:session): session closed for user root
Feb 20 08:17:38 np0005625203.localdomain sshd[79963]: error: maximum authentication attempts exceeded for invalid user admin from 185.246.128.171 port 62071 ssh2 [preauth]
Feb 20 08:17:38 np0005625203.localdomain sshd[79963]: Disconnecting invalid user admin 185.246.128.171 port 62071: Too many authentication failures [preauth]
Feb 20 08:17:38 np0005625203.localdomain sudo[80192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-scbkibukiceoqpfhvqruquflnsyfzkwy ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771575457.9687839-42505-231496641638266/AnsiballZ_dnf.py
Feb 20 08:17:38 np0005625203.localdomain sudo[80192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 08:17:38 np0005625203.localdomain python3[80194]: ansible-ansible.legacy.dnf Invoked with name=['iptables'] allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None state=None
Feb 20 08:17:39 np0005625203.localdomain sshd[80196]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:17:40 np0005625203.localdomain sshd[80196]: Invalid user admin from 185.246.128.171 port 38301
Feb 20 08:17:41 np0005625203.localdomain sudo[80192]: pam_unix(sudo:session): session closed for user root
Feb 20 08:17:42 np0005625203.localdomain sshd[80196]: error: maximum authentication attempts exceeded for invalid user admin from 185.246.128.171 port 38301 ssh2 [preauth]
Feb 20 08:17:42 np0005625203.localdomain sshd[80196]: Disconnecting invalid user admin 185.246.128.171 port 38301: Too many authentication failures [preauth]
Feb 20 08:17:44 np0005625203.localdomain sshd[80213]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:17:46 np0005625203.localdomain sudo[80289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-arhgugjvjjgcagcjaadjiquuaenmmhko ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771575465.8753636-42558-48341046562770/AnsiballZ_iptables.py
Feb 20 08:17:46 np0005625203.localdomain sudo[80289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 08:17:46 np0005625203.localdomain sshd[80213]: Invalid user admin from 185.246.128.171 port 41900
Feb 20 08:17:46 np0005625203.localdomain python3[80291]: ansible-ansible.builtin.iptables Invoked with action=insert chain=INPUT comment=allow ssh access for zuul executor in_interface=eth0 jump=ACCEPT protocol=tcp source=38.102.83.114 table=filter state=present ip_version=ipv4 match=[] destination_ports=[] ctstate=[] syn=ignore flush=False chain_management=False numeric=False rule_num=None wait=None to_source=None destination=None to_destination=None tcp_flags=None gateway=None log_prefix=None log_level=None goto=None out_interface=None fragment=None set_counters=None source_port=None destination_port=None to_ports=None set_dscp_mark=None set_dscp_mark_class=None src_range=None dst_range=None match_set=None match_set_flags=None limit=None limit_burst=None uid_owner=None gid_owner=None reject_with=None icmp_type=None policy=None
Feb 20 08:17:46 np0005625203.localdomain kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Feb 20 08:17:46 np0005625203.localdomain systemd-journald[48285]: Field hash table of /run/log/journal/01f46965e72fd8a157841feaa66c8d52/system.journal has a fill level at 81.1 (270 of 333 items), suggesting rotation.
Feb 20 08:17:46 np0005625203.localdomain systemd-journald[48285]: /run/log/journal/01f46965e72fd8a157841feaa66c8d52/system.journal: Journal header limits reached or header out-of-date, rotating.
Feb 20 08:17:46 np0005625203.localdomain rsyslogd[758]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Feb 20 08:17:46 np0005625203.localdomain rsyslogd[758]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Feb 20 08:17:46 np0005625203.localdomain sudo[80289]: pam_unix(sudo:session): session closed for user root
Feb 20 08:17:47 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:17:47 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:17:47 np0005625203.localdomain podman[80338]: 2026-02-20 08:17:47.797722265 +0000 UTC m=+0.100126258 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=705339545363fec600102567c4e923938e0f43b3, release=1766032510, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, tcib_managed=true, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.buildah.version=1.41.5)
Feb 20 08:17:47 np0005625203.localdomain podman[80338]: 2026-02-20 08:17:47.839737713 +0000 UTC m=+0.142141706 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, config_id=tripleo_step3, url=https://www.redhat.com, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, distribution-scope=public, release=1766032510, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, tcib_managed=true, batch=17.1_20260112.1, 
org.opencontainers.image.created=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, vcs-type=git, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid)
Feb 20 08:17:47 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:17:47 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:17:47 np0005625203.localdomain podman[80337]: 2026-02-20 08:17:47.902993282 +0000 UTC m=+0.205225029 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.buildah.version=1.41.5, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-collectd, io.openshift.expose-services=, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.created=2026-01-12T22:10:15Z, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, version=17.1.13)
Feb 20 08:17:47 np0005625203.localdomain podman[80337]: 2026-02-20 08:17:47.917582946 +0000 UTC m=+0.219814683 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2026-01-12T22:10:15Z, architecture=x86_64, vcs-type=git, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 
collectd, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, name=rhosp-rhel9/openstack-collectd, config_id=tripleo_step3, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:17:47 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:17:47 np0005625203.localdomain podman[80370]: 2026-02-20 08:17:47.982653701 +0000 UTC m=+0.099326332 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-type=git, release=1766032510, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.openshift.expose-services=, build-date=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, container_name=metrics_qdr, name=rhosp-rhel9/openstack-qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., url=https://www.redhat.com, io.buildah.version=1.41.5, managed_by=tripleo_ansible, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:17:47 np0005625203.localdomain sshd[80394]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:17:48 np0005625203.localdomain sshd[80394]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:17:48 np0005625203.localdomain podman[80370]: 2026-02-20 08:17:48.192594716 +0000 UTC m=+0.309267367 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-type=git, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 
qdrouterd, io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, version=17.1.13, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:17:48 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:17:48 np0005625203.localdomain sshd[80429]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:17:48 np0005625203.localdomain sshd[80429]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:17:49 np0005625203.localdomain sshd[80213]: error: maximum authentication attempts exceeded for invalid user admin from 185.246.128.171 port 41900 ssh2 [preauth]
Feb 20 08:17:49 np0005625203.localdomain sshd[80213]: Disconnecting invalid user admin 185.246.128.171 port 41900: Too many authentication failures [preauth]
Feb 20 08:17:50 np0005625203.localdomain sshd[80431]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:17:51 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:17:51 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:17:51 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:17:51 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:17:51 np0005625203.localdomain podman[80433]: 2026-02-20 08:17:51.769015138 +0000 UTC m=+0.083421977 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, version=17.1.13, distribution-scope=public, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, container_name=logrotate_crond, managed_by=tripleo_ansible, 
name=rhosp-rhel9/openstack-cron, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, release=1766032510)
Feb 20 08:17:51 np0005625203.localdomain podman[80435]: 2026-02-20 08:17:51.782159638 +0000 UTC m=+0.085945136 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, name=rhosp-rhel9/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, build-date=2026-01-12T23:07:47Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, vcs-type=git, version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1766032510, batch=17.1_20260112.1, architecture=x86_64, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Feb 20 08:17:51 np0005625203.localdomain podman[80434]: 2026-02-20 08:17:51.833349701 +0000 UTC m=+0.142896099 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.buildah.version=1.41.5, release=1766032510, tcib_managed=true, version=17.1.13, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_compute, distribution-scope=public, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, vcs-type=git, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, build-date=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:17:51 np0005625203.localdomain podman[80435]: 2026-02-20 08:17:51.846419587 +0000 UTC m=+0.150205085 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ceilometer-compute-container, release=1766032510, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
ceilometer-compute, architecture=x86_64, tcib_managed=true, io.openshift.expose-services=, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., build-date=2026-01-12T23:07:47Z, org.opencontainers.image.created=2026-01-12T23:07:47Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, name=rhosp-rhel9/openstack-ceilometer-compute, batch=17.1_20260112.1, config_id=tripleo_step4, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:17:51 np0005625203.localdomain podman[80433]: 2026-02-20 08:17:51.853673103 +0000 UTC m=+0.168079962 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2026-01-12T22:10:15Z, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:15Z, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, batch=17.1_20260112.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, architecture=x86_64, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, vcs-type=git, container_name=logrotate_crond)
Feb 20 08:17:51 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:17:51 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:17:51 np0005625203.localdomain podman[80434]: 2026-02-20 08:17:51.86868822 +0000 UTC m=+0.178234648 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, version=17.1.13, name=rhosp-rhel9/openstack-nova-compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, batch=17.1_20260112.1, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 
'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_compute)
Feb 20 08:17:51 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:17:51 np0005625203.localdomain podman[80441]: 2026-02-20 08:17:51.940769444 +0000 UTC m=+0.241051824 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.5, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., url=https://www.redhat.com, build-date=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1766032510, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Feb 20 08:17:52 np0005625203.localdomain podman[80441]: 2026-02-20 08:17:52.000303347 +0000 UTC m=+0.300585667 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, release=1766032510, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.openshift.expose-services=, build-date=2026-01-12T23:07:30Z, name=rhosp-rhel9/openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, distribution-scope=public, vendor=Red Hat, Inc.)
Feb 20 08:17:52 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:17:52 np0005625203.localdomain sshd[80431]: Invalid user admin from 185.246.128.171 port 48094
Feb 20 08:17:52 np0005625203.localdomain systemd[1]: tmp-crun.LaEypB.mount: Deactivated successfully.
Feb 20 08:17:54 np0005625203.localdomain sshd[80431]: error: maximum authentication attempts exceeded for invalid user admin from 185.246.128.171 port 48094 ssh2 [preauth]
Feb 20 08:17:54 np0005625203.localdomain sshd[80431]: Disconnecting invalid user admin 185.246.128.171 port 48094: Too many authentication failures [preauth]
Feb 20 08:17:54 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:17:54 np0005625203.localdomain podman[80533]: 2026-02-20 08:17:54.390343742 +0000 UTC m=+0.085107261 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, version=17.1.13, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, batch=17.1_20260112.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., tcib_managed=true)
Feb 20 08:17:54 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:17:54 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:17:54 np0005625203.localdomain podman[80555]: 2026-02-20 08:17:54.510535642 +0000 UTC m=+0.085028627 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, version=17.1.13, container_name=ovn_metadata_agent, tcib_managed=true, release=1766032510, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, vcs-type=git, io.buildah.version=1.41.5, architecture=x86_64, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:17:54 np0005625203.localdomain podman[80556]: 2026-02-20 08:17:54.566413611 +0000 UTC m=+0.137967185 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, release=1766032510, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, container_name=ovn_controller, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:36:40Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
ovn-controller, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, batch=17.1_20260112.1)
Feb 20 08:17:54 np0005625203.localdomain podman[80556]: 2026-02-20 08:17:54.5962273 +0000 UTC m=+0.167780854 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp-rhel9/openstack-ovn-controller, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, vcs-type=git, release=1766032510, container_name=ovn_controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, batch=17.1_20260112.1, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:36:40Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, com.redhat.component=openstack-ovn-controller-container, 
io.buildah.version=1.41.5, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller)
Feb 20 08:17:54 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Deactivated successfully.
Feb 20 08:17:54 np0005625203.localdomain podman[80555]: 2026-02-20 08:17:54.634848932 +0000 UTC m=+0.209341977 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:56:19Z, release=1766032510, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, url=https://www.redhat.com, 
distribution-scope=public, container_name=ovn_metadata_agent, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, managed_by=tripleo_ansible)
Feb 20 08:17:54 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Deactivated successfully.
Feb 20 08:17:54 np0005625203.localdomain podman[80533]: 2026-02-20 08:17:54.764295871 +0000 UTC m=+0.459059410 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, container_name=nova_migration_target, version=17.1.13, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']})
Feb 20 08:17:54 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:17:55 np0005625203.localdomain sshd[80604]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:17:57 np0005625203.localdomain sshd[80604]: Invalid user admin from 185.246.128.171 port 23132
Feb 20 08:18:00 np0005625203.localdomain sshd[80604]: error: maximum authentication attempts exceeded for invalid user admin from 185.246.128.171 port 23132 ssh2 [preauth]
Feb 20 08:18:00 np0005625203.localdomain sshd[80604]: Disconnecting invalid user admin 185.246.128.171 port 23132: Too many authentication failures [preauth]
Feb 20 08:18:01 np0005625203.localdomain sshd[80606]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:18:02 np0005625203.localdomain anacron[19053]: Job `cron.monthly' started
Feb 20 08:18:02 np0005625203.localdomain anacron[19053]: Job `cron.monthly' terminated
Feb 20 08:18:02 np0005625203.localdomain anacron[19053]: Normal exit (3 jobs run)
Feb 20 08:18:04 np0005625203.localdomain sshd[80606]: Invalid user admin from 185.246.128.171 port 45121
Feb 20 08:18:08 np0005625203.localdomain sshd[80606]: error: maximum authentication attempts exceeded for invalid user admin from 185.246.128.171 port 45121 ssh2 [preauth]
Feb 20 08:18:08 np0005625203.localdomain sshd[80606]: Disconnecting invalid user admin 185.246.128.171 port 45121: Too many authentication failures [preauth]
Feb 20 08:18:09 np0005625203.localdomain sshd[80610]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:18:10 np0005625203.localdomain sshd[80610]: Invalid user admin from 185.246.128.171 port 33863
Feb 20 08:18:11 np0005625203.localdomain sshd[80610]: error: maximum authentication attempts exceeded for invalid user admin from 185.246.128.171 port 33863 ssh2 [preauth]
Feb 20 08:18:11 np0005625203.localdomain sshd[80610]: Disconnecting invalid user admin 185.246.128.171 port 33863: Too many authentication failures [preauth]
Feb 20 08:18:12 np0005625203.localdomain sshd[80612]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:18:14 np0005625203.localdomain sshd[80612]: Invalid user admin from 185.246.128.171 port 40400
Feb 20 08:18:18 np0005625203.localdomain sshd[80612]: error: maximum authentication attempts exceeded for invalid user admin from 185.246.128.171 port 40400 ssh2 [preauth]
Feb 20 08:18:18 np0005625203.localdomain sshd[80612]: Disconnecting invalid user admin 185.246.128.171 port 40400: Too many authentication failures [preauth]
Feb 20 08:18:18 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:18:18 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:18:18 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:18:18 np0005625203.localdomain systemd[1]: tmp-crun.9t2I9x.mount: Deactivated successfully.
Feb 20 08:18:18 np0005625203.localdomain podman[80615]: 2026-02-20 08:18:18.312601031 +0000 UTC m=+0.065092837 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:34:43Z, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, distribution-scope=public, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp-rhel9/openstack-iscsid, container_name=iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, vcs-ref=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, managed_by=tripleo_ansible, release=1766032510, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 iscsid)
Feb 20 08:18:18 np0005625203.localdomain podman[80616]: 2026-02-20 08:18:18.369634566 +0000 UTC m=+0.120894684 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, build-date=2026-01-12T22:10:14Z, release=1766032510, config_id=tripleo_step1, org.opencontainers.image.created=2026-01-12T22:10:14Z, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, io.buildah.version=1.41.5, url=https://www.redhat.com)
Feb 20 08:18:18 np0005625203.localdomain podman[80614]: 2026-02-20 08:18:18.342082918 +0000 UTC m=+0.091842699 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20260112.1, io.buildah.version=1.41.5, url=https://www.redhat.com, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-collectd, container_name=collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z, version=17.1.13, com.redhat.component=openstack-collectd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, release=1766032510, architecture=x86_64, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd)
Feb 20 08:18:18 np0005625203.localdomain podman[80614]: 2026-02-20 08:18:18.425450903 +0000 UTC m=+0.175210714 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, name=rhosp-rhel9/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.13, architecture=x86_64, io.buildah.version=1.41.5, vendor=Red Hat, Inc., distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2026-01-12T22:10:15Z, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd)
Feb 20 08:18:18 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:18:18 np0005625203.localdomain sshd[80677]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:18:18 np0005625203.localdomain podman[80615]: 2026-02-20 08:18:18.445637691 +0000 UTC m=+0.198129477 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, vcs-type=git, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, 
url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, build-date=2026-01-12T22:34:43Z, container_name=iscsid, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, distribution-scope=public)
Feb 20 08:18:18 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:18:18 np0005625203.localdomain podman[80616]: 2026-02-20 08:18:18.596775486 +0000 UTC m=+0.348035604 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, batch=17.1_20260112.1, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, url=https://www.redhat.com, architecture=x86_64, version=17.1.13, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:18:18 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:18:19 np0005625203.localdomain sudo[80679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:18:19 np0005625203.localdomain sudo[80679]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:18:19 np0005625203.localdomain sudo[80679]: pam_unix(sudo:session): session closed for user root
Feb 20 08:18:19 np0005625203.localdomain sudo[80694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 08:18:19 np0005625203.localdomain sudo[80694]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:18:19 np0005625203.localdomain systemd[1]: tmp-crun.FR1oTU.mount: Deactivated successfully.
Feb 20 08:18:19 np0005625203.localdomain sshd[80677]: Invalid user admin from 185.246.128.171 port 57579
Feb 20 08:18:19 np0005625203.localdomain sudo[80694]: pam_unix(sudo:session): session closed for user root
Feb 20 08:18:20 np0005625203.localdomain sudo[80742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 08:18:20 np0005625203.localdomain sudo[80742]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:18:20 np0005625203.localdomain sudo[80742]: pam_unix(sudo:session): session closed for user root
Feb 20 08:18:21 np0005625203.localdomain sshd[80677]: Disconnecting invalid user admin 185.246.128.171 port 57579: Change of username or service not allowed: (admin,ssh-connection) -> (api,ssh-connection) [preauth]
Feb 20 08:18:22 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:18:22 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:18:22 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:18:22 np0005625203.localdomain systemd[1]: tmp-crun.hT0D3t.mount: Deactivated successfully.
Feb 20 08:18:22 np0005625203.localdomain podman[80759]: 2026-02-20 08:18:22.104584242 +0000 UTC m=+0.086401080 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2026-01-12T23:07:47Z, vendor=Red Hat, Inc., config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, container_name=ceilometer_agent_compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, batch=17.1_20260112.1, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, io.buildah.version=1.41.5, url=https://www.redhat.com)
Feb 20 08:18:22 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:18:22 np0005625203.localdomain podman[80759]: 2026-02-20 08:18:22.135566977 +0000 UTC m=+0.117383845 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:47Z, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, vcs-type=git, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, build-date=2026-01-12T23:07:47Z, version=17.1.13, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06)
Feb 20 08:18:22 np0005625203.localdomain systemd[1]: tmp-crun.RrmqOt.mount: Deactivated successfully.
Feb 20 08:18:22 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:18:22 np0005625203.localdomain podman[80758]: 2026-02-20 08:18:22.165177638 +0000 UTC m=+0.146095868 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, io.buildah.version=1.41.5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2026-01-12T23:32:04Z, distribution-scope=public, tcib_managed=true, name=rhosp-rhel9/openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, config_id=tripleo_step5, container_name=nova_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 
'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1766032510, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Feb 20 08:18:22 np0005625203.localdomain podman[80796]: 2026-02-20 08:18:22.207290209 +0000 UTC m=+0.080770826 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, url=https://www.redhat.com, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, vcs-type=git, architecture=x86_64, version=17.1.13, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:07:30Z, batch=17.1_20260112.1, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=)
Feb 20 08:18:22 np0005625203.localdomain podman[80758]: 2026-02-20 08:18:22.229305354 +0000 UTC m=+0.210223584 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', 
'/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=nova_compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, config_id=tripleo_step5, version=17.1.13, batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, io.buildah.version=1.41.5, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, distribution-scope=public, release=1766032510, url=https://www.redhat.com, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Feb 20 08:18:22 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:18:22 np0005625203.localdomain podman[80757]: 2026-02-20 08:18:22.321243446 +0000 UTC m=+0.302525498 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 
org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, name=rhosp-rhel9/openstack-cron, vcs-type=git, tcib_managed=true, batch=17.1_20260112.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron)
Feb 20 08:18:22 np0005625203.localdomain podman[80757]: 2026-02-20 08:18:22.330203425 +0000 UTC m=+0.311485497 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp-rhel9/openstack-cron, version=17.1.13, io.buildah.version=1.41.5, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.component=openstack-cron-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, architecture=x86_64)
Feb 20 08:18:22 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:18:22 np0005625203.localdomain podman[80796]: 2026-02-20 08:18:22.384155784 +0000 UTC m=+0.257636351 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, container_name=ceilometer_agent_ipmi, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, release=1766032510, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20260112.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.13, build-date=2026-01-12T23:07:30Z, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:18:22 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:18:23 np0005625203.localdomain sshd[80853]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:18:24 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:18:24 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:18:24 np0005625203.localdomain podman[80855]: 2026-02-20 08:18:24.768054567 +0000 UTC m=+0.085849613 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.buildah.version=1.41.5, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, batch=17.1_20260112.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:56:19Z, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, build-date=2026-01-12T22:56:19Z, release=1766032510, tcib_managed=true)
Feb 20 08:18:24 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:18:24 np0005625203.localdomain podman[80855]: 2026-02-20 08:18:24.829220711 +0000 UTC m=+0.147015747 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, distribution-scope=public, release=1766032510, vcs-type=git, build-date=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, batch=17.1_20260112.1, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f)
Feb 20 08:18:24 np0005625203.localdomain podman[80856]: 2026-02-20 08:18:24.826396353 +0000 UTC m=+0.139931697 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, release=1766032510, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:36:40Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, managed_by=tripleo_ansible, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20260112.1, com.redhat.component=openstack-ovn-controller-container, version=17.1.13, build-date=2026-01-12T22:36:40Z, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:18:24 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Deactivated successfully.
Feb 20 08:18:24 np0005625203.localdomain podman[80856]: 2026-02-20 08:18:24.915958081 +0000 UTC m=+0.229493465 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:36:40Z, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, managed_by=tripleo_ansible, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ovn-controller, url=https://www.redhat.com, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2026-01-12T22:36:40Z, vcs-type=git, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, 
release=1766032510, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:18:24 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Deactivated successfully.
Feb 20 08:18:24 np0005625203.localdomain podman[80894]: 2026-02-20 08:18:24.920838943 +0000 UTC m=+0.085479212 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, architecture=x86_64, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:32:04Z, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, org.opencontainers.image.created=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, url=https://www.redhat.com, tcib_managed=true)
Feb 20 08:18:25 np0005625203.localdomain podman[80894]: 2026-02-20 08:18:25.282294824 +0000 UTC m=+0.446935133 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, build-date=2026-01-12T23:32:04Z, container_name=nova_migration_target, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, version=17.1.13, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute)
Feb 20 08:18:25 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:18:25 np0005625203.localdomain sshd[80853]: Invalid user api from 185.246.128.171 port 41287
Feb 20 08:18:26 np0005625203.localdomain sshd[80853]: Disconnecting invalid user api 185.246.128.171 port 41287: Change of username or service not allowed: (api,ssh-connection) -> (root2,ssh-connection) [preauth]
Feb 20 08:18:28 np0005625203.localdomain sshd[80925]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:18:30 np0005625203.localdomain sshd[80925]: Invalid user root2 from 185.246.128.171 port 41345
Feb 20 08:18:30 np0005625203.localdomain sshd[80925]: Disconnecting invalid user root2 185.246.128.171 port 41345: Change of username or service not allowed: (root2,ssh-connection) -> (staff,ssh-connection) [preauth]
Feb 20 08:18:32 np0005625203.localdomain sshd[80927]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:18:33 np0005625203.localdomain sshd[80929]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:18:33 np0005625203.localdomain sshd[80929]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:18:34 np0005625203.localdomain sshd[80927]: Invalid user staff from 185.246.128.171 port 18316
Feb 20 08:18:34 np0005625203.localdomain sshd[80927]: Disconnecting invalid user staff 185.246.128.171 port 18316: Change of username or service not allowed: (staff,ssh-connection) -> (omsagent,ssh-connection) [preauth]
Feb 20 08:18:35 np0005625203.localdomain sshd[80931]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:18:37 np0005625203.localdomain sshd[80931]: Invalid user omsagent from 185.246.128.171 port 29199
Feb 20 08:18:38 np0005625203.localdomain sshd[80931]: Disconnecting invalid user omsagent 185.246.128.171 port 29199: Change of username or service not allowed: (omsagent,ssh-connection) -> (backup,ssh-connection) [preauth]
Feb 20 08:18:40 np0005625203.localdomain sshd[80933]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:18:40 np0005625203.localdomain sshd[80935]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:18:41 np0005625203.localdomain sshd[80933]: Invalid user backup from 185.246.128.171 port 21499
Feb 20 08:18:41 np0005625203.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Feb 20 08:18:41 np0005625203.localdomain recover_tripleo_nova_virtqemud[80938]: 62505
Feb 20 08:18:41 np0005625203.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Feb 20 08:18:41 np0005625203.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Feb 20 08:18:42 np0005625203.localdomain sshd[80935]: Received disconnect from 103.171.84.20 port 40742:11: Bye Bye [preauth]
Feb 20 08:18:42 np0005625203.localdomain sshd[80935]: Disconnected from authenticating user root 103.171.84.20 port 40742 [preauth]
Feb 20 08:18:46 np0005625203.localdomain sshd[79820]: pam_unix(sshd:session): session closed for user zuul
Feb 20 08:18:46 np0005625203.localdomain systemd[1]: session-33.scope: Deactivated successfully.
Feb 20 08:18:46 np0005625203.localdomain systemd[1]: session-33.scope: Consumed 6.250s CPU time.
Feb 20 08:18:46 np0005625203.localdomain systemd-logind[759]: Session 33 logged out. Waiting for processes to exit.
Feb 20 08:18:46 np0005625203.localdomain systemd-logind[759]: Removed session 33.
Feb 20 08:18:46 np0005625203.localdomain sshd[80933]: error: maximum authentication attempts exceeded for invalid user backup from 185.246.128.171 port 21499 ssh2 [preauth]
Feb 20 08:18:46 np0005625203.localdomain sshd[80933]: Disconnecting invalid user backup 185.246.128.171 port 21499: Too many authentication failures [preauth]
Feb 20 08:18:48 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:18:48 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:18:48 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:18:48 np0005625203.localdomain sshd[80986]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:18:48 np0005625203.localdomain podman[80941]: 2026-02-20 08:18:48.778387379 +0000 UTC m=+0.096897537 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.5, version=17.1.13, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:15Z, 
batch=17.1_20260112.1, name=rhosp-rhel9/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, tcib_managed=true)
Feb 20 08:18:48 np0005625203.localdomain podman[80941]: 2026-02-20 08:18:48.848743449 +0000 UTC m=+0.167253607 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, 
vcs-type=git, name=rhosp-rhel9/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, version=17.1.13, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, container_name=collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:15Z, io.openshift.expose-services=, batch=17.1_20260112.1, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:18:48 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:18:48 np0005625203.localdomain podman[80943]: 2026-02-20 08:18:48.864940143 +0000 UTC m=+0.180588301 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, tcib_managed=true, io.openshift.expose-services=, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, release=1766032510, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., io.buildah.version=1.41.5)
Feb 20 08:18:48 np0005625203.localdomain podman[80942]: 2026-02-20 08:18:48.81470661 +0000 UTC m=+0.133099164 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.buildah.version=1.41.5, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, version=17.1.13, architecture=x86_64, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-iscsid, release=1766032510, com.redhat.component=openstack-iscsid-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, io.openshift.expose-services=, url=https://www.redhat.com, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 iscsid)
Feb 20 08:18:48 np0005625203.localdomain podman[80942]: 2026-02-20 08:18:48.944672526 +0000 UTC m=+0.263065120 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, distribution-scope=public, tcib_managed=true, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:34:43Z, architecture=x86_64, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-iscsid, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, release=1766032510, 
maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, url=https://www.redhat.com, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, io.openshift.expose-services=, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:18:48 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:18:49 np0005625203.localdomain podman[80943]: 2026-02-20 08:18:49.084304521 +0000 UTC m=+0.399952689 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.buildah.version=1.41.5, vendor=Red Hat, Inc., version=17.1.13, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:14Z, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true)
Feb 20 08:18:49 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:18:49 np0005625203.localdomain systemd[1]: tmp-crun.etZCQT.mount: Deactivated successfully.
Feb 20 08:18:51 np0005625203.localdomain sshd[80986]: Invalid user backup from 185.246.128.171 port 23359
Feb 20 08:18:52 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:18:52 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:18:52 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:18:52 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:18:52 np0005625203.localdomain systemd[1]: tmp-crun.yNpNcQ.mount: Deactivated successfully.
Feb 20 08:18:52 np0005625203.localdomain podman[81054]: 2026-02-20 08:18:52.764836704 +0000 UTC m=+0.084196531 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, tcib_managed=true, name=rhosp-rhel9/openstack-cron, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., managed_by=tripleo_ansible, distribution-scope=public, 
io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, container_name=logrotate_crond, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 cron)
Feb 20 08:18:52 np0005625203.localdomain systemd[1]: tmp-crun.oNodow.mount: Deactivated successfully.
Feb 20 08:18:52 np0005625203.localdomain podman[81055]: 2026-02-20 08:18:52.778989164 +0000 UTC m=+0.089057262 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, build-date=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, tcib_managed=true, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, version=17.1.13, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, managed_by=tripleo_ansible, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git)
Feb 20 08:18:52 np0005625203.localdomain podman[81054]: 2026-02-20 08:18:52.800253346 +0000 UTC m=+0.119613163 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-cron-container, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, build-date=2026-01-12T22:10:15Z, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 cron, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-cron, io.buildah.version=1.41.5, config_id=tripleo_step4)
Feb 20 08:18:52 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:18:52 np0005625203.localdomain podman[81060]: 2026-02-20 08:18:52.849271232 +0000 UTC m=+0.156421159 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, org.opencontainers.image.created=2026-01-12T23:07:47Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1766032510, io.openshift.expose-services=, url=https://www.redhat.com, batch=17.1_20260112.1, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.buildah.version=1.41.5)
Feb 20 08:18:52 np0005625203.localdomain podman[81060]: 2026-02-20 08:18:52.881821915 +0000 UTC m=+0.188971832 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, name=rhosp-rhel9/openstack-ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, version=17.1.13, build-date=2026-01-12T23:07:47Z, config_id=tripleo_step4, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:18:52 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:18:52 np0005625203.localdomain podman[81055]: 2026-02-20 08:18:52.906943567 +0000 UTC m=+0.217011665 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, architecture=x86_64, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, container_name=nova_compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, name=rhosp-rhel9/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.13, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:18:52 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:18:52 np0005625203.localdomain podman[81061]: 2026-02-20 08:18:52.883689664 +0000 UTC m=+0.187112935 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, release=1766032510, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20260112.1, vcs-type=git, build-date=2026-01-12T23:07:30Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, distribution-scope=public, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.openshift.expose-services=, tcib_managed=true)
Feb 20 08:18:52 np0005625203.localdomain podman[81061]: 2026-02-20 08:18:52.967253384 +0000 UTC m=+0.270676675 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1766032510, vendor=Red Hat, Inc., version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:07:30Z, batch=17.1_20260112.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ceilometer_agent_ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, tcib_managed=true, build-date=2026-01-12T23:07:30Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public)
Feb 20 08:18:52 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:18:54 np0005625203.localdomain sshd[80986]: Disconnecting invalid user backup 185.246.128.171 port 23359: Change of username or service not allowed: (backup,ssh-connection) -> (mit,ssh-connection) [preauth]
Feb 20 08:18:54 np0005625203.localdomain sshd[81149]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:18:55 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:18:55 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:18:55 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:18:55 np0005625203.localdomain podman[81152]: 2026-02-20 08:18:55.763829052 +0000 UTC m=+0.079542577 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, architecture=x86_64, url=https://www.redhat.com, version=17.1.13, batch=17.1_20260112.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:56:19Z, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.buildah.version=1.41.5, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ovn_metadata_agent, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, distribution-scope=public)
Feb 20 08:18:55 np0005625203.localdomain podman[81152]: 2026-02-20 08:18:55.811145345 +0000 UTC m=+0.126858850 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, tcib_managed=true, distribution-scope=public, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, container_name=ovn_metadata_agent, build-date=2026-01-12T22:56:19Z, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, batch=17.1_20260112.1, release=1766032510, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc.)
Feb 20 08:18:55 np0005625203.localdomain systemd[1]: tmp-crun.HhBkTs.mount: Deactivated successfully.
Feb 20 08:18:55 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Deactivated successfully.
Feb 20 08:18:55 np0005625203.localdomain podman[81151]: 2026-02-20 08:18:55.830160556 +0000 UTC m=+0.148859154 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.13, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, batch=17.1_20260112.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, 
description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, io.buildah.version=1.41.5, release=1766032510, name=rhosp-rhel9/openstack-nova-compute, managed_by=tripleo_ansible, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_migration_target, build-date=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc.)
Feb 20 08:18:55 np0005625203.localdomain podman[81153]: 2026-02-20 08:18:55.873275339 +0000 UTC m=+0.185321959 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.5, build-date=2026-01-12T22:36:40Z, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, architecture=x86_64, container_name=ovn_controller, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, name=rhosp-rhel9/openstack-ovn-controller, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, release=1766032510, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:36:40Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4)
Feb 20 08:18:55 np0005625203.localdomain podman[81153]: 2026-02-20 08:18:55.92020482 +0000 UTC m=+0.232251470 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ovn-controller-container, release=1766032510, container_name=ovn_controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, build-date=2026-01-12T22:36:40Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, architecture=x86_64, name=rhosp-rhel9/openstack-ovn-controller, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4)
Feb 20 08:18:55 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Deactivated successfully.
Feb 20 08:18:56 np0005625203.localdomain podman[81151]: 2026-02-20 08:18:56.229424254 +0000 UTC m=+0.548122832 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com, version=17.1.13, tcib_managed=true, config_id=tripleo_step4, 
release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, name=rhosp-rhel9/openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public)
Feb 20 08:18:56 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:18:56 np0005625203.localdomain sshd[81149]: Invalid user mit from 185.246.128.171 port 13781
Feb 20 08:18:56 np0005625203.localdomain sshd[81149]: Disconnecting invalid user mit 185.246.128.171 port 13781: Change of username or service not allowed: (mit,ssh-connection) -> (auditadm,ssh-connection) [preauth]
Feb 20 08:18:57 np0005625203.localdomain sshd[81217]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:18:57 np0005625203.localdomain sshd[81217]: Accepted publickey for zuul from 38.102.83.114 port 34760 ssh2: RSA SHA256:uaxCfdsB9dLyL4znjsLCPDTT6O+6xZzjHRjczJFOA6k
Feb 20 08:18:57 np0005625203.localdomain systemd-logind[759]: New session 34 of user zuul.
Feb 20 08:18:57 np0005625203.localdomain systemd[1]: Started Session 34 of User zuul.
Feb 20 08:18:57 np0005625203.localdomain sshd[81217]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Feb 20 08:18:58 np0005625203.localdomain sshd[81235]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:18:58 np0005625203.localdomain sudo[81234]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yosmwjxeqmlzligcnbqwhdnazzotxmjf ; /usr/bin/python3
Feb 20 08:18:58 np0005625203.localdomain sudo[81234]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 08:18:58 np0005625203.localdomain python3[81237]: ansible-ansible.legacy.dnf Invoked with name=['systemd-container'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb 20 08:19:00 np0005625203.localdomain sshd[81235]: Invalid user auditadm from 185.246.128.171 port 61076
Feb 20 08:19:01 np0005625203.localdomain sudo[81234]: pam_unix(sudo:session): session closed for user root
Feb 20 08:19:01 np0005625203.localdomain sshd[81235]: Disconnecting invalid user auditadm 185.246.128.171 port 61076: Change of username or service not allowed: (auditadm,ssh-connection) -> (config,ssh-connection) [preauth]
Feb 20 08:19:02 np0005625203.localdomain sshd[81240]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:19:05 np0005625203.localdomain sshd[81240]: Invalid user config from 185.246.128.171 port 38867
Feb 20 08:19:06 np0005625203.localdomain sshd[81240]: Disconnecting invalid user config 185.246.128.171 port 38867: Change of username or service not allowed: (config,ssh-connection) -> (wikijs,ssh-connection) [preauth]
Feb 20 08:19:07 np0005625203.localdomain sshd[81242]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:19:09 np0005625203.localdomain sshd[81242]: Invalid user wikijs from 185.246.128.171 port 7519
Feb 20 08:19:10 np0005625203.localdomain sshd[81242]: Disconnecting invalid user wikijs 185.246.128.171 port 7519: Change of username or service not allowed: (wikijs,ssh-connection) -> (esadmin,ssh-connection) [preauth]
Feb 20 08:19:11 np0005625203.localdomain sshd[81244]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:19:13 np0005625203.localdomain sshd[81244]: Invalid user esadmin from 185.246.128.171 port 2651
Feb 20 08:19:14 np0005625203.localdomain sshd[81244]: Disconnecting invalid user esadmin 185.246.128.171 port 2651: Change of username or service not allowed: (esadmin,ssh-connection) -> (Matthew,ssh-connection) [preauth]
Feb 20 08:19:16 np0005625203.localdomain sshd[81246]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:19:17 np0005625203.localdomain sshd[81247]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:19:17 np0005625203.localdomain sshd[81247]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:19:19 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:19:19 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:19:19 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:19:19 np0005625203.localdomain sshd[81253]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:19:19 np0005625203.localdomain podman[81250]: 2026-02-20 08:19:19.782557137 +0000 UTC m=+0.092397857 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=collectd, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3)
Feb 20 08:19:19 np0005625203.localdomain podman[81252]: 2026-02-20 08:19:19.82945685 +0000 UTC m=+0.139451355 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_id=tripleo_step1, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, io.buildah.version=1.41.5, 
vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, url=https://www.redhat.com, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-qdrouterd, container_name=metrics_qdr, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510)
Feb 20 08:19:19 np0005625203.localdomain sshd[81253]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:19:19 np0005625203.localdomain podman[81251]: 2026-02-20 08:19:19.885864555 +0000 UTC m=+0.194401256 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, distribution-scope=public, managed_by=tripleo_ansible, build-date=2026-01-12T22:34:43Z, tcib_managed=true, io.openshift.expose-services=, vcs-ref=705339545363fec600102567c4e923938e0f43b3, release=1766032510, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, config_id=tripleo_step3, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, 
com.redhat.component=openstack-iscsid-container, name=rhosp-rhel9/openstack-iscsid, batch=17.1_20260112.1, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:34:43Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:19:19 np0005625203.localdomain podman[81251]: 2026-02-20 08:19:19.894247211 +0000 UTC m=+0.202783872 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, build-date=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container, name=rhosp-rhel9/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.buildah.version=1.41.5, container_name=iscsid, url=https://www.redhat.com, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:34:43Z, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, release=1766032510, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3)
Feb 20 08:19:19 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:19:19 np0005625203.localdomain podman[81250]: 2026-02-20 08:19:19.949396238 +0000 UTC m=+0.259236928 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:15Z, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, com.redhat.component=openstack-collectd-container, batch=17.1_20260112.1, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, managed_by=tripleo_ansible)
Feb 20 08:19:19 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:19:20 np0005625203.localdomain podman[81252]: 2026-02-20 08:19:20.085382457 +0000 UTC m=+0.395376922 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-qdrouterd, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, config_id=tripleo_step1, release=1766032510, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:14Z, version=17.1.13, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd)
Feb 20 08:19:20 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:19:20 np0005625203.localdomain sshd[81246]: Invalid user Matthew from 185.246.128.171 port 48039
Feb 20 08:19:20 np0005625203.localdomain sudo[81321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:19:20 np0005625203.localdomain sudo[81321]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:19:20 np0005625203.localdomain sudo[81321]: pam_unix(sudo:session): session closed for user root
Feb 20 08:19:20 np0005625203.localdomain sudo[81336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 08:19:20 np0005625203.localdomain sudo[81336]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:19:21 np0005625203.localdomain sudo[81336]: pam_unix(sudo:session): session closed for user root
Feb 20 08:19:22 np0005625203.localdomain sshd[81246]: Disconnecting invalid user Matthew 185.246.128.171 port 48039: Change of username or service not allowed: (Matthew,ssh-connection) -> (netlink,ssh-connection) [preauth]
Feb 20 08:19:23 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:19:23 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:19:23 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:19:23 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:19:23 np0005625203.localdomain podman[81382]: 2026-02-20 08:19:23.770661564 +0000 UTC m=+0.086633660 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vendor=Red Hat, Inc., version=17.1.13, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_id=tripleo_step4, url=https://www.redhat.com, distribution-scope=public, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-cron, batch=17.1_20260112.1, 
managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, container_name=logrotate_crond)
Feb 20 08:19:23 np0005625203.localdomain podman[81382]: 2026-02-20 08:19:23.808382818 +0000 UTC m=+0.124354874 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-cron, build-date=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20260112.1, com.redhat.component=openstack-cron-container, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, release=1766032510, container_name=logrotate_crond, vendor=Red Hat, Inc., 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, config_id=tripleo_step4, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:19:23 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:19:23 np0005625203.localdomain podman[81383]: 2026-02-20 08:19:23.826251475 +0000 UTC m=+0.140458227 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, release=1766032510, org.opencontainers.image.created=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, version=17.1.13, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, managed_by=tripleo_ansible, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step5)
Feb 20 08:19:23 np0005625203.localdomain systemd[1]: tmp-crun.xxZ8CQ.mount: Deactivated successfully.
Feb 20 08:19:23 np0005625203.localdomain podman[81384]: 2026-02-20 08:19:23.885567828 +0000 UTC m=+0.199901453 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ceilometer_agent_compute, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:07:47Z, version=17.1.13, build-date=2026-01-12T23:07:47Z, io.buildah.version=1.41.5, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, distribution-scope=public, name=rhosp-rhel9/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., batch=17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Feb 20 08:19:23 np0005625203.localdomain podman[81384]: 2026-02-20 08:19:23.920964661 +0000 UTC m=+0.235298316 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, io.openshift.expose-services=, distribution-scope=public, batch=17.1_20260112.1, build-date=2026-01-12T23:07:47Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1766032510, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:47Z, version=17.1.13)
Feb 20 08:19:23 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:19:23 np0005625203.localdomain podman[81385]: 2026-02-20 08:19:23.9392541 +0000 UTC m=+0.249345756 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, tcib_managed=true, vcs-type=git, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, 
org.opencontainers.image.created=2026-01-12T23:07:30Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, managed_by=tripleo_ansible, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, io.buildah.version=1.41.5)
Feb 20 08:19:23 np0005625203.localdomain podman[81383]: 2026-02-20 08:19:23.944666846 +0000 UTC m=+0.258873598 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', 
'/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, release=1766032510, vcs-type=git, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-nova-compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.buildah.version=1.41.5, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:32:04Z, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-nova-compute-container)
Feb 20 08:19:23 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:19:23 np0005625203.localdomain podman[81385]: 2026-02-20 08:19:23.99940968 +0000 UTC m=+0.309501266 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.5, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, 
url=https://www.redhat.com, tcib_managed=true, maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp-rhel9/openstack-ceilometer-ipmi, distribution-scope=public, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:07:30Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:30Z, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, vcs-type=git)
Feb 20 08:19:24 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:19:24 np0005625203.localdomain sshd[81480]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:19:25 np0005625203.localdomain sudo[81481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 08:19:25 np0005625203.localdomain sudo[81481]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:19:25 np0005625203.localdomain sudo[81481]: pam_unix(sudo:session): session closed for user root
Feb 20 08:19:25 np0005625203.localdomain sudo[81510]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jucvjzdhtfmoxzgunvbfhpoggzxmldos ; /usr/bin/python3
Feb 20 08:19:25 np0005625203.localdomain sudo[81510]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 08:19:25 np0005625203.localdomain python3[81512]: ansible-ansible.legacy.dnf Invoked with name=['sos'] state=latest allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Feb 20 08:19:26 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:19:26 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:19:26 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:19:26 np0005625203.localdomain systemd[1]: tmp-crun.aGusle.mount: Deactivated successfully.
Feb 20 08:19:26 np0005625203.localdomain podman[81514]: 2026-02-20 08:19:26.755761982 +0000 UTC m=+0.078661267 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1766032510, build-date=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, 
name=rhosp-rhel9/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, batch=17.1_20260112.1, io.buildah.version=1.41.5, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, managed_by=tripleo_ansible, version=17.1.13)
Feb 20 08:19:26 np0005625203.localdomain podman[81515]: 2026-02-20 08:19:26.769828642 +0000 UTC m=+0.089842888 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, 
vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, build-date=2026-01-12T22:56:19Z, url=https://www.redhat.com, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.buildah.version=1.41.5)
Feb 20 08:19:26 np0005625203.localdomain podman[81516]: 2026-02-20 08:19:26.815743526 +0000 UTC m=+0.133742101 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, version=17.1.13, release=1766032510, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_id=tripleo_step4, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2026-01-12T22:36:40Z, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, architecture=x86_64, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, batch=17.1_20260112.1, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, 
tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller)
Feb 20 08:19:26 np0005625203.localdomain podman[81515]: 2026-02-20 08:19:26.832299692 +0000 UTC m=+0.152313888 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, architecture=x86_64, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, release=1766032510, vendor=Red Hat, Inc., managed_by=tripleo_ansible, distribution-scope=public, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 
17.1 neutron-metadata-agent-ovn, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, version=17.1.13, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, batch=17.1_20260112.1, container_name=ovn_metadata_agent)
Feb 20 08:19:26 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Deactivated successfully.
Feb 20 08:19:26 np0005625203.localdomain podman[81516]: 2026-02-20 08:19:26.845595149 +0000 UTC m=+0.163593714 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:36:40Z, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, container_name=ovn_controller, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, architecture=x86_64, vcs-type=git, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 ovn-controller, release=1766032510, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ovn-controller)
Feb 20 08:19:26 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Deactivated successfully.
Feb 20 08:19:27 np0005625203.localdomain podman[81514]: 2026-02-20 08:19:27.139223348 +0000 UTC m=+0.462122593 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, distribution-scope=public, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.5, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13)
Feb 20 08:19:27 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:19:27 np0005625203.localdomain sshd[81480]: Invalid user netlink from 185.246.128.171 port 10569
Feb 20 08:19:27 np0005625203.localdomain systemd[1]: tmp-crun.xOgQU9.mount: Deactivated successfully.
Feb 20 08:19:28 np0005625203.localdomain sshd[81480]: Disconnecting invalid user netlink 185.246.128.171 port 10569: Change of username or service not allowed: (netlink,ssh-connection) -> (dlinares,ssh-connection) [preauth]
Feb 20 08:19:29 np0005625203.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 20 08:19:29 np0005625203.localdomain systemd[1]: Starting man-db-cache-update.service...
Feb 20 08:19:29 np0005625203.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 20 08:19:30 np0005625203.localdomain systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 20 08:19:30 np0005625203.localdomain systemd[1]: Finished man-db-cache-update.service.
Feb 20 08:19:30 np0005625203.localdomain systemd[1]: run-rc0f6a46885074ac2a4208ad3418b05f4.service: Deactivated successfully.
Feb 20 08:19:30 np0005625203.localdomain systemd[1]: run-r114e6707284a4924870a1cf2856368d9.service: Deactivated successfully.
Feb 20 08:19:30 np0005625203.localdomain sudo[81510]: pam_unix(sudo:session): session closed for user root
Feb 20 08:19:30 np0005625203.localdomain sshd[81731]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:19:35 np0005625203.localdomain sshd[81731]: Invalid user dlinares from 185.246.128.171 port 46367
Feb 20 08:19:35 np0005625203.localdomain sshd[81731]: Disconnecting invalid user dlinares 185.246.128.171 port 46367: Change of username or service not allowed: (dlinares,ssh-connection) -> (1111,ssh-connection) [preauth]
Feb 20 08:19:36 np0005625203.localdomain sshd[81733]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:19:37 np0005625203.localdomain sshd[81733]: Invalid user 1111 from 185.246.128.171 port 43268
Feb 20 08:19:37 np0005625203.localdomain sshd[81733]: Disconnecting invalid user 1111 185.246.128.171 port 43268: Change of username or service not allowed: (1111,ssh-connection) -> (user123,ssh-connection) [preauth]
Feb 20 08:19:38 np0005625203.localdomain sshd[81735]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:19:41 np0005625203.localdomain sshd[81735]: Invalid user user123 from 185.246.128.171 port 31868
Feb 20 08:19:41 np0005625203.localdomain sshd[81737]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:19:41 np0005625203.localdomain sshd[81737]: Invalid user matheus from 147.135.114.8 port 41186
Feb 20 08:19:41 np0005625203.localdomain sshd[81737]: Received disconnect from 147.135.114.8 port 41186:11: Bye Bye [preauth]
Feb 20 08:19:41 np0005625203.localdomain sshd[81737]: Disconnected from invalid user matheus 147.135.114.8 port 41186 [preauth]
Feb 20 08:19:41 np0005625203.localdomain sshd[81735]: Disconnecting invalid user user123 185.246.128.171 port 31868: Change of username or service not allowed: (user123,ssh-connection) -> (riscv,ssh-connection) [preauth]
Feb 20 08:19:43 np0005625203.localdomain sshd[81739]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:19:44 np0005625203.localdomain sshd[81739]: Invalid user riscv from 185.246.128.171 port 61959
Feb 20 08:19:45 np0005625203.localdomain sshd[81739]: Disconnecting invalid user riscv 185.246.128.171 port 61959: Change of username or service not allowed: (riscv,ssh-connection) -> (bao,ssh-connection) [preauth]
Feb 20 08:19:46 np0005625203.localdomain sshd[81741]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:19:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 20 08:19:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 2400.1 total, 600.0 interval
                                                          Cumulative writes: 4396 writes, 20K keys, 4396 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                                          Cumulative WAL: 4396 writes, 502 syncs, 8.76 writes per sync, written: 0.02 GB, 0.01 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 20 08:19:48 np0005625203.localdomain sshd[81741]: Invalid user bao from 185.246.128.171 port 7133
Feb 20 08:19:48 np0005625203.localdomain sshd[81741]: Disconnecting invalid user bao 185.246.128.171 port 7133: Change of username or service not allowed: (bao,ssh-connection) -> (sophia,ssh-connection) [preauth]
Feb 20 08:19:49 np0005625203.localdomain sshd[81788]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:19:50 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:19:50 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:19:50 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:19:50 np0005625203.localdomain sshd[81788]: Invalid user sophia from 185.246.128.171 port 36649
Feb 20 08:19:50 np0005625203.localdomain systemd[1]: tmp-crun.xXMYZD.mount: Deactivated successfully.
Feb 20 08:19:50 np0005625203.localdomain podman[81790]: 2026-02-20 08:19:50.557952225 +0000 UTC m=+0.111357535 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, io.openshift.expose-services=, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, com.redhat.component=openstack-collectd-container, build-date=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., config_id=tripleo_step3, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, release=1766032510, architecture=x86_64, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:19:50 np0005625203.localdomain podman[81790]: 2026-02-20 08:19:50.591219903 +0000 UTC m=+0.144625203 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:15Z, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.openshift.expose-services=, io.buildah.version=1.41.5, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, release=1766032510, url=https://www.redhat.com, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee)
Feb 20 08:19:50 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:19:50 np0005625203.localdomain podman[81791]: 2026-02-20 08:19:50.633022391 +0000 UTC m=+0.184227075 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, managed_by=tripleo_ansible, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step3, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.13, description=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, org.opencontainers.image.created=2026-01-12T22:34:43Z, vendor=Red Hat, Inc., container_name=iscsid, name=rhosp-rhel9/openstack-iscsid, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:34:43Z, tcib_managed=true, vcs-ref=705339545363fec600102567c4e923938e0f43b3, com.redhat.component=openstack-iscsid-container)
Feb 20 08:19:50 np0005625203.localdomain podman[81791]: 2026-02-20 08:19:50.643157712 +0000 UTC m=+0.194362426 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:34:43Z, config_id=tripleo_step3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:34:43Z, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat 
OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, release=1766032510, tcib_managed=true, batch=17.1_20260112.1, version=17.1.13, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-iscsid)
Feb 20 08:19:50 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:19:50 np0005625203.localdomain podman[81792]: 2026-02-20 08:19:50.691670505 +0000 UTC m=+0.240761404 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., container_name=metrics_qdr, io.buildah.version=1.41.5, url=https://www.redhat.com, version=17.1.13, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, release=1766032510, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:14Z, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:19:50 np0005625203.localdomain sshd[81788]: Disconnecting invalid user sophia 185.246.128.171 port 36649: Change of username or service not allowed: (sophia,ssh-connection) -> (wuhan,ssh-connection) [preauth]
Feb 20 08:19:50 np0005625203.localdomain podman[81792]: 2026-02-20 08:19:50.894091425 +0000 UTC m=+0.443182304 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, config_id=tripleo_step1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://www.redhat.com, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, tcib_managed=true, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:19:50 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:19:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 20 08:19:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 2400.1 total, 600.0 interval
                                                          Cumulative writes: 5293 writes, 23K keys, 5293 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                                          Cumulative WAL: 5293 writes, 571 syncs, 9.27 writes per sync, written: 0.02 GB, 0.01 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 20 08:19:52 np0005625203.localdomain sshd[81857]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:19:54 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:19:54 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:19:54 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:19:54 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:19:54 np0005625203.localdomain podman[81862]: 2026-02-20 08:19:54.843630563 +0000 UTC m=+0.149677387 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, version=17.1.13, architecture=x86_64, build-date=2026-01-12T23:07:30Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.created=2026-01-12T23:07:30Z)
Feb 20 08:19:54 np0005625203.localdomain podman[81861]: 2026-02-20 08:19:54.799092771 +0000 UTC m=+0.108691055 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, distribution-scope=public, batch=17.1_20260112.1, config_id=tripleo_step4, build-date=2026-01-12T23:07:47Z, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp-rhel9/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:47Z, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute)
Feb 20 08:19:54 np0005625203.localdomain podman[81859]: 2026-02-20 08:19:54.767739973 +0000 UTC m=+0.084282679 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, container_name=logrotate_crond, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-cron, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, release=1766032510, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:19:54 np0005625203.localdomain podman[81861]: 2026-02-20 08:19:54.879780719 +0000 UTC m=+0.189378983 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, batch=17.1_20260112.1, config_id=tripleo_step4, architecture=x86_64, container_name=ceilometer_agent_compute, release=1766032510, url=https://www.redhat.com, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2026-01-12T23:07:47Z, version=17.1.13, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, name=rhosp-rhel9/openstack-ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Feb 20 08:19:54 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:19:54 np0005625203.localdomain podman[81862]: 2026-02-20 08:19:54.899470181 +0000 UTC m=+0.205517005 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:07:30Z, batch=17.1_20260112.1, architecture=x86_64, build-date=2026-01-12T23:07:30Z, io.openshift.expose-services=, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, version=17.1.13, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., url=https://www.redhat.com, managed_by=tripleo_ansible, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:19:54 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:19:54 np0005625203.localdomain podman[81860]: 2026-02-20 08:19:54.883002927 +0000 UTC m=+0.200185372 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2026-01-12T23:32:04Z, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., release=1766032510, container_name=nova_compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Feb 20 08:19:54 np0005625203.localdomain podman[81859]: 2026-02-20 08:19:54.954100352 +0000 UTC m=+0.270643108 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, release=1766032510, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=logrotate_crond, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.5, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:15Z, version=17.1.13, name=rhosp-rhel9/openstack-cron, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64)
Feb 20 08:19:54 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:19:54 np0005625203.localdomain podman[81860]: 2026-02-20 08:19:54.967358047 +0000 UTC m=+0.284540492 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, build-date=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, distribution-scope=public, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13)
Feb 20 08:19:54 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:19:54 np0005625203.localdomain sshd[81857]: Invalid user wuhan from 185.246.128.171 port 55285
Feb 20 08:19:55 np0005625203.localdomain sshd[81857]: Disconnecting invalid user wuhan 185.246.128.171 port 55285: Change of username or service not allowed: (wuhan,ssh-connection) -> (station6,ssh-connection) [preauth]
Feb 20 08:19:56 np0005625203.localdomain sshd[81954]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:19:57 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:19:57 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:19:57 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:19:57 np0005625203.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Feb 20 08:19:57 np0005625203.localdomain recover_tripleo_nova_virtqemud[81975]: 62505
Feb 20 08:19:57 np0005625203.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Feb 20 08:19:57 np0005625203.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Feb 20 08:19:57 np0005625203.localdomain systemd[1]: tmp-crun.8PSBxJ.mount: Deactivated successfully.
Feb 20 08:19:57 np0005625203.localdomain podman[81958]: 2026-02-20 08:19:57.773793681 +0000 UTC m=+0.081832203 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, config_id=tripleo_step4, tcib_managed=true, vendor=Red Hat, Inc., vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64, url=https://www.redhat.com, build-date=2026-01-12T22:36:40Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, name=rhosp-rhel9/openstack-ovn-controller)
Feb 20 08:19:57 np0005625203.localdomain podman[81958]: 2026-02-20 08:19:57.832414724 +0000 UTC m=+0.140453236 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, version=17.1.13, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, batch=17.1_20260112.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:36:40Z, container_name=ovn_controller, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:36:40Z, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller)
Feb 20 08:19:57 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Deactivated successfully.
Feb 20 08:19:57 np0005625203.localdomain podman[81956]: 2026-02-20 08:19:57.870613292 +0000 UTC m=+0.185717271 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, name=rhosp-rhel9/openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.5, tcib_managed=true, build-date=2026-01-12T23:32:04Z, container_name=nova_migration_target, config_id=tripleo_step4, url=https://www.redhat.com, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']})
Feb 20 08:19:57 np0005625203.localdomain podman[81957]: 2026-02-20 08:19:57.834062785 +0000 UTC m=+0.142810609 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2026-01-12T22:56:19Z, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, architecture=x86_64, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, url=https://www.redhat.com, config_id=tripleo_step4, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=)
Feb 20 08:19:57 np0005625203.localdomain podman[81957]: 2026-02-20 08:19:57.919340122 +0000 UTC m=+0.228088016 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, batch=17.1_20260112.1, build-date=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, tcib_managed=true, vcs-type=git, url=https://www.redhat.com, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:56:19Z, release=1766032510)
Feb 20 08:19:57 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Deactivated successfully.
Feb 20 08:19:58 np0005625203.localdomain podman[81956]: 2026-02-20 08:19:58.244533286 +0000 UTC m=+0.559637275 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1766032510, maintainer=OpenStack TripleO Team, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, architecture=x86_64, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute)
Feb 20 08:19:58 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:19:58 np0005625203.localdomain sshd[81954]: Invalid user station6 from 185.246.128.171 port 39161
Feb 20 08:19:59 np0005625203.localdomain sshd[81954]: Disconnecting invalid user station6 185.246.128.171 port 39161: Change of username or service not allowed: (station6,ssh-connection) -> (joe,ssh-connection) [preauth]
Feb 20 08:20:00 np0005625203.localdomain sshd[82030]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:20:01 np0005625203.localdomain sshd[82030]: Invalid user joe from 185.246.128.171 port 41166
Feb 20 08:20:02 np0005625203.localdomain sshd[82030]: Disconnecting invalid user joe 185.246.128.171 port 41166: Change of username or service not allowed: (joe,ssh-connection) -> (proxy,ssh-connection) [preauth]
Feb 20 08:20:04 np0005625203.localdomain sshd[82032]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:20:05 np0005625203.localdomain sudo[82047]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gxbhobepgdfiiyjyokbecrxfcnsyouav ; /usr/bin/python3
Feb 20 08:20:05 np0005625203.localdomain sudo[82047]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 08:20:05 np0005625203.localdomain python3[82049]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager repos --disable rhel-9-for-x86_64-baseos-eus-rpms --disable rhel-9-for-x86_64-appstream-eus-rpms --disable rhel-9-for-x86_64-highavailability-eus-rpms --disable openstack-17.1-for-rhel-9-x86_64-rpms --disable fast-datapath-for-rhel-9-x86_64-rpms _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 08:20:05 np0005625203.localdomain sshd[82032]: Invalid user proxy from 185.246.128.171 port 2856
Feb 20 08:20:06 np0005625203.localdomain sshd[82032]: Disconnecting invalid user proxy 185.246.128.171 port 2856: Change of username or service not allowed: (proxy,ssh-connection) -> (ubnt,ssh-connection) [preauth]
Feb 20 08:20:06 np0005625203.localdomain sshd[82053]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:20:07 np0005625203.localdomain sshd[82054]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:20:07 np0005625203.localdomain sshd[82054]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:20:08 np0005625203.localdomain rhsm-service[6595]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 20 08:20:08 np0005625203.localdomain rhsm-service[6595]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 20 08:20:10 np0005625203.localdomain sshd[82053]: Invalid user ubnt from 185.246.128.171 port 61186
Feb 20 08:20:12 np0005625203.localdomain sshd[82053]: error: maximum authentication attempts exceeded for invalid user ubnt from 185.246.128.171 port 61186 ssh2 [preauth]
Feb 20 08:20:12 np0005625203.localdomain sshd[82053]: Disconnecting invalid user ubnt 185.246.128.171 port 61186: Too many authentication failures [preauth]
Feb 20 08:20:12 np0005625203.localdomain sudo[82047]: pam_unix(sudo:session): session closed for user root
Feb 20 08:20:12 np0005625203.localdomain sshd[82243]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:20:15 np0005625203.localdomain sshd[82243]: Invalid user ubnt from 185.246.128.171 port 12469
Feb 20 08:20:18 np0005625203.localdomain sshd[82243]: Disconnecting invalid user ubnt 185.246.128.171 port 12469: Change of username or service not allowed: (ubnt,ssh-connection) -> (pa,ssh-connection) [preauth]
Feb 20 08:20:20 np0005625203.localdomain sshd[82245]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:20:20 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:20:20 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:20:20 np0005625203.localdomain systemd[1]: tmp-crun.TbqrXy.mount: Deactivated successfully.
Feb 20 08:20:20 np0005625203.localdomain podman[82247]: 2026-02-20 08:20:20.76017839 +0000 UTC m=+0.075987175 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, release=1766032510, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-collectd, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, config_id=tripleo_step3, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, version=17.1.13, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee)
Feb 20 08:20:20 np0005625203.localdomain systemd[1]: tmp-crun.eVdwrF.mount: Deactivated successfully.
Feb 20 08:20:20 np0005625203.localdomain podman[82248]: 2026-02-20 08:20:20.780161372 +0000 UTC m=+0.090806669 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, build-date=2026-01-12T22:34:43Z, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, vcs-type=git, batch=17.1_20260112.1, io.buildah.version=1.41.5, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, release=1766032510, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, com.redhat.component=openstack-iscsid-container, version=17.1.13, name=rhosp-rhel9/openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:20:20 np0005625203.localdomain podman[82247]: 2026-02-20 08:20:20.769483275 +0000 UTC m=+0.085292090 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, build-date=2026-01-12T22:10:15Z, container_name=collectd, managed_by=tripleo_ansible, config_id=tripleo_step3, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, version=17.1.13, architecture=x86_64, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.buildah.version=1.41.5, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-collectd, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 collectd)
Feb 20 08:20:20 np0005625203.localdomain podman[82248]: 2026-02-20 08:20:20.793384865 +0000 UTC m=+0.104030192 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=iscsid, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, io.openshift.expose-services=, config_id=tripleo_step3, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:34:43Z, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:34:43Z, name=rhosp-rhel9/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:20:20 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:20:20 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:20:21 np0005625203.localdomain sshd[82245]: Invalid user pa from 185.246.128.171 port 17407
Feb 20 08:20:21 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:20:21 np0005625203.localdomain podman[82286]: 2026-02-20 08:20:21.551681925 +0000 UTC m=+0.086222148 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, url=https://www.redhat.com, config_id=tripleo_step1, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, release=1766032510, build-date=2026-01-12T22:10:14Z, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, vcs-type=git, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:20:21 np0005625203.localdomain podman[82286]: 2026-02-20 08:20:21.74412234 +0000 UTC m=+0.278662563 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., container_name=metrics_qdr, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, io.buildah.version=1.41.5, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20260112.1, tcib_managed=true, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, url=https://www.redhat.com, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd, managed_by=tripleo_ansible)
Feb 20 08:20:21 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:20:22 np0005625203.localdomain sshd[82245]: Disconnecting invalid user pa 185.246.128.171 port 17407: Change of username or service not allowed: (pa,ssh-connection) -> (ftp,ssh-connection) [preauth]
Feb 20 08:20:22 np0005625203.localdomain sshd[82315]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:20:22 np0005625203.localdomain sshd[82317]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:20:23 np0005625203.localdomain sshd[82315]: Received disconnect from 103.200.25.162 port 40586:11: Bye Bye [preauth]
Feb 20 08:20:23 np0005625203.localdomain sshd[82315]: Disconnected from authenticating user root 103.200.25.162 port 40586 [preauth]
Feb 20 08:20:25 np0005625203.localdomain sudo[82319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:20:25 np0005625203.localdomain sudo[82319]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:20:25 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:20:25 np0005625203.localdomain sudo[82319]: pam_unix(sudo:session): session closed for user root
Feb 20 08:20:25 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:20:25 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:20:25 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:20:25 np0005625203.localdomain podman[82336]: 2026-02-20 08:20:25.380432681 +0000 UTC m=+0.071444696 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.buildah.version=1.41.5, managed_by=tripleo_ansible, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, vcs-type=git, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, build-date=2026-01-12T23:07:47Z, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, architecture=x86_64)
Feb 20 08:20:25 np0005625203.localdomain sudo[82363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 check-host
Feb 20 08:20:25 np0005625203.localdomain sudo[82363]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:20:25 np0005625203.localdomain podman[82334]: 2026-02-20 08:20:25.406363894 +0000 UTC m=+0.097895535 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, maintainer=OpenStack TripleO Team, architecture=x86_64, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2026-01-12T22:10:15Z, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, config_id=tripleo_step4, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=logrotate_crond, 
io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, name=rhosp-rhel9/openstack-cron, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, io.buildah.version=1.41.5)
Feb 20 08:20:25 np0005625203.localdomain podman[82334]: 2026-02-20 08:20:25.418172475 +0000 UTC m=+0.109704126 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp-rhel9/openstack-cron, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, batch=17.1_20260112.1, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, tcib_managed=true, version=17.1.13, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public)
Feb 20 08:20:25 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:20:25 np0005625203.localdomain podman[82336]: 2026-02-20 08:20:25.463361257 +0000 UTC m=+0.154373292 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, release=1766032510, vendor=Red Hat, Inc., io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.13, build-date=2026-01-12T23:07:47Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, tcib_managed=true, managed_by=tripleo_ansible, url=https://www.redhat.com)
Feb 20 08:20:25 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:20:25 np0005625203.localdomain podman[82337]: 2026-02-20 08:20:25.514631165 +0000 UTC m=+0.196404277 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-type=git, version=17.1.13, architecture=x86_64, distribution-scope=public, config_id=tripleo_step4, batch=17.1_20260112.1, tcib_managed=true, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:07:30Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2026-01-12T23:07:30Z)
Feb 20 08:20:25 np0005625203.localdomain podman[82335]: 2026-02-20 08:20:25.562547291 +0000 UTC m=+0.250209964 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, vcs-type=git, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:32:04Z, tcib_managed=true, config_id=tripleo_step5, batch=17.1_20260112.1)
Feb 20 08:20:25 np0005625203.localdomain podman[82337]: 2026-02-20 08:20:25.575269549 +0000 UTC m=+0.257042661 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, batch=17.1_20260112.1, release=1766032510, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, 
url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, version=17.1.13, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, container_name=ceilometer_agent_ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public)
Feb 20 08:20:25 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:20:25 np0005625203.localdomain podman[82335]: 2026-02-20 08:20:25.599289844 +0000 UTC m=+0.286952477 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, build-date=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-type=git, container_name=nova_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com)
Feb 20 08:20:25 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:20:25 np0005625203.localdomain sudo[82363]: pam_unix(sudo:session): session closed for user root
Feb 20 08:20:25 np0005625203.localdomain sudo[82460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:20:25 np0005625203.localdomain sudo[82460]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:20:25 np0005625203.localdomain sudo[82460]: pam_unix(sudo:session): session closed for user root
Feb 20 08:20:25 np0005625203.localdomain sudo[82475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 08:20:25 np0005625203.localdomain sudo[82475]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:20:26 np0005625203.localdomain sudo[82475]: pam_unix(sudo:session): session closed for user root
Feb 20 08:20:26 np0005625203.localdomain sshd[82317]: error: maximum authentication attempts exceeded for ftp from 185.246.128.171 port 17675 ssh2 [preauth]
Feb 20 08:20:26 np0005625203.localdomain sshd[82317]: Disconnecting authenticating user ftp 185.246.128.171 port 17675: Too many authentication failures [preauth]
Feb 20 08:20:27 np0005625203.localdomain sudo[82523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 08:20:27 np0005625203.localdomain sudo[82523]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:20:27 np0005625203.localdomain sudo[82523]: pam_unix(sudo:session): session closed for user root
Feb 20 08:20:28 np0005625203.localdomain sshd[82538]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:20:28 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:20:28 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:20:28 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:20:28 np0005625203.localdomain podman[82542]: 2026-02-20 08:20:28.767252143 +0000 UTC m=+0.080424241 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, container_name=ovn_controller, io.openshift.expose-services=, release=1766032510, tcib_managed=true, url=https://www.redhat.com, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ovn-controller, batch=17.1_20260112.1, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.created=2026-01-12T22:36:40Z, build-date=2026-01-12T22:36:40Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=17.1.13, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller)
Feb 20 08:20:28 np0005625203.localdomain systemd[1]: tmp-crun.ktaTgc.mount: Deactivated successfully.
Feb 20 08:20:28 np0005625203.localdomain podman[82540]: 2026-02-20 08:20:28.837759149 +0000 UTC m=+0.151586477 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
org.opencontainers.image.created=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, build-date=2026-01-12T23:32:04Z, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, config_id=tripleo_step4, tcib_managed=true, url=https://www.redhat.com, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=)
Feb 20 08:20:28 np0005625203.localdomain podman[82542]: 2026-02-20 08:20:28.845029031 +0000 UTC m=+0.158201149 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2026-01-12T22:36:40Z, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, release=1766032510, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:36:40Z, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, batch=17.1_20260112.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, vendor=Red Hat, Inc., url=https://www.redhat.com, 
konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:20:28 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Deactivated successfully.
Feb 20 08:20:28 np0005625203.localdomain podman[82541]: 2026-02-20 08:20:28.931205806 +0000 UTC m=+0.244422465 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, url=https://www.redhat.com, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1766032510, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ovn_metadata_agent, org.opencontainers.image.created=2026-01-12T22:56:19Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, build-date=2026-01-12T22:56:19Z, tcib_managed=true, vcs-type=git, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5)
Feb 20 08:20:29 np0005625203.localdomain podman[82541]: 2026-02-20 08:20:29.005364815 +0000 UTC m=+0.318581534 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, batch=17.1_20260112.1, io.buildah.version=1.41.5, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, build-date=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team)
Feb 20 08:20:29 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Deactivated successfully.
Feb 20 08:20:29 np0005625203.localdomain podman[82540]: 2026-02-20 08:20:29.254030198 +0000 UTC m=+0.567857516 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, url=https://www.redhat.com, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, build-date=2026-01-12T23:32:04Z, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-nova-compute, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, release=1766032510, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true)
Feb 20 08:20:29 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:20:31 np0005625203.localdomain sshd[82538]: Disconnecting authenticating user ftp 185.246.128.171 port 34560: Change of username or service not allowed: (ftp,ssh-connection) -> (mj,ssh-connection) [preauth]
Feb 20 08:20:33 np0005625203.localdomain sshd[82608]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:20:34 np0005625203.localdomain sshd[82608]: Invalid user mj from 185.246.128.171 port 8682
Feb 20 08:20:35 np0005625203.localdomain sshd[82608]: Disconnecting invalid user mj 185.246.128.171 port 8682: Change of username or service not allowed: (mj,ssh-connection) -> (samp,ssh-connection) [preauth]
Feb 20 08:20:37 np0005625203.localdomain sshd[82610]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:20:40 np0005625203.localdomain sshd[82610]: Invalid user samp from 185.246.128.171 port 33266
Feb 20 08:20:40 np0005625203.localdomain sshd[82610]: Disconnecting invalid user samp 185.246.128.171 port 33266: Change of username or service not allowed: (samp,ssh-connection) -> (nemo,ssh-connection) [preauth]
Feb 20 08:20:43 np0005625203.localdomain sshd[82612]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:20:44 np0005625203.localdomain sshd[82613]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:20:45 np0005625203.localdomain sshd[82613]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:20:46 np0005625203.localdomain sshd[82612]: Invalid user nemo from 185.246.128.171 port 23891
Feb 20 08:20:46 np0005625203.localdomain sshd[82612]: Disconnecting invalid user nemo 185.246.128.171 port 23891: Change of username or service not allowed: (nemo,ssh-connection) -> (satya,ssh-connection) [preauth]
Feb 20 08:20:49 np0005625203.localdomain sshd[82661]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:20:51 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:20:51 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:20:51 np0005625203.localdomain podman[82664]: 2026-02-20 08:20:51.777368408 +0000 UTC m=+0.090544791 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, io.buildah.version=1.41.5, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, name=rhosp-rhel9/openstack-iscsid, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, container_name=iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid)
Feb 20 08:20:51 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:20:51 np0005625203.localdomain podman[82663]: 2026-02-20 08:20:51.835971139 +0000 UTC m=+0.150851793 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., release=1766032510, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-collectd, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, 
vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, io.openshift.expose-services=, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, tcib_managed=true, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd)
Feb 20 08:20:51 np0005625203.localdomain podman[82664]: 2026-02-20 08:20:51.863156981 +0000 UTC m=+0.176333354 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, batch=17.1_20260112.1, vcs-type=git, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, vcs-ref=705339545363fec600102567c4e923938e0f43b3, maintainer=OpenStack TripleO Team, distribution-scope=public, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, build-date=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, name=rhosp-rhel9/openstack-iscsid, url=https://www.redhat.com, version=17.1.13)
Feb 20 08:20:51 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:20:51 np0005625203.localdomain podman[82663]: 2026-02-20 08:20:51.897993557 +0000 UTC m=+0.212874221 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.13, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, architecture=x86_64, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=)
Feb 20 08:20:51 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:20:51 np0005625203.localdomain podman[82693]: 2026-02-20 08:20:51.865565854 +0000 UTC m=+0.067450513 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20260112.1, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.expose-services=, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, name=rhosp-rhel9/openstack-qdrouterd, 
build-date=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, release=1766032510)
Feb 20 08:20:52 np0005625203.localdomain podman[82693]: 2026-02-20 08:20:52.082379974 +0000 UTC m=+0.284264703 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, url=https://www.redhat.com, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-qdrouterd, container_name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, batch=17.1_20260112.1, version=17.1.13, build-date=2026-01-12T22:10:14Z, release=1766032510, io.openshift.expose-services=, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container)
Feb 20 08:20:52 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:20:52 np0005625203.localdomain sshd[82661]: Invalid user satya from 185.246.128.171 port 21779
Feb 20 08:20:52 np0005625203.localdomain sshd[82661]: Disconnecting invalid user satya 185.246.128.171 port 21779: Change of username or service not allowed: (satya,ssh-connection) -> (orangepi,ssh-connection) [preauth]
Feb 20 08:20:53 np0005625203.localdomain sshd[82730]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:20:54 np0005625203.localdomain sshd[82731]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:20:54 np0005625203.localdomain sshd[82731]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:20:55 np0005625203.localdomain sudo[82747]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sncewscjsjbhhgqbtvgvdyzpkpafdfwd ; /usr/bin/python3
Feb 20 08:20:55 np0005625203.localdomain sudo[82747]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 08:20:55 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:20:55 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:20:55 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:20:55 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:20:55 np0005625203.localdomain systemd[1]: tmp-crun.7E8Ug9.mount: Deactivated successfully.
Feb 20 08:20:55 np0005625203.localdomain podman[82750]: 2026-02-20 08:20:55.712821465 +0000 UTC m=+0.090240470 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-type=git, container_name=logrotate_crond, config_id=tripleo_step4, architecture=x86_64, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc.)
Feb 20 08:20:55 np0005625203.localdomain podman[82750]: 2026-02-20 08:20:55.719399657 +0000 UTC m=+0.096818672 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, config_id=tripleo_step4, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=logrotate_crond, build-date=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 cron)
Feb 20 08:20:55 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:20:55 np0005625203.localdomain python3[82749]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager repos --disable rhceph-7-tools-for-rhel-9-x86_64-rpms _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 08:20:55 np0005625203.localdomain systemd[1]: tmp-crun.d9vI8b.mount: Deactivated successfully.
Feb 20 08:20:55 np0005625203.localdomain podman[82752]: 2026-02-20 08:20:55.77313733 +0000 UTC m=+0.138324311 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.buildah.version=1.41.5, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.13, container_name=ceilometer_agent_ipmi, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, tcib_managed=true, 
org.opencontainers.image.created=2026-01-12T23:07:30Z, name=rhosp-rhel9/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, managed_by=tripleo_ansible, config_id=tripleo_step4, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, architecture=x86_64, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1)
Feb 20 08:20:55 np0005625203.localdomain podman[82780]: 2026-02-20 08:20:55.793115591 +0000 UTC m=+0.103669851 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, container_name=nova_compute, url=https://www.redhat.com, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-nova-compute-container)
Feb 20 08:20:55 np0005625203.localdomain podman[82780]: 2026-02-20 08:20:55.818036303 +0000 UTC m=+0.128590523 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step5, container_name=nova_compute, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, vendor=Red Hat, Inc., tcib_managed=true, version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1766032510)
Feb 20 08:20:55 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:20:55 np0005625203.localdomain podman[82752]: 2026-02-20 08:20:55.86893691 +0000 UTC m=+0.234123901 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, version=17.1.13, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, distribution-scope=public, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi)
Feb 20 08:20:55 np0005625203.localdomain podman[82751]: 2026-02-20 08:20:55.876066918 +0000 UTC m=+0.248524511 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, build-date=2026-01-12T23:07:47Z, name=rhosp-rhel9/openstack-ceilometer-compute, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, url=https://www.redhat.com, architecture=x86_64, version=17.1.13, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, distribution-scope=public, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-compute-container)
Feb 20 08:20:55 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:20:55 np0005625203.localdomain podman[82751]: 2026-02-20 08:20:55.90523899 +0000 UTC m=+0.277696603 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, tcib_managed=true, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, distribution-scope=public, release=1766032510, vcs-type=git, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.13, url=https://www.redhat.com, config_id=tripleo_step4, build-date=2026-01-12T23:07:47Z, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:20:55 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:20:56 np0005625203.localdomain sshd[82730]: Invalid user orangepi from 185.246.128.171 port 46537
Feb 20 08:20:57 np0005625203.localdomain sshd[82730]: Disconnecting invalid user orangepi 185.246.128.171 port 46537: Change of username or service not allowed: (orangepi,ssh-connection) -> (marco,ssh-connection) [preauth]
Feb 20 08:20:57 np0005625203.localdomain sshd[82850]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:20:58 np0005625203.localdomain rhsm-service[6595]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 20 08:20:58 np0005625203.localdomain sshd[82850]: Invalid user marco from 185.246.128.171 port 59131
Feb 20 08:20:58 np0005625203.localdomain rhsm-service[6595]: WARNING [subscription_manager.cert_sorter:194] Installed product 479 not present in response from server.
Feb 20 08:20:58 np0005625203.localdomain sshd[82850]: Disconnecting invalid user marco 185.246.128.171 port 59131: Change of username or service not allowed: (marco,ssh-connection) -> (healthnet,ssh-connection) [preauth]
Feb 20 08:20:58 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:20:59 np0005625203.localdomain podman[82976]: 2026-02-20 08:20:59.04610376 +0000 UTC m=+0.077124909 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, release=1766032510, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, url=https://www.redhat.com, io.buildah.version=1.41.5, vcs-type=git, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, container_name=ovn_controller, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ovn-controller, batch=17.1_20260112.1, distribution-scope=public, 
org.opencontainers.image.created=2026-01-12T22:36:40Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller)
Feb 20 08:20:59 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:20:59 np0005625203.localdomain podman[82976]: 2026-02-20 08:20:59.095398687 +0000 UTC m=+0.126419796 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, tcib_managed=true, build-date=2026-01-12T22:36:40Z, vcs-type=git, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ovn-controller, 
architecture=x86_64, vendor=Red Hat, Inc., batch=17.1_20260112.1, managed_by=tripleo_ansible, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ovn-controller)
Feb 20 08:20:59 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Deactivated successfully.
Feb 20 08:20:59 np0005625203.localdomain podman[82996]: 2026-02-20 08:20:59.148948545 +0000 UTC m=+0.080213674 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, io.buildah.version=1.41.5, release=1766032510, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:56:19Z, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, url=https://www.redhat.com, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20260112.1, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container)
Feb 20 08:20:59 np0005625203.localdomain podman[82996]: 2026-02-20 08:20:59.219357848 +0000 UTC m=+0.150622947 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, release=1766032510, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, vcs-type=git, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:20:59 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Deactivated successfully.
Feb 20 08:20:59 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:20:59 np0005625203.localdomain podman[83023]: 2026-02-20 08:20:59.762308622 +0000 UTC m=+0.081094731 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.41.5, io.openshift.expose-services=, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-compute-container, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, build-date=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, container_name=nova_migration_target, release=1766032510, maintainer=OpenStack TripleO Team)
Feb 20 08:21:00 np0005625203.localdomain podman[83023]: 2026-02-20 08:21:00.129450729 +0000 UTC m=+0.448236808 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, config_id=tripleo_step4, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, io.openshift.expose-services=, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git)
Feb 20 08:21:00 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:21:00 np0005625203.localdomain sshd[83048]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:21:02 np0005625203.localdomain sudo[82747]: pam_unix(sudo:session): session closed for user root
Feb 20 08:21:02 np0005625203.localdomain sshd[83048]: Invalid user healthnet from 185.246.128.171 port 62674
Feb 20 08:21:03 np0005625203.localdomain sshd[83048]: Disconnecting invalid user healthnet 185.246.128.171 port 62674: Change of username or service not allowed: (healthnet,ssh-connection) -> (intern,ssh-connection) [preauth]
Feb 20 08:21:05 np0005625203.localdomain sshd[83107]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:21:07 np0005625203.localdomain sshd[83107]: Invalid user intern from 185.246.128.171 port 44514
Feb 20 08:21:08 np0005625203.localdomain sshd[83107]: Disconnecting invalid user intern 185.246.128.171 port 44514: Change of username or service not allowed: (intern,ssh-connection) -> (scsadmin,ssh-connection) [preauth]
Feb 20 08:21:09 np0005625203.localdomain sshd[83109]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:21:10 np0005625203.localdomain sshd[83111]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:21:11 np0005625203.localdomain sshd[83109]: Invalid user scsadmin from 185.246.128.171 port 24688
Feb 20 08:21:12 np0005625203.localdomain sshd[83111]: Invalid user n8n from 102.210.148.92 port 55348
Feb 20 08:21:12 np0005625203.localdomain sshd[83109]: Disconnecting invalid user scsadmin 185.246.128.171 port 24688: Change of username or service not allowed: (scsadmin,ssh-connection) -> (sshadmin,ssh-connection) [preauth]
Feb 20 08:21:12 np0005625203.localdomain sshd[83111]: Received disconnect from 102.210.148.92 port 55348:11: Bye Bye [preauth]
Feb 20 08:21:12 np0005625203.localdomain sshd[83111]: Disconnected from invalid user n8n 102.210.148.92 port 55348 [preauth]
Feb 20 08:21:13 np0005625203.localdomain sshd[83113]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:21:17 np0005625203.localdomain sshd[83113]: Invalid user sshadmin from 185.246.128.171 port 11704
Feb 20 08:21:18 np0005625203.localdomain sshd[83113]: Disconnecting invalid user sshadmin 185.246.128.171 port 11704: Change of username or service not allowed: (sshadmin,ssh-connection) -> (operator,ssh-connection) [preauth]
Feb 20 08:21:19 np0005625203.localdomain sshd[83115]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:21:22 np0005625203.localdomain sshd[83115]: Disconnecting authenticating user operator 185.246.128.171 port 46959: Change of username or service not allowed: (operator,ssh-connection) -> (invite,ssh-connection) [preauth]
Feb 20 08:21:22 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:21:22 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:21:22 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:21:22 np0005625203.localdomain podman[83118]: 2026-02-20 08:21:22.502309658 +0000 UTC m=+0.094906223 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-iscsid, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1766032510, tcib_managed=true, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, 
summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, build-date=2026-01-12T22:34:43Z, org.opencontainers.image.created=2026-01-12T22:34:43Z, architecture=x86_64, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://www.redhat.com, config_id=tripleo_step3, version=17.1.13, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-ref=705339545363fec600102567c4e923938e0f43b3)
Feb 20 08:21:22 np0005625203.localdomain podman[83118]: 2026-02-20 08:21:22.543636132 +0000 UTC m=+0.136232687 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, architecture=x86_64, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, container_name=iscsid, url=https://www.redhat.com, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:34:43Z, name=rhosp-rhel9/openstack-iscsid, version=17.1.13, build-date=2026-01-12T22:34:43Z, vendor=Red Hat, Inc., org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, release=1766032510, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid)
Feb 20 08:21:22 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:21:22 np0005625203.localdomain podman[83119]: 2026-02-20 08:21:22.595787127 +0000 UTC m=+0.182633967 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, vcs-type=git, name=rhosp-rhel9/openstack-qdrouterd, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, config_id=tripleo_step1, build-date=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:21:22 np0005625203.localdomain podman[83117]: 2026-02-20 08:21:22.547967725 +0000 UTC m=+0.142228731 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, managed_by=tripleo_ansible, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.created=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, version=17.1.13, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, vcs-type=git, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:21:22 np0005625203.localdomain python3[83166]: ansible-ansible.builtin.slurp Invoked with path=/home/zuul/ansible_hostname src=/home/zuul/ansible_hostname
Feb 20 08:21:22 np0005625203.localdomain podman[83117]: 2026-02-20 08:21:22.682407536 +0000 UTC m=+0.276668552 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, architecture=x86_64, version=17.1.13, build-date=2026-01-12T22:10:15Z, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, distribution-scope=public, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=collectd, vendor=Red Hat, Inc., batch=17.1_20260112.1, config_id=tripleo_step3, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team)
Feb 20 08:21:22 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:21:22 np0005625203.localdomain podman[83119]: 2026-02-20 08:21:22.812583437 +0000 UTC m=+0.399430267 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, batch=17.1_20260112.1, distribution-scope=public, version=17.1.13, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.5, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd)
Feb 20 08:21:22 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:21:23 np0005625203.localdomain sshd[83197]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:21:24 np0005625203.localdomain sshd[83197]: Invalid user invite from 185.246.128.171 port 39056
Feb 20 08:21:25 np0005625203.localdomain sshd[83197]: Disconnecting invalid user invite 185.246.128.171 port 39056: Change of username or service not allowed: (invite,ssh-connection) -> (avalanche,ssh-connection) [preauth]
Feb 20 08:21:25 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:21:25 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:21:25 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:21:25 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:21:26 np0005625203.localdomain podman[83200]: 2026-02-20 08:21:26.034035202 +0000 UTC m=+0.095200272 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, version=17.1.13, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1766032510, distribution-scope=public, name=rhosp-rhel9/openstack-nova-compute, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step5)
Feb 20 08:21:26 np0005625203.localdomain podman[83200]: 2026-02-20 08:21:26.072286952 +0000 UTC m=+0.133452002 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.expose-services=, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_id=tripleo_step5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 
'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-nova-compute)
Feb 20 08:21:26 np0005625203.localdomain podman[83201]: 2026-02-20 08:21:26.085969719 +0000 UTC m=+0.141979842 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-type=git, io.openshift.expose-services=, tcib_managed=true, io.buildah.version=1.41.5, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:30Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_ipmi)
Feb 20 08:21:26 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:21:26 np0005625203.localdomain podman[83201]: 2026-02-20 08:21:26.137530547 +0000 UTC m=+0.193540670 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, release=1766032510, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:07:30Z, build-date=2026-01-12T23:07:30Z, io.openshift.expose-services=, 
org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, architecture=x86_64, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com)
Feb 20 08:21:26 np0005625203.localdomain podman[83202]: 2026-02-20 08:21:26.13991082 +0000 UTC m=+0.192591661 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, managed_by=tripleo_ansible, url=https://www.redhat.com, architecture=x86_64, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.13, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-compute, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, release=1766032510, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:07:47Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5)
Feb 20 08:21:26 np0005625203.localdomain podman[83202]: 2026-02-20 08:21:26.177348954 +0000 UTC m=+0.230029825 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, distribution-scope=public, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., url=https://www.redhat.com, name=rhosp-rhel9/openstack-ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:07:47Z, build-date=2026-01-12T23:07:47Z, io.buildah.version=1.41.5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Feb 20 08:21:26 np0005625203.localdomain podman[83199]: 2026-02-20 08:21:26.188043322 +0000 UTC m=+0.250034627 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., io.buildah.version=1.41.5, build-date=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1766032510, architecture=x86_64, config_id=tripleo_step4, batch=17.1_20260112.1, 
org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.component=openstack-cron-container, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=logrotate_crond, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee)
Feb 20 08:21:26 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:21:26 np0005625203.localdomain podman[83199]: 2026-02-20 08:21:26.202413671 +0000 UTC m=+0.264405006 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, tcib_managed=true, io.buildah.version=1.41.5, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, name=rhosp-rhel9/openstack-cron, version=17.1.13, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, release=1766032510, vcs-type=git, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container)
Feb 20 08:21:26 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:21:26 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:21:27 np0005625203.localdomain sudo[83295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:21:27 np0005625203.localdomain sudo[83295]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:21:27 np0005625203.localdomain sudo[83295]: pam_unix(sudo:session): session closed for user root
Feb 20 08:21:27 np0005625203.localdomain sudo[83310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 08:21:27 np0005625203.localdomain sudo[83310]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:21:28 np0005625203.localdomain sudo[83310]: pam_unix(sudo:session): session closed for user root
Feb 20 08:21:28 np0005625203.localdomain sshd[83356]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:21:28 np0005625203.localdomain sudo[83357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 08:21:28 np0005625203.localdomain sudo[83357]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:21:28 np0005625203.localdomain sudo[83357]: pam_unix(sudo:session): session closed for user root
Feb 20 08:21:29 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:21:29 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:21:29 np0005625203.localdomain podman[83373]: 2026-02-20 08:21:29.782698238 +0000 UTC m=+0.091671855 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1766032510, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:56:19Z, vcs-type=git, version=17.1.13, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, io.buildah.version=1.41.5, config_id=tripleo_step4)
Feb 20 08:21:29 np0005625203.localdomain systemd[1]: tmp-crun.DpTaCF.mount: Deactivated successfully.
Feb 20 08:21:29 np0005625203.localdomain podman[83373]: 2026-02-20 08:21:29.856530385 +0000 UTC m=+0.165503992 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2026-01-12T22:56:19Z, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, io.openshift.expose-services=, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:56:19Z, managed_by=tripleo_ansible)
Feb 20 08:21:29 np0005625203.localdomain podman[83374]: 2026-02-20 08:21:29.856958329 +0000 UTC m=+0.163896093 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:36:40Z, build-date=2026-01-12T22:36:40Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, url=https://www.redhat.com, managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20260112.1, config_id=tripleo_step4, release=1766032510, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, name=rhosp-rhel9/openstack-ovn-controller, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, architecture=x86_64)
Feb 20 08:21:29 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Deactivated successfully.
Feb 20 08:21:29 np0005625203.localdomain podman[83374]: 2026-02-20 08:21:29.961050262 +0000 UTC m=+0.267988026 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, release=1766032510, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:36:40Z, name=rhosp-rhel9/openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., batch=17.1_20260112.1, build-date=2026-01-12T22:36:40Z, tcib_managed=true, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller)
Feb 20 08:21:29 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Deactivated successfully.
Feb 20 08:21:30 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:21:30 np0005625203.localdomain podman[83423]: 2026-02-20 08:21:30.75756343 +0000 UTC m=+0.076981765 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, config_id=tripleo_step4, managed_by=tripleo_ansible, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, distribution-scope=public, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']})
Feb 20 08:21:31 np0005625203.localdomain podman[83423]: 2026-02-20 08:21:31.134430645 +0000 UTC m=+0.453849000 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, name=rhosp-rhel9/openstack-nova-compute)
Feb 20 08:21:31 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:21:31 np0005625203.localdomain sshd[83356]: Invalid user avalanche from 185.246.128.171 port 29703
Feb 20 08:21:32 np0005625203.localdomain sshd[83356]: Disconnecting invalid user avalanche 185.246.128.171 port 29703: Change of username or service not allowed: (avalanche,ssh-connection) -> (gwei,ssh-connection) [preauth]
Feb 20 08:21:33 np0005625203.localdomain sshd[83445]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:21:35 np0005625203.localdomain sshd[83445]: Invalid user gwei from 185.246.128.171 port 27934
Feb 20 08:21:36 np0005625203.localdomain sshd[83445]: Disconnecting invalid user gwei 185.246.128.171 port 27934: Change of username or service not allowed: (gwei,ssh-connection) -> (chef,ssh-connection) [preauth]
Feb 20 08:21:37 np0005625203.localdomain sshd[83447]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:21:39 np0005625203.localdomain sshd[83447]: Invalid user chef from 185.246.128.171 port 47545
Feb 20 08:21:40 np0005625203.localdomain sshd[83447]: Disconnecting invalid user chef 185.246.128.171 port 47545: Change of username or service not allowed: (chef,ssh-connection) -> (cloudera,ssh-connection) [preauth]
Feb 20 08:21:40 np0005625203.localdomain sshd[83449]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:21:40 np0005625203.localdomain sshd[83449]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:21:42 np0005625203.localdomain sshd[83451]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:21:44 np0005625203.localdomain sshd[83451]: Invalid user cloudera from 185.246.128.171 port 63562
Feb 20 08:21:44 np0005625203.localdomain sshd[83451]: Disconnecting invalid user cloudera 185.246.128.171 port 63562: Change of username or service not allowed: (cloudera,ssh-connection) -> (tomcat,ssh-connection) [preauth]
Feb 20 08:21:46 np0005625203.localdomain sshd[83453]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:21:48 np0005625203.localdomain sshd[83453]: Invalid user tomcat from 185.246.128.171 port 32611
Feb 20 08:21:50 np0005625203.localdomain sshd[83498]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:21:50 np0005625203.localdomain sshd[83453]: error: maximum authentication attempts exceeded for invalid user tomcat from 185.246.128.171 port 32611 ssh2 [preauth]
Feb 20 08:21:50 np0005625203.localdomain sshd[83453]: Disconnecting invalid user tomcat 185.246.128.171 port 32611: Too many authentication failures [preauth]
Feb 20 08:21:50 np0005625203.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Feb 20 08:21:50 np0005625203.localdomain sshd[83498]: Received disconnect from 189.190.2.14 port 57042:11: Bye Bye [preauth]
Feb 20 08:21:50 np0005625203.localdomain sshd[83498]: Disconnected from authenticating user root 189.190.2.14 port 57042 [preauth]
Feb 20 08:21:50 np0005625203.localdomain recover_tripleo_nova_virtqemud[83503]: 62505
Feb 20 08:21:50 np0005625203.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Feb 20 08:21:50 np0005625203.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Feb 20 08:21:51 np0005625203.localdomain sshd[83504]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:21:52 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:21:52 np0005625203.localdomain systemd[1]: tmp-crun.bfvBMg.mount: Deactivated successfully.
Feb 20 08:21:52 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:21:52 np0005625203.localdomain podman[83506]: 2026-02-20 08:21:52.790177773 +0000 UTC m=+0.110141049 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-ref=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, container_name=iscsid, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-iscsid, release=1766032510, url=https://www.redhat.com, vcs-type=git, build-date=2026-01-12T22:34:43Z, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, batch=17.1_20260112.1, distribution-scope=public)
Feb 20 08:21:52 np0005625203.localdomain podman[83506]: 2026-02-20 08:21:52.80221263 +0000 UTC m=+0.122175926 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:34:43Z, build-date=2026-01-12T22:34:43Z, release=1766032510, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-iscsid-container, name=rhosp-rhel9/openstack-iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, io.openshift.expose-services=, version=17.1.13, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid)
Feb 20 08:21:52 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:21:52 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:21:52 np0005625203.localdomain podman[83524]: 2026-02-20 08:21:52.882750714 +0000 UTC m=+0.084959739 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-collectd, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, release=1766032510, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, io.openshift.expose-services=, version=17.1.13, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, tcib_managed=true, config_id=tripleo_step3, container_name=collectd)
Feb 20 08:21:52 np0005625203.localdomain podman[83524]: 2026-02-20 08:21:52.895269866 +0000 UTC m=+0.097478921 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., distribution-scope=public, build-date=2026-01-12T22:10:15Z, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=openstack-collectd-container, name=rhosp-rhel9/openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.5)
Feb 20 08:21:52 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:21:52 np0005625203.localdomain sshd[83504]: Invalid user tomcat from 185.246.128.171 port 20862
Feb 20 08:21:52 np0005625203.localdomain podman[83541]: 2026-02-20 08:21:52.964014879 +0000 UTC m=+0.074525401 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20260112.1, vcs-type=git, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, io.buildah.version=1.41.5, container_name=metrics_qdr, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:14Z, version=17.1.13, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']})
Feb 20 08:21:53 np0005625203.localdomain podman[83541]: 2026-02-20 08:21:53.164275562 +0000 UTC m=+0.274786124 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, batch=17.1_20260112.1, io.buildah.version=1.41.5, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, 
org.opencontainers.image.created=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, version=17.1.13, container_name=metrics_qdr, tcib_managed=true)
Feb 20 08:21:53 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:21:53 np0005625203.localdomain sshd[83504]: Disconnecting invalid user tomcat 185.246.128.171 port 20862: Change of username or service not allowed: (tomcat,ssh-connection) -> (website,ssh-connection) [preauth]
Feb 20 08:21:54 np0005625203.localdomain sshd[83572]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:21:56 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:21:56 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:21:56 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:21:56 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:21:56 np0005625203.localdomain podman[83574]: 2026-02-20 08:21:56.774046701 +0000 UTC m=+0.092991504 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, build-date=2026-01-12T22:10:15Z, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=logrotate_crond, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, com.redhat.component=openstack-cron-container)
Feb 20 08:21:56 np0005625203.localdomain systemd[1]: tmp-crun.Whfq1F.mount: Deactivated successfully.
Feb 20 08:21:56 np0005625203.localdomain podman[83576]: 2026-02-20 08:21:56.835652695 +0000 UTC m=+0.152368120 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.buildah.version=1.41.5, batch=17.1_20260112.1, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., 
konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2026-01-12T23:07:47Z, version=17.1.13, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, release=1766032510, distribution-scope=public)
Feb 20 08:21:56 np0005625203.localdomain podman[83577]: 2026-02-20 08:21:56.836258744 +0000 UTC m=+0.144047326 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, tcib_managed=true, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, version=17.1.13, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:07:30Z, name=rhosp-rhel9/openstack-ceilometer-ipmi, distribution-scope=public, release=1766032510)
Feb 20 08:21:56 np0005625203.localdomain podman[83574]: 2026-02-20 08:21:56.860510446 +0000 UTC m=+0.179455289 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20260112.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.expose-services=, io.buildah.version=1.41.5, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, release=1766032510, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, com.redhat.component=openstack-cron-container, build-date=2026-01-12T22:10:15Z, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, version=17.1.13, name=rhosp-rhel9/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:21:56 np0005625203.localdomain podman[83576]: 2026-02-20 08:21:56.871965026 +0000 UTC m=+0.188680451 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2026-01-12T23:07:47Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, version=17.1.13, distribution-scope=public, io.buildah.version=1.41.5, url=https://www.redhat.com, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:47Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., batch=17.1_20260112.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:21:56 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:21:56 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:21:56 np0005625203.localdomain podman[83577]: 2026-02-20 08:21:56.916161948 +0000 UTC m=+0.223950560 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, vcs-type=git, managed_by=tripleo_ansible, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, version=17.1.13, name=rhosp-rhel9/openstack-ceilometer-ipmi, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, build-date=2026-01-12T23:07:30Z, architecture=x86_64)
Feb 20 08:21:56 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:21:56 np0005625203.localdomain podman[83575]: 2026-02-20 08:21:56.961837074 +0000 UTC m=+0.281205240 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, config_id=tripleo_step5, batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vcs-type=git, io.openshift.expose-services=, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, maintainer=OpenStack TripleO Team, container_name=nova_compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, version=17.1.13)
Feb 20 08:21:57 np0005625203.localdomain podman[83575]: 2026-02-20 08:21:57.024393577 +0000 UTC m=+0.343761773 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, managed_by=tripleo_ansible, url=https://www.redhat.com, release=1766032510, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., build-date=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.5, container_name=nova_compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, config_id=tripleo_step5, batch=17.1_20260112.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z)
Feb 20 08:21:57 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:21:59 np0005625203.localdomain sshd[83572]: Invalid user website from 185.246.128.171 port 8519
Feb 20 08:21:59 np0005625203.localdomain sshd[83572]: Disconnecting invalid user website 185.246.128.171 port 8519: Change of username or service not allowed: (website,ssh-connection) -> (ali,ssh-connection) [preauth]
Feb 20 08:22:00 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:22:00 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:22:00 np0005625203.localdomain podman[83673]: 2026-02-20 08:22:00.773565579 +0000 UTC m=+0.084997621 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.5, release=1766032510, build-date=2026-01-12T22:36:40Z, config_id=tripleo_step4, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, architecture=x86_64, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, tcib_managed=true, name=rhosp-rhel9/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com)
Feb 20 08:22:00 np0005625203.localdomain podman[83673]: 2026-02-20 08:22:00.825838936 +0000 UTC m=+0.137270978 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ovn-controller, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, build-date=2026-01-12T22:36:40Z, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, container_name=ovn_controller, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, distribution-scope=public, 
io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.openshift.expose-services=)
Feb 20 08:22:00 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Deactivated successfully.
Feb 20 08:22:00 np0005625203.localdomain podman[83672]: 2026-02-20 08:22:00.826887259 +0000 UTC m=+0.139890449 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, distribution-scope=public, release=1766032510, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, build-date=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Feb 20 08:22:00 np0005625203.localdomain podman[83672]: 2026-02-20 08:22:00.911320521 +0000 UTC m=+0.224323761 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, vcs-type=git, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, architecture=x86_64, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, url=https://www.redhat.com, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:22:00 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Deactivated successfully.
Feb 20 08:22:01 np0005625203.localdomain sshd[83720]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:22:01 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:22:01 np0005625203.localdomain systemd[1]: tmp-crun.vWLlCx.mount: Deactivated successfully.
Feb 20 08:22:01 np0005625203.localdomain podman[83722]: 2026-02-20 08:22:01.760771228 +0000 UTC m=+0.079773510 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., build-date=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, url=https://www.redhat.com, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, name=rhosp-rhel9/openstack-nova-compute, io.openshift.expose-services=, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git)
Feb 20 08:22:02 np0005625203.localdomain podman[83722]: 2026-02-20 08:22:02.192315955 +0000 UTC m=+0.511318277 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, release=1766032510, batch=17.1_20260112.1, io.buildah.version=1.41.5, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, distribution-scope=public, name=rhosp-rhel9/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, container_name=nova_migration_target, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']})
Feb 20 08:22:02 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:22:03 np0005625203.localdomain sshd[83720]: Invalid user ali from 185.246.128.171 port 26131
Feb 20 08:22:06 np0005625203.localdomain sshd[83720]: Disconnecting invalid user ali 185.246.128.171 port 26131: Change of username or service not allowed: (ali,ssh-connection) -> (testing,ssh-connection) [preauth]
Feb 20 08:22:09 np0005625203.localdomain sshd[83745]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:22:11 np0005625203.localdomain sshd[83747]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:22:11 np0005625203.localdomain sshd[83747]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:22:11 np0005625203.localdomain sshd[83745]: Invalid user testing from 185.246.128.171 port 59642
Feb 20 08:22:12 np0005625203.localdomain sshd[83745]: Disconnecting invalid user testing 185.246.128.171 port 59642: Change of username or service not allowed: (testing,ssh-connection) -> (matrix,ssh-connection) [preauth]
Feb 20 08:22:13 np0005625203.localdomain sshd[83749]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:22:15 np0005625203.localdomain sshd[83749]: Invalid user matrix from 185.246.128.171 port 30016
Feb 20 08:22:17 np0005625203.localdomain sshd[83749]: Disconnecting invalid user matrix 185.246.128.171 port 30016: Change of username or service not allowed: (matrix,ssh-connection) -> (odoo,ssh-connection) [preauth]
Feb 20 08:22:17 np0005625203.localdomain sshd[83751]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:22:18 np0005625203.localdomain sshd[83751]: Invalid user odoo from 185.246.128.171 port 27688
Feb 20 08:22:19 np0005625203.localdomain sshd[83751]: Disconnecting invalid user odoo 185.246.128.171 port 27688: Change of username or service not allowed: (odoo,ssh-connection) -> (vps,ssh-connection) [preauth]
Feb 20 08:22:20 np0005625203.localdomain sshd[83753]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:22:22 np0005625203.localdomain sshd[81220]: Received disconnect from 38.102.83.114 port 34760:11: disconnected by user
Feb 20 08:22:22 np0005625203.localdomain sshd[81220]: Disconnected from user zuul 38.102.83.114 port 34760
Feb 20 08:22:22 np0005625203.localdomain sshd[81217]: pam_unix(sshd:session): session closed for user zuul
Feb 20 08:22:22 np0005625203.localdomain systemd[1]: session-34.scope: Deactivated successfully.
Feb 20 08:22:22 np0005625203.localdomain systemd[1]: session-34.scope: Consumed 19.138s CPU time.
Feb 20 08:22:22 np0005625203.localdomain systemd-logind[759]: Session 34 logged out. Waiting for processes to exit.
Feb 20 08:22:22 np0005625203.localdomain systemd-logind[759]: Removed session 34.
Feb 20 08:22:23 np0005625203.localdomain sshd[83753]: Invalid user vps from 185.246.128.171 port 49679
Feb 20 08:22:23 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:22:23 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:22:23 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:22:23 np0005625203.localdomain systemd[1]: tmp-crun.uxRSTy.mount: Deactivated successfully.
Feb 20 08:22:23 np0005625203.localdomain podman[83755]: 2026-02-20 08:22:23.57920852 +0000 UTC m=+0.103621060 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, managed_by=tripleo_ansible, io.buildah.version=1.41.5, release=1766032510, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, version=17.1.13, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, distribution-scope=public, container_name=collectd, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, architecture=x86_64)
Feb 20 08:22:23 np0005625203.localdomain podman[83756]: 2026-02-20 08:22:23.620024407 +0000 UTC m=+0.143862860 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, vendor=Red Hat, Inc., build-date=2026-01-12T22:34:43Z, managed_by=tripleo_ansible, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step3, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=705339545363fec600102567c4e923938e0f43b3, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, name=rhosp-rhel9/openstack-iscsid, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20260112.1, container_name=iscsid, release=1766032510, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git)
Feb 20 08:22:23 np0005625203.localdomain podman[83756]: 2026-02-20 08:22:23.630116286 +0000 UTC m=+0.153954779 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, batch=17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, version=17.1.13, name=rhosp-rhel9/openstack-iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2026-01-12T22:34:43Z, maintainer=OpenStack TripleO Team, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
vcs-ref=705339545363fec600102567c4e923938e0f43b3, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.5, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, io.openshift.expose-services=, container_name=iscsid, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:22:23 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:22:23 np0005625203.localdomain podman[83757]: 2026-02-20 08:22:23.675831264 +0000 UTC m=+0.197514521 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp-rhel9/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:14Z, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, tcib_managed=true, batch=17.1_20260112.1, release=1766032510, io.buildah.version=1.41.5, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=)
Feb 20 08:22:23 np0005625203.localdomain podman[83755]: 2026-02-20 08:22:23.69465221 +0000 UTC m=+0.219064730 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, version=17.1.13, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, distribution-scope=public, release=1766032510, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, container_name=collectd, io.buildah.version=1.41.5, vcs-type=git, url=https://www.redhat.com, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-collectd, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z)
Feb 20 08:22:23 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:22:23 np0005625203.localdomain podman[83757]: 2026-02-20 08:22:23.881056709 +0000 UTC m=+0.402740006 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, vendor=Red Hat, Inc., config_id=tripleo_step1, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, release=1766032510, url=https://www.redhat.com)
Feb 20 08:22:23 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:22:24 np0005625203.localdomain sshd[83753]: Disconnecting invalid user vps 185.246.128.171 port 49679: Change of username or service not allowed: (vps,ssh-connection) -> (clouduser,ssh-connection) [preauth]
Feb 20 08:22:25 np0005625203.localdomain sshd[83825]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:22:25 np0005625203.localdomain sshd[83827]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:22:25 np0005625203.localdomain sshd[83825]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:22:27 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:22:27 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:22:27 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:22:27 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:22:27 np0005625203.localdomain podman[83830]: 2026-02-20 08:22:27.776488453 +0000 UTC m=+0.089048264 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_id=tripleo_step5, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, container_name=nova_compute, url=https://www.redhat.com, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.5, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, distribution-scope=public, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z)
Feb 20 08:22:27 np0005625203.localdomain podman[83829]: 2026-02-20 08:22:27.750323003 +0000 UTC m=+0.067509236 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:15Z, build-date=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, url=https://www.redhat.com, com.redhat.component=openstack-cron-container)
Feb 20 08:22:27 np0005625203.localdomain podman[83830]: 2026-02-20 08:22:27.826549335 +0000 UTC m=+0.139109136 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, url=https://www.redhat.com, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1766032510, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-nova-compute, build-date=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, config_id=tripleo_step5, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Feb 20 08:22:27 np0005625203.localdomain podman[83835]: 2026-02-20 08:22:27.842552713 +0000 UTC m=+0.143816429 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-type=git, build-date=2026-01-12T23:07:30Z, config_id=tripleo_step4, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, version=17.1.13, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:07:30Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Feb 20 08:22:27 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:22:27 np0005625203.localdomain podman[83835]: 2026-02-20 08:22:27.870314072 +0000 UTC m=+0.171577768 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, 
summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, build-date=2026-01-12T23:07:30Z, name=rhosp-rhel9/openstack-ceilometer-ipmi, release=1766032510, version=17.1.13, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:22:27 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:22:27 np0005625203.localdomain podman[83831]: 2026-02-20 08:22:27.919061573 +0000 UTC m=+0.230581852 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vendor=Red Hat, Inc., io.buildah.version=1.41.5, tcib_managed=true, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ceilometer-compute, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, url=https://www.redhat.com, build-date=2026-01-12T23:07:47Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, container_name=ceilometer_agent_compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Feb 20 08:22:27 np0005625203.localdomain podman[83829]: 2026-02-20 08:22:27.930333558 +0000 UTC m=+0.247519811 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, tcib_managed=true, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_id=tripleo_step4, distribution-scope=public, io.buildah.version=1.41.5, url=https://www.redhat.com, release=1766032510, build-date=2026-01-12T22:10:15Z, container_name=logrotate_crond, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron)
Feb 20 08:22:27 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:22:27 np0005625203.localdomain podman[83831]: 2026-02-20 08:22:27.955173168 +0000 UTC m=+0.266693437 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, url=https://www.redhat.com, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, 
vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.13, container_name=ceilometer_agent_compute, io.buildah.version=1.41.5, batch=17.1_20260112.1, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:07:47Z, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Feb 20 08:22:27 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:22:29 np0005625203.localdomain sudo[83928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:22:29 np0005625203.localdomain sudo[83928]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:22:29 np0005625203.localdomain sudo[83928]: pam_unix(sudo:session): session closed for user root
Feb 20 08:22:29 np0005625203.localdomain sudo[83943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Feb 20 08:22:29 np0005625203.localdomain sudo[83943]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:22:29 np0005625203.localdomain sshd[83827]: Invalid user clouduser from 185.246.128.171 port 15017
Feb 20 08:22:29 np0005625203.localdomain sshd[83827]: Disconnecting invalid user clouduser 185.246.128.171 port 15017: Change of username or service not allowed: (clouduser,ssh-connection) -> (vpnuser,ssh-connection) [preauth]
Feb 20 08:22:29 np0005625203.localdomain podman[84029]: 2026-02-20 08:22:29.942064628 +0000 UTC m=+0.084874296 container exec f6bf94ad66e54cdd610286b233c639418eb2b995978f1a145d6cfa77a016071d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203, io.buildah.version=1.42.2, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, name=rhceph, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, ceph=True, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, version=7, GIT_CLEAN=True, release=1770267347)
Feb 20 08:22:30 np0005625203.localdomain podman[84029]: 2026-02-20 08:22:30.047936686 +0000 UTC m=+0.190746414 container exec_died f6bf94ad66e54cdd610286b233c639418eb2b995978f1a145d6cfa77a016071d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203, GIT_BRANCH=main, build-date=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.42.2, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, distribution-scope=public, ceph=True, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7)
Feb 20 08:22:30 np0005625203.localdomain sudo[83943]: pam_unix(sudo:session): session closed for user root
Feb 20 08:22:30 np0005625203.localdomain sudo[84094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:22:30 np0005625203.localdomain sudo[84094]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:22:30 np0005625203.localdomain sudo[84094]: pam_unix(sudo:session): session closed for user root
Feb 20 08:22:30 np0005625203.localdomain sudo[84109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 08:22:30 np0005625203.localdomain sudo[84109]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:22:31 np0005625203.localdomain sudo[84109]: pam_unix(sudo:session): session closed for user root
Feb 20 08:22:31 np0005625203.localdomain sshd[84157]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:22:31 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:22:31 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:22:31 np0005625203.localdomain podman[84159]: 2026-02-20 08:22:31.777539278 +0000 UTC m=+0.086170147 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, distribution-scope=public, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:36:40Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:36:40Z, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., release=1766032510, io.openshift.expose-services=, tcib_managed=true, 
batch=17.1_20260112.1, managed_by=tripleo_ansible, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:22:31 np0005625203.localdomain sudo[84180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 08:22:31 np0005625203.localdomain sudo[84180]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:22:31 np0005625203.localdomain sudo[84180]: pam_unix(sudo:session): session closed for user root
Feb 20 08:22:31 np0005625203.localdomain podman[84159]: 2026-02-20 08:22:31.839358538 +0000 UTC m=+0.147989437 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, build-date=2026-01-12T22:36:40Z, org.opencontainers.image.created=2026-01-12T22:36:40Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, name=rhosp-rhel9/openstack-ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, release=1766032510, 
batch=17.1_20260112.1, architecture=x86_64, url=https://www.redhat.com, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:22:31 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Deactivated successfully.
Feb 20 08:22:31 np0005625203.localdomain podman[84158]: 2026-02-20 08:22:31.841240996 +0000 UTC m=+0.149563265 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:56:19Z, release=1766032510, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, io.openshift.expose-services=, url=https://www.redhat.com, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, build-date=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Feb 20 08:22:31 np0005625203.localdomain podman[84158]: 2026-02-20 08:22:31.921588683 +0000 UTC m=+0.229910902 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, org.opencontainers.image.created=2026-01-12T22:56:19Z, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:56:19Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, release=1766032510, vendor=Red Hat, Inc., architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.13, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 
'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20260112.1, io.buildah.version=1.41.5)
Feb 20 08:22:31 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Deactivated successfully.
Feb 20 08:22:32 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:22:32 np0005625203.localdomain podman[84220]: 2026-02-20 08:22:32.510320037 +0000 UTC m=+0.077273704 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.13, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, build-date=2026-01-12T23:32:04Z, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-compute, vendor=Red Hat, Inc., container_name=nova_migration_target, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git)
Feb 20 08:22:32 np0005625203.localdomain podman[84220]: 2026-02-20 08:22:32.912380012 +0000 UTC m=+0.479333659 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, release=1766032510, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, url=https://www.redhat.com, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.5, distribution-scope=public, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, config_id=tripleo_step4)
Feb 20 08:22:32 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:22:34 np0005625203.localdomain sshd[84157]: Invalid user vpnuser from 185.246.128.171 port 17526
Feb 20 08:22:35 np0005625203.localdomain sshd[84157]: Disconnecting invalid user vpnuser 185.246.128.171 port 17526: Change of username or service not allowed: (vpnuser,ssh-connection) -> (ken,ssh-connection) [preauth]
Feb 20 08:22:37 np0005625203.localdomain sshd[84244]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:22:40 np0005625203.localdomain sshd[84244]: Invalid user ken from 185.246.128.171 port 61855
Feb 20 08:22:40 np0005625203.localdomain sshd[84244]: Disconnecting invalid user ken 185.246.128.171 port 61855: Change of username or service not allowed: (ken,ssh-connection) -> (fedora,ssh-connection) [preauth]
Feb 20 08:22:42 np0005625203.localdomain sshd[84246]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:22:43 np0005625203.localdomain sshd[84246]: Invalid user fedora from 185.246.128.171 port 39657
Feb 20 08:22:43 np0005625203.localdomain sshd[84246]: Disconnecting invalid user fedora 185.246.128.171 port 39657: Change of username or service not allowed: (fedora,ssh-connection) -> (alfresco,ssh-connection) [preauth]
Feb 20 08:22:44 np0005625203.localdomain sshd[84248]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:22:47 np0005625203.localdomain sshd[84248]: Invalid user alfresco from 185.246.128.171 port 21872
Feb 20 08:22:47 np0005625203.localdomain sshd[84248]: Disconnecting invalid user alfresco 185.246.128.171 port 21872: Change of username or service not allowed: (alfresco,ssh-connection) -> (smart,ssh-connection) [preauth]
Feb 20 08:22:49 np0005625203.localdomain sshd[84250]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:22:50 np0005625203.localdomain sshd[84272]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:22:50 np0005625203.localdomain sshd[84272]: Invalid user ftpuser from 147.135.114.8 port 49518
Feb 20 08:22:50 np0005625203.localdomain sshd[84272]: Received disconnect from 147.135.114.8 port 49518:11: Bye Bye [preauth]
Feb 20 08:22:50 np0005625203.localdomain sshd[84272]: Disconnected from invalid user ftpuser 147.135.114.8 port 49518 [preauth]
Feb 20 08:22:51 np0005625203.localdomain sshd[84250]: Invalid user smart from 185.246.128.171 port 23173
Feb 20 08:22:51 np0005625203.localdomain sshd[84250]: Disconnecting invalid user smart 185.246.128.171 port 23173: Change of username or service not allowed: (smart,ssh-connection) -> (sftp,ssh-connection) [preauth]
Feb 20 08:22:52 np0005625203.localdomain sshd[84299]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:22:53 np0005625203.localdomain sshd[84301]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:22:53 np0005625203.localdomain sshd[84299]: Invalid user sftp from 212.154.234.9 port 19699
Feb 20 08:22:53 np0005625203.localdomain sshd[84299]: Received disconnect from 212.154.234.9 port 19699:11: Bye Bye [preauth]
Feb 20 08:22:53 np0005625203.localdomain sshd[84299]: Disconnected from invalid user sftp 212.154.234.9 port 19699 [preauth]
Feb 20 08:22:53 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:22:53 np0005625203.localdomain podman[84302]: 2026-02-20 08:22:53.750074021 +0000 UTC m=+0.067312521 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, batch=17.1_20260112.1, io.openshift.expose-services=, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-iscsid, build-date=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, config_id=tripleo_step3, io.buildah.version=1.41.5, tcib_managed=true, distribution-scope=public)
Feb 20 08:22:53 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:22:53 np0005625203.localdomain podman[84302]: 2026-02-20 08:22:53.769395991 +0000 UTC m=+0.086634461 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, release=1766032510, org.opencontainers.image.created=2026-01-12T22:34:43Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, config_id=tripleo_step3, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, build-date=2026-01-12T22:34:43Z, version=17.1.13, name=rhosp-rhel9/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true)
Feb 20 08:22:53 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:22:53 np0005625203.localdomain podman[84321]: 2026-02-20 08:22:53.843569409 +0000 UTC m=+0.071710144 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, release=1766032510, tcib_managed=true, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:15Z, version=17.1.13, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, url=https://www.redhat.com, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, batch=17.1_20260112.1, config_id=tripleo_step3, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container)
Feb 20 08:22:53 np0005625203.localdomain podman[84321]: 2026-02-20 08:22:53.851623956 +0000 UTC m=+0.079764721 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, architecture=x86_64, name=rhosp-rhel9/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, url=https://www.redhat.com, build-date=2026-01-12T22:10:15Z, config_id=tripleo_step3, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, vendor=Red Hat, Inc., version=17.1.13, tcib_managed=true, container_name=collectd)
Feb 20 08:22:53 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:22:54 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:22:54 np0005625203.localdomain systemd[1]: tmp-crun.ufqyLB.mount: Deactivated successfully.
Feb 20 08:22:54 np0005625203.localdomain podman[84342]: 2026-02-20 08:22:54.776930402 +0000 UTC m=+0.094216103 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20260112.1, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 
qdrouterd, container_name=metrics_qdr, build-date=2026-01-12T22:10:14Z, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, url=https://www.redhat.com, architecture=x86_64, io.buildah.version=1.41.5, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public)
Feb 20 08:22:54 np0005625203.localdomain podman[84342]: 2026-02-20 08:22:54.966029075 +0000 UTC m=+0.283314716 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-qdrouterd, container_name=metrics_qdr, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, version=17.1.13, build-date=2026-01-12T22:10:14Z, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack 
Platform 17.1 qdrouterd, vcs-type=git, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd)
Feb 20 08:22:54 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:22:56 np0005625203.localdomain sshd[84301]: Invalid user sftp from 185.246.128.171 port 26987
Feb 20 08:22:58 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:22:58 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:22:58 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:22:58 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:22:58 np0005625203.localdomain podman[84373]: 2026-02-20 08:22:58.768820297 +0000 UTC m=+0.084890726 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, version=17.1.13, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:15Z, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20260112.1, release=1766032510, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:22:58 np0005625203.localdomain podman[84373]: 2026-02-20 08:22:58.782282999 +0000 UTC m=+0.098353418 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-cron, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:15Z, release=1766032510, 
architecture=x86_64, io.openshift.expose-services=, build-date=2026-01-12T22:10:15Z, version=17.1.13, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, vcs-type=git, url=https://www.redhat.com, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, container_name=logrotate_crond)
Feb 20 08:22:58 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:22:58 np0005625203.localdomain podman[84374]: 2026-02-20 08:22:58.831498024 +0000 UTC m=+0.145981475 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, architecture=x86_64, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, batch=17.1_20260112.1, distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, version=17.1.13)
Feb 20 08:22:58 np0005625203.localdomain systemd[1]: tmp-crun.HNSVLr.mount: Deactivated successfully.
Feb 20 08:22:58 np0005625203.localdomain podman[84375]: 2026-02-20 08:22:58.892560121 +0000 UTC m=+0.204171915 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, version=17.1.13, architecture=x86_64, name=rhosp-rhel9/openstack-ceilometer-compute, config_id=tripleo_step4, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, vcs-type=git, distribution-scope=public, konflux.additional-tags=17.1.13 
17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, batch=17.1_20260112.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:07:47Z, org.opencontainers.image.created=2026-01-12T23:07:47Z, url=https://www.redhat.com)
Feb 20 08:22:58 np0005625203.localdomain podman[84376]: 2026-02-20 08:22:58.935962239 +0000 UTC m=+0.245030015 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2026-01-12T23:07:30Z, maintainer=OpenStack TripleO Team, release=1766032510, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, version=17.1.13, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ceilometer_agent_ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, url=https://www.redhat.com, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Feb 20 08:22:58 np0005625203.localdomain podman[84375]: 2026-02-20 08:22:58.950471793 +0000 UTC m=+0.262083537 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, build-date=2026-01-12T23:07:47Z, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, io.openshift.expose-services=, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:07:47Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, release=1766032510, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, container_name=ceilometer_agent_compute, vcs-type=git)
Feb 20 08:22:58 np0005625203.localdomain podman[84374]: 2026-02-20 08:22:58.959952162 +0000 UTC m=+0.274435643 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, build-date=2026-01-12T23:32:04Z, container_name=nova_compute, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, io.buildah.version=1.41.5, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 
'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true)
Feb 20 08:22:58 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:22:58 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:22:58 np0005625203.localdomain podman[84376]: 2026-02-20 08:22:58.995223511 +0000 UTC m=+0.304291267 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:30Z, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, container_name=ceilometer_agent_ipmi, distribution-scope=public, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com)
Feb 20 08:22:59 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:22:59 np0005625203.localdomain sshd[84301]: Disconnecting invalid user sftp 185.246.128.171 port 26987: Change of username or service not allowed: (sftp,ssh-connection) -> (amir,ssh-connection) [preauth]
Feb 20 08:23:01 np0005625203.localdomain sshd[84467]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:23:02 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:23:02 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:23:02 np0005625203.localdomain podman[84469]: 2026-02-20 08:23:02.76326076 +0000 UTC m=+0.081056290 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:56:19Z, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, url=https://www.redhat.com, io.buildah.version=1.41.5, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, distribution-scope=public, config_id=tripleo_step4, build-date=2026-01-12T22:56:19Z)
Feb 20 08:23:02 np0005625203.localdomain podman[84469]: 2026-02-20 08:23:02.808816303 +0000 UTC m=+0.126611823 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.41.5, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, version=17.1.13, url=https://www.redhat.com, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, vendor=Red Hat, Inc., io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:56:19Z, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team)
Feb 20 08:23:02 np0005625203.localdomain podman[84470]: 2026-02-20 08:23:02.82277623 +0000 UTC m=+0.137942069 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, batch=17.1_20260112.1, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, architecture=x86_64, tcib_managed=true, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:36:40Z, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ovn-controller, managed_by=tripleo_ansible, 
summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, build-date=2026-01-12T22:36:40Z, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller)
Feb 20 08:23:02 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Deactivated successfully.
Feb 20 08:23:02 np0005625203.localdomain podman[84470]: 2026-02-20 08:23:02.8502359 +0000 UTC m=+0.165401779 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, vcs-type=git, version=17.1.13, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, io.buildah.version=1.41.5, io.openshift.expose-services=, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, url=https://www.redhat.com, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., architecture=x86_64, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:36:40Z, build-date=2026-01-12T22:36:40Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public)
Feb 20 08:23:02 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Deactivated successfully.
Feb 20 08:23:03 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:23:03 np0005625203.localdomain podman[84517]: 2026-02-20 08:23:03.764210959 +0000 UTC m=+0.083652139 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.5, release=1766032510, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, 
name=rhosp-rhel9/openstack-nova-compute, vendor=Red Hat, Inc., tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20260112.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-compute)
Feb 20 08:23:04 np0005625203.localdomain podman[84517]: 2026-02-20 08:23:04.169472613 +0000 UTC m=+0.488913743 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, release=1766032510, url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:32:04Z)
Feb 20 08:23:04 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:23:05 np0005625203.localdomain sshd[84467]: Invalid user amir from 185.246.128.171 port 42826
Feb 20 08:23:07 np0005625203.localdomain sshd[84467]: Disconnecting invalid user amir 185.246.128.171 port 42826: Change of username or service not allowed: (amir,ssh-connection) -> (xd,ssh-connection) [preauth]
Feb 20 08:23:08 np0005625203.localdomain sshd[84540]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:23:09 np0005625203.localdomain sshd[84541]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:23:09 np0005625203.localdomain sshd[84541]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:23:09 np0005625203.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Feb 20 08:23:09 np0005625203.localdomain recover_tripleo_nova_virtqemud[84544]: 62505
Feb 20 08:23:09 np0005625203.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Feb 20 08:23:09 np0005625203.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Feb 20 08:23:11 np0005625203.localdomain sshd[84540]: Invalid user xd from 185.246.128.171 port 14188
Feb 20 08:23:13 np0005625203.localdomain sshd[84540]: Disconnecting invalid user xd 185.246.128.171 port 14188: Change of username or service not allowed: (xd,ssh-connection) -> (maroof,ssh-connection) [preauth]
Feb 20 08:23:15 np0005625203.localdomain sshd[84546]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:23:15 np0005625203.localdomain sshd[84547]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:23:18 np0005625203.localdomain sshd[84546]: Invalid user maroof from 185.246.128.171 port 39439
Feb 20 08:23:18 np0005625203.localdomain sshd[84546]: Disconnecting invalid user maroof 185.246.128.171 port 39439: Change of username or service not allowed: (maroof,ssh-connection) -> (firefly,ssh-connection) [preauth]
Feb 20 08:23:18 np0005625203.localdomain sshd[84550]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:23:18 np0005625203.localdomain sshd[84547]: error: maximum authentication attempts exceeded for root from 36.89.252.58 port 43596 ssh2 [preauth]
Feb 20 08:23:18 np0005625203.localdomain sshd[84547]: Disconnecting authenticating user root 36.89.252.58 port 43596: Too many authentication failures [preauth]
Feb 20 08:23:19 np0005625203.localdomain sshd[84552]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:23:21 np0005625203.localdomain sshd[84550]: Invalid user firefly from 185.246.128.171 port 27797
Feb 20 08:23:21 np0005625203.localdomain sshd[84550]: Disconnecting invalid user firefly 185.246.128.171 port 27797: Change of username or service not allowed: (firefly,ssh-connection) -> (seki,ssh-connection) [preauth]
Feb 20 08:23:21 np0005625203.localdomain sshd[84554]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:23:22 np0005625203.localdomain sshd[84552]: Connection closed by authenticating user root 36.89.252.58 port 43722 [preauth]
Feb 20 08:23:22 np0005625203.localdomain sshd[84554]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:23:24 np0005625203.localdomain sshd[84556]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:23:24 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:23:24 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:23:24 np0005625203.localdomain systemd[1]: tmp-crun.irK4VQ.mount: Deactivated successfully.
Feb 20 08:23:24 np0005625203.localdomain podman[84558]: 2026-02-20 08:23:24.764769421 +0000 UTC m=+0.084800975 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, version=17.1.13, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, config_id=tripleo_step3, distribution-scope=public, vcs-ref=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2026-01-12T22:34:43Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container, container_name=iscsid, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, vendor=Red Hat, Inc., tcib_managed=true, name=rhosp-rhel9/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible)
Feb 20 08:23:24 np0005625203.localdomain podman[84558]: 2026-02-20 08:23:24.805281499 +0000 UTC m=+0.125313063 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step3, distribution-scope=public, tcib_managed=true, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vcs-ref=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, vcs-type=git, container_name=iscsid, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., release=1766032510, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid)
Feb 20 08:23:24 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:23:24 np0005625203.localdomain podman[84557]: 2026-02-20 08:23:24.811990685 +0000 UTC m=+0.130592735 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_id=tripleo_step3, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:15Z, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, name=rhosp-rhel9/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=collectd, 
batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-collectd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:23:24 np0005625203.localdomain podman[84557]: 2026-02-20 08:23:24.89229029 +0000 UTC m=+0.210892320 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.openshift.expose-services=, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, batch=17.1_20260112.1, distribution-scope=public, vendor=Red Hat, Inc., container_name=collectd, build-date=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', 
'/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:23:24 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:23:25 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:23:25 np0005625203.localdomain podman[84597]: 2026-02-20 08:23:25.766208145 +0000 UTC m=+0.082030029 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, container_name=metrics_qdr, io.openshift.expose-services=, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, architecture=x86_64, io.buildah.version=1.41.5, vcs-type=git, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:23:25 np0005625203.localdomain podman[84597]: 2026-02-20 08:23:25.993835126 +0000 UTC m=+0.309657010 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20260112.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, architecture=x86_64, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, url=https://www.redhat.com, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:14Z, release=1766032510, version=17.1.13, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd)
Feb 20 08:23:26 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:23:26 np0005625203.localdomain sshd[84556]: Invalid user seki from 185.246.128.171 port 16263
Feb 20 08:23:27 np0005625203.localdomain sshd[84556]: Disconnecting invalid user seki 185.246.128.171 port 16263: Change of username or service not allowed: (seki,ssh-connection) -> (pi,ssh-connection) [preauth]
Feb 20 08:23:29 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:23:29 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:23:29 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:23:29 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:23:29 np0005625203.localdomain podman[84628]: 2026-02-20 08:23:29.772074418 +0000 UTC m=+0.080219305 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., version=17.1.13, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, vcs-type=git, build-date=2026-01-12T23:07:30Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Feb 20 08:23:29 np0005625203.localdomain podman[84625]: 2026-02-20 08:23:29.826684018 +0000 UTC m=+0.141709535 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, url=https://www.redhat.com, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, managed_by=tripleo_ansible, release=1766032510, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, version=17.1.13, build-date=2026-01-12T22:10:15Z)
Feb 20 08:23:29 np0005625203.localdomain podman[84628]: 2026-02-20 08:23:29.832333991 +0000 UTC m=+0.140478878 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, build-date=2026-01-12T23:07:30Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.5, tcib_managed=true, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, release=1766032510, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, vcs-type=git, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:23:29 np0005625203.localdomain podman[84625]: 2026-02-20 08:23:29.83917846 +0000 UTC m=+0.154203997 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z, vcs-type=git, container_name=logrotate_crond, io.buildah.version=1.41.5, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20260112.1, 
vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510)
Feb 20 08:23:29 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:23:29 np0005625203.localdomain podman[84627]: 2026-02-20 08:23:29.880410121 +0000 UTC m=+0.192379444 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:47Z, org.opencontainers.image.created=2026-01-12T23:07:47Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 ceilometer-compute, tcib_managed=true, io.buildah.version=1.41.5, release=1766032510, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.13, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06)
Feb 20 08:23:29 np0005625203.localdomain podman[84627]: 2026-02-20 08:23:29.909653275 +0000 UTC m=+0.221622548 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-compute, io.buildah.version=1.41.5, config_id=tripleo_step4, vcs-type=git, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:47Z, 
distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.13, release=1766032510, io.openshift.expose-services=, tcib_managed=true, build-date=2026-01-12T23:07:47Z, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64)
Feb 20 08:23:29 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:23:29 np0005625203.localdomain podman[84626]: 2026-02-20 08:23:29.924796769 +0000 UTC m=+0.239758274 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., release=1766032510, architecture=x86_64, version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20260112.1, distribution-scope=public, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible)
Feb 20 08:23:29 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:23:29 np0005625203.localdomain podman[84626]: 2026-02-20 08:23:29.954665631 +0000 UTC m=+0.269627086 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, io.buildah.version=1.41.5, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, build-date=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step5, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-compute)
Feb 20 08:23:29 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:23:30 np0005625203.localdomain sshd[84724]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:23:30 np0005625203.localdomain systemd[1]: tmp-crun.DYwFtR.mount: Deactivated successfully.
Feb 20 08:23:31 np0005625203.localdomain sudo[84727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:23:31 np0005625203.localdomain sudo[84727]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:23:31 np0005625203.localdomain sudo[84727]: pam_unix(sudo:session): session closed for user root
Feb 20 08:23:32 np0005625203.localdomain sudo[84742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 08:23:32 np0005625203.localdomain sudo[84742]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:23:32 np0005625203.localdomain sshd[84724]: Invalid user pi from 185.246.128.171 port 56560
Feb 20 08:23:32 np0005625203.localdomain sudo[84742]: pam_unix(sudo:session): session closed for user root
Feb 20 08:23:33 np0005625203.localdomain sudo[84789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 08:23:33 np0005625203.localdomain sudo[84789]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:23:33 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:23:33 np0005625203.localdomain sudo[84789]: pam_unix(sudo:session): session closed for user root
Feb 20 08:23:33 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:23:33 np0005625203.localdomain podman[84804]: 2026-02-20 08:23:33.491606913 +0000 UTC m=+0.085562548 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-type=git, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, url=https://www.redhat.com, architecture=x86_64, config_id=tripleo_step4, build-date=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.13, release=1766032510, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1)
Feb 20 08:23:33 np0005625203.localdomain podman[84805]: 2026-02-20 08:23:33.548530693 +0000 UTC m=+0.138689382 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, architecture=x86_64, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, vendor=Red Hat, Inc., managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:36:40Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, container_name=ovn_controller, io.openshift.expose-services=, build-date=2026-01-12T22:36:40Z, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.13, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ovn-controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com)
Feb 20 08:23:33 np0005625203.localdomain podman[84804]: 2026-02-20 08:23:33.563913574 +0000 UTC m=+0.157869169 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, release=1766032510, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., managed_by=tripleo_ansible, container_name=ovn_metadata_agent, build-date=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.13, url=https://www.redhat.com, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, maintainer=OpenStack TripleO Team, vcs-type=git)
Feb 20 08:23:33 np0005625203.localdomain podman[84805]: 2026-02-20 08:23:33.572487796 +0000 UTC m=+0.162646525 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, container_name=ovn_controller, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, release=1766032510, io.buildah.version=1.41.5, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, 
com.redhat.component=openstack-ovn-controller-container, name=rhosp-rhel9/openstack-ovn-controller, build-date=2026-01-12T22:36:40Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_id=tripleo_step4)
Feb 20 08:23:33 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Deactivated successfully.
Feb 20 08:23:33 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Deactivated successfully.
Feb 20 08:23:34 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:23:34 np0005625203.localdomain podman[84853]: 2026-02-20 08:23:34.764937872 +0000 UTC m=+0.084405382 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:32:04Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, version=17.1.13, release=1766032510, architecture=x86_64, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
nova-compute, io.buildah.version=1.41.5, container_name=nova_migration_target, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true)
Feb 20 08:23:35 np0005625203.localdomain podman[84853]: 2026-02-20 08:23:35.140646731 +0000 UTC m=+0.460114231 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, distribution-scope=public, name=rhosp-rhel9/openstack-nova-compute, 
org.opencontainers.image.created=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, tcib_managed=true, managed_by=tripleo_ansible, batch=17.1_20260112.1, container_name=nova_migration_target, config_id=tripleo_step4, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute)
Feb 20 08:23:35 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:23:37 np0005625203.localdomain sshd[84724]: error: maximum authentication attempts exceeded for invalid user pi from 185.246.128.171 port 56560 ssh2 [preauth]
Feb 20 08:23:37 np0005625203.localdomain sshd[84724]: Disconnecting invalid user pi 185.246.128.171 port 56560: Too many authentication failures [preauth]
Feb 20 08:23:38 np0005625203.localdomain sshd[84877]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:23:42 np0005625203.localdomain sshd[84877]: Invalid user pi from 185.246.128.171 port 11674
Feb 20 08:23:44 np0005625203.localdomain sshd[84877]: Disconnecting invalid user pi 185.246.128.171 port 11674: Change of username or service not allowed: (pi,ssh-connection) -> (testftp,ssh-connection) [preauth]
Feb 20 08:23:45 np0005625203.localdomain sshd[84879]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:23:47 np0005625203.localdomain sshd[84879]: Invalid user testftp from 185.246.128.171 port 29516
Feb 20 08:23:47 np0005625203.localdomain sshd[84879]: Disconnecting invalid user testftp 185.246.128.171 port 29516: Change of username or service not allowed: (testftp,ssh-connection) -> (liqing,ssh-connection) [preauth]
Feb 20 08:23:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Feb 20 08:23:49 np0005625203.localdomain sshd[84881]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:23:53 np0005625203.localdomain sshd[84881]: Invalid user liqing from 185.246.128.171 port 8766
Feb 20 08:23:53 np0005625203.localdomain sshd[84881]: Disconnecting invalid user liqing 185.246.128.171 port 8766: Change of username or service not allowed: (liqing,ssh-connection) -> (cisco,ssh-connection) [preauth]
Feb 20 08:23:53 np0005625203.localdomain sshd[84928]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:23:54 np0005625203.localdomain sshd[84928]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:23:55 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:23:55 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:23:55 np0005625203.localdomain podman[84930]: 2026-02-20 08:23:55.780615717 +0000 UTC m=+0.093076707 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, container_name=collectd, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, architecture=x86_64, vcs-type=git, config_id=tripleo_step3, batch=17.1_20260112.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 
collectd, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-collectd, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.13, io.openshift.expose-services=, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., release=1766032510, description=Red Hat OpenStack Platform 17.1 collectd)
Feb 20 08:23:55 np0005625203.localdomain podman[84930]: 2026-02-20 08:23:55.797269296 +0000 UTC m=+0.109730256 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-collectd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, distribution-scope=public, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, version=17.1.13, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.5)
Feb 20 08:23:55 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:23:55 np0005625203.localdomain podman[84931]: 2026-02-20 08:23:55.899622347 +0000 UTC m=+0.210764597 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, vendor=Red Hat, Inc., container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20260112.1, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 
iscsid, version=17.1.13, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, vcs-ref=705339545363fec600102567c4e923938e0f43b3, url=https://www.redhat.com, name=rhosp-rhel9/openstack-iscsid, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.buildah.version=1.41.5, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step3)
Feb 20 08:23:55 np0005625203.localdomain podman[84931]: 2026-02-20 08:23:55.933354808 +0000 UTC m=+0.244497068 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp-rhel9/openstack-iscsid, com.redhat.component=openstack-iscsid-container, build-date=2026-01-12T22:34:43Z, container_name=iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, url=https://www.redhat.com, io.buildah.version=1.41.5, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, distribution-scope=public, architecture=x86_64, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, tcib_managed=true)
Feb 20 08:23:55 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:23:56 np0005625203.localdomain sshd[84969]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:23:56 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:23:56 np0005625203.localdomain systemd[1]: tmp-crun.62rlfU.mount: Deactivated successfully.
Feb 20 08:23:56 np0005625203.localdomain podman[84971]: 2026-02-20 08:23:56.771075187 +0000 UTC m=+0.092688386 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, build-date=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=17.1.13, architecture=x86_64, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, name=rhosp-rhel9/openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true)
Feb 20 08:23:56 np0005625203.localdomain podman[84971]: 2026-02-20 08:23:56.949136561 +0000 UTC m=+0.270749700 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, tcib_managed=true, build-date=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=metrics_qdr, io.openshift.expose-services=, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., batch=17.1_20260112.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, release=1766032510)
Feb 20 08:23:56 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:23:58 np0005625203.localdomain sshd[84969]: Invalid user cisco from 185.246.128.171 port 63219
Feb 20 08:24:00 np0005625203.localdomain sshd[84969]: Disconnecting invalid user cisco 185.246.128.171 port 63219: Change of username or service not allowed: (cisco,ssh-connection) -> (vpn,ssh-connection) [preauth]
Feb 20 08:24:00 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:24:00 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:24:00 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:24:00 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:24:00 np0005625203.localdomain podman[84999]: 2026-02-20 08:24:00.250736686 +0000 UTC m=+0.090547709 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20260112.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:15Z, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 cron, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, name=rhosp-rhel9/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1766032510, url=https://www.redhat.com, distribution-scope=public, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 cron)
Feb 20 08:24:00 np0005625203.localdomain podman[85001]: 2026-02-20 08:24:00.311695101 +0000 UTC m=+0.143971065 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ceilometer-compute, distribution-scope=public, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, io.openshift.expose-services=, managed_by=tripleo_ansible, io.buildah.version=1.41.5, build-date=2026-01-12T23:07:47Z, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, batch=17.1_20260112.1)
Feb 20 08:24:00 np0005625203.localdomain podman[84999]: 2026-02-20 08:24:00.336366985 +0000 UTC m=+0.176178008 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, build-date=2026-01-12T22:10:15Z, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=logrotate_crond, url=https://www.redhat.com, vendor=Red Hat, Inc., 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, managed_by=tripleo_ansible, version=17.1.13, batch=17.1_20260112.1, release=1766032510, description=Red Hat OpenStack Platform 17.1 cron)
Feb 20 08:24:00 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:24:00 np0005625203.localdomain podman[85002]: 2026-02-20 08:24:00.412263166 +0000 UTC m=+0.242362563 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, batch=17.1_20260112.1, release=1766032510, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, build-date=2026-01-12T23:07:30Z, version=17.1.13, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:07:30Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, distribution-scope=public, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-ipmi)
Feb 20 08:24:00 np0005625203.localdomain podman[85000]: 2026-02-20 08:24:00.424644205 +0000 UTC m=+0.260623131 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=rhosp-rhel9/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, version=17.1.13, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, io.buildah.version=1.41.5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 
'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_compute)
Feb 20 08:24:00 np0005625203.localdomain podman[85001]: 2026-02-20 08:24:00.441303704 +0000 UTC m=+0.273579708 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, version=17.1.13, io.buildah.version=1.41.5, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:07:47Z, config_id=tripleo_step4, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, url=https://www.redhat.com, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp-rhel9/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute)
Feb 20 08:24:00 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:24:00 np0005625203.localdomain podman[85002]: 2026-02-20 08:24:00.468417584 +0000 UTC m=+0.298516941 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, build-date=2026-01-12T23:07:30Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, 
description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, batch=17.1_20260112.1, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, release=1766032510, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06)
Feb 20 08:24:00 np0005625203.localdomain podman[85000]: 2026-02-20 08:24:00.483238887 +0000 UTC m=+0.319217783 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, io.openshift.expose-services=, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., batch=17.1_20260112.1, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, config_id=tripleo_step5)
Feb 20 08:24:00 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:24:00 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:24:01 np0005625203.localdomain systemd[1]: tmp-crun.n6KMqN.mount: Deactivated successfully.
Feb 20 08:24:02 np0005625203.localdomain sshd[85097]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:24:03 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:24:03 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:24:03 np0005625203.localdomain podman[85099]: 2026-02-20 08:24:03.770971608 +0000 UTC m=+0.088690993 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20260112.1, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-type=git, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, managed_by=tripleo_ansible, version=17.1.13, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, distribution-scope=public, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 
'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Feb 20 08:24:03 np0005625203.localdomain systemd[1]: tmp-crun.kbPhQ5.mount: Deactivated successfully.
Feb 20 08:24:03 np0005625203.localdomain podman[85100]: 2026-02-20 08:24:03.822953877 +0000 UTC m=+0.136103463 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, name=rhosp-rhel9/openstack-ovn-controller, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-type=git, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, version=17.1.13, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1766032510, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, build-date=2026-01-12T22:36:40Z, 
batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5)
Feb 20 08:24:03 np0005625203.localdomain podman[85099]: 2026-02-20 08:24:03.81647636 +0000 UTC m=+0.134195795 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, managed_by=tripleo_ansible, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack 
Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., build-date=2026-01-12T22:56:19Z, org.opencontainers.image.created=2026-01-12T22:56:19Z, version=17.1.13, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, architecture=x86_64, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.buildah.version=1.41.5, distribution-scope=public, release=1766032510, io.openshift.expose-services=, url=https://www.redhat.com, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:24:03 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Deactivated successfully.
Feb 20 08:24:03 np0005625203.localdomain podman[85100]: 2026-02-20 08:24:03.872996708 +0000 UTC m=+0.186146314 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2026-01-12T22:36:40Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, name=rhosp-rhel9/openstack-ovn-controller, release=1766032510, com.redhat.component=openstack-ovn-controller-container, vcs-type=git)
Feb 20 08:24:03 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Deactivated successfully.
Feb 20 08:24:04 np0005625203.localdomain sshd[85097]: Invalid user vpn from 185.246.128.171 port 8429
Feb 20 08:24:05 np0005625203.localdomain sshd[85097]: Disconnecting invalid user vpn 185.246.128.171 port 8429: Change of username or service not allowed: (vpn,ssh-connection) -> (kiosk,ssh-connection) [preauth]
Feb 20 08:24:05 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:24:05 np0005625203.localdomain podman[85148]: 2026-02-20 08:24:05.525991888 +0000 UTC m=+0.076386057 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, distribution-scope=public, architecture=x86_64, url=https://www.redhat.com, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, container_name=nova_migration_target, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute)
Feb 20 08:24:05 np0005625203.localdomain podman[85148]: 2026-02-20 08:24:05.958589196 +0000 UTC m=+0.508983405 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, managed_by=tripleo_ansible, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.41.5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, version=17.1.13, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, vcs-type=git, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']})
Feb 20 08:24:05 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:24:06 np0005625203.localdomain sshd[85170]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:24:07 np0005625203.localdomain sshd[85172]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:24:07 np0005625203.localdomain sshd[85170]: Invalid user ubuntu from 103.171.84.20 port 45246
Feb 20 08:24:08 np0005625203.localdomain sshd[85170]: Received disconnect from 103.171.84.20 port 45246:11: Bye Bye [preauth]
Feb 20 08:24:08 np0005625203.localdomain sshd[85170]: Disconnected from invalid user ubuntu 103.171.84.20 port 45246 [preauth]
Feb 20 08:24:09 np0005625203.localdomain sshd[85172]: Invalid user kiosk from 185.246.128.171 port 38773
Feb 20 08:24:09 np0005625203.localdomain sshd[85172]: Disconnecting invalid user kiosk 185.246.128.171 port 38773: Change of username or service not allowed: (kiosk,ssh-connection) -> (jack,ssh-connection) [preauth]
Feb 20 08:24:10 np0005625203.localdomain sshd[85174]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:24:13 np0005625203.localdomain sshd[85174]: Invalid user jack from 185.246.128.171 port 53498
Feb 20 08:24:15 np0005625203.localdomain sshd[85174]: Disconnecting invalid user jack 185.246.128.171 port 53498: Change of username or service not allowed: (jack,ssh-connection) -> (zhongwen,ssh-connection) [preauth]
Feb 20 08:24:17 np0005625203.localdomain sshd[85176]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:24:19 np0005625203.localdomain sshd[85176]: Invalid user zhongwen from 185.246.128.171 port 34496
Feb 20 08:24:20 np0005625203.localdomain sshd[85176]: Disconnecting invalid user zhongwen 185.246.128.171 port 34496: Change of username or service not allowed: (zhongwen,ssh-connection) -> (ventas01,ssh-connection) [preauth]
Feb 20 08:24:21 np0005625203.localdomain sshd[85178]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:24:25 np0005625203.localdomain sshd[85178]: Invalid user ventas01 from 185.246.128.171 port 34399
Feb 20 08:24:25 np0005625203.localdomain sshd[85178]: Disconnecting invalid user ventas01 185.246.128.171 port 34399: Change of username or service not allowed: (ventas01,ssh-connection) -> (fran,ssh-connection) [preauth]
Feb 20 08:24:25 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:24:25 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:24:26 np0005625203.localdomain systemd[1]: tmp-crun.0epCde.mount: Deactivated successfully.
Feb 20 08:24:26 np0005625203.localdomain podman[85181]: 2026-02-20 08:24:26.097593901 +0000 UTC m=+0.099890945 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, batch=17.1_20260112.1, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, version=17.1.13, url=https://www.redhat.com, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., distribution-scope=public, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, 
tcib_managed=true, name=rhosp-rhel9/openstack-iscsid, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, container_name=iscsid)
Feb 20 08:24:26 np0005625203.localdomain podman[85180]: 2026-02-20 08:24:26.130969652 +0000 UTC m=+0.137438214 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack 
Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, distribution-scope=public, version=17.1.13, name=rhosp-rhel9/openstack-collectd, container_name=collectd, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., architecture=x86_64, build-date=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z)
Feb 20 08:24:26 np0005625203.localdomain podman[85181]: 2026-02-20 08:24:26.160415263 +0000 UTC m=+0.162712297 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, url=https://www.redhat.com, build-date=2026-01-12T22:34:43Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-iscsid, vendor=Red Hat, Inc., io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, vcs-type=git, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:34:43Z, release=1766032510, container_name=iscsid, config_id=tripleo_step3, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true)
Feb 20 08:24:26 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:24:26 np0005625203.localdomain podman[85180]: 2026-02-20 08:24:26.22118866 +0000 UTC m=+0.227657212 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20260112.1, tcib_managed=true, vcs-type=git, io.buildah.version=1.41.5, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, container_name=collectd, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', 
'/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, architecture=x86_64, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, config_id=tripleo_step3, distribution-scope=public, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee)
Feb 20 08:24:26 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:24:27 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:24:27 np0005625203.localdomain podman[85220]: 2026-02-20 08:24:27.740936476 +0000 UTC m=+0.065666328 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, url=https://www.redhat.com, managed_by=tripleo_ansible, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-qdrouterd, release=1766032510)
Feb 20 08:24:27 np0005625203.localdomain sshd[85248]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:24:27 np0005625203.localdomain podman[85220]: 2026-02-20 08:24:27.918258469 +0000 UTC m=+0.242988291 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, 
name=rhosp-rhel9/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, architecture=x86_64, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, version=17.1.13, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team)
Feb 20 08:24:27 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:24:30 np0005625203.localdomain sshd[85248]: Invalid user fran from 185.246.128.171 port 8654
Feb 20 08:24:30 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:24:30 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:24:30 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:24:30 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:24:30 np0005625203.localdomain podman[85250]: 2026-02-20 08:24:30.771766141 +0000 UTC m=+0.086851767 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, distribution-scope=public, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, build-date=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-cron, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, 
com.redhat.component=openstack-cron-container, container_name=logrotate_crond, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., version=17.1.13)
Feb 20 08:24:30 np0005625203.localdomain podman[85250]: 2026-02-20 08:24:30.778168927 +0000 UTC m=+0.093254553 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:15Z, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.5, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, release=1766032510, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, batch=17.1_20260112.1, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-cron, tcib_managed=true)
Feb 20 08:24:30 np0005625203.localdomain sshd[85248]: Disconnecting invalid user fran 185.246.128.171 port 8654: Change of username or service not allowed: (fran,ssh-connection) -> (ftp1,ssh-connection) [preauth]
Feb 20 08:24:30 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:24:30 np0005625203.localdomain podman[85251]: 2026-02-20 08:24:30.83058343 +0000 UTC m=+0.142794098 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', 
'/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, tcib_managed=true, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, managed_by=tripleo_ansible, container_name=nova_compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step5, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe)
Feb 20 08:24:30 np0005625203.localdomain podman[85252]: 2026-02-20 08:24:30.886136919 +0000 UTC m=+0.196461539 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.13, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:07:47Z, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:07:47Z, release=1766032510, 
architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.buildah.version=1.41.5, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., config_id=tripleo_step4)
Feb 20 08:24:30 np0005625203.localdomain podman[85253]: 2026-02-20 08:24:30.930416872 +0000 UTC m=+0.237363459 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp-rhel9/openstack-ceilometer-ipmi, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, 
cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:07:30Z, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, version=17.1.13, vendor=Red Hat, Inc., io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi)
Feb 20 08:24:30 np0005625203.localdomain podman[85252]: 2026-02-20 08:24:30.938823609 +0000 UTC m=+0.249148199 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, build-date=2026-01-12T23:07:47Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-compute, io.openshift.expose-services=, container_name=ceilometer_agent_compute, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., release=1766032510, url=https://www.redhat.com, vcs-type=git, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64)
Feb 20 08:24:30 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:24:30 np0005625203.localdomain podman[85251]: 2026-02-20 08:24:30.960275706 +0000 UTC m=+0.272486344 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.buildah.version=1.41.5, release=1766032510, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step5, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, tcib_managed=true, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:32:04Z, build-date=2026-01-12T23:32:04Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container)
Feb 20 08:24:30 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:24:30 np0005625203.localdomain podman[85253]: 2026-02-20 08:24:30.986346093 +0000 UTC m=+0.293292660 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.buildah.version=1.41.5, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:30Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, tcib_managed=true, build-date=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, 
maintainer=OpenStack TripleO Team, release=1766032510, distribution-scope=public, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ceilometer-ipmi, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container)
Feb 20 08:24:30 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:24:32 np0005625203.localdomain sshd[85347]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:24:33 np0005625203.localdomain sudo[85349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:24:33 np0005625203.localdomain sudo[85349]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:24:33 np0005625203.localdomain sudo[85349]: pam_unix(sudo:session): session closed for user root
Feb 20 08:24:33 np0005625203.localdomain sudo[85364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 08:24:33 np0005625203.localdomain sudo[85364]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:24:33 np0005625203.localdomain sshd[85347]: Invalid user ftp1 from 185.246.128.171 port 30921
Feb 20 08:24:33 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:24:33 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:24:34 np0005625203.localdomain podman[85394]: 2026-02-20 08:24:34.088718415 +0000 UTC m=+0.085210647 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, build-date=2026-01-12T22:36:40Z, architecture=x86_64, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20260112.1, managed_by=tripleo_ansible, release=1766032510, container_name=ovn_controller, config_id=tripleo_step4, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ovn-controller, version=17.1.13, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, maintainer=OpenStack TripleO Team)
Feb 20 08:24:34 np0005625203.localdomain podman[85394]: 2026-02-20 08:24:34.132494144 +0000 UTC m=+0.128986356 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, url=https://www.redhat.com, build-date=2026-01-12T22:36:40Z, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:36:40Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, distribution-scope=public, architecture=x86_64)
Feb 20 08:24:34 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Deactivated successfully.
Feb 20 08:24:34 np0005625203.localdomain podman[85393]: 2026-02-20 08:24:34.153604589 +0000 UTC m=+0.153175385 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, managed_by=tripleo_ansible, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1766032510, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:56:19Z, tcib_managed=true)
Feb 20 08:24:34 np0005625203.localdomain podman[85393]: 2026-02-20 08:24:34.196838691 +0000 UTC m=+0.196409487 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:56:19Z, release=1766032510, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, tcib_managed=true, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn)
Feb 20 08:24:34 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Deactivated successfully.
Feb 20 08:24:34 np0005625203.localdomain sudo[85364]: pam_unix(sudo:session): session closed for user root
Feb 20 08:24:34 np0005625203.localdomain sudo[85455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 08:24:34 np0005625203.localdomain sudo[85455]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:24:34 np0005625203.localdomain sudo[85455]: pam_unix(sudo:session): session closed for user root
Feb 20 08:24:35 np0005625203.localdomain sshd[85347]: Disconnecting invalid user ftp1 185.246.128.171 port 30921: Change of username or service not allowed: (ftp1,ssh-connection) -> (kafka,ssh-connection) [preauth]
Feb 20 08:24:35 np0005625203.localdomain sshd[85470]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:24:36 np0005625203.localdomain sshd[85470]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:24:36 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:24:36 np0005625203.localdomain podman[85472]: 2026-02-20 08:24:36.281092148 +0000 UTC m=+0.079797880 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, build-date=2026-01-12T23:32:04Z, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, vcs-type=git, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, container_name=nova_migration_target, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, tcib_managed=true, url=https://www.redhat.com, version=17.1.13)
Feb 20 08:24:36 np0005625203.localdomain podman[85472]: 2026-02-20 08:24:36.654681774 +0000 UTC m=+0.453387496 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=nova_migration_target, release=1766032510, managed_by=tripleo_ansible, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, 
description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, architecture=x86_64, build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-nova-compute, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, tcib_managed=true)
Feb 20 08:24:36 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:24:37 np0005625203.localdomain sshd[85495]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:24:39 np0005625203.localdomain sshd[85497]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:24:39 np0005625203.localdomain sshd[85497]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:24:40 np0005625203.localdomain sshd[85495]: Invalid user kafka from 185.246.128.171 port 46717
Feb 20 08:24:41 np0005625203.localdomain sshd[85495]: Disconnecting invalid user kafka 185.246.128.171 port 46717: Change of username or service not allowed: (kafka,ssh-connection) -> (teste,ssh-connection) [preauth]
Feb 20 08:24:41 np0005625203.localdomain sshd[85499]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:24:43 np0005625203.localdomain sshd[85499]: Invalid user x from 103.200.25.162 port 59860
Feb 20 08:24:43 np0005625203.localdomain sshd[85499]: Received disconnect from 103.200.25.162 port 59860:11: Bye Bye [preauth]
Feb 20 08:24:43 np0005625203.localdomain sshd[85499]: Disconnected from invalid user x 103.200.25.162 port 59860 [preauth]
Feb 20 08:24:44 np0005625203.localdomain sshd[85501]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:24:48 np0005625203.localdomain sshd[85501]: Invalid user teste from 185.246.128.171 port 29816
Feb 20 08:24:48 np0005625203.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Feb 20 08:24:48 np0005625203.localdomain recover_tripleo_nova_virtqemud[85504]: 62505
Feb 20 08:24:48 np0005625203.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Feb 20 08:24:48 np0005625203.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Feb 20 08:24:50 np0005625203.localdomain sshd[85501]: Disconnecting invalid user teste 185.246.128.171 port 29816: Change of username or service not allowed: (teste,ssh-connection) -> (admin2,ssh-connection) [preauth]
Feb 20 08:24:54 np0005625203.localdomain sshd[85550]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:24:56 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:24:56 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:24:56 np0005625203.localdomain podman[85552]: 2026-02-20 08:24:56.784282558 +0000 UTC m=+0.101429763 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, build-date=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-collectd, container_name=collectd, url=https://www.redhat.com, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, version=17.1.13, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, maintainer=OpenStack TripleO Team)
Feb 20 08:24:56 np0005625203.localdomain podman[85552]: 2026-02-20 08:24:56.799681779 +0000 UTC m=+0.116829014 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, version=17.1.13, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, managed_by=tripleo_ansible, container_name=collectd, batch=17.1_20260112.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-collectd)
Feb 20 08:24:56 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:24:56 np0005625203.localdomain podman[85553]: 2026-02-20 08:24:56.884045169 +0000 UTC m=+0.201223855 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.component=openstack-iscsid-container, batch=17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-type=git, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, 
io.buildah.version=1.41.5, release=1766032510, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:34:43Z, vendor=Red Hat, Inc., container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64)
Feb 20 08:24:56 np0005625203.localdomain podman[85553]: 2026-02-20 08:24:56.921232066 +0000 UTC m=+0.238410782 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vendor=Red Hat, Inc., tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-type=git, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, build-date=2026-01-12T22:34:43Z, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.buildah.version=1.41.5, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, architecture=x86_64, com.redhat.component=openstack-iscsid-container, name=rhosp-rhel9/openstack-iscsid)
Feb 20 08:24:56 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:24:57 np0005625203.localdomain sshd[85550]: Invalid user admin2 from 185.246.128.171 port 42275
Feb 20 08:24:58 np0005625203.localdomain sshd[85550]: Disconnecting invalid user admin2 185.246.128.171 port 42275: Change of username or service not allowed: (admin2,ssh-connection) -> (uucp,ssh-connection) [preauth]
Feb 20 08:24:58 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:24:58 np0005625203.localdomain podman[85594]: 2026-02-20 08:24:58.766950469 +0000 UTC m=+0.079185793 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=metrics_qdr, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1766032510, 
config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, build-date=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible)
Feb 20 08:24:58 np0005625203.localdomain podman[85594]: 2026-02-20 08:24:58.95206698 +0000 UTC m=+0.264302284 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=metrics_qdr, release=1766032510, name=rhosp-rhel9/openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, version=17.1.13, distribution-scope=public, build-date=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:14Z, architecture=x86_64, tcib_managed=true, config_id=tripleo_step1, vendor=Red Hat, Inc., batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.expose-services=, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com)
Feb 20 08:24:58 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:25:00 np0005625203.localdomain sshd[85622]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:25:01 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:25:01 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:25:01 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:25:01 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:25:01 np0005625203.localdomain systemd[1]: tmp-crun.fEnLgJ.mount: Deactivated successfully.
Feb 20 08:25:01 np0005625203.localdomain podman[85625]: 2026-02-20 08:25:01.781629349 +0000 UTC m=+0.097436749 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, version=17.1.13, managed_by=tripleo_ansible, distribution-scope=public, config_id=tripleo_step5, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:32:04Z, vcs-type=git, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, vendor=Red Hat, Inc.)
Feb 20 08:25:01 np0005625203.localdomain podman[85625]: 2026-02-20 08:25:01.813410922 +0000 UTC m=+0.129218312 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, batch=17.1_20260112.1, release=1766032510, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, architecture=x86_64, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_id=tripleo_step5)
Feb 20 08:25:01 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:25:01 np0005625203.localdomain podman[85624]: 2026-02-20 08:25:01.869469466 +0000 UTC m=+0.187192235 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, name=rhosp-rhel9/openstack-cron, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.5, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, url=https://www.redhat.com, config_id=tripleo_step4, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, batch=17.1_20260112.1)
Feb 20 08:25:01 np0005625203.localdomain podman[85626]: 2026-02-20 08:25:01.839492429 +0000 UTC m=+0.152414891 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-type=git, config_id=tripleo_step4, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, distribution-scope=public, batch=17.1_20260112.1, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_compute, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, architecture=x86_64, build-date=2026-01-12T23:07:47Z, 
konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, release=1766032510, org.opencontainers.image.created=2026-01-12T23:07:47Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute)
Feb 20 08:25:01 np0005625203.localdomain podman[85624]: 2026-02-20 08:25:01.902379612 +0000 UTC m=+0.220102341 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.13, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1766032510, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z, container_name=logrotate_crond, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
cron, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, url=https://www.redhat.com, name=rhosp-rhel9/openstack-cron, batch=17.1_20260112.1)
Feb 20 08:25:01 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:25:01 np0005625203.localdomain podman[85626]: 2026-02-20 08:25:01.923307702 +0000 UTC m=+0.236230134 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, vcs-type=git, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:07:47Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, managed_by=tripleo_ansible, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:07:47Z, vendor=Red Hat, Inc., version=17.1.13, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, name=rhosp-rhel9/openstack-ceilometer-compute, release=1766032510)
Feb 20 08:25:01 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:25:01 np0005625203.localdomain podman[85627]: 2026-02-20 08:25:01.979056167 +0000 UTC m=+0.289522545 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, release=1766032510, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_ipmi, architecture=x86_64, 
distribution-scope=public, version=17.1.13, build-date=2026-01-12T23:07:30Z, vcs-type=git, batch=17.1_20260112.1, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container)
Feb 20 08:25:02 np0005625203.localdomain sshd[85622]: Invalid user uucp from 185.246.128.171 port 25066
Feb 20 08:25:02 np0005625203.localdomain podman[85627]: 2026-02-20 08:25:02.028238071 +0000 UTC m=+0.338704409 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.5, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, tcib_managed=true, url=https://www.redhat.com, distribution-scope=public, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, org.opencontainers.image.created=2026-01-12T23:07:30Z, maintainer=OpenStack TripleO Team, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=tripleo_ansible, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git)
Feb 20 08:25:02 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:25:02 np0005625203.localdomain sshd[85622]: Disconnecting invalid user uucp 185.246.128.171 port 25066: Change of username or service not allowed: (uucp,ssh-connection) -> (ds,ssh-connection) [preauth]
Feb 20 08:25:03 np0005625203.localdomain sshd[85720]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:25:04 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:25:04 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:25:04 np0005625203.localdomain podman[85722]: 2026-02-20 08:25:04.76617888 +0000 UTC m=+0.082727271 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, container_name=ovn_metadata_agent, build-date=2026-01-12T22:56:19Z, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, vcs-type=git, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container)
Feb 20 08:25:04 np0005625203.localdomain podman[85722]: 2026-02-20 08:25:04.814295422 +0000 UTC m=+0.130843843 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., url=https://www.redhat.com, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.buildah.version=1.41.5, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ovn_metadata_agent, build-date=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1766032510, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Feb 20 08:25:04 np0005625203.localdomain systemd[1]: tmp-crun.MeSgOK.mount: Deactivated successfully.
Feb 20 08:25:04 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Deactivated successfully.
Feb 20 08:25:04 np0005625203.localdomain podman[85723]: 2026-02-20 08:25:04.830862988 +0000 UTC m=+0.142520359 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.openshift.expose-services=, version=17.1.13, config_id=tripleo_step4, url=https://www.redhat.com, vendor=Red Hat, Inc., release=1766032510, managed_by=tripleo_ansible, distribution-scope=public, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.buildah.version=1.41.5, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, 
name=rhosp-rhel9/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2026-01-12T22:36:40Z, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:25:04 np0005625203.localdomain podman[85723]: 2026-02-20 08:25:04.914350882 +0000 UTC m=+0.226008273 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, batch=17.1_20260112.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp-rhel9/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, version=17.1.13, build-date=2026-01-12T22:36:40Z, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., distribution-scope=public, url=https://www.redhat.com, config_id=tripleo_step4, container_name=ovn_controller)
Feb 20 08:25:04 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Deactivated successfully.
Feb 20 08:25:07 np0005625203.localdomain sshd[85720]: Invalid user ds from 185.246.128.171 port 21889
Feb 20 08:25:07 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:25:07 np0005625203.localdomain podman[85770]: 2026-02-20 08:25:07.291411184 +0000 UTC m=+0.075905413 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, build-date=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=nova_migration_target, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute)
Feb 20 08:25:07 np0005625203.localdomain sshd[85720]: Disconnecting invalid user ds 185.246.128.171 port 21889: Change of username or service not allowed: (ds,ssh-connection) -> (fusion,ssh-connection) [preauth]
Feb 20 08:25:07 np0005625203.localdomain podman[85770]: 2026-02-20 08:25:07.618114315 +0000 UTC m=+0.402608544 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, vendor=Red Hat, Inc., build-date=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute)
Feb 20 08:25:07 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:25:09 np0005625203.localdomain sshd[85792]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:25:11 np0005625203.localdomain sshd[85792]: Invalid user fusion from 185.246.128.171 port 15875
Feb 20 08:25:12 np0005625203.localdomain sshd[85792]: Disconnecting invalid user fusion 185.246.128.171 port 15875: Change of username or service not allowed: (fusion,ssh-connection) -> (download,ssh-connection) [preauth]
Feb 20 08:25:14 np0005625203.localdomain sshd[85794]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:25:18 np0005625203.localdomain sshd[85794]: Invalid user download from 185.246.128.171 port 34759
Feb 20 08:25:19 np0005625203.localdomain sshd[85794]: Disconnecting invalid user download 185.246.128.171 port 34759: Change of username or service not allowed: (download,ssh-connection) -> (csgo,ssh-connection) [preauth]
Feb 20 08:25:19 np0005625203.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Feb 20 08:25:19 np0005625203.localdomain recover_tripleo_nova_virtqemud[85797]: 62505
Feb 20 08:25:19 np0005625203.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Feb 20 08:25:19 np0005625203.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Feb 20 08:25:21 np0005625203.localdomain sshd[85798]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:25:24 np0005625203.localdomain sshd[85798]: Invalid user csgo from 185.246.128.171 port 6862
Feb 20 08:25:24 np0005625203.localdomain sshd[85798]: Disconnecting invalid user csgo 185.246.128.171 port 6862: Change of username or service not allowed: (csgo,ssh-connection) -> (plex,ssh-connection) [preauth]
Feb 20 08:25:25 np0005625203.localdomain sshd[85800]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:25:25 np0005625203.localdomain sshd[85800]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:25:26 np0005625203.localdomain sshd[85802]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:25:27 np0005625203.localdomain sshd[85802]: Invalid user plex from 185.246.128.171 port 7357
Feb 20 08:25:27 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:25:27 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:25:27 np0005625203.localdomain podman[85804]: 2026-02-20 08:25:27.23210881 +0000 UTC m=+0.083119702 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, tcib_managed=true, io.buildah.version=1.41.5, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, release=1766032510, vcs-type=git, distribution-scope=public, url=https://www.redhat.com, name=rhosp-rhel9/openstack-collectd, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:25:27 np0005625203.localdomain podman[85804]: 2026-02-20 08:25:27.242064355 +0000 UTC m=+0.093075247 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, vcs-type=git, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., version=17.1.13, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-collectd, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z)
Feb 20 08:25:27 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:25:27 np0005625203.localdomain systemd[1]: tmp-crun.m8oucb.mount: Deactivated successfully.
Feb 20 08:25:27 np0005625203.localdomain podman[85805]: 2026-02-20 08:25:27.338788933 +0000 UTC m=+0.189033342 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, batch=17.1_20260112.1, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, tcib_managed=true, config_id=tripleo_step3, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:34:43Z, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.5, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2026-01-12T22:34:43Z, container_name=iscsid, url=https://www.redhat.com, name=rhosp-rhel9/openstack-iscsid)
Feb 20 08:25:27 np0005625203.localdomain sshd[85802]: Disconnecting invalid user plex 185.246.128.171 port 7357: Change of username or service not allowed: (plex,ssh-connection) -> (linaro,ssh-connection) [preauth]
Feb 20 08:25:27 np0005625203.localdomain podman[85805]: 2026-02-20 08:25:27.3772709 +0000 UTC m=+0.227515299 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:34:43Z, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, name=rhosp-rhel9/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:34:43Z, io.openshift.expose-services=, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.5, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64)
Feb 20 08:25:27 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:25:28 np0005625203.localdomain sshd[85845]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:25:29 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:25:29 np0005625203.localdomain podman[85847]: 2026-02-20 08:25:29.774115797 +0000 UTC m=+0.088253530 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.created=2026-01-12T22:10:14Z, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, url=https://www.redhat.com, distribution-scope=public, container_name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, build-date=2026-01-12T22:10:14Z, name=rhosp-rhel9/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step1, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc.)
Feb 20 08:25:30 np0005625203.localdomain podman[85847]: 2026-02-20 08:25:30.002363697 +0000 UTC m=+0.316501400 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:14Z, container_name=metrics_qdr, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., vcs-type=git, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=)
Feb 20 08:25:30 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:25:31 np0005625203.localdomain sshd[85845]: Invalid user linaro from 185.246.128.171 port 56963
Feb 20 08:25:32 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:25:32 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:25:32 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:25:32 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:25:32 np0005625203.localdomain podman[85876]: 2026-02-20 08:25:32.778312657 +0000 UTC m=+0.093829560 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, vendor=Red Hat, Inc., release=1766032510, version=17.1.13, name=rhosp-rhel9/openstack-cron, url=https://www.redhat.com, config_id=tripleo_step4, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, io.openshift.expose-services=, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 cron)
Feb 20 08:25:32 np0005625203.localdomain podman[85876]: 2026-02-20 08:25:32.785819876 +0000 UTC m=+0.101336769 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, container_name=logrotate_crond, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 cron, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, architecture=x86_64, managed_by=tripleo_ansible, config_id=tripleo_step4, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, name=rhosp-rhel9/openstack-cron, io.buildah.version=1.41.5, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron)
Feb 20 08:25:32 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:25:32 np0005625203.localdomain podman[85877]: 2026-02-20 08:25:32.820082674 +0000 UTC m=+0.133280177 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.component=openstack-nova-compute-container, release=1766032510, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step5, name=rhosp-rhel9/openstack-nova-compute, version=17.1.13, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, batch=17.1_20260112.1, vendor=Red Hat, Inc., io.buildah.version=1.41.5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team)
Feb 20 08:25:32 np0005625203.localdomain podman[85878]: 2026-02-20 08:25:32.887101773 +0000 UTC m=+0.196309414 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, release=1766032510, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:07:47Z, build-date=2026-01-12T23:07:47Z, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.13, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, io.openshift.expose-services=, io.buildah.version=1.41.5, vcs-type=git, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-compute)
Feb 20 08:25:32 np0005625203.localdomain podman[85878]: 2026-02-20 08:25:32.918683109 +0000 UTC m=+0.227890750 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, build-date=2026-01-12T23:07:47Z, vcs-type=git, architecture=x86_64, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, name=rhosp-rhel9/openstack-ceilometer-compute, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., batch=17.1_20260112.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, io.buildah.version=1.41.5, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute)
Feb 20 08:25:32 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:25:32 np0005625203.localdomain podman[85879]: 2026-02-20 08:25:32.937052931 +0000 UTC m=+0.242112475 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:07:30Z, build-date=2026-01-12T23:07:30Z, managed_by=tripleo_ansible, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, 
config_id=tripleo_step4, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Feb 20 08:25:32 np0005625203.localdomain podman[85877]: 2026-02-20 08:25:32.9550202 +0000 UTC m=+0.268217703 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vendor=Red Hat, Inc., io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, container_name=nova_compute, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.expose-services=, release=1766032510, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com)
Feb 20 08:25:32 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:25:32 np0005625203.localdomain sshd[85845]: Connection closed by invalid user linaro 185.246.128.171 port 56963 [preauth]
Feb 20 08:25:33 np0005625203.localdomain podman[85879]: 2026-02-20 08:25:32.99981293 +0000 UTC m=+0.304872474 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-type=git, io.buildah.version=1.41.5, release=1766032510, container_name=ceilometer_agent_ipmi, 
org.opencontainers.image.created=2026-01-12T23:07:30Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:07:30Z, config_id=tripleo_step4)
Feb 20 08:25:33 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:25:35 np0005625203.localdomain sudo[85975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:25:35 np0005625203.localdomain sudo[85975]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:25:35 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:25:35 np0005625203.localdomain sudo[85975]: pam_unix(sudo:session): session closed for user root
Feb 20 08:25:35 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:25:35 np0005625203.localdomain sudo[85992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 08:25:35 np0005625203.localdomain sudo[85992]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:25:35 np0005625203.localdomain podman[85990]: 2026-02-20 08:25:35.265127655 +0000 UTC m=+0.081962749 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, version=17.1.13, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, build-date=2026-01-12T22:56:19Z, org.opencontainers.image.created=2026-01-12T22:56:19Z, release=1766032510, distribution-scope=public, batch=17.1_20260112.1, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']})
Feb 20 08:25:35 np0005625203.localdomain podman[85991]: 2026-02-20 08:25:35.322803738 +0000 UTC m=+0.134519385 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, distribution-scope=public, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, architecture=x86_64, build-date=2026-01-12T22:36:40Z, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, version=17.1.13, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, tcib_managed=true, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller)
Feb 20 08:25:35 np0005625203.localdomain podman[85990]: 2026-02-20 08:25:35.33136884 +0000 UTC m=+0.148203964 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, architecture=x86_64, url=https://www.redhat.com, io.openshift.expose-services=, vcs-type=git)
Feb 20 08:25:35 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Deactivated successfully.
Feb 20 08:25:35 np0005625203.localdomain podman[85991]: 2026-02-20 08:25:35.370243549 +0000 UTC m=+0.181959186 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, version=17.1.13, name=rhosp-rhel9/openstack-ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.5, tcib_managed=true, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ovn_controller, distribution-scope=public, build-date=2026-01-12T22:36:40Z, managed_by=tripleo_ansible, url=https://www.redhat.com)
Feb 20 08:25:35 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Deactivated successfully.
Feb 20 08:25:35 np0005625203.localdomain sudo[85992]: pam_unix(sudo:session): session closed for user root
Feb 20 08:25:36 np0005625203.localdomain sudo[86081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 08:25:36 np0005625203.localdomain sudo[86081]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:25:36 np0005625203.localdomain sudo[86081]: pam_unix(sudo:session): session closed for user root
Feb 20 08:25:37 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:25:37 np0005625203.localdomain podman[86096]: 2026-02-20 08:25:37.773601645 +0000 UTC m=+0.088166167 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., container_name=nova_migration_target, url=https://www.redhat.com, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, 
cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, build-date=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step4)
Feb 20 08:25:38 np0005625203.localdomain podman[86096]: 2026-02-20 08:25:38.171377159 +0000 UTC m=+0.485941681 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, container_name=nova_migration_target, version=17.1.13, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, build-date=2026-01-12T23:32:04Z, vcs-type=git, com.redhat.component=openstack-nova-compute-container, release=1766032510, io.k8s.description=Red Hat 
OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, tcib_managed=true, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:25:38 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:25:46 np0005625203.localdomain sshd[86120]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:25:47 np0005625203.localdomain sshd[86120]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:25:56 np0005625203.localdomain sshd[86167]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:25:56 np0005625203.localdomain sshd[86167]: Invalid user help from 147.135.114.8 port 37836
Feb 20 08:25:56 np0005625203.localdomain sshd[86167]: Received disconnect from 147.135.114.8 port 37836:11: Bye Bye [preauth]
Feb 20 08:25:56 np0005625203.localdomain sshd[86167]: Disconnected from invalid user help 147.135.114.8 port 37836 [preauth]
Feb 20 08:25:57 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:25:57 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:25:57 np0005625203.localdomain systemd[1]: Starting dnf makecache...
Feb 20 08:25:57 np0005625203.localdomain podman[86169]: 2026-02-20 08:25:57.774315499 +0000 UTC m=+0.090152907 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20260112.1, version=17.1.13, build-date=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, url=https://www.redhat.com, container_name=collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, com.redhat.component=openstack-collectd-container)
Feb 20 08:25:57 np0005625203.localdomain podman[86169]: 2026-02-20 08:25:57.791234547 +0000 UTC m=+0.107071905 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, release=1766032510, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, config_id=tripleo_step3, name=rhosp-rhel9/openstack-collectd, version=17.1.13)
Feb 20 08:25:57 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:25:57 np0005625203.localdomain systemd[1]: tmp-crun.TRXju7.mount: Deactivated successfully.
Feb 20 08:25:57 np0005625203.localdomain podman[86170]: 2026-02-20 08:25:57.899781756 +0000 UTC m=+0.209924890 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, managed_by=tripleo_ansible, architecture=x86_64, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:34:43Z, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, batch=17.1_20260112.1, io.buildah.version=1.41.5, vendor=Red Hat, Inc., vcs-ref=705339545363fec600102567c4e923938e0f43b3, name=rhosp-rhel9/openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, config_id=tripleo_step3)
Feb 20 08:25:57 np0005625203.localdomain podman[86170]: 2026-02-20 08:25:57.911296239 +0000 UTC m=+0.221439423 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_id=tripleo_step3, container_name=iscsid, tcib_managed=true, batch=17.1_20260112.1, build-date=2026-01-12T22:34:43Z, release=1766032510, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, name=rhosp-rhel9/openstack-iscsid, url=https://www.redhat.com, vcs-type=git, io.openshift.expose-services=, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team)
Feb 20 08:25:57 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:25:57 np0005625203.localdomain dnf[86171]: Updating Subscription Management repositories.
Feb 20 08:25:59 np0005625203.localdomain dnf[86171]: Metadata cache refreshed recently.
Feb 20 08:25:59 np0005625203.localdomain systemd[1]: dnf-makecache.service: Deactivated successfully.
Feb 20 08:25:59 np0005625203.localdomain systemd[1]: Finished dnf makecache.
Feb 20 08:25:59 np0005625203.localdomain systemd[1]: dnf-makecache.service: Consumed 2.121s CPU time.
Feb 20 08:26:00 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:26:00 np0005625203.localdomain systemd[1]: tmp-crun.bLXuvO.mount: Deactivated successfully.
Feb 20 08:26:00 np0005625203.localdomain podman[86209]: 2026-02-20 08:26:00.78408505 +0000 UTC m=+0.097791622 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=metrics_qdr, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, version=17.1.13, build-date=2026-01-12T22:10:14Z, 
konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, release=1766032510, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, io.openshift.expose-services=)
Feb 20 08:26:01 np0005625203.localdomain podman[86209]: 2026-02-20 08:26:01.001385426 +0000 UTC m=+0.315091988 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, url=https://www.redhat.com, vcs-type=git, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=metrics_qdr, distribution-scope=public, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2026-01-12T22:10:14Z, architecture=x86_64)
Feb 20 08:26:01 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:26:03 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:26:03 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:26:03 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:26:03 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:26:03 np0005625203.localdomain podman[86238]: 2026-02-20 08:26:03.770855757 +0000 UTC m=+0.080837573 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-cron-container, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.buildah.version=1.41.5, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1766032510, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat 
OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, config_id=tripleo_step4, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:26:03 np0005625203.localdomain podman[86242]: 2026-02-20 08:26:03.844546511 +0000 UTC m=+0.142289692 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true, io.openshift.expose-services=, batch=17.1_20260112.1, container_name=ceilometer_agent_ipmi, release=1766032510, config_id=tripleo_step4, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.13, io.buildah.version=1.41.5, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp-rhel9/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z)
Feb 20 08:26:03 np0005625203.localdomain podman[86239]: 2026-02-20 08:26:03.818355189 +0000 UTC m=+0.124453836 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:32:04Z, batch=17.1_20260112.1, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, io.buildah.version=1.41.5, container_name=nova_compute, name=rhosp-rhel9/openstack-nova-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:32:04Z, release=1766032510, vcs-type=git, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe)
Feb 20 08:26:03 np0005625203.localdomain podman[86240]: 2026-02-20 08:26:03.885718279 +0000 UTC m=+0.189172675 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, version=17.1.13, release=1766032510, build-date=2026-01-12T23:07:47Z, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:07:47Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, vcs-type=git, distribution-scope=public, tcib_managed=true, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Feb 20 08:26:03 np0005625203.localdomain podman[86242]: 2026-02-20 08:26:03.897327464 +0000 UTC m=+0.195070595 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:07:30Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, version=17.1.13, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, tcib_managed=true, vcs-type=git, name=rhosp-rhel9/openstack-ceilometer-ipmi, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:26:03 np0005625203.localdomain podman[86238]: 2026-02-20 08:26:03.910467767 +0000 UTC m=+0.220449583 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.41.5, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, name=rhosp-rhel9/openstack-cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, vcs-type=git, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., managed_by=tripleo_ansible)
Feb 20 08:26:03 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:26:03 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:26:03 np0005625203.localdomain podman[86239]: 2026-02-20 08:26:03.952123911 +0000 UTC m=+0.258222608 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, container_name=nova_compute, managed_by=tripleo_ansible, version=17.1.13, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-type=git)
Feb 20 08:26:03 np0005625203.localdomain podman[86240]: 2026-02-20 08:26:03.962819728 +0000 UTC m=+0.266274154 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, batch=17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, vendor=Red Hat, Inc., build-date=2026-01-12T23:07:47Z, io.buildah.version=1.41.5, container_name=ceilometer_agent_compute, distribution-scope=public, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:07:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp-rhel9/openstack-ceilometer-compute, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=)
Feb 20 08:26:03 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:26:03 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:26:05 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:26:05 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:26:05 np0005625203.localdomain podman[86338]: 2026-02-20 08:26:05.769370493 +0000 UTC m=+0.083506225 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:36:40Z, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.expose-services=, build-date=2026-01-12T22:36:40Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.buildah.version=1.41.5, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, vcs-type=git, 
version=17.1.13, architecture=x86_64, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:26:05 np0005625203.localdomain podman[86338]: 2026-02-20 08:26:05.824330553 +0000 UTC m=+0.138466235 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, distribution-scope=public, name=rhosp-rhel9/openstack-ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2026-01-12T22:36:40Z, vcs-type=git, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, 
org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, architecture=x86_64, container_name=ovn_controller, url=https://www.redhat.com, vendor=Red Hat, Inc.)
Feb 20 08:26:05 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Deactivated successfully.
Feb 20 08:26:05 np0005625203.localdomain podman[86337]: 2026-02-20 08:26:05.825379546 +0000 UTC m=+0.139525228 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, tcib_managed=true, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:56:19Z, build-date=2026-01-12T22:56:19Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20260112.1, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, managed_by=tripleo_ansible, release=1766032510, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step4)
Feb 20 08:26:05 np0005625203.localdomain podman[86337]: 2026-02-20 08:26:05.909389605 +0000 UTC m=+0.223535297 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-neutron-metadata-agent-ovn, distribution-scope=public, build-date=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, batch=17.1_20260112.1, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, version=17.1.13, config_id=tripleo_step4, io.buildah.version=1.41.5, url=https://www.redhat.com, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:26:05 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Deactivated successfully.
Feb 20 08:26:08 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:26:08 np0005625203.localdomain systemd[1]: tmp-crun.2I3g5D.mount: Deactivated successfully.
Feb 20 08:26:08 np0005625203.localdomain podman[86385]: 2026-02-20 08:26:08.765708933 +0000 UTC m=+0.087677192 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, architecture=x86_64, vcs-type=git, name=rhosp-rhel9/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_migration_target, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, release=1766032510, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Feb 20 08:26:09 np0005625203.localdomain podman[86385]: 2026-02-20 08:26:09.107532726 +0000 UTC m=+0.429500995 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, container_name=nova_migration_target, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, name=rhosp-rhel9/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, release=1766032510, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.created=2026-01-12T23:32:04Z, batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com)
Feb 20 08:26:09 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:26:11 np0005625203.localdomain sshd[86407]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:26:11 np0005625203.localdomain sshd[86407]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:26:28 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:26:28 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:26:28 np0005625203.localdomain systemd[1]: tmp-crun.atbZwF.mount: Deactivated successfully.
Feb 20 08:26:28 np0005625203.localdomain podman[86410]: 2026-02-20 08:26:28.788246725 +0000 UTC m=+0.103554758 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, vendor=Red Hat, Inc., version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat 
OpenStack Platform 17.1 iscsid, architecture=x86_64, vcs-type=git, release=1766032510, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-iscsid, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, managed_by=tripleo_ansible, build-date=2026-01-12T22:34:43Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=)
Feb 20 08:26:28 np0005625203.localdomain podman[86409]: 2026-02-20 08:26:28.83092194 +0000 UTC m=+0.146852262 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, url=https://www.redhat.com, io.openshift.expose-services=, release=1766032510, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-collectd-container, vcs-type=git, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-collectd, batch=17.1_20260112.1)
Feb 20 08:26:28 np0005625203.localdomain podman[86410]: 2026-02-20 08:26:28.868498949 +0000 UTC m=+0.183807022 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vendor=Red Hat, Inc., container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.created=2026-01-12T22:34:43Z, architecture=x86_64, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=705339545363fec600102567c4e923938e0f43b3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp-rhel9/openstack-iscsid)
Feb 20 08:26:28 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:26:28 np0005625203.localdomain podman[86409]: 2026-02-20 08:26:28.89237723 +0000 UTC m=+0.208307602 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=1766032510, io.openshift.expose-services=, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, batch=17.1_20260112.1, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, vcs-type=git, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, vendor=Red Hat, Inc., io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee)
Feb 20 08:26:28 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:26:31 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:26:31 np0005625203.localdomain podman[86449]: 2026-02-20 08:26:31.764929253 +0000 UTC m=+0.079520122 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, tcib_managed=true, build-date=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, version=17.1.13, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO 
Team, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, io.buildah.version=1.41.5, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, name=rhosp-rhel9/openstack-qdrouterd)
Feb 20 08:26:31 np0005625203.localdomain podman[86449]: 2026-02-20 08:26:31.986452348 +0000 UTC m=+0.301043237 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, url=https://www.redhat.com, build-date=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, 
cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=metrics_qdr, vcs-type=git, config_id=tripleo_step1, tcib_managed=true, version=17.1.13, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, architecture=x86_64, batch=17.1_20260112.1, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd)
Feb 20 08:26:31 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:26:34 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:26:34 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:26:34 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:26:34 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:26:34 np0005625203.localdomain systemd[1]: tmp-crun.EktKR0.mount: Deactivated successfully.
Feb 20 08:26:34 np0005625203.localdomain podman[86479]: 2026-02-20 08:26:34.773165298 +0000 UTC m=+0.084470514 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, build-date=2026-01-12T22:10:15Z, version=17.1.13, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, name=rhosp-rhel9/openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, 
com.redhat.component=openstack-cron-container, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, batch=17.1_20260112.1, io.buildah.version=1.41.5, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron)
Feb 20 08:26:34 np0005625203.localdomain podman[86479]: 2026-02-20 08:26:34.779476331 +0000 UTC m=+0.090781607 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=logrotate_crond, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.13, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 cron)
Feb 20 08:26:34 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:26:34 np0005625203.localdomain podman[86486]: 2026-02-20 08:26:34.795041217 +0000 UTC m=+0.090561401 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, version=17.1.13, tcib_managed=true, build-date=2026-01-12T23:07:47Z, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, container_name=ceilometer_agent_compute, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team)
Feb 20 08:26:34 np0005625203.localdomain podman[86492]: 2026-02-20 08:26:34.830424218 +0000 UTC m=+0.122806705 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp-rhel9/openstack-ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, config_id=tripleo_step4, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, vcs-type=git, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:30Z, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1)
Feb 20 08:26:34 np0005625203.localdomain podman[86492]: 2026-02-20 08:26:34.879247122 +0000 UTC m=+0.171629629 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, distribution-scope=public, 
url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, config_id=tripleo_step4, io.buildah.version=1.41.5, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp-rhel9/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., io.openshift.expose-services=)
Feb 20 08:26:34 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:26:34 np0005625203.localdomain podman[86486]: 2026-02-20 08:26:34.895964823 +0000 UTC m=+0.191484997 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.buildah.version=1.41.5, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp-rhel9/openstack-ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, 
com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, version=17.1.13, batch=17.1_20260112.1, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:26:34 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:26:34 np0005625203.localdomain podman[86480]: 2026-02-20 08:26:34.880073767 +0000 UTC m=+0.186127152 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, url=https://www.redhat.com, container_name=nova_compute, version=17.1.13, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp-rhel9/openstack-nova-compute, build-date=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, release=1766032510, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:26:34 np0005625203.localdomain podman[86480]: 2026-02-20 08:26:34.965200601 +0000 UTC m=+0.271254006 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, batch=17.1_20260112.1, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, release=1766032510, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:32:04Z, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, config_id=tripleo_step5, build-date=2026-01-12T23:32:04Z, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Feb 20 08:26:34 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:26:35 np0005625203.localdomain systemd[1]: tmp-crun.MeCUCt.mount: Deactivated successfully.
Feb 20 08:26:36 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:26:36 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:26:36 np0005625203.localdomain sudo[86575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:26:36 np0005625203.localdomain sudo[86575]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:26:36 np0005625203.localdomain sudo[86575]: pam_unix(sudo:session): session closed for user root
Feb 20 08:26:36 np0005625203.localdomain podman[86588]: 2026-02-20 08:26:36.785085043 +0000 UTC m=+0.100045751 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, version=17.1.13, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://www.redhat.com, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, distribution-scope=public, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, release=1766032510, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, io.openshift.expose-services=)
Feb 20 08:26:36 np0005625203.localdomain sudo[86608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 08:26:36 np0005625203.localdomain sudo[86608]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:26:36 np0005625203.localdomain podman[86589]: 2026-02-20 08:26:36.838269259 +0000 UTC m=+0.148487022 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.5, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, version=17.1.13, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, 
managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:36:40Z, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, distribution-scope=public, batch=17.1_20260112.1, build-date=2026-01-12T22:36:40Z)
Feb 20 08:26:36 np0005625203.localdomain podman[86588]: 2026-02-20 08:26:36.860319713 +0000 UTC m=+0.175280431 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, tcib_managed=true, vendor=Red Hat, Inc., batch=17.1_20260112.1, io.buildah.version=1.41.5, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, release=1766032510, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, vcs-type=git)
Feb 20 08:26:36 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Deactivated successfully.
Feb 20 08:26:36 np0005625203.localdomain podman[86589]: 2026-02-20 08:26:36.893357554 +0000 UTC m=+0.203575297 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, release=1766032510, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, version=17.1.13, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, 
distribution-scope=public, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.5, container_name=ovn_controller, build-date=2026-01-12T22:36:40Z)
Feb 20 08:26:36 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Deactivated successfully.
Feb 20 08:26:37 np0005625203.localdomain sudo[86608]: pam_unix(sudo:session): session closed for user root
Feb 20 08:26:38 np0005625203.localdomain sudo[86682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 08:26:38 np0005625203.localdomain sudo[86682]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:26:38 np0005625203.localdomain sudo[86682]: pam_unix(sudo:session): session closed for user root
Feb 20 08:26:39 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:26:39 np0005625203.localdomain podman[86697]: 2026-02-20 08:26:39.775279584 +0000 UTC m=+0.088214409 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., batch=17.1_20260112.1, io.buildah.version=1.41.5, vcs-type=git, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510)
Feb 20 08:26:40 np0005625203.localdomain podman[86697]: 2026-02-20 08:26:40.191591725 +0000 UTC m=+0.504526490 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, name=rhosp-rhel9/openstack-nova-compute, container_name=nova_migration_target, 
build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, config_id=tripleo_step4, version=17.1.13, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:26:40 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:26:57 np0005625203.localdomain sshd[86765]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:26:57 np0005625203.localdomain sshd[86766]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:26:57 np0005625203.localdomain sshd[86766]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:26:58 np0005625203.localdomain sshd[86765]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:26:59 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:26:59 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:26:59 np0005625203.localdomain systemd[1]: tmp-crun.OLTFao.mount: Deactivated successfully.
Feb 20 08:26:59 np0005625203.localdomain podman[86770]: 2026-02-20 08:26:59.839652998 +0000 UTC m=+0.147915205 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=705339545363fec600102567c4e923938e0f43b3, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-iscsid, version=17.1.13, batch=17.1_20260112.1, config_id=tripleo_step3, distribution-scope=public, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.buildah.version=1.41.5, vendor=Red Hat, Inc., architecture=x86_64, release=1766032510, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, vcs-type=git)
Feb 20 08:26:59 np0005625203.localdomain podman[86769]: 2026-02-20 08:26:59.813612159 +0000 UTC m=+0.126676095 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1766032510, vendor=Red 
Hat, Inc., io.openshift.expose-services=, vcs-type=git, batch=17.1_20260112.1, distribution-scope=public, name=rhosp-rhel9/openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, architecture=x86_64, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, container_name=collectd, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-collectd-container, org.opencontainers.image.created=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13)
Feb 20 08:26:59 np0005625203.localdomain podman[86770]: 2026-02-20 08:26:59.87741456 +0000 UTC m=+0.185676807 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, architecture=x86_64, managed_by=tripleo_ansible, vcs-ref=705339545363fec600102567c4e923938e0f43b3, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.buildah.version=1.41.5, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, name=rhosp-rhel9/openstack-iscsid, version=17.1.13, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:26:59 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:26:59 np0005625203.localdomain podman[86769]: 2026-02-20 08:26:59.898421623 +0000 UTC m=+0.211485569 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step3, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:26:59 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:27:02 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:27:03 np0005625203.localdomain systemd[1]: tmp-crun.kZteAz.mount: Deactivated successfully.
Feb 20 08:27:03 np0005625203.localdomain podman[86806]: 2026-02-20 08:27:03.470773333 +0000 UTC m=+0.791856071 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, version=17.1.13, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 
qdrouterd, vcs-type=git, io.buildah.version=1.41.5, url=https://www.redhat.com, vendor=Red Hat, Inc., release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, managed_by=tripleo_ansible, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:27:03 np0005625203.localdomain podman[86806]: 2026-02-20 08:27:03.671333811 +0000 UTC m=+0.992416509 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, build-date=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, 
cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-qdrouterd, io.openshift.expose-services=, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=metrics_qdr, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, release=1766032510, config_id=tripleo_step1)
Feb 20 08:27:03 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:27:05 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:27:05 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:27:05 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:27:05 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:27:05 np0005625203.localdomain podman[86839]: 2026-02-20 08:27:05.790040172 +0000 UTC m=+0.090593453 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:07:30Z, url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.13, architecture=x86_64, release=1766032510, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4)
Feb 20 08:27:05 np0005625203.localdomain systemd[1]: tmp-crun.dKI3pn.mount: Deactivated successfully.
Feb 20 08:27:05 np0005625203.localdomain podman[86837]: 2026-02-20 08:27:05.840733787 +0000 UTC m=+0.147185301 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, build-date=2026-01-12T23:07:47Z, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, architecture=x86_64, name=rhosp-rhel9/openstack-ceilometer-compute, io.buildah.version=1.41.5, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, container_name=ceilometer_agent_compute, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git)
Feb 20 08:27:05 np0005625203.localdomain podman[86839]: 2026-02-20 08:27:05.847570038 +0000 UTC m=+0.148123239 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, architecture=x86_64, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ceilometer-ipmi, 
managed_by=tripleo_ansible, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.buildah.version=1.41.5, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, maintainer=OpenStack TripleO Team, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=)
Feb 20 08:27:05 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:27:05 np0005625203.localdomain podman[86837]: 2026-02-20 08:27:05.879215952 +0000 UTC m=+0.185667496 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, build-date=2026-01-12T23:07:47Z, release=1766032510, architecture=x86_64, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 
'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Feb 20 08:27:05 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:27:05 np0005625203.localdomain podman[86836]: 2026-02-20 08:27:05.93101227 +0000 UTC m=+0.240028364 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, release=1766032510, config_id=tripleo_step5, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.created=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe)
Feb 20 08:27:05 np0005625203.localdomain podman[86836]: 2026-02-20 08:27:05.962248159 +0000 UTC m=+0.271264293 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, config_id=tripleo_step5, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, distribution-scope=public, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.13, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-compute, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z)
Feb 20 08:27:05 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:27:05 np0005625203.localdomain podman[86835]: 2026-02-20 08:27:05.98159136 +0000 UTC m=+0.293775383 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, container_name=logrotate_crond, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-cron, distribution-scope=public, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, tcib_managed=true, 
com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, batch=17.1_20260112.1)
Feb 20 08:27:06 np0005625203.localdomain podman[86835]: 2026-02-20 08:27:06.01733023 +0000 UTC m=+0.329514253 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, release=1766032510, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, vcs-type=git, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:15Z, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, name=rhosp-rhel9/openstack-cron, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team)
Feb 20 08:27:06 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:27:07 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:27:07 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:27:07 np0005625203.localdomain podman[86928]: 2026-02-20 08:27:07.772080568 +0000 UTC m=+0.083883064 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, version=17.1.13, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, architecture=x86_64, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, release=1766032510, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:36:40Z, batch=17.1_20260112.1, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_id=tripleo_step4, distribution-scope=public, name=rhosp-rhel9/openstack-ovn-controller, build-date=2026-01-12T22:36:40Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, 
com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:27:07 np0005625203.localdomain podman[86927]: 2026-02-20 08:27:07.749511668 +0000 UTC m=+0.066482405 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, container_name=ovn_metadata_agent, config_id=tripleo_step4, url=https://www.redhat.com, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:56:19Z, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, release=1766032510, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., build-date=2026-01-12T22:56:19Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:27:07 np0005625203.localdomain podman[86928]: 2026-02-20 08:27:07.80044839 +0000 UTC m=+0.112250816 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://www.redhat.com, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, build-date=2026-01-12T22:36:40Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, architecture=x86_64, name=rhosp-rhel9/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, 
org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, release=1766032510, org.opencontainers.image.created=2026-01-12T22:36:40Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5)
Feb 20 08:27:07 np0005625203.localdomain podman[86927]: 2026-02-20 08:27:07.834310331 +0000 UTC m=+0.151281088 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.buildah.version=1.41.5, managed_by=tripleo_ansible, batch=17.1_20260112.1, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., release=1766032510, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, io.openshift.expose-services=)
Feb 20 08:27:07 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Deactivated successfully.
Feb 20 08:27:07 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Deactivated successfully.
Feb 20 08:27:10 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:27:10 np0005625203.localdomain podman[86977]: 2026-02-20 08:27:10.756946986 +0000 UTC m=+0.076507257 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_migration_target, name=rhosp-rhel9/openstack-nova-compute, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step4, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, distribution-scope=public, architecture=x86_64, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.13, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20260112.1, vendor=Red Hat, Inc., url=https://www.redhat.com, release=1766032510)
Feb 20 08:27:10 np0005625203.localdomain sshd[87001]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:27:11 np0005625203.localdomain podman[86977]: 2026-02-20 08:27:11.134264543 +0000 UTC m=+0.453824864 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, batch=17.1_20260112.1, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, url=https://www.redhat.com, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1766032510, version=17.1.13, managed_by=tripleo_ansible, config_id=tripleo_step4, build-date=2026-01-12T23:32:04Z, distribution-scope=public, name=rhosp-rhel9/openstack-nova-compute)
Feb 20 08:27:11 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:27:11 np0005625203.localdomain sshd[87001]: Invalid user c from 189.190.2.14 port 51600
Feb 20 08:27:11 np0005625203.localdomain sshd[87001]: Received disconnect from 189.190.2.14 port 51600:11: Bye Bye [preauth]
Feb 20 08:27:11 np0005625203.localdomain sshd[87001]: Disconnected from invalid user c 189.190.2.14 port 51600 [preauth]
Feb 20 08:27:29 np0005625203.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Feb 20 08:27:29 np0005625203.localdomain recover_tripleo_nova_virtqemud[87004]: 62505
Feb 20 08:27:29 np0005625203.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Feb 20 08:27:29 np0005625203.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Feb 20 08:27:30 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:27:30 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:27:30 np0005625203.localdomain podman[87006]: 2026-02-20 08:27:30.775787569 +0000 UTC m=+0.094669991 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:15Z, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, tcib_managed=true, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp-rhel9/openstack-collectd)
Feb 20 08:27:30 np0005625203.localdomain podman[87006]: 2026-02-20 08:27:30.786249014 +0000 UTC m=+0.105131486 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.expose-services=, version=17.1.13, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_id=tripleo_step3, architecture=x86_64, release=1766032510, tcib_managed=true)
Feb 20 08:27:30 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:27:30 np0005625203.localdomain podman[87007]: 2026-02-20 08:27:30.869653303 +0000 UTC m=+0.185987436 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.13, build-date=2026-01-12T22:34:43Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:34:43Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, com.redhat.component=openstack-iscsid-container, vcs-type=git, name=rhosp-rhel9/openstack-iscsid, io.openshift.expose-services=, batch=17.1_20260112.1, architecture=x86_64, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, release=1766032510, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3)
Feb 20 08:27:30 np0005625203.localdomain podman[87007]: 2026-02-20 08:27:30.909254554 +0000 UTC m=+0.225588717 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, batch=17.1_20260112.1, release=1766032510, tcib_managed=true, io.buildah.version=1.41.5, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, architecture=x86_64, url=https://www.redhat.com, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, name=rhosp-rhel9/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vcs-type=git, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']})
Feb 20 08:27:30 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:27:34 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:27:34 np0005625203.localdomain podman[87046]: 2026-02-20 08:27:34.758846662 +0000 UTC m=+0.076449374 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp-rhel9/openstack-qdrouterd, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, com.redhat.component=openstack-qdrouterd-container, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=metrics_qdr, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1766032510, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20260112.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vcs-type=git)
Feb 20 08:27:34 np0005625203.localdomain podman[87046]: 2026-02-20 08:27:34.988270676 +0000 UTC m=+0.305873368 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, url=https://www.redhat.com, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, batch=17.1_20260112.1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, config_id=tripleo_step1, container_name=metrics_qdr, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, name=rhosp-rhel9/openstack-qdrouterd, version=17.1.13)
Feb 20 08:27:35 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:27:36 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:27:36 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:27:36 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:27:36 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:27:36 np0005625203.localdomain podman[87081]: 2026-02-20 08:27:36.774568744 +0000 UTC m=+0.085420283 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, container_name=ceilometer_agent_ipmi, version=17.1.13, build-date=2026-01-12T23:07:30Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, url=https://www.redhat.com, distribution-scope=public, release=1766032510, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, io.openshift.expose-services=, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:07:30Z, vendor=Red Hat, Inc., batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container)
Feb 20 08:27:36 np0005625203.localdomain podman[87075]: 2026-02-20 08:27:36.823262677 +0000 UTC m=+0.140121492 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, container_name=ceilometer_agent_compute, name=rhosp-rhel9/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, url=https://www.redhat.com, distribution-scope=public, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:07:47Z, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., release=1766032510, build-date=2026-01-12T23:07:47Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Feb 20 08:27:36 np0005625203.localdomain podman[87081]: 2026-02-20 08:27:36.835206948 +0000 UTC m=+0.146058477 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, 
konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, managed_by=tripleo_ansible, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, distribution-scope=public, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, release=1766032510)
Feb 20 08:27:36 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:27:36 np0005625203.localdomain podman[87074]: 2026-02-20 08:27:36.80469609 +0000 UTC m=+0.121644908 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, distribution-scope=public, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, container_name=nova_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, build-date=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, tcib_managed=true, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Feb 20 08:27:36 np0005625203.localdomain podman[87073]: 2026-02-20 08:27:36.75603845 +0000 UTC m=+0.079007375 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, name=rhosp-rhel9/openstack-cron, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, release=1766032510, io.buildah.version=1.41.5)
Feb 20 08:27:36 np0005625203.localdomain podman[87075]: 2026-02-20 08:27:36.882132355 +0000 UTC m=+0.198991160 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, container_name=ceilometer_agent_compute, name=rhosp-rhel9/openstack-ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
io.openshift.expose-services=, batch=17.1_20260112.1, build-date=2026-01-12T23:07:47Z, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:07:47Z, release=1766032510, version=17.1.13)
Feb 20 08:27:36 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:27:36 np0005625203.localdomain podman[87074]: 2026-02-20 08:27:36.938379332 +0000 UTC m=+0.255328210 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, tcib_managed=true, url=https://www.redhat.com, io.buildah.version=1.41.5, distribution-scope=public)
Feb 20 08:27:36 np0005625203.localdomain podman[87073]: 2026-02-20 08:27:36.938815685 +0000 UTC m=+0.261784650 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, vcs-type=git, com.redhat.component=openstack-cron-container, distribution-scope=public, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, tcib_managed=true, 
io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, url=https://www.redhat.com, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., release=1766032510, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee)
Feb 20 08:27:36 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:27:37 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:27:38 np0005625203.localdomain sudo[87172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:27:38 np0005625203.localdomain sudo[87172]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:27:38 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:27:38 np0005625203.localdomain sudo[87172]: pam_unix(sudo:session): session closed for user root
Feb 20 08:27:38 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:27:38 np0005625203.localdomain sudo[87204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 08:27:38 np0005625203.localdomain sudo[87204]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:27:38 np0005625203.localdomain podman[87187]: 2026-02-20 08:27:38.400545276 +0000 UTC m=+0.082639988 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, io.openshift.expose-services=, distribution-scope=public, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1766032510, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.5, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ovn_metadata_agent, vcs-type=git, architecture=x86_64, config_id=tripleo_step4, build-date=2026-01-12T22:56:19Z)
Feb 20 08:27:38 np0005625203.localdomain podman[87188]: 2026-02-20 08:27:38.46705429 +0000 UTC m=+0.147512471 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:36:40Z, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, release=1766032510, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, io.buildah.version=1.41.5, vendor=Red Hat, Inc., io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:36:40Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, architecture=x86_64, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, container_name=ovn_controller)
Feb 20 08:27:38 np0005625203.localdomain podman[87188]: 2026-02-20 08:27:38.485281667 +0000 UTC m=+0.165739818 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, version=17.1.13, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:36:40Z, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, vcs-type=git, managed_by=tripleo_ansible, url=https://www.redhat.com, release=1766032510, build-date=2026-01-12T22:36:40Z, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, distribution-scope=public, batch=17.1_20260112.1, 
maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:27:38 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Deactivated successfully.
Feb 20 08:27:38 np0005625203.localdomain podman[87187]: 2026-02-20 08:27:38.496378911 +0000 UTC m=+0.178473613 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, build-date=2026-01-12T22:56:19Z, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, maintainer=OpenStack TripleO Team, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, architecture=x86_64, distribution-scope=public, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:27:38 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Deactivated successfully.
Feb 20 08:27:39 np0005625203.localdomain sudo[87204]: pam_unix(sudo:session): session closed for user root
Feb 20 08:27:39 np0005625203.localdomain sudo[87282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 08:27:39 np0005625203.localdomain sudo[87282]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:27:39 np0005625203.localdomain sudo[87282]: pam_unix(sudo:session): session closed for user root
Feb 20 08:27:41 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:27:41 np0005625203.localdomain podman[87297]: 2026-02-20 08:27:41.768444037 +0000 UTC m=+0.084054651 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.expose-services=, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, managed_by=tripleo_ansible, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute, build-date=2026-01-12T23:32:04Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.5, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vendor=Red Hat, Inc.)
Feb 20 08:27:42 np0005625203.localdomain podman[87297]: 2026-02-20 08:27:42.135384252 +0000 UTC m=+0.450994866 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step4, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, 
io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, container_name=nova_migration_target, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510)
Feb 20 08:27:42 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:27:43 np0005625203.localdomain sshd[87320]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:27:44 np0005625203.localdomain sshd[87320]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:28:01 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:28:01 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:28:01 np0005625203.localdomain podman[87368]: 2026-02-20 08:28:01.778275694 +0000 UTC m=+0.089507592 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, com.redhat.component=openstack-iscsid-container, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.buildah.version=1.41.5, version=17.1.13, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, name=rhosp-rhel9/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, 
cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, container_name=iscsid, build-date=2026-01-12T22:34:43Z, batch=17.1_20260112.1, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vcs-ref=705339545363fec600102567c4e923938e0f43b3, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3)
Feb 20 08:28:01 np0005625203.localdomain podman[87367]: 2026-02-20 08:28:01.823819609 +0000 UTC m=+0.137008298 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, container_name=collectd, release=1766032510, managed_by=tripleo_ansible, config_id=tripleo_step3, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd)
Feb 20 08:28:01 np0005625203.localdomain podman[87367]: 2026-02-20 08:28:01.834083879 +0000 UTC m=+0.147272568 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1766032510, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, version=17.1.13, io.openshift.expose-services=, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:15Z, container_name=collectd, config_id=tripleo_step3, name=rhosp-rhel9/openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee)
Feb 20 08:28:01 np0005625203.localdomain podman[87368]: 2026-02-20 08:28:01.842529391 +0000 UTC m=+0.153761319 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, io.buildah.version=1.41.5, architecture=x86_64, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., build-date=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.13, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red 
Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, release=1766032510, distribution-scope=public, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3)
Feb 20 08:28:01 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:28:01 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:28:05 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:28:05 np0005625203.localdomain podman[87404]: 2026-02-20 08:28:05.760820981 +0000 UTC m=+0.081426871 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, name=rhosp-rhel9/openstack-qdrouterd, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, maintainer=OpenStack TripleO Team, release=1766032510, distribution-scope=public, config_id=tripleo_step1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, build-date=2026-01-12T22:10:14Z, batch=17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., io.openshift.expose-services=)
Feb 20 08:28:05 np0005625203.localdomain podman[87404]: 2026-02-20 08:28:05.953411156 +0000 UTC m=+0.274017076 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.13, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, container_name=metrics_qdr, batch=17.1_20260112.1, io.openshift.expose-services=, architecture=x86_64, build-date=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, url=https://www.redhat.com, release=1766032510, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5)
Feb 20 08:28:05 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:28:07 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:28:07 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:28:07 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:28:07 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:28:07 np0005625203.localdomain podman[87435]: 2026-02-20 08:28:07.770869908 +0000 UTC m=+0.081058129 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, url=https://www.redhat.com, architecture=x86_64, name=rhosp-rhel9/openstack-ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.openshift.expose-services=, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc.)
Feb 20 08:28:07 np0005625203.localdomain podman[87435]: 2026-02-20 08:28:07.830452909 +0000 UTC m=+0.140641210 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, release=1766032510, batch=17.1_20260112.1, io.openshift.expose-services=, 
com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:30Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:07:30Z, version=17.1.13, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.buildah.version=1.41.5, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06)
Feb 20 08:28:07 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:28:07 np0005625203.localdomain podman[87432]: 2026-02-20 08:28:07.829645795 +0000 UTC m=+0.144917274 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, name=rhosp-rhel9/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, architecture=x86_64, batch=17.1_20260112.1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, 
cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.expose-services=, url=https://www.redhat.com, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-cron-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., version=17.1.13)
Feb 20 08:28:07 np0005625203.localdomain systemd[1]: tmp-crun.Rr1pqv.mount: Deactivated successfully.
Feb 20 08:28:07 np0005625203.localdomain podman[87433]: 2026-02-20 08:28:07.888966718 +0000 UTC m=+0.203241096 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-type=git, release=1766032510, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, build-date=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Feb 20 08:28:07 np0005625203.localdomain podman[87433]: 2026-02-20 08:28:07.912670405 +0000 UTC m=+0.226944763 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., url=https://www.redhat.com, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, batch=17.1_20260112.1, release=1766032510, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, container_name=nova_compute, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, tcib_managed=true)
Feb 20 08:28:07 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:28:07 np0005625203.localdomain podman[87434]: 2026-02-20 08:28:07.937857177 +0000 UTC m=+0.250318639 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, build-date=2026-01-12T23:07:47Z, release=1766032510, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, version=17.1.13, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_id=tripleo_step4, 
name=rhosp-rhel9/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:07:47Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:28:07 np0005625203.localdomain podman[87432]: 2026-02-20 08:28:07.963530525 +0000 UTC m=+0.278802004 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, org.opencontainers.image.created=2026-01-12T22:10:15Z, build-date=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, container_name=logrotate_crond, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, distribution-scope=public, release=1766032510, config_id=tripleo_step4, name=rhosp-rhel9/openstack-cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1)
Feb 20 08:28:07 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:28:08 np0005625203.localdomain podman[87434]: 2026-02-20 08:28:08.018117381 +0000 UTC m=+0.330578843 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:07:47Z, vcs-type=git, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, version=17.1.13, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, architecture=x86_64, name=rhosp-rhel9/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, build-date=2026-01-12T23:07:47Z, vendor=Red Hat, Inc., org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06)
Feb 20 08:28:08 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:28:08 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:28:08 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:28:08 np0005625203.localdomain systemd[1]: tmp-crun.T2K37o.mount: Deactivated successfully.
Feb 20 08:28:08 np0005625203.localdomain podman[87532]: 2026-02-20 08:28:08.785159834 +0000 UTC m=+0.099528404 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, batch=17.1_20260112.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, vcs-type=git, distribution-scope=public, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, build-date=2026-01-12T22:56:19Z, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, container_name=ovn_metadata_agent, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5)
Feb 20 08:28:08 np0005625203.localdomain systemd[1]: tmp-crun.YU7RTt.mount: Deactivated successfully.
Feb 20 08:28:08 np0005625203.localdomain podman[87532]: 2026-02-20 08:28:08.840426422 +0000 UTC m=+0.154794992 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:56:19Z, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:56:19Z, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step4, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, managed_by=tripleo_ansible, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.buildah.version=1.41.5, tcib_managed=true)
Feb 20 08:28:08 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Deactivated successfully.
Feb 20 08:28:08 np0005625203.localdomain podman[87533]: 2026-02-20 08:28:08.841702961 +0000 UTC m=+0.153666916 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, build-date=2026-01-12T22:36:40Z, config_id=tripleo_step4, io.openshift.expose-services=, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:36:40Z, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, version=17.1.13, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, 
io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ovn-controller, batch=17.1_20260112.1)
Feb 20 08:28:08 np0005625203.localdomain podman[87533]: 2026-02-20 08:28:08.926371352 +0000 UTC m=+0.238335317 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, vendor=Red Hat, Inc., vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.5, managed_by=tripleo_ansible, vcs-type=git, maintainer=OpenStack TripleO Team, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, build-date=2026-01-12T22:36:40Z, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, 
name=rhosp-rhel9/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, batch=17.1_20260112.1, release=1766032510, description=Red Hat OpenStack Platform 17.1 ovn-controller)
Feb 20 08:28:08 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Deactivated successfully.
Feb 20 08:28:10 np0005625203.localdomain sshd[87579]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:28:11 np0005625203.localdomain sshd[87579]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:28:12 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:28:12 np0005625203.localdomain podman[87581]: 2026-02-20 08:28:12.761935953 +0000 UTC m=+0.081802454 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, build-date=2026-01-12T23:32:04Z, batch=17.1_20260112.1, tcib_managed=true, distribution-scope=public, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp-rhel9/openstack-nova-compute, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, version=17.1.13, vcs-type=git, 
org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe)
Feb 20 08:28:13 np0005625203.localdomain podman[87581]: 2026-02-20 08:28:13.137357988 +0000 UTC m=+0.457224459 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-nova-compute, url=https://www.redhat.com, managed_by=tripleo_ansible, build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, config_id=tripleo_step4, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:32:04Z)
Feb 20 08:28:13 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:28:29 np0005625203.localdomain sshd[87604]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:28:30 np0005625203.localdomain sshd[87604]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:28:32 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:28:32 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:28:32 np0005625203.localdomain systemd[1]: tmp-crun.6ub7JL.mount: Deactivated successfully.
Feb 20 08:28:32 np0005625203.localdomain podman[87606]: 2026-02-20 08:28:32.789947898 +0000 UTC m=+0.100765402 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, container_name=collectd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-collectd, config_id=tripleo_step3, release=1766032510, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, org.opencontainers.image.created=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, version=17.1.13, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']})
Feb 20 08:28:32 np0005625203.localdomain podman[87606]: 2026-02-20 08:28:32.804387837 +0000 UTC m=+0.115205361 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:15Z, distribution-scope=public, io.openshift.expose-services=, batch=17.1_20260112.1, vendor=Red Hat, Inc., config_id=tripleo_step3, release=1766032510, container_name=collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-collectd, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, vcs-type=git, version=17.1.13, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team)
Feb 20 08:28:32 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:28:32 np0005625203.localdomain systemd[1]: tmp-crun.JoyY8f.mount: Deactivated successfully.
Feb 20 08:28:32 np0005625203.localdomain podman[87607]: 2026-02-20 08:28:32.906937474 +0000 UTC m=+0.210281925 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, config_id=tripleo_step3, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:34:43Z, release=1766032510, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, version=17.1.13, 
managed_by=tripleo_ansible, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:34:43Z, name=rhosp-rhel9/openstack-iscsid, io.buildah.version=1.41.5, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, url=https://www.redhat.com, container_name=iscsid, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-iscsid-container)
Feb 20 08:28:32 np0005625203.localdomain podman[87607]: 2026-02-20 08:28:32.946288666 +0000 UTC m=+0.249633097 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, managed_by=tripleo_ansible, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, release=1766032510, config_id=tripleo_step3, container_name=iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, name=rhosp-rhel9/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.openshift.expose-services=, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, build-date=2026-01-12T22:34:43Z, version=17.1.13, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc.)
Feb 20 08:28:32 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:28:36 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:28:36 np0005625203.localdomain systemd[1]: tmp-crun.xoK397.mount: Deactivated successfully.
Feb 20 08:28:36 np0005625203.localdomain podman[87644]: 2026-02-20 08:28:36.776091136 +0000 UTC m=+0.090991608 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, vcs-type=git, io.buildah.version=1.41.5, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, name=rhosp-rhel9/openstack-qdrouterd, distribution-scope=public, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd)
Feb 20 08:28:36 np0005625203.localdomain podman[87644]: 2026-02-20 08:28:36.98121262 +0000 UTC m=+0.296113032 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, container_name=metrics_qdr, release=1766032510, name=rhosp-rhel9/openstack-qdrouterd, distribution-scope=public, build-date=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd)
Feb 20 08:28:36 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:28:38 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:28:38 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:28:38 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:28:38 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:28:38 np0005625203.localdomain podman[87674]: 2026-02-20 08:28:38.764079847 +0000 UTC m=+0.080811101 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., distribution-scope=public, container_name=logrotate_crond, architecture=x86_64, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.expose-services=, name=rhosp-rhel9/openstack-cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, config_id=tripleo_step4, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:28:38 np0005625203.localdomain systemd[1]: tmp-crun.EPWmLB.mount: Deactivated successfully.
Feb 20 08:28:38 np0005625203.localdomain podman[87675]: 2026-02-20 08:28:38.791337624 +0000 UTC m=+0.102115784 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, container_name=nova_compute, build-date=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp-rhel9/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, tcib_managed=true, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com, io.openshift.expose-services=)
Feb 20 08:28:38 np0005625203.localdomain podman[87676]: 2026-02-20 08:28:38.833712241 +0000 UTC m=+0.139437764 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., build-date=2026-01-12T23:07:47Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-type=git, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.41.5)
Feb 20 08:28:38 np0005625203.localdomain podman[87675]: 2026-02-20 08:28:38.845216638 +0000 UTC m=+0.155994798 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, managed_by=tripleo_ansible, tcib_managed=true, url=https://www.redhat.com, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', 
'/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute, release=1766032510, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, architecture=x86_64, container_name=nova_compute, version=17.1.13, build-date=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:28:38 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:28:38 np0005625203.localdomain podman[87674]: 2026-02-20 08:28:38.857304644 +0000 UTC m=+0.174035948 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, distribution-scope=public, tcib_managed=true, com.redhat.component=openstack-cron-container, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, 
vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=logrotate_crond, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, batch=17.1_20260112.1, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, version=17.1.13, release=1766032510, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron)
Feb 20 08:28:38 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:28:38 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:28:38 np0005625203.localdomain podman[87676]: 2026-02-20 08:28:38.893708525 +0000 UTC m=+0.199434108 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_compute, version=17.1.13, build-date=2026-01-12T23:07:47Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:07:47Z, architecture=x86_64, batch=17.1_20260112.1, release=1766032510)
Feb 20 08:28:38 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:28:38 np0005625203.localdomain podman[87682]: 2026-02-20 08:28:38.939152257 +0000 UTC m=+0.242640351 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:07:30Z, vendor=Red Hat, Inc., io.openshift.expose-services=, release=1766032510, batch=17.1_20260112.1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, 
tcib_managed=true, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, version=17.1.13, architecture=x86_64, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, managed_by=tripleo_ansible, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Feb 20 08:28:38 np0005625203.localdomain podman[87756]: 2026-02-20 08:28:38.962870844 +0000 UTC m=+0.080746199 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 
release=1766032510, vendor=Red Hat, Inc., io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:56:19Z, architecture=x86_64, vcs-type=git, version=17.1.13, tcib_managed=true, config_id=tripleo_step4, url=https://www.redhat.com, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, batch=17.1_20260112.1, build-date=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn)
Feb 20 08:28:38 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:28:39 np0005625203.localdomain podman[87756]: 2026-02-20 08:28:39.000275227 +0000 UTC m=+0.118150562 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.expose-services=, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:56:19Z, build-date=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, distribution-scope=public, version=17.1.13, vendor=Red Hat, Inc., config_id=tripleo_step4, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, tcib_managed=true, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Feb 20 08:28:39 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Deactivated successfully.
Feb 20 08:28:39 np0005625203.localdomain podman[87682]: 2026-02-20 08:28:39.016696357 +0000 UTC m=+0.320184521 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, batch=17.1_20260112.1, container_name=ceilometer_agent_ipmi, architecture=x86_64, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, distribution-scope=public, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:30Z, managed_by=tripleo_ansible, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, build-date=2026-01-12T23:07:30Z, config_id=tripleo_step4)
Feb 20 08:28:39 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:28:39 np0005625203.localdomain podman[87793]: 2026-02-20 08:28:39.072576233 +0000 UTC m=+0.083620649 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ovn_controller, tcib_managed=true, managed_by=tripleo_ansible, batch=17.1_20260112.1, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, build-date=2026-01-12T22:36:40Z, org.opencontainers.image.created=2026-01-12T22:36:40Z, url=https://www.redhat.com, 
io.openshift.expose-services=, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, vendor=Red Hat, Inc., vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller)
Feb 20 08:28:39 np0005625203.localdomain podman[87793]: 2026-02-20 08:28:39.131430402 +0000 UTC m=+0.142474868 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, vcs-type=git, url=https://www.redhat.com, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:36:40Z, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:36:40Z, name=rhosp-rhel9/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, distribution-scope=public, release=1766032510, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, io.openshift.expose-services=, batch=17.1_20260112.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:28:39 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Deactivated successfully.
Feb 20 08:28:39 np0005625203.localdomain sudo[87819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:28:39 np0005625203.localdomain sudo[87819]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:28:39 np0005625203.localdomain sudo[87819]: pam_unix(sudo:session): session closed for user root
Feb 20 08:28:40 np0005625203.localdomain sudo[87834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 08:28:40 np0005625203.localdomain sudo[87834]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:28:40 np0005625203.localdomain sudo[87834]: pam_unix(sudo:session): session closed for user root
Feb 20 08:28:41 np0005625203.localdomain sudo[87881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 08:28:41 np0005625203.localdomain sudo[87881]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:28:41 np0005625203.localdomain sudo[87881]: pam_unix(sudo:session): session closed for user root
Feb 20 08:28:41 np0005625203.localdomain sshd[87896]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:28:43 np0005625203.localdomain sshd[87896]: Received disconnect from 102.210.148.92 port 48146:11: Bye Bye [preauth]
Feb 20 08:28:43 np0005625203.localdomain sshd[87896]: Disconnected from authenticating user root 102.210.148.92 port 48146 [preauth]
Feb 20 08:28:43 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:28:43 np0005625203.localdomain podman[87898]: 2026-02-20 08:28:43.467319966 +0000 UTC m=+0.090983967 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, version=17.1.13, managed_by=tripleo_ansible, batch=17.1_20260112.1, container_name=nova_migration_target, 
release=1766032510, distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true)
Feb 20 08:28:43 np0005625203.localdomain podman[87898]: 2026-02-20 08:28:43.841479823 +0000 UTC m=+0.465143844 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, vcs-type=git, vendor=Red Hat, Inc., 
description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, url=https://www.redhat.com)
Feb 20 08:28:43 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:28:54 np0005625203.localdomain sshd[87921]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:28:56 np0005625203.localdomain sshd[87921]: Invalid user superadmin from 103.200.25.162 port 57016
Feb 20 08:28:56 np0005625203.localdomain sshd[87921]: Received disconnect from 103.200.25.162 port 57016:11: Bye Bye [preauth]
Feb 20 08:28:56 np0005625203.localdomain sshd[87921]: Disconnected from invalid user superadmin 103.200.25.162 port 57016 [preauth]
Feb 20 08:29:01 np0005625203.localdomain sshd[87968]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:29:01 np0005625203.localdomain sshd[87968]: Invalid user oracle from 147.135.114.8 port 46354
Feb 20 08:29:01 np0005625203.localdomain sshd[87968]: Received disconnect from 147.135.114.8 port 46354:11: Bye Bye [preauth]
Feb 20 08:29:01 np0005625203.localdomain sshd[87968]: Disconnected from invalid user oracle 147.135.114.8 port 46354 [preauth]
Feb 20 08:29:02 np0005625203.localdomain sshd[87970]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:29:03 np0005625203.localdomain sshd[87970]: Invalid user shalini from 212.154.234.9 port 29410
Feb 20 08:29:03 np0005625203.localdomain sshd[87970]: Received disconnect from 212.154.234.9 port 29410:11: Bye Bye [preauth]
Feb 20 08:29:03 np0005625203.localdomain sshd[87970]: Disconnected from invalid user shalini 212.154.234.9 port 29410 [preauth]
Feb 20 08:29:03 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:29:03 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:29:03 np0005625203.localdomain podman[87972]: 2026-02-20 08:29:03.404725357 +0000 UTC m=+0.082610727 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, batch=17.1_20260112.1, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', 
'/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, container_name=collectd, tcib_managed=true, com.redhat.component=openstack-collectd-container, name=rhosp-rhel9/openstack-collectd, maintainer=OpenStack TripleO Team)
Feb 20 08:29:03 np0005625203.localdomain podman[87972]: 2026-02-20 08:29:03.440496029 +0000 UTC m=+0.118381369 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, com.redhat.component=openstack-collectd-container, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step3, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, release=1766032510, version=17.1.13, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, container_name=collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc.)
Feb 20 08:29:03 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:29:03 np0005625203.localdomain podman[87973]: 2026-02-20 08:29:03.455566407 +0000 UTC m=+0.131312430 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, io.openshift.expose-services=, tcib_managed=true, vcs-ref=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, build-date=2026-01-12T22:34:43Z, container_name=iscsid, architecture=x86_64, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.5, managed_by=tripleo_ansible, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid)
Feb 20 08:29:03 np0005625203.localdomain podman[87973]: 2026-02-20 08:29:03.495356134 +0000 UTC m=+0.171102167 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=705339545363fec600102567c4e923938e0f43b3, com.redhat.component=openstack-iscsid-container, name=rhosp-rhel9/openstack-iscsid, release=1766032510, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, build-date=2026-01-12T22:34:43Z, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, container_name=iscsid, batch=17.1_20260112.1)
Feb 20 08:29:03 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:29:07 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:29:07 np0005625203.localdomain podman[88011]: 2026-02-20 08:29:07.774432384 +0000 UTC m=+0.093721883 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, architecture=x86_64, version=17.1.13, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, tcib_managed=true, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, io.openshift.expose-services=, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, container_name=metrics_qdr, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd)
Feb 20 08:29:07 np0005625203.localdomain podman[88011]: 2026-02-20 08:29:07.971479766 +0000 UTC m=+0.290769255 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, url=https://www.redhat.com, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.13, 
config_id=tripleo_step1, name=rhosp-rhel9/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:14Z)
Feb 20 08:29:07 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:29:09 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:29:09 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:29:09 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:29:09 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:29:09 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:29:09 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:29:09 np0005625203.localdomain podman[88043]: 2026-02-20 08:29:09.786906986 +0000 UTC m=+0.091105371 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.created=2026-01-12T23:07:47Z, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, batch=17.1_20260112.1, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, build-date=2026-01-12T23:07:47Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, architecture=x86_64, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ceilometer_agent_compute)
Feb 20 08:29:09 np0005625203.localdomain podman[88055]: 2026-02-20 08:29:09.847388246 +0000 UTC m=+0.141894491 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, build-date=2026-01-12T22:56:19Z, org.opencontainers.image.created=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, release=1766032510, io.buildah.version=1.41.5, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20260112.1, vendor=Red Hat, Inc., vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, managed_by=tripleo_ansible)
Feb 20 08:29:09 np0005625203.localdomain podman[88041]: 2026-02-20 08:29:09.893163878 +0000 UTC m=+0.208107187 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, managed_by=tripleo_ansible, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, config_id=tripleo_step4, build-date=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-cron, vendor=Red Hat, Inc., container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, architecture=x86_64)
Feb 20 08:29:09 np0005625203.localdomain podman[88055]: 2026-02-20 08:29:09.898507444 +0000 UTC m=+0.193013689 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.buildah.version=1.41.5, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ovn_metadata_agent, tcib_managed=true, url=https://www.redhat.com, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1766032510, build-date=2026-01-12T22:56:19Z, vcs-type=git, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20260112.1, managed_by=tripleo_ansible, io.openshift.expose-services=, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 
'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']})
Feb 20 08:29:09 np0005625203.localdomain podman[88041]: 2026-02-20 08:29:09.905237554 +0000 UTC m=+0.220180903 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1766032510, io.buildah.version=1.41.5, container_name=logrotate_crond, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, io.openshift.expose-services=, build-date=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., url=https://www.redhat.com, config_id=tripleo_step4)
Feb 20 08:29:09 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Deactivated successfully.
Feb 20 08:29:09 np0005625203.localdomain podman[88043]: 2026-02-20 08:29:09.926228915 +0000 UTC m=+0.230427340 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, name=rhosp-rhel9/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.created=2026-01-12T23:07:47Z, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, vcs-type=git, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:47Z, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, release=1766032510)
Feb 20 08:29:09 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:29:09 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:29:09 np0005625203.localdomain podman[88042]: 2026-02-20 08:29:09.966266719 +0000 UTC m=+0.277018928 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, io.buildah.version=1.41.5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step5, container_name=nova_compute, build-date=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible)
Feb 20 08:29:09 np0005625203.localdomain podman[88062]: 2026-02-20 08:29:09.92284286 +0000 UTC m=+0.211901655 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2026-01-12T22:36:40Z, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.expose-services=, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, url=https://www.redhat.com, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=ovn_controller, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, release=1766032510, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ovn-controller, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public)
Feb 20 08:29:09 np0005625203.localdomain podman[88042]: 2026-02-20 08:29:09.992492615 +0000 UTC m=+0.303244854 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.openshift.expose-services=, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1766032510, vendor=Red Hat, Inc., url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64)
Feb 20 08:29:10 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:29:10 np0005625203.localdomain podman[88062]: 2026-02-20 08:29:10.005455957 +0000 UTC m=+0.294514732 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, version=17.1.13, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, distribution-scope=public, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, build-date=2026-01-12T22:36:40Z, org.opencontainers.image.created=2026-01-12T22:36:40Z, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.5, container_name=ovn_controller, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, 
name=rhosp-rhel9/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git)
Feb 20 08:29:10 np0005625203.localdomain podman[88049]: 2026-02-20 08:29:09.811128979 +0000 UTC m=+0.108295996 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:30Z, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.13, maintainer=OpenStack TripleO Team, release=1766032510, build-date=2026-01-12T23:07:30Z, io.openshift.expose-services=, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, io.buildah.version=1.41.5, tcib_managed=true, vcs-type=git)
Feb 20 08:29:10 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Deactivated successfully.
Feb 20 08:29:10 np0005625203.localdomain podman[88049]: 2026-02-20 08:29:10.046659277 +0000 UTC m=+0.343826344 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., url=https://www.redhat.com, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:30Z)
Feb 20 08:29:10 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:29:14 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:29:14 np0005625203.localdomain podman[88184]: 2026-02-20 08:29:14.748964419 +0000 UTC m=+0.070606785 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, config_id=tripleo_step4, container_name=nova_migration_target, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, 
name=rhosp-rhel9/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, build-date=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute)
Feb 20 08:29:15 np0005625203.localdomain podman[88184]: 2026-02-20 08:29:15.089321784 +0000 UTC m=+0.410964140 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, url=https://www.redhat.com, version=17.1.13, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:32:04Z, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1766032510, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, vcs-type=git)
Feb 20 08:29:15 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:29:15 np0005625203.localdomain sshd[88207]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:29:15 np0005625203.localdomain sshd[88207]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:29:21 np0005625203.localdomain sshd[88209]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:29:22 np0005625203.localdomain sshd[88209]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:29:29 np0005625203.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Feb 20 08:29:29 np0005625203.localdomain recover_tripleo_nova_virtqemud[88212]: 62505
Feb 20 08:29:29 np0005625203.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Feb 20 08:29:29 np0005625203.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Feb 20 08:29:33 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:29:33 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:29:33 np0005625203.localdomain podman[88214]: 2026-02-20 08:29:33.780973856 +0000 UTC m=+0.091816304 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, url=https://www.redhat.com, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1766032510, vendor=Red Hat, Inc., io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, 
distribution-scope=public, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, build-date=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, batch=17.1_20260112.1, container_name=iscsid)
Feb 20 08:29:33 np0005625203.localdomain podman[88213]: 2026-02-20 08:29:33.831060752 +0000 UTC m=+0.141809997 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, com.redhat.component=openstack-collectd-container, name=rhosp-rhel9/openstack-collectd, io.buildah.version=1.41.5, 
container_name=collectd, distribution-scope=public, io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20260112.1, managed_by=tripleo_ansible, architecture=x86_64, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, url=https://www.redhat.com, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd)
Feb 20 08:29:33 np0005625203.localdomain podman[88213]: 2026-02-20 08:29:33.839459783 +0000 UTC m=+0.150209058 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, distribution-scope=public, vcs-type=git, config_id=tripleo_step3, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-collectd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, batch=17.1_20260112.1, build-date=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd)
Feb 20 08:29:33 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:29:33 np0005625203.localdomain podman[88214]: 2026-02-20 08:29:33.895850856 +0000 UTC m=+0.206693344 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.13, build-date=2026-01-12T22:34:43Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, batch=17.1_20260112.1, release=1766032510, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:34:43Z, name=rhosp-rhel9/openstack-iscsid, url=https://www.redhat.com, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=705339545363fec600102567c4e923938e0f43b3, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, container_name=iscsid, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid)
Feb 20 08:29:33 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:29:38 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:29:38 np0005625203.localdomain podman[88251]: 2026-02-20 08:29:38.746530536 +0000 UTC m=+0.068232011 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., release=1766032510, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:14Z, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:14Z)
Feb 20 08:29:38 np0005625203.localdomain podman[88251]: 2026-02-20 08:29:38.951187315 +0000 UTC m=+0.272888740 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, io.openshift.expose-services=, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, tcib_managed=true, distribution-scope=public, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64)
Feb 20 08:29:38 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:29:40 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:29:40 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:29:40 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:29:40 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:29:40 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:29:40 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:29:40 np0005625203.localdomain systemd[1]: tmp-crun.RjdnC1.mount: Deactivated successfully.
Feb 20 08:29:40 np0005625203.localdomain podman[88281]: 2026-02-20 08:29:40.788037281 +0000 UTC m=+0.101509645 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, tcib_managed=true, release=1766032510, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, batch=17.1_20260112.1, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:29:40 np0005625203.localdomain podman[88281]: 2026-02-20 08:29:40.794315035 +0000 UTC m=+0.107787419 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:15Z, tcib_managed=true, vendor=Red Hat, Inc., managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=logrotate_crond, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, release=1766032510, architecture=x86_64, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, vcs-type=git)
Feb 20 08:29:40 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:29:40 np0005625203.localdomain podman[88292]: 2026-02-20 08:29:40.843228125 +0000 UTC m=+0.142049984 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:36:40Z, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, name=rhosp-rhel9/openstack-ovn-controller, build-date=2026-01-12T22:36:40Z, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, container_name=ovn_controller, managed_by=tripleo_ansible, release=1766032510, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, vendor=Red Hat, Inc., io.buildah.version=1.41.5, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com)
Feb 20 08:29:40 np0005625203.localdomain podman[88292]: 2026-02-20 08:29:40.917206334 +0000 UTC m=+0.216028193 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, batch=17.1_20260112.1, build-date=2026-01-12T22:36:40Z, vendor=Red Hat, Inc., distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, container_name=ovn_controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, version=17.1.13, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:36:40Z)
Feb 20 08:29:40 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Deactivated successfully.
Feb 20 08:29:40 np0005625203.localdomain podman[88282]: 2026-02-20 08:29:40.937938608 +0000 UTC m=+0.246287424 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:32:04Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, maintainer=OpenStack TripleO Team, vcs-type=git, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp-rhel9/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5)
Feb 20 08:29:40 np0005625203.localdomain podman[88291]: 2026-02-20 08:29:40.908419641 +0000 UTC m=+0.210141670 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, 
container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, architecture=x86_64, batch=17.1_20260112.1, build-date=2026-01-12T22:56:19Z, version=17.1.13, managed_by=tripleo_ansible, config_id=tripleo_step4, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vendor=Red Hat, Inc., release=1766032510, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team)
Feb 20 08:29:40 np0005625203.localdomain podman[88284]: 2026-02-20 08:29:40.98626202 +0000 UTC m=+0.291211630 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:07:30Z, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, vendor=Red Hat, Inc., 
architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:07:30Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, release=1766032510, version=17.1.13, batch=17.1_20260112.1, io.buildah.version=1.41.5, url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Feb 20 08:29:41 np0005625203.localdomain podman[88283]: 2026-02-20 08:29:41.004251539 +0000 UTC m=+0.311063317 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, name=rhosp-rhel9/openstack-ceilometer-compute, build-date=2026-01-12T23:07:47Z, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, batch=17.1_20260112.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5)
Feb 20 08:29:41 np0005625203.localdomain podman[88284]: 2026-02-20 08:29:41.019386108 +0000 UTC m=+0.324335728 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, io.buildah.version=1.41.5, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, tcib_managed=true, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:07:30Z, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:07:30Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4)
Feb 20 08:29:41 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:29:41 np0005625203.localdomain podman[88291]: 2026-02-20 08:29:41.043487108 +0000 UTC m=+0.345209147 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, container_name=ovn_metadata_agent, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, managed_by=tripleo_ansible, build-date=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z, vendor=Red Hat, Inc., tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, config_id=tripleo_step4, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:29:41 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Deactivated successfully.
Feb 20 08:29:41 np0005625203.localdomain podman[88283]: 2026-02-20 08:29:41.063383156 +0000 UTC m=+0.370194894 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.5, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:07:47Z, 
description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, build-date=2026-01-12T23:07:47Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06)
Feb 20 08:29:41 np0005625203.localdomain podman[88282]: 2026-02-20 08:29:41.073450499 +0000 UTC m=+0.381799365 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step5, container_name=nova_compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, batch=17.1_20260112.1, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:32:04Z, build-date=2026-01-12T23:32:04Z, url=https://www.redhat.com, managed_by=tripleo_ansible, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Feb 20 08:29:41 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:29:41 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:29:41 np0005625203.localdomain sudo[88429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:29:41 np0005625203.localdomain sudo[88429]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:29:41 np0005625203.localdomain sudo[88429]: pam_unix(sudo:session): session closed for user root
Feb 20 08:29:41 np0005625203.localdomain sudo[88444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 08:29:41 np0005625203.localdomain sudo[88444]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:29:42 np0005625203.localdomain sudo[88444]: pam_unix(sudo:session): session closed for user root
Feb 20 08:29:45 np0005625203.localdomain sudo[88491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 08:29:45 np0005625203.localdomain sudo[88491]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:29:45 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:29:45 np0005625203.localdomain sudo[88491]: pam_unix(sudo:session): session closed for user root
Feb 20 08:29:45 np0005625203.localdomain podman[88505]: 2026-02-20 08:29:45.576245902 +0000 UTC m=+0.089408139 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://www.redhat.com, version=17.1.13, name=rhosp-rhel9/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:32:04Z, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z)
Feb 20 08:29:45 np0005625203.localdomain podman[88505]: 2026-02-20 08:29:45.993850148 +0000 UTC m=+0.507012335 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.buildah.version=1.41.5, batch=17.1_20260112.1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, name=rhosp-rhel9/openstack-nova-compute, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, version=17.1.13, build-date=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, architecture=x86_64, tcib_managed=true, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z)
Feb 20 08:29:46 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:29:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 20 08:29:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 3000.1 total, 600.0 interval
                                                          Cumulative writes: 4844 writes, 21K keys, 4844 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                                          Cumulative WAL: 4844 writes, 660 syncs, 7.34 writes per sync, written: 0.02 GB, 0.01 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 448 writes, 1802 keys, 448 commit groups, 1.0 writes per commit group, ingest: 2.73 MB, 0.00 MB/s
                                                          Interval WAL: 448 writes, 158 syncs, 2.84 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 20 08:29:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 20 08:29:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 3000.1 total, 600.0 interval
                                                          Cumulative writes: 5843 writes, 25K keys, 5843 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                                          Cumulative WAL: 5843 writes, 764 syncs, 7.65 writes per sync, written: 0.02 GB, 0.01 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 550 writes, 2057 keys, 550 commit groups, 1.0 writes per commit group, ingest: 2.08 MB, 0.00 MB/s
                                                          Interval WAL: 550 writes, 193 syncs, 2.85 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 20 08:29:58 np0005625203.localdomain sshd[88571]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:29:59 np0005625203.localdomain sshd[88571]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:30:04 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:30:04 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:30:04 np0005625203.localdomain podman[88576]: 2026-02-20 08:30:04.786011491 +0000 UTC m=+0.094774936 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.openshift.expose-services=, vcs-type=git, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.created=2026-01-12T22:34:43Z, 
io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, batch=17.1_20260112.1, build-date=2026-01-12T22:34:43Z, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, config_id=tripleo_step3, name=rhosp-rhel9/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, tcib_managed=true, version=17.1.13)
Feb 20 08:30:04 np0005625203.localdomain podman[88576]: 2026-02-20 08:30:04.823509197 +0000 UTC m=+0.132272662 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, container_name=iscsid, version=17.1.13, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, build-date=2026-01-12T22:34:43Z, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-iscsid, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true)
Feb 20 08:30:04 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:30:04 np0005625203.localdomain podman[88575]: 2026-02-20 08:30:04.829450271 +0000 UTC m=+0.138540216 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp-rhel9/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, container_name=collectd, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., tcib_managed=true, architecture=x86_64, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com)
Feb 20 08:30:04 np0005625203.localdomain podman[88575]: 2026-02-20 08:30:04.913525634 +0000 UTC m=+0.222615539 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, 
distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:15Z, container_name=collectd, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp-rhel9/openstack-collectd, io.buildah.version=1.41.5, tcib_managed=true, version=17.1.13, description=Red Hat OpenStack Platform 17.1 collectd)
Feb 20 08:30:04 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:30:09 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:30:09 np0005625203.localdomain podman[88617]: 2026-02-20 08:30:09.75730563 +0000 UTC m=+0.079183751 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, 
cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=17.1.13, architecture=x86_64, container_name=metrics_qdr, io.buildah.version=1.41.5, vendor=Red Hat, Inc., tcib_managed=true, name=rhosp-rhel9/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, maintainer=OpenStack TripleO Team)
Feb 20 08:30:09 np0005625203.localdomain podman[88617]: 2026-02-20 08:30:09.948071218 +0000 UTC m=+0.269949339 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, name=rhosp-rhel9/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, io.openshift.expose-services=, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, architecture=x86_64, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:14Z, config_id=tripleo_step1, vendor=Red Hat, Inc., batch=17.1_20260112.1)
Feb 20 08:30:09 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:30:11 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:30:11 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:30:11 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:30:11 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:30:11 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:30:11 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:30:11 np0005625203.localdomain podman[88648]: 2026-02-20 08:30:11.770572706 +0000 UTC m=+0.081572175 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, container_name=ceilometer_agent_ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:07:30Z, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, architecture=x86_64, tcib_managed=true, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:07:30Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4)
Feb 20 08:30:11 np0005625203.localdomain systemd[1]: tmp-crun.b8TZFe.mount: Deactivated successfully.
Feb 20 08:30:11 np0005625203.localdomain podman[88655]: 2026-02-20 08:30:11.787454301 +0000 UTC m=+0.093407093 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., distribution-scope=public, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ovn-controller, build-date=2026-01-12T22:36:40Z, release=1766032510, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:36:40Z, container_name=ovn_controller, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.13, maintainer=OpenStack TripleO Team)
Feb 20 08:30:11 np0005625203.localdomain podman[88646]: 2026-02-20 08:30:11.825113111 +0000 UTC m=+0.141291401 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, container_name=nova_compute, release=1766032510, vendor=Red Hat, Inc., version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, io.openshift.expose-services=, config_id=tripleo_step5, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, architecture=x86_64, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:30:11 np0005625203.localdomain podman[88655]: 2026-02-20 08:30:11.838774166 +0000 UTC m=+0.144726938 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:36:40Z, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, name=rhosp-rhel9/openstack-ovn-controller, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, container_name=ovn_controller, url=https://www.redhat.com, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:36:40Z, release=1766032510, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5)
Feb 20 08:30:11 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Deactivated successfully.
Feb 20 08:30:11 np0005625203.localdomain podman[88646]: 2026-02-20 08:30:11.851097279 +0000 UTC m=+0.167275559 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, url=https://www.redhat.com, architecture=x86_64, io.openshift.expose-services=, release=1766032510, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, version=17.1.13, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, tcib_managed=true, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc.)
Feb 20 08:30:11 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:30:11 np0005625203.localdomain podman[88648]: 2026-02-20 08:30:11.870147531 +0000 UTC m=+0.181147060 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-type=git, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:07:30Z, vendor=Red Hat, Inc., vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1766032510, version=17.1.13)
Feb 20 08:30:11 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:30:11 np0005625203.localdomain podman[88647]: 2026-02-20 08:30:11.83923694 +0000 UTC m=+0.153371387 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, managed_by=tripleo_ansible, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, version=17.1.13, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_id=tripleo_step4, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, vendor=Red Hat, Inc., vcs-type=git, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, name=rhosp-rhel9/openstack-ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 
'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Feb 20 08:30:11 np0005625203.localdomain podman[88645]: 2026-02-20 08:30:11.941526848 +0000 UTC m=+0.261063413 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, tcib_managed=true, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1766032510, architecture=x86_64, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1)
Feb 20 08:30:11 np0005625203.localdomain podman[88645]: 2026-02-20 08:30:11.953362317 +0000 UTC m=+0.272898922 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-type=git, io.buildah.version=1.41.5, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-cron, release=1766032510, build-date=2026-01-12T22:10:15Z, container_name=logrotate_crond, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, distribution-scope=public, io.openshift.expose-services=, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 cron)
Feb 20 08:30:11 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:30:11 np0005625203.localdomain podman[88647]: 2026-02-20 08:30:11.971552411 +0000 UTC m=+0.285686938 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, architecture=x86_64, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:07:47Z, name=rhosp-rhel9/openstack-ceilometer-compute, batch=17.1_20260112.1, com.redhat.component=openstack-ceilometer-compute-container, url=https://www.redhat.com, vcs-type=git, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, io.buildah.version=1.41.5, release=1766032510, build-date=2026-01-12T23:07:47Z)
Feb 20 08:30:11 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:30:12 np0005625203.localdomain podman[88649]: 2026-02-20 08:30:12.039637728 +0000 UTC m=+0.347833990 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, tcib_managed=true, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, batch=17.1_20260112.1, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vcs-type=git, url=https://www.redhat.com)
Feb 20 08:30:12 np0005625203.localdomain podman[88649]: 2026-02-20 08:30:12.088276989 +0000 UTC m=+0.396473261 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack 
Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, url=https://www.redhat.com, vendor=Red Hat, Inc., vcs-type=git, batch=17.1_20260112.1, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, build-date=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team)
Feb 20 08:30:12 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Deactivated successfully.
Feb 20 08:30:16 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:30:16 np0005625203.localdomain podman[88791]: 2026-02-20 08:30:16.76934235 +0000 UTC m=+0.088931665 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, 
release=1766032510, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, url=https://www.redhat.com, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:32:04Z, distribution-scope=public, vendor=Red Hat, Inc., build-date=2026-01-12T23:32:04Z, vcs-type=git, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute)
Feb 20 08:30:17 np0005625203.localdomain podman[88791]: 2026-02-20 08:30:17.142803054 +0000 UTC m=+0.462392419 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=nova_migration_target, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp-rhel9/openstack-nova-compute, 
maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, release=1766032510, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.buildah.version=1.41.5, version=17.1.13, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Feb 20 08:30:17 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:30:33 np0005625203.localdomain sshd[88816]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:30:33 np0005625203.localdomain sshd[88816]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:30:35 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:30:35 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:30:35 np0005625203.localdomain podman[88819]: 2026-02-20 08:30:35.777189358 +0000 UTC m=+0.092648260 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, batch=17.1_20260112.1, build-date=2026-01-12T22:34:43Z, vcs-ref=705339545363fec600102567c4e923938e0f43b3, distribution-scope=public, com.redhat.component=openstack-iscsid-container, container_name=iscsid, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:34:43Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, vcs-type=git, config_id=tripleo_step3)
Feb 20 08:30:35 np0005625203.localdomain podman[88819]: 2026-02-20 08:30:35.810335488 +0000 UTC m=+0.125794300 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1766032510, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:34:43Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, batch=17.1_20260112.1, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:34:43Z, config_id=tripleo_step3, managed_by=tripleo_ansible, version=17.1.13, tcib_managed=true, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.expose-services=, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, name=rhosp-rhel9/openstack-iscsid, com.redhat.component=openstack-iscsid-container, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com)
Feb 20 08:30:35 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:30:35 np0005625203.localdomain podman[88818]: 2026-02-20 08:30:35.826308774 +0000 UTC m=+0.141672613 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:15Z, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, distribution-scope=public, name=rhosp-rhel9/openstack-collectd, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, container_name=collectd, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., config_id=tripleo_step3, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, version=17.1.13)
Feb 20 08:30:35 np0005625203.localdomain podman[88818]: 2026-02-20 08:30:35.839258816 +0000 UTC m=+0.154622595 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp-rhel9/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, architecture=x86_64, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.created=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., release=1766032510, io.openshift.expose-services=, version=17.1.13, managed_by=tripleo_ansible, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:30:35 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:30:40 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:30:40 np0005625203.localdomain podman[88859]: 2026-02-20 08:30:40.762987257 +0000 UTC m=+0.084252729 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, url=https://www.redhat.com, release=1766032510, config_id=tripleo_step1, vcs-type=git, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
org.opencontainers.image.created=2026-01-12T22:10:14Z, version=17.1.13, name=rhosp-rhel9/openstack-qdrouterd, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:30:40 np0005625203.localdomain podman[88859]: 2026-02-20 08:30:40.950184504 +0000 UTC m=+0.271449946 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2026-01-12T22:10:14Z, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, release=1766032510, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vcs-type=git, url=https://www.redhat.com, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:14Z, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., version=17.1.13, name=rhosp-rhel9/openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:30:40 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:30:42 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:30:42 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:30:42 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:30:42 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:30:42 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:30:42 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:30:42 np0005625203.localdomain systemd[1]: tmp-crun.3Sfp2Y.mount: Deactivated successfully.
Feb 20 08:30:42 np0005625203.localdomain podman[88903]: 2026-02-20 08:30:42.79418747 +0000 UTC m=+0.090837654 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ovn-controller, architecture=x86_64, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:36:40Z, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, distribution-scope=public, tcib_managed=true, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.13, vendor=Red Hat, Inc., org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, release=1766032510, build-date=2026-01-12T22:36:40Z, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:30:42 np0005625203.localdomain podman[88889]: 2026-02-20 08:30:42.825648838 +0000 UTC m=+0.135127980 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_id=tripleo_step5, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, vcs-type=git, build-date=2026-01-12T23:32:04Z, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, distribution-scope=public)
Feb 20 08:30:42 np0005625203.localdomain podman[88903]: 2026-02-20 08:30:42.843049798 +0000 UTC m=+0.139699952 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:36:40Z, name=rhosp-rhel9/openstack-ovn-controller, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, version=17.1.13, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20260112.1, architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_id=tripleo_step4, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-type=git, 
cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vendor=Red Hat, Inc., release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller)
Feb 20 08:30:42 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Deactivated successfully.
Feb 20 08:30:42 np0005625203.localdomain podman[88889]: 2026-02-20 08:30:42.873218656 +0000 UTC m=+0.182697778 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, build-date=2026-01-12T23:32:04Z, url=https://www.redhat.com, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute, vendor=Red Hat, Inc., container_name=nova_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, config_id=tripleo_step5, io.buildah.version=1.41.5)
Feb 20 08:30:42 np0005625203.localdomain podman[88892]: 2026-02-20 08:30:42.776931804 +0000 UTC m=+0.080686538 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.buildah.version=1.41.5, container_name=ceilometer_agent_ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., distribution-scope=public, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:07:30Z, 
com.redhat.component=openstack-ceilometer-ipmi-container, batch=17.1_20260112.1, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:07:30Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, io.openshift.expose-services=, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13)
Feb 20 08:30:42 np0005625203.localdomain podman[88888]: 2026-02-20 08:30:42.886095036 +0000 UTC m=+0.200250573 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=logrotate_crond, name=rhosp-rhel9/openstack-cron, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.13, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:30:42 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:30:42 np0005625203.localdomain podman[88888]: 2026-02-20 08:30:42.89813134 +0000 UTC m=+0.212286877 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step4, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-cron, release=1766032510, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, vendor=Red Hat, Inc., tcib_managed=true, version=17.1.13, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:30:42 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:30:42 np0005625203.localdomain podman[88897]: 2026-02-20 08:30:42.997009432 +0000 UTC m=+0.296696890 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_id=tripleo_step4, vendor=Red Hat, Inc., release=1766032510, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', 
'/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:56:19Z, architecture=x86_64, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20260112.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true)
Feb 20 08:30:43 np0005625203.localdomain podman[88892]: 2026-02-20 08:30:43.019264244 +0000 UTC m=+0.323019008 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ceilometer-ipmi, batch=17.1_20260112.1, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:30Z, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., vcs-type=git, config_id=tripleo_step4, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, url=https://www.redhat.com, io.buildah.version=1.41.5, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, architecture=x86_64)
Feb 20 08:30:43 np0005625203.localdomain podman[88897]: 2026-02-20 08:30:43.029715169 +0000 UTC m=+0.329402687 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, io.buildah.version=1.41.5, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, release=1766032510, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2026-01-12T22:56:19Z, version=17.1.13, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, batch=17.1_20260112.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn)
Feb 20 08:30:43 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:30:43 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Deactivated successfully.
Feb 20 08:30:43 np0005625203.localdomain podman[88890]: 2026-02-20 08:30:42.97216524 +0000 UTC m=+0.277296477 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ceilometer-compute, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:07:47Z, release=1766032510, vendor=Red Hat, Inc., batch=17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, config_id=tripleo_step4, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:07:47Z)
Feb 20 08:30:43 np0005625203.localdomain podman[88890]: 2026-02-20 08:30:43.102522101 +0000 UTC m=+0.407653328 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, distribution-scope=public, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:07:47Z, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, container_name=ceilometer_agent_compute, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, tcib_managed=true, batch=17.1_20260112.1, release=1766032510, name=rhosp-rhel9/openstack-ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06)
Feb 20 08:30:43 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:30:44 np0005625203.localdomain sshd[89031]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:30:44 np0005625203.localdomain sshd[89031]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:30:45 np0005625203.localdomain sudo[89033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:30:45 np0005625203.localdomain sudo[89033]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:30:45 np0005625203.localdomain sudo[89033]: pam_unix(sudo:session): session closed for user root
Feb 20 08:30:45 np0005625203.localdomain sudo[89048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 check-host
Feb 20 08:30:45 np0005625203.localdomain sudo[89048]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:30:46 np0005625203.localdomain sudo[89048]: pam_unix(sudo:session): session closed for user root
Feb 20 08:30:46 np0005625203.localdomain sudo[89084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:30:46 np0005625203.localdomain sudo[89084]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:30:46 np0005625203.localdomain sudo[89084]: pam_unix(sudo:session): session closed for user root
Feb 20 08:30:46 np0005625203.localdomain sudo[89099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 08:30:46 np0005625203.localdomain sudo[89099]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:30:46 np0005625203.localdomain sudo[89099]: pam_unix(sudo:session): session closed for user root
Feb 20 08:30:47 np0005625203.localdomain sudo[89146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 08:30:47 np0005625203.localdomain sudo[89146]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:30:47 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:30:47 np0005625203.localdomain sudo[89146]: pam_unix(sudo:session): session closed for user root
Feb 20 08:30:47 np0005625203.localdomain podman[89161]: 2026-02-20 08:30:47.655272545 +0000 UTC m=+0.082071880 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, distribution-scope=public, vcs-type=git, version=17.1.13, architecture=x86_64, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, container_name=nova_migration_target, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']})
Feb 20 08:30:48 np0005625203.localdomain podman[89161]: 2026-02-20 08:30:48.044318654 +0000 UTC m=+0.471117909 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, config_id=tripleo_step4, vcs-type=git, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64)
Feb 20 08:30:48 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:31:06 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:31:06 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:31:06 np0005625203.localdomain podman[89231]: 2026-02-20 08:31:06.78347763 +0000 UTC m=+0.091420781 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20260112.1, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, distribution-scope=public, config_id=tripleo_step3, architecture=x86_64, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1766032510, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, url=https://www.redhat.com, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']})
Feb 20 08:31:06 np0005625203.localdomain podman[89231]: 2026-02-20 08:31:06.821981897 +0000 UTC m=+0.129924998 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp-rhel9/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, config_id=tripleo_step3, tcib_managed=true, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, batch=17.1_20260112.1, build-date=2026-01-12T22:10:15Z, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, container_name=collectd, maintainer=OpenStack TripleO Team, release=1766032510, vcs-type=git, vendor=Red Hat, Inc.)
Feb 20 08:31:06 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:31:06 np0005625203.localdomain podman[89232]: 2026-02-20 08:31:06.83883136 +0000 UTC m=+0.146841394 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:34:43Z, container_name=iscsid, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, vcs-type=git, build-date=2026-01-12T22:34:43Z, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-iscsid, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, io.openshift.expose-services=)
Feb 20 08:31:06 np0005625203.localdomain podman[89232]: 2026-02-20 08:31:06.877379928 +0000 UTC m=+0.185389972 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp-rhel9/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, io.buildah.version=1.41.5, release=1766032510, io.openshift.expose-services=, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.created=2026-01-12T22:34:43Z, version=17.1.13, config_id=tripleo_step3, managed_by=tripleo_ansible, architecture=x86_64, vcs-ref=705339545363fec600102567c4e923938e0f43b3, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:31:06 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:31:11 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:31:11 np0005625203.localdomain systemd[1]: tmp-crun.jVbtwb.mount: Deactivated successfully.
Feb 20 08:31:11 np0005625203.localdomain podman[89270]: 2026-02-20 08:31:11.761766106 +0000 UTC m=+0.082684610 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, batch=17.1_20260112.1, 
konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, release=1766032510, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:14Z, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee)
Feb 20 08:31:11 np0005625203.localdomain podman[89270]: 2026-02-20 08:31:11.970860503 +0000 UTC m=+0.291778987 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.13, tcib_managed=true, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, config_id=tripleo_step1, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']})
Feb 20 08:31:11 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:31:13 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:31:13 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:31:13 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:31:13 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:31:13 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:31:13 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:31:13 np0005625203.localdomain systemd[1]: tmp-crun.qV6F4h.mount: Deactivated successfully.
Feb 20 08:31:13 np0005625203.localdomain podman[89301]: 2026-02-20 08:31:13.754983089 +0000 UTC m=+0.073403171 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, version=17.1.13, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, batch=17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, architecture=x86_64, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, vcs-type=git, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:07:47Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, container_name=ceilometer_agent_compute)
Feb 20 08:31:13 np0005625203.localdomain systemd[1]: tmp-crun.w5jue6.mount: Deactivated successfully.
Feb 20 08:31:13 np0005625203.localdomain podman[89305]: 2026-02-20 08:31:13.769011945 +0000 UTC m=+0.081316497 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, version=17.1.13, vendor=Red Hat, Inc., release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, 
com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, distribution-scope=public, batch=17.1_20260112.1, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Feb 20 08:31:13 np0005625203.localdomain podman[89300]: 2026-02-20 08:31:13.805987944 +0000 UTC m=+0.126342497 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp-rhel9/openstack-nova-compute, batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:32:04Z, tcib_managed=true, vcs-type=git, version=17.1.13, distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, architecture=x86_64, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_compute, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe)
Feb 20 08:31:13 np0005625203.localdomain podman[89313]: 2026-02-20 08:31:13.814904551 +0000 UTC m=+0.124522540 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, url=https://www.redhat.com, vcs-type=git, distribution-scope=public, maintainer=OpenStack TripleO Team, version=17.1.13, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, io.buildah.version=1.41.5, 
org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ovn_controller)
Feb 20 08:31:13 np0005625203.localdomain podman[89305]: 2026-02-20 08:31:13.827533884 +0000 UTC m=+0.139838446 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:30Z, tcib_managed=true, batch=17.1_20260112.1, release=1766032510, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, vcs-type=git)
Feb 20 08:31:13 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:31:13 np0005625203.localdomain podman[89301]: 2026-02-20 08:31:13.837344838 +0000 UTC m=+0.155764930 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-type=git, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, distribution-scope=public, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20260112.1, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-compute, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, build-date=2026-01-12T23:07:47Z, managed_by=tripleo_ansible, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc.)
Feb 20 08:31:13 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:31:13 np0005625203.localdomain podman[89313]: 2026-02-20 08:31:13.86060268 +0000 UTC m=+0.170220669 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, build-date=2026-01-12T22:36:40Z, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 
ovn-controller, config_id=tripleo_step4, io.openshift.expose-services=, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.buildah.version=1.41.5, vendor=Red Hat, Inc., release=1766032510, distribution-scope=public)
Feb 20 08:31:13 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Deactivated successfully.
Feb 20 08:31:13 np0005625203.localdomain podman[89300]: 2026-02-20 08:31:13.911589256 +0000 UTC m=+0.231943859 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, config_id=tripleo_step5, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 
'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:31:13 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:31:13 np0005625203.localdomain podman[89307]: 2026-02-20 08:31:13.912687229 +0000 UTC m=+0.227545411 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-type=git, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, managed_by=tripleo_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, distribution-scope=public, build-date=2026-01-12T22:56:19Z, org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Feb 20 08:31:13 np0005625203.localdomain podman[89307]: 2026-02-20 08:31:13.995298707 +0000 UTC m=+0.310156869 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.13, batch=17.1_20260112.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, build-date=2026-01-12T22:56:19Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, vcs-type=git, architecture=x86_64, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=)
Feb 20 08:31:14 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Deactivated successfully.
Feb 20 08:31:14 np0005625203.localdomain podman[89299]: 2026-02-20 08:31:13.965108678 +0000 UTC m=+0.288864796 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, managed_by=tripleo_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, io.buildah.version=1.41.5, tcib_managed=true, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-cron, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:15Z, release=1766032510, 
url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:31:14 np0005625203.localdomain podman[89299]: 2026-02-20 08:31:14.04979707 +0000 UTC m=+0.373553198 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., container_name=logrotate_crond, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-type=git, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1766032510, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.13)
Feb 20 08:31:14 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:31:18 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:31:18 np0005625203.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Feb 20 08:31:18 np0005625203.localdomain recover_tripleo_nova_virtqemud[89444]: 62505
Feb 20 08:31:18 np0005625203.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Feb 20 08:31:18 np0005625203.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Feb 20 08:31:18 np0005625203.localdomain podman[89441]: 2026-02-20 08:31:18.760969136 +0000 UTC m=+0.078493530 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1766032510, build-date=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., container_name=nova_migration_target, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe)
Feb 20 08:31:19 np0005625203.localdomain podman[89441]: 2026-02-20 08:31:19.168319003 +0000 UTC m=+0.485843387 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., distribution-scope=public, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.41.5, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:32:04Z, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe)
Feb 20 08:31:19 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:31:28 np0005625203.localdomain sshd[89466]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:31:28 np0005625203.localdomain sshd[89466]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:31:37 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:31:37 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:31:37 np0005625203.localdomain podman[89469]: 2026-02-20 08:31:37.774299924 +0000 UTC m=+0.091339439 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.openshift.expose-services=, architecture=x86_64, build-date=2026-01-12T22:34:43Z, release=1766032510, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, container_name=iscsid, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, version=17.1.13, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:34:43Z, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']})
Feb 20 08:31:37 np0005625203.localdomain podman[89469]: 2026-02-20 08:31:37.814112601 +0000 UTC m=+0.131152176 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, container_name=iscsid, com.redhat.component=openstack-iscsid-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, io.openshift.expose-services=, url=https://www.redhat.com, vcs-ref=705339545363fec600102567c4e923938e0f43b3, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, vendor=Red Hat, Inc., io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:34:43Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, name=rhosp-rhel9/openstack-iscsid, batch=17.1_20260112.1, release=1766032510)
Feb 20 08:31:37 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:31:37 np0005625203.localdomain podman[89468]: 2026-02-20 08:31:37.819785587 +0000 UTC m=+0.138462172 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, tcib_managed=true, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, release=1766032510, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, name=rhosp-rhel9/openstack-collectd, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.expose-services=, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:31:37 np0005625203.localdomain podman[89468]: 2026-02-20 08:31:37.904420817 +0000 UTC m=+0.223097392 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.expose-services=, version=17.1.13, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-collectd, config_id=tripleo_step3, release=1766032510, vendor=Red Hat, Inc., architecture=x86_64, maintainer=OpenStack TripleO Team, container_name=collectd, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd)
Feb 20 08:31:37 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:31:42 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:31:42 np0005625203.localdomain podman[89508]: 2026-02-20 08:31:42.748785142 +0000 UTC m=+0.069284483 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, version=17.1.13, batch=17.1_20260112.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:14Z, url=https://www.redhat.com, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container)
Feb 20 08:31:42 np0005625203.localdomain podman[89508]: 2026-02-20 08:31:42.937184276 +0000 UTC m=+0.257683617 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
tcib_managed=true, managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:14Z, build-date=2026-01-12T22:10:14Z, version=17.1.13, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, distribution-scope=public, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, url=https://www.redhat.com, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd)
Feb 20 08:31:42 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:31:44 np0005625203.localdomain sshd[89539]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:31:44 np0005625203.localdomain sshd[89539]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:31:44 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:31:44 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:31:44 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:31:44 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:31:44 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:31:44 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:31:44 np0005625203.localdomain podman[89542]: 2026-02-20 08:31:44.593613675 +0000 UTC m=+0.093093734 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:32:04Z, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, release=1766032510, version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, container_name=nova_compute, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe)
Feb 20 08:31:44 np0005625203.localdomain podman[89542]: 2026-02-20 08:31:44.625244617 +0000 UTC m=+0.124724666 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., container_name=nova_compute, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe)
Feb 20 08:31:44 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:31:44 np0005625203.localdomain systemd[1]: tmp-crun.DZYXle.mount: Deactivated successfully.
Feb 20 08:31:44 np0005625203.localdomain podman[89557]: 2026-02-20 08:31:44.707861305 +0000 UTC m=+0.193616527 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, build-date=2026-01-12T22:36:40Z, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ovn-controller, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, release=1766032510, architecture=x86_64, version=17.1.13, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, 
container_name=ovn_controller, distribution-scope=public, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller)
Feb 20 08:31:44 np0005625203.localdomain podman[89541]: 2026-02-20 08:31:44.805584511 +0000 UTC m=+0.304428000 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, io.buildah.version=1.41.5, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, managed_by=tripleo_ansible)
Feb 20 08:31:44 np0005625203.localdomain podman[89541]: 2026-02-20 08:31:44.817229903 +0000 UTC m=+0.316073432 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, tcib_managed=true, name=rhosp-rhel9/openstack-cron, managed_by=tripleo_ansible, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
batch=17.1_20260112.1, config_id=tripleo_step4, distribution-scope=public, release=1766032510, build-date=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5)
Feb 20 08:31:44 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:31:44 np0005625203.localdomain podman[89557]: 2026-02-20 08:31:44.836574984 +0000 UTC m=+0.322330216 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.5, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ovn-controller, container_name=ovn_controller, vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:36:40Z, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, build-date=2026-01-12T22:36:40Z, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 
ovn-controller, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team)
Feb 20 08:31:44 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Deactivated successfully.
Feb 20 08:31:44 np0005625203.localdomain podman[89545]: 2026-02-20 08:31:44.859083234 +0000 UTC m=+0.350305676 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, batch=17.1_20260112.1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, release=1766032510, maintainer=OpenStack TripleO Team, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:56:19Z, org.opencontainers.image.created=2026-01-12T22:56:19Z, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, managed_by=tripleo_ansible, io.buildah.version=1.41.5, io.openshift.expose-services=, container_name=ovn_metadata_agent, version=17.1.13, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:31:44 np0005625203.localdomain podman[89543]: 2026-02-20 08:31:44.767125196 +0000 UTC m=+0.261303360 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, architecture=x86_64, build-date=2026-01-12T23:07:47Z, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:07:47Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, config_id=tripleo_step4, vcs-type=git, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp-rhel9/openstack-ceilometer-compute, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1)
Feb 20 08:31:44 np0005625203.localdomain podman[89544]: 2026-02-20 08:31:44.906294731 +0000 UTC m=+0.398549506 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.buildah.version=1.41.5, container_name=ceilometer_agent_ipmi, batch=17.1_20260112.1, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, release=1766032510, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, version=17.1.13, name=rhosp-rhel9/openstack-ceilometer-ipmi, distribution-scope=public, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:07:30Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Feb 20 08:31:44 np0005625203.localdomain podman[89545]: 2026-02-20 08:31:44.909998336 +0000 UTC m=+0.401220768 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.41.5, release=1766032510, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, url=https://www.redhat.com, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, architecture=x86_64, build-date=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent)
Feb 20 08:31:44 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Deactivated successfully.
Feb 20 08:31:44 np0005625203.localdomain podman[89544]: 2026-02-20 08:31:44.940322498 +0000 UTC m=+0.432577313 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, config_id=tripleo_step4, batch=17.1_20260112.1, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, url=https://www.redhat.com, architecture=x86_64, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.13, container_name=ceilometer_agent_ipmi)
Feb 20 08:31:44 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:31:44 np0005625203.localdomain podman[89543]: 2026-02-20 08:31:44.952336451 +0000 UTC m=+0.446514575 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, vendor=Red Hat, Inc., org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, url=https://www.redhat.com, version=17.1.13, architecture=x86_64, name=rhosp-rhel9/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:47Z)
Feb 20 08:31:44 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:31:45 np0005625203.localdomain systemd[1]: tmp-crun.1aq928.mount: Deactivated successfully.
Feb 20 08:31:47 np0005625203.localdomain sudo[89690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:31:47 np0005625203.localdomain sudo[89690]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:31:47 np0005625203.localdomain sudo[89690]: pam_unix(sudo:session): session closed for user root
Feb 20 08:31:47 np0005625203.localdomain sudo[89705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 08:31:47 np0005625203.localdomain sudo[89705]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:31:48 np0005625203.localdomain sudo[89705]: pam_unix(sudo:session): session closed for user root
Feb 20 08:31:49 np0005625203.localdomain sudo[89752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 08:31:49 np0005625203.localdomain sudo[89752]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:31:49 np0005625203.localdomain sudo[89752]: pam_unix(sudo:session): session closed for user root
Feb 20 08:31:49 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:31:49 np0005625203.localdomain systemd[1]: tmp-crun.GeDdss.mount: Deactivated successfully.
Feb 20 08:31:49 np0005625203.localdomain podman[89767]: 2026-02-20 08:31:49.783153295 +0000 UTC m=+0.099408941 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:32:04Z, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, 
version=17.1.13, build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.41.5, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-nova-compute)
Feb 20 08:31:50 np0005625203.localdomain podman[89767]: 2026-02-20 08:31:50.216300133 +0000 UTC m=+0.532555739 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, container_name=nova_migration_target, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, url=https://www.redhat.com, release=1766032510, name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.)
Feb 20 08:31:50 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:31:57 np0005625203.localdomain sshd[89791]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:31:57 np0005625203.localdomain sshd[89791]: Received disconnect from 147.135.114.8 port 54902:11: Bye Bye [preauth]
Feb 20 08:31:57 np0005625203.localdomain sshd[89791]: Disconnected from authenticating user root 147.135.114.8 port 54902 [preauth]
Feb 20 08:32:08 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:32:08 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:32:08 np0005625203.localdomain podman[89839]: 2026-02-20 08:32:08.79297358 +0000 UTC m=+0.099198733 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, distribution-scope=public, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.13, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:34:43Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-type=git, url=https://www.redhat.com)
Feb 20 08:32:08 np0005625203.localdomain podman[89839]: 2026-02-20 08:32:08.833400627 +0000 UTC m=+0.139625740 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, container_name=iscsid, build-date=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, name=rhosp-rhel9/openstack-iscsid, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., distribution-scope=public, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, config_id=tripleo_step3, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid)
Feb 20 08:32:08 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:32:08 np0005625203.localdomain systemd[1]: tmp-crun.pVNro4.mount: Deactivated successfully.
Feb 20 08:32:08 np0005625203.localdomain podman[89838]: 2026-02-20 08:32:08.888644203 +0000 UTC m=+0.197310542 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_id=tripleo_step3, batch=17.1_20260112.1, version=17.1.13, release=1766032510, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=collectd, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-collectd)
Feb 20 08:32:08 np0005625203.localdomain podman[89838]: 2026-02-20 08:32:08.901285886 +0000 UTC m=+0.209952225 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:15Z, url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, batch=17.1_20260112.1, release=1766032510, 
distribution-scope=public, config_id=tripleo_step3, container_name=collectd, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd)
Feb 20 08:32:08 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:32:13 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:32:13 np0005625203.localdomain systemd[1]: tmp-crun.PGLOFs.mount: Deactivated successfully.
Feb 20 08:32:13 np0005625203.localdomain podman[89876]: 2026-02-20 08:32:13.766492398 +0000 UTC m=+0.084336001 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, managed_by=tripleo_ansible, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, batch=17.1_20260112.1, container_name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.5, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:14Z)
Feb 20 08:32:13 np0005625203.localdomain podman[89876]: 2026-02-20 08:32:13.974525032 +0000 UTC m=+0.292368605 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, vendor=Red Hat, Inc., release=1766032510, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, distribution-scope=public, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, io.openshift.expose-services=, name=rhosp-rhel9/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, url=https://www.redhat.com)
Feb 20 08:32:13 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:32:14 np0005625203.localdomain sshd[89905]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:32:14 np0005625203.localdomain sshd[89905]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:32:14 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:32:14 np0005625203.localdomain podman[89907]: 2026-02-20 08:32:14.757589203 +0000 UTC m=+0.074819796 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, tcib_managed=true, build-date=2026-01-12T23:32:04Z, io.openshift.expose-services=, distribution-scope=public, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, vcs-type=git, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, config_id=tripleo_step5, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:32:14 np0005625203.localdomain podman[89907]: 2026-02-20 08:32:14.789176615 +0000 UTC m=+0.106407248 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.41.5, tcib_managed=true, url=https://www.redhat.com, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, maintainer=OpenStack TripleO Team, container_name=nova_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vcs-type=git)
Feb 20 08:32:14 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:32:15 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:32:15 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:32:15 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:32:15 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:32:15 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:32:15 np0005625203.localdomain systemd[1]: tmp-crun.plklI2.mount: Deactivated successfully.
Feb 20 08:32:15 np0005625203.localdomain podman[89938]: 2026-02-20 08:32:15.786791832 +0000 UTC m=+0.086771927 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ovn-controller, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, container_name=ovn_controller, release=1766032510, url=https://www.redhat.com, io.openshift.expose-services=, build-date=2026-01-12T22:36:40Z, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:36:40Z, version=17.1.13, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, 
distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ovn-controller-container)
Feb 20 08:32:15 np0005625203.localdomain podman[89933]: 2026-02-20 08:32:15.826932609 +0000 UTC m=+0.133692895 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 cron, release=1766032510, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, name=rhosp-rhel9/openstack-cron, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, batch=17.1_20260112.1)
Feb 20 08:32:15 np0005625203.localdomain podman[89933]: 2026-02-20 08:32:15.832235664 +0000 UTC m=+0.138995940 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.expose-services=, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-cron, io.buildah.version=1.41.5, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1)
Feb 20 08:32:15 np0005625203.localdomain podman[89938]: 2026-02-20 08:32:15.838821589 +0000 UTC m=+0.138801764 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-type=git, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, name=rhosp-rhel9/openstack-ovn-controller, release=1766032510, url=https://www.redhat.com, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:36:40Z, version=17.1.13, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, container_name=ovn_controller, io.buildah.version=1.41.5, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']})
Feb 20 08:32:15 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:32:15 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Deactivated successfully.
Feb 20 08:32:15 np0005625203.localdomain podman[89936]: 2026-02-20 08:32:15.8842353 +0000 UTC m=+0.184950247 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, architecture=x86_64, container_name=ovn_metadata_agent, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:56:19Z, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, url=https://www.redhat.com, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1)
Feb 20 08:32:15 np0005625203.localdomain podman[89934]: 2026-02-20 08:32:15.933461569 +0000 UTC m=+0.237931844 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_id=tripleo_step4, build-date=2026-01-12T23:07:47Z, io.openshift.expose-services=, container_name=ceilometer_agent_compute, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:07:47Z, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, version=17.1.13, name=rhosp-rhel9/openstack-ceilometer-compute)
Feb 20 08:32:15 np0005625203.localdomain podman[89936]: 2026-02-20 08:32:15.941537161 +0000 UTC m=+0.242252228 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., build-date=2026-01-12T22:56:19Z, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:56:19Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, version=17.1.13, io.buildah.version=1.41.5, vcs-type=git, io.openshift.expose-services=, release=1766032510, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:32:15 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Deactivated successfully.
Feb 20 08:32:15 np0005625203.localdomain podman[89934]: 2026-02-20 08:32:15.987253661 +0000 UTC m=+0.291723916 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.buildah.version=1.41.5, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ceilometer-compute-container, build-date=2026-01-12T23:07:47Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, batch=17.1_20260112.1)
Feb 20 08:32:15 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:32:16 np0005625203.localdomain podman[89935]: 2026-02-20 08:32:15.989018566 +0000 UTC m=+0.292497649 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, vendor=Red Hat, Inc., release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:30Z, managed_by=tripleo_ansible, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, io.buildah.version=1.41.5, distribution-scope=public, container_name=ceilometer_agent_ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, build-date=2026-01-12T23:07:30Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06)
Feb 20 08:32:16 np0005625203.localdomain podman[89935]: 2026-02-20 08:32:16.071328403 +0000 UTC m=+0.374807416 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.5, build-date=2026-01-12T23:07:30Z, release=1766032510, managed_by=tripleo_ansible, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-ipmi, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:30Z, url=https://www.redhat.com, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, config_id=tripleo_step4, version=17.1.13)
Feb 20 08:32:16 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:32:16 np0005625203.localdomain systemd[1]: tmp-crun.ilgA5B.mount: Deactivated successfully.
Feb 20 08:32:20 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:32:20 np0005625203.localdomain systemd[1]: tmp-crun.UaLRuw.mount: Deactivated successfully.
Feb 20 08:32:20 np0005625203.localdomain podman[90049]: 2026-02-20 08:32:20.760784885 +0000 UTC m=+0.078928584 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com)
Feb 20 08:32:21 np0005625203.localdomain podman[90049]: 2026-02-20 08:32:21.096964641 +0000 UTC m=+0.415108350 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, tcib_managed=true, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, name=rhosp-rhel9/openstack-nova-compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, managed_by=tripleo_ansible)
Feb 20 08:32:21 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:32:34 np0005625203.localdomain sshd[90072]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:32:35 np0005625203.localdomain sshd[90072]: Invalid user user from 212.154.234.9 port 9677
Feb 20 08:32:35 np0005625203.localdomain sshd[90072]: Received disconnect from 212.154.234.9 port 9677:11: Bye Bye [preauth]
Feb 20 08:32:35 np0005625203.localdomain sshd[90072]: Disconnected from invalid user user 212.154.234.9 port 9677 [preauth]
Feb 20 08:32:39 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:32:39 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:32:39 np0005625203.localdomain systemd[1]: tmp-crun.NOpHzr.mount: Deactivated successfully.
Feb 20 08:32:39 np0005625203.localdomain podman[90074]: 2026-02-20 08:32:39.779668765 +0000 UTC m=+0.096140559 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-type=git, name=rhosp-rhel9/openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, url=https://www.redhat.com, distribution-scope=public, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, config_id=tripleo_step3, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible)
Feb 20 08:32:39 np0005625203.localdomain systemd[1]: tmp-crun.7qnQBr.mount: Deactivated successfully.
Feb 20 08:32:39 np0005625203.localdomain podman[90075]: 2026-02-20 08:32:39.825361715 +0000 UTC m=+0.139322821 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, distribution-scope=public, name=rhosp-rhel9/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, build-date=2026-01-12T22:34:43Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, tcib_managed=true, release=1766032510, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, maintainer=OpenStack TripleO Team, container_name=iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, io.buildah.version=1.41.5)
Feb 20 08:32:39 np0005625203.localdomain podman[90075]: 2026-02-20 08:32:39.837213533 +0000 UTC m=+0.151174619 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, container_name=iscsid, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, build-date=2026-01-12T22:34:43Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, vcs-type=git, release=1766032510, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, config_id=tripleo_step3)
Feb 20 08:32:39 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:32:39 np0005625203.localdomain podman[90074]: 2026-02-20 08:32:39.891924933 +0000 UTC m=+0.208396767 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, batch=17.1_20260112.1, io.buildah.version=1.41.5, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:15Z, container_name=collectd, name=rhosp-rhel9/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., tcib_managed=true, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, release=1766032510)
Feb 20 08:32:39 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:32:39 np0005625203.localdomain sshd[90113]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:32:41 np0005625203.localdomain sshd[90113]: Invalid user oracle from 102.210.148.92 port 36840
Feb 20 08:32:41 np0005625203.localdomain sshd[90113]: Received disconnect from 102.210.148.92 port 36840:11: Bye Bye [preauth]
Feb 20 08:32:41 np0005625203.localdomain sshd[90113]: Disconnected from invalid user oracle 102.210.148.92 port 36840 [preauth]
Feb 20 08:32:44 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:32:44 np0005625203.localdomain podman[90116]: 2026-02-20 08:32:44.769770119 +0000 UTC m=+0.089828802 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, url=https://www.redhat.com, io.openshift.expose-services=, managed_by=tripleo_ansible, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=metrics_qdr, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1766032510, version=17.1.13, name=rhosp-rhel9/openstack-qdrouterd, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, vendor=Red Hat, Inc.)
Feb 20 08:32:44 np0005625203.localdomain podman[90116]: 2026-02-20 08:32:44.981503177 +0000 UTC m=+0.301561850 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, name=rhosp-rhel9/openstack-qdrouterd, config_id=tripleo_step1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, tcib_managed=true, url=https://www.redhat.com, io.buildah.version=1.41.5, distribution-scope=public, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, build-date=2026-01-12T22:10:14Z, container_name=metrics_qdr)
Feb 20 08:32:44 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:32:44 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:32:45 np0005625203.localdomain podman[90146]: 2026-02-20 08:32:45.088941266 +0000 UTC m=+0.079655896 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, maintainer=OpenStack TripleO Team, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.created=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, io.buildah.version=1.41.5, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp-rhel9/openstack-nova-compute, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.expose-services=)
Feb 20 08:32:45 np0005625203.localdomain podman[90146]: 2026-02-20 08:32:45.114772289 +0000 UTC m=+0.105486909 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, distribution-scope=public, io.buildah.version=1.41.5, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, vendor=Red Hat, Inc., release=1766032510, version=17.1.13, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2026-01-12T23:32:04Z)
Feb 20 08:32:45 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:32:46 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:32:46 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:32:46 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:32:46 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:32:46 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:32:46 np0005625203.localdomain podman[90174]: 2026-02-20 08:32:46.789341262 +0000 UTC m=+0.090865135 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://www.redhat.com, vcs-type=git, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:47Z, name=rhosp-rhel9/openstack-ceilometer-compute, build-date=2026-01-12T23:07:47Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container)
Feb 20 08:32:46 np0005625203.localdomain podman[90173]: 2026-02-20 08:32:46.834204476 +0000 UTC m=+0.136510193 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., config_id=tripleo_step4, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, batch=17.1_20260112.1, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron)
Feb 20 08:32:46 np0005625203.localdomain podman[90173]: 2026-02-20 08:32:46.842773322 +0000 UTC m=+0.145079029 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.5, distribution-scope=public, com.redhat.component=openstack-cron-container, release=1766032510, build-date=2026-01-12T22:10:15Z, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, container_name=logrotate_crond, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, tcib_managed=true, name=rhosp-rhel9/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron)
Feb 20 08:32:46 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:32:46 np0005625203.localdomain podman[90175]: 2026-02-20 08:32:46.886213231 +0000 UTC m=+0.182853052 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, maintainer=OpenStack TripleO Team, distribution-scope=public, container_name=ceilometer_agent_ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, build-date=2026-01-12T23:07:30Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:07:30Z, version=17.1.13, url=https://www.redhat.com, config_id=tripleo_step4, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible)
Feb 20 08:32:46 np0005625203.localdomain podman[90176]: 2026-02-20 08:32:46.846460946 +0000 UTC m=+0.142185419 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, distribution-scope=public, managed_by=tripleo_ansible, release=1766032510, url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.13, io.buildah.version=1.41.5, batch=17.1_20260112.1, tcib_managed=true, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container)
Feb 20 08:32:46 np0005625203.localdomain podman[90174]: 2026-02-20 08:32:46.916197873 +0000 UTC m=+0.217721776 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:47Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, architecture=x86_64, distribution-scope=public, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, release=1766032510, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, vcs-type=git, url=https://www.redhat.com, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Feb 20 08:32:46 np0005625203.localdomain podman[90176]: 2026-02-20 08:32:46.931212699 +0000 UTC m=+0.226937172 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-type=git, tcib_managed=true, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:56:19Z, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.5, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f)
Feb 20 08:32:46 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Deactivated successfully.
Feb 20 08:32:46 np0005625203.localdomain podman[90177]: 2026-02-20 08:32:46.945852575 +0000 UTC m=+0.240203365 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, managed_by=tripleo_ansible, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, io.openshift.expose-services=, vendor=Red Hat, Inc., batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, org.opencontainers.image.created=2026-01-12T22:36:40Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, name=rhosp-rhel9/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, tcib_managed=true, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public)
Feb 20 08:32:46 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:32:46 np0005625203.localdomain podman[90175]: 2026-02-20 08:32:46.984235887 +0000 UTC m=+0.280875708 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:07:30Z, config_id=tripleo_step4, vcs-type=git, container_name=ceilometer_agent_ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.created=2026-01-12T23:07:30Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team)
Feb 20 08:32:46 np0005625203.localdomain podman[90177]: 2026-02-20 08:32:46.994382803 +0000 UTC m=+0.288733563 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.buildah.version=1.41.5, batch=17.1_20260112.1, version=17.1.13, tcib_managed=true, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, name=rhosp-rhel9/openstack-ovn-controller, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 ovn-controller, container_name=ovn_controller, build-date=2026-01-12T22:36:40Z, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:32:47 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:32:47 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Deactivated successfully.
Feb 20 08:32:49 np0005625203.localdomain sudo[90293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:32:49 np0005625203.localdomain sudo[90293]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:32:49 np0005625203.localdomain sudo[90293]: pam_unix(sudo:session): session closed for user root
Feb 20 08:32:49 np0005625203.localdomain sudo[90308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Feb 20 08:32:49 np0005625203.localdomain sudo[90308]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:32:50 np0005625203.localdomain systemd[1]: tmp-crun.JpZdBS.mount: Deactivated successfully.
Feb 20 08:32:50 np0005625203.localdomain podman[90396]: 2026-02-20 08:32:50.282276445 +0000 UTC m=+0.094377604 container exec f6bf94ad66e54cdd610286b233c639418eb2b995978f1a145d6cfa77a016071d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2026-02-09T10:25:24Z, distribution-scope=public, RELEASE=main, build-date=2026-02-09T10:25:24Z, release=1770267347, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, version=7, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.42.2, ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers)
Feb 20 08:32:50 np0005625203.localdomain podman[90396]: 2026-02-20 08:32:50.378928927 +0000 UTC m=+0.191030046 container exec_died f6bf94ad66e54cdd610286b233c639418eb2b995978f1a145d6cfa77a016071d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2026-02-09T10:25:24Z, ceph=True, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, version=7, distribution-scope=public, io.buildah.version=1.42.2, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1770267347, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc.)
Feb 20 08:32:50 np0005625203.localdomain sudo[90308]: pam_unix(sudo:session): session closed for user root
Feb 20 08:32:50 np0005625203.localdomain sudo[90461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:32:50 np0005625203.localdomain sudo[90461]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:32:50 np0005625203.localdomain sudo[90461]: pam_unix(sudo:session): session closed for user root
Feb 20 08:32:50 np0005625203.localdomain sudo[90476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 08:32:50 np0005625203.localdomain sudo[90476]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:32:51 np0005625203.localdomain sudo[90476]: pam_unix(sudo:session): session closed for user root
Feb 20 08:32:51 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:32:51 np0005625203.localdomain podman[90522]: 2026-02-20 08:32:51.781265692 +0000 UTC m=+0.094487878 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, build-date=2026-01-12T23:32:04Z, release=1766032510, tcib_managed=true, io.buildah.version=1.41.5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp-rhel9/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:32:52 np0005625203.localdomain sudo[90545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 08:32:52 np0005625203.localdomain sudo[90545]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:32:52 np0005625203.localdomain sudo[90545]: pam_unix(sudo:session): session closed for user root
Feb 20 08:32:52 np0005625203.localdomain podman[90522]: 2026-02-20 08:32:52.161131964 +0000 UTC m=+0.474354170 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=nova_migration_target, io.buildah.version=1.41.5, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, name=rhosp-rhel9/openstack-nova-compute, build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, release=1766032510, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, vcs-type=git, version=17.1.13)
Feb 20 08:32:52 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:32:54 np0005625203.localdomain sshd[90560]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:32:54 np0005625203.localdomain sshd[90560]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:32:59 np0005625203.localdomain sshd[90562]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:32:59 np0005625203.localdomain sshd[90562]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:33:00 np0005625203.localdomain sshd[90564]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:33:02 np0005625203.localdomain sshd[90564]: Received disconnect from 103.200.25.162 port 34908:11: Bye Bye [preauth]
Feb 20 08:33:02 np0005625203.localdomain sshd[90564]: Disconnected from authenticating user root 103.200.25.162 port 34908 [preauth]
Feb 20 08:33:10 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:33:10 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:33:10 np0005625203.localdomain podman[90590]: 2026-02-20 08:33:10.765768462 +0000 UTC m=+0.077804868 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.13, managed_by=tripleo_ansible, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.buildah.version=1.41.5, vcs-type=git, com.redhat.component=openstack-iscsid-container, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, batch=17.1_20260112.1, container_name=iscsid, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, url=https://www.redhat.com, name=rhosp-rhel9/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, tcib_managed=true, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:33:10 np0005625203.localdomain podman[90590]: 2026-02-20 08:33:10.77501583 +0000 UTC m=+0.087052216 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, release=1766032510, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, vcs-ref=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, name=rhosp-rhel9/openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=iscsid, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:34:43Z, tcib_managed=true, build-date=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, config_id=tripleo_step3, version=17.1.13, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public)
Feb 20 08:33:10 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:33:10 np0005625203.localdomain podman[90589]: 2026-02-20 08:33:10.865570223 +0000 UTC m=+0.177416913 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, release=1766032510, container_name=collectd, url=https://www.redhat.com, name=rhosp-rhel9/openstack-collectd, distribution-scope=public, tcib_managed=true, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd)
Feb 20 08:33:10 np0005625203.localdomain podman[90589]: 2026-02-20 08:33:10.903251305 +0000 UTC m=+0.215098025 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, version=17.1.13, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, architecture=x86_64, container_name=collectd, build-date=2026-01-12T22:10:15Z, url=https://www.redhat.com, name=rhosp-rhel9/openstack-collectd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container)
Feb 20 08:33:10 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:33:15 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:33:15 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:33:15 np0005625203.localdomain podman[90625]: 2026-02-20 08:33:15.776169387 +0000 UTC m=+0.090495233 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, version=17.1.13, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-qdrouterd, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., url=https://www.redhat.com, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20260112.1, container_name=metrics_qdr, org.opencontainers.image.created=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, build-date=2026-01-12T22:10:14Z)
Feb 20 08:33:15 np0005625203.localdomain podman[90624]: 2026-02-20 08:33:15.81229703 +0000 UTC m=+0.129066401 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, url=https://www.redhat.com, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-compute, managed_by=tripleo_ansible)
Feb 20 08:33:15 np0005625203.localdomain podman[90624]: 2026-02-20 08:33:15.84865156 +0000 UTC m=+0.165420991 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, version=17.1.13, config_id=tripleo_step5, release=1766032510, tcib_managed=true, batch=17.1_20260112.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Feb 20 08:33:15 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:33:15 np0005625203.localdomain podman[90625]: 2026-02-20 08:33:15.977521964 +0000 UTC m=+0.291847779 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=tripleo_step1, url=https://www.redhat.com, tcib_managed=true, release=1766032510, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, build-date=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team)
Feb 20 08:33:15 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:33:17 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:33:17 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:33:17 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:33:17 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:33:17 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:33:17 np0005625203.localdomain podman[90680]: 2026-02-20 08:33:17.779860477 +0000 UTC m=+0.089848303 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, vcs-type=git, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, url=https://www.redhat.com, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=tripleo_ansible, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, build-date=2026-01-12T23:07:47Z, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.)
Feb 20 08:33:17 np0005625203.localdomain podman[90681]: 2026-02-20 08:33:17.829433727 +0000 UTC m=+0.137285327 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, build-date=2026-01-12T23:07:30Z, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, batch=17.1_20260112.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, config_id=tripleo_step4, vendor=Red Hat, Inc.)
Feb 20 08:33:17 np0005625203.localdomain podman[90680]: 2026-02-20 08:33:17.834391392 +0000 UTC m=+0.144379178 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:07:47Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, distribution-scope=public, tcib_managed=true, release=1766032510, url=https://www.redhat.com, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.created=2026-01-12T23:07:47Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4)
Feb 20 08:33:17 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:33:17 np0005625203.localdomain podman[90681]: 2026-02-20 08:33:17.887184222 +0000 UTC m=+0.195035842 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, build-date=2026-01-12T23:07:30Z, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.created=2026-01-12T23:07:30Z, batch=17.1_20260112.1, container_name=ceilometer_agent_ipmi, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, name=rhosp-rhel9/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Feb 20 08:33:17 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:33:17 np0005625203.localdomain podman[90682]: 2026-02-20 08:33:17.890444623 +0000 UTC m=+0.193289057 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20260112.1, release=1766032510, url=https://www.redhat.com, io.buildah.version=1.41.5, build-date=2026-01-12T22:56:19Z, tcib_managed=true, config_id=tripleo_step4, version=17.1.13, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f)
Feb 20 08:33:17 np0005625203.localdomain podman[90688]: 2026-02-20 08:33:17.947858887 +0000 UTC m=+0.250235546 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, name=rhosp-rhel9/openstack-ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, vendor=Red Hat, Inc., build-date=2026-01-12T22:36:40Z, container_name=ovn_controller, architecture=x86_64, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, 
org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, managed_by=tripleo_ansible, version=17.1.13, io.buildah.version=1.41.5)
Feb 20 08:33:17 np0005625203.localdomain podman[90688]: 2026-02-20 08:33:17.980393008 +0000 UTC m=+0.282769667 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1766032510, name=rhosp-rhel9/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, build-date=2026-01-12T22:36:40Z, org.opencontainers.image.created=2026-01-12T22:36:40Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, vendor=Red Hat, Inc., version=17.1.13, io.buildah.version=1.41.5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, batch=17.1_20260112.1, container_name=ovn_controller)
Feb 20 08:33:17 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Deactivated successfully.
Feb 20 08:33:18 np0005625203.localdomain podman[90679]: 2026-02-20 08:33:18.000060799 +0000 UTC m=+0.314759932 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, io.buildah.version=1.41.5, release=1766032510, name=rhosp-rhel9/openstack-cron, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
io.openshift.expose-services=, container_name=logrotate_crond, vendor=Red Hat, Inc., version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.component=openstack-cron-container, org.opencontainers.image.created=2026-01-12T22:10:15Z, batch=17.1_20260112.1, build-date=2026-01-12T22:10:15Z, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible)
Feb 20 08:33:18 np0005625203.localdomain podman[90682]: 2026-02-20 08:33:18.020481133 +0000 UTC m=+0.323325567 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', 
'/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ovn_metadata_agent, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, batch=17.1_20260112.1, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, managed_by=tripleo_ansible, io.buildah.version=1.41.5, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public)
Feb 20 08:33:18 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Deactivated successfully.
Feb 20 08:33:18 np0005625203.localdomain podman[90679]: 2026-02-20 08:33:18.036559693 +0000 UTC m=+0.351258796 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, release=1766032510, vcs-type=git, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, container_name=logrotate_crond, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, batch=17.1_20260112.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-cron, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:33:18 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:33:18 np0005625203.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Feb 20 08:33:18 np0005625203.localdomain recover_tripleo_nova_virtqemud[90800]: 62505
Feb 20 08:33:18 np0005625203.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Feb 20 08:33:18 np0005625203.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Feb 20 08:33:22 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:33:22 np0005625203.localdomain podman[90801]: 2026-02-20 08:33:22.772067675 +0000 UTC m=+0.091309208 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, batch=17.1_20260112.1, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, 
tcib_managed=true, architecture=x86_64, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.buildah.version=1.41.5)
Feb 20 08:33:23 np0005625203.localdomain podman[90801]: 2026-02-20 08:33:23.103235945 +0000 UTC m=+0.422477488 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, build-date=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20260112.1, container_name=nova_migration_target, io.buildah.version=1.41.5, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.created=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:33:23 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:33:41 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:33:41 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:33:41 np0005625203.localdomain systemd[1]: tmp-crun.HRws9w.mount: Deactivated successfully.
Feb 20 08:33:41 np0005625203.localdomain podman[90824]: 2026-02-20 08:33:41.78079054 +0000 UTC m=+0.097133329 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, name=rhosp-rhel9/openstack-collectd, version=17.1.13, tcib_managed=true, architecture=x86_64, io.buildah.version=1.41.5, managed_by=tripleo_ansible, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1766032510)
Feb 20 08:33:41 np0005625203.localdomain podman[90824]: 2026-02-20 08:33:41.820575686 +0000 UTC m=+0.136918435 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, tcib_managed=true, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, io.openshift.expose-services=, name=rhosp-rhel9/openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, version=17.1.13, architecture=x86_64, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, release=1766032510, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee)
Feb 20 08:33:41 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:33:41 np0005625203.localdomain podman[90825]: 2026-02-20 08:33:41.834945692 +0000 UTC m=+0.148122733 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, batch=17.1_20260112.1, vendor=Red Hat, Inc., config_id=tripleo_step3, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, tcib_managed=true, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, com.redhat.component=openstack-iscsid-container, name=rhosp-rhel9/openstack-iscsid, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2026-01-12T22:34:43Z, architecture=x86_64, vcs-ref=705339545363fec600102567c4e923938e0f43b3, maintainer=OpenStack TripleO Team, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5)
Feb 20 08:33:41 np0005625203.localdomain podman[90825]: 2026-02-20 08:33:41.870375704 +0000 UTC m=+0.183552735 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=rhosp-rhel9/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.buildah.version=1.41.5, release=1766032510, tcib_managed=true, managed_by=tripleo_ansible, version=17.1.13, config_id=tripleo_step3, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, build-date=2026-01-12T22:34:43Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, vcs-ref=705339545363fec600102567c4e923938e0f43b3)
Feb 20 08:33:41 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:33:45 np0005625203.localdomain sshd[90864]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:33:45 np0005625203.localdomain sshd[90864]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:33:46 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:33:46 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:33:46 np0005625203.localdomain podman[90866]: 2026-02-20 08:33:46.764392942 +0000 UTC m=+0.084480445 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, container_name=nova_compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, vcs-type=git, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, build-date=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, distribution-scope=public, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:33:46 np0005625203.localdomain podman[90866]: 2026-02-20 08:33:46.793272059 +0000 UTC m=+0.113359552 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, build-date=2026-01-12T23:32:04Z, release=1766032510, distribution-scope=public, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, tcib_managed=true, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, io.openshift.expose-services=, config_id=tripleo_step5, io.buildah.version=1.41.5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe)
Feb 20 08:33:46 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:33:46 np0005625203.localdomain podman[90867]: 2026-02-20 08:33:46.871905443 +0000 UTC m=+0.187569190 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-qdrouterd, 
description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step1, version=17.1.13, architecture=x86_64, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=metrics_qdr)
Feb 20 08:33:47 np0005625203.localdomain podman[90867]: 2026-02-20 08:33:47.060977768 +0000 UTC m=+0.376641515 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.41.5, version=17.1.13, io.openshift.expose-services=, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, vcs-type=git, build-date=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-qdrouterd, container_name=metrics_qdr, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, url=https://www.redhat.com, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd)
Feb 20 08:33:47 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:33:48 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:33:48 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:33:48 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:33:48 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:33:48 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:33:48 np0005625203.localdomain systemd[1]: tmp-crun.VasEw9.mount: Deactivated successfully.
Feb 20 08:33:48 np0005625203.localdomain podman[90922]: 2026-02-20 08:33:48.769104943 +0000 UTC m=+0.082725371 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, tcib_managed=true, vendor=Red Hat, Inc., config_id=tripleo_step4, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, url=https://www.redhat.com, name=rhosp-rhel9/openstack-cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=logrotate_crond, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, build-date=2026-01-12T22:10:15Z)
Feb 20 08:33:48 np0005625203.localdomain podman[90924]: 2026-02-20 08:33:48.832302947 +0000 UTC m=+0.141313693 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:07:30Z, tcib_managed=true, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2026-01-12T23:07:30Z, batch=17.1_20260112.1, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:33:48 np0005625203.localdomain podman[90931]: 2026-02-20 08:33:48.873266529 +0000 UTC m=+0.175919446 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2026-01-12T22:36:40Z, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ovn-controller, container_name=ovn_controller, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, architecture=x86_64, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, release=1766032510, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, vcs-type=git)
Feb 20 08:33:48 np0005625203.localdomain podman[90931]: 2026-02-20 08:33:48.895158669 +0000 UTC m=+0.197811596 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, distribution-scope=public, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, container_name=ovn_controller, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:36:40Z, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, tcib_managed=true, batch=17.1_20260112.1, 
io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z)
Feb 20 08:33:48 np0005625203.localdomain podman[90924]: 2026-02-20 08:33:48.904424768 +0000 UTC m=+0.213435524 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, distribution-scope=public, batch=17.1_20260112.1, config_id=tripleo_step4, io.openshift.expose-services=, io.buildah.version=1.41.5, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:07:30Z, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp-rhel9/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06)
Feb 20 08:33:48 np0005625203.localdomain podman[90922]: 2026-02-20 08:33:48.904912563 +0000 UTC m=+0.218533011 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1766032510, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:15Z, distribution-scope=public, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, config_id=tripleo_step4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, url=https://www.redhat.com, batch=17.1_20260112.1, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron)
Feb 20 08:33:48 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Deactivated successfully.
Feb 20 08:33:48 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:33:48 np0005625203.localdomain podman[90925]: 2026-02-20 08:33:48.978430247 +0000 UTC m=+0.284816811 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, tcib_managed=true, io.openshift.expose-services=, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, 
batch=17.1_20260112.1, io.buildah.version=1.41.5, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, distribution-scope=public, release=1766032510, maintainer=OpenStack TripleO Team, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:56:19Z)
Feb 20 08:33:49 np0005625203.localdomain podman[90923]: 2026-02-20 08:33:48.79991816 +0000 UTC m=+0.109810963 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-type=git, build-date=2026-01-12T23:07:47Z, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, vendor=Red Hat, Inc., version=17.1.13, config_id=tripleo_step4, release=1766032510, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.5)
Feb 20 08:33:49 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:33:49 np0005625203.localdomain podman[90925]: 2026-02-20 08:33:49.023179748 +0000 UTC m=+0.329566262 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.buildah.version=1.41.5, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, release=1766032510, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:56:19Z, tcib_managed=true, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z)
Feb 20 08:33:49 np0005625203.localdomain podman[90923]: 2026-02-20 08:33:49.035433968 +0000 UTC m=+0.345326701 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, build-date=2026-01-12T23:07:47Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, release=1766032510, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container)
Feb 20 08:33:49 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Deactivated successfully.
Feb 20 08:33:49 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:33:52 np0005625203.localdomain sudo[91040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:33:52 np0005625203.localdomain sudo[91040]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:33:52 np0005625203.localdomain sudo[91040]: pam_unix(sudo:session): session closed for user root
Feb 20 08:33:52 np0005625203.localdomain sudo[91055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 08:33:52 np0005625203.localdomain sudo[91055]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:33:52 np0005625203.localdomain sudo[91055]: pam_unix(sudo:session): session closed for user root
Feb 20 08:33:53 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:33:53 np0005625203.localdomain sudo[91103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 08:33:53 np0005625203.localdomain sudo[91103]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:33:53 np0005625203.localdomain sudo[91103]: pam_unix(sudo:session): session closed for user root
Feb 20 08:33:53 np0005625203.localdomain systemd[1]: tmp-crun.eBFbLa.mount: Deactivated successfully.
Feb 20 08:33:53 np0005625203.localdomain podman[91111]: 2026-02-20 08:33:53.779114204 +0000 UTC m=+0.087230571 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, url=https://www.redhat.com, container_name=nova_migration_target, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.buildah.version=1.41.5, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, version=17.1.13, batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, build-date=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:33:54 np0005625203.localdomain podman[91111]: 2026-02-20 08:33:54.153555958 +0000 UTC m=+0.461672345 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, 
maintainer=OpenStack TripleO Team, architecture=x86_64, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, io.buildah.version=1.41.5)
Feb 20 08:33:54 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:34:03 np0005625203.localdomain sshd[91165]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:34:03 np0005625203.localdomain sshd[91165]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:34:12 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:34:12 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:34:12 np0005625203.localdomain podman[91168]: 2026-02-20 08:34:12.773504832 +0000 UTC m=+0.085331582 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20260112.1, architecture=x86_64, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, container_name=iscsid, io.openshift.expose-services=, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, build-date=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, managed_by=tripleo_ansible, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=705339545363fec600102567c4e923938e0f43b3, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-iscsid, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:34:12 np0005625203.localdomain podman[91168]: 2026-02-20 08:34:12.784218295 +0000 UTC m=+0.096045025 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, build-date=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., batch=17.1_20260112.1, url=https://www.redhat.com, vcs-ref=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, version=17.1.13, container_name=iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:34:43Z)
Feb 20 08:34:12 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:34:12 np0005625203.localdomain podman[91167]: 2026-02-20 08:34:12.873390846 +0000 UTC m=+0.187455676 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, batch=17.1_20260112.1, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., architecture=x86_64, name=rhosp-rhel9/openstack-collectd, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:34:12 np0005625203.localdomain podman[91167]: 2026-02-20 08:34:12.886621436 +0000 UTC m=+0.200686266 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, batch=17.1_20260112.1, io.buildah.version=1.41.5, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, 
container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, build-date=2026-01-12T22:10:15Z, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, com.redhat.component=openstack-collectd-container, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, release=1766032510, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-collectd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:34:12 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:34:17 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:34:17 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:34:17 np0005625203.localdomain systemd[1]: tmp-crun.g0O55x.mount: Deactivated successfully.
Feb 20 08:34:17 np0005625203.localdomain podman[91207]: 2026-02-20 08:34:17.777236879 +0000 UTC m=+0.093207257 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, managed_by=tripleo_ansible, url=https://www.redhat.com, build-date=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, config_id=tripleo_step1, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, version=17.1.13)
Feb 20 08:34:17 np0005625203.localdomain podman[91206]: 2026-02-20 08:34:17.74281436 +0000 UTC m=+0.063340130 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vendor=Red Hat, Inc., distribution-scope=public, url=https://www.redhat.com, architecture=x86_64, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, name=rhosp-rhel9/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, batch=17.1_20260112.1, config_id=tripleo_step5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, version=17.1.13, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Feb 20 08:34:17 np0005625203.localdomain podman[91206]: 2026-02-20 08:34:17.826324975 +0000 UTC m=+0.146850795 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vendor=Red Hat, Inc., version=17.1.13, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, name=rhosp-rhel9/openstack-nova-compute, batch=17.1_20260112.1, config_id=tripleo_step5, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:34:17 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:34:17 np0005625203.localdomain podman[91207]: 2026-02-20 08:34:17.974279952 +0000 UTC m=+0.290250340 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, build-date=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, 
org.opencontainers.image.created=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, io.openshift.expose-services=)
Feb 20 08:34:17 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:34:19 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:34:19 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:34:19 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:34:19 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:34:19 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:34:19 np0005625203.localdomain podman[91272]: 2026-02-20 08:34:19.774991234 +0000 UTC m=+0.079403778 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.openshift.expose-services=, vcs-type=git, tcib_managed=true, architecture=x86_64, batch=17.1_20260112.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, build-date=2026-01-12T22:36:40Z, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ovn-controller, container_name=ovn_controller)
Feb 20 08:34:19 np0005625203.localdomain podman[91272]: 2026-02-20 08:34:19.802231071 +0000 UTC m=+0.106643615 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ovn-controller, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.created=2026-01-12T22:36:40Z, batch=17.1_20260112.1, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, config_id=tripleo_step4, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, build-date=2026-01-12T22:36:40Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, description=Red Hat 
OpenStack Platform 17.1 ovn-controller, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller)
Feb 20 08:34:19 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Deactivated successfully.
Feb 20 08:34:19 np0005625203.localdomain podman[91263]: 2026-02-20 08:34:19.820672014 +0000 UTC m=+0.136468051 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, distribution-scope=public, release=1766032510, io.buildah.version=1.41.5, config_id=tripleo_step4, version=17.1.13, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-cron, url=https://www.redhat.com, vcs-type=git, architecture=x86_64, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, com.redhat.component=openstack-cron-container)
Feb 20 08:34:19 np0005625203.localdomain podman[91263]: 2026-02-20 08:34:19.830461588 +0000 UTC m=+0.146257675 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:15Z, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, architecture=x86_64, 
vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, batch=17.1_20260112.1, release=1766032510, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=)
Feb 20 08:34:19 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:34:19 np0005625203.localdomain podman[91264]: 2026-02-20 08:34:19.878682896 +0000 UTC m=+0.189925071 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.expose-services=, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, org.opencontainers.image.created=2026-01-12T23:07:47Z, url=https://www.redhat.com, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, version=17.1.13, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z)
Feb 20 08:34:19 np0005625203.localdomain podman[91264]: 2026-02-20 08:34:19.9344854 +0000 UTC m=+0.245727585 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ceilometer-compute, batch=17.1_20260112.1, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, build-date=2026-01-12T23:07:47Z, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:34:19 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:34:19 np0005625203.localdomain podman[91267]: 2026-02-20 08:34:19.987439916 +0000 UTC m=+0.290694664 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.expose-services=, container_name=ovn_metadata_agent, distribution-scope=public, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:56:19Z, build-date=2026-01-12T22:56:19Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, batch=17.1_20260112.1, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f)
Feb 20 08:34:20 np0005625203.localdomain podman[91267]: 2026-02-20 08:34:20.036258963 +0000 UTC m=+0.339513741 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.expose-services=, tcib_managed=true, managed_by=tripleo_ansible, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:56:19Z, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, architecture=x86_64, io.buildah.version=1.41.5, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1766032510, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, build-date=2026-01-12T22:56:19Z, vendor=Red Hat, Inc., distribution-scope=public, container_name=ovn_metadata_agent)
Feb 20 08:34:20 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Deactivated successfully.
Feb 20 08:34:20 np0005625203.localdomain podman[91265]: 2026-02-20 08:34:19.938790204 +0000 UTC m=+0.245809929 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, build-date=2026-01-12T23:07:30Z, name=rhosp-rhel9/openstack-ceilometer-ipmi, tcib_managed=true, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, url=https://www.redhat.com, vendor=Red Hat, Inc., version=17.1.13, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_ipmi, release=1766032510, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06)
Feb 20 08:34:20 np0005625203.localdomain podman[91265]: 2026-02-20 08:34:20.123453992 +0000 UTC m=+0.430473727 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, config_id=tripleo_step4, io.openshift.expose-services=, version=17.1.13, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, batch=17.1_20260112.1, 
name=rhosp-rhel9/openstack-ceilometer-ipmi, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, release=1766032510, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_ipmi)
Feb 20 08:34:20 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:34:24 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:34:24 np0005625203.localdomain podman[91381]: 2026-02-20 08:34:24.761227438 +0000 UTC m=+0.082848184 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, 
release=1766032510, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, batch=17.1_20260112.1, version=17.1.13, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, container_name=nova_migration_target)
Feb 20 08:34:25 np0005625203.localdomain podman[91381]: 2026-02-20 08:34:25.129264065 +0000 UTC m=+0.450884801 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20260112.1, managed_by=tripleo_ansible, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, architecture=x86_64, io.openshift.expose-services=, version=17.1.13, tcib_managed=true, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, 
org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_migration_target, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-nova-compute-container, vcs-type=git)
Feb 20 08:34:25 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:34:30 np0005625203.localdomain sshd[91404]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:34:30 np0005625203.localdomain sshd[91404]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:34:42 np0005625203.localdomain sshd[91406]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:34:42 np0005625203.localdomain sshd[91407]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:34:42 np0005625203.localdomain sshd[91407]: error: kex_exchange_identification: read: Connection reset by peer
Feb 20 08:34:42 np0005625203.localdomain sshd[91407]: Connection reset by 176.120.22.52 port 1075
Feb 20 08:34:43 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:34:43 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:34:43 np0005625203.localdomain podman[91409]: 2026-02-20 08:34:43.755006857 +0000 UTC m=+0.062079070 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., version=17.1.13, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, build-date=2026-01-12T22:34:43Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, managed_by=tripleo_ansible, distribution-scope=public, vcs-ref=705339545363fec600102567c4e923938e0f43b3, name=rhosp-rhel9/openstack-iscsid, container_name=iscsid, batch=17.1_20260112.1, config_id=tripleo_step3, vcs-type=git, com.redhat.component=openstack-iscsid-container, tcib_managed=true, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, url=https://www.redhat.com)
Feb 20 08:34:43 np0005625203.localdomain podman[91409]: 2026-02-20 08:34:43.767388893 +0000 UTC m=+0.074461156 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=iscsid, release=1766032510, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.13, tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:34:43Z, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:34:43 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:34:43 np0005625203.localdomain systemd[1]: tmp-crun.lOMKFA.mount: Deactivated successfully.
Feb 20 08:34:43 np0005625203.localdomain podman[91408]: 2026-02-20 08:34:43.808918263 +0000 UTC m=+0.118404141 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:15Z, release=1766032510, name=rhosp-rhel9/openstack-collectd, config_id=tripleo_step3, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, 
io.openshift.expose-services=, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, architecture=x86_64, batch=17.1_20260112.1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc.)
Feb 20 08:34:43 np0005625203.localdomain podman[91408]: 2026-02-20 08:34:43.845205771 +0000 UTC m=+0.154691619 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:15Z, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, version=17.1.13, release=1766032510, batch=17.1_20260112.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 
17.1_20260112.1, url=https://www.redhat.com, build-date=2026-01-12T22:10:15Z, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, vendor=Red Hat, Inc., container_name=collectd, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:34:43 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:34:48 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:34:48 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:34:48 np0005625203.localdomain systemd[1]: tmp-crun.VnUh6i.mount: Deactivated successfully.
Feb 20 08:34:48 np0005625203.localdomain podman[91448]: 2026-02-20 08:34:48.758605951 +0000 UTC m=+0.073606237 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-type=git, managed_by=tripleo_ansible, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=metrics_qdr, io.openshift.expose-services=, vendor=Red Hat, Inc., version=17.1.13, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, config_id=tripleo_step1, tcib_managed=true, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com)
Feb 20 08:34:48 np0005625203.localdomain systemd[1]: tmp-crun.Po43Je.mount: Deactivated successfully.
Feb 20 08:34:48 np0005625203.localdomain podman[91447]: 2026-02-20 08:34:48.827575415 +0000 UTC m=+0.145059049 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, build-date=2026-01-12T23:32:04Z, release=1766032510, version=17.1.13, batch=17.1_20260112.1, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step5, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible)
Feb 20 08:34:48 np0005625203.localdomain podman[91447]: 2026-02-20 08:34:48.878206598 +0000 UTC m=+0.195690172 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, batch=17.1_20260112.1, container_name=nova_compute, vcs-type=git, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:32:04Z, version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe)
Feb 20 08:34:48 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:34:48 np0005625203.localdomain podman[91448]: 2026-02-20 08:34:48.933911299 +0000 UTC m=+0.248911585 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-qdrouterd, container_name=metrics_qdr, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., batch=17.1_20260112.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, build-date=2026-01-12T22:10:14Z)
Feb 20 08:34:48 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:34:50 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:34:50 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:34:50 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:34:50 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:34:50 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:34:50 np0005625203.localdomain podman[91510]: 2026-02-20 08:34:50.782827858 +0000 UTC m=+0.089731639 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, name=rhosp-rhel9/openstack-ovn-controller, vcs-type=git, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.expose-services=, distribution-scope=public, release=1766032510, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:36:40Z, build-date=2026-01-12T22:36:40Z, version=17.1.13, vendor=Red Hat, Inc., 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, url=https://www.redhat.com)
Feb 20 08:34:50 np0005625203.localdomain podman[91510]: 2026-02-20 08:34:50.824453512 +0000 UTC m=+0.131357403 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, vendor=Red Hat, Inc., batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, name=rhosp-rhel9/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red 
Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, build-date=2026-01-12T22:36:40Z, tcib_managed=true, config_id=tripleo_step4, io.buildah.version=1.41.5, io.openshift.expose-services=, container_name=ovn_controller)
Feb 20 08:34:50 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Deactivated successfully.
Feb 20 08:34:50 np0005625203.localdomain podman[91503]: 2026-02-20 08:34:50.828019082 +0000 UTC m=+0.144102418 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, managed_by=tripleo_ansible, vcs-type=git, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, architecture=x86_64, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.5, 
version=17.1.13, config_id=tripleo_step4, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1766032510, distribution-scope=public, name=rhosp-rhel9/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee)
Feb 20 08:34:50 np0005625203.localdomain podman[91506]: 2026-02-20 08:34:50.902612371 +0000 UTC m=+0.208096867 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, batch=17.1_20260112.1, io.openshift.expose-services=, container_name=ovn_metadata_agent, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2026-01-12T22:56:19Z, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, version=17.1.13, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn)
Feb 20 08:34:50 np0005625203.localdomain podman[91504]: 2026-02-20 08:34:50.945585855 +0000 UTC m=+0.257905794 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, batch=17.1_20260112.1, build-date=2026-01-12T23:07:47Z, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:47Z, url=https://www.redhat.com, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, vcs-type=git, release=1766032510)
Feb 20 08:34:50 np0005625203.localdomain podman[91503]: 2026-02-20 08:34:50.963175082 +0000 UTC m=+0.279258398 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.13, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, tcib_managed=true, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team)
Feb 20 08:34:50 np0005625203.localdomain podman[91506]: 2026-02-20 08:34:50.968291091 +0000 UTC m=+0.273775597 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1766032510, url=https://www.redhat.com, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, tcib_managed=true, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:56:19Z, batch=17.1_20260112.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:56:19Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, container_name=ovn_metadata_agent)
Feb 20 08:34:50 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:34:50 np0005625203.localdomain podman[91504]: 2026-02-20 08:34:50.980754389 +0000 UTC m=+0.293074308 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:47Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, build-date=2026-01-12T23:07:47Z, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, architecture=x86_64, name=rhosp-rhel9/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, tcib_managed=true, config_id=tripleo_step4)
Feb 20 08:34:50 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Deactivated successfully.
Feb 20 08:34:50 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:34:51 np0005625203.localdomain podman[91505]: 2026-02-20 08:34:51.077195625 +0000 UTC m=+0.384611521 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, build-date=2026-01-12T23:07:30Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., version=17.1.13, vcs-type=git, batch=17.1_20260112.1, io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:07:30Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1766032510, architecture=x86_64)
Feb 20 08:34:51 np0005625203.localdomain podman[91505]: 2026-02-20 08:34:51.101893533 +0000 UTC m=+0.409309409 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, architecture=x86_64, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., managed_by=tripleo_ansible, version=17.1.13, maintainer=OpenStack TripleO Team, vcs-type=git, name=rhosp-rhel9/openstack-ceilometer-ipmi, config_id=tripleo_step4, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:07:30Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi)
Feb 20 08:34:51 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:34:51 np0005625203.localdomain systemd[1]: tmp-crun.UoPZEL.mount: Deactivated successfully.
Feb 20 08:34:53 np0005625203.localdomain sudo[91628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:34:53 np0005625203.localdomain sudo[91628]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:34:53 np0005625203.localdomain sudo[91628]: pam_unix(sudo:session): session closed for user root
Feb 20 08:34:53 np0005625203.localdomain sudo[91643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 08:34:54 np0005625203.localdomain sudo[91643]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:34:54 np0005625203.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Feb 20 08:34:54 np0005625203.localdomain recover_tripleo_nova_virtqemud[91659]: 62505
Feb 20 08:34:54 np0005625203.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Feb 20 08:34:54 np0005625203.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Feb 20 08:34:54 np0005625203.localdomain sudo[91643]: pam_unix(sudo:session): session closed for user root
Feb 20 08:34:55 np0005625203.localdomain sudo[91691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 08:34:55 np0005625203.localdomain sudo[91691]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:34:55 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:34:55 np0005625203.localdomain sudo[91691]: pam_unix(sudo:session): session closed for user root
Feb 20 08:34:55 np0005625203.localdomain podman[91706]: 2026-02-20 08:34:55.38149168 +0000 UTC m=+0.080656027 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, vcs-type=git, name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, container_name=nova_migration_target, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, architecture=x86_64, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, tcib_managed=true)
Feb 20 08:34:55 np0005625203.localdomain podman[91706]: 2026-02-20 08:34:55.750213147 +0000 UTC m=+0.449377484 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:32:04Z, 
release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, name=rhosp-rhel9/openstack-nova-compute, container_name=nova_migration_target)
Feb 20 08:34:55 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:34:57 np0005625203.localdomain sshd[91731]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:34:57 np0005625203.localdomain sshd[91731]: Invalid user msf from 147.135.114.8 port 56582
Feb 20 08:34:57 np0005625203.localdomain sshd[91731]: Received disconnect from 147.135.114.8 port 56582:11: Bye Bye [preauth]
Feb 20 08:34:57 np0005625203.localdomain sshd[91731]: Disconnected from invalid user msf 147.135.114.8 port 56582 [preauth]
Feb 20 08:35:14 np0005625203.localdomain sshd[91733]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:35:14 np0005625203.localdomain sshd[91733]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:35:14 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:35:14 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:35:14 np0005625203.localdomain podman[91736]: 2026-02-20 08:35:14.509734177 +0000 UTC m=+0.139331211 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, vcs-type=git, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:34:43Z, version=17.1.13, name=rhosp-rhel9/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, release=1766032510, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, config_id=tripleo_step3, tcib_managed=true, io.openshift.expose-services=, build-date=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, io.buildah.version=1.41.5)
Feb 20 08:35:14 np0005625203.localdomain podman[91736]: 2026-02-20 08:35:14.522222855 +0000 UTC m=+0.151819899 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.buildah.version=1.41.5, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:34:43Z, architecture=x86_64, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, 
container_name=iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:34:43Z, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, url=https://www.redhat.com, batch=17.1_20260112.1, vendor=Red Hat, Inc., vcs-type=git, managed_by=tripleo_ansible, release=1766032510, name=rhosp-rhel9/openstack-iscsid)
Feb 20 08:35:14 np0005625203.localdomain podman[91735]: 2026-02-20 08:35:14.479990872 +0000 UTC m=+0.113769566 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, url=https://www.redhat.com, vendor=Red Hat, Inc., version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:15Z, build-date=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, name=rhosp-rhel9/openstack-collectd, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, container_name=collectd, io.buildah.version=1.41.5, architecture=x86_64, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, vcs-type=git)
Feb 20 08:35:14 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:35:14 np0005625203.localdomain podman[91735]: 2026-02-20 08:35:14.564355044 +0000 UTC m=+0.198133728 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, vendor=Red Hat, Inc., tcib_managed=true, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step3, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:15Z, release=1766032510, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, com.redhat.component=openstack-collectd-container, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, version=17.1.13, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:35:14 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:35:19 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:35:19 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:35:19 np0005625203.localdomain podman[91775]: 2026-02-20 08:35:19.775189527 +0000 UTC m=+0.087649284 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, version=17.1.13, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, build-date=2026-01-12T22:10:14Z, distribution-scope=public, 
name=rhosp-rhel9/openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-qdrouterd-container, tcib_managed=true, architecture=x86_64, managed_by=tripleo_ansible, container_name=metrics_qdr, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com)
Feb 20 08:35:19 np0005625203.localdomain systemd[1]: tmp-crun.2BQFx9.mount: Deactivated successfully.
Feb 20 08:35:19 np0005625203.localdomain podman[91774]: 2026-02-20 08:35:19.838923048 +0000 UTC m=+0.153336596 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.41.5, batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step5, container_name=nova_compute, url=https://www.redhat.com, release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp-rhel9/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, distribution-scope=public)
Feb 20 08:35:19 np0005625203.localdomain podman[91774]: 2026-02-20 08:35:19.893542125 +0000 UTC m=+0.207955633 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, container_name=nova_compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, version=17.1.13, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, vcs-type=git, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., batch=17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': 
['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp-rhel9/openstack-nova-compute, release=1766032510)
Feb 20 08:35:19 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:35:19 np0005625203.localdomain podman[91775]: 2026-02-20 08:35:19.992362116 +0000 UTC m=+0.304821963 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.created=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, container_name=metrics_qdr, vcs-type=git, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com)
Feb 20 08:35:20 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:35:21 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:35:21 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:35:21 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:35:21 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:35:21 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:35:21 np0005625203.localdomain podman[91830]: 2026-02-20 08:35:21.776681639 +0000 UTC m=+0.090112801 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:07:47Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, architecture=x86_64, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:07:47Z, tcib_managed=true, name=rhosp-rhel9/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, batch=17.1_20260112.1, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, release=1766032510, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., url=https://www.redhat.com)
Feb 20 08:35:21 np0005625203.localdomain podman[91830]: 2026-02-20 08:35:21.808412005 +0000 UTC m=+0.121843167 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.buildah.version=1.41.5, vcs-type=git, architecture=x86_64, container_name=ceilometer_agent_compute, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2026-01-12T23:07:47Z, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-compute, managed_by=tripleo_ansible)
Feb 20 08:35:21 np0005625203.localdomain podman[91831]: 2026-02-20 08:35:21.822469371 +0000 UTC m=+0.132845268 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-type=git, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., release=1766032510, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, version=17.1.13, io.openshift.expose-services=, tcib_managed=true, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06)
Feb 20 08:35:21 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:35:21 np0005625203.localdomain podman[91834]: 2026-02-20 08:35:21.877921785 +0000 UTC m=+0.182471491 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.created=2026-01-12T22:36:40Z, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, release=1766032510, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, build-date=2026-01-12T22:36:40Z, managed_by=tripleo_ansible, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, vcs-type=git, url=https://www.redhat.com, vendor=Red Hat, Inc., vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, name=rhosp-rhel9/openstack-ovn-controller, batch=17.1_20260112.1, io.openshift.expose-services=, maintainer=OpenStack TripleO 
Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, architecture=x86_64)
Feb 20 08:35:21 np0005625203.localdomain podman[91829]: 2026-02-20 08:35:21.92994864 +0000 UTC m=+0.243532157 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, batch=17.1_20260112.1, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:15Z, tcib_managed=true, version=17.1.13, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, config_id=tripleo_step4, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron)
Feb 20 08:35:21 np0005625203.localdomain podman[91829]: 2026-02-20 08:35:21.937137014 +0000 UTC m=+0.250720521 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-type=git, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, io.openshift.expose-services=, release=1766032510, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron)
Feb 20 08:35:21 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:35:21 np0005625203.localdomain podman[91834]: 2026-02-20 08:35:21.981362989 +0000 UTC m=+0.285912685 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, name=rhosp-rhel9/openstack-ovn-controller, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.5, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git, url=https://www.redhat.com, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, container_name=ovn_controller, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, release=1766032510, 
cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z)
Feb 20 08:35:21 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Deactivated successfully.
Feb 20 08:35:22 np0005625203.localdomain podman[91831]: 2026-02-20 08:35:22.031670892 +0000 UTC m=+0.342046779 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, io.buildah.version=1.41.5, version=17.1.13, config_id=tripleo_step4, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, vendor=Red Hat, Inc., build-date=2026-01-12T23:07:30Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, batch=17.1_20260112.1, distribution-scope=public, container_name=ceilometer_agent_ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git)
Feb 20 08:35:22 np0005625203.localdomain podman[91832]: 2026-02-20 08:35:21.988023725 +0000 UTC m=+0.296377890 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vendor=Red Hat, Inc., org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ovn_metadata_agent, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2026-01-12T22:56:19Z, org.opencontainers.image.created=2026-01-12T22:56:19Z, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, release=1766032510, batch=17.1_20260112.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, vcs-type=git, architecture=x86_64, config_id=tripleo_step4)
Feb 20 08:35:22 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:35:22 np0005625203.localdomain podman[91832]: 2026-02-20 08:35:22.070985863 +0000 UTC m=+0.379339978 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, version=17.1.13, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, release=1766032510, io.buildah.version=1.41.5, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2026-01-12T22:56:19Z, vendor=Red Hat, Inc., config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-type=git, distribution-scope=public)
Feb 20 08:35:22 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Deactivated successfully.
Feb 20 08:35:26 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:35:26 np0005625203.localdomain systemd[1]: tmp-crun.iFfGPG.mount: Deactivated successfully.
Feb 20 08:35:26 np0005625203.localdomain podman[91950]: 2026-02-20 08:35:26.777532777 +0000 UTC m=+0.099374270 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, config_id=tripleo_step4, release=1766032510, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, tcib_managed=true, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp-rhel9/openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5)
Feb 20 08:35:27 np0005625203.localdomain podman[91950]: 2026-02-20 08:35:27.142559309 +0000 UTC m=+0.464400862 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, io.buildah.version=1.41.5, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, build-date=2026-01-12T23:32:04Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-nova-compute, version=17.1.13)
Feb 20 08:35:27 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:35:32 np0005625203.localdomain sshd[91972]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:35:33 np0005625203.localdomain sshd[91972]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:35:44 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:35:44 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:35:44 np0005625203.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Feb 20 08:35:44 np0005625203.localdomain recover_tripleo_nova_virtqemud[91987]: 62505
Feb 20 08:35:44 np0005625203.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Feb 20 08:35:44 np0005625203.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Feb 20 08:35:44 np0005625203.localdomain podman[91975]: 2026-02-20 08:35:44.788495167 +0000 UTC m=+0.098968576 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, container_name=iscsid, build-date=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:34:43Z, tcib_managed=true, url=https://www.redhat.com, vcs-ref=705339545363fec600102567c4e923938e0f43b3, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-iscsid, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.41.5, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.openshift.expose-services=, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']})
Feb 20 08:35:44 np0005625203.localdomain podman[91974]: 2026-02-20 08:35:44.843110524 +0000 UTC m=+0.155257185 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.5, io.openshift.expose-services=, version=17.1.13, 
vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, build-date=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, tcib_managed=true, batch=17.1_20260112.1, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com)
Feb 20 08:35:44 np0005625203.localdomain podman[91975]: 2026-02-20 08:35:44.856101067 +0000 UTC m=+0.166574516 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, vcs-type=git, name=rhosp-rhel9/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, release=1766032510, org.opencontainers.image.created=2026-01-12T22:34:43Z, url=https://www.redhat.com, version=17.1.13, vendor=Red Hat, Inc., org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, 
summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, distribution-scope=public)
Feb 20 08:35:44 np0005625203.localdomain podman[91974]: 2026-02-20 08:35:44.85424201 +0000 UTC m=+0.166388671 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, architecture=x86_64, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, vendor=Red Hat, Inc., batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, version=17.1.13, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible)
Feb 20 08:35:44 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:35:44 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:35:50 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:35:50 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:35:50 np0005625203.localdomain podman[92016]: 2026-02-20 08:35:50.747384983 +0000 UTC m=+0.069395928 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:14Z, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, release=1766032510, com.redhat.component=openstack-qdrouterd-container, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, config_id=tripleo_step1, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, name=rhosp-rhel9/openstack-qdrouterd, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public)
Feb 20 08:35:50 np0005625203.localdomain systemd[1]: tmp-crun.BLy69p.mount: Deactivated successfully.
Feb 20 08:35:50 np0005625203.localdomain podman[92015]: 2026-02-20 08:35:50.809784502 +0000 UTC m=+0.130943750 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-type=git, name=rhosp-rhel9/openstack-nova-compute, batch=17.1_20260112.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, container_name=nova_compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, managed_by=tripleo_ansible, config_id=tripleo_step5, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:35:50 np0005625203.localdomain podman[92015]: 2026-02-20 08:35:50.842037354 +0000 UTC m=+0.163196672 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1766032510, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.5, batch=17.1_20260112.1, vcs-type=git, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, vendor=Red Hat, Inc., vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Feb 20 08:35:50 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:35:50 np0005625203.localdomain podman[92016]: 2026-02-20 08:35:50.962512257 +0000 UTC m=+0.284523272 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:14Z, batch=17.1_20260112.1, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, build-date=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, managed_by=tripleo_ansible, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, config_id=tripleo_step1, io.openshift.expose-services=, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team)
Feb 20 08:35:50 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:35:52 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:35:52 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:35:52 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:35:52 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:35:52 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:35:52 np0005625203.localdomain podman[92068]: 2026-02-20 08:35:52.787830674 +0000 UTC m=+0.100465853 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=openstack-cron-container, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, 
konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-cron, build-date=2026-01-12T22:10:15Z, architecture=x86_64, container_name=logrotate_crond, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron)
Feb 20 08:35:52 np0005625203.localdomain systemd[1]: tmp-crun.cM4TiJ.mount: Deactivated successfully.
Feb 20 08:35:52 np0005625203.localdomain podman[92069]: 2026-02-20 08:35:52.847301801 +0000 UTC m=+0.159451435 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-compute, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20260112.1, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:47Z, distribution-scope=public, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.13, maintainer=OpenStack TripleO Team, release=1766032510, org.opencontainers.image.created=2026-01-12T23:07:47Z, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:35:52 np0005625203.localdomain podman[92069]: 2026-02-20 08:35:52.878724147 +0000 UTC m=+0.190873861 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, managed_by=tripleo_ansible, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:07:47Z, io.buildah.version=1.41.5, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:47Z, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-type=git, url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, container_name=ceilometer_agent_compute, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:35:52 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:35:52 np0005625203.localdomain podman[92070]: 2026-02-20 08:35:52.895104507 +0000 UTC m=+0.204020441 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, url=https://www.redhat.com, tcib_managed=true, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.5, container_name=ceilometer_agent_ipmi, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, architecture=x86_64, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., io.openshift.expose-services=)
Feb 20 08:35:52 np0005625203.localdomain podman[92070]: 2026-02-20 08:35:52.931146947 +0000 UTC m=+0.240062881 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-ipmi, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:30Z, container_name=ceilometer_agent_ipmi)
Feb 20 08:35:52 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:35:52 np0005625203.localdomain podman[92074]: 2026-02-20 08:35:52.945225664 +0000 UTC m=+0.246642345 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:36:40Z, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vcs-type=git, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., container_name=ovn_controller, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ovn-controller, 
org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, build-date=2026-01-12T22:36:40Z, version=17.1.13, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c)
Feb 20 08:35:52 np0005625203.localdomain podman[92074]: 2026-02-20 08:35:52.989032695 +0000 UTC m=+0.290449446 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20260112.1, release=1766032510, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, container_name=ovn_controller, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:36:40Z, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:36:40Z, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, io.buildah.version=1.41.5, 
com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c)
Feb 20 08:35:53 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Deactivated successfully.
Feb 20 08:35:53 np0005625203.localdomain podman[92071]: 2026-02-20 08:35:53.00367783 +0000 UTC m=+0.307626199 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, vcs-type=git, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:56:19Z, vendor=Red Hat, Inc., distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, container_name=ovn_metadata_agent, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, managed_by=tripleo_ansible, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.buildah.version=1.41.5)
Feb 20 08:35:53 np0005625203.localdomain podman[92068]: 2026-02-20 08:35:53.022295048 +0000 UTC m=+0.334930297 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, 
batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-cron, container_name=logrotate_crond, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, build-date=2026-01-12T22:10:15Z)
Feb 20 08:35:53 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:35:53 np0005625203.localdomain podman[92071]: 2026-02-20 08:35:53.056286104 +0000 UTC m=+0.360234423 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, 
vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, io.buildah.version=1.41.5, release=1766032510, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, version=17.1.13, container_name=ovn_metadata_agent, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, distribution-scope=public, build-date=2026-01-12T22:56:19Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f)
Feb 20 08:35:53 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Deactivated successfully.
Feb 20 08:35:55 np0005625203.localdomain sudo[92192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:35:55 np0005625203.localdomain sudo[92192]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:35:55 np0005625203.localdomain sudo[92192]: pam_unix(sudo:session): session closed for user root
Feb 20 08:35:55 np0005625203.localdomain sudo[92207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 08:35:55 np0005625203.localdomain sudo[92207]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:35:56 np0005625203.localdomain sudo[92207]: pam_unix(sudo:session): session closed for user root
Feb 20 08:35:56 np0005625203.localdomain sudo[92253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 08:35:56 np0005625203.localdomain sudo[92253]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:35:56 np0005625203.localdomain sudo[92253]: pam_unix(sudo:session): session closed for user root
Feb 20 08:35:57 np0005625203.localdomain sshd[92268]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:35:57 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:35:57 np0005625203.localdomain podman[92270]: 2026-02-20 08:35:57.766204643 +0000 UTC m=+0.086004804 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, io.buildah.version=1.41.5, vcs-type=git, build-date=2026-01-12T23:32:04Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Feb 20 08:35:57 np0005625203.localdomain sshd[92268]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:35:58 np0005625203.localdomain podman[92270]: 2026-02-20 08:35:58.13977654 +0000 UTC m=+0.459576751 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, version=17.1.13, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:32:04Z, distribution-scope=public, architecture=x86_64, 
vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-compute, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team)
Feb 20 08:35:58 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:36:03 np0005625203.localdomain sshd[92294]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:36:04 np0005625203.localdomain sshd[92294]: Received disconnect from 212.154.234.9 port 58579:11: Bye Bye [preauth]
Feb 20 08:36:04 np0005625203.localdomain sshd[92294]: Disconnected from authenticating user root 212.154.234.9 port 58579 [preauth]
Feb 20 08:36:15 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:36:15 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:36:15 np0005625203.localdomain podman[92297]: 2026-02-20 08:36:15.769118881 +0000 UTC m=+0.081183364 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.component=openstack-iscsid-container, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, url=https://www.redhat.com, build-date=2026-01-12T22:34:43Z, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-iscsid, io.buildah.version=1.41.5, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=iscsid, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, release=1766032510, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, architecture=x86_64, vcs-ref=705339545363fec600102567c4e923938e0f43b3, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:36:15 np0005625203.localdomain podman[92297]: 2026-02-20 08:36:15.778864364 +0000 UTC m=+0.090928837 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, container_name=iscsid, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', 
'/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, url=https://www.redhat.com, name=rhosp-rhel9/openstack-iscsid, io.buildah.version=1.41.5, config_id=tripleo_step3, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510)
Feb 20 08:36:15 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:36:15 np0005625203.localdomain podman[92296]: 2026-02-20 08:36:15.871784071 +0000 UTC m=+0.183556505 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, tcib_managed=true, build-date=2026-01-12T22:10:15Z, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, io.buildah.version=1.41.5, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd)
Feb 20 08:36:15 np0005625203.localdomain podman[92296]: 2026-02-20 08:36:15.884338191 +0000 UTC m=+0.196110625 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=collectd, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, maintainer=OpenStack 
TripleO Team, build-date=2026-01-12T22:10:15Z, distribution-scope=public, batch=17.1_20260112.1, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.openshift.expose-services=, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z)
Feb 20 08:36:15 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:36:21 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:36:21 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:36:21 np0005625203.localdomain podman[92336]: 2026-02-20 08:36:21.744247974 +0000 UTC m=+0.063179514 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, version=17.1.13, io.openshift.expose-services=, name=rhosp-rhel9/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, container_name=metrics_qdr, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, io.buildah.version=1.41.5, url=https://www.redhat.com, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:36:21 np0005625203.localdomain systemd[1]: tmp-crun.lUinnj.mount: Deactivated successfully.
Feb 20 08:36:21 np0005625203.localdomain podman[92335]: 2026-02-20 08:36:21.799400168 +0000 UTC m=+0.119150954 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-compute, config_id=tripleo_step5, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, container_name=nova_compute, distribution-scope=public, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., url=https://www.redhat.com, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Feb 20 08:36:21 np0005625203.localdomain podman[92335]: 2026-02-20 08:36:21.826304173 +0000 UTC m=+0.146054959 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', 
'/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, batch=17.1_20260112.1, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2026-01-12T23:32:04Z, version=17.1.13, distribution-scope=public, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=nova_compute)
Feb 20 08:36:21 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:36:21 np0005625203.localdomain podman[92336]: 2026-02-20 08:36:21.958597604 +0000 UTC m=+0.277529164 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, name=rhosp-rhel9/openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1766032510, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, managed_by=tripleo_ansible, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.41.5, batch=17.1_20260112.1, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, architecture=x86_64)
Feb 20 08:36:21 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:36:23 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:36:23 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:36:23 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:36:23 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:36:23 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:36:23 np0005625203.localdomain podman[92388]: 2026-02-20 08:36:23.790224136 +0000 UTC m=+0.103324950 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, release=1766032510, summary=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step4, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, name=rhosp-rhel9/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, container_name=logrotate_crond, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:15Z, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, io.buildah.version=1.41.5, managed_by=tripleo_ansible, batch=17.1_20260112.1, com.redhat.component=openstack-cron-container)
Feb 20 08:36:23 np0005625203.localdomain podman[92390]: 2026-02-20 08:36:23.847185556 +0000 UTC m=+0.149034941 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, container_name=ceilometer_agent_ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, build-date=2026-01-12T23:07:30Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, url=https://www.redhat.com, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, version=17.1.13, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Feb 20 08:36:23 np0005625203.localdomain podman[92388]: 2026-02-20 08:36:23.875325351 +0000 UTC m=+0.188426205 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, vendor=Red Hat, Inc., tcib_managed=true, io.buildah.version=1.41.5, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-cron-container, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp-rhel9/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:15Z, vcs-type=git, architecture=x86_64, container_name=logrotate_crond)
Feb 20 08:36:23 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:36:23 np0005625203.localdomain podman[92389]: 2026-02-20 08:36:23.898733707 +0000 UTC m=+0.206127604 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, build-date=2026-01-12T23:07:47Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, config_id=tripleo_step4, container_name=ceilometer_agent_compute, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:07:47Z, release=1766032510, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, name=rhosp-rhel9/openstack-ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20260112.1)
Feb 20 08:36:23 np0005625203.localdomain podman[92390]: 2026-02-20 08:36:23.931495696 +0000 UTC m=+0.233345091 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, config_id=tripleo_step4, version=17.1.13, distribution-scope=public, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 
17.1_20260112.1, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.buildah.version=1.41.5, build-date=2026-01-12T23:07:30Z, vendor=Red Hat, Inc., tcib_managed=true, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:30Z, vcs-type=git)
Feb 20 08:36:23 np0005625203.localdomain podman[92389]: 2026-02-20 08:36:23.936360627 +0000 UTC m=+0.243754564 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, container_name=ceilometer_agent_compute, io.buildah.version=1.41.5, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
ceilometer-compute, batch=17.1_20260112.1, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., build-date=2026-01-12T23:07:47Z, org.opencontainers.image.created=2026-01-12T23:07:47Z, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, name=rhosp-rhel9/openstack-ceilometer-compute)
Feb 20 08:36:23 np0005625203.localdomain podman[92395]: 2026-02-20 08:36:23.9451315 +0000 UTC m=+0.243259479 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, tcib_managed=true, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:56:19Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.13, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, config_id=tripleo_step4, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, io.buildah.version=1.41.5)
Feb 20 08:36:23 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:36:23 np0005625203.localdomain podman[92401]: 2026-02-20 08:36:23.815036267 +0000 UTC m=+0.107917693 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, build-date=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, managed_by=tripleo_ansible, io.buildah.version=1.41.5, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, summary=Red 
Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, architecture=x86_64)
Feb 20 08:36:23 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:36:24 np0005625203.localdomain podman[92401]: 2026-02-20 08:36:23.999582161 +0000 UTC m=+0.292463587 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vcs-type=git, batch=17.1_20260112.1, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:36:40Z, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, url=https://www.redhat.com, 
konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, version=17.1.13, build-date=2026-01-12T22:36:40Z, container_name=ovn_controller, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller)
Feb 20 08:36:24 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Deactivated successfully.
Feb 20 08:36:24 np0005625203.localdomain podman[92395]: 2026-02-20 08:36:24.013134492 +0000 UTC m=+0.311262481 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, release=1766032510, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.5, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:56:19Z, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true)
Feb 20 08:36:24 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Deactivated successfully.
Feb 20 08:36:24 np0005625203.localdomain systemd[1]: tmp-crun.oVOUK0.mount: Deactivated successfully.
Feb 20 08:36:28 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:36:28 np0005625203.localdomain podman[92505]: 2026-02-20 08:36:28.767455313 +0000 UTC m=+0.082409779 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_migration_target, vcs-type=git, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step4, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.13, io.buildah.version=1.41.5, architecture=x86_64)
Feb 20 08:36:29 np0005625203.localdomain podman[92505]: 2026-02-20 08:36:29.140811637 +0000 UTC m=+0.455766143 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, vcs-type=git, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, 
name=rhosp-rhel9/openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=nova_migration_target, vendor=Red Hat, Inc., managed_by=tripleo_ansible, build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, version=17.1.13, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Feb 20 08:36:29 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:36:37 np0005625203.localdomain sshd[92528]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:36:38 np0005625203.localdomain sshd[92528]: Received disconnect from 102.210.148.92 port 59720:11: Bye Bye [preauth]
Feb 20 08:36:38 np0005625203.localdomain sshd[92528]: Disconnected from authenticating user root 102.210.148.92 port 59720 [preauth]
Feb 20 08:36:41 np0005625203.localdomain sshd[92530]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:36:41 np0005625203.localdomain sshd[92530]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:36:46 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:36:46 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:36:46 np0005625203.localdomain podman[92532]: 2026-02-20 08:36:46.770231781 +0000 UTC m=+0.086827698 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, batch=17.1_20260112.1, config_id=tripleo_step3, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, container_name=collectd, architecture=x86_64, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, build-date=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']})
Feb 20 08:36:46 np0005625203.localdomain podman[92533]: 2026-02-20 08:36:46.815299031 +0000 UTC m=+0.129096671 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, config_id=tripleo_step3, distribution-scope=public, container_name=iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, name=rhosp-rhel9/openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, build-date=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., vcs-type=git, url=https://www.redhat.com, io.buildah.version=1.41.5, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=)
Feb 20 08:36:46 np0005625203.localdomain podman[92532]: 2026-02-20 08:36:46.835630182 +0000 UTC m=+0.152226049 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=openstack-collectd-container, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, release=1766032510, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, url=https://www.redhat.com, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd, vcs-type=git)
Feb 20 08:36:46 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:36:46 np0005625203.localdomain podman[92533]: 2026-02-20 08:36:46.853187367 +0000 UTC m=+0.166985037 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, version=17.1.13, release=1766032510, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.buildah.version=1.41.5, tcib_managed=true, config_id=tripleo_step3, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, batch=17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:34:43Z, name=rhosp-rhel9/openstack-iscsid, io.openshift.expose-services=, vcs-type=git, architecture=x86_64)
Feb 20 08:36:46 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:36:49 np0005625203.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Feb 20 08:36:49 np0005625203.localdomain recover_tripleo_nova_virtqemud[92571]: 62505
Feb 20 08:36:49 np0005625203.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Feb 20 08:36:49 np0005625203.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Feb 20 08:36:52 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:36:52 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:36:52 np0005625203.localdomain podman[92572]: 2026-02-20 08:36:52.758292145 +0000 UTC m=+0.077318583 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:32:04Z, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2026-01-12T23:32:04Z, url=https://www.redhat.com, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, io.buildah.version=1.41.5, version=17.1.13, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 
'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, release=1766032510, vendor=Red Hat, Inc.)
Feb 20 08:36:52 np0005625203.localdomain podman[92573]: 2026-02-20 08:36:52.809386171 +0000 UTC m=+0.128973796 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.created=2026-01-12T22:10:14Z, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, release=1766032510, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, batch=17.1_20260112.1, config_id=tripleo_step1, tcib_managed=true, version=17.1.13, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=metrics_qdr, name=rhosp-rhel9/openstack-qdrouterd, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd)
Feb 20 08:36:52 np0005625203.localdomain podman[92572]: 2026-02-20 08:36:52.837929928 +0000 UTC m=+0.156956356 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_id=tripleo_step5, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:32:04Z, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, managed_by=tripleo_ansible, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team)
Feb 20 08:36:52 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:36:53 np0005625203.localdomain podman[92573]: 2026-02-20 08:36:53.00739173 +0000 UTC m=+0.326979345 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, vendor=Red Hat, Inc., managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, batch=17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, release=1766032510, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-type=git, container_name=metrics_qdr, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, name=rhosp-rhel9/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, build-date=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:36:53 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:36:54 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:36:54 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:36:54 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:36:54 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:36:54 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:36:54 np0005625203.localdomain podman[92629]: 2026-02-20 08:36:54.791937728 +0000 UTC m=+0.100948986 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2026-01-12T23:07:47Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, release=1766032510, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, 
io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, name=rhosp-rhel9/openstack-ceilometer-compute, batch=17.1_20260112.1, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Feb 20 08:36:54 np0005625203.localdomain podman[92631]: 2026-02-20 08:36:54.871134827 +0000 UTC m=+0.170741014 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, io.buildah.version=1.41.5, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., batch=17.1_20260112.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vcs-type=git, build-date=2026-01-12T22:56:19Z, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.expose-services=, release=1766032510, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, version=17.1.13)
Feb 20 08:36:54 np0005625203.localdomain podman[92629]: 2026-02-20 08:36:54.877317078 +0000 UTC m=+0.186328336 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, build-date=2026-01-12T23:07:47Z, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, io.buildah.version=1.41.5, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ceilometer-compute, batch=17.1_20260112.1)
Feb 20 08:36:54 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:36:54 np0005625203.localdomain podman[92631]: 2026-02-20 08:36:54.904214004 +0000 UTC m=+0.203820171 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2026-01-12T22:56:19Z, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, tcib_managed=true, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, io.buildah.version=1.41.5, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., version=17.1.13, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20260112.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, url=https://www.redhat.com, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, release=1766032510, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Feb 20 08:36:54 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Deactivated successfully.
Feb 20 08:36:54 np0005625203.localdomain podman[92628]: 2026-02-20 08:36:54.95110167 +0000 UTC m=+0.261415069 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, container_name=logrotate_crond, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., version=17.1.13, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.41.5, com.redhat.component=openstack-cron-container, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, release=1766032510, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, distribution-scope=public, batch=17.1_20260112.1)
Feb 20 08:36:54 np0005625203.localdomain podman[92639]: 2026-02-20 08:36:54.997403718 +0000 UTC m=+0.292996659 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, url=https://www.redhat.com, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2026-01-12T22:36:40Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ovn-controller, io.openshift.expose-services=, distribution-scope=public, version=17.1.13, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, config_id=tripleo_step4, io.buildah.version=1.41.5, architecture=x86_64)
Feb 20 08:36:55 np0005625203.localdomain podman[92630]: 2026-02-20 08:36:55.044533081 +0000 UTC m=+0.347291575 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, version=17.1.13, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
io.openshift.expose-services=, distribution-scope=public, tcib_managed=true, io.buildah.version=1.41.5, architecture=x86_64, name=rhosp-rhel9/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:07:30Z, vcs-type=git, release=1766032510, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z)
Feb 20 08:36:55 np0005625203.localdomain podman[92628]: 2026-02-20 08:36:55.071101557 +0000 UTC m=+0.381414956 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:15Z, distribution-scope=public, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-cron-container, name=rhosp-rhel9/openstack-cron, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2026-01-12T22:10:15Z, config_id=tripleo_step4, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, architecture=x86_64, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, release=1766032510)
Feb 20 08:36:55 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:36:55 np0005625203.localdomain podman[92630]: 2026-02-20 08:36:55.082108369 +0000 UTC m=+0.384866833 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.buildah.version=1.41.5, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, version=17.1.13, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, build-date=2026-01-12T23:07:30Z, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:07:30Z)
Feb 20 08:36:55 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:36:55 np0005625203.localdomain podman[92639]: 2026-02-20 08:36:55.123185254 +0000 UTC m=+0.418778225 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2026-01-12T22:36:40Z, container_name=ovn_controller, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1766032510, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.13, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.5, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:36:40Z)
Feb 20 08:36:55 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Deactivated successfully.
Feb 20 08:36:55 np0005625203.localdomain systemd[1]: tmp-crun.0D4khX.mount: Deactivated successfully.
Feb 20 08:36:56 np0005625203.localdomain sshd[92751]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:36:57 np0005625203.localdomain sudo[92752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:36:57 np0005625203.localdomain sudo[92752]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:36:57 np0005625203.localdomain sudo[92752]: pam_unix(sudo:session): session closed for user root
Feb 20 08:36:57 np0005625203.localdomain sudo[92767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 08:36:57 np0005625203.localdomain sudo[92767]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:36:57 np0005625203.localdomain sudo[92767]: pam_unix(sudo:session): session closed for user root
Feb 20 08:36:57 np0005625203.localdomain sshd[92751]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:36:58 np0005625203.localdomain sudo[92814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 08:36:58 np0005625203.localdomain sudo[92814]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:36:58 np0005625203.localdomain sudo[92814]: pam_unix(sudo:session): session closed for user root
Feb 20 08:36:59 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:36:59 np0005625203.localdomain podman[92829]: 2026-02-20 08:36:59.769759838 +0000 UTC m=+0.081966707 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, release=1766032510, version=17.1.13, name=rhosp-rhel9/openstack-nova-compute, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, 
vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, container_name=nova_migration_target, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, distribution-scope=public, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.buildah.version=1.41.5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe)
Feb 20 08:37:00 np0005625203.localdomain podman[92829]: 2026-02-20 08:37:00.166471216 +0000 UTC m=+0.478678135 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-type=git, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20260112.1, url=https://www.redhat.com, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:32:04Z, build-date=2026-01-12T23:32:04Z, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Feb 20 08:37:00 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:37:08 np0005625203.localdomain sshd[92853]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:37:10 np0005625203.localdomain sshd[92853]: Invalid user antoine from 103.200.25.162 port 57406
Feb 20 08:37:10 np0005625203.localdomain sshd[92853]: Received disconnect from 103.200.25.162 port 57406:11: Bye Bye [preauth]
Feb 20 08:37:10 np0005625203.localdomain sshd[92853]: Disconnected from invalid user antoine 103.200.25.162 port 57406 [preauth]
Feb 20 08:37:17 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:37:17 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:37:17 np0005625203.localdomain podman[92855]: 2026-02-20 08:37:17.76591827 +0000 UTC m=+0.079392378 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, name=rhosp-rhel9/openstack-collectd, batch=17.1_20260112.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2026-01-12T22:10:15Z, io.openshift.expose-services=, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, config_id=tripleo_step3, vendor=Red Hat, Inc., container_name=collectd, managed_by=tripleo_ansible, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z)
Feb 20 08:37:17 np0005625203.localdomain podman[92855]: 2026-02-20 08:37:17.780295926 +0000 UTC m=+0.093770054 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, 
vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, io.buildah.version=1.41.5, release=1766032510, vcs-type=git, name=rhosp-rhel9/openstack-collectd, batch=17.1_20260112.1, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.component=openstack-collectd-container, architecture=x86_64)
Feb 20 08:37:17 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:37:17 np0005625203.localdomain podman[92856]: 2026-02-20 08:37:17.824036265 +0000 UTC m=+0.133407495 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, vcs-type=git, container_name=iscsid, url=https://www.redhat.com, release=1766032510, io.buildah.version=1.41.5, tcib_managed=true, name=rhosp-rhel9/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, distribution-scope=public, managed_by=tripleo_ansible, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13)
Feb 20 08:37:17 np0005625203.localdomain podman[92856]: 2026-02-20 08:37:17.864360956 +0000 UTC m=+0.173732136 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, release=1766032510, vcs-ref=705339545363fec600102567c4e923938e0f43b3, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, vendor=Red Hat, Inc., io.buildah.version=1.41.5, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, container_name=iscsid, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-type=git, name=rhosp-rhel9/openstack-iscsid, build-date=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']})
Feb 20 08:37:17 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:37:23 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:37:23 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:37:23 np0005625203.localdomain systemd[1]: tmp-crun.huy1eV.mount: Deactivated successfully.
Feb 20 08:37:23 np0005625203.localdomain podman[92896]: 2026-02-20 08:37:23.773500169 +0000 UTC m=+0.094467284 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, distribution-scope=public, io.openshift.expose-services=, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., io.buildah.version=1.41.5, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, container_name=nova_compute)
Feb 20 08:37:23 np0005625203.localdomain podman[92897]: 2026-02-20 08:37:23.82536607 +0000 UTC m=+0.143531268 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, batch=17.1_20260112.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, url=https://www.redhat.com, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, io.buildah.version=1.41.5, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:14Z)
Feb 20 08:37:23 np0005625203.localdomain podman[92896]: 2026-02-20 08:37:23.880682328 +0000 UTC m=+0.201649403 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, vendor=Red Hat, Inc., io.buildah.version=1.41.5, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, distribution-scope=public, vcs-type=git, version=17.1.13, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z)
Feb 20 08:37:23 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:37:24 np0005625203.localdomain podman[92897]: 2026-02-20 08:37:24.020306153 +0000 UTC m=+0.338471381 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, distribution-scope=public, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, url=https://www.redhat.com, release=1766032510)
Feb 20 08:37:24 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:37:24 np0005625203.localdomain systemd[1]: tmp-crun.opLeit.mount: Deactivated successfully.
Feb 20 08:37:25 np0005625203.localdomain sshd[92950]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:37:25 np0005625203.localdomain sshd[92950]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:37:25 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:37:25 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:37:25 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:37:25 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:37:25 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:37:25 np0005625203.localdomain podman[92953]: 2026-02-20 08:37:25.374852617 +0000 UTC m=+0.086906059 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_step4, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, managed_by=tripleo_ansible, summary=Red 
Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:07:47Z, container_name=ceilometer_agent_compute, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, architecture=x86_64, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:37:25 np0005625203.localdomain podman[92964]: 2026-02-20 08:37:25.433957493 +0000 UTC m=+0.131734962 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, architecture=x86_64, version=17.1.13, batch=17.1_20260112.1, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, vendor=Red Hat, Inc., org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ovn-controller, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:36:40Z, 
vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5)
Feb 20 08:37:25 np0005625203.localdomain podman[92955]: 2026-02-20 08:37:25.395371014 +0000 UTC m=+0.099837210 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, version=17.1.13, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2026-01-12T22:56:19Z, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, batch=17.1_20260112.1, distribution-scope=public, tcib_managed=true, release=1766032510)
Feb 20 08:37:25 np0005625203.localdomain podman[92955]: 2026-02-20 08:37:25.474280505 +0000 UTC m=+0.178746681 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, 
io.buildah.version=1.41.5, build-date=2026-01-12T22:56:19Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, release=1766032510, architecture=x86_64, url=https://www.redhat.com, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn)
Feb 20 08:37:25 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Deactivated successfully.
Feb 20 08:37:25 np0005625203.localdomain podman[92954]: 2026-02-20 08:37:25.485430221 +0000 UTC m=+0.189847896 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-type=git, batch=17.1_20260112.1, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2026-01-12T23:07:30Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ceilometer-ipmi, distribution-scope=public, managed_by=tripleo_ansible, io.buildah.version=1.41.5, 
description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, vendor=Red Hat, Inc., tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, io.openshift.expose-services=)
Feb 20 08:37:25 np0005625203.localdomain podman[92964]: 2026-02-20 08:37:25.501265673 +0000 UTC m=+0.199043142 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, maintainer=OpenStack TripleO Team, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ovn-controller, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., io.buildah.version=1.41.5, io.openshift.expose-services=, build-date=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20260112.1, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, distribution-scope=public, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com)
Feb 20 08:37:25 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Deactivated successfully.
Feb 20 08:37:25 np0005625203.localdomain podman[92953]: 2026-02-20 08:37:25.513333187 +0000 UTC m=+0.225386589 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., release=1766032510, version=17.1.13, build-date=2026-01-12T23:07:47Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, distribution-scope=public, vcs-type=git, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:47Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, config_id=tripleo_step4, batch=17.1_20260112.1, architecture=x86_64)
Feb 20 08:37:25 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:37:25 np0005625203.localdomain podman[92954]: 2026-02-20 08:37:25.540032627 +0000 UTC m=+0.244450292 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, 
org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, version=17.1.13, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:07:30Z, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:07:30Z, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4)
Feb 20 08:37:25 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:37:25 np0005625203.localdomain podman[92952]: 2026-02-20 08:37:25.41551542 +0000 UTC m=+0.127943774 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, tcib_managed=true, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, container_name=logrotate_crond, 
io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, config_id=tripleo_step4, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible)
Feb 20 08:37:25 np0005625203.localdomain podman[92952]: 2026-02-20 08:37:25.596045906 +0000 UTC m=+0.308474330 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, build-date=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-cron, batch=17.1_20260112.1, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red 
Hat, Inc., io.buildah.version=1.41.5, container_name=logrotate_crond, config_id=tripleo_step4, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron)
Feb 20 08:37:25 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:37:30 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:37:30 np0005625203.localdomain systemd[1]: tmp-crun.DQJ8nd.mount: Deactivated successfully.
Feb 20 08:37:30 np0005625203.localdomain podman[93075]: 2026-02-20 08:37:30.773990662 +0000 UTC m=+0.093647479 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, config_id=tripleo_step4, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, tcib_managed=true, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, 
version=17.1.13, name=rhosp-rhel9/openstack-nova-compute, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, build-date=2026-01-12T23:32:04Z, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, container_name=nova_migration_target, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute)
Feb 20 08:37:31 np0005625203.localdomain podman[93075]: 2026-02-20 08:37:31.139374449 +0000 UTC m=+0.459031246 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, url=https://www.redhat.com, distribution-scope=public, io.buildah.version=1.41.5, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, vcs-type=git, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:32:04Z, build-date=2026-01-12T23:32:04Z, batch=17.1_20260112.1, container_name=nova_migration_target, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, version=17.1.13)
Feb 20 08:37:31 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:37:48 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:37:48 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:37:48 np0005625203.localdomain podman[93099]: 2026-02-20 08:37:48.76363166 +0000 UTC m=+0.078259841 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, name=rhosp-rhel9/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:34:43Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, managed_by=tripleo_ansible, 
config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, container_name=iscsid, build-date=2026-01-12T22:34:43Z, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, batch=17.1_20260112.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:37:48 np0005625203.localdomain podman[93099]: 2026-02-20 08:37:48.77523263 +0000 UTC m=+0.089860821 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., version=17.1.13, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, url=https://www.redhat.com, name=rhosp-rhel9/openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, 
com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, batch=17.1_20260112.1, container_name=iscsid, vcs-type=git, tcib_managed=true, distribution-scope=public)
Feb 20 08:37:48 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:37:48 np0005625203.localdomain systemd[1]: tmp-crun.vhMnBk.mount: Deactivated successfully.
Feb 20 08:37:48 np0005625203.localdomain podman[93098]: 2026-02-20 08:37:48.827234736 +0000 UTC m=+0.143511679 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.buildah.version=1.41.5, container_name=collectd, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, release=1766032510, batch=17.1_20260112.1, architecture=x86_64, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, maintainer=OpenStack TripleO Team)
Feb 20 08:37:48 np0005625203.localdomain podman[93098]: 2026-02-20 08:37:48.864218424 +0000 UTC m=+0.180495367 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20260112.1, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.expose-services=, io.buildah.version=1.41.5)
Feb 20 08:37:48 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:37:52 np0005625203.localdomain sshd[93134]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:37:52 np0005625203.localdomain sshd[93134]: Invalid user claude from 147.135.114.8 port 33956
Feb 20 08:37:52 np0005625203.localdomain sshd[93134]: Received disconnect from 147.135.114.8 port 33956:11: Bye Bye [preauth]
Feb 20 08:37:52 np0005625203.localdomain sshd[93134]: Disconnected from invalid user claude 147.135.114.8 port 33956 [preauth]
Feb 20 08:37:54 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:37:54 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:37:54 np0005625203.localdomain systemd[1]: tmp-crun.NpfV0n.mount: Deactivated successfully.
Feb 20 08:37:54 np0005625203.localdomain podman[93136]: 2026-02-20 08:37:54.781725856 +0000 UTC m=+0.098268723 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, release=1766032510, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, batch=17.1_20260112.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., build-date=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, architecture=x86_64, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step5)
Feb 20 08:37:54 np0005625203.localdomain podman[93136]: 2026-02-20 08:37:54.839564702 +0000 UTC m=+0.156107559 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., build-date=2026-01-12T23:32:04Z, config_id=tripleo_step5, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, distribution-scope=public, architecture=x86_64, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=)
Feb 20 08:37:54 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:37:54 np0005625203.localdomain systemd[1]: tmp-crun.RdsXfo.mount: Deactivated successfully.
Feb 20 08:37:54 np0005625203.localdomain podman[93137]: 2026-02-20 08:37:54.93222197 +0000 UTC m=+0.242966547 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, batch=17.1_20260112.1, build-date=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, version=17.1.13, container_name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64)
Feb 20 08:37:55 np0005625203.localdomain podman[93137]: 2026-02-20 08:37:55.12735261 +0000 UTC m=+0.438097197 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, version=17.1.13, managed_by=tripleo_ansible, 
org.opencontainers.image.created=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.buildah.version=1.41.5, config_id=tripleo_step1, tcib_managed=true, name=rhosp-rhel9/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, build-date=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, url=https://www.redhat.com)
Feb 20 08:37:55 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:37:55 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:37:55 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:37:55 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:37:55 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:37:55 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:37:55 np0005625203.localdomain podman[93189]: 2026-02-20 08:37:55.781621167 +0000 UTC m=+0.089714277 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_id=tripleo_step4, distribution-scope=public, name=rhosp-rhel9/openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, release=1766032510, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, managed_by=tripleo_ansible, architecture=x86_64, build-date=2026-01-12T23:07:47Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, container_name=ceilometer_agent_compute, url=https://www.redhat.com, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team)
Feb 20 08:37:55 np0005625203.localdomain podman[93191]: 2026-02-20 08:37:55.837630006 +0000 UTC m=+0.139782102 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:56:19Z, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, url=https://www.redhat.com, architecture=x86_64, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, batch=17.1_20260112.1)
Feb 20 08:37:55 np0005625203.localdomain podman[93188]: 2026-02-20 08:37:55.899163767 +0000 UTC m=+0.211311443 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, batch=17.1_20260112.1, container_name=logrotate_crond, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, architecture=x86_64, name=rhosp-rhel9/openstack-cron, url=https://www.redhat.com, release=1766032510, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack 
Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee)
Feb 20 08:37:55 np0005625203.localdomain podman[93197]: 2026-02-20 08:37:55.817839332 +0000 UTC m=+0.115304772 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_id=tripleo_step4, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ovn_controller, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., vcs-type=git, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, version=17.1.13, build-date=2026-01-12T22:36:40Z)
Feb 20 08:37:55 np0005625203.localdomain podman[93191]: 2026-02-20 08:37:55.928323682 +0000 UTC m=+0.230475778 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.created=2026-01-12T22:56:19Z, architecture=x86_64, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, version=17.1.13, build-date=2026-01-12T22:56:19Z, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, url=https://www.redhat.com, io.buildah.version=1.41.5, distribution-scope=public)
Feb 20 08:37:55 np0005625203.localdomain podman[93188]: 2026-02-20 08:37:55.935387242 +0000 UTC m=+0.247534898 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, vcs-type=git, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., batch=17.1_20260112.1, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, version=17.1.13, name=rhosp-rhel9/openstack-cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, managed_by=tripleo_ansible, url=https://www.redhat.com, architecture=x86_64)
Feb 20 08:37:55 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Deactivated successfully.
Feb 20 08:37:55 np0005625203.localdomain podman[93197]: 2026-02-20 08:37:55.95142809 +0000 UTC m=+0.248893480 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, vendor=Red Hat, Inc., build-date=2026-01-12T22:36:40Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, 
vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, release=1766032510, url=https://www.redhat.com, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.expose-services=)
Feb 20 08:37:55 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:37:55 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Deactivated successfully.
Feb 20 08:37:55 np0005625203.localdomain podman[93190]: 2026-02-20 08:37:55.996522171 +0000 UTC m=+0.301165024 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1766032510, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.buildah.version=1.41.5, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, 
org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:07:30Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, tcib_managed=true, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06)
Feb 20 08:37:56 np0005625203.localdomain podman[93189]: 2026-02-20 08:37:56.020373591 +0000 UTC m=+0.328466701 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, version=17.1.13, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, name=rhosp-rhel9/openstack-ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, 
vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, io.buildah.version=1.41.5, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, batch=17.1_20260112.1, tcib_managed=true, build-date=2026-01-12T23:07:47Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.openshift.expose-services=)
Feb 20 08:37:56 np0005625203.localdomain podman[93190]: 2026-02-20 08:37:56.032318763 +0000 UTC m=+0.336961626 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, distribution-scope=public, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:30Z, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, tcib_managed=true, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, url=https://www.redhat.com)
Feb 20 08:37:56 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:37:56 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:37:56 np0005625203.localdomain systemd[1]: tmp-crun.1OYOvK.mount: Deactivated successfully.
Feb 20 08:37:58 np0005625203.localdomain sudo[93306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:37:58 np0005625203.localdomain sudo[93306]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:37:58 np0005625203.localdomain sudo[93306]: pam_unix(sudo:session): session closed for user root
Feb 20 08:37:58 np0005625203.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Feb 20 08:37:58 np0005625203.localdomain recover_tripleo_nova_virtqemud[93322]: 62505
Feb 20 08:37:58 np0005625203.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Feb 20 08:37:58 np0005625203.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Feb 20 08:37:58 np0005625203.localdomain sudo[93323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 08:37:58 np0005625203.localdomain sudo[93323]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:37:59 np0005625203.localdomain sudo[93323]: pam_unix(sudo:session): session closed for user root
Feb 20 08:38:01 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:38:01 np0005625203.localdomain systemd[1]: tmp-crun.gRjsuL.mount: Deactivated successfully.
Feb 20 08:38:01 np0005625203.localdomain podman[93369]: 2026-02-20 08:38:01.779509424 +0000 UTC m=+0.089392047 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, url=https://www.redhat.com, architecture=x86_64, release=1766032510, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, container_name=nova_migration_target, version=17.1.13, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, 
name=rhosp-rhel9/openstack-nova-compute, config_id=tripleo_step4, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.)
Feb 20 08:38:02 np0005625203.localdomain podman[93369]: 2026-02-20 08:38:02.174278333 +0000 UTC m=+0.484160936 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1766032510, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, managed_by=tripleo_ansible, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, 
org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, tcib_managed=true)
Feb 20 08:38:02 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:38:03 np0005625203.localdomain sudo[93392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 08:38:03 np0005625203.localdomain sudo[93392]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:38:03 np0005625203.localdomain sudo[93392]: pam_unix(sudo:session): session closed for user root
Feb 20 08:38:10 np0005625203.localdomain sshd[93407]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:38:10 np0005625203.localdomain sshd[93407]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:38:19 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:38:19 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:38:19 np0005625203.localdomain systemd[1]: tmp-crun.lActXh.mount: Deactivated successfully.
Feb 20 08:38:19 np0005625203.localdomain podman[93410]: 2026-02-20 08:38:19.773291283 +0000 UTC m=+0.087173718 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, build-date=2026-01-12T22:34:43Z, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, architecture=x86_64, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., release=1766032510, tcib_managed=true, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', 
'/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-ref=705339545363fec600102567c4e923938e0f43b3, distribution-scope=public, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid)
Feb 20 08:38:19 np0005625203.localdomain podman[93410]: 2026-02-20 08:38:19.782676395 +0000 UTC m=+0.096558870 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, container_name=iscsid, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, 
release=1766032510, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:34:43Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step3, version=17.1.13, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, build-date=2026-01-12T22:34:43Z)
Feb 20 08:38:19 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:38:19 np0005625203.localdomain systemd[1]: tmp-crun.eqLNP3.mount: Deactivated successfully.
Feb 20 08:38:19 np0005625203.localdomain podman[93409]: 2026-02-20 08:38:19.876439866 +0000 UTC m=+0.190080404 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, architecture=x86_64, container_name=collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, version=17.1.13, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, vendor=Red Hat, Inc., managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, name=rhosp-rhel9/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2026-01-12T22:10:15Z, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public)
Feb 20 08:38:19 np0005625203.localdomain podman[93409]: 2026-02-20 08:38:19.889322546 +0000 UTC m=+0.202962984 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, release=1766032510, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.created=2026-01-12T22:10:15Z, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, 
vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, build-date=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, io.buildah.version=1.41.5, com.redhat.component=openstack-collectd-container, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-collectd, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64)
Feb 20 08:38:19 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:38:22 np0005625203.localdomain sshd[93448]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:38:23 np0005625203.localdomain sshd[93448]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:38:25 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:38:25 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:38:25 np0005625203.localdomain podman[93451]: 2026-02-20 08:38:25.778768768 +0000 UTC m=+0.092035819 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, url=https://www.redhat.com, config_id=tripleo_step1, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, io.buildah.version=1.41.5, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:14Z, name=rhosp-rhel9/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team)
Feb 20 08:38:25 np0005625203.localdomain systemd[1]: tmp-crun.TKnAYE.mount: Deactivated successfully.
Feb 20 08:38:25 np0005625203.localdomain podman[93450]: 2026-02-20 08:38:25.824699514 +0000 UTC m=+0.140860045 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20260112.1, container_name=nova_compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_id=tripleo_step5, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:38:25 np0005625203.localdomain podman[93450]: 2026-02-20 08:38:25.850761994 +0000 UTC m=+0.166922495 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp-rhel9/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, config_id=tripleo_step5, vcs-type=git, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, distribution-scope=public, url=https://www.redhat.com, version=17.1.13, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, batch=17.1_20260112.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_compute, io.openshift.expose-services=, release=1766032510)
Feb 20 08:38:25 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:38:26 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:38:26 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:38:26 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:38:26 np0005625203.localdomain podman[93451]: 2026-02-20 08:38:26.017309196 +0000 UTC m=+0.330576227 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, release=1766032510, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, build-date=2026-01-12T22:10:14Z, distribution-scope=public, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step1, container_name=metrics_qdr, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, io.buildah.version=1.41.5, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true)
Feb 20 08:38:26 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:38:26 np0005625203.localdomain podman[93506]: 2026-02-20 08:38:26.09986679 +0000 UTC m=+0.085529108 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, config_id=tripleo_step4, io.openshift.expose-services=, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:56:19Z, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z, release=1766032510, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com)
Feb 20 08:38:26 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:38:26 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:38:26 np0005625203.localdomain podman[93505]: 2026-02-20 08:38:26.151826543 +0000 UTC m=+0.141259977 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, build-date=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., batch=17.1_20260112.1, version=17.1.13, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=logrotate_crond, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, architecture=x86_64, com.redhat.component=openstack-cron-container)
Feb 20 08:38:26 np0005625203.localdomain podman[93505]: 2026-02-20 08:38:26.186216731 +0000 UTC m=+0.175650205 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, io.openshift.expose-services=, name=rhosp-rhel9/openstack-cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, io.buildah.version=1.41.5, tcib_managed=true)
Feb 20 08:38:26 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:38:26 np0005625203.localdomain podman[93550]: 2026-02-20 08:38:26.197502171 +0000 UTC m=+0.080970734 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:47Z, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, build-date=2026-01-12T23:07:47Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, io.buildah.version=1.41.5, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 
'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Feb 20 08:38:26 np0005625203.localdomain podman[93507]: 2026-02-20 08:38:26.241777226 +0000 UTC m=+0.227188676 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, architecture=x86_64, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.openshift.expose-services=, version=17.1.13, name=rhosp-rhel9/openstack-ovn-controller, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, io.buildah.version=1.41.5, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, build-date=2026-01-12T22:36:40Z, managed_by=tripleo_ansible, release=1766032510)
Feb 20 08:38:26 np0005625203.localdomain podman[93551]: 2026-02-20 08:38:26.25990825 +0000 UTC m=+0.139126542 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, distribution-scope=public, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20260112.1, build-date=2026-01-12T23:07:30Z, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.41.5)
Feb 20 08:38:26 np0005625203.localdomain podman[93550]: 2026-02-20 08:38:26.273796761 +0000 UTC m=+0.157265284 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, url=https://www.redhat.com, architecture=x86_64, version=17.1.13, release=1766032510, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, 
org.opencontainers.image.created=2026-01-12T23:07:47Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, build-date=2026-01-12T23:07:47Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, vcs-type=git, container_name=ceilometer_agent_compute, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-compute, config_id=tripleo_step4)
Feb 20 08:38:26 np0005625203.localdomain podman[93506]: 2026-02-20 08:38:26.28212741 +0000 UTC m=+0.267789688 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:56:19Z, version=17.1.13, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, batch=17.1_20260112.1, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, build-date=2026-01-12T22:56:19Z, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=tripleo_ansible, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f)
Feb 20 08:38:26 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:38:26 np0005625203.localdomain podman[93551]: 2026-02-20 08:38:26.290267993 +0000 UTC m=+0.169486315 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1766032510, url=https://www.redhat.com, io.buildah.version=1.41.5, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2026-01-12T23:07:30Z, version=17.1.13, architecture=x86_64, 
vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ceilometer-ipmi, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Feb 20 08:38:26 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Deactivated successfully.
Feb 20 08:38:26 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:38:26 np0005625203.localdomain podman[93507]: 2026-02-20 08:38:26.344154266 +0000 UTC m=+0.329565726 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, release=1766032510, container_name=ovn_controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, tcib_managed=true, vendor=Red Hat, Inc., build-date=2026-01-12T22:36:40Z, batch=17.1_20260112.1, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, 
konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:38:26 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Deactivated successfully.
Feb 20 08:38:32 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:38:32 np0005625203.localdomain systemd[1]: tmp-crun.WNMjM2.mount: Deactivated successfully.
Feb 20 08:38:32 np0005625203.localdomain podman[93623]: 2026-02-20 08:38:32.769341842 +0000 UTC m=+0.091394048 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=nova_migration_target, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, version=17.1.13, name=rhosp-rhel9/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1)
Feb 20 08:38:33 np0005625203.localdomain podman[93623]: 2026-02-20 08:38:33.137356521 +0000 UTC m=+0.459408747 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.5, distribution-scope=public, vcs-type=git, managed_by=tripleo_ansible, container_name=nova_migration_target, name=rhosp-rhel9/openstack-nova-compute, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, architecture=x86_64, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:32:04Z)
Feb 20 08:38:33 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:38:50 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:38:50 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:38:50 np0005625203.localdomain systemd[1]: tmp-crun.KcMxiE.mount: Deactivated successfully.
Feb 20 08:38:50 np0005625203.localdomain podman[93644]: 2026-02-20 08:38:50.763004826 +0000 UTC m=+0.081821952 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_id=tripleo_step3, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., container_name=collectd, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, vcs-type=git, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2026-01-12T22:10:15Z)
Feb 20 08:38:50 np0005625203.localdomain podman[93644]: 2026-02-20 08:38:50.771648715 +0000 UTC m=+0.090465801 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.13, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, name=rhosp-rhel9/openstack-collectd, managed_by=tripleo_ansible, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, config_id=tripleo_step3, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container)
Feb 20 08:38:50 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:38:50 np0005625203.localdomain podman[93645]: 2026-02-20 08:38:50.859304066 +0000 UTC m=+0.173782957 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, version=17.1.13, architecture=x86_64, tcib_managed=true, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, release=1766032510, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, 
org.opencontainers.image.created=2026-01-12T22:34:43Z, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, distribution-scope=public, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:34:43Z, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid)
Feb 20 08:38:50 np0005625203.localdomain podman[93645]: 2026-02-20 08:38:50.869523084 +0000 UTC m=+0.184001985 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, container_name=iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-iscsid, batch=17.1_20260112.1, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:34:43Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, version=17.1.13, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, build-date=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, tcib_managed=true, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3)
Feb 20 08:38:50 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:38:56 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:38:56 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:38:56 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:38:56 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:38:56 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:38:56 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:38:56 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:38:56 np0005625203.localdomain podman[93684]: 2026-02-20 08:38:56.794325981 +0000 UTC m=+0.098729606 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, url=https://www.redhat.com, vcs-type=git, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., release=1766032510, container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:07:47Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, tcib_managed=true, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:47Z, batch=17.1_20260112.1, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container)
Feb 20 08:38:56 np0005625203.localdomain podman[93684]: 2026-02-20 08:38:56.81422759 +0000 UTC m=+0.118631205 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, architecture=x86_64, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, container_name=ceilometer_agent_compute, build-date=2026-01-12T23:07:47Z, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ceilometer-compute, release=1766032510, batch=17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:07:47Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:38:56 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:38:56 np0005625203.localdomain podman[93683]: 2026-02-20 08:38:56.853806959 +0000 UTC m=+0.164467439 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, version=17.1.13, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', 
'/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://www.redhat.com, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, release=1766032510, build-date=2026-01-12T23:32:04Z, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:32:04Z, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.5, config_id=tripleo_step5, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe)
Feb 20 08:38:56 np0005625203.localdomain podman[93703]: 2026-02-20 08:38:56.91793092 +0000 UTC m=+0.184991085 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, io.buildah.version=1.41.5, architecture=x86_64, batch=17.1_20260112.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, config_id=tripleo_step4, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, build-date=2026-01-12T22:56:19Z, maintainer=OpenStack TripleO Team)
Feb 20 08:38:56 np0005625203.localdomain podman[93710]: 2026-02-20 08:38:56.900919652 +0000 UTC m=+0.167619286 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ovn-controller, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.buildah.version=1.41.5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, container_name=ovn_controller, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:36:40Z, tcib_managed=true, version=17.1.13, config_id=tripleo_step4, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, distribution-scope=public)
Feb 20 08:38:56 np0005625203.localdomain podman[93703]: 2026-02-20 08:38:56.960829222 +0000 UTC m=+0.227889447 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, 
io.buildah.version=1.41.5, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vcs-type=git, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public)
Feb 20 08:38:56 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Deactivated successfully.
Feb 20 08:38:57 np0005625203.localdomain podman[93683]: 2026-02-20 08:38:57.00261599 +0000 UTC m=+0.313276500 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, name=rhosp-rhel9/openstack-nova-compute, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20260112.1, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, version=17.1.13, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, container_name=nova_compute, io.buildah.version=1.41.5, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Feb 20 08:38:57 np0005625203.localdomain podman[93686]: 2026-02-20 08:38:57.01487222 +0000 UTC m=+0.318149350 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, com.redhat.component=openstack-ceilometer-ipmi-container, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20260112.1, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, release=1766032510, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.5)
Feb 20 08:38:57 np0005625203.localdomain podman[93696]: 2026-02-20 08:38:56.964994411 +0000 UTC m=+0.261386557 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1766032510, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:14Z, tcib_managed=true, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, distribution-scope=public, container_name=metrics_qdr, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, name=rhosp-rhel9/openstack-qdrouterd, build-date=2026-01-12T22:10:14Z, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-qdrouterd-container)
Feb 20 08:38:57 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:38:57 np0005625203.localdomain podman[93686]: 2026-02-20 08:38:57.065962627 +0000 UTC m=+0.369239767 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:30Z, architecture=x86_64, io.buildah.version=1.41.5, distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2026-01-12T23:07:30Z, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, batch=17.1_20260112.1, 
vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, tcib_managed=true, managed_by=tripleo_ansible, vendor=Red Hat, Inc.)
Feb 20 08:38:57 np0005625203.localdomain podman[93682]: 2026-02-20 08:38:57.06543395 +0000 UTC m=+0.376865273 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.component=openstack-cron-container, build-date=2026-01-12T22:10:15Z, io.openshift.expose-services=, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-cron, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, managed_by=tripleo_ansible, version=17.1.13, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:15Z, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, container_name=logrotate_crond)
Feb 20 08:38:57 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:38:57 np0005625203.localdomain podman[93710]: 2026-02-20 08:38:57.084020928 +0000 UTC m=+0.350720572 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:36:40Z, name=rhosp-rhel9/openstack-ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, container_name=ovn_controller, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, 
build-date=2026-01-12T22:36:40Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, distribution-scope=public)
Feb 20 08:38:57 np0005625203.localdomain podman[93710]: unhealthy
Feb 20 08:38:57 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:38:57 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Failed with result 'exit-code'.
Feb 20 08:38:57 np0005625203.localdomain podman[93696]: 2026-02-20 08:38:57.166785928 +0000 UTC m=+0.463178094 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, vcs-type=git, container_name=metrics_qdr, tcib_managed=true, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, batch=17.1_20260112.1, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1766032510, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee)
Feb 20 08:38:57 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:38:57 np0005625203.localdomain podman[93682]: 2026-02-20 08:38:57.24605488 +0000 UTC m=+0.557486163 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-cron-container, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, tcib_managed=true, version=17.1.13, build-date=2026-01-12T22:10:15Z, architecture=x86_64, distribution-scope=public, io.buildah.version=1.41.5, container_name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, 
maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, vendor=Red Hat, Inc., release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4)
Feb 20 08:38:57 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:38:57 np0005625203.localdomain sshd[93855]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:38:57 np0005625203.localdomain sshd[93855]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:38:57 np0005625203.localdomain systemd[1]: tmp-crun.IFnmBY.mount: Deactivated successfully.
Feb 20 08:39:03 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:39:03 np0005625203.localdomain podman[93857]: 2026-02-20 08:39:03.762663006 +0000 UTC m=+0.082155381 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., 
architecture=x86_64, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, distribution-scope=public, container_name=nova_migration_target, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, io.openshift.expose-services=, build-date=2026-01-12T23:32:04Z, url=https://www.redhat.com, release=1766032510, org.opencontainers.image.created=2026-01-12T23:32:04Z)
Feb 20 08:39:04 np0005625203.localdomain sudo[93880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:39:04 np0005625203.localdomain sudo[93880]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:39:04 np0005625203.localdomain sudo[93880]: pam_unix(sudo:session): session closed for user root
Feb 20 08:39:04 np0005625203.localdomain podman[93857]: 2026-02-20 08:39:04.142558714 +0000 UTC m=+0.462051079 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, url=https://www.redhat.com, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, architecture=x86_64, io.buildah.version=1.41.5, container_name=nova_migration_target, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_id=tripleo_step4)
Feb 20 08:39:04 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:39:04 np0005625203.localdomain sudo[93895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 08:39:04 np0005625203.localdomain sudo[93895]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:39:04 np0005625203.localdomain sudo[93895]: pam_unix(sudo:session): session closed for user root
Feb 20 08:39:05 np0005625203.localdomain sudo[93942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:39:05 np0005625203.localdomain sudo[93942]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:39:05 np0005625203.localdomain sudo[93942]: pam_unix(sudo:session): session closed for user root
Feb 20 08:39:05 np0005625203.localdomain sudo[93957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 list-networks
Feb 20 08:39:05 np0005625203.localdomain sudo[93957]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:39:05 np0005625203.localdomain sudo[93957]: pam_unix(sudo:session): session closed for user root
Feb 20 08:39:08 np0005625203.localdomain sudo[93990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 08:39:08 np0005625203.localdomain sudo[93990]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:39:08 np0005625203.localdomain sudo[93990]: pam_unix(sudo:session): session closed for user root
Feb 20 08:39:21 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:39:21 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:39:21 np0005625203.localdomain systemd[1]: tmp-crun.4njLaE.mount: Deactivated successfully.
Feb 20 08:39:21 np0005625203.localdomain podman[94005]: 2026-02-20 08:39:21.793037322 +0000 UTC m=+0.102622887 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, release=1766032510, distribution-scope=public, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, managed_by=tripleo_ansible, 
com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, url=https://www.redhat.com, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-collectd, build-date=2026-01-12T22:10:15Z)
Feb 20 08:39:21 np0005625203.localdomain podman[94006]: 2026-02-20 08:39:21.864902564 +0000 UTC m=+0.173843389 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, release=1766032510, config_id=tripleo_step3, container_name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-type=git, name=rhosp-rhel9/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', 
'/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, vcs-ref=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:34:43Z, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., architecture=x86_64)
Feb 20 08:39:21 np0005625203.localdomain podman[94006]: 2026-02-20 08:39:21.874586325 +0000 UTC m=+0.183527140 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, maintainer=OpenStack TripleO Team, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, io.buildah.version=1.41.5, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.openshift.expose-services=, url=https://www.redhat.com, build-date=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, distribution-scope=public, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, release=1766032510)
Feb 20 08:39:21 np0005625203.localdomain podman[94005]: 2026-02-20 08:39:21.874929285 +0000 UTC m=+0.184514860 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, architecture=x86_64, container_name=collectd, io.buildah.version=1.41.5, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, build-date=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, version=17.1.13, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, distribution-scope=public, vcs-type=git)
Feb 20 08:39:21 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:39:21 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:39:27 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:39:27 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:39:27 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:39:27 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:39:27 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:39:27 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:39:27 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:39:27 np0005625203.localdomain podman[94055]: 2026-02-20 08:39:27.814215386 +0000 UTC m=+0.110973568 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:14Z, build-date=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_id=tripleo_step1, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, summary=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, distribution-scope=public, container_name=metrics_qdr, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd)
Feb 20 08:39:27 np0005625203.localdomain systemd[1]: tmp-crun.7kMDQU.mount: Deactivated successfully.
Feb 20 08:39:27 np0005625203.localdomain podman[94061]: 2026-02-20 08:39:27.847829709 +0000 UTC m=+0.143720184 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1766032510, tcib_managed=true, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, container_name=ovn_metadata_agent, architecture=x86_64, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20260112.1, distribution-scope=public, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, vendor=Red Hat, Inc., version=17.1.13, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:39:27 np0005625203.localdomain podman[94046]: 2026-02-20 08:39:27.786166204 +0000 UTC m=+0.097256191 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, release=1766032510, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., config_id=tripleo_step4, distribution-scope=public, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp-rhel9/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, build-date=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team)
Feb 20 08:39:27 np0005625203.localdomain podman[94048]: 2026-02-20 08:39:27.911477795 +0000 UTC m=+0.211165908 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.created=2026-01-12T23:07:47Z, build-date=2026-01-12T23:07:47Z, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp-rhel9/openstack-ceilometer-compute, architecture=x86_64, release=1766032510, url=https://www.redhat.com, io.buildah.version=1.41.5, tcib_managed=true)
Feb 20 08:39:27 np0005625203.localdomain podman[94049]: 2026-02-20 08:39:27.872403182 +0000 UTC m=+0.170804765 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, release=1766032510, org.opencontainers.image.created=2026-01-12T23:07:30Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, build-date=2026-01-12T23:07:30Z, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:39:27 np0005625203.localdomain podman[94047]: 2026-02-20 08:39:27.951146317 +0000 UTC m=+0.258936282 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, version=17.1.13, name=rhosp-rhel9/openstack-nova-compute, distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, release=1766032510, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc.)
Feb 20 08:39:27 np0005625203.localdomain podman[94049]: 2026-02-20 08:39:27.958183866 +0000 UTC m=+0.256585489 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:07:30Z, distribution-scope=public, config_id=tripleo_step4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, release=1766032510, build-date=2026-01-12T23:07:30Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Feb 20 08:39:27 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:39:27 np0005625203.localdomain podman[94046]: 2026-02-20 08:39:27.972201151 +0000 UTC m=+0.283291118 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step4, version=17.1.13, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=logrotate_crond, url=https://www.redhat.com, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:39:27 np0005625203.localdomain podman[94048]: 2026-02-20 08:39:27.972572403 +0000 UTC m=+0.272260486 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.buildah.version=1.41.5, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, vcs-type=git, 
vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:07:47Z, tcib_managed=true, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:07:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Feb 20 08:39:27 np0005625203.localdomain podman[94061]: 2026-02-20 08:39:27.982340157 +0000 UTC m=+0.278230632 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, build-date=2026-01-12T22:56:19Z, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, config_id=tripleo_step4, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:56:19Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, managed_by=tripleo_ansible, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:39:27 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:39:27 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Deactivated successfully.
Feb 20 08:39:27 np0005625203.localdomain podman[94055]: 2026-02-20 08:39:27.994354539 +0000 UTC m=+0.291112771 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, tcib_managed=true, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, architecture=x86_64, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, version=17.1.13, container_name=metrics_qdr, io.openshift.expose-services=, config_id=tripleo_step1)
Feb 20 08:39:28 np0005625203.localdomain podman[94047]: 2026-02-20 08:39:28.016384934 +0000 UTC m=+0.324174949 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-compute, version=17.1.13, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, build-date=2026-01-12T23:32:04Z, distribution-scope=public, container_name=nova_compute, batch=17.1_20260112.1, io.buildah.version=1.41.5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 
'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']})
Feb 20 08:39:28 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:39:28 np0005625203.localdomain podman[94066]: 2026-02-20 08:39:27.925621875 +0000 UTC m=+0.217930709 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, build-date=2026-01-12T22:36:40Z, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ovn-controller, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.expose-services=, batch=17.1_20260112.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, release=1766032510, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_id=tripleo_step4, version=17.1.13, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, 
summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:39:28 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:39:28 np0005625203.localdomain podman[94066]: 2026-02-20 08:39:28.060347818 +0000 UTC m=+0.352656642 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, name=rhosp-rhel9/openstack-ovn-controller, container_name=ovn_controller, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, distribution-scope=public, tcib_managed=true, io.buildah.version=1.41.5, config_id=tripleo_step4, url=https://www.redhat.com, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible)
Feb 20 08:39:28 np0005625203.localdomain podman[94066]: unhealthy
Feb 20 08:39:28 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:39:28 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Failed with result 'exit-code'.
Feb 20 08:39:28 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:39:29 np0005625203.localdomain sshd[94219]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:39:30 np0005625203.localdomain sshd[94219]: Received disconnect from 212.154.234.9 port 9983:11: Bye Bye [preauth]
Feb 20 08:39:30 np0005625203.localdomain sshd[94219]: Disconnected from authenticating user root 212.154.234.9 port 9983 [preauth]
Feb 20 08:39:34 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:39:34 np0005625203.localdomain podman[94221]: 2026-02-20 08:39:34.781466197 +0000 UTC m=+0.089672276 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, version=17.1.13, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, 
build-date=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=nova_migration_target, name=rhosp-rhel9/openstack-nova-compute, batch=17.1_20260112.1, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., tcib_managed=true, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com)
Feb 20 08:39:35 np0005625203.localdomain podman[94221]: 2026-02-20 08:39:35.158456683 +0000 UTC m=+0.466662782 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-type=git, build-date=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 
nova-compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., io.openshift.expose-services=)
Feb 20 08:39:35 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:39:44 np0005625203.localdomain sshd[94244]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:39:45 np0005625203.localdomain sshd[94244]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 20 08:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 3600.1 total, 600.0 interval
                                                          Cumulative writes: 4844 writes, 21K keys, 4844 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                                          Cumulative WAL: 4844 writes, 660 syncs, 7.34 writes per sync, written: 0.02 GB, 0.01 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 20 08:39:49 np0005625203.localdomain sshd[94246]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:39:49 np0005625203.localdomain sshd[94246]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 20 08:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 3600.1 total, 600.0 interval
                                                          Cumulative writes: 5843 writes, 25K keys, 5843 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                                          Cumulative WAL: 5843 writes, 764 syncs, 7.65 writes per sync, written: 0.02 GB, 0.01 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 20 08:39:52 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:39:52 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:39:52 np0005625203.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Feb 20 08:39:52 np0005625203.localdomain recover_tripleo_nova_virtqemud[94259]: 62505
Feb 20 08:39:52 np0005625203.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Feb 20 08:39:52 np0005625203.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Feb 20 08:39:52 np0005625203.localdomain podman[94248]: 2026-02-20 08:39:52.783567444 +0000 UTC m=+0.096622252 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, distribution-scope=public, release=1766032510, name=rhosp-rhel9/openstack-collectd, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, io.buildah.version=1.41.5, config_id=tripleo_step3, architecture=x86_64, managed_by=tripleo_ansible, tcib_managed=true, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git)
Feb 20 08:39:52 np0005625203.localdomain podman[94249]: 2026-02-20 08:39:52.829251322 +0000 UTC m=+0.140392571 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, container_name=iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, vendor=Red Hat, Inc., build-date=2026-01-12T22:34:43Z, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, managed_by=tripleo_ansible, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-type=git, config_id=tripleo_step3, version=17.1.13, architecture=x86_64, url=https://www.redhat.com)
Feb 20 08:39:52 np0005625203.localdomain podman[94249]: 2026-02-20 08:39:52.837794267 +0000 UTC m=+0.148935586 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.41.5, architecture=x86_64, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-type=git, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:34:43Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, config_id=tripleo_step3, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.expose-services=, tcib_managed=true, vcs-ref=705339545363fec600102567c4e923938e0f43b3, name=rhosp-rhel9/openstack-iscsid, build-date=2026-01-12T22:34:43Z, container_name=iscsid, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.13)
Feb 20 08:39:52 np0005625203.localdomain podman[94248]: 2026-02-20 08:39:52.848301574 +0000 UTC m=+0.161356382 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, release=1766032510, version=17.1.13, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd, distribution-scope=public, io.buildah.version=1.41.5, tcib_managed=true, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:39:52 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:39:52 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:39:58 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:39:58 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:39:58 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:39:58 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:39:58 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:39:58 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:39:58 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:39:58 np0005625203.localdomain systemd[1]: tmp-crun.qKssLk.mount: Deactivated successfully.
Feb 20 08:39:58 np0005625203.localdomain podman[94296]: 2026-02-20 08:39:58.795963642 +0000 UTC m=+0.094904758 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, distribution-scope=public, maintainer=OpenStack TripleO Team, vcs-type=git, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:14Z, name=rhosp-rhel9/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, container_name=metrics_qdr, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:14Z)
Feb 20 08:39:58 np0005625203.localdomain systemd[1]: tmp-crun.dv50SH.mount: Deactivated successfully.
Feb 20 08:39:58 np0005625203.localdomain podman[94292]: 2026-02-20 08:39:58.854739878 +0000 UTC m=+0.155503061 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-type=git, io.openshift.expose-services=, tcib_managed=true, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, build-date=2026-01-12T23:07:30Z, maintainer=OpenStack TripleO Team, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5, batch=17.1_20260112.1, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, vendor=Red Hat, Inc., org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:39:58 np0005625203.localdomain podman[94289]: 2026-02-20 08:39:58.902495611 +0000 UTC m=+0.212325125 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, container_name=logrotate_crond, config_id=tripleo_step4, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, 
com.redhat.component=openstack-cron-container, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, version=17.1.13, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:15Z)
Feb 20 08:39:58 np0005625203.localdomain podman[94292]: 2026-02-20 08:39:58.909139717 +0000 UTC m=+0.209902930 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20260112.1, managed_by=tripleo_ansible, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, url=https://www.redhat.com, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, release=1766032510, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack 
Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2026-01-12T23:07:30Z)
Feb 20 08:39:58 np0005625203.localdomain podman[94313]: 2026-02-20 08:39:58.817819391 +0000 UTC m=+0.104240869 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=healthy, config_id=tripleo_step4, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ovn-controller, release=1766032510, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, managed_by=tripleo_ansible, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller)
Feb 20 08:39:58 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:39:58 np0005625203.localdomain podman[94313]: 2026-02-20 08:39:58.951476321 +0000 UTC m=+0.237897819 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:36:40Z, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, 
name=rhosp-rhel9/openstack-ovn-controller, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, io.buildah.version=1.41.5, distribution-scope=public)
Feb 20 08:39:58 np0005625203.localdomain podman[94313]: unhealthy
Feb 20 08:39:58 np0005625203.localdomain podman[94289]: 2026-02-20 08:39:58.962559175 +0000 UTC m=+0.272388699 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, architecture=x86_64, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-cron, config_id=tripleo_step4, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=logrotate_crond, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:39:58 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:39:58 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Failed with result 'exit-code'.
Feb 20 08:39:58 np0005625203.localdomain podman[94308]: 2026-02-20 08:39:58.974018411 +0000 UTC m=+0.264660789 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, url=https://www.redhat.com, batch=17.1_20260112.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, vcs-type=git, maintainer=OpenStack TripleO Team, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step4, distribution-scope=public, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Feb 20 08:39:58 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:39:58 np0005625203.localdomain podman[94296]: 2026-02-20 08:39:58.994873179 +0000 UTC m=+0.293814295 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, build-date=2026-01-12T22:10:14Z, release=1766032510, url=https://www.redhat.com, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, name=rhosp-rhel9/openstack-qdrouterd, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, managed_by=tripleo_ansible, distribution-scope=public)
Feb 20 08:39:59 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:39:59 np0005625203.localdomain podman[94291]: 2026-02-20 08:39:59.014469468 +0000 UTC m=+0.320382290 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.41.5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, version=17.1.13, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
ceilometer-compute, build-date=2026-01-12T23:07:47Z, distribution-scope=public, name=rhosp-rhel9/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510)
Feb 20 08:39:59 np0005625203.localdomain podman[94308]: 2026-02-20 08:39:59.041681693 +0000 UTC m=+0.332324121 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20260112.1, config_id=tripleo_step4, managed_by=tripleo_ansible, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, version=17.1.13, 
vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:56:19Z, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc.)
Feb 20 08:39:59 np0005625203.localdomain podman[94308]: unhealthy
Feb 20 08:39:59 np0005625203.localdomain podman[94291]: 2026-02-20 08:39:59.048609148 +0000 UTC m=+0.354521940 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ceilometer-compute, build-date=2026-01-12T23:07:47Z, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, tcib_managed=true, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, 
konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, release=1766032510, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Feb 20 08:39:59 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:39:59 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Failed with result 'exit-code'.
Feb 20 08:39:59 np0005625203.localdomain podman[94290]: 2026-02-20 08:39:59.060734644 +0000 UTC m=+0.368215475 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, vcs-type=git, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2026-01-12T23:32:04Z, url=https://www.redhat.com, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, tcib_managed=true, config_id=tripleo_step5, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Feb 20 08:39:59 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:39:59 np0005625203.localdomain podman[94290]: 2026-02-20 08:39:59.117300892 +0000 UTC m=+0.424781713 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', 
'/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:32:04Z, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., version=17.1.13, com.redhat.component=openstack-nova-compute-container, batch=17.1_20260112.1, io.openshift.expose-services=, vcs-type=git, config_id=tripleo_step5, release=1766032510, managed_by=tripleo_ansible, container_name=nova_compute, name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:39:59 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:40:05 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:40:05 np0005625203.localdomain podman[94455]: 2026-02-20 08:40:05.769965741 +0000 UTC m=+0.083959798 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1766032510, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, 
distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, url=https://www.redhat.com, config_id=tripleo_step4, container_name=nova_migration_target, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, io.buildah.version=1.41.5, vcs-type=git)
Feb 20 08:40:06 np0005625203.localdomain podman[94455]: 2026-02-20 08:40:06.114661315 +0000 UTC m=+0.428655332 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, vcs-type=git, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:32:04Z, 
maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13)
Feb 20 08:40:06 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:40:09 np0005625203.localdomain sudo[94478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:40:09 np0005625203.localdomain sudo[94478]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:40:09 np0005625203.localdomain sudo[94478]: pam_unix(sudo:session): session closed for user root
Feb 20 08:40:09 np0005625203.localdomain sudo[94493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 08:40:09 np0005625203.localdomain sudo[94493]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:40:09 np0005625203.localdomain sudo[94493]: pam_unix(sudo:session): session closed for user root
Feb 20 08:40:10 np0005625203.localdomain sudo[94539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:40:10 np0005625203.localdomain sudo[94539]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:40:10 np0005625203.localdomain sudo[94539]: pam_unix(sudo:session): session closed for user root
Feb 20 08:40:10 np0005625203.localdomain sudo[94554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ceph-volume --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8 -- inventory --format=json-pretty --filter-for-batch
Feb 20 08:40:10 np0005625203.localdomain sudo[94554]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:40:10 np0005625203.localdomain podman[94610]: 
Feb 20 08:40:10 np0005625203.localdomain podman[94610]: 2026-02-20 08:40:10.742002023 +0000 UTC m=+0.067030992 container create 1cd72cd719884460fa6bf41a507b018708e40b5ffd402620f280531ebbb2fb8b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_hertz, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, com.redhat.component=rhceph-container, GIT_CLEAN=True, GIT_BRANCH=main, vcs-type=git, build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, vendor=Red Hat, Inc., distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Feb 20 08:40:10 np0005625203.localdomain systemd[1]: Started libpod-conmon-1cd72cd719884460fa6bf41a507b018708e40b5ffd402620f280531ebbb2fb8b.scope.
Feb 20 08:40:10 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 08:40:10 np0005625203.localdomain podman[94610]: 2026-02-20 08:40:10.718645018 +0000 UTC m=+0.043674027 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 08:40:10 np0005625203.localdomain podman[94610]: 2026-02-20 08:40:10.821204223 +0000 UTC m=+0.146233202 container init 1cd72cd719884460fa6bf41a507b018708e40b5ffd402620f280531ebbb2fb8b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_hertz, org.opencontainers.image.created=2026-02-09T10:25:24Z, build-date=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, distribution-scope=public, architecture=x86_64, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, io.buildah.version=1.42.2, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-type=git, version=7)
Feb 20 08:40:10 np0005625203.localdomain podman[94610]: 2026-02-20 08:40:10.830803421 +0000 UTC m=+0.155832390 container start 1cd72cd719884460fa6bf41a507b018708e40b5ffd402620f280531ebbb2fb8b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_hertz, distribution-scope=public, vcs-type=git, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1770267347, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.42.2, description=Red Hat Ceph Storage 7, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., build-date=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7)
Feb 20 08:40:10 np0005625203.localdomain podman[94610]: 2026-02-20 08:40:10.831064009 +0000 UTC m=+0.156092978 container attach 1cd72cd719884460fa6bf41a507b018708e40b5ffd402620f280531ebbb2fb8b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_hertz, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, release=1770267347, version=7, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, CEPH_POINT_RELEASE=, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, com.redhat.component=rhceph-container, vcs-type=git, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhceph ceph)
Feb 20 08:40:10 np0005625203.localdomain systemd[1]: libpod-1cd72cd719884460fa6bf41a507b018708e40b5ffd402620f280531ebbb2fb8b.scope: Deactivated successfully.
Feb 20 08:40:10 np0005625203.localdomain admiring_hertz[94625]: 167 167
Feb 20 08:40:10 np0005625203.localdomain podman[94610]: 2026-02-20 08:40:10.834205087 +0000 UTC m=+0.159234066 container died 1cd72cd719884460fa6bf41a507b018708e40b5ffd402620f280531ebbb2fb8b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_hertz, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, description=Red Hat Ceph Storage 7, vcs-type=git, RELEASE=main, io.openshift.expose-services=, release=1770267347, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, build-date=2026-02-09T10:25:24Z, GIT_BRANCH=main, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, version=7, name=rhceph)
Feb 20 08:40:10 np0005625203.localdomain podman[94630]: 2026-02-20 08:40:10.925666937 +0000 UTC m=+0.082565025 container remove 1cd72cd719884460fa6bf41a507b018708e40b5ffd402620f280531ebbb2fb8b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_hertz, name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1770267347, distribution-scope=public, ceph=True, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.buildah.version=1.42.2, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, RELEASE=main, build-date=2026-02-09T10:25:24Z, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, description=Red Hat Ceph Storage 7)
Feb 20 08:40:10 np0005625203.localdomain systemd[1]: libpod-conmon-1cd72cd719884460fa6bf41a507b018708e40b5ffd402620f280531ebbb2fb8b.scope: Deactivated successfully.
Feb 20 08:40:11 np0005625203.localdomain podman[94651]: 
Feb 20 08:40:11 np0005625203.localdomain podman[94651]: 2026-02-20 08:40:11.147099584 +0000 UTC m=+0.075016361 container create 6ecdc16a50c8ac75cfe2d36863ba56a93cd87d293ae711a27c4897538638332c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_northcutt, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, distribution-scope=public, name=rhceph, release=1770267347, io.openshift.expose-services=, build-date=2026-02-09T10:25:24Z, ceph=True, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main)
Feb 20 08:40:11 np0005625203.localdomain systemd[1]: Started libpod-conmon-6ecdc16a50c8ac75cfe2d36863ba56a93cd87d293ae711a27c4897538638332c.scope.
Feb 20 08:40:11 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 08:40:11 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd2f8522fd491a1f3beb0bb5d2afd2fbf6b8301e9005f6e42291fcd822f9f572/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 20 08:40:11 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd2f8522fd491a1f3beb0bb5d2afd2fbf6b8301e9005f6e42291fcd822f9f572/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 20 08:40:11 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd2f8522fd491a1f3beb0bb5d2afd2fbf6b8301e9005f6e42291fcd822f9f572/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 20 08:40:11 np0005625203.localdomain podman[94651]: 2026-02-20 08:40:11.20880429 +0000 UTC m=+0.136721067 container init 6ecdc16a50c8ac75cfe2d36863ba56a93cd87d293ae711a27c4897538638332c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_northcutt, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, version=7, RELEASE=main, ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.42.2, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-09T10:25:24Z, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-type=git, release=1770267347, name=rhceph, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64)
Feb 20 08:40:11 np0005625203.localdomain podman[94651]: 2026-02-20 08:40:11.11769841 +0000 UTC m=+0.045615227 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 08:40:11 np0005625203.localdomain podman[94651]: 2026-02-20 08:40:11.218423788 +0000 UTC m=+0.146340565 container start 6ecdc16a50c8ac75cfe2d36863ba56a93cd87d293ae711a27c4897538638332c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_northcutt, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.42.2, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, name=rhceph, ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, RELEASE=main, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, release=1770267347, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.created=2026-02-09T10:25:24Z)
Feb 20 08:40:11 np0005625203.localdomain podman[94651]: 2026-02-20 08:40:11.218726697 +0000 UTC m=+0.146643484 container attach 6ecdc16a50c8ac75cfe2d36863ba56a93cd87d293ae711a27c4897538638332c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_northcutt, vcs-type=git, architecture=x86_64, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, version=7, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1770267347, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, RELEASE=main, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Feb 20 08:40:11 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-4ccb4f9d825ad4201931473be3e394bf8a019658db207b266bd307e15cb39829-merged.mount: Deactivated successfully.
Feb 20 08:40:12 np0005625203.localdomain eager_northcutt[94666]: [
Feb 20 08:40:12 np0005625203.localdomain eager_northcutt[94666]:     {
Feb 20 08:40:12 np0005625203.localdomain eager_northcutt[94666]:         "available": false,
Feb 20 08:40:12 np0005625203.localdomain eager_northcutt[94666]:         "ceph_device": false,
Feb 20 08:40:12 np0005625203.localdomain eager_northcutt[94666]:         "device_id": "QEMU_DVD-ROM_QM00001",
Feb 20 08:40:12 np0005625203.localdomain eager_northcutt[94666]:         "lsm_data": {},
Feb 20 08:40:12 np0005625203.localdomain eager_northcutt[94666]:         "lvs": [],
Feb 20 08:40:12 np0005625203.localdomain eager_northcutt[94666]:         "path": "/dev/sr0",
Feb 20 08:40:12 np0005625203.localdomain eager_northcutt[94666]:         "rejected_reasons": [
Feb 20 08:40:12 np0005625203.localdomain eager_northcutt[94666]:             "Has a FileSystem",
Feb 20 08:40:12 np0005625203.localdomain eager_northcutt[94666]:             "Insufficient space (<5GB)"
Feb 20 08:40:12 np0005625203.localdomain eager_northcutt[94666]:         ],
Feb 20 08:40:12 np0005625203.localdomain eager_northcutt[94666]:         "sys_api": {
Feb 20 08:40:12 np0005625203.localdomain eager_northcutt[94666]:             "actuators": null,
Feb 20 08:40:12 np0005625203.localdomain eager_northcutt[94666]:             "device_nodes": "sr0",
Feb 20 08:40:12 np0005625203.localdomain eager_northcutt[94666]:             "human_readable_size": "482.00 KB",
Feb 20 08:40:12 np0005625203.localdomain eager_northcutt[94666]:             "id_bus": "ata",
Feb 20 08:40:12 np0005625203.localdomain eager_northcutt[94666]:             "model": "QEMU DVD-ROM",
Feb 20 08:40:12 np0005625203.localdomain eager_northcutt[94666]:             "nr_requests": "2",
Feb 20 08:40:12 np0005625203.localdomain eager_northcutt[94666]:             "partitions": {},
Feb 20 08:40:12 np0005625203.localdomain eager_northcutt[94666]:             "path": "/dev/sr0",
Feb 20 08:40:12 np0005625203.localdomain eager_northcutt[94666]:             "removable": "1",
Feb 20 08:40:12 np0005625203.localdomain eager_northcutt[94666]:             "rev": "2.5+",
Feb 20 08:40:12 np0005625203.localdomain eager_northcutt[94666]:             "ro": "0",
Feb 20 08:40:12 np0005625203.localdomain eager_northcutt[94666]:             "rotational": "1",
Feb 20 08:40:12 np0005625203.localdomain eager_northcutt[94666]:             "sas_address": "",
Feb 20 08:40:12 np0005625203.localdomain eager_northcutt[94666]:             "sas_device_handle": "",
Feb 20 08:40:12 np0005625203.localdomain eager_northcutt[94666]:             "scheduler_mode": "mq-deadline",
Feb 20 08:40:12 np0005625203.localdomain eager_northcutt[94666]:             "sectors": 0,
Feb 20 08:40:12 np0005625203.localdomain eager_northcutt[94666]:             "sectorsize": "2048",
Feb 20 08:40:12 np0005625203.localdomain eager_northcutt[94666]:             "size": 493568.0,
Feb 20 08:40:12 np0005625203.localdomain eager_northcutt[94666]:             "support_discard": "0",
Feb 20 08:40:12 np0005625203.localdomain eager_northcutt[94666]:             "type": "disk",
Feb 20 08:40:12 np0005625203.localdomain eager_northcutt[94666]:             "vendor": "QEMU"
Feb 20 08:40:12 np0005625203.localdomain eager_northcutt[94666]:         }
Feb 20 08:40:12 np0005625203.localdomain eager_northcutt[94666]:     }
Feb 20 08:40:12 np0005625203.localdomain eager_northcutt[94666]: ]
Feb 20 08:40:12 np0005625203.localdomain systemd[1]: libpod-6ecdc16a50c8ac75cfe2d36863ba56a93cd87d293ae711a27c4897538638332c.scope: Deactivated successfully.
Feb 20 08:40:12 np0005625203.localdomain systemd[1]: libpod-6ecdc16a50c8ac75cfe2d36863ba56a93cd87d293ae711a27c4897538638332c.scope: Consumed 1.005s CPU time.
Feb 20 08:40:12 np0005625203.localdomain podman[94651]: 2026-02-20 08:40:12.185417158 +0000 UTC m=+1.113333935 container died 6ecdc16a50c8ac75cfe2d36863ba56a93cd87d293ae711a27c4897538638332c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_northcutt, org.opencontainers.image.created=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.buildah.version=1.42.2, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, build-date=2026-02-09T10:25:24Z, io.openshift.expose-services=, name=rhceph, ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, vcs-type=git)
Feb 20 08:40:12 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-bd2f8522fd491a1f3beb0bb5d2afd2fbf6b8301e9005f6e42291fcd822f9f572-merged.mount: Deactivated successfully.
Feb 20 08:40:12 np0005625203.localdomain podman[96603]: 2026-02-20 08:40:12.274962659 +0000 UTC m=+0.079529230 container remove 6ecdc16a50c8ac75cfe2d36863ba56a93cd87d293ae711a27c4897538638332c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=eager_northcutt, release=1770267347, io.openshift.expose-services=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, version=7, name=rhceph, GIT_CLEAN=True, com.redhat.component=rhceph-container, vcs-type=git, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, io.buildah.version=1.42.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Feb 20 08:40:12 np0005625203.localdomain systemd[1]: libpod-conmon-6ecdc16a50c8ac75cfe2d36863ba56a93cd87d293ae711a27c4897538638332c.scope: Deactivated successfully.
Feb 20 08:40:12 np0005625203.localdomain sudo[94554]: pam_unix(sudo:session): session closed for user root
Feb 20 08:40:12 np0005625203.localdomain sudo[96617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 08:40:12 np0005625203.localdomain sudo[96617]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:40:12 np0005625203.localdomain sudo[96617]: pam_unix(sudo:session): session closed for user root
Feb 20 08:40:23 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:40:23 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:40:23 np0005625203.localdomain podman[96633]: 2026-02-20 08:40:23.774606657 +0000 UTC m=+0.087085765 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_id=tripleo_step3, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, com.redhat.component=openstack-iscsid-container, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:34:43Z, maintainer=OpenStack TripleO Team, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, distribution-scope=public, batch=17.1_20260112.1, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, name=rhosp-rhel9/openstack-iscsid)
Feb 20 08:40:23 np0005625203.localdomain podman[96633]: 2026-02-20 08:40:23.785701522 +0000 UTC m=+0.098180610 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, managed_by=tripleo_ansible, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, build-date=2026-01-12T22:34:43Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=705339545363fec600102567c4e923938e0f43b3, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, vcs-type=git, batch=17.1_20260112.1, config_id=tripleo_step3, name=rhosp-rhel9/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13)
Feb 20 08:40:23 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:40:23 np0005625203.localdomain podman[96632]: 2026-02-20 08:40:23.876174402 +0000 UTC m=+0.188829526 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, release=1766032510, name=rhosp-rhel9/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, version=17.1.13, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.created=2026-01-12T22:10:15Z, tcib_managed=true, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, batch=17.1_20260112.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd)
Feb 20 08:40:23 np0005625203.localdomain podman[96632]: 2026-02-20 08:40:23.913417028 +0000 UTC m=+0.226072152 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, batch=17.1_20260112.1, version=17.1.13, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp-rhel9/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, container_name=collectd, com.redhat.component=openstack-collectd-container, distribution-scope=public, io.buildah.version=1.41.5, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true)
Feb 20 08:40:23 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:40:29 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:40:29 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:40:29 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:40:29 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:40:29 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:40:29 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:40:29 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:40:29 np0005625203.localdomain systemd[1]: tmp-crun.LV9TvA.mount: Deactivated successfully.
Feb 20 08:40:29 np0005625203.localdomain podman[96677]: 2026-02-20 08:40:29.798180494 +0000 UTC m=+0.100269245 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, tcib_managed=true, config_id=tripleo_step4, architecture=x86_64, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:30Z, org.opencontainers.image.created=2026-01-12T23:07:30Z, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, vendor=Red Hat, Inc., version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:40:29 np0005625203.localdomain podman[96674]: 2026-02-20 08:40:29.813421057 +0000 UTC m=+0.115226019 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, container_name=ceilometer_agent_compute, release=1766032510, version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, tcib_managed=true, distribution-scope=public, io.buildah.version=1.41.5, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp-rhel9/openstack-ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, config_id=tripleo_step4, vendor=Red Hat, Inc., org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2026-01-12T23:07:47Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Feb 20 08:40:29 np0005625203.localdomain podman[96672]: 2026-02-20 08:40:29.767920674 +0000 UTC m=+0.079673855 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20260112.1, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, vendor=Red Hat, Inc., url=https://www.redhat.com, name=rhosp-rhel9/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, container_name=logrotate_crond, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.created=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, version=17.1.13, config_id=tripleo_step4, io.openshift.expose-services=)
Feb 20 08:40:29 np0005625203.localdomain podman[96674]: 2026-02-20 08:40:29.842528361 +0000 UTC m=+0.144333253 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.buildah.version=1.41.5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20260112.1, build-date=2026-01-12T23:07:47Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_compute, 
org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:07:47Z, architecture=x86_64, tcib_managed=true, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13)
Feb 20 08:40:29 np0005625203.localdomain podman[96677]: 2026-02-20 08:40:29.848194397 +0000 UTC m=+0.150283138 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, url=https://www.redhat.com, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., config_id=tripleo_step4, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-ipmi, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, distribution-scope=public, build-date=2026-01-12T23:07:30Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06)
Feb 20 08:40:29 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:40:29 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:40:29 np0005625203.localdomain podman[96693]: 2026-02-20 08:40:29.900926794 +0000 UTC m=+0.189672970 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, version=17.1.13, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:36:40Z, batch=17.1_20260112.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, managed_by=tripleo_ansible, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2026-01-12T22:36:40Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller)
Feb 20 08:40:29 np0005625203.localdomain podman[96687]: 2026-02-20 08:40:29.85311456 +0000 UTC m=+0.146100858 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack TripleO Team, version=17.1.13, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, build-date=2026-01-12T22:56:19Z, vcs-type=git, release=1766032510, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, url=https://www.redhat.com, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:56:19Z, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, config_id=tripleo_step4, architecture=x86_64, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:40:29 np0005625203.localdomain podman[96693]: 2026-02-20 08:40:29.919099959 +0000 UTC m=+0.207846155 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, architecture=x86_64, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, distribution-scope=public, version=17.1.13, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, config_id=tripleo_step4, build-date=2026-01-12T22:36:40Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, 
konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:36:40Z)
Feb 20 08:40:29 np0005625203.localdomain podman[96693]: unhealthy
Feb 20 08:40:29 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:40:29 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Failed with result 'exit-code'.
Feb 20 08:40:29 np0005625203.localdomain podman[96672]: 2026-02-20 08:40:29.952769035 +0000 UTC m=+0.264522256 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, name=rhosp-rhel9/openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, url=https://www.redhat.com, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, batch=17.1_20260112.1, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, distribution-scope=public, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:15Z)
Feb 20 08:40:29 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:40:29 np0005625203.localdomain podman[96673]: 2026-02-20 08:40:29.969502914 +0000 UTC m=+0.276958841 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, version=17.1.13, io.openshift.expose-services=, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, build-date=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, config_id=tripleo_step5, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, tcib_managed=true, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1)
Feb 20 08:40:30 np0005625203.localdomain podman[96681]: 2026-02-20 08:40:30.02153169 +0000 UTC m=+0.317377517 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, distribution-scope=public, config_id=tripleo_step1, release=1766032510, container_name=metrics_qdr, version=17.1.13)
Feb 20 08:40:30 np0005625203.localdomain podman[96673]: 2026-02-20 08:40:30.025274896 +0000 UTC m=+0.332730794 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, tcib_managed=true, architecture=x86_64, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp-rhel9/openstack-nova-compute, config_id=tripleo_step5, build-date=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, io.buildah.version=1.41.5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe)
Feb 20 08:40:30 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:40:30 np0005625203.localdomain podman[96687]: 2026-02-20 08:40:30.037236438 +0000 UTC m=+0.330222756 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, url=https://www.redhat.com, managed_by=tripleo_ansible, release=1766032510, build-date=2026-01-12T22:56:19Z, distribution-scope=public, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, vcs-type=git, tcib_managed=true, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, version=17.1.13, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f)
Feb 20 08:40:30 np0005625203.localdomain podman[96687]: unhealthy
Feb 20 08:40:30 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:40:30 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Failed with result 'exit-code'.
Feb 20 08:40:30 np0005625203.localdomain podman[96681]: 2026-02-20 08:40:30.248840638 +0000 UTC m=+0.544686425 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-qdrouterd, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, container_name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, url=https://www.redhat.com, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 
qdrouterd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step1, vendor=Red Hat, Inc., architecture=x86_64, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.created=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true)
Feb 20 08:40:30 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:40:32 np0005625203.localdomain sshd[96832]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:40:32 np0005625203.localdomain sshd[96832]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:40:34 np0005625203.localdomain sshd[96834]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:40:36 np0005625203.localdomain sshd[96834]: Received disconnect from 102.210.148.92 port 38488:11: Bye Bye [preauth]
Feb 20 08:40:36 np0005625203.localdomain sshd[96834]: Disconnected from authenticating user root 102.210.148.92 port 38488 [preauth]
Feb 20 08:40:36 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:40:36 np0005625203.localdomain podman[96836]: 2026-02-20 08:40:36.322859662 +0000 UTC m=+0.077899460 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, distribution-scope=public, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, url=https://www.redhat.com, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z)
Feb 20 08:40:36 np0005625203.localdomain podman[96836]: 2026-02-20 08:40:36.683097338 +0000 UTC m=+0.438137136 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, config_id=tripleo_step4, build-date=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack 
Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, release=1766032510, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Feb 20 08:40:36 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:40:54 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:40:54 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:40:54 np0005625203.localdomain podman[96857]: 2026-02-20 08:40:54.759921989 +0000 UTC m=+0.078007794 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, version=17.1.13, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.5, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-collectd, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container)
Feb 20 08:40:54 np0005625203.localdomain podman[96857]: 2026-02-20 08:40:54.769080603 +0000 UTC m=+0.087166408 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, io.buildah.version=1.41.5, managed_by=tripleo_ansible, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, url=https://www.redhat.com, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, tcib_managed=true, architecture=x86_64, name=rhosp-rhel9/openstack-collectd, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=collectd)
Feb 20 08:40:54 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:40:54 np0005625203.localdomain podman[96858]: 2026-02-20 08:40:54.870031618 +0000 UTC m=+0.184007904 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, tcib_managed=true, vcs-ref=705339545363fec600102567c4e923938e0f43b3, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, container_name=iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, vendor=Red Hat, Inc., version=17.1.13, managed_by=tripleo_ansible, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, architecture=x86_64, vcs-type=git, name=rhosp-rhel9/openstack-iscsid, distribution-scope=public, build-date=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid)
Feb 20 08:40:54 np0005625203.localdomain podman[96858]: 2026-02-20 08:40:54.907277165 +0000 UTC m=+0.221253441 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, release=1766032510, tcib_managed=true, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:34:43Z, io.buildah.version=1.41.5, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, maintainer=OpenStack TripleO Team, container_name=iscsid, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, managed_by=tripleo_ansible, vendor=Red Hat, Inc., version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']})
Feb 20 08:40:54 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:41:00 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:41:00 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:41:00 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:41:00 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:41:00 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:41:00 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:41:00 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:41:00 np0005625203.localdomain podman[96912]: 2026-02-20 08:41:00.835979096 +0000 UTC m=+0.122942269 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=healthy, architecture=x86_64, distribution-scope=public, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, config_id=tripleo_step4, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, 
io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, build-date=2026-01-12T22:56:19Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, release=1766032510, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn)
Feb 20 08:41:00 np0005625203.localdomain systemd[1]: tmp-crun.bsheFu.mount: Deactivated successfully.
Feb 20 08:41:00 np0005625203.localdomain podman[96897]: 2026-02-20 08:41:00.802143345 +0000 UTC m=+0.106532139 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, architecture=x86_64, version=17.1.13, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', 
'/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, config_id=tripleo_step5, container_name=nova_compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, build-date=2026-01-12T23:32:04Z, release=1766032510, batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, io.openshift.expose-services=)
Feb 20 08:41:00 np0005625203.localdomain podman[96905]: 2026-02-20 08:41:00.866827414 +0000 UTC m=+0.164984115 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, release=1766032510, distribution-scope=public, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, container_name=metrics_qdr, batch=17.1_20260112.1, tcib_managed=true, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd)
Feb 20 08:41:00 np0005625203.localdomain podman[96912]: 2026-02-20 08:41:00.875229115 +0000 UTC m=+0.162192308 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, io.openshift.expose-services=, version=17.1.13, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, release=1766032510, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20260112.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.5, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step4, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Feb 20 08:41:00 np0005625203.localdomain podman[96912]: unhealthy
Feb 20 08:41:00 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:41:00 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Failed with result 'exit-code'.
Feb 20 08:41:00 np0005625203.localdomain podman[96897]: 2026-02-20 08:41:00.892264134 +0000 UTC m=+0.196652938 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, io.openshift.expose-services=, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:32:04Z, container_name=nova_compute, version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step5, name=rhosp-rhel9/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, vendor=Red Hat, Inc.)
Feb 20 08:41:00 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:41:00 np0005625203.localdomain podman[96899]: 2026-02-20 08:41:00.95912294 +0000 UTC m=+0.255187785 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:07:30Z, managed_by=tripleo_ansible, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ceilometer-ipmi, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.5, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, build-date=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, 
config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, io.openshift.expose-services=, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com)
Feb 20 08:41:01 np0005625203.localdomain podman[96898]: 2026-02-20 08:41:01.011708883 +0000 UTC m=+0.307748248 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, distribution-scope=public, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:47Z, org.opencontainers.image.created=2026-01-12T23:07:47Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1766032510, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
name=rhosp-rhel9/openstack-ceilometer-compute, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, architecture=x86_64, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-compute-container)
Feb 20 08:41:01 np0005625203.localdomain podman[96899]: 2026-02-20 08:41:01.021548668 +0000 UTC m=+0.317613483 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, version=17.1.13, io.openshift.expose-services=, build-date=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:07:30Z)
Feb 20 08:41:01 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:41:01 np0005625203.localdomain podman[96905]: 2026-02-20 08:41:01.056283118 +0000 UTC m=+0.354439819 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=metrics_qdr, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1766032510, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-type=git, architecture=x86_64, url=https://www.redhat.com, name=rhosp-rhel9/openstack-qdrouterd, build-date=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, distribution-scope=public, version=17.1.13)
Feb 20 08:41:01 np0005625203.localdomain podman[96896]: 2026-02-20 08:41:01.066645589 +0000 UTC m=+0.375657567 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, url=https://www.redhat.com, batch=17.1_20260112.1, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, version=17.1.13, tcib_managed=true, name=rhosp-rhel9/openstack-cron, io.buildah.version=1.41.5, release=1766032510, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z)
Feb 20 08:41:01 np0005625203.localdomain podman[96898]: 2026-02-20 08:41:01.067204087 +0000 UTC m=+0.363243452 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., config_id=tripleo_step4, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:07:47Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, build-date=2026-01-12T23:07:47Z, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, batch=17.1_20260112.1, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-compute-container)
Feb 20 08:41:01 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:41:01 np0005625203.localdomain podman[96896]: 2026-02-20 08:41:01.077187706 +0000 UTC m=+0.386199614 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:10:15Z, build-date=2026-01-12T22:10:15Z, url=https://www.redhat.com, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, 
vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-cron, release=1766032510, version=17.1.13, io.openshift.expose-services=, tcib_managed=true, vcs-type=git)
Feb 20 08:41:01 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:41:01 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:41:01 np0005625203.localdomain podman[96916]: 2026-02-20 08:41:01.119960554 +0000 UTC m=+0.405193993 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, architecture=x86_64, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ovn-controller, url=https://www.redhat.com, io.openshift.expose-services=, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, container_name=ovn_controller, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, io.buildah.version=1.41.5, vendor=Red Hat, Inc., vcs-type=git)
Feb 20 08:41:01 np0005625203.localdomain podman[96916]: 2026-02-20 08:41:01.204353895 +0000 UTC m=+0.489587384 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2026-01-12T22:36:40Z, version=17.1.13, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_id=tripleo_step4, container_name=ovn_controller, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.buildah.version=1.41.5, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, managed_by=tripleo_ansible, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ovn-controller, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:41:01 np0005625203.localdomain podman[96916]: unhealthy
Feb 20 08:41:01 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:41:01 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Failed with result 'exit-code'.
Feb 20 08:41:07 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:41:07 np0005625203.localdomain podman[97065]: 2026-02-20 08:41:07.767360231 +0000 UTC m=+0.087205849 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, build-date=2026-01-12T23:32:04Z, batch=17.1_20260112.1, 
distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=nova_migration_target, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-nova-compute, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, release=1766032510, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute)
Feb 20 08:41:08 np0005625203.localdomain podman[97065]: 2026-02-20 08:41:08.160456349 +0000 UTC m=+0.480301977 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, name=rhosp-rhel9/openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, container_name=nova_migration_target, url=https://www.redhat.com, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, 
com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, vcs-type=git, batch=17.1_20260112.1, io.openshift.expose-services=)
Feb 20 08:41:08 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:41:13 np0005625203.localdomain sudo[97089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:41:13 np0005625203.localdomain sudo[97089]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:41:13 np0005625203.localdomain sudo[97089]: pam_unix(sudo:session): session closed for user root
Feb 20 08:41:13 np0005625203.localdomain sudo[97104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 check-host
Feb 20 08:41:13 np0005625203.localdomain sudo[97104]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:41:13 np0005625203.localdomain sudo[97104]: pam_unix(sudo:session): session closed for user root
Feb 20 08:41:13 np0005625203.localdomain sudo[97141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:41:13 np0005625203.localdomain sudo[97141]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:41:13 np0005625203.localdomain sudo[97141]: pam_unix(sudo:session): session closed for user root
Feb 20 08:41:13 np0005625203.localdomain sudo[97156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 08:41:13 np0005625203.localdomain sudo[97156]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:41:14 np0005625203.localdomain sudo[97156]: pam_unix(sudo:session): session closed for user root
Feb 20 08:41:15 np0005625203.localdomain sudo[97203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 08:41:15 np0005625203.localdomain sudo[97203]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:41:15 np0005625203.localdomain sudo[97203]: pam_unix(sudo:session): session closed for user root
Feb 20 08:41:17 np0005625203.localdomain sshd[97218]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:41:17 np0005625203.localdomain sshd[97218]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:41:17 np0005625203.localdomain sshd[97220]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:41:18 np0005625203.localdomain sshd[97220]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:41:23 np0005625203.localdomain sshd[97222]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:41:24 np0005625203.localdomain sshd[97222]: Invalid user help from 103.200.25.162 port 57580
Feb 20 08:41:25 np0005625203.localdomain sshd[97222]: Received disconnect from 103.200.25.162 port 57580:11: Bye Bye [preauth]
Feb 20 08:41:25 np0005625203.localdomain sshd[97222]: Disconnected from invalid user help 103.200.25.162 port 57580 [preauth]
Feb 20 08:41:25 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:41:25 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:41:25 np0005625203.localdomain podman[97224]: 2026-02-20 08:41:25.140807586 +0000 UTC m=+0.091976767 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp-rhel9/openstack-collectd, vcs-type=git, container_name=collectd, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, release=1766032510, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, distribution-scope=public, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, url=https://www.redhat.com, managed_by=tripleo_ansible, architecture=x86_64, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee)
Feb 20 08:41:25 np0005625203.localdomain podman[97224]: 2026-02-20 08:41:25.180302943 +0000 UTC m=+0.131472104 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, container_name=collectd, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red 
Hat, Inc., com.redhat.component=openstack-collectd-container, distribution-scope=public, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, url=https://www.redhat.com, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-collectd, batch=17.1_20260112.1, config_id=tripleo_step3, build-date=2026-01-12T22:10:15Z)
Feb 20 08:41:25 np0005625203.localdomain systemd[1]: tmp-crun.QuLOmg.mount: Deactivated successfully.
Feb 20 08:41:25 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:41:25 np0005625203.localdomain podman[97225]: 2026-02-20 08:41:25.20308295 +0000 UTC m=+0.150346019 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vcs-type=git, maintainer=OpenStack TripleO Team, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, 
vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp-rhel9/openstack-iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.buildah.version=1.41.5, url=https://www.redhat.com, build-date=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, release=1766032510, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z)
Feb 20 08:41:25 np0005625203.localdomain podman[97225]: 2026-02-20 08:41:25.242482134 +0000 UTC m=+0.189745203 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, tcib_managed=true, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:34:43Z, version=17.1.13, release=1766032510, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, name=rhosp-rhel9/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, build-date=2026-01-12T22:34:43Z, managed_by=tripleo_ansible, io.buildah.version=1.41.5, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid)
Feb 20 08:41:25 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:41:31 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:41:31 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:41:31 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:41:31 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:41:31 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:41:31 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:41:31 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:41:31 np0005625203.localdomain systemd[1]: tmp-crun.3rQSvO.mount: Deactivated successfully.
Feb 20 08:41:31 np0005625203.localdomain systemd[1]: tmp-crun.JfyhzX.mount: Deactivated successfully.
Feb 20 08:41:31 np0005625203.localdomain podman[97264]: 2026-02-20 08:41:31.78324824 +0000 UTC m=+0.092093971 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-type=git, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', 
'/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_compute, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., version=17.1.13, build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team)
Feb 20 08:41:31 np0005625203.localdomain podman[97284]: 2026-02-20 08:41:31.839755215 +0000 UTC m=+0.138941345 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git, managed_by=tripleo_ansible, batch=17.1_20260112.1, config_id=tripleo_step4, vendor=Red Hat, Inc., build-date=2026-01-12T22:36:40Z, architecture=x86_64, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:36:40Z, release=1766032510, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, distribution-scope=public, 
url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.13)
Feb 20 08:41:31 np0005625203.localdomain podman[97266]: 2026-02-20 08:41:31.844616766 +0000 UTC m=+0.148543994 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, container_name=ceilometer_agent_ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.created=2026-01-12T23:07:30Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.13, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20260112.1)
Feb 20 08:41:31 np0005625203.localdomain podman[97265]: 2026-02-20 08:41:31.806320816 +0000 UTC m=+0.115790286 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, batch=17.1_20260112.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, release=1766032510, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, managed_by=tripleo_ansible, io.openshift.expose-services=, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5)
Feb 20 08:41:31 np0005625203.localdomain podman[97265]: 2026-02-20 08:41:31.885361362 +0000 UTC m=+0.194830822 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-compute, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:07:47Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, url=https://www.redhat.com, io.openshift.expose-services=, container_name=ceilometer_agent_compute, architecture=x86_64, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, tcib_managed=true, release=1766032510, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1)
Feb 20 08:41:31 np0005625203.localdomain podman[97263]: 2026-02-20 08:41:31.883218875 +0000 UTC m=+0.196779562 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step4, distribution-scope=public, name=rhosp-rhel9/openstack-cron, vcs-type=git, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, version=17.1.13, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container)
Feb 20 08:41:31 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:41:31 np0005625203.localdomain podman[97272]: 2026-02-20 08:41:31.898358815 +0000 UTC m=+0.196534114 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, name=rhosp-rhel9/openstack-qdrouterd, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, org.opencontainers.image.created=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.expose-services=)
Feb 20 08:41:31 np0005625203.localdomain podman[97264]: 2026-02-20 08:41:31.917311454 +0000 UTC m=+0.226157145 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, tcib_managed=true, managed_by=tripleo_ansible, release=1766032510, vcs-type=git, version=17.1.13, container_name=nova_compute, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, distribution-scope=public, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 
'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc.)
Feb 20 08:41:31 np0005625203.localdomain podman[97284]: 2026-02-20 08:41:31.922461404 +0000 UTC m=+0.221647514 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20260112.1, com.redhat.component=openstack-ovn-controller-container, release=1766032510, io.openshift.expose-services=, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.13, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, container_name=ovn_controller, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ovn-controller, config_id=tripleo_step4, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, tcib_managed=true, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, build-date=2026-01-12T22:36:40Z, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:41:31 np0005625203.localdomain podman[97284]: unhealthy
Feb 20 08:41:31 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:41:31 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:41:31 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Failed with result 'exit-code'.
Feb 20 08:41:31 np0005625203.localdomain podman[97263]: 2026-02-20 08:41:31.966950155 +0000 UTC m=+0.280510892 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, io.openshift.expose-services=, name=rhosp-rhel9/openstack-cron, batch=17.1_20260112.1, version=17.1.13, build-date=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 cron)
Feb 20 08:41:31 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:41:32 np0005625203.localdomain podman[97282]: 2026-02-20 08:41:32.040741826 +0000 UTC m=+0.337728879 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1766032510, io.openshift.expose-services=, 
org.opencontainers.image.created=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, distribution-scope=public, url=https://www.redhat.com, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Feb 20 08:41:32 np0005625203.localdomain podman[97282]: 2026-02-20 08:41:32.047986142 +0000 UTC m=+0.344973125 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, url=https://www.redhat.com, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, 
batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, config_id=tripleo_step4, vcs-type=git, version=17.1.13, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:56:19Z, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, build-date=2026-01-12T22:56:19Z, vendor=Red Hat, Inc., distribution-scope=public, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn)
Feb 20 08:41:32 np0005625203.localdomain podman[97282]: unhealthy
Feb 20 08:41:32 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:41:32 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Failed with result 'exit-code'.
Feb 20 08:41:32 np0005625203.localdomain podman[97266]: 2026-02-20 08:41:32.069162399 +0000 UTC m=+0.373089607 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, url=https://www.redhat.com, version=17.1.13, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, build-date=2026-01-12T23:07:30Z, 
com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public)
Feb 20 08:41:32 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:41:32 np0005625203.localdomain podman[97272]: 2026-02-20 08:41:32.150563017 +0000 UTC m=+0.448738376 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20260112.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., config_id=tripleo_step1, container_name=metrics_qdr, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, 
managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, build-date=2026-01-12T22:10:14Z, release=1766032510, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, name=rhosp-rhel9/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee)
Feb 20 08:41:32 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:41:32 np0005625203.localdomain systemd[1]: tmp-crun.VTAqXd.mount: Deactivated successfully.
Feb 20 08:41:38 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:41:38 np0005625203.localdomain podman[97423]: 2026-02-20 08:41:38.762051841 +0000 UTC m=+0.078242111 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, container_name=nova_migration_target, url=https://www.redhat.com, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, tcib_managed=true, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp-rhel9/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:41:39 np0005625203.localdomain podman[97423]: 2026-02-20 08:41:39.137286653 +0000 UTC m=+0.453476933 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_id=tripleo_step4, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_migration_target, version=17.1.13, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com)
Feb 20 08:41:39 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:41:49 np0005625203.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Feb 20 08:41:49 np0005625203.localdomain recover_tripleo_nova_virtqemud[97447]: 62505
Feb 20 08:41:49 np0005625203.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Feb 20 08:41:49 np0005625203.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Feb 20 08:41:55 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:41:55 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:41:55 np0005625203.localdomain systemd[1]: tmp-crun.TGpxWb.mount: Deactivated successfully.
Feb 20 08:41:55 np0005625203.localdomain podman[97449]: 2026-02-20 08:41:55.779124598 +0000 UTC m=+0.093288209 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, build-date=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, 
cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, name=rhosp-rhel9/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, io.buildah.version=1.41.5, vcs-ref=705339545363fec600102567c4e923938e0f43b3, distribution-scope=public, tcib_managed=true, release=1766032510, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3)
Feb 20 08:41:55 np0005625203.localdomain podman[97449]: 2026-02-20 08:41:55.78756268 +0000 UTC m=+0.101726341 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, container_name=iscsid, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., 
release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-iscsid, url=https://www.redhat.com, com.redhat.component=openstack-iscsid-container, version=17.1.13, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, tcib_managed=true, build-date=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, distribution-scope=public, vcs-type=git, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1)
Feb 20 08:41:55 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:41:55 np0005625203.localdomain podman[97448]: 2026-02-20 08:41:55.876460951 +0000 UTC m=+0.193288314 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:15Z, 
config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., release=1766032510, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, version=17.1.13, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.openshift.expose-services=)
Feb 20 08:41:55 np0005625203.localdomain podman[97448]: 2026-02-20 08:41:55.885211442 +0000 UTC m=+0.202038815 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=collectd, com.redhat.component=openstack-collectd-container, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 
org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, url=https://www.redhat.com, io.buildah.version=1.41.5, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.13)
Feb 20 08:41:55 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:42:01 np0005625203.localdomain sshd[97486]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:42:01 np0005625203.localdomain sshd[97486]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:42:01 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:42:01 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:42:01 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:42:01 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:42:02 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:42:02 np0005625203.localdomain podman[97488]: 2026-02-20 08:42:02.092102761 +0000 UTC m=+0.095133976 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, batch=17.1_20260112.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, vcs-type=git, io.buildah.version=1.41.5, version=17.1.13, name=rhosp-rhel9/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.openshift.expose-services=, config_id=tripleo_step5, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': 
['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.created=2026-01-12T23:32:04Z, distribution-scope=public)
Feb 20 08:42:02 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:42:02 np0005625203.localdomain systemd[1]: tmp-crun.cVuNuK.mount: Deactivated successfully.
Feb 20 08:42:02 np0005625203.localdomain systemd[1]: tmp-crun.4Pv7wJ.mount: Deactivated successfully.
Feb 20 08:42:02 np0005625203.localdomain podman[97488]: 2026-02-20 08:42:02.206754561 +0000 UTC m=+0.209785796 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_compute, io.openshift.expose-services=, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, batch=17.1_20260112.1, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 
'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']})
Feb 20 08:42:02 np0005625203.localdomain podman[97534]: 2026-02-20 08:42:02.206930327 +0000 UTC m=+0.106112077 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.openshift.expose-services=, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.buildah.version=1.41.5, release=1766032510, version=17.1.13, architecture=x86_64, config_id=tripleo_step4, build-date=2026-01-12T22:56:19Z, batch=17.1_20260112.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, managed_by=tripleo_ansible)
Feb 20 08:42:02 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:42:02 np0005625203.localdomain podman[97489]: 2026-02-20 08:42:02.161083123 +0000 UTC m=+0.160013560 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, name=rhosp-rhel9/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:07:47Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1766032510, url=https://www.redhat.com, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, managed_by=tripleo_ansible, tcib_managed=true, vendor=Red Hat, Inc., vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, config_id=tripleo_step4, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute)
Feb 20 08:42:02 np0005625203.localdomain podman[97489]: 2026-02-20 08:42:02.243203763 +0000 UTC m=+0.242134170 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:07:47Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., version=17.1.13, distribution-scope=public, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute)
Feb 20 08:42:02 np0005625203.localdomain podman[97534]: 2026-02-20 08:42:02.253032069 +0000 UTC m=+0.152213869 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, org.opencontainers.image.created=2026-01-12T22:56:19Z, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, version=17.1.13, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step4, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, distribution-scope=public, batch=17.1_20260112.1, container_name=ovn_metadata_agent)
Feb 20 08:42:02 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:42:02 np0005625203.localdomain podman[97534]: unhealthy
Feb 20 08:42:02 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:42:02 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Failed with result 'exit-code'.
Feb 20 08:42:02 np0005625203.localdomain podman[97496]: 2026-02-20 08:42:02.301688739 +0000 UTC m=+0.291712569 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:10:15Z, container_name=logrotate_crond, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, config_id=tripleo_step4, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-cron, version=17.1.13, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron)
Feb 20 08:42:02 np0005625203.localdomain podman[97496]: 2026-02-20 08:42:02.340395691 +0000 UTC m=+0.330419501 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, version=17.1.13, architecture=x86_64, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=logrotate_crond, name=rhosp-rhel9/openstack-cron, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., 
config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1)
Feb 20 08:42:02 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:42:02 np0005625203.localdomain podman[97587]: 2026-02-20 08:42:02.351986761 +0000 UTC m=+0.133929150 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, architecture=x86_64, config_id=tripleo_step1, build-date=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.expose-services=, name=rhosp-rhel9/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, io.buildah.version=1.41.5, release=1766032510, vcs-type=git, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:42:02 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:42:02 np0005625203.localdomain podman[97556]: 2026-02-20 08:42:02.439492178 +0000 UTC m=+0.287758296 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_id=tripleo_step4, url=https://www.redhat.com, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, build-date=2026-01-12T23:07:30Z, io.buildah.version=1.41.5, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1766032510, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi)
Feb 20 08:42:02 np0005625203.localdomain podman[97490]: 2026-02-20 08:42:02.490625697 +0000 UTC m=+0.485611372 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:36:40Z, name=rhosp-rhel9/openstack-ovn-controller, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, io.openshift.expose-services=, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, release=1766032510, 
managed_by=tripleo_ansible, tcib_managed=true, url=https://www.redhat.com, vcs-type=git, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.buildah.version=1.41.5, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, maintainer=OpenStack TripleO Team)
Feb 20 08:42:02 np0005625203.localdomain podman[97490]: 2026-02-20 08:42:02.532207717 +0000 UTC m=+0.527193362 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, build-date=2026-01-12T22:36:40Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, batch=17.1_20260112.1, distribution-scope=public, managed_by=tripleo_ansible, release=1766032510, config_id=tripleo_step4, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, url=https://www.redhat.com, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64)
Feb 20 08:42:02 np0005625203.localdomain podman[97490]: unhealthy
Feb 20 08:42:02 np0005625203.localdomain podman[97556]: 2026-02-20 08:42:02.544701786 +0000 UTC m=+0.392967954 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vcs-type=git, config_id=tripleo_step4, distribution-scope=public, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ceilometer-ipmi, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:30Z, managed_by=tripleo_ansible, url=https://www.redhat.com, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, build-date=2026-01-12T23:07:30Z, vendor=Red Hat, Inc., vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06)
Feb 20 08:42:02 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:42:02 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Failed with result 'exit-code'.
Feb 20 08:42:02 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:42:02 np0005625203.localdomain podman[97587]: 2026-02-20 08:42:02.602303514 +0000 UTC m=+0.384245903 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.13, release=1766032510, maintainer=OpenStack TripleO Team, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, config_id=tripleo_step1, build-date=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, io.buildah.version=1.41.5, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.created=2026-01-12T22:10:14Z, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee)
Feb 20 08:42:03 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:42:09 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:42:09 np0005625203.localdomain podman[97648]: 2026-02-20 08:42:09.768716971 +0000 UTC m=+0.083604767 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, batch=17.1_20260112.1, release=1766032510, distribution-scope=public, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, vendor=Red Hat, Inc., build-date=2026-01-12T23:32:04Z, config_id=tripleo_step4, managed_by=tripleo_ansible, version=17.1.13, url=https://www.redhat.com, io.buildah.version=1.41.5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git)
Feb 20 08:42:10 np0005625203.localdomain podman[97648]: 2026-02-20 08:42:10.169290141 +0000 UTC m=+0.484177967 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, tcib_managed=true, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git)
Feb 20 08:42:10 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:42:15 np0005625203.localdomain sudo[97673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:42:15 np0005625203.localdomain sudo[97673]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:42:15 np0005625203.localdomain sudo[97673]: pam_unix(sudo:session): session closed for user root
Feb 20 08:42:15 np0005625203.localdomain sudo[97688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 08:42:15 np0005625203.localdomain sudo[97688]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:42:16 np0005625203.localdomain sudo[97688]: pam_unix(sudo:session): session closed for user root
Feb 20 08:42:16 np0005625203.localdomain sudo[97735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 08:42:16 np0005625203.localdomain sudo[97735]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:42:16 np0005625203.localdomain sudo[97735]: pam_unix(sudo:session): session closed for user root
Feb 20 08:42:26 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:42:26 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:42:26 np0005625203.localdomain systemd[1]: tmp-crun.duF6VQ.mount: Deactivated successfully.
Feb 20 08:42:26 np0005625203.localdomain podman[97751]: 2026-02-20 08:42:26.788678206 +0000 UTC m=+0.098915833 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp-rhel9/openstack-iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1766032510, config_id=tripleo_step3, managed_by=tripleo_ansible, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, build-date=2026-01-12T22:34:43Z)
Feb 20 08:42:26 np0005625203.localdomain podman[97751]: 2026-02-20 08:42:26.801445073 +0000 UTC m=+0.111682740 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.5, build-date=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.openshift.expose-services=, distribution-scope=public, version=17.1.13, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp-rhel9/openstack-iscsid, vcs-type=git, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, release=1766032510, architecture=x86_64, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vendor=Red Hat, Inc., container_name=iscsid, managed_by=tripleo_ansible, batch=17.1_20260112.1)
Feb 20 08:42:26 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:42:26 np0005625203.localdomain podman[97750]: 2026-02-20 08:42:26.877633048 +0000 UTC m=+0.190003211 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-collectd, build-date=2026-01-12T22:10:15Z, release=1766032510, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, architecture=x86_64, tcib_managed=true, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com)
Feb 20 08:42:26 np0005625203.localdomain podman[97750]: 2026-02-20 08:42:26.911376526 +0000 UTC m=+0.223746639 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, vcs-type=git, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-collectd, build-date=2026-01-12T22:10:15Z, container_name=collectd, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-collectd-container)
Feb 20 08:42:26 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:42:32 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:42:32 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:42:32 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:42:32 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:42:32 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:42:32 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:42:32 np0005625203.localdomain systemd[1]: tmp-crun.cgpXyC.mount: Deactivated successfully.
Feb 20 08:42:32 np0005625203.localdomain systemd[1]: tmp-crun.6FPypy.mount: Deactivated successfully.
Feb 20 08:42:32 np0005625203.localdomain podman[97792]: 2026-02-20 08:42:32.842858751 +0000 UTC m=+0.147566552 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.13, url=https://www.redhat.com, architecture=x86_64, distribution-scope=public, release=1766032510, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, vcs-type=git, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5, 
konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:07:30Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:42:32 np0005625203.localdomain podman[97792]: 2026-02-20 08:42:32.870268943 +0000 UTC m=+0.174976744 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, vcs-type=git, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, config_id=tripleo_step4, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:07:30Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ceilometer-ipmi, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Feb 20 08:42:32 np0005625203.localdomain podman[97793]: 2026-02-20 08:42:32.893826134 +0000 UTC m=+0.195752459 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, architecture=x86_64, tcib_managed=true, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step4, container_name=ovn_metadata_agent, vcs-type=git, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1766032510, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, build-date=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:56:19Z, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible)
Feb 20 08:42:32 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:42:32 np0005625203.localdomain podman[97793]: 2026-02-20 08:42:32.936916363 +0000 UTC m=+0.238842698 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, 
build-date=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20260112.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.openshift.expose-services=, architecture=x86_64)
Feb 20 08:42:32 np0005625203.localdomain podman[97793]: unhealthy
Feb 20 08:42:32 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:42:32 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Failed with result 'exit-code'.
Feb 20 08:42:32 np0005625203.localdomain podman[97789]: 2026-02-20 08:42:32.943442895 +0000 UTC m=+0.254657659 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step4, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, vcs-type=git, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, container_name=logrotate_crond, vendor=Red Hat, Inc., version=17.1.13)
Feb 20 08:42:33 np0005625203.localdomain podman[97791]: 2026-02-20 08:42:32.809841906 +0000 UTC m=+0.116929131 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, managed_by=tripleo_ansible, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, distribution-scope=public, build-date=2026-01-12T23:07:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:47Z, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ceilometer_agent_compute, url=https://www.redhat.com, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:42:33 np0005625203.localdomain podman[97789]: 2026-02-20 08:42:33.027422253 +0000 UTC m=+0.338637047 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1766032510, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, managed_by=tripleo_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, build-date=2026-01-12T22:10:15Z, 
container_name=logrotate_crond, io.buildah.version=1.41.5, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vendor=Red Hat, Inc.)
Feb 20 08:42:33 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:42:33 np0005625203.localdomain podman[97791]: 2026-02-20 08:42:33.043357788 +0000 UTC m=+0.350445013 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, release=1766032510, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, build-date=2026-01-12T23:07:47Z, version=17.1.13, managed_by=tripleo_ansible, tcib_managed=true, container_name=ceilometer_agent_compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=rhosp-rhel9/openstack-ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:42:33 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:42:33 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:42:33 np0005625203.localdomain podman[97790]: 2026-02-20 08:42:32.996360488 +0000 UTC m=+0.305158917 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, build-date=2026-01-12T23:32:04Z, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, vendor=Red Hat, Inc., tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, vcs-type=git, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, config_id=tripleo_step5, architecture=x86_64)
Feb 20 08:42:33 np0005625203.localdomain podman[97802]: 2026-02-20 08:42:33.105337903 +0000 UTC m=+0.404131891 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, batch=17.1_20260112.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.13, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, release=1766032510, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, container_name=ovn_controller, distribution-scope=public, config_id=tripleo_step4, 
org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:42:33 np0005625203.localdomain podman[97802]: 2026-02-20 08:42:33.125345144 +0000 UTC m=+0.424139132 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, config_id=tripleo_step4, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, build-date=2026-01-12T22:36:40Z, maintainer=OpenStack TripleO Team, container_name=ovn_controller, name=rhosp-rhel9/openstack-ovn-controller, architecture=x86_64, batch=17.1_20260112.1, url=https://www.redhat.com, 
io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:36:40Z)
Feb 20 08:42:33 np0005625203.localdomain podman[97917]: 2026-02-20 08:42:33.165668256 +0000 UTC m=+0.081877064 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, version=17.1.13, release=1766032510, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', 
'/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, container_name=metrics_qdr, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, name=rhosp-rhel9/openstack-qdrouterd, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, batch=17.1_20260112.1)
Feb 20 08:42:33 np0005625203.localdomain podman[97790]: 2026-02-20 08:42:33.177519305 +0000 UTC m=+0.486317774 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, vendor=Red Hat, Inc., release=1766032510, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.13, build-date=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible)
Feb 20 08:42:33 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:42:33 np0005625203.localdomain podman[97802]: unhealthy
Feb 20 08:42:33 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:42:33 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Failed with result 'exit-code'.
Feb 20 08:42:33 np0005625203.localdomain podman[97917]: 2026-02-20 08:42:33.385954417 +0000 UTC m=+0.302163195 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-qdrouterd, config_id=tripleo_step1, vcs-type=git, architecture=x86_64, container_name=metrics_qdr, io.openshift.expose-services=, vendor=Red Hat, Inc., tcib_managed=true, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z)
Feb 20 08:42:33 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:42:40 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:42:40 np0005625203.localdomain podman[97953]: 2026-02-20 08:42:40.764767338 +0000 UTC m=+0.085008231 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, architecture=x86_64, version=17.1.13, config_id=tripleo_step4, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:32:04Z, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:42:41 np0005625203.localdomain podman[97953]: 2026-02-20 08:42:41.157260236 +0000 UTC m=+0.477501089 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20260112.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, distribution-scope=public, architecture=x86_64, container_name=nova_migration_target, build-date=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, managed_by=tripleo_ansible)
Feb 20 08:42:41 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:42:43 np0005625203.localdomain sshd[97976]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:42:44 np0005625203.localdomain sshd[97976]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:42:45 np0005625203.localdomain sshd[97978]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:42:45 np0005625203.localdomain sshd[97978]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:42:56 np0005625203.localdomain sshd[97980]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:42:57 np0005625203.localdomain sshd[97980]: Invalid user yifan from 212.154.234.9 port 53518
Feb 20 08:42:57 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:42:57 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:42:57 np0005625203.localdomain systemd[1]: tmp-crun.nUjXsL.mount: Deactivated successfully.
Feb 20 08:42:57 np0005625203.localdomain podman[97983]: 2026-02-20 08:42:57.482778069 +0000 UTC m=+0.068677474 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, release=1766032510, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:34:43Z, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, name=rhosp-rhel9/openstack-iscsid, maintainer=OpenStack TripleO Team, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, managed_by=tripleo_ansible, batch=17.1_20260112.1, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, vcs-type=git, build-date=2026-01-12T22:34:43Z)
Feb 20 08:42:57 np0005625203.localdomain podman[97983]: 2026-02-20 08:42:57.494298757 +0000 UTC m=+0.080198222 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, release=1766032510, build-date=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=iscsid, version=17.1.13, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, io.buildah.version=1.41.5, batch=17.1_20260112.1, vcs-ref=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:34:43Z, name=rhosp-rhel9/openstack-iscsid, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.)
Feb 20 08:42:57 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:42:57 np0005625203.localdomain podman[97982]: 2026-02-20 08:42:57.539455169 +0000 UTC m=+0.126455458 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, version=17.1.13, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-collectd, batch=17.1_20260112.1, config_id=tripleo_step3, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:42:57 np0005625203.localdomain sshd[97980]: Received disconnect from 212.154.234.9 port 53518:11: Bye Bye [preauth]
Feb 20 08:42:57 np0005625203.localdomain sshd[97980]: Disconnected from invalid user yifan 212.154.234.9 port 53518 [preauth]
Feb 20 08:42:57 np0005625203.localdomain podman[97982]: 2026-02-20 08:42:57.579320287 +0000 UTC m=+0.166320546 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, io.buildah.version=1.41.5, com.redhat.component=openstack-collectd-container, release=1766032510, version=17.1.13, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
name=rhosp-rhel9/openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, batch=17.1_20260112.1, architecture=x86_64, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, build-date=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd)
Feb 20 08:42:57 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:43:03 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:43:03 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:43:03 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:43:03 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:43:03 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:43:03 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:43:03 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:43:03 np0005625203.localdomain podman[98021]: 2026-02-20 08:43:03.790198328 +0000 UTC m=+0.098163539 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=logrotate_crond, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, release=1766032510, com.redhat.component=openstack-cron-container, batch=17.1_20260112.1, build-date=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:43:03 np0005625203.localdomain podman[98021]: 2026-02-20 08:43:03.795755931 +0000 UTC m=+0.103721172 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20260112.1, vcs-type=git, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, 
io.openshift.expose-services=, managed_by=tripleo_ansible, version=17.1.13, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, name=rhosp-rhel9/openstack-cron, vendor=Red Hat, Inc.)
Feb 20 08:43:03 np0005625203.localdomain podman[98052]: 2026-02-20 08:43:03.802726798 +0000 UTC m=+0.084241487 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, managed_by=tripleo_ansible, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, build-date=2026-01-12T22:36:40Z, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, vendor=Red Hat, Inc., batch=17.1_20260112.1, vcs-type=git, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp-rhel9/openstack-ovn-controller, io.buildah.version=1.41.5, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, 
konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, io.openshift.expose-services=)
Feb 20 08:43:03 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:43:03 np0005625203.localdomain podman[98052]: 2026-02-20 08:43:03.836527297 +0000 UTC m=+0.118041976 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp-rhel9/openstack-ovn-controller, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, url=https://www.redhat.com, 
container_name=ovn_controller, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git, build-date=2026-01-12T22:36:40Z, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c)
Feb 20 08:43:03 np0005625203.localdomain podman[98052]: unhealthy
Feb 20 08:43:03 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:43:03 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Failed with result 'exit-code'.
Feb 20 08:43:03 np0005625203.localdomain podman[98023]: 2026-02-20 08:43:03.842592185 +0000 UTC m=+0.142955210 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:47Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:07:47Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, batch=17.1_20260112.1, distribution-scope=public, architecture=x86_64, release=1766032510, container_name=ceilometer_agent_compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, name=rhosp-rhel9/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Feb 20 08:43:03 np0005625203.localdomain podman[98028]: 2026-02-20 08:43:03.894993533 +0000 UTC m=+0.184620934 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, build-date=2026-01-12T23:07:30Z, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1766032510, 
org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:07:30Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:43:03 np0005625203.localdomain podman[98028]: 2026-02-20 08:43:03.975240064 +0000 UTC m=+0.264867465 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, managed_by=tripleo_ansible, release=1766032510, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., build-date=2026-01-12T23:07:30Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, 
cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:07:30Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, url=https://www.redhat.com, architecture=x86_64, vcs-type=git, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:43:03 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:43:03 np0005625203.localdomain podman[98022]: 2026-02-20 08:43:03.947168812 +0000 UTC m=+0.248460155 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, container_name=nova_compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, vcs-type=git, url=https://www.redhat.com, version=17.1.13, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-nova-compute, managed_by=tripleo_ansible, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Feb 20 08:43:04 np0005625203.localdomain podman[98034]: 2026-02-20 08:43:04.061180473 +0000 UTC m=+0.348850393 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.13, distribution-scope=public, config_id=tripleo_step1, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, name=rhosp-rhel9/openstack-qdrouterd)
Feb 20 08:43:04 np0005625203.localdomain podman[98041]: 2026-02-20 08:43:04.113487098 +0000 UTC m=+0.399226439 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., build-date=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, distribution-scope=public, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, batch=17.1_20260112.1, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, release=1766032510)
Feb 20 08:43:04 np0005625203.localdomain podman[98023]: 2026-02-20 08:43:04.128971918 +0000 UTC m=+0.429334983 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, managed_by=tripleo_ansible, batch=17.1_20260112.1, container_name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, io.buildah.version=1.41.5, architecture=x86_64, build-date=2026-01-12T23:07:47Z, release=1766032510, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:07:47Z, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, config_id=tripleo_step4, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-compute-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Feb 20 08:43:04 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:43:04 np0005625203.localdomain podman[98041]: 2026-02-20 08:43:04.153228562 +0000 UTC m=+0.438967863 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, io.buildah.version=1.41.5, release=1766032510, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, version=17.1.13, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:56:19Z, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20260112.1)
Feb 20 08:43:04 np0005625203.localdomain podman[98041]: unhealthy
Feb 20 08:43:04 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:43:04 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Failed with result 'exit-code'.
Feb 20 08:43:04 np0005625203.localdomain podman[98022]: 2026-02-20 08:43:04.180449968 +0000 UTC m=+0.481741401 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, version=17.1.13, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:32:04Z, distribution-scope=public, batch=17.1_20260112.1, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, tcib_managed=true, vcs-type=git)
Feb 20 08:43:04 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:43:04 np0005625203.localdomain podman[98034]: 2026-02-20 08:43:04.291494255 +0000 UTC m=+0.579164235 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.expose-services=, io.buildah.version=1.41.5, release=1766032510, vendor=Red Hat, Inc., config_id=tripleo_step1, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:10:14Z, url=https://www.redhat.com, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']})
Feb 20 08:43:04 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:43:04 np0005625203.localdomain systemd[1]: tmp-crun.rV3fF0.mount: Deactivated successfully.
Feb 20 08:43:11 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:43:11 np0005625203.localdomain systemd[1]: tmp-crun.5LBkLq.mount: Deactivated successfully.
Feb 20 08:43:11 np0005625203.localdomain podman[98186]: 2026-02-20 08:43:11.758988361 +0000 UTC m=+0.078860179 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, architecture=x86_64, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.expose-services=)
Feb 20 08:43:12 np0005625203.localdomain podman[98186]: 2026-02-20 08:43:12.152024666 +0000 UTC m=+0.471896484 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, release=1766032510, tcib_managed=true, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, 
org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, config_id=tripleo_step4, version=17.1.13, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, build-date=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64)
Feb 20 08:43:12 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:43:17 np0005625203.localdomain sudo[98207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:43:17 np0005625203.localdomain sudo[98207]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:43:17 np0005625203.localdomain sudo[98207]: pam_unix(sudo:session): session closed for user root
Feb 20 08:43:17 np0005625203.localdomain sudo[98222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Feb 20 08:43:17 np0005625203.localdomain sudo[98222]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:43:17 np0005625203.localdomain podman[98310]: 2026-02-20 08:43:17.962258557 +0000 UTC m=+0.103741892 container exec f6bf94ad66e54cdd610286b233c639418eb2b995978f1a145d6cfa77a016071d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2026-02-09T10:25:24Z, GIT_CLEAN=True, ceph=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, io.buildah.version=1.42.2, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, release=1770267347, CEPH_POINT_RELEASE=)
Feb 20 08:43:18 np0005625203.localdomain podman[98310]: 2026-02-20 08:43:18.073257684 +0000 UTC m=+0.214740959 container exec_died f6bf94ad66e54cdd610286b233c639418eb2b995978f1a145d6cfa77a016071d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, io.buildah.version=1.42.2, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1770267347, GIT_BRANCH=main, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public)
Feb 20 08:43:18 np0005625203.localdomain sudo[98222]: pam_unix(sudo:session): session closed for user root
Feb 20 08:43:18 np0005625203.localdomain sudo[98374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:43:18 np0005625203.localdomain sudo[98374]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:43:18 np0005625203.localdomain sudo[98374]: pam_unix(sudo:session): session closed for user root
Feb 20 08:43:18 np0005625203.localdomain sudo[98389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 08:43:18 np0005625203.localdomain sudo[98389]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:43:19 np0005625203.localdomain sudo[98389]: pam_unix(sudo:session): session closed for user root
Feb 20 08:43:19 np0005625203.localdomain sudo[98435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 08:43:19 np0005625203.localdomain sudo[98435]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:43:19 np0005625203.localdomain sudo[98435]: pam_unix(sudo:session): session closed for user root
Feb 20 08:43:27 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:43:27 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:43:27 np0005625203.localdomain podman[98451]: 2026-02-20 08:43:27.768660065 +0000 UTC m=+0.086519948 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, container_name=iscsid, vcs-type=git, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, vendor=Red Hat, Inc., build-date=2026-01-12T22:34:43Z, 
name=rhosp-rhel9/openstack-iscsid, batch=17.1_20260112.1, version=17.1.13, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.openshift.expose-services=, tcib_managed=true, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1766032510, org.opencontainers.image.created=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid)
Feb 20 08:43:27 np0005625203.localdomain podman[98451]: 2026-02-20 08:43:27.805364305 +0000 UTC m=+0.123224168 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, batch=17.1_20260112.1, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, container_name=iscsid, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=705339545363fec600102567c4e923938e0f43b3, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, com.redhat.component=openstack-iscsid-container, name=rhosp-rhel9/openstack-iscsid)
Feb 20 08:43:27 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:43:27 np0005625203.localdomain podman[98450]: 2026-02-20 08:43:27.822125095 +0000 UTC m=+0.139951417 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, name=rhosp-rhel9/openstack-collectd, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step3, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, io.buildah.version=1.41.5, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:43:27 np0005625203.localdomain podman[98450]: 2026-02-20 08:43:27.835302335 +0000 UTC m=+0.153128697 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, io.openshift.expose-services=, name=rhosp-rhel9/openstack-collectd, config_id=tripleo_step3, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, url=https://www.redhat.com, version=17.1.13, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.5)
Feb 20 08:43:27 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:43:30 np0005625203.localdomain sshd[98491]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:43:30 np0005625203.localdomain sshd[98491]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:43:34 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:43:34 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:43:34 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:43:34 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:43:34 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:43:34 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:43:34 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:43:34 np0005625203.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Feb 20 08:43:34 np0005625203.localdomain recover_tripleo_nova_virtqemud[98540]: 62505
Feb 20 08:43:34 np0005625203.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Feb 20 08:43:34 np0005625203.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Feb 20 08:43:34 np0005625203.localdomain systemd[1]: tmp-crun.fd31Pl.mount: Deactivated successfully.
Feb 20 08:43:34 np0005625203.localdomain systemd[1]: tmp-crun.vI7zIm.mount: Deactivated successfully.
Feb 20 08:43:34 np0005625203.localdomain podman[98494]: 2026-02-20 08:43:34.781632037 +0000 UTC m=+0.084838735 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, version=17.1.13, url=https://www.redhat.com, container_name=nova_compute, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:43:34 np0005625203.localdomain podman[98514]: 2026-02-20 08:43:34.862117467 +0000 UTC m=+0.143890789 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc., batch=17.1_20260112.1, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, tcib_managed=true, build-date=2026-01-12T22:36:40Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ovn-controller, version=17.1.13, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, distribution-scope=public, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, 
org.opencontainers.image.created=2026-01-12T22:36:40Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5)
Feb 20 08:43:34 np0005625203.localdomain podman[98494]: 2026-02-20 08:43:34.868315749 +0000 UTC m=+0.171522477 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, version=17.1.13, config_id=tripleo_step5, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, vendor=Red Hat, Inc., build-date=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, tcib_managed=true, release=1766032510, io.openshift.expose-services=, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, container_name=nova_compute)
Feb 20 08:43:34 np0005625203.localdomain podman[98514]: 2026-02-20 08:43:34.877093701 +0000 UTC m=+0.158867023 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, distribution-scope=public, io.openshift.expose-services=, container_name=ovn_controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, url=https://www.redhat.com, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, name=rhosp-rhel9/openstack-ovn-controller, tcib_managed=true, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.created=2026-01-12T22:36:40Z, vendor=Red Hat, Inc., managed_by=tripleo_ansible)
Feb 20 08:43:34 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:43:34 np0005625203.localdomain podman[98514]: unhealthy
Feb 20 08:43:34 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:43:34 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Failed with result 'exit-code'.
Feb 20 08:43:34 np0005625203.localdomain podman[98504]: 2026-02-20 08:43:34.918056724 +0000 UTC m=+0.202285193 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_id=tripleo_step4, vcs-type=git, build-date=2026-01-12T23:07:30Z, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, name=rhosp-rhel9/openstack-ceilometer-ipmi, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:30Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510)
Feb 20 08:43:34 np0005625203.localdomain podman[98493]: 2026-02-20 08:43:34.817829431 +0000 UTC m=+0.123090313 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:10:15Z, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-cron, config_id=tripleo_step4, container_name=logrotate_crond, 
cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron)
Feb 20 08:43:34 np0005625203.localdomain podman[98507]: 2026-02-20 08:43:34.972346879 +0000 UTC m=+0.254793273 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, container_name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:14Z, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., vcs-type=git, config_id=tripleo_step1, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, 
com.redhat.component=openstack-qdrouterd-container, build-date=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-qdrouterd, tcib_managed=true, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd)
Feb 20 08:43:34 np0005625203.localdomain podman[98504]: 2026-02-20 08:43:34.973270949 +0000 UTC m=+0.257499438 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, version=17.1.13, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:30Z, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, config_id=tripleo_step4, 
cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:07:30Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510)
Feb 20 08:43:34 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:43:34 np0005625203.localdomain podman[98495]: 2026-02-20 08:43:34.839825694 +0000 UTC m=+0.136514260 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:47Z, vcs-type=git, distribution-scope=public, url=https://www.redhat.com, batch=17.1_20260112.1, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:47Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, managed_by=tripleo_ansible, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., io.buildah.version=1.41.5, container_name=ceilometer_agent_compute, name=rhosp-rhel9/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64)
Feb 20 08:43:35 np0005625203.localdomain podman[98493]: 2026-02-20 08:43:35.003323381 +0000 UTC m=+0.308584253 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, com.redhat.component=openstack-cron-container, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, distribution-scope=public, vcs-type=git, config_id=tripleo_step4, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, name=rhosp-rhel9/openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, 
build-date=2026-01-12T22:10:15Z, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 cron)
Feb 20 08:43:35 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:43:35 np0005625203.localdomain podman[98495]: 2026-02-20 08:43:35.076644368 +0000 UTC m=+0.373332954 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., config_id=tripleo_step4, name=rhosp-rhel9/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, url=https://www.redhat.com, batch=17.1_20260112.1, 
container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, architecture=x86_64, maintainer=OpenStack TripleO Team, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, build-date=2026-01-12T23:07:47Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-compute-container)
Feb 20 08:43:35 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:43:35 np0005625203.localdomain podman[98512]: 2026-02-20 08:43:35.089612151 +0000 UTC m=+0.373539180 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.buildah.version=1.41.5, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, architecture=x86_64, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, build-date=2026-01-12T22:56:19Z, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, config_id=tripleo_step4)
Feb 20 08:43:35 np0005625203.localdomain podman[98512]: 2026-02-20 08:43:35.106201706 +0000 UTC m=+0.390128765 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, build-date=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, url=https://www.redhat.com, tcib_managed=true, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:56:19Z, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-type=git, container_name=ovn_metadata_agent, distribution-scope=public)
Feb 20 08:43:35 np0005625203.localdomain podman[98512]: unhealthy
Feb 20 08:43:35 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:43:35 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Failed with result 'exit-code'.
Feb 20 08:43:35 np0005625203.localdomain podman[98507]: 2026-02-20 08:43:35.18133075 +0000 UTC m=+0.463777114 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, tcib_managed=true, container_name=metrics_qdr, url=https://www.redhat.com, build-date=2026-01-12T22:10:14Z, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=17.1.13, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, distribution-scope=public)
Feb 20 08:43:35 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:43:42 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:43:42 np0005625203.localdomain podman[98658]: 2026-02-20 08:43:42.740570104 +0000 UTC m=+0.063255805 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, io.buildah.version=1.41.5, tcib_managed=true)
Feb 20 08:43:43 np0005625203.localdomain podman[98658]: 2026-02-20 08:43:43.118852381 +0000 UTC m=+0.441538072 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, release=1766032510, tcib_managed=true, url=https://www.redhat.com, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, managed_by=tripleo_ansible, batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:32:04Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, architecture=x86_64, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 
nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5)
Feb 20 08:43:43 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:43:58 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:43:58 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:43:58 np0005625203.localdomain systemd[1]: tmp-crun.DD7z6G.mount: Deactivated successfully.
Feb 20 08:43:58 np0005625203.localdomain podman[98680]: 2026-02-20 08:43:58.774176921 +0000 UTC m=+0.095840488 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, build-date=2026-01-12T22:10:15Z, distribution-scope=public, batch=17.1_20260112.1, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, name=rhosp-rhel9/openstack-collectd, version=17.1.13, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, managed_by=tripleo_ansible)
Feb 20 08:43:58 np0005625203.localdomain podman[98680]: 2026-02-20 08:43:58.785531703 +0000 UTC m=+0.107195300 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-collectd, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, architecture=x86_64, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, build-date=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git)
Feb 20 08:43:58 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:43:58 np0005625203.localdomain podman[98681]: 2026-02-20 08:43:58.873147444 +0000 UTC m=+0.187220445 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2026-01-12T22:34:43Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-iscsid-container, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.buildah.version=1.41.5, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, container_name=iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, distribution-scope=public, tcib_managed=true, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., url=https://www.redhat.com, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, release=1766032510, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:43:58 np0005625203.localdomain podman[98681]: 2026-02-20 08:43:58.881118312 +0000 UTC m=+0.195191283 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, name=rhosp-rhel9/openstack-iscsid, version=17.1.13, tcib_managed=true, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, io.buildah.version=1.41.5, io.openshift.expose-services=, batch=17.1_20260112.1, build-date=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:43:58 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:44:05 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:44:05 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:44:05 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:44:05 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:44:05 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:44:05 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:44:05 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:44:05 np0005625203.localdomain podman[98733]: 2026-02-20 08:44:05.789464663 +0000 UTC m=+0.089176990 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20260112.1, version=17.1.13, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:56:19Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, build-date=2026-01-12T22:56:19Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible)
Feb 20 08:44:05 np0005625203.localdomain podman[98719]: 2026-02-20 08:44:05.765958454 +0000 UTC m=+0.079865832 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.5, batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, vcs-type=git, version=17.1.13, build-date=2026-01-12T23:32:04Z, release=1766032510, name=rhosp-rhel9/openstack-nova-compute, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, distribution-scope=public, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, config_id=tripleo_step5)
Feb 20 08:44:05 np0005625203.localdomain podman[98719]: 2026-02-20 08:44:05.844837633 +0000 UTC m=+0.158745011 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, build-date=2026-01-12T23:32:04Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, architecture=x86_64, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., config_id=tripleo_step5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp-rhel9/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true)
Feb 20 08:44:05 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:44:05 np0005625203.localdomain podman[98740]: 2026-02-20 08:44:05.811522638 +0000 UTC m=+0.104093583 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, name=rhosp-rhel9/openstack-ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, io.buildah.version=1.41.5, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-type=git, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, container_name=ovn_controller, vendor=Red Hat, Inc., batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, version=17.1.13, distribution-scope=public, build-date=2026-01-12T22:36:40Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']})
Feb 20 08:44:05 np0005625203.localdomain podman[98727]: 2026-02-20 08:44:05.867680672 +0000 UTC m=+0.170009710 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1766032510, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-qdrouterd, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, tcib_managed=true, 
build-date=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20260112.1, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, managed_by=tripleo_ansible)
Feb 20 08:44:05 np0005625203.localdomain podman[98740]: 2026-02-20 08:44:05.890352696 +0000 UTC m=+0.182923741 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ovn-controller, build-date=2026-01-12T22:36:40Z, distribution-scope=public, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, container_name=ovn_controller, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git, 
com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, tcib_managed=true)
Feb 20 08:44:05 np0005625203.localdomain podman[98740]: unhealthy
Feb 20 08:44:05 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:44:05 np0005625203.localdomain podman[98721]: 2026-02-20 08:44:05.903750953 +0000 UTC m=+0.206642579 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, distribution-scope=public, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, 
io.buildah.version=1.41.5, container_name=ceilometer_agent_ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:07:30Z, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Feb 20 08:44:05 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Failed with result 'exit-code'.
Feb 20 08:44:05 np0005625203.localdomain podman[98721]: 2026-02-20 08:44:05.921362979 +0000 UTC m=+0.224254595 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, config_id=tripleo_step4, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, com.redhat.component=openstack-ceilometer-ipmi-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, container_name=ceilometer_agent_ipmi, distribution-scope=public, build-date=2026-01-12T23:07:30Z, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:07:30Z, architecture=x86_64)
Feb 20 08:44:05 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:44:05 np0005625203.localdomain podman[98720]: 2026-02-20 08:44:05.974271912 +0000 UTC m=+0.281449501 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, maintainer=OpenStack TripleO Team, 
io.openshift.expose-services=, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ceilometer-compute-container, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, name=rhosp-rhel9/openstack-ceilometer-compute, config_id=tripleo_step4)
Feb 20 08:44:05 np0005625203.localdomain podman[98718]: 2026-02-20 08:44:05.993929073 +0000 UTC m=+0.305326413 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_id=tripleo_step4, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, build-date=2026-01-12T22:10:15Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, io.openshift.expose-services=, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, 
container_name=logrotate_crond, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-cron, url=https://www.redhat.com, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee)
Feb 20 08:44:06 np0005625203.localdomain podman[98718]: 2026-02-20 08:44:06.003565572 +0000 UTC m=+0.314962952 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-cron-container, name=rhosp-rhel9/openstack-cron, batch=17.1_20260112.1, architecture=x86_64, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, config_id=tripleo_step4, release=1766032510, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron)
Feb 20 08:44:06 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:44:06 np0005625203.localdomain podman[98733]: 2026-02-20 08:44:06.023718698 +0000 UTC m=+0.323431095 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, distribution-scope=public, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:56:19Z, architecture=x86_64, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:56:19Z, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.buildah.version=1.41.5, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13)
Feb 20 08:44:06 np0005625203.localdomain podman[98733]: unhealthy
Feb 20 08:44:06 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:44:06 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Failed with result 'exit-code'.
Feb 20 08:44:06 np0005625203.localdomain podman[98720]: 2026-02-20 08:44:06.079917883 +0000 UTC m=+0.387095472 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., architecture=x86_64, name=rhosp-rhel9/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, tcib_managed=true, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, com.redhat.component=openstack-ceilometer-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2026-01-12T23:07:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, release=1766032510, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, distribution-scope=public, config_id=tripleo_step4)
Feb 20 08:44:06 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:44:06 np0005625203.localdomain podman[98727]: 2026-02-20 08:44:06.098006675 +0000 UTC m=+0.400335743 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, build-date=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=metrics_qdr, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-qdrouterd, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.13, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vcs-type=git, url=https://www.redhat.com)
Feb 20 08:44:06 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:44:06 np0005625203.localdomain systemd[1]: tmp-crun.7ic11K.mount: Deactivated successfully.
Feb 20 08:44:08 np0005625203.localdomain sshd[98882]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:44:09 np0005625203.localdomain sshd[98882]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:44:13 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:44:13 np0005625203.localdomain podman[98884]: 2026-02-20 08:44:13.777714559 +0000 UTC m=+0.088629703 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, build-date=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, architecture=x86_64, url=https://www.redhat.com, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, release=1766032510, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:32:04Z, batch=17.1_20260112.1, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Feb 20 08:44:14 np0005625203.localdomain podman[98884]: 2026-02-20 08:44:14.141831766 +0000 UTC m=+0.452746920 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.openshift.expose-services=, distribution-scope=public, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vendor=Red Hat, Inc., build-date=2026-01-12T23:32:04Z, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute)
Feb 20 08:44:14 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:44:14 np0005625203.localdomain sshd[98907]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:44:14 np0005625203.localdomain sshd[98907]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:44:19 np0005625203.localdomain sudo[98909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:44:19 np0005625203.localdomain sudo[98909]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:44:19 np0005625203.localdomain sudo[98909]: pam_unix(sudo:session): session closed for user root
Feb 20 08:44:19 np0005625203.localdomain sudo[98924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 08:44:19 np0005625203.localdomain sudo[98924]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:44:20 np0005625203.localdomain sudo[98924]: pam_unix(sudo:session): session closed for user root
Feb 20 08:44:21 np0005625203.localdomain sudo[98971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 08:44:21 np0005625203.localdomain sudo[98971]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:44:21 np0005625203.localdomain sudo[98971]: pam_unix(sudo:session): session closed for user root
Feb 20 08:44:23 np0005625203.localdomain sshd[98986]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:44:24 np0005625203.localdomain sshd[98986]: Invalid user yifan from 102.210.148.92 port 53814
Feb 20 08:44:25 np0005625203.localdomain sshd[98986]: Received disconnect from 102.210.148.92 port 53814:11: Bye Bye [preauth]
Feb 20 08:44:25 np0005625203.localdomain sshd[98986]: Disconnected from invalid user yifan 102.210.148.92 port 53814 [preauth]
Feb 20 08:44:29 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:44:29 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:44:29 np0005625203.localdomain podman[98988]: 2026-02-20 08:44:29.775965229 +0000 UTC m=+0.089275024 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, vcs-type=git, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, distribution-scope=public, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.expose-services=, name=rhosp-rhel9/openstack-collectd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1766032510, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:44:29 np0005625203.localdomain podman[98988]: 2026-02-20 08:44:29.78728416 +0000 UTC m=+0.100593915 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
release=1766032510, batch=17.1_20260112.1, version=17.1.13, container_name=collectd, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, managed_by=tripleo_ansible, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, vcs-type=git, build-date=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:44:29 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:44:29 np0005625203.localdomain podman[98989]: 2026-02-20 08:44:29.882713083 +0000 UTC m=+0.192953703 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20260112.1, build-date=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 
iscsid, vcs-type=git, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, distribution-scope=public, version=17.1.13, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-iscsid, release=1766032510, io.buildah.version=1.41.5, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:44:29 np0005625203.localdomain podman[98989]: 2026-02-20 08:44:29.892321122 +0000 UTC m=+0.202561732 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, tcib_managed=true, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, distribution-scope=public, managed_by=tripleo_ansible, 
org.opencontainers.image.created=2026-01-12T22:34:43Z, architecture=x86_64, container_name=iscsid, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, url=https://www.redhat.com, build-date=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, io.buildah.version=1.41.5)
Feb 20 08:44:29 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:44:36 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:44:36 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:44:36 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:44:36 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:44:36 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:44:36 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:44:36 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:44:36 np0005625203.localdomain podman[99043]: 2026-02-20 08:44:36.784107848 +0000 UTC m=+0.085356752 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, version=17.1.13, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, build-date=2026-01-12T22:56:19Z, release=1766032510, batch=17.1_20260112.1, vcs-type=git, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step4)
Feb 20 08:44:36 np0005625203.localdomain podman[99028]: 2026-02-20 08:44:36.768024138 +0000 UTC m=+0.084397481 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-nova-compute, distribution-scope=public, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step5, url=https://www.redhat.com, architecture=x86_64, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 
'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']})
Feb 20 08:44:36 np0005625203.localdomain podman[99043]: 2026-02-20 08:44:36.824170882 +0000 UTC m=+0.125419756 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, build-date=2026-01-12T22:56:19Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, vendor=Red Hat, Inc., 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.5, distribution-scope=public, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, release=1766032510, config_id=tripleo_step4, vcs-type=git, container_name=ovn_metadata_agent, tcib_managed=true, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13)
Feb 20 08:44:36 np0005625203.localdomain podman[99043]: unhealthy
Feb 20 08:44:36 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:44:36 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Failed with result 'exit-code'.
Feb 20 08:44:36 np0005625203.localdomain podman[99027]: 2026-02-20 08:44:36.888055506 +0000 UTC m=+0.205301896 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, version=17.1.13, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, architecture=x86_64, com.redhat.component=openstack-cron-container, 
container_name=logrotate_crond, config_id=tripleo_step4, release=1766032510, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-type=git, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron)
Feb 20 08:44:36 np0005625203.localdomain podman[99029]: 2026-02-20 08:44:36.898130809 +0000 UTC m=+0.208268319 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, io.buildah.version=1.41.5, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, version=17.1.13, release=1766032510, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, url=https://www.redhat.com, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, vcs-type=git)
Feb 20 08:44:36 np0005625203.localdomain podman[99028]: 2026-02-20 08:44:36.903825866 +0000 UTC m=+0.220199229 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-nova-compute, url=https://www.redhat.com, vcs-type=git, distribution-scope=public, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.13, build-date=2026-01-12T23:32:04Z, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step5, managed_by=tripleo_ansible, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., container_name=nova_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.5)
Feb 20 08:44:36 np0005625203.localdomain podman[99030]: 2026-02-20 08:44:36.829463536 +0000 UTC m=+0.139304097 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., version=17.1.13, io.openshift.expose-services=, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, name=rhosp-rhel9/openstack-ceilometer-ipmi, release=1766032510, vcs-type=git, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, container_name=ceilometer_agent_ipmi, batch=17.1_20260112.1)
Feb 20 08:44:36 np0005625203.localdomain podman[99046]: 2026-02-20 08:44:36.935918962 +0000 UTC m=+0.237474845 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.expose-services=, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, managed_by=tripleo_ansible, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, release=1766032510, vendor=Red Hat, Inc., version=17.1.13, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:36:40Z)
Feb 20 08:44:36 np0005625203.localdomain podman[99029]: 2026-02-20 08:44:36.948188313 +0000 UTC m=+0.258325843 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:07:47Z, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:47Z, architecture=x86_64, io.buildah.version=1.41.5, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, batch=17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true)
Feb 20 08:44:36 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:44:36 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:44:36 np0005625203.localdomain podman[99030]: 2026-02-20 08:44:36.964787418 +0000 UTC m=+0.274627979 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, architecture=x86_64, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13)
Feb 20 08:44:36 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:44:36 np0005625203.localdomain podman[99046]: 2026-02-20 08:44:36.998848457 +0000 UTC m=+0.300404340 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:36:40Z, container_name=ovn_controller, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, vendor=Red Hat, Inc., config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-ovn-controller, version=17.1.13, build-date=2026-01-12T22:36:40Z, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, maintainer=OpenStack TripleO Team, release=1766032510, io.buildah.version=1.41.5)
Feb 20 08:44:37 np0005625203.localdomain podman[99046]: unhealthy
Feb 20 08:44:37 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:44:37 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Failed with result 'exit-code'.
Feb 20 08:44:37 np0005625203.localdomain podman[99035]: 2026-02-20 08:44:37.077360355 +0000 UTC m=+0.384004437 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, architecture=x86_64, config_id=tripleo_step1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, release=1766032510, distribution-scope=public, io.openshift.expose-services=, name=rhosp-rhel9/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, version=17.1.13, container_name=metrics_qdr)
Feb 20 08:44:37 np0005625203.localdomain podman[99027]: 2026-02-20 08:44:37.101508244 +0000 UTC m=+0.418754654 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, distribution-scope=public, version=17.1.13, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=logrotate_crond, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, vcs-type=git, name=rhosp-rhel9/openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, config_id=tripleo_step4, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:44:37 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:44:37 np0005625203.localdomain podman[99035]: 2026-02-20 08:44:37.282258098 +0000 UTC m=+0.588902200 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:14Z, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, distribution-scope=public, vendor=Red Hat, Inc., container_name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-qdrouterd-container, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, tcib_managed=true, vcs-type=git, name=rhosp-rhel9/openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20260112.1, build-date=2026-01-12T22:10:14Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, architecture=x86_64, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd)
Feb 20 08:44:37 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:44:37 np0005625203.localdomain systemd[1]: tmp-crun.SEfLmk.mount: Deactivated successfully.
Feb 20 08:44:44 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:44:44 np0005625203.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Feb 20 08:44:44 np0005625203.localdomain recover_tripleo_nova_virtqemud[99203]: 62505
Feb 20 08:44:44 np0005625203.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Feb 20 08:44:44 np0005625203.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Feb 20 08:44:44 np0005625203.localdomain podman[99196]: 2026-02-20 08:44:44.771929603 +0000 UTC m=+0.088013644 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, distribution-scope=public, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, tcib_managed=true, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, vcs-type=git, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z, vendor=Red Hat, Inc.)
Feb 20 08:44:45 np0005625203.localdomain podman[99196]: 2026-02-20 08:44:45.164294977 +0000 UTC m=+0.480379008 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, release=1766032510, config_id=tripleo_step4, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container)
Feb 20 08:44:45 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:45:00 np0005625203.localdomain sshd[99221]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:45:00 np0005625203.localdomain sshd[99221]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:45:00 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:45:00 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:45:00 np0005625203.localdomain podman[99223]: 2026-02-20 08:45:00.303693385 +0000 UTC m=+0.086142816 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, com.redhat.component=openstack-collectd-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:15Z, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.expose-services=, container_name=collectd, managed_by=tripleo_ansible, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, version=17.1.13, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-collectd, release=1766032510)
Feb 20 08:45:00 np0005625203.localdomain podman[99223]: 2026-02-20 08:45:00.310922339 +0000 UTC m=+0.093371700 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:15Z, 
config_id=tripleo_step3, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, distribution-scope=public, vcs-type=git, build-date=2026-01-12T22:10:15Z, tcib_managed=true, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-collectd, managed_by=tripleo_ansible, release=1766032510, container_name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:45:00 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:45:00 np0005625203.localdomain systemd[1]: tmp-crun.laEKib.mount: Deactivated successfully.
Feb 20 08:45:00 np0005625203.localdomain podman[99224]: 2026-02-20 08:45:00.365145072 +0000 UTC m=+0.145504378 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, vendor=Red Hat, Inc., build-date=2026-01-12T22:34:43Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.5, batch=17.1_20260112.1, container_name=iscsid, vcs-type=git, name=rhosp-rhel9/openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=705339545363fec600102567c4e923938e0f43b3, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, url=https://www.redhat.com, tcib_managed=true, version=17.1.13, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z)
Feb 20 08:45:00 np0005625203.localdomain podman[99224]: 2026-02-20 08:45:00.402497923 +0000 UTC m=+0.182857219 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, io.buildah.version=1.41.5, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:34:43Z, vendor=Red Hat, Inc., version=17.1.13, architecture=x86_64, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, com.redhat.component=openstack-iscsid-container, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, build-date=2026-01-12T22:34:43Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, vcs-type=git)
Feb 20 08:45:00 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:45:07 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:45:07 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:45:07 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:45:07 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:45:07 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:45:07 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:45:07 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:45:07 np0005625203.localdomain systemd[1]: tmp-crun.CXw402.mount: Deactivated successfully.
Feb 20 08:45:07 np0005625203.localdomain podman[99277]: 2026-02-20 08:45:07.796543277 +0000 UTC m=+0.094519448 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-type=git, container_name=ovn_metadata_agent, url=https://www.redhat.com, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:56:19Z, build-date=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, tcib_managed=true, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']})
Feb 20 08:45:07 np0005625203.localdomain podman[99277]: 2026-02-20 08:45:07.81209383 +0000 UTC m=+0.110070041 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, tcib_managed=true, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, config_id=tripleo_step4, build-date=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, batch=17.1_20260112.1, release=1766032510, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn)
Feb 20 08:45:07 np0005625203.localdomain podman[99277]: unhealthy
Feb 20 08:45:07 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:45:07 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Failed with result 'exit-code'.
Feb 20 08:45:07 np0005625203.localdomain podman[99272]: 2026-02-20 08:45:07.77338408 +0000 UTC m=+0.078181023 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.buildah.version=1.41.5, config_id=tripleo_step1, name=rhosp-rhel9/openstack-qdrouterd, vendor=Red Hat, Inc., release=1766032510, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:14Z, 
konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, tcib_managed=true, build-date=2026-01-12T22:10:14Z, url=https://www.redhat.com, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee)
Feb 20 08:45:07 np0005625203.localdomain podman[99283]: 2026-02-20 08:45:07.825147583 +0000 UTC m=+0.123886468 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.created=2026-01-12T22:36:40Z, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ovn_controller, config_id=tripleo_step4, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ovn-controller, vcs-type=git, batch=17.1_20260112.1, release=1766032510, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
url=https://www.redhat.com, tcib_managed=true, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc., version=17.1.13, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:36:40Z, description=Red Hat OpenStack Platform 17.1 ovn-controller)
Feb 20 08:45:07 np0005625203.localdomain podman[99263]: 2026-02-20 08:45:07.883826411 +0000 UTC m=+0.193008609 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, io.buildah.version=1.41.5, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, distribution-scope=public, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, build-date=2026-01-12T23:07:47Z, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:07:47Z, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ceilometer-compute)
Feb 20 08:45:07 np0005625203.localdomain podman[99283]: 2026-02-20 08:45:07.9044331 +0000 UTC m=+0.203171995 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1766032510, io.buildah.version=1.41.5, build-date=2026-01-12T22:36:40Z, architecture=x86_64, container_name=ovn_controller, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ovn-controller, vendor=Red Hat, Inc., managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 
url=https://www.redhat.com, batch=17.1_20260112.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team)
Feb 20 08:45:07 np0005625203.localdomain podman[99283]: unhealthy
Feb 20 08:45:07 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:45:07 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Failed with result 'exit-code'.
Feb 20 08:45:07 np0005625203.localdomain podman[99263]: 2026-02-20 08:45:07.923241882 +0000 UTC m=+0.232424100 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ceilometer_agent_compute, distribution-scope=public, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, 
Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, managed_by=tripleo_ansible, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20260112.1, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:07:47Z, url=https://www.redhat.com, vcs-type=git, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, build-date=2026-01-12T23:07:47Z)
Feb 20 08:45:07 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:45:07 np0005625203.localdomain podman[99272]: 2026-02-20 08:45:07.974233512 +0000 UTC m=+0.279030505 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, version=17.1.13, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, architecture=x86_64, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, url=https://www.redhat.com, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.created=2026-01-12T22:10:14Z, name=rhosp-rhel9/openstack-qdrouterd, io.buildah.version=1.41.5, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:45:07 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:45:08 np0005625203.localdomain podman[99262]: 2026-02-20 08:45:07.978473383 +0000 UTC m=+0.292722188 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., batch=17.1_20260112.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, release=1766032510, com.redhat.component=openstack-nova-compute-container, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 
'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:32:04Z, tcib_managed=true)
Feb 20 08:45:08 np0005625203.localdomain podman[99261]: 2026-02-20 08:45:08.047573174 +0000 UTC m=+0.363839452 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, vcs-type=git, io.openshift.expose-services=, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, release=1766032510, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-cron-container, version=17.1.13, io.buildah.version=1.41.5, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, 
maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, container_name=logrotate_crond, batch=17.1_20260112.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:45:08 np0005625203.localdomain podman[99262]: 2026-02-20 08:45:08.058701289 +0000 UTC m=+0.372950134 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, config_id=tripleo_step5, architecture=x86_64, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, vcs-type=git, batch=17.1_20260112.1)
Feb 20 08:45:08 np0005625203.localdomain podman[99261]: 2026-02-20 08:45:08.059026219 +0000 UTC m=+0.375292487 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, name=rhosp-rhel9/openstack-cron, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, 
summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20260112.1, build-date=2026-01-12T22:10:15Z, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, tcib_managed=true, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:45:08 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:45:08 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:45:08 np0005625203.localdomain podman[99264]: 2026-02-20 08:45:08.196139786 +0000 UTC m=+0.494986584 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.expose-services=, release=1766032510, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, distribution-scope=public, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:07:30Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:07:30Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, tcib_managed=true, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06)
Feb 20 08:45:08 np0005625203.localdomain podman[99264]: 2026-02-20 08:45:08.250346175 +0000 UTC m=+0.549192973 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, architecture=x86_64, version=17.1.13, io.buildah.version=1.41.5, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:07:30Z, build-date=2026-01-12T23:07:30Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, name=rhosp-rhel9/openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ceilometer_agent_ipmi)
Feb 20 08:45:08 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:45:15 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:45:15 np0005625203.localdomain podman[99429]: 2026-02-20 08:45:15.761754899 +0000 UTC m=+0.081719083 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_migration_target, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, 
konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, vendor=Red Hat, Inc., release=1766032510, vcs-type=git, managed_by=tripleo_ansible, batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step4)
Feb 20 08:45:16 np0005625203.localdomain podman[99429]: 2026-02-20 08:45:16.126418405 +0000 UTC m=+0.446382549 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, 
distribution-scope=public, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, container_name=nova_migration_target, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, config_id=tripleo_step4, url=https://www.redhat.com)
Feb 20 08:45:16 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:45:21 np0005625203.localdomain sudo[99450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:45:21 np0005625203.localdomain sudo[99450]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:45:21 np0005625203.localdomain sudo[99450]: pam_unix(sudo:session): session closed for user root
Feb 20 08:45:21 np0005625203.localdomain sudo[99465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 08:45:21 np0005625203.localdomain sudo[99465]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:45:22 np0005625203.localdomain sudo[99465]: pam_unix(sudo:session): session closed for user root
Feb 20 08:45:23 np0005625203.localdomain sudo[99512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 08:45:23 np0005625203.localdomain sudo[99512]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:45:23 np0005625203.localdomain sudo[99512]: pam_unix(sudo:session): session closed for user root
Feb 20 08:45:30 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:45:30 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:45:30 np0005625203.localdomain podman[99528]: 2026-02-20 08:45:30.780534443 +0000 UTC m=+0.095777929 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, vcs-type=git, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, version=17.1.13, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:34:43Z, architecture=x86_64, vcs-ref=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 iscsid, 
io.openshift.expose-services=, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, url=https://www.redhat.com, io.buildah.version=1.41.5, build-date=2026-01-12T22:34:43Z, config_id=tripleo_step3, container_name=iscsid)
Feb 20 08:45:30 np0005625203.localdomain systemd[1]: tmp-crun.FJwc5H.mount: Deactivated successfully.
Feb 20 08:45:30 np0005625203.localdomain podman[99527]: 2026-02-20 08:45:30.819736678 +0000 UTC m=+0.137215923 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, release=1766032510, config_id=tripleo_step3, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.buildah.version=1.41.5, url=https://www.redhat.com, tcib_managed=true, version=17.1.13, summary=Red 
Hat OpenStack Platform 17.1 collectd, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:45:30 np0005625203.localdomain podman[99527]: 2026-02-20 08:45:30.834599438 +0000 UTC m=+0.152078693 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, distribution-scope=public, config_id=tripleo_step3, url=https://www.redhat.com, version=17.1.13, container_name=collectd, io.buildah.version=1.41.5, architecture=x86_64, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, com.redhat.component=openstack-collectd-container, org.opencontainers.image.created=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, 
vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-collectd, release=1766032510, build-date=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team)
Feb 20 08:45:30 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:45:30 np0005625203.localdomain podman[99528]: 2026-02-20 08:45:30.872240354 +0000 UTC m=+0.187483830 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:34:43Z, container_name=iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, 
description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, vendor=Red Hat, Inc., io.buildah.version=1.41.5, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, architecture=x86_64, distribution-scope=public, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.13)
Feb 20 08:45:30 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:45:36 np0005625203.localdomain sshd[99567]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:45:36 np0005625203.localdomain sshd[99567]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:45:37 np0005625203.localdomain sshd[99569]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:45:38 np0005625203.localdomain sshd[99569]: Invalid user n8n from 103.200.25.162 port 38826
Feb 20 08:45:38 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:45:38 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:45:38 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:45:38 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:45:38 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:45:38 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:45:38 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:45:38 np0005625203.localdomain systemd[1]: tmp-crun.WbIzIt.mount: Deactivated successfully.
Feb 20 08:45:38 np0005625203.localdomain podman[99573]: 2026-02-20 08:45:38.503641867 +0000 UTC m=+0.101458204 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, build-date=2026-01-12T23:07:47Z, name=rhosp-rhel9/openstack-ceilometer-compute, io.buildah.version=1.41.5, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, batch=17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, container_name=ceilometer_agent_compute, io.openshift.expose-services=, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:07:47Z, architecture=x86_64)
Feb 20 08:45:38 np0005625203.localdomain podman[99574]: 2026-02-20 08:45:38.541454348 +0000 UTC m=+0.138159460 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.openshift.expose-services=, url=https://www.redhat.com, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:30Z, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, architecture=x86_64, name=rhosp-rhel9/openstack-ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, container_name=ceilometer_agent_ipmi, vcs-type=git, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Feb 20 08:45:38 np0005625203.localdomain podman[99582]: 2026-02-20 08:45:38.552922513 +0000 UTC m=+0.138887483 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, build-date=2026-01-12T22:10:14Z, 
url=https://www.redhat.com, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, tcib_managed=true)
Feb 20 08:45:38 np0005625203.localdomain podman[99573]: 2026-02-20 08:45:38.557169135 +0000 UTC m=+0.154985492 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, io.buildah.version=1.41.5, tcib_managed=true, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-compute, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-type=git, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., build-date=2026-01-12T23:07:47Z, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, version=17.1.13, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06)
Feb 20 08:45:38 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:45:38 np0005625203.localdomain podman[99574]: 2026-02-20 08:45:38.570294591 +0000 UTC m=+0.166999763 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, batch=17.1_20260112.1, io.openshift.expose-services=, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:07:30Z, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi)
Feb 20 08:45:38 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:45:38 np0005625203.localdomain podman[99571]: 2026-02-20 08:45:38.610975812 +0000 UTC m=+0.213594437 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:15Z, container_name=logrotate_crond, io.openshift.expose-services=, build-date=2026-01-12T22:10:15Z, 
tcib_managed=true, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-cron, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, version=17.1.13)
Feb 20 08:45:38 np0005625203.localdomain podman[99571]: 2026-02-20 08:45:38.620371932 +0000 UTC m=+0.222990557 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-cron, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, version=17.1.13, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:10:15Z, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, release=1766032510)
Feb 20 08:45:38 np0005625203.localdomain sshd[99569]: Received disconnect from 103.200.25.162 port 38826:11: Bye Bye [preauth]
Feb 20 08:45:38 np0005625203.localdomain sshd[99569]: Disconnected from invalid user n8n 103.200.25.162 port 38826 [preauth]
Feb 20 08:45:38 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:45:38 np0005625203.localdomain podman[99583]: 2026-02-20 08:45:38.713351262 +0000 UTC m=+0.301816699 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, release=1766032510, org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-type=git, build-date=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, batch=17.1_20260112.1, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ovn_metadata_agent, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, version=17.1.13, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, config_id=tripleo_step4)
Feb 20 08:45:38 np0005625203.localdomain podman[99582]: 2026-02-20 08:45:38.75328485 +0000 UTC m=+0.339249890 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-qdrouterd, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, io.openshift.expose-services=, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git)
Feb 20 08:45:38 np0005625203.localdomain podman[99572]: 2026-02-20 08:45:38.766012134 +0000 UTC m=+0.358488476 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, architecture=x86_64, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, build-date=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1766032510, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 
'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., version=17.1.13)
Feb 20 08:45:38 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:45:38 np0005625203.localdomain podman[99583]: 2026-02-20 08:45:38.787320074 +0000 UTC m=+0.375785491 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, org.opencontainers.image.created=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, maintainer=OpenStack TripleO Team, distribution-scope=public, batch=17.1_20260112.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, io.openshift.expose-services=, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, release=1766032510, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ovn_metadata_agent, build-date=2026-01-12T22:56:19Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.13)
Feb 20 08:45:38 np0005625203.localdomain podman[99583]: unhealthy
Feb 20 08:45:38 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:45:38 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Failed with result 'exit-code'.
Feb 20 08:45:38 np0005625203.localdomain podman[99572]: 2026-02-20 08:45:38.826470297 +0000 UTC m=+0.418946669 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:32:04Z, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git)
Feb 20 08:45:38 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:45:38 np0005625203.localdomain podman[99594]: 2026-02-20 08:45:38.869491949 +0000 UTC m=+0.453572051 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:36:40Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.created=2026-01-12T22:36:40Z, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vcs-type=git, name=rhosp-rhel9/openstack-ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, batch=17.1_20260112.1, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, 
org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:45:38 np0005625203.localdomain podman[99594]: 2026-02-20 08:45:38.893337208 +0000 UTC m=+0.477417320 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, distribution-scope=public, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, build-date=2026-01-12T22:36:40Z, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:36:40Z, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, version=17.1.13, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, release=1766032510, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp-rhel9/openstack-ovn-controller)
Feb 20 08:45:38 np0005625203.localdomain podman[99594]: unhealthy
Feb 20 08:45:38 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:45:38 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Failed with result 'exit-code'.
Feb 20 08:45:39 np0005625203.localdomain systemd[1]: tmp-crun.90eRWs.mount: Deactivated successfully.
Feb 20 08:45:44 np0005625203.localdomain sshd[99738]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:45:44 np0005625203.localdomain sshd[99738]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:45:46 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:45:46 np0005625203.localdomain podman[99740]: 2026-02-20 08:45:46.78952748 +0000 UTC m=+0.109794492 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, tcib_managed=true, release=1766032510, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe)
Feb 20 08:45:47 np0005625203.localdomain podman[99740]: 2026-02-20 08:45:47.164299969 +0000 UTC m=+0.484566991 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, distribution-scope=public, io.buildah.version=1.41.5, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, managed_by=tripleo_ansible, io.openshift.expose-services=, build-date=2026-01-12T23:32:04Z, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container)
Feb 20 08:45:47 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:45:59 np0005625203.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Feb 20 08:45:59 np0005625203.localdomain recover_tripleo_nova_virtqemud[99763]: 62505
Feb 20 08:45:59 np0005625203.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Feb 20 08:45:59 np0005625203.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Feb 20 08:46:01 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:46:01 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:46:01 np0005625203.localdomain systemd[1]: tmp-crun.TgUcIf.mount: Deactivated successfully.
Feb 20 08:46:01 np0005625203.localdomain podman[99765]: 2026-02-20 08:46:01.781747772 +0000 UTC m=+0.096948194 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, com.redhat.component=openstack-iscsid-container, build-date=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, batch=17.1_20260112.1, io.buildah.version=1.41.5, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, architecture=x86_64, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step3, tcib_managed=true, url=https://www.redhat.com, vendor=Red Hat, Inc., distribution-scope=public)
Feb 20 08:46:01 np0005625203.localdomain podman[99765]: 2026-02-20 08:46:01.796292953 +0000 UTC m=+0.111493345 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-type=git, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, name=rhosp-rhel9/openstack-iscsid, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
distribution-scope=public, build-date=2026-01-12T22:34:43Z, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:34:43Z, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, config_id=tripleo_step3, tcib_managed=true, vcs-ref=705339545363fec600102567c4e923938e0f43b3, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:46:01 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:46:01 np0005625203.localdomain systemd[1]: tmp-crun.bwScHt.mount: Deactivated successfully.
Feb 20 08:46:02 np0005625203.localdomain podman[99764]: 2026-02-20 08:46:02.053358956 +0000 UTC m=+0.369974692 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, 
batch=17.1_20260112.1, release=1766032510, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=collectd, maintainer=OpenStack TripleO Team, tcib_managed=true, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.created=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd)
Feb 20 08:46:02 np0005625203.localdomain podman[99764]: 2026-02-20 08:46:02.088615918 +0000 UTC m=+0.405231674 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, managed_by=tripleo_ansible, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, version=17.1.13, io.buildah.version=1.41.5, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=openstack-collectd-container, name=rhosp-rhel9/openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, architecture=x86_64, container_name=collectd, release=1766032510)
Feb 20 08:46:02 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:46:08 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:46:08 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:46:08 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:46:08 np0005625203.localdomain systemd[1]: tmp-crun.HA9ArP.mount: Deactivated successfully.
Feb 20 08:46:08 np0005625203.localdomain podman[99803]: 2026-02-20 08:46:08.770769466 +0000 UTC m=+0.084932192 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., release=1766032510, config_id=tripleo_step4, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, version=17.1.13, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.created=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible)
Feb 20 08:46:08 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:46:08 np0005625203.localdomain podman[99803]: 2026-02-20 08:46:08.807291917 +0000 UTC m=+0.121454643 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-cron, maintainer=OpenStack TripleO Team, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-cron-container, batch=17.1_20260112.1, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:46:08 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:46:08 np0005625203.localdomain podman[99804]: 2026-02-20 08:46:08.83159639 +0000 UTC m=+0.144082065 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, architecture=x86_64, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:07:47Z, org.opencontainers.image.created=2026-01-12T23:07:47Z, config_id=tripleo_step4, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ceilometer-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, batch=17.1_20260112.1, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5, vcs-type=git, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, url=https://www.redhat.com)
Feb 20 08:46:08 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:46:08 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:46:08 np0005625203.localdomain podman[99804]: 2026-02-20 08:46:08.867862954 +0000 UTC m=+0.180348659 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, config_id=tripleo_step4, url=https://www.redhat.com, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, version=17.1.13, name=rhosp-rhel9/openstack-ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, 
release=1766032510, org.opencontainers.image.created=2026-01-12T23:07:47Z, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., batch=17.1_20260112.1, build-date=2026-01-12T23:07:47Z, io.buildah.version=1.41.5, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Feb 20 08:46:08 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:46:08 np0005625203.localdomain podman[99805]: 2026-02-20 08:46:08.87002239 +0000 UTC m=+0.179412089 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, name=rhosp-rhel9/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.expose-services=, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:07:30Z, distribution-scope=public, version=17.1.13, container_name=ceilometer_agent_ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:30Z, architecture=x86_64, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Feb 20 08:46:08 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:46:08 np0005625203.localdomain podman[99844]: 2026-02-20 08:46:08.950914326 +0000 UTC m=+0.142321720 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, url=https://www.redhat.com, build-date=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.buildah.version=1.41.5, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 qdrouterd)
Feb 20 08:46:08 np0005625203.localdomain podman[99862]: 2026-02-20 08:46:08.916439698 +0000 UTC m=+0.061770534 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, architecture=x86_64, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, build-date=2026-01-12T22:56:19Z, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, container_name=ovn_metadata_agent, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Feb 20 08:46:08 np0005625203.localdomain podman[99864]: 2026-02-20 08:46:08.993539946 +0000 UTC m=+0.132175855 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, managed_by=tripleo_ansible, architecture=x86_64, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2026-01-12T23:32:04Z, container_name=nova_compute, version=17.1.13, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, release=1766032510, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com, config_id=tripleo_step5, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Feb 20 08:46:08 np0005625203.localdomain podman[99862]: 2026-02-20 08:46:08.999335196 +0000 UTC m=+0.144666032 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, release=1766032510, version=17.1.13, config_id=tripleo_step4, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, build-date=2026-01-12T22:56:19Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, vcs-type=git, tcib_managed=true, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.expose-services=, url=https://www.redhat.com)
Feb 20 08:46:09 np0005625203.localdomain podman[99862]: unhealthy
Feb 20 08:46:09 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:46:09 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Failed with result 'exit-code'.
Feb 20 08:46:09 np0005625203.localdomain podman[99864]: 2026-02-20 08:46:09.022547575 +0000 UTC m=+0.161183444 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, managed_by=tripleo_ansible, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, batch=17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, version=17.1.13, config_id=tripleo_step5, vendor=Red Hat, Inc., build-date=2026-01-12T23:32:04Z, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=)
Feb 20 08:46:09 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:46:09 np0005625203.localdomain podman[99921]: 2026-02-20 08:46:09.001869124 +0000 UTC m=+0.047779391 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, vendor=Red Hat, Inc., org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:36:40Z, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2026-01-12T22:36:40Z, container_name=ovn_controller, architecture=x86_64, batch=17.1_20260112.1, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, name=rhosp-rhel9/openstack-ovn-controller, release=1766032510, version=17.1.13, 
distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, managed_by=tripleo_ansible)
Feb 20 08:46:09 np0005625203.localdomain podman[99921]: 2026-02-20 08:46:09.088477057 +0000 UTC m=+0.134387284 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, config_id=tripleo_step4, tcib_managed=true, url=https://www.redhat.com, version=17.1.13, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, vcs-type=git, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, name=rhosp-rhel9/openstack-ovn-controller, batch=17.1_20260112.1, build-date=2026-01-12T22:36:40Z, org.opencontainers.image.created=2026-01-12T22:36:40Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, vendor=Red Hat, Inc.)
Feb 20 08:46:09 np0005625203.localdomain podman[99921]: unhealthy
Feb 20 08:46:09 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:46:09 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Failed with result 'exit-code'.
Feb 20 08:46:09 np0005625203.localdomain podman[99805]: 2026-02-20 08:46:09.104297637 +0000 UTC m=+0.413687376 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-type=git, build-date=2026-01-12T23:07:30Z, distribution-scope=public, config_id=tripleo_step4, url=https://www.redhat.com, managed_by=tripleo_ansible, io.buildah.version=1.41.5, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, name=rhosp-rhel9/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:30Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06)
Feb 20 08:46:09 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:46:09 np0005625203.localdomain podman[99844]: 2026-02-20 08:46:09.151315344 +0000 UTC m=+0.342722758 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, config_id=tripleo_step1, org.opencontainers.image.created=2026-01-12T22:10:14Z, version=17.1.13, architecture=x86_64, build-date=2026-01-12T22:10:14Z, url=https://www.redhat.com, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1766032510, io.buildah.version=1.41.5, vcs-type=git, batch=17.1_20260112.1, container_name=metrics_qdr, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:46:09 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:46:09 np0005625203.localdomain systemd[1]: tmp-crun.0KTfOf.mount: Deactivated successfully.
Feb 20 08:46:10 np0005625203.localdomain sshd[99969]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:46:10 np0005625203.localdomain sshd[99969]: Received disconnect from 43.245.222.27 port 58572:11: Bye Bye [preauth]
Feb 20 08:46:10 np0005625203.localdomain sshd[99969]: Disconnected from 43.245.222.27 port 58572 [preauth]
Feb 20 08:46:17 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:46:17 np0005625203.localdomain podman[99971]: 2026-02-20 08:46:17.767679245 +0000 UTC m=+0.088426109 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, release=1766032510, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.13, config_id=tripleo_step4, name=rhosp-rhel9/openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com)
Feb 20 08:46:18 np0005625203.localdomain podman[99971]: 2026-02-20 08:46:18.16522826 +0000 UTC m=+0.485975054 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.5, version=17.1.13, 
tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, url=https://www.redhat.com, distribution-scope=public, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:46:18 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:46:23 np0005625203.localdomain sudo[99994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:46:23 np0005625203.localdomain sudo[99994]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:46:23 np0005625203.localdomain sudo[99994]: pam_unix(sudo:session): session closed for user root
Feb 20 08:46:23 np0005625203.localdomain sudo[100009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 08:46:23 np0005625203.localdomain sudo[100009]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:46:23 np0005625203.localdomain sudo[100009]: pam_unix(sudo:session): session closed for user root
Feb 20 08:46:24 np0005625203.localdomain sudo[100055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 08:46:24 np0005625203.localdomain sudo[100055]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:46:24 np0005625203.localdomain sudo[100055]: pam_unix(sudo:session): session closed for user root
Feb 20 08:46:29 np0005625203.localdomain sshd[100070]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:46:29 np0005625203.localdomain sshd[100070]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:46:29 np0005625203.localdomain sshd[100072]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:46:30 np0005625203.localdomain sshd[100072]: Invalid user claude from 212.154.234.9 port 10574
Feb 20 08:46:30 np0005625203.localdomain sshd[100072]: Received disconnect from 212.154.234.9 port 10574:11: Bye Bye [preauth]
Feb 20 08:46:30 np0005625203.localdomain sshd[100072]: Disconnected from invalid user claude 212.154.234.9 port 10574 [preauth]
Feb 20 08:46:32 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:46:32 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:46:32 np0005625203.localdomain podman[100075]: 2026-02-20 08:46:32.789969839 +0000 UTC m=+0.098905155 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, tcib_managed=true, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, com.redhat.component=openstack-iscsid-container, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-ref=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 
17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:34:43Z, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=iscsid, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, batch=17.1_20260112.1, io.openshift.expose-services=, build-date=2026-01-12T22:34:43Z, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:46:32 np0005625203.localdomain podman[100075]: 2026-02-20 08:46:32.827321595 +0000 UTC m=+0.136256871 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:34:43Z, url=https://www.redhat.com, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-iscsid-container, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., version=17.1.13, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1766032510, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.5, config_id=tripleo_step3, container_name=iscsid, name=rhosp-rhel9/openstack-iscsid, io.openshift.expose-services=)
Feb 20 08:46:32 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:46:32 np0005625203.localdomain podman[100074]: 2026-02-20 08:46:32.881738381 +0000 UTC m=+0.188146348 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, batch=17.1_20260112.1, managed_by=tripleo_ansible, container_name=collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, vcs-type=git, build-date=2026-01-12T22:10:15Z, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., architecture=x86_64, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.buildah.version=1.41.5, io.openshift.expose-services=)
Feb 20 08:46:32 np0005625203.localdomain podman[100074]: 2026-02-20 08:46:32.897239031 +0000 UTC m=+0.203647008 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:15Z, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., version=17.1.13, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, url=https://www.redhat.com, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, container_name=collectd, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5)
Feb 20 08:46:32 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:46:39 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:46:39 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:46:39 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:46:39 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:46:39 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:46:39 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:46:39 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:46:39 np0005625203.localdomain systemd[1]: tmp-crun.upH4lC.mount: Deactivated successfully.
Feb 20 08:46:39 np0005625203.localdomain podman[100126]: 2026-02-20 08:46:39.844070659 +0000 UTC m=+0.139739620 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, vcs-type=git, url=https://www.redhat.com, tcib_managed=true, name=rhosp-rhel9/openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, vendor=Red Hat, Inc., release=1766032510, config_id=tripleo_step1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, build-date=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd)
Feb 20 08:46:39 np0005625203.localdomain podman[100114]: 2026-02-20 08:46:39.799466427 +0000 UTC m=+0.103891379 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, io.openshift.expose-services=, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, container_name=ceilometer_agent_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, build-date=2026-01-12T23:07:47Z)
Feb 20 08:46:39 np0005625203.localdomain podman[100114]: 2026-02-20 08:46:39.883257773 +0000 UTC m=+0.187682765 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, batch=17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, tcib_managed=true, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, release=1766032510, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2026-01-12T23:07:47Z, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, 
org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:07:47Z, architecture=x86_64, name=rhosp-rhel9/openstack-ceilometer-compute, io.openshift.expose-services=, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, managed_by=tripleo_ansible)
Feb 20 08:46:39 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:46:39 np0005625203.localdomain podman[100113]: 2026-02-20 08:46:39.894814421 +0000 UTC m=+0.203024310 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, io.buildah.version=1.41.5, config_id=tripleo_step5, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, batch=17.1_20260112.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1766032510, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, name=rhosp-rhel9/openstack-nova-compute)
Feb 20 08:46:39 np0005625203.localdomain podman[100112]: 2026-02-20 08:46:39.947631227 +0000 UTC m=+0.260528041 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, config_id=tripleo_step4, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-cron, batch=17.1_20260112.1, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, release=1766032510, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron)
Feb 20 08:46:39 np0005625203.localdomain podman[100139]: 2026-02-20 08:46:39.817352141 +0000 UTC m=+0.102735163 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.13, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20260112.1, release=1766032510, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, vcs-type=git, container_name=ovn_controller, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 
ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, build-date=2026-01-12T22:36:40Z, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller)
Feb 20 08:46:39 np0005625203.localdomain podman[100113]: 2026-02-20 08:46:39.999533444 +0000 UTC m=+0.307743343 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-compute, release=1766032510, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', 
'/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, config_id=tripleo_step5, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, architecture=x86_64, tcib_managed=true, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:46:40 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:46:40 np0005625203.localdomain podman[100120]: 2026-02-20 08:46:40.017107819 +0000 UTC m=+0.315595187 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ceilometer-ipmi, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, release=1766032510, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, 
maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20260112.1, build-date=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.13, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Feb 20 08:46:40 np0005625203.localdomain podman[100120]: 2026-02-20 08:46:40.047201881 +0000 UTC m=+0.345689249 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., config_id=tripleo_step4, name=rhosp-rhel9/openstack-ceilometer-ipmi, version=17.1.13, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, architecture=x86_64, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:07:30Z, vcs-type=git, distribution-scope=public, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, managed_by=tripleo_ansible, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Feb 20 08:46:40 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:46:40 np0005625203.localdomain podman[100132]: 2026-02-20 08:46:40.066823189 +0000 UTC m=+0.358088773 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.buildah.version=1.41.5, container_name=ovn_metadata_agent, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20260112.1, vcs-type=git, config_id=tripleo_step4, url=https://www.redhat.com, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:56:19Z, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, release=1766032510, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Feb 20 08:46:40 np0005625203.localdomain podman[100132]: 2026-02-20 08:46:40.081243946 +0000 UTC m=+0.372509560 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., batch=17.1_20260112.1, container_name=ovn_metadata_agent, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, managed_by=tripleo_ansible, io.openshift.expose-services=, tcib_managed=true, release=1766032510, org.opencontainers.image.created=2026-01-12T22:56:19Z)
Feb 20 08:46:40 np0005625203.localdomain podman[100132]: unhealthy
Feb 20 08:46:40 np0005625203.localdomain podman[100126]: 2026-02-20 08:46:40.089017276 +0000 UTC m=+0.384686257 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=metrics_qdr, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step1, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.13, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, 
com.redhat.component=openstack-qdrouterd-container, build-date=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, io.openshift.expose-services=, io.buildah.version=1.41.5, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, url=https://www.redhat.com, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, release=1766032510, maintainer=OpenStack TripleO Team)
Feb 20 08:46:40 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:46:40 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Failed with result 'exit-code'.
Feb 20 08:46:40 np0005625203.localdomain podman[100139]: 2026-02-20 08:46:40.103157914 +0000 UTC m=+0.388540986 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, release=1766032510, name=rhosp-rhel9/openstack-ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ovn_controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, vcs-type=git, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, build-date=2026-01-12T22:36:40Z, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 
ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4)
Feb 20 08:46:40 np0005625203.localdomain podman[100139]: unhealthy
Feb 20 08:46:40 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:46:40 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:46:40 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Failed with result 'exit-code'.
Feb 20 08:46:40 np0005625203.localdomain podman[100112]: 2026-02-20 08:46:40.134492716 +0000 UTC m=+0.447389560 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, tcib_managed=true, container_name=logrotate_crond, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, name=rhosp-rhel9/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, build-date=2026-01-12T22:10:15Z, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20260112.1, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, config_id=tripleo_step4)
Feb 20 08:46:40 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:46:48 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:46:48 np0005625203.localdomain podman[100279]: 2026-02-20 08:46:48.76646544 +0000 UTC m=+0.083010612 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, distribution-scope=public, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, build-date=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, release=1766032510, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc.)
Feb 20 08:46:49 np0005625203.localdomain podman[100279]: 2026-02-20 08:46:49.128866866 +0000 UTC m=+0.445412008 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., container_name=nova_migration_target, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, build-date=2026-01-12T23:32:04Z, vcs-type=git, maintainer=OpenStack TripleO Team, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, version=17.1.13, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, managed_by=tripleo_ansible, tcib_managed=true, url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-compute)
Feb 20 08:46:49 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:46:58 np0005625203.localdomain sshd[100301]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:46:59 np0005625203.localdomain sshd[100301]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:47:03 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:47:03 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:47:03 np0005625203.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Feb 20 08:47:03 np0005625203.localdomain recover_tripleo_nova_virtqemud[100316]: 62505
Feb 20 08:47:03 np0005625203.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Feb 20 08:47:03 np0005625203.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Feb 20 08:47:03 np0005625203.localdomain podman[100304]: 2026-02-20 08:47:03.752063465 +0000 UTC m=+0.066476669 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:34:43Z, vcs-ref=705339545363fec600102567c4e923938e0f43b3, container_name=iscsid, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, 
io.openshift.expose-services=, release=1766032510, name=rhosp-rhel9/openstack-iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, distribution-scope=public, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 iscsid)
Feb 20 08:47:03 np0005625203.localdomain podman[100304]: 2026-02-20 08:47:03.766115751 +0000 UTC m=+0.080528955 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, vcs-type=git, release=1766032510, managed_by=tripleo_ansible, build-date=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, 
maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:34:43Z, version=17.1.13, distribution-scope=public, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1)
Feb 20 08:47:03 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:47:03 np0005625203.localdomain podman[100303]: 2026-02-20 08:47:03.816680957 +0000 UTC m=+0.132964239 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, managed_by=tripleo_ansible, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2026-01-12T22:10:15Z, release=1766032510, config_id=tripleo_step3, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-collectd, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, architecture=x86_64)
Feb 20 08:47:03 np0005625203.localdomain podman[100303]: 2026-02-20 08:47:03.832302231 +0000 UTC m=+0.148585473 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, architecture=x86_64, config_id=tripleo_step3, build-date=2026-01-12T22:10:15Z, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:15Z, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-collectd, 
com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, version=17.1.13, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:47:03 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:47:03 np0005625203.localdomain sshd[36473]: Received disconnect from 192.168.122.100 port 48952:11: disconnected by user
Feb 20 08:47:03 np0005625203.localdomain sshd[36473]: Disconnected from user tripleo-admin 192.168.122.100 port 48952
Feb 20 08:47:03 np0005625203.localdomain sshd[36453]: pam_unix(sshd:session): session closed for user tripleo-admin
Feb 20 08:47:03 np0005625203.localdomain systemd[1]: session-28.scope: Deactivated successfully.
Feb 20 08:47:03 np0005625203.localdomain systemd[1]: session-28.scope: Consumed 7min 15.268s CPU time.
Feb 20 08:47:03 np0005625203.localdomain systemd-logind[759]: Session 28 logged out. Waiting for processes to exit.
Feb 20 08:47:03 np0005625203.localdomain systemd-logind[759]: Removed session 28.
Feb 20 08:47:10 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:47:10 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:47:10 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:47:10 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:47:10 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:47:10 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:47:10 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:47:10 np0005625203.localdomain systemd[1]: tmp-crun.MQ61xl.mount: Deactivated successfully.
Feb 20 08:47:10 np0005625203.localdomain podman[100344]: 2026-02-20 08:47:10.784458683 +0000 UTC m=+0.098523313 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:15Z, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, release=1766032510, config_id=tripleo_step4, 
container_name=logrotate_crond, vcs-type=git, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:47:10 np0005625203.localdomain podman[100345]: 2026-02-20 08:47:10.839091805 +0000 UTC m=+0.147667305 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, release=1766032510, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:32:04Z, architecture=x86_64, tcib_managed=true, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute, distribution-scope=public)
Feb 20 08:47:10 np0005625203.localdomain podman[100354]: 2026-02-20 08:47:10.895707759 +0000 UTC m=+0.194305760 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, config_id=tripleo_step4, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:56:19Z, build-date=2026-01-12T22:56:19Z, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, distribution-scope=public)
Feb 20 08:47:10 np0005625203.localdomain podman[100346]: 2026-02-20 08:47:10.950029712 +0000 UTC m=+0.257872089 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, batch=17.1_20260112.1, version=17.1.13, io.openshift.expose-services=, architecture=x86_64, release=1766032510, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, container_name=ceilometer_agent_compute, build-date=2026-01-12T23:07:47Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4)
Feb 20 08:47:10 np0005625203.localdomain podman[100354]: 2026-02-20 08:47:10.964524331 +0000 UTC m=+0.263122342 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20260112.1, 
io.openshift.expose-services=, architecture=x86_64, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, release=1766032510, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4)
Feb 20 08:47:10 np0005625203.localdomain podman[100363]: 2026-02-20 08:47:10.813242115 +0000 UTC m=+0.108729070 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.openshift.expose-services=, container_name=ovn_controller, release=1766032510, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, io.buildah.version=1.41.5, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2026-01-12T22:36:40Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ovn-controller-container, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, architecture=x86_64, vcs-type=git, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true)
Feb 20 08:47:11 np0005625203.localdomain podman[100347]: 2026-02-20 08:47:11.004846479 +0000 UTC m=+0.309270390 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.buildah.version=1.41.5, managed_by=tripleo_ansible, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, release=1766032510, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, build-date=2026-01-12T23:07:30Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Feb 20 08:47:11 np0005625203.localdomain podman[100346]: 2026-02-20 08:47:11.00903957 +0000 UTC m=+0.316882017 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., io.buildah.version=1.41.5, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, 
cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-type=git, container_name=ceilometer_agent_compute, io.openshift.expose-services=, build-date=2026-01-12T23:07:47Z, release=1766032510, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:47:11 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:47:11 np0005625203.localdomain podman[100354]: unhealthy
Feb 20 08:47:11 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:47:11 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Failed with result 'exit-code'.
Feb 20 08:47:11 np0005625203.localdomain podman[100347]: 2026-02-20 08:47:11.064487058 +0000 UTC m=+0.368910989 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, version=17.1.13, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.5, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.expose-services=, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, build-date=2026-01-12T23:07:30Z, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, tcib_managed=true, managed_by=tripleo_ansible, release=1766032510, config_id=tripleo_step4, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:07:30Z)
Feb 20 08:47:11 np0005625203.localdomain podman[100345]: 2026-02-20 08:47:11.073521627 +0000 UTC m=+0.382097137 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', 
'/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, managed_by=tripleo_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com, tcib_managed=true, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step5, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z)
Feb 20 08:47:11 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:47:11 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:47:11 np0005625203.localdomain podman[100351]: 2026-02-20 08:47:11.15305156 +0000 UTC m=+0.451805136 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, config_id=tripleo_step1, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp-rhel9/openstack-qdrouterd, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2026-01-12T22:10:14Z, architecture=x86_64, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1)
Feb 20 08:47:11 np0005625203.localdomain podman[100363]: 2026-02-20 08:47:11.169639874 +0000 UTC m=+0.465126809 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, build-date=2026-01-12T22:36:40Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_id=tripleo_step4, container_name=ovn_controller, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, release=1766032510, name=rhosp-rhel9/openstack-ovn-controller)
Feb 20 08:47:11 np0005625203.localdomain podman[100363]: unhealthy
Feb 20 08:47:11 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:47:11 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Failed with result 'exit-code'.
Feb 20 08:47:11 np0005625203.localdomain podman[100344]: 2026-02-20 08:47:11.22243781 +0000 UTC m=+0.536502460 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, architecture=x86_64, vcs-type=git, distribution-scope=public, batch=17.1_20260112.1, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, name=rhosp-rhel9/openstack-cron, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond)
Feb 20 08:47:11 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:47:11 np0005625203.localdomain podman[100351]: 2026-02-20 08:47:11.38384511 +0000 UTC m=+0.682598686 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, io.openshift.expose-services=, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, build-date=2026-01-12T22:10:14Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, architecture=x86_64, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.5, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee)
Feb 20 08:47:11 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:47:13 np0005625203.localdomain sshd[100506]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:47:13 np0005625203.localdomain sshd[100506]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:47:14 np0005625203.localdomain systemd[1]: Stopping User Manager for UID 1003...
Feb 20 08:47:14 np0005625203.localdomain systemd[36457]: Activating special unit Exit the Session...
Feb 20 08:47:14 np0005625203.localdomain systemd[36457]: Removed slice User Background Tasks Slice.
Feb 20 08:47:14 np0005625203.localdomain systemd[36457]: Stopped target Main User Target.
Feb 20 08:47:14 np0005625203.localdomain systemd[36457]: Stopped target Basic System.
Feb 20 08:47:14 np0005625203.localdomain systemd[36457]: Stopped target Paths.
Feb 20 08:47:14 np0005625203.localdomain systemd[36457]: Stopped target Sockets.
Feb 20 08:47:14 np0005625203.localdomain systemd[36457]: Stopped target Timers.
Feb 20 08:47:14 np0005625203.localdomain systemd[36457]: Stopped Mark boot as successful after the user session has run 2 minutes.
Feb 20 08:47:14 np0005625203.localdomain systemd[36457]: Stopped Daily Cleanup of User's Temporary Directories.
Feb 20 08:47:14 np0005625203.localdomain systemd[36457]: Closed D-Bus User Message Bus Socket.
Feb 20 08:47:14 np0005625203.localdomain systemd[36457]: Stopped Create User's Volatile Files and Directories.
Feb 20 08:47:14 np0005625203.localdomain systemd[36457]: Removed slice User Application Slice.
Feb 20 08:47:14 np0005625203.localdomain systemd[36457]: Reached target Shutdown.
Feb 20 08:47:14 np0005625203.localdomain systemd[36457]: Finished Exit the Session.
Feb 20 08:47:14 np0005625203.localdomain systemd[36457]: Reached target Exit the Session.
Feb 20 08:47:14 np0005625203.localdomain systemd[1]: user@1003.service: Deactivated successfully.
Feb 20 08:47:14 np0005625203.localdomain systemd[1]: Stopped User Manager for UID 1003.
Feb 20 08:47:14 np0005625203.localdomain systemd[1]: user@1003.service: Consumed 4.503s CPU time, read 0B from disk, written 7.0K to disk.
Feb 20 08:47:14 np0005625203.localdomain systemd[1]: Stopping User Runtime Directory /run/user/1003...
Feb 20 08:47:14 np0005625203.localdomain systemd[1]: run-user-1003.mount: Deactivated successfully.
Feb 20 08:47:14 np0005625203.localdomain systemd[1]: user-runtime-dir@1003.service: Deactivated successfully.
Feb 20 08:47:14 np0005625203.localdomain systemd[1]: Stopped User Runtime Directory /run/user/1003.
Feb 20 08:47:14 np0005625203.localdomain systemd[1]: Removed slice User Slice of UID 1003.
Feb 20 08:47:14 np0005625203.localdomain systemd[1]: user-1003.slice: Consumed 7min 19.796s CPU time.
Feb 20 08:47:19 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:47:19 np0005625203.localdomain podman[100511]: 2026-02-20 08:47:19.781470047 +0000 UTC m=+0.089636838 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, distribution-scope=public, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp-rhel9/openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, build-date=2026-01-12T23:32:04Z, url=https://www.redhat.com, io.openshift.expose-services=, vcs-type=git, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., version=17.1.13, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container)
Feb 20 08:47:20 np0005625203.localdomain podman[100511]: 2026-02-20 08:47:20.152363476 +0000 UTC m=+0.460530217 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, batch=17.1_20260112.1, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, container_name=nova_migration_target, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1766032510, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step4, vcs-type=git)
Feb 20 08:47:20 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:47:24 np0005625203.localdomain sudo[100534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:47:24 np0005625203.localdomain sudo[100534]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:47:24 np0005625203.localdomain sudo[100534]: pam_unix(sudo:session): session closed for user root
Feb 20 08:47:24 np0005625203.localdomain sudo[100549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 08:47:24 np0005625203.localdomain sudo[100549]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:47:25 np0005625203.localdomain sudo[100549]: pam_unix(sudo:session): session closed for user root
Feb 20 08:47:26 np0005625203.localdomain sudo[100595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 08:47:26 np0005625203.localdomain sudo[100595]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:47:26 np0005625203.localdomain sudo[100595]: pam_unix(sudo:session): session closed for user root
Feb 20 08:47:34 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:47:34 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:47:34 np0005625203.localdomain podman[100611]: 2026-02-20 08:47:34.770244271 +0000 UTC m=+0.086733318 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, architecture=x86_64, io.openshift.expose-services=, build-date=2026-01-12T22:34:43Z, vcs-type=git, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, config_id=tripleo_step3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.buildah.version=1.41.5, url=https://www.redhat.com, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, maintainer=OpenStack TripleO Team, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:47:34 np0005625203.localdomain podman[100611]: 2026-02-20 08:47:34.80832999 +0000 UTC m=+0.124819077 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, container_name=iscsid, config_id=tripleo_step3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, release=1766032510, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:34:43Z, batch=17.1_20260112.1, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, name=rhosp-rhel9/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, vcs-type=git)
Feb 20 08:47:34 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:47:34 np0005625203.localdomain podman[100610]: 2026-02-20 08:47:34.815904255 +0000 UTC m=+0.132597568 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.buildah.version=1.41.5, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container)
Feb 20 08:47:34 np0005625203.localdomain podman[100610]: 2026-02-20 08:47:34.89934594 +0000 UTC m=+0.216039243 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1766032510, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., url=https://www.redhat.com, tcib_managed=true, architecture=x86_64, io.openshift.expose-services=)
Feb 20 08:47:34 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:47:41 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:47:41 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:47:41 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:47:41 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:47:41 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:47:41 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:47:41 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:47:41 np0005625203.localdomain podman[100650]: 2026-02-20 08:47:41.789998855 +0000 UTC m=+0.093044283 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:07:47Z, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_id=tripleo_step4, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_compute, release=1766032510, batch=17.1_20260112.1, tcib_managed=true, build-date=2026-01-12T23:07:47Z)
Feb 20 08:47:41 np0005625203.localdomain podman[100649]: 2026-02-20 08:47:41.848627061 +0000 UTC m=+0.151087411 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:32:04Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, release=1766032510, version=17.1.13, com.redhat.component=openstack-nova-compute-container, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, io.openshift.expose-services=, batch=17.1_20260112.1, url=https://www.redhat.com, architecture=x86_64, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true)
Feb 20 08:47:41 np0005625203.localdomain podman[100649]: 2026-02-20 08:47:41.881290283 +0000 UTC m=+0.183750583 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_id=tripleo_step5, distribution-scope=public, name=rhosp-rhel9/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, version=17.1.13, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, tcib_managed=true, release=1766032510, build-date=2026-01-12T23:32:04Z, io.openshift.expose-services=, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:32:04Z)
Feb 20 08:47:41 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:47:41 np0005625203.localdomain podman[100653]: 2026-02-20 08:47:41.895464152 +0000 UTC m=+0.194105013 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, org.opencontainers.image.created=2026-01-12T23:07:30Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., batch=17.1_20260112.1, distribution-scope=public, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, config_id=tripleo_step4, build-date=2026-01-12T23:07:30Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5, release=1766032510, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, tcib_managed=true)
Feb 20 08:47:41 np0005625203.localdomain podman[100657]: 2026-02-20 08:47:41.957769202 +0000 UTC m=+0.251549142 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=metrics_qdr, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, batch=17.1_20260112.1, io.buildah.version=1.41.5, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, distribution-scope=public, build-date=2026-01-12T22:10:14Z, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:14Z)
Feb 20 08:47:41 np0005625203.localdomain podman[100650]: 2026-02-20 08:47:41.969536597 +0000 UTC m=+0.272582025 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, architecture=x86_64, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, release=1766032510, url=https://www.redhat.com, distribution-scope=public, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, io.openshift.expose-services=, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:47Z, name=rhosp-rhel9/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Feb 20 08:47:42 np0005625203.localdomain podman[100648]: 2026-02-20 08:47:42.01160332 +0000 UTC m=+0.319425765 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vendor=Red Hat, Inc., tcib_managed=true, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:15Z, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1766032510, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, distribution-scope=public, 
config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, name=rhosp-rhel9/openstack-cron, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, architecture=x86_64, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=)
Feb 20 08:47:42 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:47:42 np0005625203.localdomain podman[100648]: 2026-02-20 08:47:42.045942524 +0000 UTC m=+0.353764949 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=logrotate_crond, build-date=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, distribution-scope=public, 
io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20260112.1)
Feb 20 08:47:42 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:47:42 np0005625203.localdomain podman[100669]: 2026-02-20 08:47:42.066980415 +0000 UTC m=+0.354446710 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, distribution-scope=public, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, tcib_managed=true, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1766032510, architecture=x86_64, io.openshift.expose-services=, url=https://www.redhat.com, vcs-type=git)
Feb 20 08:47:42 np0005625203.localdomain podman[100653]: 2026-02-20 08:47:42.076300624 +0000 UTC m=+0.374941465 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:07:30Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5, url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, batch=17.1_20260112.1, release=1766032510, architecture=x86_64, distribution-scope=public, 
io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:47:42 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:47:42 np0005625203.localdomain podman[100669]: 2026-02-20 08:47:42.110276557 +0000 UTC m=+0.397742812 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, vendor=Red Hat, Inc., version=17.1.13, io.buildah.version=1.41.5, config_id=tripleo_step4, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, managed_by=tripleo_ansible, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Feb 20 08:47:42 np0005625203.localdomain podman[100669]: unhealthy
Feb 20 08:47:42 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:47:42 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Failed with result 'exit-code'.
Feb 20 08:47:42 np0005625203.localdomain podman[100657]: 2026-02-20 08:47:42.162415471 +0000 UTC m=+0.456195431 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:14Z, tcib_managed=true, container_name=metrics_qdr, distribution-scope=public, name=rhosp-rhel9/openstack-qdrouterd)
Feb 20 08:47:42 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:47:42 np0005625203.localdomain podman[100674]: 2026-02-20 08:47:42.164356022 +0000 UTC m=+0.447676219 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, build-date=2026-01-12T22:36:40Z, release=1766032510, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., container_name=ovn_controller, name=rhosp-rhel9/openstack-ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, architecture=x86_64, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:36:40Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, vcs-type=git, version=17.1.13)
Feb 20 08:47:42 np0005625203.localdomain podman[100674]: 2026-02-20 08:47:42.244853835 +0000 UTC m=+0.528174022 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1766032510, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, url=https://www.redhat.com, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, tcib_managed=true, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ovn-controller, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2026-01-12T22:36:40Z)
Feb 20 08:47:42 np0005625203.localdomain podman[100674]: unhealthy
Feb 20 08:47:42 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:47:42 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Failed with result 'exit-code'.
Feb 20 08:47:50 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:47:50 np0005625203.localdomain podman[100813]: 2026-02-20 08:47:50.769204977 +0000 UTC m=+0.087158752 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, vcs-type=git, name=rhosp-rhel9/openstack-nova-compute, tcib_managed=true, container_name=nova_migration_target)
Feb 20 08:47:51 np0005625203.localdomain podman[100813]: 2026-02-20 08:47:51.194373358 +0000 UTC m=+0.512327143 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, version=17.1.13, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, tcib_managed=true, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, io.buildah.version=1.41.5)
Feb 20 08:47:51 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:47:58 np0005625203.localdomain sshd[100837]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:47:58 np0005625203.localdomain sshd[100837]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:48:05 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:48:05 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:48:05 np0005625203.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Feb 20 08:48:05 np0005625203.localdomain recover_tripleo_nova_virtqemud[100852]: 62505
Feb 20 08:48:05 np0005625203.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Feb 20 08:48:05 np0005625203.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Feb 20 08:48:05 np0005625203.localdomain systemd[1]: tmp-crun.tmbxBk.mount: Deactivated successfully.
Feb 20 08:48:05 np0005625203.localdomain podman[100839]: 2026-02-20 08:48:05.781647794 +0000 UTC m=+0.095710366 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', 
'/sys/fs/cgroup:/sys/fs/cgroup:ro']}, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, vcs-type=git, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, config_id=tripleo_step3, managed_by=tripleo_ansible, release=1766032510, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com)
Feb 20 08:48:05 np0005625203.localdomain podman[100839]: 2026-02-20 08:48:05.821661514 +0000 UTC m=+0.135724096 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, config_id=tripleo_step3, name=rhosp-rhel9/openstack-collectd, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, release=1766032510, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, io.openshift.expose-services=, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2026-01-12T22:10:15Z)
Feb 20 08:48:05 np0005625203.localdomain podman[100840]: 2026-02-20 08:48:05.820006132 +0000 UTC m=+0.134464837 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, release=1766032510, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., batch=17.1_20260112.1, build-date=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:34:43Z, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.buildah.version=1.41.5, version=17.1.13, 
summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step3, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, maintainer=OpenStack TripleO Team)
Feb 20 08:48:05 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:48:05 np0005625203.localdomain podman[100840]: 2026-02-20 08:48:05.908278027 +0000 UTC m=+0.222736692 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, org.opencontainers.image.created=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, 
description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, build-date=2026-01-12T22:34:43Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-iscsid, tcib_managed=true, batch=17.1_20260112.1, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, managed_by=tripleo_ansible, io.buildah.version=1.41.5, container_name=iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, version=17.1.13, architecture=x86_64)
Feb 20 08:48:05 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:48:12 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:48:12 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:48:12 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:48:12 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:48:12 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:48:12 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:48:12 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:48:12 np0005625203.localdomain podman[100884]: 2026-02-20 08:48:12.789465358 +0000 UTC m=+0.097329466 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, architecture=x86_64, release=1766032510, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, batch=17.1_20260112.1, com.redhat.component=openstack-ceilometer-ipmi-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2026-01-12T23:07:30Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vcs-type=git, vendor=Red Hat, Inc., version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:30Z, distribution-scope=public)
Feb 20 08:48:12 np0005625203.localdomain podman[100883]: 2026-02-20 08:48:12.828176897 +0000 UTC m=+0.141067070 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, name=rhosp-rhel9/openstack-ceilometer-compute, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.openshift.expose-services=, url=https://www.redhat.com, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.buildah.version=1.41.5, release=1766032510, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.13, build-date=2026-01-12T23:07:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, vcs-type=git, config_id=tripleo_step4)
Feb 20 08:48:12 np0005625203.localdomain podman[100881]: 2026-02-20 08:48:12.846163814 +0000 UTC m=+0.160980157 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, build-date=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, architecture=x86_64, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, release=1766032510, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, tcib_managed=true, name=rhosp-rhel9/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:48:12 np0005625203.localdomain podman[100881]: 2026-02-20 08:48:12.881342864 +0000 UTC m=+0.196159217 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20260112.1, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, distribution-scope=public, name=rhosp-rhel9/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, config_id=tripleo_step4, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, 
org.opencontainers.image.created=2026-01-12T22:10:15Z, build-date=2026-01-12T22:10:15Z, vcs-type=git, url=https://www.redhat.com, com.redhat.component=openstack-cron-container, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=)
Feb 20 08:48:12 np0005625203.localdomain podman[100883]: 2026-02-20 08:48:12.89089113 +0000 UTC m=+0.203781243 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., version=17.1.13, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_compute, distribution-scope=public, vcs-type=git, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, build-date=2026-01-12T23:07:47Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, release=1766032510, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20260112.1, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true)
Feb 20 08:48:12 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:48:12 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:48:12 np0005625203.localdomain podman[100889]: 2026-02-20 08:48:12.893040507 +0000 UTC m=+0.197437187 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, container_name=metrics_qdr, url=https://www.redhat.com, release=1766032510, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, config_id=tripleo_step1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, 
vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-qdrouterd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:14Z)
Feb 20 08:48:12 np0005625203.localdomain podman[100907]: 2026-02-20 08:48:12.957151393 +0000 UTC m=+0.248584282 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vcs-type=git, maintainer=OpenStack TripleO Team, version=17.1.13, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.buildah.version=1.41.5, batch=17.1_20260112.1, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, com.redhat.component=openstack-ovn-controller-container, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ovn_controller, managed_by=tripleo_ansible, build-date=2026-01-12T22:36:40Z, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, 
tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc.)
Feb 20 08:48:12 np0005625203.localdomain podman[100907]: 2026-02-20 08:48:12.975217073 +0000 UTC m=+0.266649962 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, architecture=x86_64, version=17.1.13, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.created=2026-01-12T22:36:40Z, build-date=2026-01-12T22:36:40Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, distribution-scope=public, io.buildah.version=1.41.5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, release=1766032510, name=rhosp-rhel9/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.)
Feb 20 08:48:12 np0005625203.localdomain podman[100907]: unhealthy
Feb 20 08:48:12 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:48:12 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Failed with result 'exit-code'.
Feb 20 08:48:12 np0005625203.localdomain podman[100900]: 2026-02-20 08:48:12.990849036 +0000 UTC m=+0.287664032 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1766032510, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, distribution-scope=public, architecture=x86_64, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ovn_metadata_agent, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, version=17.1.13, config_id=tripleo_step4, tcib_managed=true)
Feb 20 08:48:13 np0005625203.localdomain podman[100884]: 2026-02-20 08:48:13.020860946 +0000 UTC m=+0.328725054 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, io.openshift.expose-services=, distribution-scope=public, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, org.opencontainers.image.created=2026-01-12T23:07:30Z, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20260112.1, maintainer=OpenStack 
TripleO Team, version=17.1.13, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, build-date=2026-01-12T23:07:30Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc.)
Feb 20 08:48:13 np0005625203.localdomain podman[100900]: 2026-02-20 08:48:13.032231879 +0000 UTC m=+0.329046815 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_id=tripleo_step4, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, release=1766032510, architecture=x86_64, io.buildah.version=1.41.5, version=17.1.13, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:56:19Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public)
Feb 20 08:48:13 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:48:13 np0005625203.localdomain podman[100900]: unhealthy
Feb 20 08:48:13 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:48:13 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Failed with result 'exit-code'.
Feb 20 08:48:13 np0005625203.localdomain podman[100882]: 2026-02-20 08:48:13.030653289 +0000 UTC m=+0.344734580 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, distribution-scope=public, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=nova_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, name=rhosp-rhel9/openstack-nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, tcib_managed=true, vcs-type=git, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, config_id=tripleo_step5, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Feb 20 08:48:13 np0005625203.localdomain podman[100889]: 2026-02-20 08:48:13.090213945 +0000 UTC m=+0.394610565 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, container_name=metrics_qdr, vendor=Red Hat, Inc., release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, tcib_managed=true, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, distribution-scope=public, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, url=https://www.redhat.com, name=rhosp-rhel9/openstack-qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, build-date=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd)
Feb 20 08:48:13 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:48:13 np0005625203.localdomain podman[100882]: 2026-02-20 08:48:13.112194615 +0000 UTC m=+0.426275876 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.openshift.expose-services=, managed_by=tripleo_ansible, architecture=x86_64, release=1766032510, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=nova_compute, vcs-type=git, io.buildah.version=1.41.5, tcib_managed=true, config_id=tripleo_step5, batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z)
Feb 20 08:48:13 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:48:17 np0005625203.localdomain sshd[101050]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:48:17 np0005625203.localdomain sshd[101050]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:48:18 np0005625203.localdomain sshd[101052]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:48:20 np0005625203.localdomain sshd[101052]: Invalid user user from 102.210.148.92 port 40196
Feb 20 08:48:20 np0005625203.localdomain sshd[101052]: Received disconnect from 102.210.148.92 port 40196:11: Bye Bye [preauth]
Feb 20 08:48:20 np0005625203.localdomain sshd[101052]: Disconnected from invalid user user 102.210.148.92 port 40196 [preauth]
Feb 20 08:48:21 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:48:21 np0005625203.localdomain podman[101054]: 2026-02-20 08:48:21.762414276 +0000 UTC m=+0.077625175 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-type=git, build-date=2026-01-12T23:32:04Z, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, io.buildah.version=1.41.5, release=1766032510, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, url=https://www.redhat.com, container_name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute)
Feb 20 08:48:22 np0005625203.localdomain podman[101054]: 2026-02-20 08:48:22.113202732 +0000 UTC m=+0.428413631 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, vcs-type=git, architecture=x86_64, distribution-scope=public, name=rhosp-rhel9/openstack-nova-compute, managed_by=tripleo_ansible, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., version=17.1.13)
Feb 20 08:48:22 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:48:26 np0005625203.localdomain sudo[101077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:48:26 np0005625203.localdomain sudo[101077]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:48:26 np0005625203.localdomain sudo[101077]: pam_unix(sudo:session): session closed for user root
Feb 20 08:48:27 np0005625203.localdomain sudo[101092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 08:48:27 np0005625203.localdomain sudo[101092]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:48:27 np0005625203.localdomain sudo[101092]: pam_unix(sudo:session): session closed for user root
Feb 20 08:48:28 np0005625203.localdomain sudo[101140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 08:48:28 np0005625203.localdomain sudo[101140]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:48:28 np0005625203.localdomain sudo[101140]: pam_unix(sudo:session): session closed for user root
Feb 20 08:48:36 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:48:36 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:48:36 np0005625203.localdomain podman[101155]: 2026-02-20 08:48:36.765536586 +0000 UTC m=+0.080361011 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, build-date=2026-01-12T22:10:15Z, release=1766032510, description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-collectd-container, tcib_managed=true, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, name=rhosp-rhel9/openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, container_name=collectd)
Feb 20 08:48:36 np0005625203.localdomain podman[101155]: 2026-02-20 08:48:36.802532922 +0000 UTC m=+0.117357367 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, build-date=2026-01-12T22:10:15Z, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, version=17.1.13, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step3, managed_by=tripleo_ansible, container_name=collectd, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, url=https://www.redhat.com, io.buildah.version=1.41.5)
Feb 20 08:48:36 np0005625203.localdomain systemd[1]: tmp-crun.OdyYVJ.mount: Deactivated successfully.
Feb 20 08:48:36 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:48:36 np0005625203.localdomain podman[101156]: 2026-02-20 08:48:36.828739973 +0000 UTC m=+0.143099963 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, distribution-scope=public, com.redhat.component=openstack-iscsid-container, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., batch=17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:34:43Z, build-date=2026-01-12T22:34:43Z, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, url=https://www.redhat.com, version=17.1.13, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid)
Feb 20 08:48:36 np0005625203.localdomain podman[101156]: 2026-02-20 08:48:36.839161046 +0000 UTC m=+0.153521016 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20260112.1, build-date=2026-01-12T22:34:43Z, vendor=Red Hat, Inc., org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=705339545363fec600102567c4e923938e0f43b3, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, name=rhosp-rhel9/openstack-iscsid, container_name=iscsid, vcs-type=git, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, version=17.1.13, tcib_managed=true, managed_by=tripleo_ansible, io.buildah.version=1.41.5, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid)
Feb 20 08:48:36 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:48:42 np0005625203.localdomain sshd[101195]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:48:42 np0005625203.localdomain sshd[101195]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:48:43 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:48:43 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:48:43 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:48:43 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:48:43 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:48:43 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:48:43 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:48:43 np0005625203.localdomain podman[101198]: 2026-02-20 08:48:43.814269128 +0000 UTC m=+0.098077489 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, build-date=2026-01-12T23:32:04Z, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, tcib_managed=true, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.5, com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1)
Feb 20 08:48:43 np0005625203.localdomain podman[101211]: 2026-02-20 08:48:43.825239848 +0000 UTC m=+0.097862382 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, build-date=2026-01-12T22:56:19Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, url=https://www.redhat.com, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:56:19Z, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, container_name=ovn_metadata_agent, io.buildah.version=1.41.5, version=17.1.13, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Feb 20 08:48:43 np0005625203.localdomain podman[101198]: 2026-02-20 08:48:43.867396154 +0000 UTC m=+0.151204565 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, url=https://www.redhat.com, vcs-type=git, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', 
'/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, container_name=nova_compute, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:32:04Z, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, io.openshift.expose-services=, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Feb 20 08:48:43 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:48:43 np0005625203.localdomain podman[101199]: 2026-02-20 08:48:43.910287703 +0000 UTC m=+0.192911337 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vendor=Red Hat, Inc., build-date=2026-01-12T23:07:47Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:07:47Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, container_name=ceilometer_agent_compute, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-compute, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, release=1766032510, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team)
Feb 20 08:48:43 np0005625203.localdomain podman[101200]: 2026-02-20 08:48:43.870700786 +0000 UTC m=+0.147528751 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, build-date=2026-01-12T23:07:30Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ceilometer-ipmi, config_id=tripleo_step4, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, io.openshift.expose-services=, managed_by=tripleo_ansible, vendor=Red Hat, Inc., summary=Red 
Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, release=1766032510, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:07:30Z, version=17.1.13)
Feb 20 08:48:43 np0005625203.localdomain podman[101219]: 2026-02-20 08:48:43.93311641 +0000 UTC m=+0.200822781 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, org.opencontainers.image.created=2026-01-12T22:36:40Z, build-date=2026-01-12T22:36:40Z, io.buildah.version=1.41.5, version=17.1.13, config_id=tripleo_step4, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, url=https://www.redhat.com, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ovn-controller)
Feb 20 08:48:43 np0005625203.localdomain podman[101211]: 2026-02-20 08:48:43.941694095 +0000 UTC m=+0.214316689 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2026-01-12T22:56:19Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ovn_metadata_agent, config_id=tripleo_step4, io.buildah.version=1.41.5, managed_by=tripleo_ansible, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, release=1766032510, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true, vcs-type=git, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:56:19Z, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:48:43 np0005625203.localdomain podman[101211]: unhealthy
Feb 20 08:48:43 np0005625203.localdomain podman[101219]: 2026-02-20 08:48:43.95085966 +0000 UTC m=+0.218566031 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, managed_by=tripleo_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, architecture=x86_64, container_name=ovn_controller, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:36:40Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, config_id=tripleo_step4, version=17.1.13, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat 
OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:36:40Z, vendor=Red Hat, Inc., distribution-scope=public)
Feb 20 08:48:43 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:48:43 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Failed with result 'exit-code'.
Feb 20 08:48:43 np0005625203.localdomain podman[101219]: unhealthy
Feb 20 08:48:43 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:48:43 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Failed with result 'exit-code'.
Feb 20 08:48:43 np0005625203.localdomain podman[101199]: 2026-02-20 08:48:43.995376359 +0000 UTC m=+0.277999993 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ceilometer-compute, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:07:47Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, build-date=2026-01-12T23:07:47Z, io.openshift.expose-services=, io.buildah.version=1.41.5, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, release=1766032510, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, version=17.1.13, com.redhat.component=openstack-ceilometer-compute-container)
Feb 20 08:48:44 np0005625203.localdomain podman[101200]: 2026-02-20 08:48:44.005315906 +0000 UTC m=+0.282143861 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:07:30Z, release=1766032510, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, managed_by=tripleo_ansible, vcs-type=git, tcib_managed=true, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, distribution-scope=public, container_name=ceilometer_agent_ipmi, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, vendor=Red Hat, Inc., vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, architecture=x86_64, build-date=2026-01-12T23:07:30Z)
Feb 20 08:48:44 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:48:44 np0005625203.localdomain podman[101201]: 2026-02-20 08:48:44.023447348 +0000 UTC m=+0.298017343 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., container_name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:14Z, batch=17.1_20260112.1, managed_by=tripleo_ansible, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, version=17.1.13, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, release=1766032510, name=rhosp-rhel9/openstack-qdrouterd, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd)
Feb 20 08:48:44 np0005625203.localdomain podman[101197]: 2026-02-20 08:48:43.854435642 +0000 UTC m=+0.137858481 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., tcib_managed=true, description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, name=rhosp-rhel9/openstack-cron, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat OpenStack 
Platform 17.1 cron, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, url=https://www.redhat.com, version=17.1.13, release=1766032510, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:48:44 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:48:44 np0005625203.localdomain podman[101197]: 2026-02-20 08:48:44.089320748 +0000 UTC m=+0.372743587 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, url=https://www.redhat.com, name=rhosp-rhel9/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, distribution-scope=public, io.openshift.expose-services=, container_name=logrotate_crond, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-cron-container, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:15Z, release=1766032510, vcs-type=git)
Feb 20 08:48:44 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:48:44 np0005625203.localdomain podman[101201]: 2026-02-20 08:48:44.252237365 +0000 UTC m=+0.526807390 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, maintainer=OpenStack TripleO Team, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, url=https://www.redhat.com, 
build-date=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, container_name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, name=rhosp-rhel9/openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:14Z, distribution-scope=public, release=1766032510)
Feb 20 08:48:44 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:48:52 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:48:52 np0005625203.localdomain podman[101358]: 2026-02-20 08:48:52.764067811 +0000 UTC m=+0.085827990 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-nova-compute, batch=17.1_20260112.1, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, build-date=2026-01-12T23:32:04Z)
Feb 20 08:48:53 np0005625203.localdomain podman[101358]: 2026-02-20 08:48:53.156392884 +0000 UTC m=+0.478153083 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, build-date=2026-01-12T23:32:04Z, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20260112.1, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-compute, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, url=https://www.redhat.com, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:48:53 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:49:07 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:49:07 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:49:07 np0005625203.localdomain podman[101381]: 2026-02-20 08:49:07.763785815 +0000 UTC m=+0.084343264 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, vcs-type=git, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-collectd, tcib_managed=true, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, config_id=tripleo_step3, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, container_name=collectd, com.redhat.component=openstack-collectd-container, architecture=x86_64)
Feb 20 08:49:07 np0005625203.localdomain podman[101381]: 2026-02-20 08:49:07.798410937 +0000 UTC m=+0.118968376 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', 
'/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp-rhel9/openstack-collectd, tcib_managed=true, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:15Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, url=https://www.redhat.com, vcs-type=git, container_name=collectd, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.13, release=1766032510, summary=Red Hat OpenStack Platform 17.1 collectd)
Feb 20 08:49:07 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:49:07 np0005625203.localdomain podman[101382]: 2026-02-20 08:49:07.819593974 +0000 UTC m=+0.135038294 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, architecture=x86_64, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container, container_name=iscsid, version=17.1.13, name=rhosp-rhel9/openstack-iscsid, io.buildah.version=1.41.5, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:34:43Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, url=https://www.redhat.com)
Feb 20 08:49:07 np0005625203.localdomain podman[101382]: 2026-02-20 08:49:07.860397968 +0000 UTC m=+0.175842258 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, name=rhosp-rhel9/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, container_name=iscsid, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, io.buildah.version=1.41.5, batch=17.1_20260112.1, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_id=tripleo_step3, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid)
Feb 20 08:49:07 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:49:14 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:49:14 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:49:14 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:49:14 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:49:14 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:49:14 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:49:14 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:49:14 np0005625203.localdomain systemd[1]: tmp-crun.32NTUk.mount: Deactivated successfully.
Feb 20 08:49:14 np0005625203.localdomain podman[101420]: 2026-02-20 08:49:14.776087249 +0000 UTC m=+0.094234469 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, config_id=tripleo_step4, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-type=git, container_name=logrotate_crond, version=17.1.13, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers:/var/log/containers:z']}, distribution-scope=public, release=1766032510, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.expose-services=, com.redhat.component=openstack-cron-container, build-date=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64)
Feb 20 08:49:14 np0005625203.localdomain podman[101421]: 2026-02-20 08:49:14.835869372 +0000 UTC m=+0.149491112 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, release=1766032510, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., version=17.1.13, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.41.5, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Feb 20 08:49:14 np0005625203.localdomain podman[101440]: 2026-02-20 08:49:14.795847261 +0000 UTC m=+0.093024172 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, org.opencontainers.image.created=2026-01-12T22:56:19Z, tcib_managed=true, release=1766032510, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., 
io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, version=17.1.13, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, distribution-scope=public, architecture=x86_64, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=)
Feb 20 08:49:14 np0005625203.localdomain podman[101429]: 2026-02-20 08:49:14.86778381 +0000 UTC m=+0.174664961 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, build-date=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, architecture=x86_64, io.buildah.version=1.41.5, vendor=Red Hat, Inc., release=1766032510, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, name=rhosp-rhel9/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, config_id=tripleo_step1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13)
Feb 20 08:49:14 np0005625203.localdomain podman[101440]: 2026-02-20 08:49:14.8797193 +0000 UTC m=+0.176896181 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, tcib_managed=true, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.5, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, batch=17.1_20260112.1, maintainer=OpenStack TripleO 
Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, version=17.1.13, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, release=1766032510, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn)
Feb 20 08:49:14 np0005625203.localdomain podman[101440]: unhealthy
Feb 20 08:49:14 np0005625203.localdomain podman[101446]: 2026-02-20 08:49:14.88422686 +0000 UTC m=+0.179692638 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, release=1766032510, vendor=Red Hat, Inc., version=17.1.13, architecture=x86_64, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, url=https://www.redhat.com, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, 
build-date=2026-01-12T22:36:40Z, org.opencontainers.image.created=2026-01-12T22:36:40Z, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller)
Feb 20 08:49:14 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:49:14 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Failed with result 'exit-code'.
Feb 20 08:49:14 np0005625203.localdomain podman[101420]: 2026-02-20 08:49:14.910419971 +0000 UTC m=+0.228567271 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, name=rhosp-rhel9/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, build-date=2026-01-12T22:10:15Z, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=logrotate_crond, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, batch=17.1_20260112.1, release=1766032510, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee)
Feb 20 08:49:14 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:49:14 np0005625203.localdomain podman[101422]: 2026-02-20 08:49:14.923248988 +0000 UTC m=+0.236079463 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, build-date=2026-01-12T23:07:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, release=1766032510, 
cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, container_name=ceilometer_agent_compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible)
Feb 20 08:49:14 np0005625203.localdomain podman[101421]: 2026-02-20 08:49:14.934125605 +0000 UTC m=+0.247747265 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, url=https://www.redhat.com, distribution-scope=public, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-nova-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, build-date=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, version=17.1.13)
Feb 20 08:49:14 np0005625203.localdomain podman[101446]: 2026-02-20 08:49:14.949680357 +0000 UTC m=+0.245146155 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, version=17.1.13, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, config_id=tripleo_step4, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc., distribution-scope=public, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, build-date=2026-01-12T22:36:40Z, summary=Red Hat OpenStack Platform 
17.1 ovn-controller, io.buildah.version=1.41.5, release=1766032510, managed_by=tripleo_ansible, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git)
Feb 20 08:49:14 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:49:14 np0005625203.localdomain podman[101446]: unhealthy
Feb 20 08:49:14 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:49:14 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Failed with result 'exit-code'.
Feb 20 08:49:14 np0005625203.localdomain podman[101423]: 2026-02-20 08:49:14.997971072 +0000 UTC m=+0.303457930 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, distribution-scope=public, managed_by=tripleo_ansible, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, url=https://www.redhat.com, build-date=2026-01-12T23:07:30Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, vcs-type=git, config_id=tripleo_step4, io.openshift.expose-services=)
Feb 20 08:49:15 np0005625203.localdomain podman[101422]: 2026-02-20 08:49:15.002626557 +0000 UTC m=+0.315457102 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, tcib_managed=true, managed_by=tripleo_ansible, release=1766032510, maintainer=OpenStack TripleO Team, vcs-type=git, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, vendor=Red Hat, Inc., build-date=2026-01-12T23:07:47Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-compute-container, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, container_name=ceilometer_agent_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:47Z)
Feb 20 08:49:15 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:49:15 np0005625203.localdomain podman[101423]: 2026-02-20 08:49:15.020277123 +0000 UTC m=+0.325764021 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, batch=17.1_20260112.1, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, url=https://www.redhat.com, config_id=tripleo_step4, io.buildah.version=1.41.5, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:07:30Z, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, io.openshift.expose-services=, 
cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, build-date=2026-01-12T23:07:30Z, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Feb 20 08:49:15 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:49:15 np0005625203.localdomain podman[101429]: 2026-02-20 08:49:15.078205888 +0000 UTC m=+0.385087039 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, container_name=metrics_qdr, architecture=x86_64, build-date=2026-01-12T22:10:14Z, name=rhosp-rhel9/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, batch=17.1_20260112.1, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:14Z, distribution-scope=public, release=1766032510, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, tcib_managed=true)
Feb 20 08:49:15 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:49:15 np0005625203.localdomain systemd[1]: tmp-crun.IoOEwT.mount: Deactivated successfully.
Feb 20 08:49:23 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:49:23 np0005625203.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Feb 20 08:49:23 np0005625203.localdomain recover_tripleo_nova_virtqemud[101592]: 62505
Feb 20 08:49:23 np0005625203.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Feb 20 08:49:23 np0005625203.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Feb 20 08:49:23 np0005625203.localdomain podman[101586]: 2026-02-20 08:49:23.771779392 +0000 UTC m=+0.086576162 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.41.5, com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, vcs-type=git, release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., tcib_managed=true, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Feb 20 08:49:24 np0005625203.localdomain podman[101586]: 2026-02-20 08:49:24.163394653 +0000 UTC m=+0.478191433 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.buildah.version=1.41.5, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_migration_target, build-date=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Feb 20 08:49:24 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:49:25 np0005625203.localdomain sshd[101613]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:49:25 np0005625203.localdomain sshd[101613]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:49:26 np0005625203.localdomain sshd[101615]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:49:26 np0005625203.localdomain sshd[101615]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:49:28 np0005625203.localdomain sudo[101617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:49:28 np0005625203.localdomain sudo[101617]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:49:28 np0005625203.localdomain sudo[101617]: pam_unix(sudo:session): session closed for user root
Feb 20 08:49:28 np0005625203.localdomain sudo[101632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 08:49:28 np0005625203.localdomain sudo[101632]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:49:29 np0005625203.localdomain sudo[101632]: pam_unix(sudo:session): session closed for user root
Feb 20 08:49:30 np0005625203.localdomain sudo[101678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 08:49:30 np0005625203.localdomain sudo[101678]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:49:30 np0005625203.localdomain sudo[101678]: pam_unix(sudo:session): session closed for user root
Feb 20 08:49:38 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:49:38 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:49:38 np0005625203.localdomain systemd[1]: tmp-crun.oxlH1R.mount: Deactivated successfully.
Feb 20 08:49:38 np0005625203.localdomain podman[101694]: 2026-02-20 08:49:38.789422753 +0000 UTC m=+0.098710018 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, com.redhat.component=openstack-iscsid-container, distribution-scope=public, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, url=https://www.redhat.com, name=rhosp-rhel9/openstack-iscsid, container_name=iscsid, tcib_managed=true, architecture=x86_64, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, managed_by=tripleo_ansible, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, build-date=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc.)
Feb 20 08:49:38 np0005625203.localdomain podman[101694]: 2026-02-20 08:49:38.831110915 +0000 UTC m=+0.140398220 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, config_id=tripleo_step3, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, io.openshift.expose-services=, url=https://www.redhat.com, managed_by=tripleo_ansible, vcs-ref=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-iscsid, tcib_managed=true, com.redhat.component=openstack-iscsid-container, container_name=iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.buildah.version=1.41.5, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:34:43Z, build-date=2026-01-12T22:34:43Z, vcs-type=git, vendor=Red Hat, Inc., release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:49:38 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:49:38 np0005625203.localdomain podman[101693]: 2026-02-20 08:49:38.877626726 +0000 UTC m=+0.187474338 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, container_name=collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:15Z, version=17.1.13, tcib_managed=true, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, io.buildah.version=1.41.5, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, url=https://www.redhat.com)
Feb 20 08:49:38 np0005625203.localdomain podman[101693]: 2026-02-20 08:49:38.918483821 +0000 UTC m=+0.228331423 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, konflux.additional-tags=17.1.13 
17.1_20260112.1, batch=17.1_20260112.1, version=17.1.13, managed_by=tripleo_ansible, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=)
Feb 20 08:49:38 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:49:44 np0005625203.localdomain sshd[101734]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:49:45 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:49:45 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:49:45 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:49:45 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:49:45 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:49:45 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:49:45 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:49:45 np0005625203.localdomain systemd[1]: tmp-crun.a14uqQ.mount: Deactivated successfully.
Feb 20 08:49:45 np0005625203.localdomain systemd[1]: tmp-crun.2wDDtC.mount: Deactivated successfully.
Feb 20 08:49:45 np0005625203.localdomain podman[101738]: 2026-02-20 08:49:45.822289035 +0000 UTC m=+0.132197137 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, vendor=Red Hat, Inc., version=17.1.13, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:07:47Z, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.expose-services=, url=https://www.redhat.com, tcib_managed=true, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-compute, build-date=2026-01-12T23:07:47Z, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:49:45 np0005625203.localdomain podman[101743]: 2026-02-20 08:49:45.834358329 +0000 UTC m=+0.137729267 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, build-date=2026-01-12T22:10:14Z, name=rhosp-rhel9/openstack-qdrouterd, config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, io.buildah.version=1.41.5, tcib_managed=true, version=17.1.13, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1766032510, io.openshift.expose-services=, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team)
Feb 20 08:49:45 np0005625203.localdomain podman[101752]: 2026-02-20 08:49:45.790343435 +0000 UTC m=+0.095354985 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.5, config_id=tripleo_step4, 
konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, release=1766032510, batch=17.1_20260112.1, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., managed_by=tripleo_ansible, tcib_managed=true, url=https://www.redhat.com, build-date=2026-01-12T22:56:19Z, org.opencontainers.image.created=2026-01-12T22:56:19Z, architecture=x86_64)
Feb 20 08:49:45 np0005625203.localdomain podman[101737]: 2026-02-20 08:49:45.761901964 +0000 UTC m=+0.076942364 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, version=17.1.13, managed_by=tripleo_ansible, url=https://www.redhat.com, io.buildah.version=1.41.5, build-date=2026-01-12T23:32:04Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.expose-services=, tcib_managed=true, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team)
Feb 20 08:49:45 np0005625203.localdomain sshd[101734]: Invalid user oracle from 103.200.25.162 port 45206
Feb 20 08:49:45 np0005625203.localdomain podman[101752]: 2026-02-20 08:49:45.872335436 +0000 UTC m=+0.177347016 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, url=https://www.redhat.com, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-type=git, build-date=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:56:19Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, vendor=Red Hat, Inc.)
Feb 20 08:49:45 np0005625203.localdomain podman[101752]: unhealthy
Feb 20 08:49:45 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:49:45 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Failed with result 'exit-code'.
Feb 20 08:49:45 np0005625203.localdomain podman[101737]: 2026-02-20 08:49:45.898235008 +0000 UTC m=+0.213275428 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_id=tripleo_step5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, version=17.1.13, container_name=nova_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:32:04Z)
Feb 20 08:49:45 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:49:45 np0005625203.localdomain podman[101738]: 2026-02-20 08:49:45.927690039 +0000 UTC m=+0.237598181 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:47Z, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, batch=17.1_20260112.1, container_name=ceilometer_agent_compute, vcs-type=git, managed_by=tripleo_ansible, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.created=2026-01-12T23:07:47Z, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, io.buildah.version=1.41.5, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, name=rhosp-rhel9/openstack-ceilometer-compute)
Feb 20 08:49:45 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:49:45 np0005625203.localdomain podman[101736]: 2026-02-20 08:49:45.980391242 +0000 UTC m=+0.295890477 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, architecture=x86_64, batch=17.1_20260112.1, distribution-scope=public, vcs-type=git, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-cron, config_id=tripleo_step4, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee)
Feb 20 08:49:46 np0005625203.localdomain podman[101736]: 2026-02-20 08:49:46.016328546 +0000 UTC m=+0.331827811 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, name=rhosp-rhel9/openstack-cron, tcib_managed=true, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, release=1766032510, container_name=logrotate_crond, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, managed_by=tripleo_ansible, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, build-date=2026-01-12T22:10:15Z)
Feb 20 08:49:46 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:49:46 np0005625203.localdomain podman[101743]: 2026-02-20 08:49:46.0284391 +0000 UTC m=+0.331810028 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, maintainer=OpenStack TripleO Team, release=1766032510, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., url=https://www.redhat.com, io.buildah.version=1.41.5, version=17.1.13, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 qdrouterd, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:49:46 np0005625203.localdomain podman[101739]: 2026-02-20 08:49:46.035275163 +0000 UTC m=+0.339838759 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, architecture=x86_64, version=17.1.13, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, container_name=ceilometer_agent_ipmi, 
batch=17.1_20260112.1, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:07:30Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, url=https://www.redhat.com, build-date=2026-01-12T23:07:30Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Feb 20 08:49:46 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:49:46 np0005625203.localdomain podman[101739]: 2026-02-20 08:49:46.06815575 +0000 UTC m=+0.372719336 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:07:30Z, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 
ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, tcib_managed=true, version=17.1.13, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, io.openshift.expose-services=, url=https://www.redhat.com, build-date=2026-01-12T23:07:30Z, container_name=ceilometer_agent_ipmi, vcs-type=git)
Feb 20 08:49:46 np0005625203.localdomain podman[101753]: 2026-02-20 08:49:46.076515059 +0000 UTC m=+0.378387762 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, url=https://www.redhat.com, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 
ovn-controller, release=1766032510, build-date=2026-01-12T22:36:40Z, batch=17.1_20260112.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:36:40Z)
Feb 20 08:49:46 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:49:46 np0005625203.localdomain podman[101753]: 2026-02-20 08:49:46.116972363 +0000 UTC m=+0.418845066 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., batch=17.1_20260112.1, io.buildah.version=1.41.5, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:36:40Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, 
konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, io.openshift.expose-services=, container_name=ovn_controller, name=rhosp-rhel9/openstack-ovn-controller, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:36:40Z, url=https://www.redhat.com)
Feb 20 08:49:46 np0005625203.localdomain podman[101753]: unhealthy
Feb 20 08:49:46 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:49:46 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Failed with result 'exit-code'.
Feb 20 08:49:46 np0005625203.localdomain sshd[101734]: Received disconnect from 103.200.25.162 port 45206:11: Bye Bye [preauth]
Feb 20 08:49:46 np0005625203.localdomain sshd[101734]: Disconnected from invalid user oracle 103.200.25.162 port 45206 [preauth]
Feb 20 08:49:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 20 08:49:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 4200.1 total, 600.0 interval
                                                          Cumulative writes: 4844 writes, 21K keys, 4844 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
                                                          Cumulative WAL: 4844 writes, 660 syncs, 7.34 writes per sync, written: 0.02 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 20 08:49:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 20 08:49:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 4200.1 total, 600.0 interval
                                                          Cumulative writes: 5843 writes, 25K keys, 5843 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
                                                          Cumulative WAL: 5843 writes, 764 syncs, 7.65 writes per sync, written: 0.02 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 20 08:49:54 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:49:54 np0005625203.localdomain podman[101901]: 2026-02-20 08:49:54.760544437 +0000 UTC m=+0.081671671 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.buildah.version=1.41.5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, batch=17.1_20260112.1, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:32:04Z, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1766032510, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, container_name=nova_migration_target, io.openshift.expose-services=, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, vendor=Red Hat, Inc.)
Feb 20 08:49:55 np0005625203.localdomain podman[101901]: 2026-02-20 08:49:55.151655973 +0000 UTC m=+0.472783177 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, architecture=x86_64, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., managed_by=tripleo_ansible, batch=17.1_20260112.1, version=17.1.13, io.buildah.version=1.41.5, build-date=2026-01-12T23:32:04Z, vcs-type=git, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, container_name=nova_migration_target)
Feb 20 08:49:55 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:50:00 np0005625203.localdomain sshd[101925]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:50:01 np0005625203.localdomain sshd[101925]: Received disconnect from 212.154.234.9 port 18027:11: Bye Bye [preauth]
Feb 20 08:50:01 np0005625203.localdomain sshd[101925]: Disconnected from authenticating user root 212.154.234.9 port 18027 [preauth]
Feb 20 08:50:09 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:50:09 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:50:09 np0005625203.localdomain systemd[1]: tmp-crun.arjWBw.mount: Deactivated successfully.
Feb 20 08:50:09 np0005625203.localdomain podman[101927]: 2026-02-20 08:50:09.79342113 +0000 UTC m=+0.101081953 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, config_id=tripleo_step3, distribution-scope=public, url=https://www.redhat.com, io.openshift.expose-services=, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, vcs-type=git, container_name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.created=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-collectd, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20260112.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vendor=Red Hat, Inc.)
Feb 20 08:50:09 np0005625203.localdomain podman[101927]: 2026-02-20 08:50:09.832249692 +0000 UTC m=+0.139910565 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, version=17.1.13, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, io.buildah.version=1.41.5, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-collectd, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd)
Feb 20 08:50:09 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:50:09 np0005625203.localdomain podman[101928]: 2026-02-20 08:50:09.88639952 +0000 UTC m=+0.193777353 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, container_name=iscsid, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-iscsid, config_id=tripleo_step3, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, architecture=x86_64, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, version=17.1.13, managed_by=tripleo_ansible, vcs-type=git, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', 
'/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, tcib_managed=true, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:34:43Z)
Feb 20 08:50:09 np0005625203.localdomain podman[101928]: 2026-02-20 08:50:09.898638259 +0000 UTC m=+0.206016072 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vendor=Red Hat, Inc., managed_by=tripleo_ansible, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:34:43Z, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, version=17.1.13, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:34:43Z, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, batch=17.1_20260112.1, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-iscsid, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, url=https://www.redhat.com, io.buildah.version=1.41.5, container_name=iscsid)
Feb 20 08:50:09 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:50:10 np0005625203.localdomain sshd[101965]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:50:10 np0005625203.localdomain sshd[101965]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:50:16 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:50:16 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:50:16 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:50:16 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:50:16 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:50:16 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:50:16 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:50:16 np0005625203.localdomain systemd[1]: tmp-crun.viGw37.mount: Deactivated successfully.
Feb 20 08:50:16 np0005625203.localdomain podman[101987]: 2026-02-20 08:50:16.824366792 +0000 UTC m=+0.108660806 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.13, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:56:19Z, release=1766032510, vcs-type=git, batch=17.1_20260112.1, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, build-date=2026-01-12T22:56:19Z, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn)
Feb 20 08:50:16 np0005625203.localdomain podman[101987]: 2026-02-20 08:50:16.863316718 +0000 UTC m=+0.147610722 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, 
version=17.1.13, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-type=git, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, distribution-scope=public, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, architecture=x86_64, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:50:16 np0005625203.localdomain podman[101987]: unhealthy
Feb 20 08:50:16 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:50:16 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Failed with result 'exit-code'.
Feb 20 08:50:16 np0005625203.localdomain podman[101969]: 2026-02-20 08:50:16.795679833 +0000 UTC m=+0.096900083 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, architecture=x86_64, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., build-date=2026-01-12T23:07:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, batch=17.1_20260112.1, vcs-type=git, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, com.redhat.component=openstack-ceilometer-compute-container, io.buildah.version=1.41.5, config_id=tripleo_step4, release=1766032510, name=rhosp-rhel9/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06)
Feb 20 08:50:16 np0005625203.localdomain podman[101968]: 2026-02-20 08:50:16.906092963 +0000 UTC m=+0.207609252 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, managed_by=tripleo_ansible, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-nova-compute, tcib_managed=true, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 
'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']})
Feb 20 08:50:16 np0005625203.localdomain podman[101991]: 2026-02-20 08:50:16.873512705 +0000 UTC m=+0.152948370 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, version=17.1.13, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:36:40Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.openshift.expose-services=, batch=17.1_20260112.1, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ovn-controller, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, build-date=2026-01-12T22:36:40Z, tcib_managed=true, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller)
Feb 20 08:50:16 np0005625203.localdomain podman[101976]: 2026-02-20 08:50:16.954590606 +0000 UTC m=+0.244352520 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, container_name=metrics_qdr, managed_by=tripleo_ansible, release=1766032510, name=rhosp-rhel9/openstack-qdrouterd, distribution-scope=public, version=17.1.13, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.created=2026-01-12T22:10:14Z, build-date=2026-01-12T22:10:14Z, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-qdrouterd-container)
Feb 20 08:50:16 np0005625203.localdomain podman[101968]: 2026-02-20 08:50:16.959099125 +0000 UTC m=+0.260615374 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:32:04Z, build-date=2026-01-12T23:32:04Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, vendor=Red Hat, Inc., config_id=tripleo_step5, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, tcib_managed=true, url=https://www.redhat.com, io.openshift.expose-services=)
Feb 20 08:50:16 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:50:17 np0005625203.localdomain podman[101991]: 2026-02-20 08:50:17.004849243 +0000 UTC m=+0.284284858 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, tcib_managed=true, build-date=2026-01-12T22:36:40Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ovn-controller, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:36:40Z, version=17.1.13, io.buildah.version=1.41.5, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20260112.1, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c)
Feb 20 08:50:17 np0005625203.localdomain podman[101991]: unhealthy
Feb 20 08:50:17 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:50:17 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Failed with result 'exit-code'.
Feb 20 08:50:17 np0005625203.localdomain podman[101970]: 2026-02-20 08:50:17.020186748 +0000 UTC m=+0.314976418 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, name=rhosp-rhel9/openstack-ceilometer-ipmi, version=17.1.13, architecture=x86_64, vcs-type=git, container_name=ceilometer_agent_ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:07:30Z, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, tcib_managed=true)
Feb 20 08:50:17 np0005625203.localdomain podman[101969]: 2026-02-20 08:50:17.026989669 +0000 UTC m=+0.328209909 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp-rhel9/openstack-ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:07:47Z, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, release=1766032510, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, io.openshift.expose-services=, io.buildah.version=1.41.5, container_name=ceilometer_agent_compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:07:47Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true)
Feb 20 08:50:17 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:50:17 np0005625203.localdomain podman[101970]: 2026-02-20 08:50:17.052450417 +0000 UTC m=+0.347240057 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:30Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, 
container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_id=tripleo_step4, tcib_managed=true, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, vcs-type=git, build-date=2026-01-12T23:07:30Z, io.buildah.version=1.41.5, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-ipmi-container)
Feb 20 08:50:17 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:50:17 np0005625203.localdomain podman[101967]: 2026-02-20 08:50:17.102404315 +0000 UTC m=+0.408729132 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, architecture=x86_64, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, 
name=rhosp-rhel9/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z)
Feb 20 08:50:17 np0005625203.localdomain podman[101976]: 2026-02-20 08:50:17.137178162 +0000 UTC m=+0.426940046 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1766032510, url=https://www.redhat.com, distribution-scope=public, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, vcs-type=git, 
vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, architecture=x86_64, vendor=Red Hat, Inc., config_id=tripleo_step1, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:50:17 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:50:17 np0005625203.localdomain podman[101967]: 2026-02-20 08:50:17.192235278 +0000 UTC m=+0.498560155 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, managed_by=tripleo_ansible, version=17.1.13, config_id=tripleo_step4, release=1766032510, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-cron-container, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 
url=https://www.redhat.com, tcib_managed=true, build-date=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-cron, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, batch=17.1_20260112.1, vendor=Red Hat, Inc., io.openshift.expose-services=)
Feb 20 08:50:17 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:50:25 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:50:25 np0005625203.localdomain systemd[1]: tmp-crun.17MyvT.mount: Deactivated successfully.
Feb 20 08:50:25 np0005625203.localdomain podman[102131]: 2026-02-20 08:50:25.779355705 +0000 UTC m=+0.094712775 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp-rhel9/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.5, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, 
managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, config_id=tripleo_step4, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, architecture=x86_64, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z)
Feb 20 08:50:26 np0005625203.localdomain podman[102131]: 2026-02-20 08:50:26.14656164 +0000 UTC m=+0.461918670 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, name=rhosp-rhel9/openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vcs-type=git, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Feb 20 08:50:26 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:50:30 np0005625203.localdomain sudo[102155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:50:30 np0005625203.localdomain sudo[102155]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:50:30 np0005625203.localdomain sudo[102155]: pam_unix(sudo:session): session closed for user root
Feb 20 08:50:30 np0005625203.localdomain sudo[102170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 08:50:30 np0005625203.localdomain sudo[102170]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:50:30 np0005625203.localdomain sudo[102170]: pam_unix(sudo:session): session closed for user root
Feb 20 08:50:33 np0005625203.localdomain sudo[102216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 08:50:33 np0005625203.localdomain sudo[102216]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:50:33 np0005625203.localdomain sudo[102216]: pam_unix(sudo:session): session closed for user root
Feb 20 08:50:35 np0005625203.localdomain sshd[102231]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:50:35 np0005625203.localdomain sshd[102231]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:50:40 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:50:40 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:50:40 np0005625203.localdomain podman[102233]: 2026-02-20 08:50:40.771299261 +0000 UTC m=+0.089692780 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, io.openshift.expose-services=, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-collectd, distribution-scope=public, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, vendor=Red Hat, Inc., config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-collectd-container, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, io.buildah.version=1.41.5)
Feb 20 08:50:40 np0005625203.localdomain podman[102233]: 2026-02-20 08:50:40.789255007 +0000 UTC m=+0.107648516 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.buildah.version=1.41.5, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, distribution-scope=public, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2026-01-12T22:10:15Z, tcib_managed=true, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-collectd-container, name=rhosp-rhel9/openstack-collectd, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, container_name=collectd, url=https://www.redhat.com, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:50:40 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:50:40 np0005625203.localdomain podman[102234]: 2026-02-20 08:50:40.871454382 +0000 UTC m=+0.186530918 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, org.opencontainers.image.created=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, 
batch=17.1_20260112.1, io.buildah.version=1.41.5, vcs-type=git, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, container_name=iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step3, io.openshift.expose-services=, name=rhosp-rhel9/openstack-iscsid, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64)
Feb 20 08:50:40 np0005625203.localdomain podman[102234]: 2026-02-20 08:50:40.911360639 +0000 UTC m=+0.226437125 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, 
name=rhosp-rhel9/openstack-iscsid, release=1766032510, build-date=2026-01-12T22:34:43Z, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, io.buildah.version=1.41.5, io.openshift.expose-services=, container_name=iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, org.opencontainers.image.created=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:50:40 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:50:47 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:50:47 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:50:47 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:50:47 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:50:47 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:50:47 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:50:47 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:50:47 np0005625203.localdomain podman[102279]: 2026-02-20 08:50:47.786573666 +0000 UTC m=+0.094353644 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_ipmi, vendor=Red Hat, Inc., release=1766032510, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, url=https://www.redhat.com, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:07:30Z, version=17.1.13, config_id=tripleo_step4, build-date=2026-01-12T23:07:30Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Feb 20 08:50:47 np0005625203.localdomain podman[102275]: 2026-02-20 08:50:47.765457602 +0000 UTC m=+0.078461532 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.13, vcs-type=git, batch=17.1_20260112.1, container_name=ceilometer_agent_compute, build-date=2026-01-12T23:07:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, tcib_managed=true, config_id=tripleo_step4, io.buildah.version=1.41.5, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1766032510, vendor=Red Hat, Inc., distribution-scope=public)
Feb 20 08:50:47 np0005625203.localdomain podman[102285]: 2026-02-20 08:50:47.83122595 +0000 UTC m=+0.134419205 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, vcs-type=git, managed_by=tripleo_ansible, io.openshift.expose-services=, name=rhosp-rhel9/openstack-qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, tcib_managed=true, version=17.1.13, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, container_name=metrics_qdr, batch=17.1_20260112.1, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']})
Feb 20 08:50:47 np0005625203.localdomain podman[102279]: 2026-02-20 08:50:47.866222113 +0000 UTC m=+0.174002121 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, url=https://www.redhat.com, build-date=2026-01-12T23:07:30Z, org.opencontainers.image.created=2026-01-12T23:07:30Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, io.buildah.version=1.41.5, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, tcib_managed=true, vcs-type=git, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, batch=17.1_20260112.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510)
Feb 20 08:50:47 np0005625203.localdomain podman[102292]: 2026-02-20 08:50:47.878529215 +0000 UTC m=+0.178961225 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.expose-services=, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20260112.1, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, version=17.1.13, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 
'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']})
Feb 20 08:50:47 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:50:47 np0005625203.localdomain podman[102292]: 2026-02-20 08:50:47.893140637 +0000 UTC m=+0.193572637 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-type=git, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, batch=17.1_20260112.1, build-date=2026-01-12T22:56:19Z, maintainer=OpenStack TripleO Team, release=1766032510, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_id=tripleo_step4, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, version=17.1.13, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:50:47 np0005625203.localdomain podman[102292]: unhealthy
Feb 20 08:50:47 np0005625203.localdomain podman[102275]: 2026-02-20 08:50:47.898465062 +0000 UTC m=+0.211468972 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, name=rhosp-rhel9/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, com.redhat.component=openstack-ceilometer-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:07:47Z, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, tcib_managed=true, container_name=ceilometer_agent_compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, config_id=tripleo_step4, batch=17.1_20260112.1, url=https://www.redhat.com, managed_by=tripleo_ansible, release=1766032510, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2026-01-12T23:07:47Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute)
Feb 20 08:50:47 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:50:47 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Failed with result 'exit-code'.
Feb 20 08:50:47 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:50:47 np0005625203.localdomain podman[102274]: 2026-02-20 08:50:47.933392364 +0000 UTC m=+0.246875138 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, release=1766032510, build-date=2026-01-12T23:32:04Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, batch=17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe)
Feb 20 08:50:47 np0005625203.localdomain podman[102274]: 2026-02-20 08:50:47.959146101 +0000 UTC m=+0.272628865 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, container_name=nova_compute, config_id=tripleo_step5, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.openshift.expose-services=, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-compute, maintainer=OpenStack TripleO Team, tcib_managed=true, url=https://www.redhat.com, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, vcs-type=git, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 
'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, architecture=x86_64, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe)
Feb 20 08:50:47 np0005625203.localdomain podman[102273]: 2026-02-20 08:50:47.866803621 +0000 UTC m=+0.183390751 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.13, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=logrotate_crond, release=1766032510, managed_by=tripleo_ansible, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, name=rhosp-rhel9/openstack-cron, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 cron)
Feb 20 08:50:47 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:50:48 np0005625203.localdomain podman[102273]: 2026-02-20 08:50:48.000438911 +0000 UTC m=+0.317026111 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.41.5, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.openshift.expose-services=, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, release=1766032510, build-date=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:50:48 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:50:48 np0005625203.localdomain podman[102285]: 2026-02-20 08:50:48.037683764 +0000 UTC m=+0.340876959 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, version=17.1.13, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-type=git, batch=17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., container_name=metrics_qdr, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true)
Feb 20 08:50:48 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:50:48 np0005625203.localdomain podman[102299]: 2026-02-20 08:50:48.042304008 +0000 UTC m=+0.340034744 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, container_name=ovn_controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, name=rhosp-rhel9/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, architecture=x86_64, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, release=1766032510, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, distribution-scope=public, url=https://www.redhat.com, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, 
com.redhat.component=openstack-ovn-controller-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:50:48 np0005625203.localdomain podman[102299]: 2026-02-20 08:50:48.125263818 +0000 UTC m=+0.422994524 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, container_name=ovn_controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git, name=rhosp-rhel9/openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, 
distribution-scope=public, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., architecture=x86_64, tcib_managed=true)
Feb 20 08:50:48 np0005625203.localdomain podman[102299]: unhealthy
Feb 20 08:50:48 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:50:48 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Failed with result 'exit-code'.
Feb 20 08:50:56 np0005625203.localdomain sshd[102438]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:50:56 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:50:56 np0005625203.localdomain sshd[102438]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:50:56 np0005625203.localdomain podman[102440]: 2026-02-20 08:50:56.744732107 +0000 UTC m=+0.068953287 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, build-date=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-compute, batch=17.1_20260112.1, vendor=Red Hat, Inc., container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, version=17.1.13, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, vcs-type=git, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']})
Feb 20 08:50:57 np0005625203.localdomain podman[102440]: 2026-02-20 08:50:57.141813997 +0000 UTC m=+0.466035187 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step4, release=1766032510, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, batch=17.1_20260112.1, distribution-scope=public, io.buildah.version=1.41.5, architecture=x86_64, container_name=nova_migration_target, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Feb 20 08:50:57 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:51:09 np0005625203.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Feb 20 08:51:09 np0005625203.localdomain recover_tripleo_nova_virtqemud[102464]: 62505
Feb 20 08:51:09 np0005625203.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Feb 20 08:51:09 np0005625203.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Feb 20 08:51:11 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:51:11 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:51:11 np0005625203.localdomain systemd[1]: tmp-crun.Cisrio.mount: Deactivated successfully.
Feb 20 08:51:11 np0005625203.localdomain podman[102465]: 2026-02-20 08:51:11.782405236 +0000 UTC m=+0.096929563 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:15Z, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.41.5, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', 
'/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2026-01-12T22:10:15Z, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, tcib_managed=true, name=rhosp-rhel9/openstack-collectd, container_name=collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd)
Feb 20 08:51:11 np0005625203.localdomain podman[102465]: 2026-02-20 08:51:11.796438911 +0000 UTC m=+0.110963238 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:15Z, version=17.1.13, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.buildah.version=1.41.5, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, managed_by=tripleo_ansible, architecture=x86_64, url=https://www.redhat.com, container_name=collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, com.redhat.component=openstack-collectd-container, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd)
Feb 20 08:51:11 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:51:11 np0005625203.localdomain podman[102466]: 2026-02-20 08:51:11.879431692 +0000 UTC m=+0.190089349 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.buildah.version=1.41.5, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, build-date=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container, vcs-ref=705339545363fec600102567c4e923938e0f43b3, 
distribution-scope=public, tcib_managed=true, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:34:43Z, batch=17.1_20260112.1, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, release=1766032510, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.)
Feb 20 08:51:11 np0005625203.localdomain podman[102466]: 2026-02-20 08:51:11.914943742 +0000 UTC m=+0.225601449 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vendor=Red Hat, Inc., release=1766032510, vcs-type=git, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, build-date=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, vcs-ref=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, io.openshift.expose-services=, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.5, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-iscsid)
Feb 20 08:51:11 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:51:18 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:51:18 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:51:18 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:51:18 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:51:18 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:51:18 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:51:18 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:51:18 np0005625203.localdomain podman[102501]: 2026-02-20 08:51:18.782339907 +0000 UTC m=+0.095815009 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, url=https://www.redhat.com, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, 
build-date=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, vcs-type=git, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, com.redhat.component=openstack-cron-container, container_name=logrotate_crond, config_id=tripleo_step4, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 cron)
Feb 20 08:51:18 np0005625203.localdomain podman[102501]: 2026-02-20 08:51:18.788982044 +0000 UTC m=+0.102457146 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:15Z, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, build-date=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-cron, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, vcs-type=git, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, url=https://www.redhat.com, vendor=Red Hat, Inc.)
Feb 20 08:51:18 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:51:18 np0005625203.localdomain podman[102502]: 2026-02-20 08:51:18.839509989 +0000 UTC m=+0.153420194 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, tcib_managed=true, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, version=17.1.13, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, distribution-scope=public, url=https://www.redhat.com, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:51:18 np0005625203.localdomain podman[102503]: 2026-02-20 08:51:18.896157533 +0000 UTC m=+0.202802224 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, config_id=tripleo_step4, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, architecture=x86_64, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, distribution-scope=public, url=https://www.redhat.com, release=1766032510, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2026-01-12T23:07:47Z, vcs-type=git, version=17.1.13, container_name=ceilometer_agent_compute, name=rhosp-rhel9/openstack-ceilometer-compute, 
com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:51:18 np0005625203.localdomain podman[102503]: 2026-02-20 08:51:18.930241739 +0000 UTC m=+0.236886470 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, io.openshift.expose-services=, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, config_id=tripleo_step4, version=17.1.13, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2026-01-12T23:07:47Z, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, name=rhosp-rhel9/openstack-ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true)
Feb 20 08:51:18 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:51:18 np0005625203.localdomain podman[102516]: 2026-02-20 08:51:18.951494928 +0000 UTC m=+0.250872793 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:56:19Z, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, vcs-type=git, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, tcib_managed=true, release=1766032510, io.buildah.version=1.41.5)
Feb 20 08:51:18 np0005625203.localdomain podman[102516]: 2026-02-20 08:51:18.991423944 +0000 UTC m=+0.290801849 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ovn_metadata_agent, io.buildah.version=1.41.5, managed_by=tripleo_ansible, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=tripleo_step4, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.13, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, distribution-scope=public, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true)
Feb 20 08:51:18 np0005625203.localdomain podman[102516]: unhealthy
Feb 20 08:51:19 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:51:19 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Failed with result 'exit-code'.
Feb 20 08:51:19 np0005625203.localdomain podman[102512]: 2026-02-20 08:51:19.004462668 +0000 UTC m=+0.306922839 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, tcib_managed=true, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, batch=17.1_20260112.1, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, version=17.1.13, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, name=rhosp-rhel9/openstack-qdrouterd, build-date=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, managed_by=tripleo_ansible)
Feb 20 08:51:19 np0005625203.localdomain podman[102504]: 2026-02-20 08:51:19.057329905 +0000 UTC m=+0.359721414 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:07:30Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, batch=17.1_20260112.1, io.buildah.version=1.41.5, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, 
distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, vcs-type=git)
Feb 20 08:51:19 np0005625203.localdomain podman[102502]: 2026-02-20 08:51:19.070734681 +0000 UTC m=+0.384644876 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', 
'/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, release=1766032510, version=17.1.13, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.expose-services=, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:32:04Z)
Feb 20 08:51:19 np0005625203.localdomain podman[102519]: 2026-02-20 08:51:19.105228189 +0000 UTC m=+0.401164588 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, release=1766032510, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ovn-controller, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, batch=17.1_20260112.1, vendor=Red Hat, Inc., container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, url=https://www.redhat.com)
Feb 20 08:51:19 np0005625203.localdomain podman[102504]: 2026-02-20 08:51:19.11333597 +0000 UTC m=+0.415727399 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.created=2026-01-12T23:07:30Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., batch=17.1_20260112.1, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, io.buildah.version=1.41.5, url=https://www.redhat.com, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']})
Feb 20 08:51:19 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:51:19 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:51:19 np0005625203.localdomain podman[102519]: 2026-02-20 08:51:19.151336918 +0000 UTC m=+0.447273307 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, container_name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:36:40Z, release=1766032510, distribution-scope=public, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2026-01-12T22:36:40Z, batch=17.1_20260112.1, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, name=rhosp-rhel9/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4)
Feb 20 08:51:19 np0005625203.localdomain podman[102519]: unhealthy
Feb 20 08:51:19 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:51:19 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Failed with result 'exit-code'.
Feb 20 08:51:19 np0005625203.localdomain podman[102512]: 2026-02-20 08:51:19.223230815 +0000 UTC m=+0.525690966 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=metrics_qdr, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.openshift.expose-services=, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, build-date=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, release=1766032510, name=rhosp-rhel9/openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible)
Feb 20 08:51:19 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:51:27 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:51:27 np0005625203.localdomain systemd[1]: tmp-crun.1BTL2Q.mount: Deactivated successfully.
Feb 20 08:51:27 np0005625203.localdomain podman[102670]: 2026-02-20 08:51:27.775490821 +0000 UTC m=+0.090965838 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, batch=17.1_20260112.1, url=https://www.redhat.com, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, name=rhosp-rhel9/openstack-nova-compute, build-date=2026-01-12T23:32:04Z, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=)
Feb 20 08:51:28 np0005625203.localdomain podman[102670]: 2026-02-20 08:51:28.158998521 +0000 UTC m=+0.474473448 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vendor=Red Hat, Inc., build-date=2026-01-12T23:32:04Z, config_id=tripleo_step4, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.created=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.buildah.version=1.41.5, version=17.1.13, architecture=x86_64, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git)
Feb 20 08:51:28 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:51:34 np0005625203.localdomain sudo[102693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:51:34 np0005625203.localdomain sudo[102693]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:51:34 np0005625203.localdomain sudo[102693]: pam_unix(sudo:session): session closed for user root
Feb 20 08:51:34 np0005625203.localdomain sudo[102708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 check-host
Feb 20 08:51:34 np0005625203.localdomain sudo[102708]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:51:34 np0005625203.localdomain sudo[102708]: pam_unix(sudo:session): session closed for user root
Feb 20 08:51:34 np0005625203.localdomain sudo[102743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:51:34 np0005625203.localdomain sudo[102743]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:51:34 np0005625203.localdomain sudo[102743]: pam_unix(sudo:session): session closed for user root
Feb 20 08:51:34 np0005625203.localdomain sudo[102758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 08:51:34 np0005625203.localdomain sudo[102758]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:51:35 np0005625203.localdomain sudo[102758]: pam_unix(sudo:session): session closed for user root
Feb 20 08:51:36 np0005625203.localdomain sudo[102806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 08:51:36 np0005625203.localdomain sudo[102806]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:51:36 np0005625203.localdomain sudo[102806]: pam_unix(sudo:session): session closed for user root
Feb 20 08:51:42 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:51:42 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:51:42 np0005625203.localdomain podman[102821]: 2026-02-20 08:51:42.771050236 +0000 UTC m=+0.085307393 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-collectd, container_name=collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., io.buildah.version=1.41.5, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, distribution-scope=public, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, build-date=2026-01-12T22:10:15Z, vcs-type=git)
Feb 20 08:51:42 np0005625203.localdomain podman[102821]: 2026-02-20 08:51:42.781193631 +0000 UTC m=+0.095450728 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.component=openstack-collectd-container, release=1766032510, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, build-date=2026-01-12T22:10:15Z, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp-rhel9/openstack-collectd, url=https://www.redhat.com, tcib_managed=true, vcs-type=git, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, container_name=collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., io.openshift.expose-services=)
Feb 20 08:51:42 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:51:42 np0005625203.localdomain podman[102822]: 2026-02-20 08:51:42.825243146 +0000 UTC m=+0.138134741 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, name=rhosp-rhel9/openstack-iscsid, build-date=2026-01-12T22:34:43Z, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, managed_by=tripleo_ansible, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., container_name=iscsid, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:34:43Z, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, vcs-type=git, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:51:42 np0005625203.localdomain podman[102822]: 2026-02-20 08:51:42.858745263 +0000 UTC m=+0.171636858 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, com.redhat.component=openstack-iscsid-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, build-date=2026-01-12T22:34:43Z, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, vcs-type=git, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:34:43Z, release=1766032510)
Feb 20 08:51:42 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:51:43 np0005625203.localdomain sshd[102860]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:51:43 np0005625203.localdomain sshd[102860]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:51:44 np0005625203.localdomain sshd[102862]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:51:44 np0005625203.localdomain sshd[102862]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:51:49 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:51:49 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:51:49 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:51:49 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:51:49 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:51:49 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:51:49 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:51:49 np0005625203.localdomain podman[102886]: 2026-02-20 08:51:49.787941583 +0000 UTC m=+0.078346768 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, build-date=2026-01-12T22:56:19Z, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, tcib_managed=true, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-type=git, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn)
Feb 20 08:51:49 np0005625203.localdomain podman[102886]: 2026-02-20 08:51:49.79914887 +0000 UTC m=+0.089554125 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z, build-date=2026-01-12T22:56:19Z, version=17.1.13, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, tcib_managed=true, batch=17.1_20260112.1, managed_by=tripleo_ansible, config_id=tripleo_step4, release=1766032510, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.expose-services=, url=https://www.redhat.com, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, vendor=Red Hat, Inc.)
Feb 20 08:51:49 np0005625203.localdomain podman[102886]: unhealthy
Feb 20 08:51:49 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:51:49 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Failed with result 'exit-code'.
Feb 20 08:51:49 np0005625203.localdomain podman[102864]: 2026-02-20 08:51:49.886010401 +0000 UTC m=+0.197530390 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 cron, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, distribution-scope=public, io.buildah.version=1.41.5, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, config_id=tripleo_step4, architecture=x86_64, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:10:15Z, container_name=logrotate_crond, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.expose-services=, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron)
Feb 20 08:51:49 np0005625203.localdomain podman[102864]: 2026-02-20 08:51:49.89827478 +0000 UTC m=+0.209794799 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, config_id=tripleo_step4, tcib_managed=true, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=logrotate_crond, distribution-scope=public, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, build-date=2026-01-12T22:10:15Z, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.5, com.redhat.component=openstack-cron-container, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team)
Feb 20 08:51:49 np0005625203.localdomain podman[102876]: 2026-02-20 08:51:49.90601212 +0000 UTC m=+0.201029558 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, maintainer=OpenStack TripleO Team, config_id=tripleo_step1, io.buildah.version=1.41.5, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:14Z, container_name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, tcib_managed=true, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, build-date=2026-01-12T22:10:14Z, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee)
Feb 20 08:51:49 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:51:49 np0005625203.localdomain podman[102866]: 2026-02-20 08:51:49.837901431 +0000 UTC m=+0.144361774 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, config_id=tripleo_step4, tcib_managed=true, io.buildah.version=1.41.5, vendor=Red Hat, Inc., architecture=x86_64, build-date=2026-01-12T23:07:47Z, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, url=https://www.redhat.com, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, batch=17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible, version=17.1.13)
Feb 20 08:51:49 np0005625203.localdomain podman[102870]: 2026-02-20 08:51:49.945730131 +0000 UTC m=+0.247998413 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, architecture=x86_64, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.created=2026-01-12T23:07:30Z, url=https://www.redhat.com, 
org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, build-date=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, config_id=tripleo_step4, distribution-scope=public, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_ipmi, vcs-type=git, tcib_managed=true, vendor=Red Hat, Inc.)
Feb 20 08:51:49 np0005625203.localdomain podman[102865]: 2026-02-20 08:51:49.995851643 +0000 UTC m=+0.302136460 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, release=1766032510, tcib_managed=true, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, version=17.1.13, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, vcs-type=git, architecture=x86_64)
Feb 20 08:51:50 np0005625203.localdomain podman[102890]: 2026-02-20 08:51:49.867028273 +0000 UTC m=+0.152558867 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, managed_by=tripleo_ansible, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, release=1766032510, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, version=17.1.13, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., build-date=2026-01-12T22:36:40Z, maintainer=OpenStack TripleO Team, 
cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, batch=17.1_20260112.1, container_name=ovn_controller, org.opencontainers.image.created=2026-01-12T22:36:40Z)
Feb 20 08:51:50 np0005625203.localdomain podman[102866]: 2026-02-20 08:51:50.024636506 +0000 UTC m=+0.331096829 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, vcs-type=git, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, batch=17.1_20260112.1, build-date=2026-01-12T23:07:47Z, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, version=17.1.13, com.redhat.component=openstack-ceilometer-compute-container, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-compute, url=https://www.redhat.com)
Feb 20 08:51:50 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:51:50 np0005625203.localdomain podman[102870]: 2026-02-20 08:51:50.04643401 +0000 UTC m=+0.348702282 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, container_name=ceilometer_agent_ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, version=17.1.13, url=https://www.redhat.com, io.buildah.version=1.41.5, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:30Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, 
maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-ipmi-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi)
Feb 20 08:51:50 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:51:50 np0005625203.localdomain podman[102865]: 2026-02-20 08:51:50.071522907 +0000 UTC m=+0.377807724 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, architecture=x86_64, managed_by=tripleo_ansible, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', 
'/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, maintainer=OpenStack TripleO Team, vcs-type=git, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, version=17.1.13, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute)
Feb 20 08:51:50 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:51:50 np0005625203.localdomain podman[102890]: 2026-02-20 08:51:50.098679549 +0000 UTC m=+0.384210153 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, release=1766032510, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, vendor=Red 
Hat, Inc., build-date=2026-01-12T22:36:40Z, version=17.1.13, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container)
Feb 20 08:51:50 np0005625203.localdomain podman[102890]: unhealthy
Feb 20 08:51:50 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:51:50 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Failed with result 'exit-code'.
Feb 20 08:51:50 np0005625203.localdomain podman[102876]: 2026-02-20 08:51:50.124315113 +0000 UTC m=+0.419332521 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-qdrouterd-container, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, config_id=tripleo_step1, io.buildah.version=1.41.5, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, build-date=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, tcib_managed=true, release=1766032510, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:14Z)
Feb 20 08:51:50 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:51:58 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:51:58 np0005625203.localdomain podman[103032]: 2026-02-20 08:51:58.756205995 +0000 UTC m=+0.077074298 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, tcib_managed=true, name=rhosp-rhel9/openstack-nova-compute, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:32:04Z, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, batch=17.1_20260112.1, url=https://www.redhat.com, architecture=x86_64, build-date=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1766032510)
Feb 20 08:51:59 np0005625203.localdomain podman[103032]: 2026-02-20 08:51:59.143427481 +0000 UTC m=+0.464295784 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, container_name=nova_migration_target, org.opencontainers.image.created=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, release=1766032510, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Feb 20 08:51:59 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:52:12 np0005625203.localdomain sshd[103055]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:52:13 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:52:13 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:52:13 np0005625203.localdomain podman[103058]: 2026-02-20 08:52:13.772616148 +0000 UTC m=+0.085437538 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, io.openshift.expose-services=, io.buildah.version=1.41.5, managed_by=tripleo_ansible, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vendor=Red Hat, Inc., 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, tcib_managed=true, config_id=tripleo_step3, name=rhosp-rhel9/openstack-iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, container_name=iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, batch=17.1_20260112.1, vcs-type=git, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid)
Feb 20 08:52:13 np0005625203.localdomain podman[103058]: 2026-02-20 08:52:13.809594773 +0000 UTC m=+0.122416173 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, architecture=x86_64, name=rhosp-rhel9/openstack-iscsid, com.redhat.component=openstack-iscsid-container, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, distribution-scope=public, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, build-date=2026-01-12T22:34:43Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, 
url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, io.openshift.expose-services=, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, tcib_managed=true, version=17.1.13, batch=17.1_20260112.1)
Feb 20 08:52:13 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:52:13 np0005625203.localdomain podman[103057]: 2026-02-20 08:52:13.817383355 +0000 UTC m=+0.134406885 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, batch=17.1_20260112.1, version=17.1.13, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, vendor=Red Hat, Inc., config_id=tripleo_step3, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, maintainer=OpenStack TripleO Team, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.expose-services=, container_name=collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, architecture=x86_64, managed_by=tripleo_ansible, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container)
Feb 20 08:52:13 np0005625203.localdomain podman[103057]: 2026-02-20 08:52:13.900399086 +0000 UTC m=+0.217422586 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, version=17.1.13, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, container_name=collectd, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, release=1766032510, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, distribution-scope=public, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, tcib_managed=true, batch=17.1_20260112.1)
Feb 20 08:52:13 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:52:14 np0005625203.localdomain sshd[103055]: Invalid user ts1 from 102.210.148.92 port 35420
Feb 20 08:52:14 np0005625203.localdomain sshd[103055]: Received disconnect from 102.210.148.92 port 35420:11: Bye Bye [preauth]
Feb 20 08:52:14 np0005625203.localdomain sshd[103055]: Disconnected from invalid user ts1 102.210.148.92 port 35420 [preauth]
Feb 20 08:52:20 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:52:20 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:52:20 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:52:20 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:52:20 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:52:20 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:52:20 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:52:20 np0005625203.localdomain podman[103112]: 2026-02-20 08:52:20.785222451 +0000 UTC m=+0.086428368 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.created=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, batch=17.1_20260112.1, container_name=ovn_metadata_agent, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-type=git, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.13, architecture=x86_64, io.buildah.version=1.41.5, release=1766032510, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, managed_by=tripleo_ansible, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Feb 20 08:52:20 np0005625203.localdomain podman[103112]: 2026-02-20 08:52:20.823668562 +0000 UTC m=+0.124874459 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, version=17.1.13, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, release=1766032510, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, tcib_managed=true, url=https://www.redhat.com, vcs-type=git)
Feb 20 08:52:20 np0005625203.localdomain podman[103112]: unhealthy
Feb 20 08:52:20 np0005625203.localdomain systemd[1]: tmp-crun.dR9kE3.mount: Deactivated successfully.
Feb 20 08:52:20 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:52:20 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Failed with result 'exit-code'.
Feb 20 08:52:20 np0005625203.localdomain podman[103096]: 2026-02-20 08:52:20.844822917 +0000 UTC m=+0.160516743 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, vcs-type=git, name=rhosp-rhel9/openstack-cron, container_name=logrotate_crond, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, maintainer=OpenStack TripleO Team, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, 
build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 cron)
Feb 20 08:52:20 np0005625203.localdomain podman[103096]: 2026-02-20 08:52:20.881204794 +0000 UTC m=+0.196898610 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., container_name=logrotate_crond, io.buildah.version=1.41.5, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-cron, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, release=1766032510, description=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, build-date=2026-01-12T22:10:15Z, url=https://www.redhat.com, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, architecture=x86_64)
Feb 20 08:52:20 np0005625203.localdomain podman[103097]: 2026-02-20 08:52:20.885988923 +0000 UTC m=+0.199509301 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, config_id=tripleo_step5, url=https://www.redhat.com, container_name=nova_compute, build-date=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., io.buildah.version=1.41.5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe)
Feb 20 08:52:20 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:52:20 np0005625203.localdomain podman[103119]: 2026-02-20 08:52:20.913272048 +0000 UTC m=+0.205999283 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.openshift.expose-services=, url=https://www.redhat.com, vcs-type=git, version=17.1.13, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ovn-controller, config_id=tripleo_step4, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., release=1766032510, io.buildah.version=1.41.5, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ovn_controller, build-date=2026-01-12T22:36:40Z, 
vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, architecture=x86_64, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:52:20 np0005625203.localdomain podman[103097]: 2026-02-20 08:52:20.93723501 +0000 UTC m=+0.250755418 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, tcib_managed=true, build-date=2026-01-12T23:32:04Z, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, name=rhosp-rhel9/openstack-nova-compute, config_id=tripleo_step5, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, version=17.1.13, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.41.5, vcs-type=git, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, container_name=nova_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Feb 20 08:52:20 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:52:20 np0005625203.localdomain podman[103099]: 2026-02-20 08:52:20.990382696 +0000 UTC m=+0.294449172 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:07:30Z, release=1766032510, io.openshift.expose-services=, vendor=Red Hat, Inc., tcib_managed=true, version=17.1.13, io.buildah.version=1.41.5, distribution-scope=public, batch=17.1_20260112.1, build-date=2026-01-12T23:07:30Z, config_id=tripleo_step4, architecture=x86_64, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack 
Platform 17.1 ceilometer-ipmi, vcs-type=git, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi)
Feb 20 08:52:20 np0005625203.localdomain podman[103098]: 2026-02-20 08:52:20.939722487 +0000 UTC m=+0.250471489 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.openshift.expose-services=, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:07:47Z, build-date=2026-01-12T23:07:47Z, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-type=git, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, architecture=x86_64, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, managed_by=tripleo_ansible, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Feb 20 08:52:21 np0005625203.localdomain podman[103105]: 2026-02-20 08:52:21.046078781 +0000 UTC m=+0.350881089 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, build-date=2026-01-12T22:10:14Z, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, container_name=metrics_qdr, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., batch=17.1_20260112.1, io.buildah.version=1.41.5, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:52:21 np0005625203.localdomain podman[103099]: 2026-02-20 08:52:21.070801617 +0000 UTC m=+0.374868073 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, managed_by=tripleo_ansible, release=1766032510, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, build-date=2026-01-12T23:07:30Z, name=rhosp-rhel9/openstack-ceilometer-ipmi, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, tcib_managed=true, vendor=Red Hat, Inc., vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:07:30Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Feb 20 08:52:21 np0005625203.localdomain podman[103098]: 2026-02-20 08:52:21.073182001 +0000 UTC m=+0.383930943 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.openshift.expose-services=, config_id=tripleo_step4, build-date=2026-01-12T23:07:47Z, release=1766032510, tcib_managed=true, architecture=x86_64, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-compute-container, name=rhosp-rhel9/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, managed_by=tripleo_ansible, container_name=ceilometer_agent_compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:52:21 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:52:21 np0005625203.localdomain podman[103119]: 2026-02-20 08:52:21.099238408 +0000 UTC m=+0.391965643 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.13, io.buildah.version=1.41.5, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, vcs-type=git, distribution-scope=public, batch=17.1_20260112.1, container_name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, maintainer=OpenStack TripleO Team, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, name=rhosp-rhel9/openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat 
OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:36:40Z, tcib_managed=true)
Feb 20 08:52:21 np0005625203.localdomain podman[103119]: unhealthy
Feb 20 08:52:21 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:52:21 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Failed with result 'exit-code'.
Feb 20 08:52:21 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:52:21 np0005625203.localdomain podman[103105]: 2026-02-20 08:52:21.219230235 +0000 UTC m=+0.524032543 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, tcib_managed=true, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-qdrouterd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20260112.1, managed_by=tripleo_ansible, container_name=metrics_qdr, distribution-scope=public, url=https://www.redhat.com, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:14Z, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z, name=rhosp-rhel9/openstack-qdrouterd, config_id=tripleo_step1)
Feb 20 08:52:21 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:52:29 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:52:29 np0005625203.localdomain podman[103259]: 2026-02-20 08:52:29.762753421 +0000 UTC m=+0.081135005 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, url=https://www.redhat.com, tcib_managed=true, distribution-scope=public, build-date=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, name=rhosp-rhel9/openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, version=17.1.13, container_name=nova_migration_target, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:52:30 np0005625203.localdomain podman[103259]: 2026-02-20 08:52:30.162503633 +0000 UTC m=+0.480885227 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, release=1766032510, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 
17.1 nova-compute, config_id=tripleo_step4, version=17.1.13, distribution-scope=public, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, vcs-type=git, maintainer=OpenStack TripleO Team, architecture=x86_64, managed_by=tripleo_ansible, io.buildah.version=1.41.5, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Feb 20 08:52:30 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:52:30 np0005625203.localdomain sshd[103282]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:52:30 np0005625203.localdomain sshd[103282]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:52:36 np0005625203.localdomain sudo[103284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:52:36 np0005625203.localdomain sudo[103284]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:52:36 np0005625203.localdomain sudo[103284]: pam_unix(sudo:session): session closed for user root
Feb 20 08:52:36 np0005625203.localdomain sudo[103299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 08:52:36 np0005625203.localdomain sudo[103299]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:52:36 np0005625203.localdomain sudo[103299]: pam_unix(sudo:session): session closed for user root
Feb 20 08:52:37 np0005625203.localdomain sudo[103347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 08:52:37 np0005625203.localdomain sudo[103347]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:52:37 np0005625203.localdomain sudo[103347]: pam_unix(sudo:session): session closed for user root
Feb 20 08:52:44 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:52:44 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:52:44 np0005625203.localdomain podman[103362]: 2026-02-20 08:52:44.778644327 +0000 UTC m=+0.090290448 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, container_name=collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, vcs-type=git, com.redhat.component=openstack-collectd-container, vendor=Red Hat, Inc., batch=17.1_20260112.1, name=rhosp-rhel9/openstack-collectd, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.13, url=https://www.redhat.com, distribution-scope=public, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, build-date=2026-01-12T22:10:15Z)
Feb 20 08:52:44 np0005625203.localdomain podman[103363]: 2026-02-20 08:52:44.823528367 +0000 UTC m=+0.134413954 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, tcib_managed=true, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, io.buildah.version=1.41.5, 
distribution-scope=public, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, container_name=iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, build-date=2026-01-12T22:34:43Z, name=rhosp-rhel9/openstack-iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, org.opencontainers.image.created=2026-01-12T22:34:43Z, batch=17.1_20260112.1, vcs-type=git, url=https://www.redhat.com)
Feb 20 08:52:44 np0005625203.localdomain podman[103363]: 2026-02-20 08:52:44.861328138 +0000 UTC m=+0.172213705 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:34:43Z, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=iscsid, batch=17.1_20260112.1, com.redhat.component=openstack-iscsid-container, architecture=x86_64, version=17.1.13, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, managed_by=tripleo_ansible, vcs-ref=705339545363fec600102567c4e923938e0f43b3, name=rhosp-rhel9/openstack-iscsid, 
build-date=2026-01-12T22:34:43Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, io.buildah.version=1.41.5)
Feb 20 08:52:44 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:52:44 np0005625203.localdomain podman[103362]: 2026-02-20 08:52:44.875716303 +0000 UTC m=+0.187362384 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, com.redhat.component=openstack-collectd-container, 
cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, name=rhosp-rhel9/openstack-collectd, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, build-date=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.buildah.version=1.41.5)
Feb 20 08:52:44 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:52:51 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:52:51 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:52:51 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:52:51 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:52:51 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:52:51 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:52:51 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:52:51 np0005625203.localdomain systemd[1]: tmp-crun.N9TwKk.mount: Deactivated successfully.
Feb 20 08:52:51 np0005625203.localdomain podman[103405]: 2026-02-20 08:52:51.787933938 +0000 UTC m=+0.091454764 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, vcs-type=git, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, release=1766032510, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ceilometer-ipmi, batch=17.1_20260112.1, architecture=x86_64, name=rhosp-rhel9/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:07:30Z, io.buildah.version=1.41.5, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Feb 20 08:52:51 np0005625203.localdomain podman[103397]: 2026-02-20 08:52:51.830777005 +0000 UTC m=+0.149562034 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, architecture=x86_64, name=rhosp-rhel9/openstack-cron, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:15Z, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, release=1766032510, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, vcs-type=git, io.openshift.expose-services=, distribution-scope=public)
Feb 20 08:52:51 np0005625203.localdomain podman[103398]: 2026-02-20 08:52:51.844968004 +0000 UTC m=+0.154858887 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., batch=17.1_20260112.1, vcs-type=git, build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, com.redhat.component=openstack-nova-compute-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, distribution-scope=public, container_name=nova_compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute)
Feb 20 08:52:51 np0005625203.localdomain podman[103397]: 2026-02-20 08:52:51.861516607 +0000 UTC m=+0.180301646 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vendor=Red Hat, Inc., io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, tcib_managed=true, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, version=17.1.13, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, 
com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public)
Feb 20 08:52:51 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:52:51 np0005625203.localdomain podman[103399]: 2026-02-20 08:52:51.878704529 +0000 UTC m=+0.186745335 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, build-date=2026-01-12T23:07:47Z, tcib_managed=true, version=17.1.13, name=rhosp-rhel9/openstack-ceilometer-compute, batch=17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, release=1766032510, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, distribution-scope=public, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Feb 20 08:52:51 np0005625203.localdomain podman[103398]: 2026-02-20 08:52:51.899159794 +0000 UTC m=+0.209050647 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.component=openstack-nova-compute-container, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:32:04Z, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, batch=17.1_20260112.1, release=1766032510, container_name=nova_compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, url=https://www.redhat.com)
Feb 20 08:52:51 np0005625203.localdomain podman[103399]: 2026-02-20 08:52:51.905250052 +0000 UTC m=+0.213290848 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, distribution-scope=public, build-date=2026-01-12T23:07:47Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.buildah.version=1.41.5, vcs-type=git, release=1766032510, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, managed_by=tripleo_ansible, io.openshift.expose-services=, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, url=https://www.redhat.com)
Feb 20 08:52:51 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:52:51 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:52:51 np0005625203.localdomain podman[103407]: 2026-02-20 08:52:51.944959222 +0000 UTC m=+0.244491494 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20260112.1, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, tcib_managed=true, version=17.1.13, release=1766032510, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, architecture=x86_64, build-date=2026-01-12T22:56:19Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=)
Feb 20 08:52:51 np0005625203.localdomain podman[103407]: 2026-02-20 08:52:51.963084123 +0000 UTC m=+0.262616415 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ovn_metadata_agent, tcib_managed=true, build-date=2026-01-12T22:56:19Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container)
Feb 20 08:52:51 np0005625203.localdomain podman[103407]: unhealthy
Feb 20 08:52:51 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:52:51 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Failed with result 'exit-code'.
Feb 20 08:52:51 np0005625203.localdomain podman[103416]: 2026-02-20 08:52:51.997794859 +0000 UTC m=+0.293945157 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, name=rhosp-rhel9/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:36:40Z, tcib_managed=true, url=https://www.redhat.com, release=1766032510, build-date=2026-01-12T22:36:40Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, config_id=tripleo_step4, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.41.5, version=17.1.13, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.component=openstack-ovn-controller-container, container_name=ovn_controller, batch=17.1_20260112.1)
Feb 20 08:52:52 np0005625203.localdomain podman[103416]: 2026-02-20 08:52:52.011507834 +0000 UTC m=+0.307658182 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, url=https://www.redhat.com, config_id=tripleo_step4, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, version=17.1.13, vendor=Red Hat, Inc., batch=17.1_20260112.1, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, 
name=rhosp-rhel9/openstack-ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:36:40Z, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:52:52 np0005625203.localdomain podman[103406]: 2026-02-20 08:52:52.051146271 +0000 UTC m=+0.354705469 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, container_name=metrics_qdr, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, batch=17.1_20260112.1, distribution-scope=public, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, vcs-type=git)
Feb 20 08:52:52 np0005625203.localdomain podman[103416]: unhealthy
Feb 20 08:52:52 np0005625203.localdomain podman[103405]: 2026-02-20 08:52:52.072058579 +0000 UTC m=+0.375579445 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-ipmi, release=1766032510, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, 
com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, version=17.1.13, managed_by=tripleo_ansible, tcib_managed=true, distribution-scope=public, build-date=2026-01-12T23:07:30Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:52:52 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:52:52 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Failed with result 'exit-code'.
Feb 20 08:52:52 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:52:52 np0005625203.localdomain podman[103406]: 2026-02-20 08:52:52.238104463 +0000 UTC m=+0.541663721 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, build-date=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, distribution-scope=public, config_id=tripleo_step1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, architecture=x86_64, release=1766032510, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, 
vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, tcib_managed=true, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-qdrouterd, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:14Z)
Feb 20 08:52:52 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:52:54 np0005625203.localdomain sshd[103561]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:52:54 np0005625203.localdomain sshd[103561]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:52:54 np0005625203.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Feb 20 08:52:54 np0005625203.localdomain recover_tripleo_nova_virtqemud[103564]: 62505
Feb 20 08:52:54 np0005625203.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Feb 20 08:52:54 np0005625203.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Feb 20 08:53:00 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:53:00 np0005625203.localdomain podman[103565]: 2026-02-20 08:53:00.745931552 +0000 UTC m=+0.070026720 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:32:04Z, build-date=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, tcib_managed=true, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-nova-compute, version=17.1.13, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Feb 20 08:53:01 np0005625203.localdomain podman[103565]: 2026-02-20 08:53:01.08321782 +0000 UTC m=+0.407312958 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, org.opencontainers.image.created=2026-01-12T23:32:04Z, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=nova_migration_target, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, version=17.1.13, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step4, io.openshift.expose-services=, url=https://www.redhat.com, batch=17.1_20260112.1, managed_by=tripleo_ansible, distribution-scope=public)
Feb 20 08:53:01 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:53:15 np0005625203.localdomain sshd[103588]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:53:15 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:53:15 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:53:15 np0005625203.localdomain sshd[103588]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:53:15 np0005625203.localdomain podman[103590]: 2026-02-20 08:53:15.776221693 +0000 UTC m=+0.090719942 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, version=17.1.13, io.buildah.version=1.41.5, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-collectd-container, release=1766032510, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=collectd, tcib_managed=true, build-date=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd)
Feb 20 08:53:15 np0005625203.localdomain podman[103590]: 2026-02-20 08:53:15.818655277 +0000 UTC m=+0.133153526 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, release=1766032510, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_id=tripleo_step3, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-collectd-container, url=https://www.redhat.com, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:53:15 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:53:15 np0005625203.localdomain podman[103591]: 2026-02-20 08:53:15.823357613 +0000 UTC m=+0.133787656 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, tcib_managed=true, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=iscsid, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, managed_by=tripleo_ansible, architecture=x86_64, 
vcs-ref=705339545363fec600102567c4e923938e0f43b3, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.buildah.version=1.41.5, com.redhat.component=openstack-iscsid-container, build-date=2026-01-12T22:34:43Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, name=rhosp-rhel9/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, batch=17.1_20260112.1, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510)
Feb 20 08:53:15 np0005625203.localdomain podman[103591]: 2026-02-20 08:53:15.90526612 +0000 UTC m=+0.215696173 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:34:43Z, container_name=iscsid, name=rhosp-rhel9/openstack-iscsid, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, architecture=x86_64, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.buildah.version=1.41.5, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, vcs-type=git, distribution-scope=public, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vendor=Red Hat, Inc., release=1766032510, com.redhat.component=openstack-iscsid-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, build-date=2026-01-12T22:34:43Z)
Feb 20 08:53:15 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:53:22 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:53:22 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:53:22 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:53:22 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:53:22 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:53:22 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:53:22 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:53:22 np0005625203.localdomain podman[103646]: 2026-02-20 08:53:22.796655828 +0000 UTC m=+0.100532227 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, distribution-scope=public, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, name=rhosp-rhel9/openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, url=https://www.redhat.com, tcib_managed=true, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:36:40Z, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., container_name=ovn_controller, io.buildah.version=1.41.5, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:53:22 np0005625203.localdomain podman[103633]: 2026-02-20 08:53:22.855292614 +0000 UTC m=+0.159443871 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, url=https://www.redhat.com, vendor=Red Hat, Inc., config_id=tripleo_step1, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, batch=17.1_20260112.1, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, architecture=x86_64, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, name=rhosp-rhel9/openstack-qdrouterd)
Feb 20 08:53:22 np0005625203.localdomain podman[103631]: 2026-02-20 08:53:22.815742149 +0000 UTC m=+0.126520681 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, url=https://www.redhat.com, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, io.buildah.version=1.41.5, tcib_managed=true, build-date=2026-01-12T23:07:47Z, com.redhat.component=openstack-ceilometer-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:07:47Z, maintainer=OpenStack TripleO Team, vcs-type=git, container_name=ceilometer_agent_compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-compute, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510)
Feb 20 08:53:22 np0005625203.localdomain podman[103629]: 2026-02-20 08:53:22.781382374 +0000 UTC m=+0.098956167 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, build-date=2026-01-12T22:10:15Z, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:10:15Z, managed_by=tripleo_ansible, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, vcs-type=git, architecture=x86_64, name=rhosp-rhel9/openstack-cron, tcib_managed=true, konflux.additional-tags=17.1.13 
17.1_20260112.1, release=1766032510, vendor=Red Hat, Inc., container_name=logrotate_crond, com.redhat.component=openstack-cron-container, description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://www.redhat.com, io.openshift.expose-services=, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:53:22 np0005625203.localdomain podman[103630]: 2026-02-20 08:53:22.835374097 +0000 UTC m=+0.153673613 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, io.buildah.version=1.41.5, managed_by=tripleo_ansible, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step5, architecture=x86_64, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, build-date=2026-01-12T23:32:04Z, tcib_managed=true, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team)
Feb 20 08:53:22 np0005625203.localdomain podman[103645]: 2026-02-20 08:53:22.89295834 +0000 UTC m=+0.192210065 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
batch=17.1_20260112.1, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, tcib_managed=true, version=17.1.13, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.created=2026-01-12T22:56:19Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, url=https://www.redhat.com, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, managed_by=tripleo_ansible)
Feb 20 08:53:22 np0005625203.localdomain podman[103631]: 2026-02-20 08:53:22.898272295 +0000 UTC m=+0.209050867 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1766032510, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=openstack-ceilometer-compute-container, batch=17.1_20260112.1, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, 
container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:07:47Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-type=git, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:07:47Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:53:22 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:53:22 np0005625203.localdomain podman[103629]: 2026-02-20 08:53:22.91325377 +0000 UTC m=+0.230827523 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-cron, 
cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, io.buildah.version=1.41.5, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, container_name=logrotate_crond, config_id=tripleo_step4, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z)
Feb 20 08:53:22 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:53:22 np0005625203.localdomain podman[103632]: 2026-02-20 08:53:22.928294335 +0000 UTC m=+0.238876951 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.buildah.version=1.41.5, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, container_name=ceilometer_agent_ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, build-date=2026-01-12T23:07:30Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, 
url=https://www.redhat.com, architecture=x86_64, batch=17.1_20260112.1, com.redhat.component=openstack-ceilometer-ipmi-container, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-ipmi, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible)
Feb 20 08:53:22 np0005625203.localdomain podman[103646]: 2026-02-20 08:53:22.928942835 +0000 UTC m=+0.232819234 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, build-date=2026-01-12T22:36:40Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, version=17.1.13, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, container_name=ovn_controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, url=https://www.redhat.com, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ovn-controller, io.buildah.version=1.41.5, architecture=x86_64, 
konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller)
Feb 20 08:53:22 np0005625203.localdomain podman[103632]: 2026-02-20 08:53:22.948249393 +0000 UTC m=+0.258832049 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, architecture=x86_64, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, batch=17.1_20260112.1, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, managed_by=tripleo_ansible, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:07:30Z, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, distribution-scope=public, build-date=2026-01-12T23:07:30Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ceilometer-ipmi-container, version=17.1.13, container_name=ceilometer_agent_ipmi)
Feb 20 08:53:22 np0005625203.localdomain podman[103645]: 2026-02-20 08:53:22.957990955 +0000 UTC m=+0.257242700 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, architecture=x86_64, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.5, vcs-type=git, 
maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z, url=https://www.redhat.com, batch=17.1_20260112.1, vendor=Red Hat, Inc., com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1766032510, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn)
Feb 20 08:53:22 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:53:22 np0005625203.localdomain podman[103645]: unhealthy
Feb 20 08:53:22 np0005625203.localdomain podman[103630]: 2026-02-20 08:53:22.964061974 +0000 UTC m=+0.282361520 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, batch=17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, architecture=x86_64, config_id=tripleo_step5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, tcib_managed=true, io.openshift.expose-services=, build-date=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, release=1766032510, container_name=nova_compute)
Feb 20 08:53:22 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:53:22 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Failed with result 'exit-code'.
Feb 20 08:53:22 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:53:22 np0005625203.localdomain podman[103646]: unhealthy
Feb 20 08:53:22 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:53:22 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Failed with result 'exit-code'.
Feb 20 08:53:23 np0005625203.localdomain podman[103633]: 2026-02-20 08:53:23.09028546 +0000 UTC m=+0.394436747 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-type=git, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:14Z, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, url=https://www.redhat.com, name=rhosp-rhel9/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible)
Feb 20 08:53:23 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:53:31 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:53:31 np0005625203.localdomain systemd[1]: tmp-crun.8OMcGA.mount: Deactivated successfully.
Feb 20 08:53:31 np0005625203.localdomain podman[103794]: 2026-02-20 08:53:31.763528087 +0000 UTC m=+0.081017490 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, io.openshift.expose-services=, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.buildah.version=1.41.5, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, container_name=nova_migration_target, build-date=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1766032510, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vendor=Red Hat, Inc., managed_by=tripleo_ansible)
Feb 20 08:53:32 np0005625203.localdomain podman[103794]: 2026-02-20 08:53:32.143316341 +0000 UTC m=+0.460805814 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:32:04Z, container_name=nova_migration_target, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1766032510, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc.)
Feb 20 08:53:32 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:53:32 np0005625203.localdomain sshd[103817]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:53:33 np0005625203.localdomain sshd[103817]: Invalid user ts1 from 212.154.234.9 port 12032
Feb 20 08:53:33 np0005625203.localdomain sshd[103817]: Received disconnect from 212.154.234.9 port 12032:11: Bye Bye [preauth]
Feb 20 08:53:33 np0005625203.localdomain sshd[103817]: Disconnected from invalid user ts1 212.154.234.9 port 12032 [preauth]
Feb 20 08:53:37 np0005625203.localdomain sudo[103819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:53:37 np0005625203.localdomain sudo[103819]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:53:37 np0005625203.localdomain sudo[103819]: pam_unix(sudo:session): session closed for user root
Feb 20 08:53:37 np0005625203.localdomain sudo[103834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Feb 20 08:53:37 np0005625203.localdomain sudo[103834]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:53:38 np0005625203.localdomain podman[103922]: 2026-02-20 08:53:38.759468635 +0000 UTC m=+0.098196222 container exec f6bf94ad66e54cdd610286b233c639418eb2b995978f1a145d6cfa77a016071d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, distribution-scope=public, description=Red Hat Ceph Storage 7, ceph=True, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.42.2, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1770267347, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Feb 20 08:53:38 np0005625203.localdomain podman[103922]: 2026-02-20 08:53:38.892431802 +0000 UTC m=+0.231159369 container exec_died f6bf94ad66e54cdd610286b233c639418eb2b995978f1a145d6cfa77a016071d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, release=1770267347, build-date=2026-02-09T10:25:24Z, version=7, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, vcs-type=git, vendor=Red Hat, Inc., GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git)
Feb 20 08:53:39 np0005625203.localdomain sudo[103834]: pam_unix(sudo:session): session closed for user root
Feb 20 08:53:39 np0005625203.localdomain sudo[103988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:53:39 np0005625203.localdomain sudo[103988]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:53:39 np0005625203.localdomain sudo[103988]: pam_unix(sudo:session): session closed for user root
Feb 20 08:53:39 np0005625203.localdomain sudo[104003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 08:53:39 np0005625203.localdomain sudo[104003]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:53:39 np0005625203.localdomain sudo[104003]: pam_unix(sudo:session): session closed for user root
Feb 20 08:53:40 np0005625203.localdomain sudo[104051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 08:53:40 np0005625203.localdomain sudo[104051]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:53:40 np0005625203.localdomain sudo[104051]: pam_unix(sudo:session): session closed for user root
Feb 20 08:53:46 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:53:46 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:53:46 np0005625203.localdomain podman[104067]: 2026-02-20 08:53:46.764109658 +0000 UTC m=+0.082817153 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, batch=17.1_20260112.1, container_name=iscsid, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:34:43Z, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, url=https://www.redhat.com, name=rhosp-rhel9/openstack-iscsid, distribution-scope=public, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, build-date=2026-01-12T22:34:43Z, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, version=17.1.13)
Feb 20 08:53:46 np0005625203.localdomain podman[104067]: 2026-02-20 08:53:46.774910645 +0000 UTC m=+0.093618170 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp-rhel9/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, url=https://www.redhat.com, vcs-ref=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, 
description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, release=1766032510, batch=17.1_20260112.1, version=17.1.13, tcib_managed=true, com.redhat.component=openstack-iscsid-container, vcs-type=git, build-date=2026-01-12T22:34:43Z)
Feb 20 08:53:46 np0005625203.localdomain podman[104066]: 2026-02-20 08:53:46.812797287 +0000 UTC m=+0.130981326 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, 
com.redhat.component=openstack-collectd-container, managed_by=tripleo_ansible, batch=17.1_20260112.1, tcib_managed=true, io.buildah.version=1.41.5, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.openshift.expose-services=, url=https://www.redhat.com, version=17.1.13, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, container_name=collectd)
Feb 20 08:53:46 np0005625203.localdomain podman[104066]: 2026-02-20 08:53:46.823569703 +0000 UTC m=+0.141753762 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, vcs-type=git, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, io.openshift.expose-services=, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-collectd, config_id=tripleo_step3, description=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, batch=17.1_20260112.1, container_name=collectd, release=1766032510, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., tcib_managed=true, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd)
Feb 20 08:53:46 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:53:46 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:53:53 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:53:53 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:53:53 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:53:53 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:53:53 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:53:53 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:53:53 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:53:53 np0005625203.localdomain systemd[1]: tmp-crun.hIlfSl.mount: Deactivated successfully.
Feb 20 08:53:53 np0005625203.localdomain podman[104107]: 2026-02-20 08:53:53.782823546 +0000 UTC m=+0.091418012 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:07:47Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, 
konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:07:47Z, batch=17.1_20260112.1, url=https://www.redhat.com, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Feb 20 08:53:53 np0005625203.localdomain podman[104106]: 2026-02-20 08:53:53.830532673 +0000 UTC m=+0.144512877 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, io.openshift.expose-services=, vcs-type=git, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vendor=Red Hat, Inc., version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-compute, container_name=nova_compute)
Feb 20 08:53:53 np0005625203.localdomain podman[104107]: 2026-02-20 08:53:53.838247484 +0000 UTC m=+0.146841960 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, vendor=Red Hat, Inc., version=17.1.13, architecture=x86_64, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ceilometer_agent_compute, distribution-scope=public, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, name=rhosp-rhel9/openstack-ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, io.buildah.version=1.41.5, io.openshift.expose-services=, config_id=tripleo_step4, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, org.opencontainers.image.created=2026-01-12T23:07:47Z, batch=17.1_20260112.1, com.redhat.component=openstack-ceilometer-compute-container, managed_by=tripleo_ansible)
Feb 20 08:53:53 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:53:53 np0005625203.localdomain podman[104106]: 2026-02-20 08:53:53.886292842 +0000 UTC m=+0.200273026 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step5, name=rhosp-rhel9/openstack-nova-compute, version=17.1.13, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, container_name=nova_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, build-date=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1)
Feb 20 08:53:53 np0005625203.localdomain podman[104105]: 2026-02-20 08:53:53.88910868 +0000 UTC m=+0.204962772 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.13, name=rhosp-rhel9/openstack-cron, architecture=x86_64, tcib_managed=true, com.redhat.component=openstack-cron-container, config_id=tripleo_step4, distribution-scope=public, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, release=1766032510, summary=Red Hat OpenStack Platform 17.1 cron, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, vendor=Red Hat, Inc., url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, managed_by=tripleo_ansible, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:53:53 np0005625203.localdomain podman[104105]: 2026-02-20 08:53:53.89583479 +0000 UTC m=+0.211688872 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.buildah.version=1.41.5, managed_by=tripleo_ansible, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, version=17.1.13, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, container_name=logrotate_crond, config_id=tripleo_step4, 
maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z)
Feb 20 08:53:53 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:53:53 np0005625203.localdomain podman[104108]: 2026-02-20 08:53:53.933021149 +0000 UTC m=+0.237596129 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, vcs-type=git, io.openshift.expose-services=, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.created=2026-01-12T23:07:30Z, vendor=Red Hat, Inc., vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2026-01-12T23:07:30Z, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, 
konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, tcib_managed=true, batch=17.1_20260112.1, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, config_id=tripleo_step4, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.buildah.version=1.41.5, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:53:53 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:53:53 np0005625203.localdomain podman[104108]: 2026-02-20 08:53:53.981295605 +0000 UTC m=+0.285870655 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, distribution-scope=public, build-date=2026-01-12T23:07:30Z, io.buildah.version=1.41.5, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, release=1766032510, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:30Z, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible)
Feb 20 08:53:53 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:53:53 np0005625203.localdomain podman[104116]: 2026-02-20 08:53:53.993921378 +0000 UTC m=+0.298275572 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, config_id=tripleo_step1, architecture=x86_64, build-date=2026-01-12T22:10:14Z, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, managed_by=tripleo_ansible, release=1766032510, vcs-type=git, name=rhosp-rhel9/openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, container_name=metrics_qdr, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:14Z)
Feb 20 08:53:54 np0005625203.localdomain podman[104130]: 2026-02-20 08:53:54.053448804 +0000 UTC m=+0.348820728 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vcs-type=git, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, build-date=2026-01-12T22:36:40Z, name=rhosp-rhel9/openstack-ovn-controller, version=17.1.13, batch=17.1_20260112.1, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:36:40Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ovn-controller-container, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.buildah.version=1.41.5)
Feb 20 08:53:54 np0005625203.localdomain podman[104130]: 2026-02-20 08:53:54.098360486 +0000 UTC m=+0.393732400 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:36:40Z, name=rhosp-rhel9/openstack-ovn-controller, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, version=17.1.13, distribution-scope=public, managed_by=tripleo_ansible, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, tcib_managed=true, container_name=ovn_controller, vcs-type=git, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.buildah.version=1.41.5, url=https://www.redhat.com, architecture=x86_64, build-date=2026-01-12T22:36:40Z, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']})
Feb 20 08:53:54 np0005625203.localdomain podman[104130]: unhealthy
Feb 20 08:53:54 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:53:54 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Failed with result 'exit-code'.
Feb 20 08:53:54 np0005625203.localdomain podman[104120]: 2026-02-20 08:53:54.120596868 +0000 UTC m=+0.419886053 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, managed_by=tripleo_ansible, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:56:19Z, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:56:19Z, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, release=1766032510, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1)
Feb 20 08:53:54 np0005625203.localdomain podman[104120]: 2026-02-20 08:53:54.133640505 +0000 UTC m=+0.432929710 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:56:19Z, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, release=1766032510, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ovn_metadata_agent, architecture=x86_64, managed_by=tripleo_ansible, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.13, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.buildah.version=1.41.5, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:53:54 np0005625203.localdomain podman[104120]: unhealthy
Feb 20 08:53:54 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:53:54 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Failed with result 'exit-code'.
Feb 20 08:53:54 np0005625203.localdomain sshd[104269]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:53:54 np0005625203.localdomain podman[104116]: 2026-02-20 08:53:54.24347054 +0000 UTC m=+0.547824734 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.buildah.version=1.41.5, container_name=metrics_qdr, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, batch=17.1_20260112.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:14Z, build-date=2026-01-12T22:10:14Z, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=rhosp-rhel9/openstack-qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible)
Feb 20 08:53:54 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:53:55 np0005625203.localdomain sshd[104269]: Invalid user builder from 103.200.25.162 port 53556
Feb 20 08:53:56 np0005625203.localdomain sshd[104269]: Received disconnect from 103.200.25.162 port 53556:11: Bye Bye [preauth]
Feb 20 08:53:56 np0005625203.localdomain sshd[104269]: Disconnected from invalid user builder 103.200.25.162 port 53556 [preauth]
Feb 20 08:54:00 np0005625203.localdomain sshd[104272]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:54:00 np0005625203.localdomain sshd[104272]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:54:03 np0005625203.localdomain sshd[104274]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:54:03 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:54:03 np0005625203.localdomain podman[104275]: 2026-02-20 08:54:03.745118312 +0000 UTC m=+0.071902503 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.13, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, summary=Red Hat 
OpenStack Platform 17.1 nova-compute, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, config_id=tripleo_step4, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:54:03 np0005625203.localdomain sshd[104274]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:54:04 np0005625203.localdomain podman[104275]: 2026-02-20 08:54:04.121387585 +0000 UTC m=+0.448171786 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., batch=17.1_20260112.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13)
Feb 20 08:54:04 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:54:17 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:54:17 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:54:17 np0005625203.localdomain podman[104300]: 2026-02-20 08:54:17.783537926 +0000 UTC m=+0.089433380 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:15Z, vcs-type=git, io.buildah.version=1.41.5, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, io.openshift.expose-services=, version=17.1.13, name=rhosp-rhel9/openstack-collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, architecture=x86_64, com.redhat.component=openstack-collectd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible)
Feb 20 08:54:17 np0005625203.localdomain podman[104300]: 2026-02-20 08:54:17.793053143 +0000 UTC m=+0.098948607 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, url=https://www.redhat.com, com.redhat.component=openstack-collectd-container, name=rhosp-rhel9/openstack-collectd, build-date=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, release=1766032510, vendor=Red Hat, Inc., tcib_managed=true, container_name=collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, batch=17.1_20260112.1, distribution-scope=public, io.buildah.version=1.41.5, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd)
Feb 20 08:54:17 np0005625203.localdomain podman[104301]: 2026-02-20 08:54:17.826244388 +0000 UTC m=+0.129290324 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3, container_name=iscsid, name=rhosp-rhel9/openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, vendor=Red Hat, Inc., build-date=2026-01-12T22:34:43Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, url=https://www.redhat.com, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.buildah.version=1.41.5, release=1766032510, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, distribution-scope=public)
Feb 20 08:54:17 np0005625203.localdomain podman[104301]: 2026-02-20 08:54:17.839181521 +0000 UTC m=+0.142227457 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, release=1766032510, vendor=Red Hat, Inc., vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-iscsid, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, version=17.1.13, container_name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, distribution-scope=public, com.redhat.component=openstack-iscsid-container, build-date=2026-01-12T22:34:43Z, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, vcs-ref=705339545363fec600102567c4e923938e0f43b3, description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, config_id=tripleo_step3)
Feb 20 08:54:17 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:54:17 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:54:24 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:54:24 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:54:24 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:54:24 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:54:24 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:54:24 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:54:24 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:54:24 np0005625203.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Feb 20 08:54:24 np0005625203.localdomain recover_tripleo_nova_virtqemud[104385]: 62505
Feb 20 08:54:24 np0005625203.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Feb 20 08:54:24 np0005625203.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Feb 20 08:54:24 np0005625203.localdomain podman[104341]: 2026-02-20 08:54:24.799808567 +0000 UTC m=+0.112883601 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, container_name=nova_compute, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.5, vendor=Red Hat, Inc., version=17.1.13, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step5, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, release=1766032510, url=https://www.redhat.com, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true)
Feb 20 08:54:24 np0005625203.localdomain podman[104341]: 2026-02-20 08:54:24.824124635 +0000 UTC m=+0.137199689 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, url=https://www.redhat.com, io.buildah.version=1.41.5, release=1766032510, version=17.1.13, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, architecture=x86_64, build-date=2026-01-12T23:32:04Z, tcib_managed=true, container_name=nova_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git)
Feb 20 08:54:24 np0005625203.localdomain podman[104340]: 2026-02-20 08:54:24.781621609 +0000 UTC m=+0.101875837 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.13, com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, build-date=2026-01-12T22:10:15Z, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, architecture=x86_64, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, io.buildah.version=1.41.5, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron)
Feb 20 08:54:24 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:54:24 np0005625203.localdomain podman[104343]: 2026-02-20 08:54:24.899924619 +0000 UTC m=+0.209786133 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, build-date=2026-01-12T23:07:47Z, tcib_managed=true, maintainer=OpenStack TripleO Team, release=1766032510, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:07:47Z, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, io.buildah.version=1.41.5, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., batch=17.1_20260112.1, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, version=17.1.13, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_compute)
Feb 20 08:54:24 np0005625203.localdomain podman[104356]: 2026-02-20 08:54:24.951957351 +0000 UTC m=+0.254064444 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.expose-services=, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:56:19Z, vcs-type=git, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, distribution-scope=public, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, config_id=tripleo_step4, vendor=Red Hat, Inc., release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:54:24 np0005625203.localdomain podman[104343]: 2026-02-20 08:54:24.959401683 +0000 UTC m=+0.269263167 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, vcs-type=git, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, release=1766032510, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, tcib_managed=true, build-date=2026-01-12T23:07:47Z, url=https://www.redhat.com, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ceilometer-compute, 
cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:07:47Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, io.openshift.expose-services=)
Feb 20 08:54:24 np0005625203.localdomain podman[104356]: 2026-02-20 08:54:24.974164103 +0000 UTC m=+0.276271186 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.created=2026-01-12T22:56:19Z, managed_by=tripleo_ansible, io.buildah.version=1.41.5, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, vcs-type=git, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, distribution-scope=public, release=1766032510, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 
'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, build-date=2026-01-12T22:56:19Z)
Feb 20 08:54:24 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:54:24 np0005625203.localdomain podman[104356]: unhealthy
Feb 20 08:54:24 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:54:24 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Failed with result 'exit-code'.
Feb 20 08:54:25 np0005625203.localdomain podman[104349]: 2026-02-20 08:54:24.914804943 +0000 UTC m=+0.212354193 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, container_name=metrics_qdr, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-qdrouterd, architecture=x86_64, config_id=tripleo_step1, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd)
Feb 20 08:54:25 np0005625203.localdomain podman[104340]: 2026-02-20 08:54:25.027233548 +0000 UTC m=+0.347487776 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, container_name=logrotate_crond, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-cron-container, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, url=https://www.redhat.com, name=rhosp-rhel9/openstack-cron, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, release=1766032510, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true)
Feb 20 08:54:25 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:54:25 np0005625203.localdomain podman[104362]: 2026-02-20 08:54:25.118816995 +0000 UTC m=+0.415751236 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, version=17.1.13, tcib_managed=true, container_name=ovn_controller, name=rhosp-rhel9/openstack-ovn-controller, release=1766032510, distribution-scope=public, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, build-date=2026-01-12T22:36:40Z, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, 
konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, managed_by=tripleo_ansible, io.openshift.expose-services=)
Feb 20 08:54:25 np0005625203.localdomain podman[104362]: 2026-02-20 08:54:25.13920619 +0000 UTC m=+0.436140431 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:36:40Z, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, url=https://www.redhat.com, version=17.1.13, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, build-date=2026-01-12T22:36:40Z, name=rhosp-rhel9/openstack-ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-type=git, vendor=Red Hat, Inc., batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=tripleo_ansible)
Feb 20 08:54:25 np0005625203.localdomain podman[104362]: unhealthy
Feb 20 08:54:25 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:54:25 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Failed with result 'exit-code'.
Feb 20 08:54:25 np0005625203.localdomain podman[104348]: 2026-02-20 08:54:25.15202137 +0000 UTC m=+0.453488072 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, architecture=x86_64, release=1766032510, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, tcib_managed=true, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, config_id=tripleo_step4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, managed_by=tripleo_ansible, vcs-type=git, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Feb 20 08:54:25 np0005625203.localdomain podman[104349]: 2026-02-20 08:54:25.173166289 +0000 UTC m=+0.470715519 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, com.redhat.component=openstack-qdrouterd-container, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:10:14Z, container_name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, io.buildah.version=1.41.5, vendor=Red Hat, Inc., architecture=x86_64, config_id=tripleo_step1, vcs-type=git, tcib_managed=true)
Feb 20 08:54:25 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:54:25 np0005625203.localdomain podman[104348]: 2026-02-20 08:54:25.203281108 +0000 UTC m=+0.504747880 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, 
com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:07:30Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, build-date=2026-01-12T23:07:30Z, name=rhosp-rhel9/openstack-ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, vcs-type=git, tcib_managed=true, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, version=17.1.13, architecture=x86_64)
Feb 20 08:54:25 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:54:34 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:54:34 np0005625203.localdomain podman[104510]: 2026-02-20 08:54:34.756099045 +0000 UTC m=+0.076825397 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1766032510, tcib_managed=true, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-nova-compute, distribution-scope=public, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, batch=17.1_20260112.1, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container)
Feb 20 08:54:35 np0005625203.localdomain podman[104510]: 2026-02-20 08:54:35.149232794 +0000 UTC m=+0.469959146 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, architecture=x86_64, io.buildah.version=1.41.5, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, tcib_managed=true, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, 
container_name=nova_migration_target, release=1766032510, com.redhat.component=openstack-nova-compute-container, io.openshift.expose-services=, config_id=tripleo_step4, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Feb 20 08:54:35 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:54:40 np0005625203.localdomain sudo[104531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:54:40 np0005625203.localdomain sudo[104531]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:54:40 np0005625203.localdomain sudo[104531]: pam_unix(sudo:session): session closed for user root
Feb 20 08:54:40 np0005625203.localdomain sudo[104546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 08:54:40 np0005625203.localdomain sudo[104546]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:54:41 np0005625203.localdomain sudo[104546]: pam_unix(sudo:session): session closed for user root
Feb 20 08:54:42 np0005625203.localdomain sudo[104593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 08:54:42 np0005625203.localdomain sudo[104593]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:54:42 np0005625203.localdomain sudo[104593]: pam_unix(sudo:session): session closed for user root
Feb 20 08:54:43 np0005625203.localdomain sshd[104608]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:54:43 np0005625203.localdomain sshd[104608]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:54:48 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:54:48 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:54:48 np0005625203.localdomain podman[104611]: 2026-02-20 08:54:48.773421862 +0000 UTC m=+0.084766025 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, tcib_managed=true, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, release=1766032510, summary=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container, version=17.1.13, 
url=https://www.redhat.com, container_name=iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, architecture=x86_64, build-date=2026-01-12T22:34:43Z)
Feb 20 08:54:48 np0005625203.localdomain podman[104611]: 2026-02-20 08:54:48.786322084 +0000 UTC m=+0.097666227 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.buildah.version=1.41.5, io.openshift.expose-services=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, maintainer=OpenStack TripleO Team, vcs-ref=705339545363fec600102567c4e923938e0f43b3, version=17.1.13, com.redhat.component=openstack-iscsid-container, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:34:43Z, container_name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack 
osp-17.1 openstack-iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, distribution-scope=public, name=rhosp-rhel9/openstack-iscsid, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, vendor=Red Hat, Inc., build-date=2026-01-12T22:34:43Z, architecture=x86_64, url=https://www.redhat.com, release=1766032510, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1)
Feb 20 08:54:48 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:54:48 np0005625203.localdomain podman[104610]: 2026-02-20 08:54:48.880112119 +0000 UTC m=+0.193787905 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, version=17.1.13, distribution-scope=public, tcib_managed=true, url=https://www.redhat.com, io.buildah.version=1.41.5, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., vcs-type=git, name=rhosp-rhel9/openstack-collectd, container_name=collectd, build-date=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', 
'/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd)
Feb 20 08:54:48 np0005625203.localdomain podman[104610]: 2026-02-20 08:54:48.89169344 +0000 UTC m=+0.205369186 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2026-01-12T22:10:15Z, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1766032510, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-collectd, tcib_managed=true, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20260112.1, version=17.1.13, vcs-type=git, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com)
Feb 20 08:54:48 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:54:55 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:54:55 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:54:55 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:54:55 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:54:55 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:54:55 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:54:55 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:54:55 np0005625203.localdomain systemd[1]: tmp-crun.exyFlW.mount: Deactivated successfully.
Feb 20 08:54:55 np0005625203.localdomain podman[104662]: 2026-02-20 08:54:55.820297134 +0000 UTC m=+0.111170948 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, architecture=x86_64, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, vendor=Red Hat, Inc., distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', 
'/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-qdrouterd-container, org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, managed_by=tripleo_ansible, vcs-type=git)
Feb 20 08:54:55 np0005625203.localdomain podman[104650]: 2026-02-20 08:54:55.781293397 +0000 UTC m=+0.090968707 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, vendor=Red Hat, Inc., tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_id=tripleo_step5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.13, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public)
Feb 20 08:54:55 np0005625203.localdomain podman[104651]: 2026-02-20 08:54:55.849113902 +0000 UTC m=+0.156007675 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_id=tripleo_step4, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, io.openshift.expose-services=, tcib_managed=true, batch=17.1_20260112.1, release=1766032510, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:47Z, vcs-type=git, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, distribution-scope=public, vendor=Red Hat, Inc., build-date=2026-01-12T23:07:47Z, name=rhosp-rhel9/openstack-ceilometer-compute, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.13)
Feb 20 08:54:55 np0005625203.localdomain podman[104650]: 2026-02-20 08:54:55.864196203 +0000 UTC m=+0.173871483 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, config_id=tripleo_step5, managed_by=tripleo_ansible, container_name=nova_compute, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, version=17.1.13, distribution-scope=public, url=https://www.redhat.com, vendor=Red Hat, Inc., io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64)
Feb 20 08:54:55 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Deactivated successfully.
Feb 20 08:54:55 np0005625203.localdomain podman[104649]: 2026-02-20 08:54:55.963166569 +0000 UTC m=+0.276751701 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, container_name=logrotate_crond, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, release=1766032510, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 cron, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, version=17.1.13, build-date=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.buildah.version=1.41.5, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:54:56 np0005625203.localdomain podman[104649]: 2026-02-20 08:54:56.001256587 +0000 UTC m=+0.314841719 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:15Z, vcs-type=git, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, container_name=logrotate_crond, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']})
Feb 20 08:54:56 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:54:56 np0005625203.localdomain podman[104669]: 2026-02-20 08:54:56.052366071 +0000 UTC m=+0.345224976 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, org.opencontainers.image.created=2026-01-12T22:56:19Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., batch=17.1_20260112.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, release=1766032510, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.expose-services=, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, architecture=x86_64, build-date=2026-01-12T22:56:19Z, tcib_managed=true, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, container_name=ovn_metadata_agent, vcs-type=git, config_id=tripleo_step4, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 
'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.5, distribution-scope=public)
Feb 20 08:54:56 np0005625203.localdomain podman[104656]: 2026-02-20 08:54:56.004013863 +0000 UTC m=+0.306217440 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, architecture=x86_64, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:07:30Z, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.13, build-date=2026-01-12T23:07:30Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Feb 20 08:54:56 np0005625203.localdomain podman[104669]: 2026-02-20 08:54:56.065241222 +0000 UTC m=+0.358100107 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., build-date=2026-01-12T22:56:19Z, org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, vcs-type=git, config_id=tripleo_step4, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', 
'/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, architecture=x86_64, tcib_managed=true, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, version=17.1.13, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.buildah.version=1.41.5)
Feb 20 08:54:56 np0005625203.localdomain podman[104656]: 2026-02-20 08:54:56.088344383 +0000 UTC m=+0.390548040 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, io.buildah.version=1.41.5, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, build-date=2026-01-12T23:07:30Z, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:07:30Z, architecture=x86_64, config_id=tripleo_step4, vendor=Red Hat, Inc., vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, container_name=ceilometer_agent_ipmi)
Feb 20 08:54:56 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:54:56 np0005625203.localdomain podman[104675]: 2026-02-20 08:54:56.107850361 +0000 UTC m=+0.395321768 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, build-date=2026-01-12T22:36:40Z, org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., url=https://www.redhat.com, name=rhosp-rhel9/openstack-ovn-controller, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, 
io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:54:56 np0005625203.localdomain podman[104675]: 2026-02-20 08:54:56.119760302 +0000 UTC m=+0.407231679 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, vcs-type=git, container_name=ovn_controller, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, config_id=tripleo_step4, vendor=Red Hat, Inc., org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, name=rhosp-rhel9/openstack-ovn-controller, distribution-scope=public, release=1766032510, org.opencontainers.image.created=2026-01-12T22:36:40Z, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', 
'/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, batch=17.1_20260112.1, io.buildah.version=1.41.5, build-date=2026-01-12T22:36:40Z)
Feb 20 08:54:56 np0005625203.localdomain podman[104669]: unhealthy
Feb 20 08:54:56 np0005625203.localdomain podman[104662]: 2026-02-20 08:54:56.125113369 +0000 UTC m=+0.415987183 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, com.redhat.component=openstack-qdrouterd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, build-date=2026-01-12T22:10:14Z, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:14Z, distribution-scope=public, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.expose-services=, io.buildah.version=1.41.5, architecture=x86_64, container_name=metrics_qdr, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, name=rhosp-rhel9/openstack-qdrouterd)
Feb 20 08:54:56 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:54:56 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Failed with result 'exit-code'.
Feb 20 08:54:56 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:54:56 np0005625203.localdomain podman[104675]: unhealthy
Feb 20 08:54:56 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:54:56 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Failed with result 'exit-code'.
Feb 20 08:54:56 np0005625203.localdomain podman[104651]: 2026-02-20 08:54:56.227157371 +0000 UTC m=+0.534051144 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, config_id=tripleo_step4, managed_by=tripleo_ansible, architecture=x86_64, distribution-scope=public, name=rhosp-rhel9/openstack-ceilometer-compute, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:07:47Z, container_name=ceilometer_agent_compute, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, vcs-type=git, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute)
Feb 20 08:54:56 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:55:05 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:55:05 np0005625203.localdomain podman[104814]: 2026-02-20 08:55:05.770856056 +0000 UTC m=+0.084943519 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1766032510, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., vcs-type=git, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, io.openshift.expose-services=, container_name=nova_migration_target, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, distribution-scope=public)
Feb 20 08:55:06 np0005625203.localdomain podman[104814]: 2026-02-20 08:55:06.213340894 +0000 UTC m=+0.527428357 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, tcib_managed=true, name=rhosp-rhel9/openstack-nova-compute, io.buildah.version=1.41.5, config_id=tripleo_step4, architecture=x86_64, vendor=Red Hat, Inc., release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, version=17.1.13, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T23:32:04Z, org.opencontainers.image.created=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe)
Feb 20 08:55:06 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:55:12 np0005625203.localdomain sshd[104839]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:55:13 np0005625203.localdomain sshd[104839]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:55:19 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:55:19 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:55:19 np0005625203.localdomain podman[104842]: 2026-02-20 08:55:19.775804736 +0000 UTC m=+0.084517466 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:34:43Z, com.redhat.component=openstack-iscsid-container, summary=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, architecture=x86_64, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, release=1766032510, container_name=iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:34:43Z, name=rhosp-rhel9/openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, batch=17.1_20260112.1, distribution-scope=public, url=https://www.redhat.com, vcs-ref=705339545363fec600102567c4e923938e0f43b3, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible)
Feb 20 08:55:19 np0005625203.localdomain podman[104842]: 2026-02-20 08:55:19.810516379 +0000 UTC m=+0.119229059 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.openshift.expose-services=, build-date=2026-01-12T22:34:43Z, summary=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, 
cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=iscsid, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:34:43Z, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, config_id=tripleo_step3, vendor=Red Hat, Inc., release=1766032510)
Feb 20 08:55:19 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:55:19 np0005625203.localdomain podman[104841]: 2026-02-20 08:55:19.814882665 +0000 UTC m=+0.126151115 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, build-date=2026-01-12T22:10:15Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., container_name=collectd, 
name=rhosp-rhel9/openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step3, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, tcib_managed=true, io.buildah.version=1.41.5, version=17.1.13, architecture=x86_64, distribution-scope=public, release=1766032510, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git)
Feb 20 08:55:19 np0005625203.localdomain podman[104841]: 2026-02-20 08:55:19.897597254 +0000 UTC m=+0.208865644 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, batch=17.1_20260112.1, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 collectd, vcs-type=git, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', 
'/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, container_name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:15Z, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, com.redhat.component=openstack-collectd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, release=1766032510, io.buildah.version=1.41.5, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:55:19 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:55:25 np0005625203.localdomain sshd[104879]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:55:25 np0005625203.localdomain sshd[104879]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:55:26 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:55:26 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:55:26 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:55:26 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:55:26 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:55:26 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:55:26 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:55:26 np0005625203.localdomain podman[104883]: 2026-02-20 08:55:26.804560356 +0000 UTC m=+0.108718251 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, container_name=ceilometer_agent_compute, vendor=Red Hat, Inc., config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, architecture=x86_64, batch=17.1_20260112.1, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:07:47Z, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, 
cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-ceilometer-compute, io.buildah.version=1.41.5, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vcs-type=git, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.component=openstack-ceilometer-compute-container, release=1766032510, org.opencontainers.image.created=2026-01-12T23:07:47Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute)
Feb 20 08:55:26 np0005625203.localdomain systemd[1]: tmp-crun.UJamDk.mount: Deactivated successfully.
Feb 20 08:55:26 np0005625203.localdomain podman[104882]: 2026-02-20 08:55:26.831419053 +0000 UTC m=+0.141566755 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, vcs-type=git, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, container_name=nova_compute, release=1766032510, name=rhosp-rhel9/openstack-nova-compute, maintainer=OpenStack TripleO Team, architecture=x86_64, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., config_id=tripleo_step5, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:55:26 np0005625203.localdomain podman[104883]: 2026-02-20 08:55:26.837678118 +0000 UTC m=+0.141836043 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.component=openstack-ceilometer-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, vendor=Red Hat, Inc., io.buildah.version=1.41.5, io.openshift.expose-services=, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, tcib_managed=true, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-type=git, architecture=x86_64, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack 
Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:07:47Z, release=1766032510, org.opencontainers.image.created=2026-01-12T23:07:47Z, version=17.1.13, config_id=tripleo_step4, container_name=ceilometer_agent_compute, distribution-scope=public, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:55:26 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:55:26 np0005625203.localdomain podman[104890]: 2026-02-20 08:55:26.852295764 +0000 UTC m=+0.150315718 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, name=rhosp-rhel9/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, architecture=x86_64, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_id=tripleo_step1, version=17.1.13, vcs-type=git, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=metrics_qdr, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.created=2026-01-12T22:10:14Z, managed_by=tripleo_ansible)
Feb 20 08:55:26 np0005625203.localdomain podman[104882]: 2026-02-20 08:55:26.852260673 +0000 UTC m=+0.162408435 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step5, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, io.buildah.version=1.41.5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., batch=17.1_20260112.1, vcs-type=git, version=17.1.13, name=rhosp-rhel9/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_compute)
Feb 20 08:55:26 np0005625203.localdomain podman[104882]: unhealthy
Feb 20 08:55:26 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:55:26 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Failed with result 'exit-code'.
Feb 20 08:55:26 np0005625203.localdomain podman[104881]: 2026-02-20 08:55:26.923255187 +0000 UTC m=+0.236847307 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.component=openstack-cron-container, build-date=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., container_name=logrotate_crond, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step4, batch=17.1_20260112.1, vcs-type=git, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=OpenStack TripleO Team, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, tcib_managed=true, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron)
Feb 20 08:55:26 np0005625203.localdomain podman[104884]: 2026-02-20 08:55:26.95895411 +0000 UTC m=+0.256891371 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, batch=17.1_20260112.1, vcs-type=git, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:07:30Z, config_id=tripleo_step4, distribution-scope=public, vendor=Red Hat, Inc., url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, managed_by=tripleo_ansible, container_name=ceilometer_agent_ipmi, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, 
org.opencontainers.image.created=2026-01-12T23:07:30Z, architecture=x86_64, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, com.redhat.component=openstack-ceilometer-ipmi-container, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06)
Feb 20 08:55:26 np0005625203.localdomain podman[104881]: 2026-02-20 08:55:26.959923241 +0000 UTC m=+0.273515341 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 cron, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, container_name=logrotate_crond, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-cron, com.redhat.component=openstack-cron-container, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, url=https://www.redhat.com, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, description=Red Hat OpenStack Platform 17.1 cron, batch=17.1_20260112.1, architecture=x86_64, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:55:26 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:55:27 np0005625203.localdomain podman[104896]: 2026-02-20 08:55:27.003188689 +0000 UTC m=+0.298905461 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
com.redhat.component=openstack-neutron-metadata-agent-ovn-container, version=17.1.13, distribution-scope=public, io.buildah.version=1.41.5, release=1766032510, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step4, managed_by=tripleo_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.openshift.expose-services=, vcs-type=git, container_name=ovn_metadata_agent, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, maintainer=OpenStack TripleO Team, tcib_managed=true, build-date=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Feb 20 08:55:27 np0005625203.localdomain podman[104896]: 2026-02-20 08:55:27.017219827 +0000 UTC m=+0.312936609 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, release=1766032510, distribution-scope=public, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:56:19Z, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, io.buildah.version=1.41.5, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, version=17.1.13, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, build-date=2026-01-12T22:56:19Z, tcib_managed=true, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f)
Feb 20 08:55:27 np0005625203.localdomain podman[104896]: unhealthy
Feb 20 08:55:27 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:55:27 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Failed with result 'exit-code'.
Feb 20 08:55:27 np0005625203.localdomain podman[104890]: 2026-02-20 08:55:27.05068272 +0000 UTC m=+0.348702814 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, batch=17.1_20260112.1, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, container_name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', 
'/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, release=1766032510, io.openshift.expose-services=, distribution-scope=public, build-date=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd)
Feb 20 08:55:27 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:55:27 np0005625203.localdomain podman[104902]: 2026-02-20 08:55:27.06702936 +0000 UTC m=+0.353613467 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_id=tripleo_step4, tcib_managed=true, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, version=17.1.13, distribution-scope=public, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.openshift.expose-services=, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, io.openshift.tags=rhosp osp 
openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, build-date=2026-01-12T22:36:40Z, com.redhat.component=openstack-ovn-controller-container, maintainer=OpenStack TripleO Team, container_name=ovn_controller, name=rhosp-rhel9/openstack-ovn-controller)
Feb 20 08:55:27 np0005625203.localdomain podman[104902]: 2026-02-20 08:55:27.082136281 +0000 UTC m=+0.368720438 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, batch=17.1_20260112.1, io.buildah.version=1.41.5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, managed_by=tripleo_ansible, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, architecture=x86_64, maintainer=OpenStack TripleO 
Team, build-date=2026-01-12T22:36:40Z, vcs-type=git, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:36:40Z, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ovn_controller, distribution-scope=public)
Feb 20 08:55:27 np0005625203.localdomain podman[104902]: unhealthy
Feb 20 08:55:27 np0005625203.localdomain podman[104884]: 2026-02-20 08:55:27.08880409 +0000 UTC m=+0.386741381 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:07:30Z, architecture=x86_64, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, tcib_managed=true, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, version=17.1.13, config_id=tripleo_step4, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, release=1766032510, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi)
Feb 20 08:55:27 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:55:27 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Failed with result 'exit-code'.
Feb 20 08:55:27 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:55:36 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:55:36 np0005625203.localdomain systemd[1]: tmp-crun.DrHHJM.mount: Deactivated successfully.
Feb 20 08:55:36 np0005625203.localdomain podman[105042]: 2026-02-20 08:55:36.759342256 +0000 UTC m=+0.080970265 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.expose-services=, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.5, url=https://www.redhat.com, release=1766032510, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, config_id=tripleo_step4, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, 
com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, managed_by=tripleo_ansible, build-date=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, version=17.1.13)
Feb 20 08:55:37 np0005625203.localdomain podman[105042]: 2026-02-20 08:55:37.142283438 +0000 UTC m=+0.463911427 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, io.openshift.expose-services=, container_name=nova_migration_target, maintainer=OpenStack TripleO Team, build-date=2026-01-12T23:32:04Z, vcs-type=git, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-nova-compute, tcib_managed=true, distribution-scope=public, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, batch=17.1_20260112.1, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, managed_by=tripleo_ansible, version=17.1.13, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:55:37 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:55:42 np0005625203.localdomain sudo[105066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:55:42 np0005625203.localdomain sudo[105066]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:55:42 np0005625203.localdomain sudo[105066]: pam_unix(sudo:session): session closed for user root
Feb 20 08:55:42 np0005625203.localdomain sudo[105081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 08:55:42 np0005625203.localdomain sudo[105081]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:55:43 np0005625203.localdomain sudo[105081]: pam_unix(sudo:session): session closed for user root
Feb 20 08:55:43 np0005625203.localdomain sudo[105128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 08:55:43 np0005625203.localdomain sudo[105128]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:55:43 np0005625203.localdomain sudo[105128]: pam_unix(sudo:session): session closed for user root
Feb 20 08:55:50 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:55:50 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:55:50 np0005625203.localdomain podman[105143]: 2026-02-20 08:55:50.780384109 +0000 UTC m=+0.093416824 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, tcib_managed=true, distribution-scope=public, com.redhat.component=openstack-collectd-container, container_name=collectd, vendor=Red Hat, Inc., io.buildah.version=1.41.5, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, url=https://www.redhat.com, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-collectd, build-date=2026-01-12T22:10:15Z, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510)
Feb 20 08:55:50 np0005625203.localdomain systemd[1]: tmp-crun.WRFFEU.mount: Deactivated successfully.
Feb 20 08:55:50 np0005625203.localdomain podman[105144]: 2026-02-20 08:55:50.825720292 +0000 UTC m=+0.138198130 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:34:43Z, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, 
io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, io.openshift.expose-services=, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, distribution-scope=public, io.buildah.version=1.41.5, container_name=iscsid, tcib_managed=true, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-type=git, com.redhat.component=openstack-iscsid-container, architecture=x86_64)
Feb 20 08:55:50 np0005625203.localdomain podman[105144]: 2026-02-20 08:55:50.839287625 +0000 UTC m=+0.151765483 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, batch=17.1_20260112.1, url=https://www.redhat.com, build-date=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, vcs-type=git, tcib_managed=true, managed_by=tripleo_ansible, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, release=1766032510, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, name=rhosp-rhel9/openstack-iscsid)
Feb 20 08:55:50 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:55:50 np0005625203.localdomain podman[105143]: 2026-02-20 08:55:50.894713044 +0000 UTC m=+0.207745759 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp-rhel9/openstack-collectd, managed_by=tripleo_ansible, config_id=tripleo_step3, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, 
url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, architecture=x86_64, vendor=Red Hat, Inc., build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, description=Red Hat OpenStack Platform 17.1 collectd, io.buildah.version=1.41.5, com.redhat.component=openstack-collectd-container, container_name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team)
Feb 20 08:55:50 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:55:57 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:55:57 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:55:57 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:55:57 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:55:57 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:55:57 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:55:57 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:55:57 np0005625203.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Feb 20 08:55:57 np0005625203.localdomain recover_tripleo_nova_virtqemud[105224]: 62505
Feb 20 08:55:57 np0005625203.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Feb 20 08:55:57 np0005625203.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Feb 20 08:55:57 np0005625203.localdomain podman[105183]: 2026-02-20 08:55:57.795818803 +0000 UTC m=+0.107033089 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, architecture=x86_64, maintainer=OpenStack TripleO Team, container_name=nova_compute, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, vendor=Red Hat, Inc., config_id=tripleo_step5, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, release=1766032510, io.buildah.version=1.41.5, io.openshift.expose-services=, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp-rhel9/openstack-nova-compute)
Feb 20 08:55:57 np0005625203.localdomain podman[105183]: 2026-02-20 08:55:57.846317947 +0000 UTC m=+0.157532223 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, version=17.1.13, io.buildah.version=1.41.5, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', 
'/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_compute, url=https://www.redhat.com, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, managed_by=tripleo_ansible, build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc.)
Feb 20 08:55:57 np0005625203.localdomain podman[105183]: unhealthy
Feb 20 08:55:57 np0005625203.localdomain podman[105190]: 2026-02-20 08:55:57.861424659 +0000 UTC m=+0.154233310 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, config_id=tripleo_step4, managed_by=tripleo_ansible, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, tcib_managed=true, vcs-type=git, release=1766032510, build-date=2026-01-12T23:07:30Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ceilometer-ipmi)
Feb 20 08:55:57 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:55:57 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Failed with result 'exit-code'.
Feb 20 08:55:57 np0005625203.localdomain podman[105182]: 2026-02-20 08:55:57.846179973 +0000 UTC m=+0.159028630 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, com.redhat.component=openstack-cron-container, maintainer=OpenStack TripleO Team, container_name=logrotate_crond, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, build-date=2026-01-12T22:10:15Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-type=git, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, name=rhosp-rhel9/openstack-cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, release=1766032510, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:15Z)
Feb 20 08:55:57 np0005625203.localdomain podman[105197]: 2026-02-20 08:55:57.910083516 +0000 UTC m=+0.206771289 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, build-date=2026-01-12T22:10:14Z, com.redhat.component=openstack-qdrouterd-container, version=17.1.13, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, name=rhosp-rhel9/openstack-qdrouterd, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, managed_by=tripleo_ansible, config_id=tripleo_step1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.created=2026-01-12T22:10:14Z, container_name=metrics_qdr, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5)
Feb 20 08:55:57 np0005625203.localdomain podman[105190]: 2026-02-20 08:55:57.922381839 +0000 UTC m=+0.215190560 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, version=17.1.13, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, konflux.additional-tags=17.1.13 
17.1_20260112.1, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.buildah.version=1.41.5, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:07:30Z, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, tcib_managed=true, config_id=tripleo_step4, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:07:30Z, com.redhat.component=openstack-ceilometer-ipmi-container, container_name=ceilometer_agent_ipmi, batch=17.1_20260112.1, url=https://www.redhat.com)
Feb 20 08:55:57 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:55:58 np0005625203.localdomain podman[105184]: 2026-02-20 08:55:58.01190148 +0000 UTC m=+0.315531729 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-ceilometer-compute-container, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:47Z, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 
openstack-ceilometer-compute, container_name=ceilometer_agent_compute, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., io.buildah.version=1.41.5, config_id=tripleo_step4, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, name=rhosp-rhel9/openstack-ceilometer-compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:47Z)
Feb 20 08:55:58 np0005625203.localdomain podman[105182]: 2026-02-20 08:55:58.030758389 +0000 UTC m=+0.343607106 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, tcib_managed=true, com.redhat.component=openstack-cron-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_id=tripleo_step4, vcs-type=git, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, url=https://www.redhat.com, batch=17.1_20260112.1, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=logrotate_crond, name=rhosp-rhel9/openstack-cron, summary=Red Hat 
OpenStack Platform 17.1 cron, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:10:15Z, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public)
Feb 20 08:55:58 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:55:58 np0005625203.localdomain podman[105184]: 2026-02-20 08:55:58.049354829 +0000 UTC m=+0.352985068 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:47Z, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.buildah.version=1.41.5, url=https://www.redhat.com, com.redhat.component=openstack-ceilometer-compute-container, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.expose-services=, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, version=17.1.13, distribution-scope=public, vendor=Red Hat, Inc., container_name=ceilometer_agent_compute)
Feb 20 08:55:58 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:55:58 np0005625203.localdomain podman[105203]: 2026-02-20 08:55:57.976815177 +0000 UTC m=+0.259849894 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, distribution-scope=public, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, release=1766032510, container_name=ovn_controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, managed_by=tripleo_ansible, io.openshift.expose-services=, build-date=2026-01-12T22:36:40Z, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, architecture=x86_64, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc.)
Feb 20 08:55:58 np0005625203.localdomain podman[105198]: 2026-02-20 08:55:58.107804202 +0000 UTC m=+0.402857864 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, managed_by=tripleo_ansible, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:56:19Z, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.openshift.expose-services=, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, vcs-type=git, architecture=x86_64, batch=17.1_20260112.1, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, config_id=tripleo_step4)
Feb 20 08:55:58 np0005625203.localdomain podman[105203]: 2026-02-20 08:55:58.110334 +0000 UTC m=+0.393368697 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ovn-controller, tcib_managed=true, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.5, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, io.openshift.expose-services=, version=17.1.13, batch=17.1_20260112.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, release=1766032510, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:36:40Z, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, 
cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.created=2026-01-12T22:36:40Z)
Feb 20 08:55:58 np0005625203.localdomain podman[105203]: unhealthy
Feb 20 08:55:58 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:55:58 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Failed with result 'exit-code'.
Feb 20 08:55:58 np0005625203.localdomain podman[105198]: 2026-02-20 08:55:58.12829091 +0000 UTC m=+0.423344592 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, version=17.1.13, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 
neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, batch=17.1_20260112.1, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.buildah.version=1.41.5, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-type=git, io.openshift.expose-services=, release=1766032510, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent)
Feb 20 08:55:58 np0005625203.localdomain podman[105198]: unhealthy
Feb 20 08:55:58 np0005625203.localdomain podman[105197]: 2026-02-20 08:55:58.153528327 +0000 UTC m=+0.450216090 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:14Z, io.openshift.expose-services=, distribution-scope=public, config_id=tripleo_step1, name=rhosp-rhel9/openstack-qdrouterd, version=17.1.13, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=metrics_qdr, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:55:58 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:55:58 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Failed with result 'exit-code'.
Feb 20 08:55:58 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:55:58 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58722 DF PROTO=TCP SPT=47744 DPT=9882 SEQ=4195159286 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E6502C0000000001030307) 
Feb 20 08:55:59 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48542 DF PROTO=TCP SPT=57236 DPT=9102 SEQ=1568550058 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E651640000000001030307) 
Feb 20 08:56:00 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58723 DF PROTO=TCP SPT=47744 DPT=9882 SEQ=4195159286 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E654400000000001030307) 
Feb 20 08:56:00 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48543 DF PROTO=TCP SPT=57236 DPT=9102 SEQ=1568550058 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E655800000000001030307) 
Feb 20 08:56:02 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58724 DF PROTO=TCP SPT=47744 DPT=9882 SEQ=4195159286 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E65C400000000001030307) 
Feb 20 08:56:02 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48544 DF PROTO=TCP SPT=57236 DPT=9102 SEQ=1568550058 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E65D810000000001030307) 
Feb 20 08:56:06 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58725 DF PROTO=TCP SPT=47744 DPT=9882 SEQ=4195159286 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E66C000000000001030307) 
Feb 20 08:56:06 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48545 DF PROTO=TCP SPT=57236 DPT=9102 SEQ=1568550058 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E66D410000000001030307) 
Feb 20 08:56:07 np0005625203.localdomain sshd[105344]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:56:07 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14490 DF PROTO=TCP SPT=41500 DPT=9105 SEQ=2683727196 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E66FA90000000001030307) 
Feb 20 08:56:07 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:56:07 np0005625203.localdomain podman[105346]: 2026-02-20 08:56:07.775244613 +0000 UTC m=+0.091059420 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, url=https://www.redhat.com, io.buildah.version=1.41.5, version=17.1.13, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, architecture=x86_64, distribution-scope=public, build-date=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_migration_target)
Feb 20 08:56:08 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14491 DF PROTO=TCP SPT=41500 DPT=9105 SEQ=2683727196 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E673C10000000001030307) 
Feb 20 08:56:08 np0005625203.localdomain podman[105346]: 2026-02-20 08:56:08.18144302 +0000 UTC m=+0.497257777 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, maintainer=OpenStack TripleO Team, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, io.openshift.expose-services=, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 
nova-compute, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, batch=17.1_20260112.1, vendor=Red Hat, Inc., version=17.1.13, architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, build-date=2026-01-12T23:32:04Z, vcs-type=git, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container)
Feb 20 08:56:08 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:56:08 np0005625203.localdomain sshd[105367]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:56:08 np0005625203.localdomain sshd[105344]: Invalid user claude from 102.210.148.92 port 43786
Feb 20 08:56:08 np0005625203.localdomain sshd[105367]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:56:08 np0005625203.localdomain sshd[105344]: Received disconnect from 102.210.148.92 port 43786:11: Bye Bye [preauth]
Feb 20 08:56:08 np0005625203.localdomain sshd[105344]: Disconnected from invalid user claude 102.210.148.92 port 43786 [preauth]
Feb 20 08:56:10 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14492 DF PROTO=TCP SPT=41500 DPT=9105 SEQ=2683727196 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E67BC00000000001030307) 
Feb 20 08:56:13 np0005625203.localdomain sshd[105369]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:56:13 np0005625203.localdomain sshd[105369]: Accepted publickey for zuul from 192.168.122.31 port 55802 ssh2: RSA SHA256:uaxCfdsB9dLyL4znjsLCPDTT6O+6xZzjHRjczJFOA6k
Feb 20 08:56:13 np0005625203.localdomain systemd-logind[759]: New session 35 of user zuul.
Feb 20 08:56:13 np0005625203.localdomain systemd[1]: Started Session 35 of User zuul.
Feb 20 08:56:13 np0005625203.localdomain sshd[105369]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Feb 20 08:56:13 np0005625203.localdomain sudo[105462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qrupiadtjqvkfomcowqltggqxrlnddiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771577773.3399374-22-253361248062183/AnsiballZ_stat.py
Feb 20 08:56:13 np0005625203.localdomain sudo[105462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 08:56:13 np0005625203.localdomain python3.9[105464]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 08:56:14 np0005625203.localdomain sudo[105462]: pam_unix(sudo:session): session closed for user root
Feb 20 08:56:14 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14493 DF PROTO=TCP SPT=41500 DPT=9105 SEQ=2683727196 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E68B810000000001030307) 
Feb 20 08:56:14 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58726 DF PROTO=TCP SPT=47744 DPT=9882 SEQ=4195159286 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E68C800000000001030307) 
Feb 20 08:56:14 np0005625203.localdomain sudo[105556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xqmizejviwhiujxsgejkyqjzxnqzibgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771577774.199854-58-49480063880511/AnsiballZ_command.py
Feb 20 08:56:14 np0005625203.localdomain sudo[105556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 08:56:14 np0005625203.localdomain python3.9[105558]: ansible-ansible.legacy.command Invoked with cmd=python3 -c "import configparser as c; p = c.ConfigParser(strict=False); p.read('/var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf'); print(p['DEFAULT']['host'])"
                                                             _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 08:56:14 np0005625203.localdomain sudo[105556]: pam_unix(sudo:session): session closed for user root
Feb 20 08:56:14 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48546 DF PROTO=TCP SPT=57236 DPT=9102 SEQ=1568550058 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E68E800000000001030307) 
Feb 20 08:56:15 np0005625203.localdomain sudo[105649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mpmibihbeyxcpuhzcsaxmcwhhqbvannd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771577775.0459952-82-137236527474527/AnsiballZ_stat.py
Feb 20 08:56:15 np0005625203.localdomain sudo[105649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 08:56:15 np0005625203.localdomain python3.9[105651]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 08:56:15 np0005625203.localdomain sudo[105649]: pam_unix(sudo:session): session closed for user root
Feb 20 08:56:15 np0005625203.localdomain sudo[105743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tvqlztocgpkzdxpzuudgcyecmnjprtsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771577775.672397-106-113376467545050/AnsiballZ_command.py
Feb 20 08:56:15 np0005625203.localdomain sudo[105743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 08:56:15 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48216 DF PROTO=TCP SPT=37244 DPT=9101 SEQ=939518735 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E6928D0000000001030307) 
Feb 20 08:56:16 np0005625203.localdomain python3.9[105745]: ansible-ansible.legacy.command Invoked with cmd=python3 -c "import configparser as c; p = c.ConfigParser(strict=False); p.read('/var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf'); print(p['DEFAULT']['host'])" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 08:56:16 np0005625203.localdomain sudo[105743]: pam_unix(sudo:session): session closed for user root
Feb 20 08:56:16 np0005625203.localdomain sudo[105836]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gsmgpragyzcjnklklxvpeuvhdnehthjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771577776.4027886-133-263446187539578/AnsiballZ_command.py
Feb 20 08:56:16 np0005625203.localdomain sudo[105836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 08:56:16 np0005625203.localdomain python3.9[105838]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 08:56:16 np0005625203.localdomain sudo[105836]: pam_unix(sudo:session): session closed for user root
Feb 20 08:56:16 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48217 DF PROTO=TCP SPT=37244 DPT=9101 SEQ=939518735 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E696800000000001030307) 
Feb 20 08:56:17 np0005625203.localdomain python3.9[105929]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Feb 20 08:56:19 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48218 DF PROTO=TCP SPT=37244 DPT=9101 SEQ=939518735 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E69E810000000001030307) 
Feb 20 08:56:19 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10934 DF PROTO=TCP SPT=53434 DPT=9100 SEQ=3267851932 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E69EBE0000000001030307) 
Feb 20 08:56:19 np0005625203.localdomain python3.9[106019]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 08:56:19 np0005625203.localdomain python3.9[106111]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Feb 20 08:56:20 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10935 DF PROTO=TCP SPT=53434 DPT=9100 SEQ=3267851932 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E6A2C10000000001030307) 
Feb 20 08:56:20 np0005625203.localdomain python3.9[106201]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 20 08:56:21 np0005625203.localdomain python3.9[106249]: ansible-ansible.legacy.dnf Invoked with name=['systemd-container'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 20 08:56:21 np0005625203.localdomain sshd[106251]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:56:21 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:56:21 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:56:21 np0005625203.localdomain podman[106253]: 2026-02-20 08:56:21.767970744 +0000 UTC m=+0.085235448 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, distribution-scope=public, build-date=2026-01-12T22:34:43Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, version=17.1.13, vcs-ref=705339545363fec600102567c4e923938e0f43b3, summary=Red Hat OpenStack Platform 17.1 iscsid, com.redhat.component=openstack-iscsid-container, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, container_name=iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:34:43Z, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, vendor=Red Hat, Inc., vcs-type=git, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-iscsid)
Feb 20 08:56:21 np0005625203.localdomain podman[106252]: 2026-02-20 08:56:21.782607121 +0000 UTC m=+0.099062720 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:10:15Z, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, build-date=2026-01-12T22:10:15Z, release=1766032510, url=https://www.redhat.com, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, architecture=x86_64, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-collectd, version=17.1.13, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, tcib_managed=true, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-collectd-container)
Feb 20 08:56:21 np0005625203.localdomain podman[106253]: 2026-02-20 08:56:21.781656111 +0000 UTC m=+0.098920845 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=iscsid, config_id=tripleo_step3, io.openshift.expose-services=, vcs-type=git, name=rhosp-rhel9/openstack-iscsid, managed_by=tripleo_ansible, architecture=x86_64, vendor=Red Hat, Inc., org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, batch=17.1_20260112.1, io.buildah.version=1.41.5, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, 
com.redhat.component=openstack-iscsid-container, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=705339545363fec600102567c4e923938e0f43b3, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:34:43Z, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, release=1766032510, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:56:21 np0005625203.localdomain podman[106252]: 2026-02-20 08:56:21.798262229 +0000 UTC m=+0.114717848 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, url=https://www.redhat.com, tcib_managed=true, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, 
name=rhosp-rhel9/openstack-collectd, vendor=Red Hat, Inc., vcs-type=git, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, com.redhat.component=openstack-collectd-container)
Feb 20 08:56:21 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:56:21 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:56:21 np0005625203.localdomain sshd[106251]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:56:22 np0005625203.localdomain sshd[105369]: pam_unix(sshd:session): session closed for user zuul
Feb 20 08:56:22 np0005625203.localdomain systemd[1]: session-35.scope: Deactivated successfully.
Feb 20 08:56:22 np0005625203.localdomain systemd[1]: session-35.scope: Consumed 4.707s CPU time.
Feb 20 08:56:22 np0005625203.localdomain systemd-logind[759]: Session 35 logged out. Waiting for processes to exit.
Feb 20 08:56:22 np0005625203.localdomain systemd-logind[759]: Removed session 35.
Feb 20 08:56:22 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10936 DF PROTO=TCP SPT=53434 DPT=9100 SEQ=3267851932 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E6AAC00000000001030307) 
Feb 20 08:56:22 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14494 DF PROTO=TCP SPT=41500 DPT=9105 SEQ=2683727196 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E6AC810000000001030307) 
Feb 20 08:56:23 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48219 DF PROTO=TCP SPT=37244 DPT=9101 SEQ=939518735 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E6AE400000000001030307) 
Feb 20 08:56:26 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10937 DF PROTO=TCP SPT=53434 DPT=9100 SEQ=3267851932 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E6BA810000000001030307) 
Feb 20 08:56:28 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:56:28 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:56:28 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:56:28 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:56:28 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:56:28 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:56:28 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:56:28 np0005625203.localdomain podman[106316]: 2026-02-20 08:56:28.802745383 +0000 UTC m=+0.101909979 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, architecture=x86_64, version=17.1.13, name=rhosp-rhel9/openstack-qdrouterd, container_name=metrics_qdr, config_id=tripleo_step1, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, build-date=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-qdrouterd-container, description=Red Hat OpenStack Platform 17.1 qdrouterd)
Feb 20 08:56:28 np0005625203.localdomain podman[106307]: 2026-02-20 08:56:28.782273214 +0000 UTC m=+0.092228417 container health_status 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, health_status=healthy, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ceilometer-compute-container, org.opencontainers.image.created=2026-01-12T23:07:47Z, build-date=2026-01-12T23:07:47Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, architecture=x86_64, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, container_name=ceilometer_agent_compute, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, managed_by=tripleo_ansible, tcib_managed=true, batch=17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-compute)
Feb 20 08:56:28 np0005625203.localdomain podman[106307]: 2026-02-20 08:56:28.869285337 +0000 UTC m=+0.179240560 container exec_died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ceilometer_agent_compute, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:07:47Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, name=rhosp-rhel9/openstack-ceilometer-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, version=17.1.13, build-date=2026-01-12T23:07:47Z, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, com.redhat.component=openstack-ceilometer-compute-container, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1)
Feb 20 08:56:28 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Deactivated successfully.
Feb 20 08:56:28 np0005625203.localdomain podman[106306]: 2026-02-20 08:56:28.860822743 +0000 UTC m=+0.170777106 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=healthy, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, batch=17.1_20260112.1, container_name=nova_compute, name=rhosp-rhel9/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, version=17.1.13, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, architecture=x86_64, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public)
Feb 20 08:56:28 np0005625203.localdomain podman[106306]: 2026-02-20 08:56:28.944514003 +0000 UTC m=+0.254468376 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step5, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, vendor=Red Hat, Inc., io.buildah.version=1.41.5, build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-compute-container, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, container_name=nova_compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute)
Feb 20 08:56:28 np0005625203.localdomain podman[106306]: unhealthy
Feb 20 08:56:28 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:56:28 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Failed with result 'exit-code'.
Feb 20 08:56:28 np0005625203.localdomain podman[106330]: 2026-02-20 08:56:28.958303253 +0000 UTC m=+0.250984478 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, com.redhat.component=openstack-ovn-controller-container, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, release=1766032510, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ovn_controller, architecture=x86_64, build-date=2026-01-12T22:36:40Z, distribution-scope=public, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, version=17.1.13, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:36:40Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ovn-controller, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller)
Feb 20 08:56:28 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5048 DF PROTO=TCP SPT=52224 DPT=9882 SEQ=915752414 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E6C55D0000000001030307) 
Feb 20 08:56:29 np0005625203.localdomain podman[106305]: 2026-02-20 08:56:29.060060006 +0000 UTC m=+0.376629695 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, name=rhosp-rhel9/openstack-cron, container_name=logrotate_crond, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, batch=17.1_20260112.1, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, org.opencontainers.image.created=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:56:29 np0005625203.localdomain podman[106316]: 2026-02-20 08:56:29.07877433 +0000 UTC m=+0.377938896 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, architecture=x86_64, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, container_name=metrics_qdr, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, 
io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, release=1766032510, vendor=Red Hat, Inc., version=17.1.13, name=rhosp-rhel9/openstack-qdrouterd, vcs-type=git, distribution-scope=public, com.redhat.component=openstack-qdrouterd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:14Z, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z)
Feb 20 08:56:29 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:56:29 np0005625203.localdomain podman[106324]: 2026-02-20 08:56:29.039763993 +0000 UTC m=+0.336521974 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, vcs-type=git, architecture=x86_64, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., io.buildah.version=1.41.5, container_name=ovn_metadata_agent, io.openshift.expose-services=, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, version=17.1.13, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, tcib_managed=true, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 
'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.created=2026-01-12T22:56:19Z)
Feb 20 08:56:29 np0005625203.localdomain podman[106305]: 2026-02-20 08:56:29.096247465 +0000 UTC m=+0.412817164 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, com.redhat.component=openstack-cron-container, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., version=17.1.13, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:10:15Z, tcib_managed=true, url=https://www.redhat.com, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, container_name=logrotate_crond, summary=Red Hat OpenStack Platform 17.1 cron, build-date=2026-01-12T22:10:15Z, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public)
Feb 20 08:56:29 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:56:29 np0005625203.localdomain podman[106313]: 2026-02-20 08:56:29.110087296 +0000 UTC m=+0.416004013 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, batch=17.1_20260112.1, build-date=2026-01-12T23:07:30Z, org.opencontainers.image.created=2026-01-12T23:07:30Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, tcib_managed=true, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, container_name=ceilometer_agent_ipmi, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, 
maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, release=1766032510, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git)
Feb 20 08:56:29 np0005625203.localdomain podman[106324]: 2026-02-20 08:56:29.12335301 +0000 UTC m=+0.420111021 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, tcib_managed=true, maintainer=OpenStack TripleO Team, version=17.1.13, batch=17.1_20260112.1, release=1766032510, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, url=https://www.redhat.com, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git)
Feb 20 08:56:29 np0005625203.localdomain podman[106324]: unhealthy
Feb 20 08:56:29 np0005625203.localdomain podman[106330]: 2026-02-20 08:56:29.130965398 +0000 UTC m=+0.423646623 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, version=17.1.13, build-date=2026-01-12T22:36:40Z, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.buildah.version=1.41.5, container_name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, managed_by=tripleo_ansible, url=https://www.redhat.com, vendor=Red Hat, 
Inc., name=rhosp-rhel9/openstack-ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, architecture=x86_64)
Feb 20 08:56:29 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:56:29 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Failed with result 'exit-code'.
Feb 20 08:56:29 np0005625203.localdomain podman[106330]: unhealthy
Feb 20 08:56:29 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:56:29 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Failed with result 'exit-code'.
Feb 20 08:56:29 np0005625203.localdomain podman[106313]: 2026-02-20 08:56:29.160208389 +0000 UTC m=+0.466125106 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:07:30Z, build-date=2026-01-12T23:07:30Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, url=https://www.redhat.com, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, version=17.1.13, 
konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., tcib_managed=true, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, batch=17.1_20260112.1, architecture=x86_64)
Feb 20 08:56:29 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:56:29 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37550 DF PROTO=TCP SPT=57778 DPT=9102 SEQ=1966446827 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E6C6950000000001030307) 
Feb 20 08:56:29 np0005625203.localdomain systemd[1]: tmp-crun.lwOpds.mount: Deactivated successfully.
Feb 20 08:56:32 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5050 DF PROTO=TCP SPT=52224 DPT=9882 SEQ=915752414 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E6D1800000000001030307) 
Feb 20 08:56:36 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5051 DF PROTO=TCP SPT=52224 DPT=9882 SEQ=915752414 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E6E1400000000001030307) 
Feb 20 08:56:38 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=192 DF PROTO=TCP SPT=48652 DPT=9105 SEQ=4030787224 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E6E9010000000001030307) 
Feb 20 08:56:38 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:56:38 np0005625203.localdomain podman[106466]: 2026-02-20 08:56:38.769320283 +0000 UTC m=+0.084852107 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, io.buildah.version=1.41.5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_migration_target, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, batch=17.1_20260112.1, version=17.1.13, description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, name=rhosp-rhel9/openstack-nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, tcib_managed=true, io.openshift.expose-services=)
Feb 20 08:56:39 np0005625203.localdomain podman[106466]: 2026-02-20 08:56:39.223410003 +0000 UTC m=+0.538941837 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_id=tripleo_step4, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, distribution-scope=public, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, release=1766032510, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, url=https://www.redhat.com, architecture=x86_64, build-date=2026-01-12T23:32:04Z)
Feb 20 08:56:39 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:56:39 np0005625203.localdomain sshd[106489]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:56:39 np0005625203.localdomain sshd[106489]: Accepted publickey for zuul from 192.168.122.30 port 46740 ssh2: RSA SHA256:uaxCfdsB9dLyL4znjsLCPDTT6O+6xZzjHRjczJFOA6k
Feb 20 08:56:39 np0005625203.localdomain systemd-logind[759]: New session 36 of user zuul.
Feb 20 08:56:39 np0005625203.localdomain systemd[1]: Started Session 36 of User zuul.
Feb 20 08:56:39 np0005625203.localdomain sshd[106489]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Feb 20 08:56:40 np0005625203.localdomain sudo[106582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aateoqrtrvjcfbbodgffodltkinkvsnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771577799.60233-19-235850598058130/AnsiballZ_systemd_service.py
Feb 20 08:56:40 np0005625203.localdomain sudo[106582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 08:56:40 np0005625203.localdomain python3.9[106584]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 20 08:56:40 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 08:56:40 np0005625203.localdomain systemd-rc-local-generator[106607]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 08:56:40 np0005625203.localdomain systemd-sysv-generator[106611]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 08:56:40 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 08:56:40 np0005625203.localdomain sudo[106582]: pam_unix(sudo:session): session closed for user root
Feb 20 08:56:41 np0005625203.localdomain python3.9[106710]: ansible-ansible.builtin.service_facts Invoked
Feb 20 08:56:41 np0005625203.localdomain network[106727]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 20 08:56:41 np0005625203.localdomain network[106728]: 'network-scripts' will be removed from distribution in near future.
Feb 20 08:56:41 np0005625203.localdomain network[106729]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 20 08:56:43 np0005625203.localdomain sudo[106759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:56:43 np0005625203.localdomain sudo[106759]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:56:43 np0005625203.localdomain sudo[106759]: pam_unix(sudo:session): session closed for user root
Feb 20 08:56:43 np0005625203.localdomain sudo[106779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 08:56:43 np0005625203.localdomain sudo[106779]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:56:43 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 08:56:44 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=194 DF PROTO=TCP SPT=48652 DPT=9105 SEQ=4030787224 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E700C00000000001030307) 
Feb 20 08:56:44 np0005625203.localdomain sudo[106779]: pam_unix(sudo:session): session closed for user root
Feb 20 08:56:44 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5052 DF PROTO=TCP SPT=52224 DPT=9882 SEQ=915752414 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E702800000000001030307) 
Feb 20 08:56:45 np0005625203.localdomain sudo[106838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 08:56:45 np0005625203.localdomain sudo[106838]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:56:45 np0005625203.localdomain sudo[106838]: pam_unix(sudo:session): session closed for user root
Feb 20 08:56:47 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44908 DF PROTO=TCP SPT=57522 DPT=9101 SEQ=3616027090 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E70BC00000000001030307) 
Feb 20 08:56:48 np0005625203.localdomain python3.9[107005]: ansible-ansible.builtin.service_facts Invoked
Feb 20 08:56:48 np0005625203.localdomain network[107022]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 20 08:56:48 np0005625203.localdomain network[107023]: 'network-scripts' will be removed from distribution in near future.
Feb 20 08:56:48 np0005625203.localdomain network[107024]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 20 08:56:49 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 08:56:50 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41760 DF PROTO=TCP SPT=44546 DPT=9100 SEQ=123468916 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E718010000000001030307) 
Feb 20 08:56:52 np0005625203.localdomain sudo[107222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pxrypxatlrhxqiyfzexfbfgacxuuwzxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771577811.772075-109-239227148842619/AnsiballZ_systemd_service.py
Feb 20 08:56:52 np0005625203.localdomain sudo[107222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 08:56:52 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:56:52 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:56:52 np0005625203.localdomain podman[107224]: 2026-02-20 08:56:52.162149195 +0000 UTC m=+0.104904422 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, build-date=2026-01-12T22:10:15Z, url=https://www.redhat.com, config_id=tripleo_step3, com.redhat.component=openstack-collectd-container, release=1766032510, io.openshift.expose-services=, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-collectd, container_name=collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc.)
Feb 20 08:56:52 np0005625203.localdomain podman[107224]: 2026-02-20 08:56:52.172820828 +0000 UTC m=+0.115576025 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, config_id=tripleo_step3, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, managed_by=tripleo_ansible, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, io.buildah.version=1.41.5, 
io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, release=1766032510, url=https://www.redhat.com, version=17.1.13, architecture=x86_64, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.component=openstack-collectd-container, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.expose-services=, name=rhosp-rhel9/openstack-collectd, container_name=collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 collectd, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git)
Feb 20 08:56:52 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:56:52 np0005625203.localdomain podman[107226]: 2026-02-20 08:56:52.245163144 +0000 UTC m=+0.187846238 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, container_name=iscsid, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, managed_by=tripleo_ansible, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, batch=17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.buildah.version=1.41.5, version=17.1.13, vendor=Red Hat, Inc., release=1766032510, description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:56:52 np0005625203.localdomain podman[107226]: 2026-02-20 08:56:52.28125631 +0000 UTC m=+0.223939364 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, name=rhosp-rhel9/openstack-iscsid, version=17.1.13, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., release=1766032510, 
io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, build-date=2026-01-12T22:34:43Z, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:34:43Z, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, container_name=iscsid, batch=17.1_20260112.1, distribution-scope=public, io.openshift.expose-services=)
Feb 20 08:56:52 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:56:52 np0005625203.localdomain python3.9[107225]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 08:56:52 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 08:56:52 np0005625203.localdomain systemd-rc-local-generator[107284]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 08:56:52 np0005625203.localdomain systemd-sysv-generator[107289]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 08:56:52 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 08:56:52 np0005625203.localdomain systemd[1]: Stopping ceilometer_agent_compute container...
Feb 20 08:56:53 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44910 DF PROTO=TCP SPT=57522 DPT=9101 SEQ=3616027090 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E723800000000001030307) 
Feb 20 08:56:53 np0005625203.localdomain sshd[107315]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:56:53 np0005625203.localdomain sshd[107315]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:56:54 np0005625203.localdomain sshd[107317]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:56:55 np0005625203.localdomain sshd[107317]: Received disconnect from 212.154.234.9 port 34668:11: Bye Bye [preauth]
Feb 20 08:56:55 np0005625203.localdomain sshd[107317]: Disconnected from authenticating user root 212.154.234.9 port 34668 [preauth]
Feb 20 08:56:56 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41762 DF PROTO=TCP SPT=44546 DPT=9100 SEQ=123468916 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E72FC00000000001030307) 
Feb 20 08:56:59 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:56:59 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:56:59 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:56:59 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:56:59 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:56:59 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:56:59 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:56:59 np0005625203.localdomain podman[107322]: 2026-02-20 08:56:59.30046182 +0000 UTC m=+0.102262179 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1766032510, version=17.1.13, build-date=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, container_name=metrics_qdr, config_id=tripleo_step1, description=Red Hat OpenStack Platform 17.1 qdrouterd, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, tcib_managed=true, architecture=x86_64, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, batch=17.1_20260112.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, com.redhat.component=openstack-qdrouterd-container, summary=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, managed_by=tripleo_ansible, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, org.opencontainers.image.created=2026-01-12T22:10:14Z, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:56:59 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48461 DF PROTO=TCP SPT=45912 DPT=9102 SEQ=2229979892 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E73BC50000000001030307) 
Feb 20 08:56:59 np0005625203.localdomain systemd[1]: tmp-crun.ZHSaEG.mount: Deactivated successfully.
Feb 20 08:56:59 np0005625203.localdomain podman[107319]: 2026-02-20 08:56:59.364345683 +0000 UTC m=+0.161633412 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:15Z, distribution-scope=public, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, version=17.1.13, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, 
konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, config_id=tripleo_step4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-cron, io.openshift.expose-services=, container_name=logrotate_crond, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 cron, io.buildah.version=1.41.5)
Feb 20 08:56:59 np0005625203.localdomain podman[107319]: 2026-02-20 08:56:59.371281278 +0000 UTC m=+0.168569007 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_id=tripleo_step4, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 cron, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-cron, managed_by=tripleo_ansible, vendor=Red Hat, Inc., batch=17.1_20260112.1, version=17.1.13, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team, distribution-scope=public, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, io.buildah.version=1.41.5, container_name=logrotate_crond, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, description=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, architecture=x86_64)
Feb 20 08:56:59 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:56:59 np0005625203.localdomain podman[107330]: 2026-02-20 08:56:59.323322033 +0000 UTC m=+0.114145730 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, distribution-scope=public, container_name=ovn_metadata_agent, architecture=x86_64, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, version=17.1.13, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:56:19Z, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, release=1766032510)
Feb 20 08:56:59 np0005625203.localdomain podman[107330]: 2026-02-20 08:56:59.453563235 +0000 UTC m=+0.244386942 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-type=git, managed_by=tripleo_ansible, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, container_name=ovn_metadata_agent, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., build-date=2026-01-12T22:56:19Z, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, io.buildah.version=1.41.5, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f)
Feb 20 08:56:59 np0005625203.localdomain podman[107330]: unhealthy
Feb 20 08:56:59 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:56:59 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Failed with result 'exit-code'.
Feb 20 08:56:59 np0005625203.localdomain podman[107343]: 2026-02-20 08:56:59.503389689 +0000 UTC m=+0.291135690 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, io.buildah.version=1.41.5, architecture=x86_64, tcib_managed=true, com.redhat.component=openstack-ceilometer-ipmi-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, distribution-scope=public, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, build-date=2026-01-12T23:07:30Z, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, container_name=ceilometer_agent_ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.openshift.expose-services=, vcs-type=git, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi)
Feb 20 08:56:59 np0005625203.localdomain podman[107322]: 2026-02-20 08:56:59.537233584 +0000 UTC m=+0.339034003 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, tcib_managed=true, container_name=metrics_qdr, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, vcs-type=git, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, description=Red Hat OpenStack Platform 17.1 qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, config_id=tripleo_step1, release=1766032510, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 qdrouterd, version=17.1.13, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, name=rhosp-rhel9/openstack-qdrouterd, build-date=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-qdrouterd-container, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:10:14Z, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee)
Feb 20 08:56:59 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:56:59 np0005625203.localdomain podman[107320]: 2026-02-20 08:56:59.600575809 +0000 UTC m=+0.410046168 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step5, io.buildah.version=1.41.5, version=17.1.13, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, managed_by=tripleo_ansible, com.redhat.component=openstack-nova-compute-container, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc.)
Feb 20 08:56:59 np0005625203.localdomain podman[107332]: 2026-02-20 08:56:59.616182026 +0000 UTC m=+0.409379597 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, url=https://www.redhat.com, com.redhat.component=openstack-ovn-controller-container, release=1766032510, build-date=2026-01-12T22:36:40Z, batch=17.1_20260112.1, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=ovn_controller, version=17.1.13, io.openshift.expose-services=, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, 
name=rhosp-rhel9/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc.)
Feb 20 08:56:59 np0005625203.localdomain podman[107320]: 2026-02-20 08:56:59.62818532 +0000 UTC m=+0.437655709 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, distribution-scope=public, build-date=2026-01-12T23:32:04Z, vendor=Red Hat, Inc., batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, release=1766032510, vcs-type=git, url=https://www.redhat.com, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, name=rhosp-rhel9/openstack-nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1)
Feb 20 08:56:59 np0005625203.localdomain podman[107320]: unhealthy
Feb 20 08:56:59 np0005625203.localdomain podman[107332]: 2026-02-20 08:56:59.636232511 +0000 UTC m=+0.429430112 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, managed_by=tripleo_ansible, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ovn-controller, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2026-01-12T22:36:40Z, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, url=https://www.redhat.com, distribution-scope=public, release=1766032510, maintainer=OpenStack 
TripleO Team, org.opencontainers.image.created=2026-01-12T22:36:40Z, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13)
Feb 20 08:56:59 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:56:59 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Failed with result 'exit-code'.
Feb 20 08:56:59 np0005625203.localdomain podman[107332]: unhealthy
Feb 20 08:56:59 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:56:59 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Failed with result 'exit-code'.
Feb 20 08:56:59 np0005625203.localdomain podman[107321]: Error: container 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 is not running
Feb 20 08:56:59 np0005625203.localdomain podman[107343]: 2026-02-20 08:56:59.69361847 +0000 UTC m=+0.481364541 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, vendor=Red Hat, Inc., managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-ceilometer-ipmi, architecture=x86_64, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, distribution-scope=public, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ceilometer_agent_ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, 
org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, tcib_managed=true, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, vcs-type=git, build-date=2026-01-12T23:07:30Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=17.1.13)
Feb 20 08:56:59 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Main process exited, code=exited, status=125/n/a
Feb 20 08:56:59 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Failed with result 'exit-code'.
Feb 20 08:56:59 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:57:02 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5688 DF PROTO=TCP SPT=35154 DPT=9882 SEQ=2513024402 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E746800000000001030307) 
Feb 20 08:57:06 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5689 DF PROTO=TCP SPT=35154 DPT=9882 SEQ=2513024402 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E756400000000001030307) 
Feb 20 08:57:08 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25892 DF PROTO=TCP SPT=51240 DPT=9105 SEQ=3430139614 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E75E000000000001030307) 
Feb 20 08:57:09 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:57:09 np0005625203.localdomain podman[107463]: 2026-02-20 08:57:09.520584716 +0000 UTC m=+0.085217397 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, io.buildah.version=1.41.5, distribution-scope=public, vcs-type=git, version=17.1.13, batch=17.1_20260112.1, managed_by=tripleo_ansible, release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, url=https://www.redhat.com, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, name=rhosp-rhel9/openstack-nova-compute, build-date=2026-01-12T23:32:04Z, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, config_id=tripleo_step4)
Feb 20 08:57:09 np0005625203.localdomain podman[107463]: 2026-02-20 08:57:09.887439726 +0000 UTC m=+0.452072377 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-nova-compute, release=1766032510, com.redhat.component=openstack-nova-compute-container, batch=17.1_20260112.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, version=17.1.13, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, managed_by=tripleo_ansible, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_migration_target, vcs-type=git, maintainer=OpenStack TripleO Team, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:32:04Z, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, config_id=tripleo_step4, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T23:32:04Z)
Feb 20 08:57:09 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:57:11 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14496 DF PROTO=TCP SPT=41500 DPT=9105 SEQ=2683727196 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E76A810000000001030307) 
Feb 20 08:57:14 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25894 DF PROTO=TCP SPT=51240 DPT=9105 SEQ=3430139614 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E775C00000000001030307) 
Feb 20 08:57:17 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27445 DF PROTO=TCP SPT=39854 DPT=9101 SEQ=194378977 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E781000000000001030307) 
Feb 20 08:57:20 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40157 DF PROTO=TCP SPT=34476 DPT=9100 SEQ=350466662 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E78D400000000001030307) 
Feb 20 08:57:22 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:57:22 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:57:22 np0005625203.localdomain podman[107486]: 2026-02-20 08:57:22.771393868 +0000 UTC m=+0.087959413 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.13, container_name=collectd, com.redhat.component=openstack-collectd-container, config_id=tripleo_step3, release=1766032510, vcs-type=git, maintainer=OpenStack TripleO Team, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-collectd, tcib_managed=true)
Feb 20 08:57:22 np0005625203.localdomain podman[107486]: 2026-02-20 08:57:22.78138593 +0000 UTC m=+0.097951505 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, build-date=2026-01-12T22:10:15Z, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, tcib_managed=true, com.redhat.component=openstack-collectd-container, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step3, version=17.1.13, managed_by=tripleo_ansible, release=1766032510, container_name=collectd, distribution-scope=public, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, batch=17.1_20260112.1, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z, vendor=Red Hat, Inc.)
Feb 20 08:57:22 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:57:22 np0005625203.localdomain podman[107487]: 2026-02-20 08:57:22.871120959 +0000 UTC m=+0.182138781 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, io.openshift.expose-services=, com.redhat.component=openstack-iscsid-container, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 iscsid, architecture=x86_64, 
build-date=2026-01-12T22:34:43Z, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, release=1766032510, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, tcib_managed=true, batch=17.1_20260112.1, container_name=iscsid, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid)
Feb 20 08:57:22 np0005625203.localdomain podman[107487]: 2026-02-20 08:57:22.911372243 +0000 UTC m=+0.222390075 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, version=17.1.13, build-date=2026-01-12T22:34:43Z, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-iscsid, release=1766032510, batch=17.1_20260112.1, url=https://www.redhat.com, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, architecture=x86_64, vcs-ref=705339545363fec600102567c4e923938e0f43b3, config_id=tripleo_step3, com.redhat.component=openstack-iscsid-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
distribution-scope=public, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:34:43Z, io.buildah.version=1.41.5, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=iscsid, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid)
Feb 20 08:57:22 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:57:23 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10940 DF PROTO=TCP SPT=53434 DPT=9100 SEQ=3267851932 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E798800000000001030307) 
Feb 20 08:57:26 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40159 DF PROTO=TCP SPT=34476 DPT=9100 SEQ=350466662 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E7A5000000000001030307) 
Feb 20 08:57:29 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60200 DF PROTO=TCP SPT=36460 DPT=9102 SEQ=145726013 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E7B0F40000000001030307) 
Feb 20 08:57:29 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:57:29 np0005625203.localdomain systemd[1]: tmp-crun.UtvkYJ.mount: Deactivated successfully.
Feb 20 08:57:29 np0005625203.localdomain podman[107526]: 2026-02-20 08:57:29.515569653 +0000 UTC m=+0.089248034 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.buildah.version=1.41.5, description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, managed_by=tripleo_ansible, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, name=rhosp-rhel9/openstack-cron, version=17.1.13, io.openshift.expose-services=, distribution-scope=public, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20260112.1, architecture=x86_64, tcib_managed=true)
Feb 20 08:57:29 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:57:29 np0005625203.localdomain podman[107526]: 2026-02-20 08:57:29.53438081 +0000 UTC m=+0.108059231 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, vcs-type=git, com.redhat.component=openstack-cron-container, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 cron, url=https://www.redhat.com, config_id=tripleo_step4, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, version=17.1.13, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20260112.1, container_name=logrotate_crond, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, release=1766032510, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:15Z, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-cron, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 cron, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']})
Feb 20 08:57:29 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:57:29 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:57:29 np0005625203.localdomain podman[107545]: 2026-02-20 08:57:29.640581461 +0000 UTC m=+0.099797803 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2026-01-12T22:56:19Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.created=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, managed_by=tripleo_ansible, release=1766032510, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.buildah.version=1.41.5, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, architecture=x86_64, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, tcib_managed=true)
Feb 20 08:57:29 np0005625203.localdomain podman[107545]: 2026-02-20 08:57:29.652903076 +0000 UTC m=+0.112119418 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, distribution-scope=public, io.buildah.version=1.41.5, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 
17.1_20260112.1, managed_by=tripleo_ansible, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:56:19Z, vcs-type=git, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2026-01-12T22:56:19Z, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, release=1766032510, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Feb 20 08:57:29 np0005625203.localdomain podman[107545]: unhealthy
Feb 20 08:57:29 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:57:29 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:57:29 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:57:29 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Failed with result 'exit-code'.
Feb 20 08:57:29 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:57:29 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:57:29 np0005625203.localdomain podman[107575]: 2026-02-20 08:57:29.768116218 +0000 UTC m=+0.086669944 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, release=1766032510, org.opencontainers.image.created=2026-01-12T23:32:04Z, batch=17.1_20260112.1, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, tcib_managed=true, maintainer=OpenStack TripleO Team, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.openshift.expose-services=, build-date=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step5, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public)
Feb 20 08:57:29 np0005625203.localdomain podman[107564]: 2026-02-20 08:57:29.735353166 +0000 UTC m=+0.086666943 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., batch=17.1_20260112.1, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=metrics_qdr, managed_by=tripleo_ansible, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_id=tripleo_step1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, tcib_managed=true, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:14Z, url=https://www.redhat.com, version=17.1.13, distribution-scope=public, maintainer=OpenStack TripleO Team)
Feb 20 08:57:29 np0005625203.localdomain podman[107577]: 2026-02-20 08:57:29.785904013 +0000 UTC m=+0.096987365 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, distribution-scope=public, batch=17.1_20260112.1, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.buildah.version=1.41.5, url=https://www.redhat.com, release=1766032510, build-date=2026-01-12T22:36:40Z, version=17.1.13, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ovn-controller, com.redhat.component=openstack-ovn-controller-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, container_name=ovn_controller, 
architecture=x86_64, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ovn-controller)
Feb 20 08:57:29 np0005625203.localdomain podman[107575]: 2026-02-20 08:57:29.7935009 +0000 UTC m=+0.112054596 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', 
'/var/lib/nova:/var/lib/nova:shared']}, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, tcib_managed=true, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, container_name=nova_compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, io.openshift.expose-services=, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, release=1766032510, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vcs-type=git, config_id=tripleo_step5, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T23:32:04Z, distribution-scope=public)
Feb 20 08:57:29 np0005625203.localdomain podman[107575]: unhealthy
Feb 20 08:57:29 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:57:29 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Failed with result 'exit-code'.
Feb 20 08:57:29 np0005625203.localdomain podman[107577]: 2026-02-20 08:57:29.873751122 +0000 UTC m=+0.184834424 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, url=https://www.redhat.com, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ovn-controller, build-date=2026-01-12T22:36:40Z, batch=17.1_20260112.1, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, io.buildah.version=1.41.5, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, version=17.1.13, description=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vendor=Red Hat, Inc., architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.display-name=Red Hat 
OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:36:40Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:57:29 np0005625203.localdomain podman[107577]: unhealthy
Feb 20 08:57:29 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:57:29 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Failed with result 'exit-code'.
Feb 20 08:57:29 np0005625203.localdomain podman[107613]: 2026-02-20 08:57:29.888866053 +0000 UTC m=+0.130060546 container health_status 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, url=https://www.redhat.com, io.openshift.expose-services=, io.buildah.version=1.41.5, config_id=tripleo_step4, name=rhosp-rhel9/openstack-ceilometer-ipmi, maintainer=OpenStack TripleO Team, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, org.opencontainers.image.created=2026-01-12T23:07:30Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, tcib_managed=true, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, build-date=2026-01-12T23:07:30Z, batch=17.1_20260112.1, com.redhat.component=openstack-ceilometer-ipmi-container)
Feb 20 08:57:29 np0005625203.localdomain podman[107611]: Error: container 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 is not running
Feb 20 08:57:29 np0005625203.localdomain podman[107613]: 2026-02-20 08:57:29.924156094 +0000 UTC m=+0.165350627 container exec_died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:30Z, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, architecture=x86_64, config_id=tripleo_step4, tcib_managed=true, release=1766032510, version=17.1.13, name=rhosp-rhel9/openstack-ceilometer-ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:07:30Z, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06)
Feb 20 08:57:29 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Main process exited, code=exited, status=125/n/a
Feb 20 08:57:29 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Failed with result 'exit-code'.
Feb 20 08:57:29 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Deactivated successfully.
Feb 20 08:57:29 np0005625203.localdomain podman[107564]: 2026-02-20 08:57:29.967355351 +0000 UTC m=+0.318669078 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:14Z, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, vendor=Red Hat, Inc., release=1766032510, tcib_managed=true, batch=17.1_20260112.1, build-date=2026-01-12T22:10:14Z, config_id=tripleo_step1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, version=17.1.13, konflux.additional-tags=17.1.13 
17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, name=rhosp-rhel9/openstack-qdrouterd, com.redhat.component=openstack-qdrouterd-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd)
Feb 20 08:57:29 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:57:30 np0005625203.localdomain systemd[1]: tmp-crun.nqlfpr.mount: Deactivated successfully.
Feb 20 08:57:30 np0005625203.localdomain sshd[107673]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:57:30 np0005625203.localdomain sshd[107673]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:57:32 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30908 DF PROTO=TCP SPT=59440 DPT=9882 SEQ=3474542798 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E7BBC00000000001030307) 
Feb 20 08:57:34 np0005625203.localdomain podman[107300]: time="2026-02-20T08:57:34Z" level=warning msg="StopSignal SIGTERM failed to stop container ceilometer_agent_compute in 42 seconds, resorting to SIGKILL"
Feb 20 08:57:34 np0005625203.localdomain systemd[1]: libpod-5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.scope: Deactivated successfully.
Feb 20 08:57:34 np0005625203.localdomain systemd[1]: libpod-5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.scope: Consumed 5.690s CPU time.
Feb 20 08:57:34 np0005625203.localdomain podman[107300]: 2026-02-20 08:57:34.849587776 +0000 UTC m=+42.099547099 container died 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, config_id=tripleo_step4, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, tcib_managed=true, 
cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 ceilometer-compute, build-date=2026-01-12T23:07:47Z, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, com.redhat.component=openstack-ceilometer-compute-container, version=17.1.13, batch=17.1_20260112.1, architecture=x86_64, managed_by=tripleo_ansible, release=1766032510, url=https://www.redhat.com)
Feb 20 08:57:34 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.timer: Deactivated successfully.
Feb 20 08:57:34 np0005625203.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.
Feb 20 08:57:34 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Failed to open /run/systemd/transient/5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: No such file or directory
Feb 20 08:57:34 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29-userdata-shm.mount: Deactivated successfully.
Feb 20 08:57:34 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0b6de8b10d609c1a823e14e242768f43904250823435fbc3f960804bb5b5ac65-merged.mount: Deactivated successfully.
Feb 20 08:57:34 np0005625203.localdomain podman[107300]: 2026-02-20 08:57:34.904118645 +0000 UTC m=+42.154077988 container cleanup 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, version=17.1.13, com.redhat.component=openstack-ceilometer-compute-container, distribution-scope=public, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, architecture=x86_64, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, release=1766032510, tcib_managed=true, url=https://www.redhat.com, build-date=2026-01-12T23:07:47Z, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, name=rhosp-rhel9/openstack-ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:57:34 np0005625203.localdomain podman[107300]: ceilometer_agent_compute
Feb 20 08:57:34 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.timer: Failed to open /run/systemd/transient/5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.timer: No such file or directory
Feb 20 08:57:34 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Failed to open /run/systemd/transient/5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: No such file or directory
Feb 20 08:57:34 np0005625203.localdomain podman[107676]: 2026-02-20 08:57:34.939285663 +0000 UTC m=+0.084721043 container cleanup 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.buildah.version=1.41.5, com.redhat.component=openstack-ceilometer-compute-container, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, container_name=ceilometer_agent_compute, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, release=1766032510, url=https://www.redhat.com, build-date=2026-01-12T23:07:47Z, vendor=Red Hat, Inc., version=17.1.13, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:07:47Z, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, name=rhosp-rhel9/openstack-ceilometer-compute, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06)
Feb 20 08:57:34 np0005625203.localdomain systemd[1]: libpod-conmon-5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.scope: Deactivated successfully.
Feb 20 08:57:35 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:d7:b4:4a MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.110 DST=192.168.122.107 LEN=40 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=TCP SPT=6379 DPT=47010 SEQ=2852316380 ACK=0 WINDOW=0 RES=0x00 RST URGP=0 
Feb 20 08:57:35 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.timer: Failed to open /run/systemd/transient/5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.timer: No such file or directory
Feb 20 08:57:35 np0005625203.localdomain systemd[1]: 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: Failed to open /run/systemd/transient/5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29.service: No such file or directory
Feb 20 08:57:35 np0005625203.localdomain podman[107691]: 2026-02-20 08:57:35.054252307 +0000 UTC m=+0.075738772 container cleanup 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, build-date=2026-01-12T23:07:47Z, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, batch=17.1_20260112.1, tcib_managed=true, version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, 
name=rhosp-rhel9/openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, architecture=x86_64, config_id=tripleo_step4, url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, org.opencontainers.image.created=2026-01-12T23:07:47Z, io.openshift.expose-services=, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06)
Feb 20 08:57:35 np0005625203.localdomain podman[107691]: ceilometer_agent_compute
Feb 20 08:57:35 np0005625203.localdomain systemd[1]: tripleo_ceilometer_agent_compute.service: Deactivated successfully.
Feb 20 08:57:35 np0005625203.localdomain systemd[1]: Stopped ceilometer_agent_compute container.
Feb 20 08:57:35 np0005625203.localdomain systemd[1]: tripleo_ceilometer_agent_compute.service: Consumed 1.134s CPU time, no IO.
Feb 20 08:57:35 np0005625203.localdomain sudo[107222]: pam_unix(sudo:session): session closed for user root
Feb 20 08:57:35 np0005625203.localdomain sudo[107794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-voxwjgjqstumdwkmkqnhlmyrlpmcwtri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771577855.2258353-109-89985378295160/AnsiballZ_systemd_service.py
Feb 20 08:57:35 np0005625203.localdomain sudo[107794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 08:57:35 np0005625203.localdomain python3.9[107796]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_ipmi.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 08:57:35 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 08:57:35 np0005625203.localdomain systemd-rc-local-generator[107820]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 08:57:35 np0005625203.localdomain systemd-sysv-generator[107825]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 08:57:36 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 08:57:36 np0005625203.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Feb 20 08:57:36 np0005625203.localdomain systemd[1]: Stopping ceilometer_agent_ipmi container...
Feb 20 08:57:36 np0005625203.localdomain recover_tripleo_nova_virtqemud[107837]: 62505
Feb 20 08:57:36 np0005625203.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Feb 20 08:57:36 np0005625203.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Feb 20 08:57:38 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56521 DF PROTO=TCP SPT=35692 DPT=9105 SEQ=2281559790 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E7D3400000000001030307) 
Feb 20 08:57:38 np0005625203.localdomain sshd[107854]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:57:39 np0005625203.localdomain sshd[107854]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:57:39 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:57:40 np0005625203.localdomain systemd[1]: tmp-crun.0zRkNK.mount: Deactivated successfully.
Feb 20 08:57:40 np0005625203.localdomain podman[107856]: 2026-02-20 08:57:40.020998317 +0000 UTC m=+0.076276479 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, name=rhosp-rhel9/openstack-nova-compute, version=17.1.13, architecture=x86_64, release=1766032510, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, 
org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, vcs-type=git, config_id=tripleo_step4, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, io.openshift.expose-services=, managed_by=tripleo_ansible, batch=17.1_20260112.1)
Feb 20 08:57:40 np0005625203.localdomain podman[107856]: 2026-02-20 08:57:40.407322334 +0000 UTC m=+0.462600526 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, vendor=Red Hat, Inc., vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, vcs-type=git, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.expose-services=, 
org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, container_name=nova_migration_target, io.buildah.version=1.41.5, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z, release=1766032510, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, version=17.1.13)
Feb 20 08:57:40 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:57:41 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:d7:b4:4a MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.110 DST=192.168.122.107 LEN=40 TOS=0x00 PREC=0x00 TTL=64 ID=0 DF PROTO=TCP SPT=6379 DPT=47010 SEQ=2852316380 ACK=0 WINDOW=0 RES=0x00 RST URGP=0 
Feb 20 08:57:44 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56523 DF PROTO=TCP SPT=35692 DPT=9105 SEQ=2281559790 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E7EB010000000001030307) 
Feb 20 08:57:45 np0005625203.localdomain sudo[107879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:57:45 np0005625203.localdomain sudo[107879]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:57:45 np0005625203.localdomain sudo[107879]: pam_unix(sudo:session): session closed for user root
Feb 20 08:57:45 np0005625203.localdomain sudo[107894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 08:57:45 np0005625203.localdomain sudo[107894]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:57:46 np0005625203.localdomain sudo[107894]: pam_unix(sudo:session): session closed for user root
Feb 20 08:57:47 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11592 DF PROTO=TCP SPT=35758 DPT=9101 SEQ=790704660 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E7F6410000000001030307) 
Feb 20 08:57:47 np0005625203.localdomain sudo[107941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 08:57:47 np0005625203.localdomain sudo[107941]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:57:47 np0005625203.localdomain sudo[107941]: pam_unix(sudo:session): session closed for user root
Feb 20 08:57:50 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3478 DF PROTO=TCP SPT=40976 DPT=9100 SEQ=234487130 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E802400000000001030307) 
Feb 20 08:57:52 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:57:53 np0005625203.localdomain podman[107956]: 2026-02-20 08:57:53.026351057 +0000 UTC m=+0.089826352 container health_status 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, health_status=healthy, batch=17.1_20260112.1, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., com.redhat.component=openstack-collectd-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', 
'/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, url=https://www.redhat.com, build-date=2026-01-12T22:10:15Z, config_id=tripleo_step3, container_name=collectd, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, name=rhosp-rhel9/openstack-collectd, version=17.1.13, io.buildah.version=1.41.5, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd)
Feb 20 08:57:53 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:57:53 np0005625203.localdomain podman[107956]: 2026-02-20 08:57:53.067316475 +0000 UTC m=+0.130791760 container exec_died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, summary=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, build-date=2026-01-12T22:10:15Z, config_id=tripleo_step3, container_name=collectd, batch=17.1_20260112.1, io.openshift.expose-services=, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, version=17.1.13, architecture=x86_64, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, name=rhosp-rhel9/openstack-collectd, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:10:15Z, release=1766032510, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc.)
Feb 20 08:57:53 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Deactivated successfully.
Feb 20 08:57:53 np0005625203.localdomain systemd[1]: tmp-crun.cMTuAI.mount: Deactivated successfully.
Feb 20 08:57:53 np0005625203.localdomain podman[107976]: 2026-02-20 08:57:53.117928853 +0000 UTC m=+0.074026399 container health_status 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, health_status=healthy, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, container_name=iscsid, tcib_managed=true, release=1766032510, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, name=rhosp-rhel9/openstack-iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, vcs-ref=705339545363fec600102567c4e923938e0f43b3, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, 
io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 iscsid, org.opencontainers.image.created=2026-01-12T22:34:43Z, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, com.redhat.component=openstack-iscsid-container, build-date=2026-01-12T22:34:43Z, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc.)
Feb 20 08:57:53 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11594 DF PROTO=TCP SPT=35758 DPT=9101 SEQ=790704660 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E80E010000000001030307) 
Feb 20 08:57:53 np0005625203.localdomain podman[107976]: 2026-02-20 08:57:53.128333258 +0000 UTC m=+0.084430834 container exec_died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, tcib_managed=true, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=iscsid, com.redhat.component=openstack-iscsid-container, version=17.1.13, build-date=2026-01-12T22:34:43Z, vcs-ref=705339545363fec600102567c4e923938e0f43b3, architecture=x86_64, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, name=rhosp-rhel9/openstack-iscsid, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 iscsid, managed_by=tripleo_ansible, vcs-type=git, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:34:43Z, release=1766032510, vendor=Red Hat, Inc., io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']})
Feb 20 08:57:53 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Deactivated successfully.
Feb 20 08:57:56 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3480 DF PROTO=TCP SPT=40976 DPT=9100 SEQ=234487130 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E81A010000000001030307) 
Feb 20 08:57:59 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17707 DF PROTO=TCP SPT=44718 DPT=9102 SEQ=3774343325 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E826250000000001030307) 
Feb 20 08:57:59 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:57:59 np0005625203.localdomain systemd[1]: tmp-crun.uH85mH.mount: Deactivated successfully.
Feb 20 08:57:59 np0005625203.localdomain podman[107995]: 2026-02-20 08:57:59.767495389 +0000 UTC m=+0.085319032 container health_status 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, health_status=healthy, release=1766032510, version=17.1.13, build-date=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., managed_by=tripleo_ansible, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, batch=17.1_20260112.1, container_name=logrotate_crond, url=https://www.redhat.com, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 cron, io.openshift.expose-services=, architecture=x86_64, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, io.buildah.version=1.41.5, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git)
Feb 20 08:57:59 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:57:59 np0005625203.localdomain podman[107995]: 2026-02-20 08:57:59.803265015 +0000 UTC m=+0.121088618 container exec_died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, build-date=2026-01-12T22:10:15Z, name=rhosp-rhel9/openstack-cron, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, version=17.1.13, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, org.opencontainers.image.created=2026-01-12T22:10:15Z, summary=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, distribution-scope=public, architecture=x86_64, io.buildah.version=1.41.5, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, config_id=tripleo_step4, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, container_name=logrotate_crond, vendor=Red Hat, Inc., managed_by=tripleo_ansible, com.redhat.component=openstack-cron-container, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 cron, maintainer=OpenStack TripleO Team)
Feb 20 08:57:59 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Deactivated successfully.
Feb 20 08:57:59 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:57:59 np0005625203.localdomain podman[108016]: 2026-02-20 08:57:59.880223664 +0000 UTC m=+0.082053050 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, release=1766032510, build-date=2026-01-12T22:56:19Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, vcs-type=git, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.created=2026-01-12T22:56:19Z, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, vendor=Red Hat, Inc., config_id=tripleo_step4, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Feb 20 08:57:59 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:57:59 np0005625203.localdomain podman[108016]: 2026-02-20 08:57:59.922655597 +0000 UTC m=+0.124485013 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1766032510, distribution-scope=public, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, managed_by=tripleo_ansible, io.buildah.version=1.41.5, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, build-date=2026-01-12T22:56:19Z, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, io.openshift.expose-services=)
Feb 20 08:57:59 np0005625203.localdomain podman[108016]: unhealthy
Feb 20 08:57:59 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:57:59 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Failed with result 'exit-code'.
Feb 20 08:57:59 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:57:59 np0005625203.localdomain sshd[108055]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:58:00 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:58:00 np0005625203.localdomain podman[108054]: Error: container 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 is not running
Feb 20 08:58:00 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Main process exited, code=exited, status=125/n/a
Feb 20 08:58:00 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Failed with result 'exit-code'.
Feb 20 08:58:00 np0005625203.localdomain podman[108027]: 2026-02-20 08:57:59.97533064 +0000 UTC m=+0.135314221 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, name=rhosp-rhel9/openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, release=1766032510, container_name=nova_compute, managed_by=tripleo_ansible, build-date=2026-01-12T23:32:04Z, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 nova-compute, batch=17.1_20260112.1, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_id=tripleo_step5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, io.buildah.version=1.41.5)
Feb 20 08:58:00 np0005625203.localdomain podman[108027]: 2026-02-20 08:58:00.058310818 +0000 UTC m=+0.218294379 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, com.redhat.component=openstack-nova-compute-container, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, vcs-type=git, build-date=2026-01-12T23:32:04Z, container_name=nova_compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, release=1766032510, config_id=tripleo_step5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, io.buildah.version=1.41.5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute)
Feb 20 08:58:00 np0005625203.localdomain podman[108045]: 2026-02-20 08:58:00.007281756 +0000 UTC m=+0.082821373 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, container_name=ovn_controller, build-date=2026-01-12T22:36:40Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:36:40Z, release=1766032510, tcib_managed=true, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, vcs-type=git, io.buildah.version=1.41.5, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, vendor=Red Hat, Inc., vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller)
Feb 20 08:58:00 np0005625203.localdomain podman[108027]: unhealthy
Feb 20 08:58:00 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:58:00 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Failed with result 'exit-code'.
Feb 20 08:58:00 np0005625203.localdomain podman[108045]: 2026-02-20 08:58:00.145334491 +0000 UTC m=+0.220874128 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, batch=17.1_20260112.1, build-date=2026-01-12T22:36:40Z, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, io.openshift.expose-services=, container_name=ovn_controller, architecture=x86_64, version=17.1.13, distribution-scope=public, name=rhosp-rhel9/openstack-ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c)
Feb 20 08:58:00 np0005625203.localdomain podman[108045]: unhealthy
Feb 20 08:58:00 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:58:00 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Failed with result 'exit-code'.
Feb 20 08:58:00 np0005625203.localdomain podman[108086]: 2026-02-20 08:58:00.150509802 +0000 UTC m=+0.132653887 container health_status a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, health_status=healthy, io.openshift.expose-services=, managed_by=tripleo_ansible, release=1766032510, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, container_name=metrics_qdr, io.buildah.version=1.41.5, url=https://www.redhat.com, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.13, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1)
Feb 20 08:58:00 np0005625203.localdomain podman[108086]: 2026-02-20 08:58:00.34926292 +0000 UTC m=+0.331406975 container exec_died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, container_name=metrics_qdr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 qdrouterd, config_id=tripleo_step1, distribution-scope=public, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.buildah.version=1.41.5, architecture=x86_64, build-date=2026-01-12T22:10:14Z, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team)
Feb 20 08:58:00 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Deactivated successfully.
Feb 20 08:58:00 np0005625203.localdomain systemd[1]: tmp-crun.BPrz1O.mount: Deactivated successfully.
Feb 20 08:58:01 np0005625203.localdomain sshd[108055]: Invalid user claude from 103.200.25.162 port 55006
Feb 20 08:58:01 np0005625203.localdomain sshd[108055]: Received disconnect from 103.200.25.162 port 55006:11: Bye Bye [preauth]
Feb 20 08:58:01 np0005625203.localdomain sshd[108055]: Disconnected from invalid user claude 103.200.25.162 port 55006 [preauth]
Feb 20 08:58:02 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51906 DF PROTO=TCP SPT=53452 DPT=9882 SEQ=4032441407 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E831000000000001030307) 
Feb 20 08:58:06 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51907 DF PROTO=TCP SPT=53452 DPT=9882 SEQ=4032441407 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E840C00000000001030307) 
Feb 20 08:58:08 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58817 DF PROTO=TCP SPT=39008 DPT=9105 SEQ=1268920925 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E848800000000001030307) 
Feb 20 08:58:10 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:58:10 np0005625203.localdomain systemd[1]: tmp-crun.LtmqAS.mount: Deactivated successfully.
Feb 20 08:58:10 np0005625203.localdomain podman[108119]: 2026-02-20 08:58:10.780731569 +0000 UTC m=+0.098142841 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, io.openshift.expose-services=, config_id=tripleo_step4, com.redhat.component=openstack-nova-compute-container, tcib_managed=true, batch=17.1_20260112.1, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:32:04Z, vcs-type=git, container_name=nova_migration_target)
Feb 20 08:58:11 np0005625203.localdomain podman[108119]: 2026-02-20 08:58:11.141177859 +0000 UTC m=+0.458589111 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, cpe=cpe:/a:redhat:openstack:17.1::el9, io.buildah.version=1.41.5, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, batch=17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, tcib_managed=true, architecture=x86_64, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, vcs-type=git, name=rhosp-rhel9/openstack-nova-compute, io.openshift.expose-services=, build-date=2026-01-12T23:32:04Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-compute-container, release=1766032510, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']})
Feb 20 08:58:11 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:58:11 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25897 DF PROTO=TCP SPT=51240 DPT=9105 SEQ=3430139614 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E854810000000001030307) 
Feb 20 08:58:14 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58819 DF PROTO=TCP SPT=39008 DPT=9105 SEQ=1268920925 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E860400000000001030307) 
Feb 20 08:58:16 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=619 DF PROTO=TCP SPT=40220 DPT=9101 SEQ=1332775424 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E86B400000000001030307) 
Feb 20 08:58:18 np0005625203.localdomain podman[107839]: time="2026-02-20T08:58:18Z" level=warning msg="StopSignal SIGTERM failed to stop container ceilometer_agent_ipmi in 42 seconds, resorting to SIGKILL"
Feb 20 08:58:18 np0005625203.localdomain systemd[1]: libpod-8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.scope: Deactivated successfully.
Feb 20 08:58:18 np0005625203.localdomain systemd[1]: libpod-8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.scope: Consumed 6.308s CPU time.
Feb 20 08:58:18 np0005625203.localdomain podman[107839]: 2026-02-20 08:58:18.310458061 +0000 UTC m=+42.094892971 container stop 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:07:30Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ceilometer-ipmi-container, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, container_name=ceilometer_agent_ipmi, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, summary=Red Hat 
OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-ceilometer-ipmi, managed_by=tripleo_ansible, build-date=2026-01-12T23:07:30Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, tcib_managed=true, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, vendor=Red Hat, Inc., org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06)
Feb 20 08:58:18 np0005625203.localdomain podman[107839]: 2026-02-20 08:58:18.344137451 +0000 UTC m=+42.128572311 container died 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, managed_by=tripleo_ansible, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, 
org.opencontainers.image.created=2026-01-12T23:07:30Z, batch=17.1_20260112.1, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:07:30Z, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, container_name=ceilometer_agent_ipmi, release=1766032510, com.redhat.component=openstack-ceilometer-ipmi-container, io.buildah.version=1.41.5, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, config_id=tripleo_step4, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.openshift.expose-services=)
Feb 20 08:58:18 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.timer: Deactivated successfully.
Feb 20 08:58:18 np0005625203.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.
Feb 20 08:58:18 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Failed to open /run/systemd/transient/8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: No such file or directory
Feb 20 08:58:18 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249-userdata-shm.mount: Deactivated successfully.
Feb 20 08:58:18 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-6cebad060ab405d52d2b962638c8f57ff1b2ca4462868ac6b7d7f52e12ed3e0a-merged.mount: Deactivated successfully.
Feb 20 08:58:18 np0005625203.localdomain podman[107839]: 2026-02-20 08:58:18.457280919 +0000 UTC m=+42.241715779 container cleanup 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, build-date=2026-01-12T23:07:30Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, vendor=Red Hat, Inc., container_name=ceilometer_agent_ipmi, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, io.k8s.display-name=Red 
Hat OpenStack Platform 17.1 ceilometer-ipmi, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, managed_by=tripleo_ansible, distribution-scope=public, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ceilometer-ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, tcib_managed=true, io.buildah.version=1.41.5, url=https://www.redhat.com, architecture=x86_64, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, maintainer=OpenStack TripleO Team, version=17.1.13, vcs-type=git, release=1766032510)
Feb 20 08:58:18 np0005625203.localdomain podman[107839]: ceilometer_agent_ipmi
Feb 20 08:58:18 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.timer: Failed to open /run/systemd/transient/8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.timer: No such file or directory
Feb 20 08:58:18 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Failed to open /run/systemd/transient/8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: No such file or directory
Feb 20 08:58:18 np0005625203.localdomain podman[108144]: 2026-02-20 08:58:18.472978369 +0000 UTC m=+0.149298296 container cleanup 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, io.buildah.version=1.41.5, build-date=2026-01-12T23:07:30Z, maintainer=OpenStack TripleO Team, container_name=ceilometer_agent_ipmi, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, url=https://www.redhat.com, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, distribution-scope=public, tcib_managed=true, version=17.1.13, org.opencontainers.image.created=2026-01-12T23:07:30Z, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, name=rhosp-rhel9/openstack-ceilometer-ipmi, architecture=x86_64)
Feb 20 08:58:18 np0005625203.localdomain systemd[1]: libpod-conmon-8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.scope: Deactivated successfully.
Feb 20 08:58:18 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.timer: Failed to open /run/systemd/transient/8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.timer: No such file or directory
Feb 20 08:58:18 np0005625203.localdomain systemd[1]: 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: Failed to open /run/systemd/transient/8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249.service: No such file or directory
Feb 20 08:58:18 np0005625203.localdomain podman[108158]: 2026-02-20 08:58:18.575438915 +0000 UTC m=+0.071093059 container cleanup 8e6da3af6bd84e7b194ae4b4300c3b6ff60f783c41c6e649d091ab4014053249 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1, name=ceilometer_agent_ipmi, org.opencontainers.image.created=2026-01-12T23:07:30Z, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, vcs-type=git, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-ipmi, io.openshift.expose-services=, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, url=https://www.redhat.com, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, batch=17.1_20260112.1, build-date=2026-01-12T23:07:30Z, distribution-scope=public, name=rhosp-rhel9/openstack-ceilometer-ipmi, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, com.redhat.component=openstack-ceilometer-ipmi-container, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, config_id=tripleo_step4, container_name=ceilometer_agent_ipmi, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, io.buildah.version=1.41.5, tcib_managed=true, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-ipmi:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', 
'/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, description=Red Hat OpenStack Platform 17.1 ceilometer-ipmi, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64)
Feb 20 08:58:18 np0005625203.localdomain podman[108158]: ceilometer_agent_ipmi
Feb 20 08:58:18 np0005625203.localdomain systemd[1]: tripleo_ceilometer_agent_ipmi.service: Deactivated successfully.
Feb 20 08:58:18 np0005625203.localdomain systemd[1]: Stopped ceilometer_agent_ipmi container.
Feb 20 08:58:18 np0005625203.localdomain sudo[107794]: pam_unix(sudo:session): session closed for user root
Feb 20 08:58:19 np0005625203.localdomain sudo[108260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jpdpynvthjmzcsxmbxchludpqdiezbbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771577898.7520692-109-115109516170111/AnsiballZ_systemd_service.py
Feb 20 08:58:19 np0005625203.localdomain sudo[108260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 08:58:19 np0005625203.localdomain python3.9[108262]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_collectd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 08:58:19 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 08:58:19 np0005625203.localdomain systemd-rc-local-generator[108289]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 08:58:19 np0005625203.localdomain systemd-sysv-generator[108293]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 08:58:19 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 08:58:19 np0005625203.localdomain systemd[1]: Stopping collectd container...
Feb 20 08:58:20 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41193 DF PROTO=TCP SPT=47490 DPT=9100 SEQ=4229380149 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E877800000000001030307) 
Feb 20 08:58:20 np0005625203.localdomain systemd[1]: libpod-3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.scope: Deactivated successfully.
Feb 20 08:58:20 np0005625203.localdomain systemd[1]: libpod-3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.scope: Consumed 2.130s CPU time.
Feb 20 08:58:20 np0005625203.localdomain podman[108302]: 2026-02-20 08:58:20.228189671 +0000 UTC m=+0.463789762 container died 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, tcib_managed=true, io.buildah.version=1.41.5, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-collectd-container, description=Red Hat OpenStack Platform 17.1 collectd, architecture=x86_64, container_name=collectd, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, summary=Red Hat OpenStack Platform 17.1 collectd, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, managed_by=tripleo_ansible, release=1766032510, build-date=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vcs-type=git, name=rhosp-rhel9/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, url=https://www.redhat.com)
Feb 20 08:58:20 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.timer: Deactivated successfully.
Feb 20 08:58:20 np0005625203.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.
Feb 20 08:58:20 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Failed to open /run/systemd/transient/3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: No such file or directory
Feb 20 08:58:20 np0005625203.localdomain podman[108302]: 2026-02-20 08:58:20.287706108 +0000 UTC m=+0.523306189 container cleanup 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, architecture=x86_64, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=collectd, com.redhat.component=openstack-collectd-container, distribution-scope=public, name=rhosp-rhel9/openstack-collectd, build-date=2026-01-12T22:10:15Z, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, batch=17.1_20260112.1, config_id=tripleo_step3, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, managed_by=tripleo_ansible, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, tcib_managed=true, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, release=1766032510, description=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, io.buildah.version=1.41.5)
Feb 20 08:58:20 np0005625203.localdomain podman[108302]: collectd
Feb 20 08:58:20 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.timer: Failed to open /run/systemd/transient/3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.timer: No such file or directory
Feb 20 08:58:20 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Failed to open /run/systemd/transient/3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: No such file or directory
Feb 20 08:58:20 np0005625203.localdomain podman[108314]: 2026-02-20 08:58:20.33717955 +0000 UTC m=+0.091835264 container cleanup 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, name=rhosp-rhel9/openstack-collectd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, url=https://www.redhat.com, managed_by=tripleo_ansible, build-date=2026-01-12T22:10:15Z, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 collectd, batch=17.1_20260112.1, com.redhat.component=openstack-collectd-container, io.buildah.version=1.41.5, io.openshift.expose-services=, distribution-scope=public, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=collectd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step3, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, summary=Red Hat OpenStack Platform 17.1 collectd, version=17.1.13, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z)
Feb 20 08:58:20 np0005625203.localdomain systemd[1]: tripleo_collectd.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:58:20 np0005625203.localdomain systemd[1]: libpod-conmon-3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.scope: Deactivated successfully.
Feb 20 08:58:20 np0005625203.localdomain podman[108343]: error opening file `/run/crun/3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021/status`: No such file or directory
Feb 20 08:58:20 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.timer: Failed to open /run/systemd/transient/3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.timer: No such file or directory
Feb 20 08:58:20 np0005625203.localdomain systemd[1]: 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: Failed to open /run/systemd/transient/3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021.service: No such file or directory
Feb 20 08:58:20 np0005625203.localdomain podman[108333]: 2026-02-20 08:58:20.458845835 +0000 UTC m=+0.080121570 container cleanup 3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021 (image=registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1, name=collectd, io.k8s.description=Red Hat OpenStack Platform 17.1 collectd, build-date=2026-01-12T22:10:15Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-collectd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, io.buildah.version=1.41.5, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-collectd, version=17.1.13, distribution-scope=public, com.redhat.component=openstack-collectd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, config_id=tripleo_step3, container_name=collectd, config_data={'cap_add': ['IPC_LOCK'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '4767aaabc3de112d8791c290aa2b669d'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-collectd:17.1', 'memory': '512m', 'net': 'host', 'pid': 'host', 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/collectd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/containers/storage/overlay-containers:/var/lib/containers/storage/overlay-containers:ro', '/var/lib/config-data/puppet-generated/collectd:/var/lib/kolla/config_files/src:ro', '/var/log/containers/collectd:/var/log/collectd:rw,z', '/var/lib/container-config-scripts:/config-scripts:ro', '/var/lib/container-user-scripts:/scripts:z', '/run:/run:rw', '/sys/fs/cgroup:/sys/fs/cgroup:ro']}, summary=Red Hat OpenStack Platform 17.1 collectd, url=https://www.redhat.com, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 collectd, org.opencontainers.image.created=2026-01-12T22:10:15Z, description=Red Hat OpenStack Platform 17.1 collectd, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64)
Feb 20 08:58:20 np0005625203.localdomain podman[108333]: collectd
Feb 20 08:58:20 np0005625203.localdomain systemd[1]: tripleo_collectd.service: Failed with result 'exit-code'.
Feb 20 08:58:20 np0005625203.localdomain systemd[1]: Stopped collectd container.
Feb 20 08:58:20 np0005625203.localdomain sudo[108260]: pam_unix(sudo:session): session closed for user root
Feb 20 08:58:20 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-4853d8cbf1b20eda152efab24d0e4cb43146df568657aa0bf8852ddc75389e64-merged.mount: Deactivated successfully.
Feb 20 08:58:20 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3a62a404e7efece2c6f1adfdb982dc1c4f20afa2e285bebde4d318ea59636021-userdata-shm.mount: Deactivated successfully.
Feb 20 08:58:20 np0005625203.localdomain sudo[108438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vvtdtnhardntldbxsbaxmpcspbatrmqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771577900.6159565-109-44406345220697/AnsiballZ_systemd_service.py
Feb 20 08:58:20 np0005625203.localdomain sudo[108438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 08:58:21 np0005625203.localdomain python3.9[108440]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_iscsid.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 08:58:22 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 08:58:22 np0005625203.localdomain systemd-sysv-generator[108470]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 08:58:22 np0005625203.localdomain systemd-rc-local-generator[108464]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 08:58:22 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 08:58:22 np0005625203.localdomain systemd[1]: Stopping iscsid container...
Feb 20 08:58:22 np0005625203.localdomain systemd[1]: libpod-54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.scope: Deactivated successfully.
Feb 20 08:58:22 np0005625203.localdomain systemd[1]: libpod-54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.scope: Consumed 1.195s CPU time.
Feb 20 08:58:22 np0005625203.localdomain podman[108481]: 2026-02-20 08:58:22.732104853 +0000 UTC m=+0.082153703 container died 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, summary=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, build-date=2026-01-12T22:34:43Z, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, release=1766032510, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:34:43Z, config_id=tripleo_step3, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, container_name=iscsid, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-iscsid, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, tcib_managed=true, com.redhat.component=openstack-iscsid-container)
Feb 20 08:58:22 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.timer: Deactivated successfully.
Feb 20 08:58:22 np0005625203.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.
Feb 20 08:58:22 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Failed to open /run/systemd/transient/54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: No such file or directory
Feb 20 08:58:22 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46-userdata-shm.mount: Deactivated successfully.
Feb 20 08:58:22 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-bfd7902c54f5e5aa9d28d09659c7eace41e35d954c014d763e6c73387e43d5dd-merged.mount: Deactivated successfully.
Feb 20 08:58:22 np0005625203.localdomain podman[108481]: 2026-02-20 08:58:22.812107528 +0000 UTC m=+0.162156348 container cleanup 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 iscsid, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, com.redhat.component=openstack-iscsid-container, release=1766032510, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-iscsid, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, build-date=2026-01-12T22:34:43Z, batch=17.1_20260112.1, managed_by=tripleo_ansible, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T22:34:43Z, version=17.1.13, architecture=x86_64, vcs-type=git, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, container_name=iscsid)
Feb 20 08:58:22 np0005625203.localdomain podman[108481]: iscsid
Feb 20 08:58:22 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.timer: Failed to open /run/systemd/transient/54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.timer: No such file or directory
Feb 20 08:58:22 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Failed to open /run/systemd/transient/54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: No such file or directory
Feb 20 08:58:22 np0005625203.localdomain podman[108495]: 2026-02-20 08:58:22.836386085 +0000 UTC m=+0.093657252 container cleanup 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, description=Red Hat OpenStack Platform 17.1 iscsid, version=17.1.13, vendor=Red Hat, Inc., com.redhat.component=openstack-iscsid-container, io.openshift.expose-services=, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 iscsid, distribution-scope=public, io.buildah.version=1.41.5, architecture=x86_64, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, vcs-ref=705339545363fec600102567c4e923938e0f43b3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-iscsid, batch=17.1_20260112.1, container_name=iscsid, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:34:43Z)
Feb 20 08:58:22 np0005625203.localdomain systemd[1]: libpod-conmon-54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.scope: Deactivated successfully.
Feb 20 08:58:22 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.timer: Failed to open /run/systemd/transient/54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.timer: No such file or directory
Feb 20 08:58:22 np0005625203.localdomain systemd[1]: 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: Failed to open /run/systemd/transient/54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46.service: No such file or directory
Feb 20 08:58:22 np0005625203.localdomain podman[108510]: 2026-02-20 08:58:22.94366721 +0000 UTC m=+0.076074963 container cleanup 54507d0f95eb2d9af620044d6e6481cb642b3449ee74bbdb21478acb5c396b46 (image=registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1, name=iscsid, vcs-ref=705339545363fec600102567c4e923938e0f43b3, name=rhosp-rhel9/openstack-iscsid, version=17.1.13, managed_by=tripleo_ansible, io.buildah.version=1.41.5, release=1766032510, vcs-type=git, description=Red Hat OpenStack Platform 17.1 iscsid, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 iscsid, io.k8s.description=Red Hat OpenStack Platform 17.1 iscsid, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-iscsid, distribution-scope=public, com.redhat.component=openstack-iscsid-container, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:34:43Z, org.opencontainers.image.revision=705339545363fec600102567c4e923938e0f43b3, build-date=2026-01-12T22:34:43Z, config_id=tripleo_step3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-iscsid:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 2, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z']}, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, batch=17.1_20260112.1, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 iscsid, url=https://www.redhat.com, container_name=iscsid)
Feb 20 08:58:22 np0005625203.localdomain podman[108510]: iscsid
Feb 20 08:58:22 np0005625203.localdomain systemd[1]: tripleo_iscsid.service: Deactivated successfully.
Feb 20 08:58:22 np0005625203.localdomain systemd[1]: Stopped iscsid container.
Feb 20 08:58:22 np0005625203.localdomain sudo[108438]: pam_unix(sudo:session): session closed for user root
Feb 20 08:58:23 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=621 DF PROTO=TCP SPT=40220 DPT=9101 SEQ=1332775424 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E883010000000001030307) 
Feb 20 08:58:23 np0005625203.localdomain sudo[108611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kspacplgndsykmcrrrhccxfcvdvecfbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771577903.1311219-109-94011402267140/AnsiballZ_systemd_service.py
Feb 20 08:58:23 np0005625203.localdomain sudo[108611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 08:58:23 np0005625203.localdomain python3.9[108613]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_logrotate_crond.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 08:58:23 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 08:58:23 np0005625203.localdomain systemd-rc-local-generator[108642]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 08:58:23 np0005625203.localdomain systemd-sysv-generator[108645]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 08:58:23 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 08:58:24 np0005625203.localdomain systemd[1]: Stopping logrotate_crond container...
Feb 20 08:58:24 np0005625203.localdomain crond[71674]: (CRON) INFO (Shutting down)
Feb 20 08:58:24 np0005625203.localdomain systemd[1]: libpod-0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.scope: Deactivated successfully.
Feb 20 08:58:24 np0005625203.localdomain podman[108653]: 2026-02-20 08:58:24.253062521 +0000 UTC m=+0.076380752 container died 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, batch=17.1_20260112.1, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, managed_by=tripleo_ansible, vcs-type=git, config_id=tripleo_step4, version=17.1.13, vendor=Red Hat, Inc., io.k8s.description=Red Hat OpenStack Platform 17.1 cron, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, com.redhat.component=openstack-cron-container, distribution-scope=public, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 cron, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.created=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, url=https://www.redhat.com, container_name=logrotate_crond, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T22:10:15Z, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=)
Feb 20 08:58:24 np0005625203.localdomain systemd[1]: tmp-crun.eni3T1.mount: Deactivated successfully.
Feb 20 08:58:24 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.timer: Deactivated successfully.
Feb 20 08:58:24 np0005625203.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.
Feb 20 08:58:24 np0005625203.localdomain sshd[108675]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:58:24 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Failed to open /run/systemd/transient/0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: No such file or directory
Feb 20 08:58:24 np0005625203.localdomain podman[108653]: 2026-02-20 08:58:24.336567175 +0000 UTC m=+0.159885356 container cleanup 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, release=1766032510, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:15Z, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, container_name=logrotate_crond, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, io.openshift.expose-services=, org.opencontainers.image.created=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-cron, tcib_managed=true, distribution-scope=public, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 cron, config_id=tripleo_step4, managed_by=tripleo_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 cron, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron)
Feb 20 08:58:24 np0005625203.localdomain podman[108653]: logrotate_crond
Feb 20 08:58:24 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.timer: Failed to open /run/systemd/transient/0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.timer: No such file or directory
Feb 20 08:58:24 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Failed to open /run/systemd/transient/0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: No such file or directory
Feb 20 08:58:24 np0005625203.localdomain podman[108666]: 2026-02-20 08:58:24.362238566 +0000 UTC m=+0.099412121 container cleanup 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, distribution-scope=public, build-date=2026-01-12T22:10:15Z, container_name=logrotate_crond, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vcs-type=git, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron, batch=17.1_20260112.1, io.openshift.expose-services=, release=1766032510, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, com.redhat.component=openstack-cron-container, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, version=17.1.13, org.opencontainers.image.created=2026-01-12T22:10:15Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']})
Feb 20 08:58:24 np0005625203.localdomain systemd[1]: libpod-conmon-0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.scope: Deactivated successfully.
Feb 20 08:58:24 np0005625203.localdomain podman[108697]: error opening file `/run/crun/0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2/status`: No such file or directory
Feb 20 08:58:24 np0005625203.localdomain sshd[108675]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:58:24 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.timer: Failed to open /run/systemd/transient/0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.timer: No such file or directory
Feb 20 08:58:24 np0005625203.localdomain systemd[1]: 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: Failed to open /run/systemd/transient/0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2.service: No such file or directory
Feb 20 08:58:24 np0005625203.localdomain podman[108686]: 2026-02-20 08:58:24.490112893 +0000 UTC m=+0.090724460 container cleanup 0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2 (image=registry.redhat.io/rhosp-rhel9/openstack-cron:17.1, name=logrotate_crond, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 cron, vcs-type=git, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 cron, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, batch=17.1_20260112.1, container_name=logrotate_crond, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '53ed83bb0cae779ff95edb2002262c6f'}, 'healthcheck': {'test': '/usr/share/openstack-tripleo-common/healthcheck/cron'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-cron:17.1', 'net': 'none', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/logrotate-crond.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/crond:/var/lib/kolla/config_files/src:ro', '/var/log/containers:/var/log/containers:z']}, build-date=2026-01-12T22:10:15Z, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, release=1766032510, org.opencontainers.image.created=2026-01-12T22:10:15Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step4, vendor=Red Hat, Inc., com.redhat.component=openstack-cron-container, summary=Red Hat OpenStack Platform 17.1 cron, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, url=https://www.redhat.com, version=17.1.13, description=Red Hat OpenStack Platform 17.1 cron, name=rhosp-rhel9/openstack-cron, managed_by=tripleo_ansible, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-cron)
Feb 20 08:58:24 np0005625203.localdomain podman[108686]: logrotate_crond
Feb 20 08:58:24 np0005625203.localdomain systemd[1]: tripleo_logrotate_crond.service: Deactivated successfully.
Feb 20 08:58:24 np0005625203.localdomain systemd[1]: Stopped logrotate_crond container.
Feb 20 08:58:24 np0005625203.localdomain sudo[108611]: pam_unix(sudo:session): session closed for user root
Feb 20 08:58:24 np0005625203.localdomain sudo[108789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-efbbhesdhknwktgxprwmbbknhjbweuqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771577904.6624064-109-121691594515913/AnsiballZ_systemd_service.py
Feb 20 08:58:24 np0005625203.localdomain sudo[108789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 08:58:25 np0005625203.localdomain systemd[1]: tmp-crun.EROKAI.mount: Deactivated successfully.
Feb 20 08:58:25 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-e0b96efd9497779e5b269af474ffc26b10023ac5cdee3873df81b8013dae2e66-merged.mount: Deactivated successfully.
Feb 20 08:58:25 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0224ec57d09bf598af3f3cf545ad6dd41c4fd21344bf6bacdadba46783c80da2-userdata-shm.mount: Deactivated successfully.
Feb 20 08:58:25 np0005625203.localdomain python3.9[108791]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_metrics_qdr.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 08:58:25 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 08:58:25 np0005625203.localdomain systemd-rc-local-generator[108819]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 08:58:25 np0005625203.localdomain systemd-sysv-generator[108823]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 08:58:25 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 08:58:25 np0005625203.localdomain systemd[1]: Stopping metrics_qdr container...
Feb 20 08:58:25 np0005625203.localdomain kernel: qdrouterd[54908]: segfault at 0 ip 00007f6a536b07cb sp 00007ffeca568f20 error 4 in libc.so.6[7f6a5364d000+175000]
Feb 20 08:58:25 np0005625203.localdomain kernel: Code: 0b 00 64 44 89 23 85 c0 75 d4 e9 2b ff ff ff e8 db a5 00 00 e9 fd fe ff ff e8 41 1d 0d 00 90 f3 0f 1e fa 41 54 55 48 89 fd 53 <8b> 07 f6 c4 20 0f 85 aa 00 00 00 89 c2 81 e2 00 80 00 00 0f 84 a9
Feb 20 08:58:25 np0005625203.localdomain systemd[1]: Created slice Slice /system/systemd-coredump.
Feb 20 08:58:25 np0005625203.localdomain systemd[1]: Started Process Core Dump (PID 108845/UID 0).
Feb 20 08:58:25 np0005625203.localdomain systemd-coredump[108846]: Resource limits disable core dumping for process 54908 (qdrouterd).
Feb 20 08:58:25 np0005625203.localdomain systemd-coredump[108846]: Process 54908 (qdrouterd) of user 42465 dumped core.
Feb 20 08:58:25 np0005625203.localdomain systemd[1]: systemd-coredump@0-108845-0.service: Deactivated successfully.
Feb 20 08:58:25 np0005625203.localdomain podman[108832]: 2026-02-20 08:58:25.91864474 +0000 UTC m=+0.222446548 container died a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.buildah.version=1.41.5, tcib_managed=true, vcs-type=git, architecture=x86_64, com.redhat.component=openstack-qdrouterd-container, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, summary=Red Hat OpenStack Platform 17.1 qdrouterd, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-qdrouterd, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, config_id=tripleo_step1, 
container_name=metrics_qdr, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:10:14Z, managed_by=tripleo_ansible, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:14Z, vendor=Red Hat, Inc., url=https://www.redhat.com)
Feb 20 08:58:25 np0005625203.localdomain systemd[1]: libpod-a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.scope: Deactivated successfully.
Feb 20 08:58:25 np0005625203.localdomain systemd[1]: libpod-a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.scope: Consumed 28.499s CPU time.
Feb 20 08:58:25 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.timer: Deactivated successfully.
Feb 20 08:58:25 np0005625203.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.
Feb 20 08:58:25 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Failed to open /run/systemd/transient/a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: No such file or directory
Feb 20 08:58:26 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f-userdata-shm.mount: Deactivated successfully.
Feb 20 08:58:26 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-de09a3fcf798e8d6f765a1320359ed0f97a6a1d1a2a8fd17434f89e173a7556b-merged.mount: Deactivated successfully.
Feb 20 08:58:26 np0005625203.localdomain podman[108832]: 2026-02-20 08:58:26.074091047 +0000 UTC m=+0.377892885 container cleanup a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, io.buildah.version=1.41.5, url=https://www.redhat.com, vendor=Red Hat, Inc., config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, release=1766032510, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, com.redhat.component=openstack-qdrouterd-container, config_id=tripleo_step1, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:10:14Z, 
org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, vcs-type=git, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-qdrouterd, container_name=metrics_qdr, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, tcib_managed=true, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 qdrouterd)
Feb 20 08:58:26 np0005625203.localdomain podman[108832]: metrics_qdr
Feb 20 08:58:26 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.timer: Failed to open /run/systemd/transient/a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.timer: No such file or directory
Feb 20 08:58:26 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Failed to open /run/systemd/transient/a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: No such file or directory
Feb 20 08:58:26 np0005625203.localdomain podman[108850]: 2026-02-20 08:58:26.090056985 +0000 UTC m=+0.155378736 container cleanup a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, com.redhat.component=openstack-qdrouterd-container, url=https://www.redhat.com, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, vendor=Red Hat, Inc., version=17.1.13, io.openshift.tags=rhosp 
osp openstack osp-17.1 openstack-qdrouterd, konflux.additional-tags=17.1.13 17.1_20260112.1, container_name=metrics_qdr, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 qdrouterd, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-qdrouterd, release=1766032510, build-date=2026-01-12T22:10:14Z, tcib_managed=true, vcs-type=git, config_id=tripleo_step1, org.opencontainers.image.created=2026-01-12T22:10:14Z, description=Red Hat OpenStack Platform 17.1 qdrouterd)
Feb 20 08:58:26 np0005625203.localdomain systemd[1]: tripleo_metrics_qdr.service: Main process exited, code=exited, status=139/n/a
Feb 20 08:58:26 np0005625203.localdomain systemd[1]: libpod-conmon-a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.scope: Deactivated successfully.
Feb 20 08:58:26 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.timer: Failed to open /run/systemd/transient/a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.timer: No such file or directory
Feb 20 08:58:26 np0005625203.localdomain systemd[1]: a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: Failed to open /run/systemd/transient/a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f.service: No such file or directory
Feb 20 08:58:26 np0005625203.localdomain podman[108865]: 2026-02-20 08:58:26.188564947 +0000 UTC m=+0.067951870 container cleanup a75dd84657efb8ac97583f66b9954e1f0996042d5573a4f375efdec0815ba96f (image=registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1, name=metrics_qdr, config_id=tripleo_step1, name=rhosp-rhel9/openstack-qdrouterd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-qdrouterd, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 qdrouterd, io.k8s.description=Red Hat OpenStack Platform 17.1 qdrouterd, description=Red Hat OpenStack Platform 17.1 qdrouterd, org.opencontainers.image.created=2026-01-12T22:10:14Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '03a088c43b03e1e2de6da7d8c7c66191'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-qdrouterd:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 1, 'user': 'qdrouterd', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/metrics_qdr.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/metrics_qdr:/var/lib/kolla/config_files/src:ro', '/var/lib/metrics_qdr:/var/lib/qdrouterd:z', '/var/log/containers/metrics_qdr:/var/log/qdrouterd:z']}, distribution-scope=public, org.opencontainers.image.revision=4ce33df42b67d01daba32be86cee6dc0ad95cdee, io.openshift.expose-services=, container_name=metrics_qdr, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, 
managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 qdrouterd, build-date=2026-01-12T22:10:14Z, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, batch=17.1_20260112.1, vcs-ref=4ce33df42b67d01daba32be86cee6dc0ad95cdee, maintainer=OpenStack TripleO Team, release=1766032510, io.buildah.version=1.41.5, vendor=Red Hat, Inc., com.redhat.component=openstack-qdrouterd-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:58:26 np0005625203.localdomain podman[108865]: metrics_qdr
Feb 20 08:58:26 np0005625203.localdomain systemd[1]: tripleo_metrics_qdr.service: Failed with result 'exit-code'.
Feb 20 08:58:26 np0005625203.localdomain systemd[1]: Stopped metrics_qdr container.
Feb 20 08:58:26 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41195 DF PROTO=TCP SPT=47490 DPT=9100 SEQ=4229380149 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E88F400000000001030307) 
Feb 20 08:58:26 np0005625203.localdomain sudo[108789]: pam_unix(sudo:session): session closed for user root
Feb 20 08:58:26 np0005625203.localdomain sudo[108966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hzaydkifsaherourczmnrwmrzalomejk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771577906.3563662-109-8988932972089/AnsiballZ_systemd_service.py
Feb 20 08:58:26 np0005625203.localdomain sudo[108966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 08:58:26 np0005625203.localdomain python3.9[108968]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_neutron_dhcp.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 08:58:27 np0005625203.localdomain sudo[108966]: pam_unix(sudo:session): session closed for user root
Feb 20 08:58:27 np0005625203.localdomain sudo[109059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ateosjxildejwpopqxenidpajgsjqhot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771577907.1125584-109-87750350090489/AnsiballZ_systemd_service.py
Feb 20 08:58:27 np0005625203.localdomain sudo[109059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 08:58:27 np0005625203.localdomain python3.9[109061]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_neutron_l3_agent.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 08:58:27 np0005625203.localdomain sudo[109059]: pam_unix(sudo:session): session closed for user root
Feb 20 08:58:28 np0005625203.localdomain sudo[109152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eizcczpzhxvznqcmpmyqtsopanspiycm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771577907.7722278-109-67947089861158/AnsiballZ_systemd_service.py
Feb 20 08:58:28 np0005625203.localdomain sudo[109152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 08:58:28 np0005625203.localdomain python3.9[109154]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_neutron_ovs_agent.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 08:58:28 np0005625203.localdomain sudo[109152]: pam_unix(sudo:session): session closed for user root
Feb 20 08:58:28 np0005625203.localdomain sudo[109245]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-edvwozodwtusrnweysohkbgoorhhhzcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771577908.4429605-109-107398877825565/AnsiballZ_systemd_service.py
Feb 20 08:58:28 np0005625203.localdomain sudo[109245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 08:58:28 np0005625203.localdomain python3.9[109247]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 08:58:29 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55068 DF PROTO=TCP SPT=43968 DPT=9102 SEQ=3067019015 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E89B550000000001030307) 
Feb 20 08:58:30 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:58:30 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 08:58:30 np0005625203.localdomain systemd-sysv-generator[109294]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 08:58:30 np0005625203.localdomain systemd-rc-local-generator[109291]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 08:58:30 np0005625203.localdomain podman[109251]: 2026-02-20 08:58:30.184186323 +0000 UTC m=+0.117032410 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, 
org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, architecture=x86_64, org.opencontainers.image.created=2026-01-12T22:56:19Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, cpe=cpe:/a:redhat:openstack:17.1::el9, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., distribution-scope=public, container_name=ovn_metadata_agent, batch=17.1_20260112.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, io.buildah.version=1.41.5, version=17.1.13)
Feb 20 08:58:30 np0005625203.localdomain podman[109251]: 2026-02-20 08:58:30.202183255 +0000 UTC m=+0.135029312 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, release=1766032510, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', 
'/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, url=https://www.redhat.com, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, container_name=ovn_metadata_agent, build-date=2026-01-12T22:56:19Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, distribution-scope=public, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13)
Feb 20 08:58:30 np0005625203.localdomain podman[109251]: unhealthy
Feb 20 08:58:30 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 08:58:30 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:58:30 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Failed with result 'exit-code'.
Feb 20 08:58:30 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:58:30 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:58:30 np0005625203.localdomain systemd[1]: Stopping nova_compute container...
Feb 20 08:58:30 np0005625203.localdomain podman[109308]: 2026-02-20 08:58:30.492783927 +0000 UTC m=+0.079704567 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.buildah.version=1.41.5, build-date=2026-01-12T22:36:40Z, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, distribution-scope=public, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ovn-controller, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, url=https://www.redhat.com, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack 
Platform 17.1 ovn-controller, architecture=x86_64, container_name=ovn_controller, batch=17.1_20260112.1, release=1766032510, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:58:30 np0005625203.localdomain podman[109308]: 2026-02-20 08:58:30.515295069 +0000 UTC m=+0.102215739 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, konflux.additional-tags=17.1.13 17.1_20260112.1, version=17.1.13, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.openshift.expose-services=, io.buildah.version=1.41.5, managed_by=tripleo_ansible, com.redhat.component=openstack-ovn-controller-container, cpe=cpe:/a:redhat:openstack:17.1::el9, name=rhosp-rhel9/openstack-ovn-controller, config_id=tripleo_step4, tcib_managed=true, org.opencontainers.image.created=2026-01-12T22:36:40Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, description=Red Hat OpenStack Platform 17.1 ovn-controller, release=1766032510, vendor=Red Hat, Inc., architecture=x86_64, batch=17.1_20260112.1, build-date=2026-01-12T22:36:40Z, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, url=https://www.redhat.com, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, 
baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c)
Feb 20 08:58:30 np0005625203.localdomain podman[109307]: 2026-02-20 08:58:30.549048811 +0000 UTC m=+0.134592818 container health_status 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, health_status=unhealthy, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_compute, batch=17.1_20260112.1, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', 
'/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.5, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, managed_by=tripleo_ansible, config_id=tripleo_step5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., architecture=x86_64, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, org.opencontainers.image.created=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute)
Feb 20 08:58:30 np0005625203.localdomain podman[109308]: unhealthy
Feb 20 08:58:30 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:58:30 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Failed with result 'exit-code'.
Feb 20 08:58:30 np0005625203.localdomain podman[109307]: 2026-02-20 08:58:30.576329891 +0000 UTC m=+0.161873898 container exec_died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, maintainer=OpenStack TripleO Team, config_id=tripleo_step5, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, build-date=2026-01-12T23:32:04Z, io.openshift.expose-services=, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vendor=Red Hat, Inc., container_name=nova_compute, vcs-type=git, distribution-scope=public, name=rhosp-rhel9/openstack-nova-compute, batch=17.1_20260112.1, tcib_managed=true, org.opencontainers.image.created=2026-01-12T23:32:04Z, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute)
Feb 20 08:58:30 np0005625203.localdomain podman[109307]: unhealthy
Feb 20 08:58:30 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:58:30 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Failed with result 'exit-code'.
Feb 20 08:58:32 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58263 DF PROTO=TCP SPT=50168 DPT=9882 SEQ=2525655600 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E8A6410000000001030307) 
Feb 20 08:58:36 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58264 DF PROTO=TCP SPT=50168 DPT=9882 SEQ=2525655600 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E8B6010000000001030307) 
Feb 20 08:58:37 np0005625203.localdomain sshd[109365]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:58:37 np0005625203.localdomain sshd[109365]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:58:38 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=963 DF PROTO=TCP SPT=47022 DPT=9105 SEQ=223180354 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E8BDC00000000001030307) 
Feb 20 08:58:41 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:58:41 np0005625203.localdomain podman[109367]: 2026-02-20 08:58:41.262960868 +0000 UTC m=+0.083122384 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.41.5, tcib_managed=true, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 nova-compute, config_id=tripleo_step4, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1)
Feb 20 08:58:41 np0005625203.localdomain podman[109367]: 2026-02-20 08:58:41.669399792 +0000 UTC m=+0.489561318 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, org.opencontainers.image.created=2026-01-12T23:32:04Z, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, version=17.1.13, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, tcib_managed=true, url=https://www.redhat.com, build-date=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, architecture=x86_64, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, 
org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, name=rhosp-rhel9/openstack-nova-compute, release=1766032510, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, config_id=tripleo_step4, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute)
Feb 20 08:58:41 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:58:43 np0005625203.localdomain sshd[109390]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:58:43 np0005625203.localdomain sshd[109390]: Invalid user  from 194.187.176.105 port 58838
Feb 20 08:58:43 np0005625203.localdomain sshd[109390]: Connection closed by invalid user  194.187.176.105 port 58838 [preauth]
Feb 20 08:58:44 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=965 DF PROTO=TCP SPT=47022 DPT=9105 SEQ=223180354 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E8D5800000000001030307) 
Feb 20 08:58:44 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58265 DF PROTO=TCP SPT=50168 DPT=9882 SEQ=2525655600 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E8D6800000000001030307) 
Feb 20 08:58:47 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56246 DF PROTO=TCP SPT=47688 DPT=9101 SEQ=1869294119 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E8E0800000000001030307) 
Feb 20 08:58:47 np0005625203.localdomain sudo[109392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:58:47 np0005625203.localdomain sudo[109392]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:58:47 np0005625203.localdomain sudo[109392]: pam_unix(sudo:session): session closed for user root
Feb 20 08:58:47 np0005625203.localdomain sudo[109407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 08:58:47 np0005625203.localdomain sudo[109407]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:58:48 np0005625203.localdomain sudo[109407]: pam_unix(sudo:session): session closed for user root
Feb 20 08:58:48 np0005625203.localdomain sudo[109454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 08:58:48 np0005625203.localdomain sudo[109454]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:58:48 np0005625203.localdomain sudo[109454]: pam_unix(sudo:session): session closed for user root
Feb 20 08:58:50 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11597 DF PROTO=TCP SPT=35758 DPT=9101 SEQ=790704660 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E8EC800000000001030307) 
Feb 20 08:58:53 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56248 DF PROTO=TCP SPT=47688 DPT=9101 SEQ=1869294119 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E8F8400000000001030307) 
Feb 20 08:58:56 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28951 DF PROTO=TCP SPT=38734 DPT=9100 SEQ=2759316194 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E904800000000001030307) 
Feb 20 08:58:59 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30989 DF PROTO=TCP SPT=38704 DPT=9102 SEQ=4124166063 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E910850000000001030307) 
Feb 20 08:59:00 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:59:00 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:59:00 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:59:00 np0005625203.localdomain podman[109469]: Error: container 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 is not running
Feb 20 08:59:00 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Main process exited, code=exited, status=125/n/a
Feb 20 08:59:00 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Failed with result 'exit-code'.
Feb 20 08:59:00 np0005625203.localdomain podman[109470]: 2026-02-20 08:59:00.839209509 +0000 UTC m=+0.150622189 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, vendor=Red Hat, Inc., version=17.1.13, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, config_id=tripleo_step4, 
build-date=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, vcs-type=git, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, tcib_managed=true, release=1766032510, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, container_name=ovn_metadata_agent, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 08:59:00 np0005625203.localdomain podman[109470]: 2026-02-20 08:59:00.861603447 +0000 UTC m=+0.173016157 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.created=2026-01-12T22:56:19Z, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, 
org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, version=17.1.13, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, batch=17.1_20260112.1, vcs-type=git, config_id=tripleo_step4, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, distribution-scope=public, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, architecture=x86_64, container_name=ovn_metadata_agent, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:59:00 np0005625203.localdomain systemd[1]: tmp-crun.65ECGn.mount: Deactivated successfully.
Feb 20 08:59:00 np0005625203.localdomain podman[109471]: 2026-02-20 08:59:00.892900903 +0000 UTC m=+0.200184503 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, summary=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, batch=17.1_20260112.1, url=https://www.redhat.com, io.openshift.expose-services=, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, distribution-scope=public, name=rhosp-rhel9/openstack-ovn-controller, release=1766032510, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, 
architecture=x86_64, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.created=2026-01-12T22:36:40Z, vendor=Red Hat, Inc., version=17.1.13, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4)
Feb 20 08:59:00 np0005625203.localdomain podman[109471]: 2026-02-20 08:59:00.911292057 +0000 UTC m=+0.218575637 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, version=17.1.13, batch=17.1_20260112.1, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, build-date=2026-01-12T22:36:40Z, managed_by=tripleo_ansible, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, com.redhat.component=openstack-ovn-controller-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, config_id=tripleo_step4, release=1766032510, container_name=ovn_controller, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, vendor=Red Hat, Inc., architecture=x86_64, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', 
'/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ovn-controller)
Feb 20 08:59:00 np0005625203.localdomain podman[109471]: unhealthy
Feb 20 08:59:00 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:59:00 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Failed with result 'exit-code'.
Feb 20 08:59:00 np0005625203.localdomain podman[109470]: unhealthy
Feb 20 08:59:00 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:59:00 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Failed with result 'exit-code'.
Feb 20 08:59:02 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61065 DF PROTO=TCP SPT=55892 DPT=9882 SEQ=3731806943 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E91B400000000001030307) 
Feb 20 08:59:06 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61066 DF PROTO=TCP SPT=55892 DPT=9882 SEQ=3731806943 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E92B000000000001030307) 
Feb 20 08:59:08 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=426 DF PROTO=TCP SPT=47070 DPT=9105 SEQ=1672072562 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E932C00000000001030307) 
Feb 20 08:59:09 np0005625203.localdomain sshd[109522]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:59:09 np0005625203.localdomain sshd[109522]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:59:11 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58822 DF PROTO=TCP SPT=39008 DPT=9105 SEQ=1268920925 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E93E810000000001030307) 
Feb 20 08:59:11 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:59:12 np0005625203.localdomain systemd[1]: tmp-crun.UGIvlq.mount: Deactivated successfully.
Feb 20 08:59:12 np0005625203.localdomain podman[109524]: 2026-02-20 08:59:12.02686072 +0000 UTC m=+0.089884304 container health_status 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, health_status=healthy, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, build-date=2026-01-12T23:32:04Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, io.openshift.expose-services=, 
name=rhosp-rhel9/openstack-nova-compute, container_name=nova_migration_target, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, managed_by=tripleo_ansible, version=17.1.13, distribution-scope=public, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-compute-container, vendor=Red Hat, Inc., summary=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team)
Feb 20 08:59:12 np0005625203.localdomain podman[109524]: 2026-02-20 08:59:12.396642011 +0000 UTC m=+0.459665655 container exec_died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T23:32:04Z, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, version=17.1.13, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, build-date=2026-01-12T23:32:04Z, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-nova-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, io.openshift.expose-services=, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, batch=17.1_20260112.1)
Feb 20 08:59:12 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Deactivated successfully.
Feb 20 08:59:12 np0005625203.localdomain podman[109310]: time="2026-02-20T08:59:12Z" level=warning msg="StopSignal SIGTERM failed to stop container nova_compute in 42 seconds, resorting to SIGKILL"
Feb 20 08:59:12 np0005625203.localdomain systemd[1]: libpod-31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.scope: Deactivated successfully.
Feb 20 08:59:12 np0005625203.localdomain systemd[1]: libpod-31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.scope: Consumed 28.970s CPU time.
Feb 20 08:59:12 np0005625203.localdomain podman[109310]: 2026-02-20 08:59:12.675361531 +0000 UTC m=+42.253395364 container died 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, 
vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T23:32:04Z, version=17.1.13, summary=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, io.openshift.expose-services=, tcib_managed=true, container_name=nova_compute, batch=17.1_20260112.1, config_id=tripleo_step5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, managed_by=tripleo_ansible, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, distribution-scope=public, vcs-type=git, com.redhat.component=openstack-nova-compute-container, description=Red Hat OpenStack Platform 17.1 nova-compute)
Feb 20 08:59:12 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.timer: Deactivated successfully.
Feb 20 08:59:12 np0005625203.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.
Feb 20 08:59:12 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Failed to open /run/systemd/transient/31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: No such file or directory
Feb 20 08:59:12 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f79d573e93a443ab224a39aaf10649c6676b1bd955bb2a70f97bd5d3d23c519b-merged.mount: Deactivated successfully.
Feb 20 08:59:12 np0005625203.localdomain podman[109310]: 2026-02-20 08:59:12.739853642 +0000 UTC m=+42.317887445 container cleanup 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., container_name=nova_compute, managed_by=tripleo_ansible, build-date=2026-01-12T23:32:04Z, name=rhosp-rhel9/openstack-nova-compute, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, release=1766032510, batch=17.1_20260112.1, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, architecture=x86_64, io.openshift.expose-services=, tcib_managed=true, distribution-scope=public, vcs-type=git, config_id=tripleo_step5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe)
Feb 20 08:59:12 np0005625203.localdomain podman[109310]: nova_compute
Feb 20 08:59:12 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.timer: Failed to open /run/systemd/transient/31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.timer: No such file or directory
Feb 20 08:59:12 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Failed to open /run/systemd/transient/31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: No such file or directory
Feb 20 08:59:12 np0005625203.localdomain podman[109547]: 2026-02-20 08:59:12.770269791 +0000 UTC m=+0.082827473 container cleanup 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, container_name=nova_compute, batch=17.1_20260112.1, vendor=Red Hat, Inc., baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, name=rhosp-rhel9/openstack-nova-compute, config_id=tripleo_step5, build-date=2026-01-12T23:32:04Z, architecture=x86_64, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', 
'/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, com.redhat.component=openstack-nova-compute-container, vcs-type=git, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible, tcib_managed=true)
Feb 20 08:59:12 np0005625203.localdomain systemd[1]: libpod-conmon-31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.scope: Deactivated successfully.
Feb 20 08:59:12 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.timer: Failed to open /run/systemd/transient/31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.timer: No such file or directory
Feb 20 08:59:12 np0005625203.localdomain systemd[1]: 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: Failed to open /run/systemd/transient/31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4.service: No such file or directory
Feb 20 08:59:12 np0005625203.localdomain podman[109562]: 2026-02-20 08:59:12.875846894 +0000 UTC m=+0.072366408 container cleanup 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, architecture=x86_64, name=rhosp-rhel9/openstack-nova-compute, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step5, version=17.1.13, managed_by=tripleo_ansible, vendor=Red Hat, Inc., config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.component=openstack-nova-compute-container, distribution-scope=public, maintainer=OpenStack TripleO Team, release=1766032510, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, build-date=2026-01-12T23:32:04Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:32:04Z, container_name=nova_compute, io.openshift.expose-services=)
Feb 20 08:59:12 np0005625203.localdomain podman[109562]: nova_compute
Feb 20 08:59:12 np0005625203.localdomain systemd[1]: tripleo_nova_compute.service: Deactivated successfully.
Feb 20 08:59:12 np0005625203.localdomain systemd[1]: Stopped nova_compute container.
Feb 20 08:59:12 np0005625203.localdomain systemd[1]: tripleo_nova_compute.service: Consumed 1.169s CPU time, no IO.
Feb 20 08:59:12 np0005625203.localdomain sudo[109245]: pam_unix(sudo:session): session closed for user root
Feb 20 08:59:13 np0005625203.localdomain sudo[109663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-viizkwwqihqgduoophakcsfkgcwnrnzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771577953.041837-109-97245216329272/AnsiballZ_systemd_service.py
Feb 20 08:59:13 np0005625203.localdomain sudo[109663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 08:59:13 np0005625203.localdomain python3.9[109665]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 08:59:13 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 08:59:13 np0005625203.localdomain systemd-rc-local-generator[109691]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 08:59:13 np0005625203.localdomain systemd-sysv-generator[109695]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 08:59:13 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 08:59:14 np0005625203.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Feb 20 08:59:14 np0005625203.localdomain systemd[1]: Stopping nova_migration_target container...
Feb 20 08:59:14 np0005625203.localdomain recover_tripleo_nova_virtqemud[109706]: 62505
Feb 20 08:59:14 np0005625203.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Feb 20 08:59:14 np0005625203.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Feb 20 08:59:14 np0005625203.localdomain sshd[72022]: Received signal 15; terminating.
Feb 20 08:59:14 np0005625203.localdomain systemd[1]: libpod-1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.scope: Deactivated successfully.
Feb 20 08:59:14 np0005625203.localdomain systemd[1]: libpod-1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.scope: Consumed 35.072s CPU time.
Feb 20 08:59:14 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=428 DF PROTO=TCP SPT=47070 DPT=9105 SEQ=1672072562 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E94A800000000001030307) 
Feb 20 08:59:14 np0005625203.localdomain podman[109708]: 2026-02-20 08:59:14.148725276 +0000 UTC m=+0.084776565 container died 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:32:04Z, batch=17.1_20260112.1, release=1766032510, name=rhosp-rhel9/openstack-nova-compute, tcib_managed=true, version=17.1.13, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, 
io.openshift.expose-services=, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, config_id=tripleo_step4, architecture=x86_64, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., build-date=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_migration_target, description=Red Hat OpenStack Platform 17.1 nova-compute, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-nova-compute-container, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, vcs-type=git)
Feb 20 08:59:14 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.timer: Deactivated successfully.
Feb 20 08:59:14 np0005625203.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.
Feb 20 08:59:14 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Failed to open /run/systemd/transient/1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: No such file or directory
Feb 20 08:59:14 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-37d662a0623c7987d6654fd890eea9e2ed325a6255b9d568db582e750601fc64-merged.mount: Deactivated successfully.
Feb 20 08:59:14 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86-userdata-shm.mount: Deactivated successfully.
Feb 20 08:59:14 np0005625203.localdomain podman[109708]: 2026-02-20 08:59:14.199115797 +0000 UTC m=+0.135167036 container cleanup 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-compute, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-nova-compute-container, org.opencontainers.image.created=2026-01-12T23:32:04Z, url=https://www.redhat.com, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, container_name=nova_migration_target, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, 
org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, version=17.1.13, maintainer=OpenStack TripleO Team, name=rhosp-rhel9/openstack-nova-compute, architecture=x86_64, build-date=2026-01-12T23:32:04Z, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step4, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, tcib_managed=true, distribution-scope=public, release=1766032510)
Feb 20 08:59:14 np0005625203.localdomain podman[109708]: nova_migration_target
Feb 20 08:59:14 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.timer: Failed to open /run/systemd/transient/1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.timer: No such file or directory
Feb 20 08:59:14 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Failed to open /run/systemd/transient/1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: No such file or directory
Feb 20 08:59:14 np0005625203.localdomain podman[109722]: 2026-02-20 08:59:14.231618051 +0000 UTC m=+0.072114370 container cleanup 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, url=https://www.redhat.com, vcs-type=git, tcib_managed=true, com.redhat.component=openstack-nova-compute-container, name=rhosp-rhel9/openstack-nova-compute, org.opencontainers.image.created=2026-01-12T23:32:04Z, summary=Red Hat OpenStack Platform 17.1 nova-compute, architecture=x86_64, vendor=Red Hat, Inc., batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, maintainer=OpenStack TripleO Team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared']}, container_name=nova_migration_target, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, config_id=tripleo_step4, managed_by=tripleo_ansible, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:32:04Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-compute, io.buildah.version=1.41.5, io.openshift.expose-services=, distribution-scope=public)
Feb 20 08:59:14 np0005625203.localdomain systemd[1]: libpod-conmon-1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.scope: Deactivated successfully.
Feb 20 08:59:14 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.timer: Failed to open /run/systemd/transient/1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.timer: No such file or directory
Feb 20 08:59:14 np0005625203.localdomain systemd[1]: 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: Failed to open /run/systemd/transient/1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86.service: No such file or directory
Feb 20 08:59:14 np0005625203.localdomain podman[109736]: 2026-02-20 08:59:14.33711235 +0000 UTC m=+0.069923970 container cleanup 1df0f1d05e5e627eca55394e29e1f7478e4c7da1cefb14f22309143b510e7c86 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_migration_target, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 nova-compute, distribution-scope=public, io.buildah.version=1.41.5, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-compute, build-date=2026-01-12T23:32:04Z, managed_by=tripleo_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/nova-migration-target.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/etc/ssh:/host-ssh:ro', '/run/libvirt:/run/libvirt:shared,z', 
'/var/lib/nova:/var/lib/nova:shared']}, com.redhat.component=openstack-nova-compute-container, release=1766032510, architecture=x86_64, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-nova-compute, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-type=git, tcib_managed=true, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_migration_target, io.openshift.expose-services=, version=17.1.13)
Feb 20 08:59:14 np0005625203.localdomain podman[109736]: nova_migration_target
Feb 20 08:59:14 np0005625203.localdomain systemd[1]: tripleo_nova_migration_target.service: Deactivated successfully.
Feb 20 08:59:14 np0005625203.localdomain systemd[1]: Stopped nova_migration_target container.
Feb 20 08:59:14 np0005625203.localdomain sudo[109663]: pam_unix(sudo:session): session closed for user root
Feb 20 08:59:14 np0005625203.localdomain sudo[109837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ciqbozpxhaamjrowfjcpgifwzvzgisji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771577954.5335097-109-234583049675965/AnsiballZ_systemd_service.py
Feb 20 08:59:14 np0005625203.localdomain sudo[109837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 08:59:15 np0005625203.localdomain python3.9[109839]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 08:59:15 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 08:59:15 np0005625203.localdomain systemd-rc-local-generator[109864]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 08:59:15 np0005625203.localdomain systemd-sysv-generator[109869]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 08:59:15 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 08:59:15 np0005625203.localdomain systemd[1]: Stopping nova_virtlogd_wrapper container...
Feb 20 08:59:15 np0005625203.localdomain systemd[1]: libpod-43574e7f2e8aeca074fdd4e8544dcd5b03a668e63dbdf8da8ca806e1ad7100b1.scope: Deactivated successfully.
Feb 20 08:59:15 np0005625203.localdomain podman[109880]: 2026-02-20 08:59:15.609512128 +0000 UTC m=+0.073531554 container died 43574e7f2e8aeca074fdd4e8544dcd5b03a668e63dbdf8da8ca806e1ad7100b1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, architecture=x86_64, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, org.opencontainers.image.created=2026-01-12T23:31:49Z, container_name=nova_virtlogd_wrapper, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, vendor=Red Hat, Inc., version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:31:49Z, io.openshift.expose-services=, release=1766032510, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, config_id=tripleo_step3, name=rhosp-rhel9/openstack-nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, tcib_managed=true, distribution-scope=public, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt)
Feb 20 08:59:15 np0005625203.localdomain systemd[1]: tmp-crun.OwhRWa.mount: Deactivated successfully.
Feb 20 08:59:15 np0005625203.localdomain podman[109880]: 2026-02-20 08:59:15.649460504 +0000 UTC m=+0.113479870 container cleanup 43574e7f2e8aeca074fdd4e8544dcd5b03a668e63dbdf8da8ca806e1ad7100b1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, org.opencontainers.image.created=2026-01-12T23:31:49Z, com.redhat.component=openstack-nova-libvirt-container, container_name=nova_virtlogd_wrapper, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64, build-date=2026-01-12T23:31:49Z, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, io.buildah.version=1.41.5, url=https://www.redhat.com, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:openstack:17.1::el9, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.openshift.expose-services=, name=rhosp-rhel9/openstack-nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., config_id=tripleo_step3)
Feb 20 08:59:15 np0005625203.localdomain podman[109880]: nova_virtlogd_wrapper
Feb 20 08:59:15 np0005625203.localdomain podman[109894]: 2026-02-20 08:59:15.684174696 +0000 UTC m=+0.069521688 container cleanup 43574e7f2e8aeca074fdd4e8544dcd5b03a668e63dbdf8da8ca806e1ad7100b1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtlogd_wrapper, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, url=https://www.redhat.com, build-date=2026-01-12T23:31:49Z, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, name=rhosp-rhel9/openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, release=1766032510, managed_by=tripleo_ansible, org.opencontainers.image.created=2026-01-12T23:31:49Z, batch=17.1_20260112.1, vcs-type=git, architecture=x86_64, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, tcib_managed=true)
Feb 20 08:59:16 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-28cf91af479bb1e20b302e6f839411cf8d51c7bb716b7360ee3fc14275286500-merged.mount: Deactivated successfully.
Feb 20 08:59:16 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-43574e7f2e8aeca074fdd4e8544dcd5b03a668e63dbdf8da8ca806e1ad7100b1-userdata-shm.mount: Deactivated successfully.
Feb 20 08:59:17 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14416 DF PROTO=TCP SPT=56400 DPT=9101 SEQ=490138624 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E955C00000000001030307) 
Feb 20 08:59:20 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6135 DF PROTO=TCP SPT=36910 DPT=9100 SEQ=988688005 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E962000000000001030307) 
Feb 20 08:59:23 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14418 DF PROTO=TCP SPT=56400 DPT=9101 SEQ=490138624 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E96D800000000001030307) 
Feb 20 08:59:26 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6137 DF PROTO=TCP SPT=36910 DPT=9100 SEQ=988688005 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E979C00000000001030307) 
Feb 20 08:59:27 np0005625203.localdomain sshd[109909]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:59:27 np0005625203.localdomain sshd[109909]: error: kex_exchange_identification: Connection closed by remote host
Feb 20 08:59:27 np0005625203.localdomain sshd[109909]: Connection closed by 80.94.92.168 port 55854
Feb 20 08:59:29 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23296 DF PROTO=TCP SPT=37190 DPT=9102 SEQ=125192572 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E985B50000000001030307) 
Feb 20 08:59:31 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 08:59:31 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 08:59:31 np0005625203.localdomain podman[109910]: 2026-02-20 08:59:31.261760196 +0000 UTC m=+0.084094634 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, vendor=Red Hat, Inc., url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, 
tcib_managed=true, maintainer=OpenStack TripleO Team, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, distribution-scope=public, container_name=ovn_metadata_agent, batch=17.1_20260112.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, io.buildah.version=1.41.5, release=1766032510, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, version=17.1.13, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, build-date=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:59:31 np0005625203.localdomain podman[109911]: 2026-02-20 08:59:31.268150885 +0000 UTC m=+0.082056520 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vendor=Red Hat, Inc., distribution-scope=public, config_id=tripleo_step4, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, container_name=ovn_controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 ovn-controller, url=https://www.redhat.com, io.openshift.expose-services=, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, name=rhosp-rhel9/openstack-ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, managed_by=tripleo_ansible, version=17.1.13, architecture=x86_64, tcib_managed=true, batch=17.1_20260112.1, com.redhat.component=openstack-ovn-controller-container, org.opencontainers.image.created=2026-01-12T22:36:40Z, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 08:59:31 np0005625203.localdomain podman[109910]: 2026-02-20 08:59:31.274189333 +0000 UTC m=+0.096523771 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, distribution-scope=public, build-date=2026-01-12T22:56:19Z, tcib_managed=true, config_id=tripleo_step4, container_name=ovn_metadata_agent, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T22:56:19Z, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1766032510, architecture=x86_64, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.expose-services=, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 08:59:31 np0005625203.localdomain podman[109910]: unhealthy
Feb 20 08:59:31 np0005625203.localdomain podman[109911]: 2026-02-20 08:59:31.284169764 +0000 UTC m=+0.098075369 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, batch=17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, summary=Red Hat OpenStack Platform 17.1 ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, container_name=ovn_controller, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-type=git, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, name=rhosp-rhel9/openstack-ovn-controller, tcib_managed=true, url=https://www.redhat.com, 
com.redhat.component=openstack-ovn-controller-container, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., release=1766032510, build-date=2026-01-12T22:36:40Z, io.openshift.expose-services=)
Feb 20 08:59:31 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:59:31 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Failed with result 'exit-code'.
Feb 20 08:59:31 np0005625203.localdomain podman[109911]: unhealthy
Feb 20 08:59:31 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 08:59:31 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Failed with result 'exit-code'.
Feb 20 08:59:32 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10656 DF PROTO=TCP SPT=39020 DPT=9882 SEQ=2998584929 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E990800000000001030307) 
Feb 20 08:59:36 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10657 DF PROTO=TCP SPT=39020 DPT=9882 SEQ=2998584929 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E9A0400000000001030307) 
Feb 20 08:59:38 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11215 DF PROTO=TCP SPT=53588 DPT=9105 SEQ=670397047 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E9A8000000000001030307) 
Feb 20 08:59:41 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=968 DF PROTO=TCP SPT=47022 DPT=9105 SEQ=223180354 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E9B4810000000001030307) 
Feb 20 08:59:44 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11217 DF PROTO=TCP SPT=53588 DPT=9105 SEQ=670397047 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E9BFC10000000001030307) 
Feb 20 08:59:47 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25085 DF PROTO=TCP SPT=53936 DPT=9101 SEQ=1470119564 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E9CB000000000001030307) 
Feb 20 08:59:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 20 08:59:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 4800.1 total, 600.0 interval
                                                          Cumulative writes: 4844 writes, 21K keys, 4844 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
                                                          Cumulative WAL: 4844 writes, 660 syncs, 7.34 writes per sync, written: 0.02 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 20 08:59:48 np0005625203.localdomain sudo[109949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 08:59:48 np0005625203.localdomain sudo[109949]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:59:48 np0005625203.localdomain sudo[109949]: pam_unix(sudo:session): session closed for user root
Feb 20 08:59:48 np0005625203.localdomain sudo[109964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 08:59:48 np0005625203.localdomain sudo[109964]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:59:49 np0005625203.localdomain sudo[109964]: pam_unix(sudo:session): session closed for user root
Feb 20 08:59:50 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24838 DF PROTO=TCP SPT=47650 DPT=9100 SEQ=1105526151 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E9D7000000000001030307) 
Feb 20 08:59:50 np0005625203.localdomain sudo[110010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 08:59:50 np0005625203.localdomain sudo[110010]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 08:59:50 np0005625203.localdomain sudo[110010]: pam_unix(sudo:session): session closed for user root
Feb 20 08:59:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 20 08:59:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 4800.1 total, 600.0 interval
                                                          Cumulative writes: 5843 writes, 25K keys, 5843 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
                                                          Cumulative WAL: 5843 writes, 764 syncs, 7.65 writes per sync, written: 0.02 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 20 08:59:52 np0005625203.localdomain sshd[110025]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:59:53 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28954 DF PROTO=TCP SPT=38734 DPT=9100 SEQ=2759316194 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E9E2800000000001030307) 
Feb 20 08:59:53 np0005625203.localdomain sshd[110025]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:59:55 np0005625203.localdomain sshd[110027]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:59:55 np0005625203.localdomain sshd[110027]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 08:59:56 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24840 DF PROTO=TCP SPT=47650 DPT=9100 SEQ=1105526151 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E9EEC10000000001030307) 
Feb 20 08:59:58 np0005625203.localdomain sshd[110029]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 08:59:58 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16626 DF PROTO=TCP SPT=59536 DPT=9882 SEQ=3054057489 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9E9F9AD0000000001030307) 
Feb 20 09:00:00 np0005625203.localdomain sshd[110029]: Received disconnect from 102.210.148.92 port 51188:11: Bye Bye [preauth]
Feb 20 09:00:00 np0005625203.localdomain sshd[110029]: Disconnected from authenticating user root 102.210.148.92 port 51188 [preauth]
Feb 20 09:00:01 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 09:00:01 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 09:00:01 np0005625203.localdomain podman[110032]: 2026-02-20 09:00:01.51492726 +0000 UTC m=+0.075090672 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, distribution-scope=public, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, managed_by=tripleo_ansible, url=https://www.redhat.com, vendor=Red Hat, Inc., tcib_managed=true, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, version=17.1.13, architecture=x86_64, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.5, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, name=rhosp-rhel9/openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-01-12T22:36:40Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': 
['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1766032510)
Feb 20 09:00:01 np0005625203.localdomain podman[110032]: 2026-02-20 09:00:01.529055911 +0000 UTC m=+0.089219323 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, distribution-scope=public, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, build-date=2026-01-12T22:36:40Z, summary=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, name=rhosp-rhel9/openstack-ovn-controller, vcs-type=git, version=17.1.13, io.buildah.version=1.41.5, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, architecture=x86_64, config_id=tripleo_step4, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.created=2026-01-12T22:36:40Z, vendor=Red Hat, Inc., org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 
17.1 ovn-controller, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller)
Feb 20 09:00:01 np0005625203.localdomain podman[110032]: unhealthy
Feb 20 09:00:01 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 09:00:01 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Failed with result 'exit-code'.
Feb 20 09:00:01 np0005625203.localdomain podman[110031]: 2026-02-20 09:00:01.616459637 +0000 UTC m=+0.180205641 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, distribution-scope=public, managed_by=tripleo_ansible, batch=17.1_20260112.1, vcs-type=git, version=17.1.13, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T22:56:19Z, 
vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, tcib_managed=true, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=ovn_metadata_agent, org.opencontainers.image.created=2026-01-12T22:56:19Z, io.buildah.version=1.41.5, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_id=tripleo_step4)
Feb 20 09:00:01 np0005625203.localdomain podman[110031]: 2026-02-20 09:00:01.630070341 +0000 UTC m=+0.193816355 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, managed_by=tripleo_ansible, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, container_name=ovn_metadata_agent, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, tcib_managed=true, build-date=2026-01-12T22:56:19Z, io.openshift.expose-services=, distribution-scope=public, url=https://www.redhat.com, config_id=tripleo_step4, org.opencontainers.image.created=2026-01-12T22:56:19Z, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, batch=17.1_20260112.1, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': 
True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, maintainer=OpenStack TripleO Team)
Feb 20 09:00:01 np0005625203.localdomain podman[110031]: unhealthy
Feb 20 09:00:01 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 09:00:01 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Failed with result 'exit-code'.
Feb 20 09:00:02 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16628 DF PROTO=TCP SPT=59536 DPT=9882 SEQ=3054057489 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EA05C10000000001030307) 
Feb 20 09:00:06 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16629 DF PROTO=TCP SPT=59536 DPT=9882 SEQ=3054057489 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EA15810000000001030307) 
Feb 20 09:00:08 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32854 DF PROTO=TCP SPT=39988 DPT=9105 SEQ=2549678700 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EA1D400000000001030307) 
Feb 20 09:00:14 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32856 DF PROTO=TCP SPT=39988 DPT=9105 SEQ=2549678700 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EA35000000000001030307) 
Feb 20 09:00:14 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16630 DF PROTO=TCP SPT=59536 DPT=9882 SEQ=3054057489 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EA36800000000001030307) 
Feb 20 09:00:16 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52066 DF PROTO=TCP SPT=39138 DPT=9101 SEQ=2817991505 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EA40000000000001030307) 
Feb 20 09:00:20 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34828 DF PROTO=TCP SPT=47006 DPT=9100 SEQ=2302357966 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EA4C410000000001030307) 
Feb 20 09:00:23 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52068 DF PROTO=TCP SPT=39138 DPT=9101 SEQ=2817991505 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EA57C00000000001030307) 
Feb 20 09:00:25 np0005625203.localdomain sshd[110072]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:00:26 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34830 DF PROTO=TCP SPT=47006 DPT=9100 SEQ=2302357966 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EA64010000000001030307) 
Feb 20 09:00:26 np0005625203.localdomain sshd[110072]: Invalid user oracle from 212.154.234.9 port 14831
Feb 20 09:00:26 np0005625203.localdomain sshd[110072]: Received disconnect from 212.154.234.9 port 14831:11: Bye Bye [preauth]
Feb 20 09:00:26 np0005625203.localdomain sshd[110072]: Disconnected from invalid user oracle 212.154.234.9 port 14831 [preauth]
Feb 20 09:00:29 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36986 DF PROTO=TCP SPT=34222 DPT=9102 SEQ=430244111 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EA70140000000001030307) 
Feb 20 09:00:31 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 09:00:31 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 09:00:31 np0005625203.localdomain podman[110074]: 2026-02-20 09:00:31.77929853 +0000 UTC m=+0.090349599 container health_status be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, health_status=unhealthy, tcib_managed=true, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, version=17.1.13, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, url=https://www.redhat.com, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.created=2026-01-12T22:56:19Z, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=OpenStack TripleO Team, distribution-scope=public, io.openshift.expose-services=, config_id=tripleo_step4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', 
'/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, vcs-type=git, managed_by=tripleo_ansible, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=ovn_metadata_agent, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Feb 20 09:00:31 np0005625203.localdomain systemd[1]: tmp-crun.hflRgf.mount: Deactivated successfully.
Feb 20 09:00:31 np0005625203.localdomain podman[110075]: 2026-02-20 09:00:31.8377097 +0000 UTC m=+0.144506756 container health_status d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, health_status=unhealthy, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, release=1766032510, com.redhat.component=openstack-ovn-controller-container, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, cpe=cpe:/a:redhat:openstack:17.1::el9, tcib_managed=true, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, config_id=tripleo_step4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, org.opencontainers.image.created=2026-01-12T22:36:40Z, distribution-scope=public, container_name=ovn_controller, description=Red Hat OpenStack Platform 17.1 ovn-controller, 
url=https://www.redhat.com, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-ovn-controller, architecture=x86_64, io.openshift.expose-services=, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5)
Feb 20 09:00:31 np0005625203.localdomain podman[110074]: 2026-02-20 09:00:31.846209235 +0000 UTC m=+0.157260224 container exec_died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, release=1766032510, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, vendor=Red Hat, Inc., description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, io.buildah.version=1.41.5, container_name=ovn_metadata_agent, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', 
'/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, org.opencontainers.image.created=2026-01-12T22:56:19Z, managed_by=tripleo_ansible, version=17.1.13, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.expose-services=, architecture=x86_64, vcs-type=git, batch=17.1_20260112.1, distribution-scope=public, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn)
Feb 20 09:00:31 np0005625203.localdomain podman[110074]: unhealthy
Feb 20 09:00:31 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 09:00:31 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Failed with result 'exit-code'.
Feb 20 09:00:31 np0005625203.localdomain podman[110075]: 2026-02-20 09:00:31.901769118 +0000 UTC m=+0.208566174 container exec_died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, release=1766032510, architecture=x86_64, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T22:36:40Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-ovn-controller-container, tcib_managed=true, config_id=tripleo_step4, konflux.additional-tags=17.1.13 17.1_20260112.1, url=https://www.redhat.com, build-date=2026-01-12T22:36:40Z, io.openshift.expose-services=, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, cpe=cpe:/a:redhat:openstack:17.1::el9, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=ovn_controller, version=17.1.13, batch=17.1_20260112.1, vcs-type=git, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', 
'/var/log/containers/openvswitch:/var/log/ovn:z']}, distribution-scope=public, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller)
Feb 20 09:00:31 np0005625203.localdomain podman[110075]: unhealthy
Feb 20 09:00:31 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 09:00:31 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Failed with result 'exit-code'.
Feb 20 09:00:32 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11256 DF PROTO=TCP SPT=56148 DPT=9882 SEQ=2278719134 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EA7B000000000001030307) 
Feb 20 09:00:36 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11257 DF PROTO=TCP SPT=56148 DPT=9882 SEQ=2278719134 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EA8AC00000000001030307) 
Feb 20 09:00:38 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52454 DF PROTO=TCP SPT=51032 DPT=9105 SEQ=2495805621 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EA92810000000001030307) 
Feb 20 09:00:39 np0005625203.localdomain systemd[1]: tripleo_nova_virtlogd_wrapper.service: State 'stop-sigterm' timed out. Killing.
Feb 20 09:00:39 np0005625203.localdomain systemd[1]: tripleo_nova_virtlogd_wrapper.service: Killing process 61736 (conmon) with signal SIGKILL.
Feb 20 09:00:39 np0005625203.localdomain systemd[1]: tripleo_nova_virtlogd_wrapper.service: Main process exited, code=killed, status=9/KILL
Feb 20 09:00:39 np0005625203.localdomain systemd[1]: libpod-conmon-43574e7f2e8aeca074fdd4e8544dcd5b03a668e63dbdf8da8ca806e1ad7100b1.scope: Deactivated successfully.
Feb 20 09:00:39 np0005625203.localdomain podman[110127]: error opening file `/run/crun/43574e7f2e8aeca074fdd4e8544dcd5b03a668e63dbdf8da8ca806e1ad7100b1/status`: No such file or directory
Feb 20 09:00:39 np0005625203.localdomain podman[110114]: 2026-02-20 09:00:39.760866842 +0000 UTC m=+0.070677205 container cleanup 43574e7f2e8aeca074fdd4e8544dcd5b03a668e63dbdf8da8ca806e1ad7100b1 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd_wrapper, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vendor=Red Hat, Inc., container_name=nova_virtlogd_wrapper, distribution-scope=public, batch=17.1_20260112.1, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 0, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', 
'/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtlogd.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/container-config-scripts/virtlogd_wrapper:/usr/local/bin/virtlogd_wrapper:ro']}, architecture=x86_64, build-date=2026-01-12T23:31:49Z, io.buildah.version=1.41.5, maintainer=OpenStack TripleO Team, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, org.opencontainers.image.created=2026-01-12T23:31:49Z, name=rhosp-rhel9/openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step3, version=17.1.13, com.redhat.component=openstack-nova-libvirt-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, tcib_managed=true)
Feb 20 09:00:39 np0005625203.localdomain podman[110114]: nova_virtlogd_wrapper
Feb 20 09:00:39 np0005625203.localdomain systemd[1]: tripleo_nova_virtlogd_wrapper.service: Failed with result 'timeout'.
Feb 20 09:00:39 np0005625203.localdomain systemd[1]: Stopped nova_virtlogd_wrapper container.
Feb 20 09:00:39 np0005625203.localdomain sudo[109837]: pam_unix(sudo:session): session closed for user root
Feb 20 09:00:40 np0005625203.localdomain sudo[110218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dwxvemcdjdqeeprdogpzilnxbycoqtex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578039.9922853-109-214873790781365/AnsiballZ_systemd_service.py
Feb 20 09:00:40 np0005625203.localdomain sudo[110218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:00:40 np0005625203.localdomain python3.9[110220]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:00:41 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11220 DF PROTO=TCP SPT=53588 DPT=9105 SEQ=670397047 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EA9E800000000001030307) 
Feb 20 09:00:41 np0005625203.localdomain sshd[110223]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:00:41 np0005625203.localdomain sshd[110223]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:00:41 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:00:41 np0005625203.localdomain systemd-rc-local-generator[110244]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:00:41 np0005625203.localdomain systemd-sysv-generator[110248]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:00:41 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:00:41 np0005625203.localdomain systemd[1]: Starting Check and recover tripleo_nova_virtqemud...
Feb 20 09:00:41 np0005625203.localdomain systemd[1]: Stopping nova_virtnodedevd container...
Feb 20 09:00:41 np0005625203.localdomain recover_tripleo_nova_virtqemud[110263]: 62505
Feb 20 09:00:41 np0005625203.localdomain systemd[1]: tripleo_nova_virtqemud_recover.service: Deactivated successfully.
Feb 20 09:00:41 np0005625203.localdomain systemd[1]: Finished Check and recover tripleo_nova_virtqemud.
Feb 20 09:00:42 np0005625203.localdomain systemd[1]: libpod-5ce0d3670662f8b4e9768bce15c2e0f242710ad4849921266d1bb819bab350c9.scope: Deactivated successfully.
Feb 20 09:00:42 np0005625203.localdomain systemd[1]: libpod-5ce0d3670662f8b4e9768bce15c2e0f242710ad4849921266d1bb819bab350c9.scope: Consumed 1.527s CPU time.
Feb 20 09:00:42 np0005625203.localdomain podman[110265]: 2026-02-20 09:00:42.03282993 +0000 UTC m=+0.054866933 container died 5ce0d3670662f8b4e9768bce15c2e0f242710ad4849921266d1bb819bab350c9 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, managed_by=tripleo_ansible, vcs-type=git, name=rhosp-rhel9/openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, url=https://www.redhat.com, cpe=cpe:/a:redhat:openstack:17.1::el9, maintainer=OpenStack TripleO Team, config_id=tripleo_step3, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', 
'/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, org.opencontainers.image.created=2026-01-12T23:31:49Z, build-date=2026-01-12T23:31:49Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.buildah.version=1.41.5, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.13, distribution-scope=public, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=nova_virtnodedevd, com.redhat.component=openstack-nova-libvirt-container)
Feb 20 09:00:42 np0005625203.localdomain podman[110265]: 2026-02-20 09:00:42.077455381 +0000 UTC m=+0.099492344 container cleanup 5ce0d3670662f8b4e9768bce15c2e0f242710ad4849921266d1bb819bab350c9 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, io.openshift.expose-services=, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, url=https://www.redhat.com, com.redhat.component=openstack-nova-libvirt-container, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.5, container_name=nova_virtnodedevd, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:31:49Z, vcs-type=git, vendor=Red Hat, Inc., managed_by=tripleo_ansible, release=1766032510, config_id=tripleo_step3, batch=17.1_20260112.1, build-date=2026-01-12T23:31:49Z, maintainer=OpenStack TripleO Team, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 
'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, tcib_managed=true, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 09:00:42 np0005625203.localdomain podman[110265]: nova_virtnodedevd
Feb 20 09:00:42 np0005625203.localdomain podman[110279]: 2026-02-20 09:00:42.126580323 +0000 UTC m=+0.078736936 container cleanup 5ce0d3670662f8b4e9768bce15c2e0f242710ad4849921266d1bb819bab350c9 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, distribution-scope=public, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp-rhel9/openstack-nova-libvirt, container_name=nova_virtnodedevd, url=https://www.redhat.com, config_id=tripleo_step3, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., managed_by=tripleo_ansible, batch=17.1_20260112.1, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, build-date=2026-01-12T23:31:49Z, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', 
'/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:31:49Z, tcib_managed=true, com.redhat.component=openstack-nova-libvirt-container, release=1766032510, version=17.1.13, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=)
Feb 20 09:00:42 np0005625203.localdomain systemd[1]: libpod-conmon-5ce0d3670662f8b4e9768bce15c2e0f242710ad4849921266d1bb819bab350c9.scope: Deactivated successfully.
Feb 20 09:00:42 np0005625203.localdomain podman[110309]: error opening file `/run/crun/5ce0d3670662f8b4e9768bce15c2e0f242710ad4849921266d1bb819bab350c9/status`: No such file or directory
Feb 20 09:00:42 np0005625203.localdomain podman[110296]: 2026-02-20 09:00:42.24032863 +0000 UTC m=+0.075328630 container cleanup 5ce0d3670662f8b4e9768bce15c2e0f242710ad4849921266d1bb819bab350c9 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtnodedevd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, architecture=x86_64, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.5, batch=17.1_20260112.1, vcs-type=git, version=17.1.13, cpe=cpe:/a:redhat:openstack:17.1::el9, container_name=nova_virtnodedevd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, build-date=2026-01-12T23:31:49Z, vendor=Red Hat, Inc., vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step3, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 2, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtnodedevd.json:/var/lib/kolla/config_files/config.json:ro']}, managed_by=tripleo_ansible, url=https://www.redhat.com, distribution-scope=public, name=rhosp-rhel9/openstack-nova-libvirt)
Feb 20 09:00:42 np0005625203.localdomain podman[110296]: nova_virtnodedevd
Feb 20 09:00:42 np0005625203.localdomain systemd[1]: tripleo_nova_virtnodedevd.service: Deactivated successfully.
Feb 20 09:00:42 np0005625203.localdomain systemd[1]: Stopped nova_virtnodedevd container.
Feb 20 09:00:42 np0005625203.localdomain sudo[110218]: pam_unix(sudo:session): session closed for user root
Feb 20 09:00:42 np0005625203.localdomain sudo[110400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nrngswccnmsjkfwidpyqcichfeiskvzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578042.4064388-109-210791796222097/AnsiballZ_systemd_service.py
Feb 20 09:00:42 np0005625203.localdomain sudo[110400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:00:43 np0005625203.localdomain python3.9[110402]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:00:43 np0005625203.localdomain systemd[1]: tmp-crun.qVQ9hk.mount: Deactivated successfully.
Feb 20 09:00:43 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-c44b9ee3b74198541ae59b815b4be56ebc6ae69bec4e8dadcdc16fb5e4a48b77-merged.mount: Deactivated successfully.
Feb 20 09:00:43 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5ce0d3670662f8b4e9768bce15c2e0f242710ad4849921266d1bb819bab350c9-userdata-shm.mount: Deactivated successfully.
Feb 20 09:00:44 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:00:44 np0005625203.localdomain systemd-rc-local-generator[110431]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:00:44 np0005625203.localdomain systemd-sysv-generator[110435]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:00:44 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52456 DF PROTO=TCP SPT=51032 DPT=9105 SEQ=2495805621 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EAAA410000000001030307) 
Feb 20 09:00:44 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:00:44 np0005625203.localdomain systemd[1]: Stopping nova_virtproxyd container...
Feb 20 09:00:44 np0005625203.localdomain systemd[1]: libpod-2883491975f4d07a87ab6f68c2456ff531ed53703b8e61bf27c807c46acce03e.scope: Deactivated successfully.
Feb 20 09:00:44 np0005625203.localdomain podman[110443]: 2026-02-20 09:00:44.508519369 +0000 UTC m=+0.081759340 container died 2883491975f4d07a87ab6f68c2456ff531ed53703b8e61bf27c807c46acce03e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.openshift.expose-services=, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp-rhel9/openstack-nova-libvirt, container_name=nova_virtproxyd, com.redhat.component=openstack-nova-libvirt-container, url=https://www.redhat.com, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', 
'/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, vendor=Red Hat, Inc., vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:31:49Z, managed_by=tripleo_ansible, architecture=x86_64, build-date=2026-01-12T23:31:49Z, io.buildah.version=1.41.5, distribution-scope=public, tcib_managed=true, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step3, maintainer=OpenStack TripleO Team, batch=17.1_20260112.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.13, release=1766032510, description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt)
Feb 20 09:00:44 np0005625203.localdomain systemd[1]: tmp-crun.kC5MsD.mount: Deactivated successfully.
Feb 20 09:00:44 np0005625203.localdomain podman[110443]: 2026-02-20 09:00:44.557992152 +0000 UTC m=+0.131232083 container cleanup 2883491975f4d07a87ab6f68c2456ff531ed53703b8e61bf27c807c46acce03e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, org.opencontainers.image.created=2026-01-12T23:31:49Z, com.redhat.component=openstack-nova-libvirt-container, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:openstack:17.1::el9, distribution-scope=public, container_name=nova_virtproxyd, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, build-date=2026-01-12T23:31:49Z, io.buildah.version=1.41.5, vcs-type=git, release=1766032510, version=17.1.13, config_id=tripleo_step3, url=https://www.redhat.com, vendor=Red Hat, Inc., io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, batch=17.1_20260112.1, name=rhosp-rhel9/openstack-nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, tcib_managed=true, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe)
Feb 20 09:00:44 np0005625203.localdomain podman[110443]: nova_virtproxyd
Feb 20 09:00:44 np0005625203.localdomain podman[110457]: 2026-02-20 09:00:44.594376536 +0000 UTC m=+0.072663916 container cleanup 2883491975f4d07a87ab6f68c2456ff531ed53703b8e61bf27c807c46acce03e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, version=17.1.13, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, tcib_managed=true, release=1766032510, url=https://www.redhat.com, io.buildah.version=1.41.5, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', 
'/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, konflux.additional-tags=17.1.13 17.1_20260112.1, build-date=2026-01-12T23:31:49Z, container_name=nova_virtproxyd, managed_by=tripleo_ansible, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, batch=17.1_20260112.1, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:31:49Z, cpe=cpe:/a:redhat:openstack:17.1::el9, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step3, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-nova-libvirt)
Feb 20 09:00:44 np0005625203.localdomain systemd[1]: libpod-conmon-2883491975f4d07a87ab6f68c2456ff531ed53703b8e61bf27c807c46acce03e.scope: Deactivated successfully.
Feb 20 09:00:44 np0005625203.localdomain podman[110485]: error opening file `/run/crun/2883491975f4d07a87ab6f68c2456ff531ed53703b8e61bf27c807c46acce03e/status`: No such file or directory
Feb 20 09:00:44 np0005625203.localdomain podman[110473]: 2026-02-20 09:00:44.695301404 +0000 UTC m=+0.070091227 container cleanup 2883491975f4d07a87ab6f68c2456ff531ed53703b8e61bf27c807c46acce03e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtproxyd, distribution-scope=public, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:31:49Z, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, container_name=nova_virtproxyd, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, config_id=tripleo_step3, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, tcib_managed=true, version=17.1.13, managed_by=tripleo_ansible, url=https://www.redhat.com, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1766032510, name=rhosp-rhel9/openstack-nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:31:49Z, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, batch=17.1_20260112.1, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 5, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtproxyd.json:/var/lib/kolla/config_files/config.json:ro']}, com.redhat.component=openstack-nova-libvirt-container, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt)
Feb 20 09:00:44 np0005625203.localdomain podman[110473]: nova_virtproxyd
Feb 20 09:00:44 np0005625203.localdomain systemd[1]: tripleo_nova_virtproxyd.service: Deactivated successfully.
Feb 20 09:00:44 np0005625203.localdomain systemd[1]: Stopped nova_virtproxyd container.
Feb 20 09:00:44 np0005625203.localdomain sudo[110400]: pam_unix(sudo:session): session closed for user root
Feb 20 09:00:45 np0005625203.localdomain sudo[110576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oajpygjekmpathzeajzzpurorejirxcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578044.8621686-109-216282055097946/AnsiballZ_systemd_service.py
Feb 20 09:00:45 np0005625203.localdomain sudo[110576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:00:45 np0005625203.localdomain python3.9[110578]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:00:45 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-5839c92e4849940a9cba2db411bd09f73de0be67a96e976da2027adb67ab877e-merged.mount: Deactivated successfully.
Feb 20 09:00:45 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2883491975f4d07a87ab6f68c2456ff531ed53703b8e61bf27c807c46acce03e-userdata-shm.mount: Deactivated successfully.
Feb 20 09:00:45 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:00:45 np0005625203.localdomain systemd-sysv-generator[110607]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:00:45 np0005625203.localdomain systemd-rc-local-generator[110602]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:00:45 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:00:45 np0005625203.localdomain systemd[1]: tripleo_nova_virtqemud_recover.timer: Deactivated successfully.
Feb 20 09:00:45 np0005625203.localdomain systemd[1]: Stopped Check and recover tripleo_nova_virtqemud every 10m.
Feb 20 09:00:45 np0005625203.localdomain systemd[1]: Stopping nova_virtqemud container...
Feb 20 09:00:45 np0005625203.localdomain systemd[1]: libpod-441ab58fa4410c6147d50fcd6a4ba2af0491038916b3138d0f4ccccaa5c6e26e.scope: Deactivated successfully.
Feb 20 09:00:45 np0005625203.localdomain systemd[1]: libpod-441ab58fa4410c6147d50fcd6a4ba2af0491038916b3138d0f4ccccaa5c6e26e.scope: Consumed 2.248s CPU time.
Feb 20 09:00:45 np0005625203.localdomain podman[110619]: 2026-02-20 09:00:45.920343894 +0000 UTC m=+0.074725560 container died 441ab58fa4410c6147d50fcd6a4ba2af0491038916b3138d0f4ccccaa5c6e26e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, architecture=x86_64, build-date=2026-01-12T23:31:49Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.41.5, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', 
'/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-01-12T23:31:49Z, release=1766032510, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20260112.1, com.redhat.component=openstack-nova-libvirt-container, config_id=tripleo_step3, name=rhosp-rhel9/openstack-nova-libvirt, url=https://www.redhat.com, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtqemud, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-type=git, version=17.1.13, tcib_managed=true, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, managed_by=tripleo_ansible)
Feb 20 09:00:45 np0005625203.localdomain podman[110619]: 2026-02-20 09:00:45.952633662 +0000 UTC m=+0.107015318 container cleanup 441ab58fa4410c6147d50fcd6a4ba2af0491038916b3138d0f4ccccaa5c6e26e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, konflux.additional-tags=17.1.13 17.1_20260112.1, com.redhat.component=openstack-nova-libvirt-container, batch=17.1_20260112.1, version=17.1.13, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.41.5, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:31:49Z, managed_by=tripleo_ansible, build-date=2026-01-12T23:31:49Z, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, container_name=nova_virtqemud, release=1766032510, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, tcib_managed=true, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': 
['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public, name=rhosp-rhel9/openstack-nova-libvirt)
Feb 20 09:00:45 np0005625203.localdomain podman[110619]: nova_virtqemud
Feb 20 09:00:45 np0005625203.localdomain podman[110632]: 2026-02-20 09:00:45.997700787 +0000 UTC m=+0.064762251 container cleanup 441ab58fa4410c6147d50fcd6a4ba2af0491038916b3138d0f4ccccaa5c6e26e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_virtqemud, release=1766032510, vcs-type=git, architecture=x86_64, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-libvirt-container, managed_by=tripleo_ansible, distribution-scope=public, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:31:49Z, batch=17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.13, config_id=tripleo_step3, tcib_managed=true, build-date=2026-01-12T23:31:49Z, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp-rhel9/openstack-nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt)
Feb 20 09:00:46 np0005625203.localdomain systemd[1]: libpod-conmon-441ab58fa4410c6147d50fcd6a4ba2af0491038916b3138d0f4ccccaa5c6e26e.scope: Deactivated successfully.
Feb 20 09:00:46 np0005625203.localdomain podman[110661]: error opening file `/run/crun/441ab58fa4410c6147d50fcd6a4ba2af0491038916b3138d0f4ccccaa5c6e26e/status`: No such file or directory
Feb 20 09:00:46 np0005625203.localdomain podman[110648]: 2026-02-20 09:00:46.095315441 +0000 UTC m=+0.066480964 container cleanup 441ab58fa4410c6147d50fcd6a4ba2af0491038916b3138d0f4ccccaa5c6e26e (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtqemud, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, com.redhat.component=openstack-nova-libvirt-container, name=rhosp-rhel9/openstack-nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 4, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', 
'/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtqemud.json:/var/lib/kolla/config_files/config.json:ro', '/var/log/containers/libvirt/swtpm:/var/log/swtpm:z']}, container_name=nova_virtqemud, org.opencontainers.image.created=2026-01-12T23:31:49Z, vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=tripleo_ansible, batch=17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.buildah.version=1.41.5, architecture=x86_64, distribution-scope=public, build-date=2026-01-12T23:31:49Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, maintainer=OpenStack TripleO Team, version=17.1.13, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:openstack:17.1::el9, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, config_id=tripleo_step3, io.openshift.expose-services=)
Feb 20 09:00:46 np0005625203.localdomain podman[110648]: nova_virtqemud
Feb 20 09:00:46 np0005625203.localdomain systemd[1]: tripleo_nova_virtqemud.service: Deactivated successfully.
Feb 20 09:00:46 np0005625203.localdomain systemd[1]: Stopped nova_virtqemud container.
Feb 20 09:00:46 np0005625203.localdomain sudo[110576]: pam_unix(sudo:session): session closed for user root
Feb 20 09:00:46 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-ca430835467936ee65e9cfbaab428c832bc1d43f5f133ae27dcd34c9dca062b5-merged.mount: Deactivated successfully.
Feb 20 09:00:46 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-441ab58fa4410c6147d50fcd6a4ba2af0491038916b3138d0f4ccccaa5c6e26e-userdata-shm.mount: Deactivated successfully.
Feb 20 09:00:46 np0005625203.localdomain sudo[110752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nmncjapihxkuaonecnifcbhubcceahmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578046.2477431-109-177868918856505/AnsiballZ_systemd_service.py
Feb 20 09:00:46 np0005625203.localdomain sudo[110752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:00:46 np0005625203.localdomain python3.9[110754]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud_recover.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:00:46 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:00:47 np0005625203.localdomain systemd-rc-local-generator[110784]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:00:47 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32099 DF PROTO=TCP SPT=44420 DPT=9101 SEQ=2093386438 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EAB5400000000001030307) 
Feb 20 09:00:47 np0005625203.localdomain systemd-sysv-generator[110787]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:00:47 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:00:47 np0005625203.localdomain sudo[110752]: pam_unix(sudo:session): session closed for user root
Feb 20 09:00:47 np0005625203.localdomain sudo[110882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-egdmdjjnntceccaoqseqvuvpbjohhrlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578047.3488765-109-97712623228618/AnsiballZ_systemd_service.py
Feb 20 09:00:47 np0005625203.localdomain sudo[110882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:00:47 np0005625203.localdomain python3.9[110884]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:00:47 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:00:48 np0005625203.localdomain systemd-sysv-generator[110914]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:00:48 np0005625203.localdomain systemd-rc-local-generator[110909]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:00:48 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:00:48 np0005625203.localdomain systemd[1]: Stopping nova_virtsecretd container...
Feb 20 09:00:48 np0005625203.localdomain systemd[1]: libpod-d7cabd49781e8804ef140d29dc29d560de1c00c597492a71091452772fd383e9.scope: Deactivated successfully.
Feb 20 09:00:48 np0005625203.localdomain podman[110925]: 2026-02-20 09:00:48.366719881 +0000 UTC m=+0.078264442 container died d7cabd49781e8804ef140d29dc29d560de1c00c597492a71091452772fd383e9 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, konflux.additional-tags=17.1.13 17.1_20260112.1, distribution-scope=public, batch=17.1_20260112.1, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, managed_by=tripleo_ansible, name=rhosp-rhel9/openstack-nova-libvirt, vendor=Red Hat, Inc., url=https://www.redhat.com, architecture=x86_64, build-date=2026-01-12T23:31:49Z, version=17.1.13, maintainer=OpenStack TripleO Team, io.buildah.version=1.41.5, container_name=nova_virtsecretd, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, com.redhat.component=openstack-nova-libvirt-container, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, release=1766032510, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=tripleo_step3, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 09:00:48 np0005625203.localdomain podman[110925]: 2026-02-20 09:00:48.40582303 +0000 UTC m=+0.117367521 container cleanup d7cabd49781e8804ef140d29dc29d560de1c00c597492a71091452772fd383e9 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://www.redhat.com, container_name=nova_virtsecretd, managed_by=tripleo_ansible, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, org.opencontainers.image.created=2026-01-12T23:31:49Z, config_id=tripleo_step3, batch=17.1_20260112.1, com.redhat.component=openstack-nova-libvirt-container, distribution-scope=public, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, tcib_managed=true, build-date=2026-01-12T23:31:49Z, vendor=Red Hat, Inc., org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, version=17.1.13, name=rhosp-rhel9/openstack-nova-libvirt, io.buildah.version=1.41.5, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 
'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']})
Feb 20 09:00:48 np0005625203.localdomain podman[110925]: nova_virtsecretd
Feb 20 09:00:48 np0005625203.localdomain podman[110939]: 2026-02-20 09:00:48.441150532 +0000 UTC m=+0.066280388 container cleanup d7cabd49781e8804ef140d29dc29d560de1c00c597492a71091452772fd383e9 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, tcib_managed=true, cpe=cpe:/a:redhat:openstack:17.1::el9, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vcs-type=git, build-date=2026-01-12T23:31:49Z, org.opencontainers.image.created=2026-01-12T23:31:49Z, io.openshift.expose-services=, maintainer=OpenStack TripleO Team, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, config_id=tripleo_step3, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', 
'/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, container_name=nova_virtsecretd, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, name=rhosp-rhel9/openstack-nova-libvirt, release=1766032510, version=17.1.13, vendor=Red Hat, Inc., managed_by=tripleo_ansible, io.buildah.version=1.41.5, architecture=x86_64, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, com.redhat.component=openstack-nova-libvirt-container, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://www.redhat.com, batch=17.1_20260112.1)
Feb 20 09:00:48 np0005625203.localdomain systemd[1]: libpod-conmon-d7cabd49781e8804ef140d29dc29d560de1c00c597492a71091452772fd383e9.scope: Deactivated successfully.
Feb 20 09:00:48 np0005625203.localdomain podman[110966]: error opening file `/run/crun/d7cabd49781e8804ef140d29dc29d560de1c00c597492a71091452772fd383e9/status`: No such file or directory
Feb 20 09:00:48 np0005625203.localdomain podman[110954]: 2026-02-20 09:00:48.538369103 +0000 UTC m=+0.066026140 container cleanup d7cabd49781e8804ef140d29dc29d560de1c00c597492a71091452772fd383e9 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtsecretd, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:31:49Z, version=17.1.13, batch=17.1_20260112.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, config_id=tripleo_step3, distribution-scope=public, io.buildah.version=1.41.5, org.opencontainers.image.created=2026-01-12T23:31:49Z, release=1766032510, name=rhosp-rhel9/openstack-nova-libvirt, description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 1, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', 
'/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtsecretd.json:/var/lib/kolla/config_files/config.json:ro']}, maintainer=OpenStack TripleO Team, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, com.redhat.component=openstack-nova-libvirt-container, konflux.additional-tags=17.1.13 17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, container_name=nova_virtsecretd, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com)
Feb 20 09:00:48 np0005625203.localdomain podman[110954]: nova_virtsecretd
Feb 20 09:00:48 np0005625203.localdomain systemd[1]: tripleo_nova_virtsecretd.service: Deactivated successfully.
Feb 20 09:00:48 np0005625203.localdomain systemd[1]: Stopped nova_virtsecretd container.
Feb 20 09:00:48 np0005625203.localdomain sudo[110882]: pam_unix(sudo:session): session closed for user root
Feb 20 09:00:48 np0005625203.localdomain sudo[111057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bhlmflygolfsggubgtfuxghfiynfctpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578048.6852982-109-65401557012434/AnsiballZ_systemd_service.py
Feb 20 09:00:48 np0005625203.localdomain sudo[111057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:00:49 np0005625203.localdomain python3.9[111059]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:00:49 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:00:49 np0005625203.localdomain systemd-rc-local-generator[111084]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:00:49 np0005625203.localdomain systemd-sysv-generator[111089]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:00:49 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:00:49 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d7cabd49781e8804ef140d29dc29d560de1c00c597492a71091452772fd383e9-userdata-shm.mount: Deactivated successfully.
Feb 20 09:00:49 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-4cd727fde2209c79d9821216422c7a3f4abac9c93918abc294ef4cb9196199ef-merged.mount: Deactivated successfully.
Feb 20 09:00:49 np0005625203.localdomain systemd[1]: Stopping nova_virtstoraged container...
Feb 20 09:00:49 np0005625203.localdomain systemd[1]: libpod-73f1f38f88063860c7fee4ea71387b5375dfe69b576a163f03d7bd06870e2e1c.scope: Deactivated successfully.
Feb 20 09:00:49 np0005625203.localdomain podman[111100]: 2026-02-20 09:00:49.708294185 +0000 UTC m=+0.071211692 container died 73f1f38f88063860c7fee4ea71387b5375dfe69b576a163f03d7bd06870e2e1c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, io.openshift.expose-services=, version=17.1.13, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T23:31:49Z, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-libvirt-container, config_id=tripleo_step3, konflux.additional-tags=17.1.13 17.1_20260112.1, io.buildah.version=1.41.5, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', 
'/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, tcib_managed=true, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-type=git, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, name=rhosp-rhel9/openstack-nova-libvirt, batch=17.1_20260112.1, architecture=x86_64, url=https://www.redhat.com, release=1766032510, build-date=2026-01-12T23:31:49Z, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, container_name=nova_virtstoraged)
Feb 20 09:00:49 np0005625203.localdomain podman[111100]: 2026-02-20 09:00:49.746314621 +0000 UTC m=+0.109232148 container cleanup 73f1f38f88063860c7fee4ea71387b5375dfe69b576a163f03d7bd06870e2e1c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, vcs-type=git, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', 
'/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, config_id=tripleo_step3, vendor=Red Hat, Inc., name=rhosp-rhel9/openstack-nova-libvirt, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat OpenStack Platform 17.1 nova-libvirt, managed_by=tripleo_ansible, distribution-scope=public, io.buildah.version=1.41.5, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, build-date=2026-01-12T23:31:49Z, version=17.1.13, com.redhat.component=openstack-nova-libvirt-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, container_name=nova_virtstoraged, konflux.additional-tags=17.1.13 17.1_20260112.1, batch=17.1_20260112.1, architecture=x86_64, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, release=1766032510, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 09:00:49 np0005625203.localdomain podman[111100]: nova_virtstoraged
Feb 20 09:00:49 np0005625203.localdomain podman[111115]: 2026-02-20 09:00:49.781467157 +0000 UTC m=+0.065137502 container cleanup 73f1f38f88063860c7fee4ea71387b5375dfe69b576a163f03d7bd06870e2e1c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, architecture=x86_64, version=17.1.13, config_id=tripleo_step3, io.openshift.expose-services=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', '/etc/selinux/config:/etc/selinux/config:ro', 
'/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.component=openstack-nova-libvirt-container, description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, build-date=2026-01-12T23:31:49Z, name=rhosp-rhel9/openstack-nova-libvirt, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, release=1766032510, batch=17.1_20260112.1, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.buildah.version=1.41.5, container_name=nova_virtstoraged, managed_by=tripleo_ansible, vendor=Red Hat, Inc., maintainer=OpenStack TripleO Team, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt)
Feb 20 09:00:49 np0005625203.localdomain systemd[1]: libpod-conmon-73f1f38f88063860c7fee4ea71387b5375dfe69b576a163f03d7bd06870e2e1c.scope: Deactivated successfully.
Feb 20 09:00:49 np0005625203.localdomain podman[111143]: error opening file `/run/crun/73f1f38f88063860c7fee4ea71387b5375dfe69b576a163f03d7bd06870e2e1c/status`: No such file or directory
Feb 20 09:00:49 np0005625203.localdomain podman[111130]: 2026-02-20 09:00:49.875033565 +0000 UTC m=+0.062546131 container cleanup 73f1f38f88063860c7fee4ea71387b5375dfe69b576a163f03d7bd06870e2e1c (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtstoraged, managed_by=tripleo_ansible, container_name=nova_virtstoraged, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, konflux.additional-tags=17.1.13 17.1_20260112.1, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, release=1766032510, name=rhosp-rhel9/openstack-nova-libvirt, architecture=x86_64, description=Red Hat OpenStack Platform 17.1 nova-libvirt, config_data={'cgroupns': 'host', 'depends_on': ['tripleo_nova_virtlogd_wrapper.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '2eb7e8e9794eebaba92e1ff8facc8868'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1', 'net': 'host', 'pid': 'host', 'pids_limit': 65536, 'privileged': True, 'restart': 'always', 'security_opt': ['label=level:s0', 'label=type:spc_t', 'label=filetype:container_file_t'], 'start_order': 3, 'ulimit': ['nofile=131072', 'nproc=126960'], 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/libvirt:/var/log/libvirt:shared,z', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/run:/run', '/sys/fs/cgroup:/sys/fs/cgroup', '/sys/fs/selinux:/sys/fs/selinux', 
'/etc/selinux/config:/etc/selinux/config:ro', '/etc/libvirt:/etc/libvirt:shared', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/libvirt:/var/lib/libvirt:shared', '/var/cache/libvirt:/var/cache/libvirt:shared', '/var/lib/vhost_sockets:/var/lib/vhost_sockets', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/var/lib/kolla/config_files/nova_virtstoraged.json:/var/lib/kolla/config_files/config.json:ro']}, org.opencontainers.image.created=2026-01-12T23:31:49Z, com.redhat.component=openstack-nova-libvirt-container, batch=17.1_20260112.1, vcs-type=git, version=17.1.13, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, io.openshift.expose-services=, build-date=2026-01-12T23:31:49Z, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step3, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, distribution-scope=public)
Feb 20 09:00:49 np0005625203.localdomain podman[111130]: nova_virtstoraged
Feb 20 09:00:49 np0005625203.localdomain systemd[1]: tripleo_nova_virtstoraged.service: Deactivated successfully.
Feb 20 09:00:49 np0005625203.localdomain systemd[1]: Stopped nova_virtstoraged container.
Feb 20 09:00:49 np0005625203.localdomain sudo[111057]: pam_unix(sudo:session): session closed for user root
Feb 20 09:00:50 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2236 DF PROTO=TCP SPT=49106 DPT=9100 SEQ=1193619229 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EAC1800000000001030307) 
Feb 20 09:00:50 np0005625203.localdomain sudo[111234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vditbphvqjbzqffbxsryzukvxgswhtvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578050.0313642-109-72479578048510/AnsiballZ_systemd_service.py
Feb 20 09:00:50 np0005625203.localdomain sudo[111234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:00:50 np0005625203.localdomain sudo[111237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:00:50 np0005625203.localdomain sudo[111237]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:00:50 np0005625203.localdomain sudo[111237]: pam_unix(sudo:session): session closed for user root
Feb 20 09:00:50 np0005625203.localdomain sudo[111252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 09:00:50 np0005625203.localdomain sudo[111252]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:00:50 np0005625203.localdomain python3.9[111236]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ovn_controller.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:00:50 np0005625203.localdomain systemd[1]: tmp-crun.dzPx8E.mount: Deactivated successfully.
Feb 20 09:00:50 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-126ec7093b4409981b35de203497fb4c932b8ea0a58e787934a1a228394ab4e1-merged.mount: Deactivated successfully.
Feb 20 09:00:50 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-73f1f38f88063860c7fee4ea71387b5375dfe69b576a163f03d7bd06870e2e1c-userdata-shm.mount: Deactivated successfully.
Feb 20 09:00:51 np0005625203.localdomain sudo[111252]: pam_unix(sudo:session): session closed for user root
Feb 20 09:00:51 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:00:51 np0005625203.localdomain systemd-rc-local-generator[111322]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:00:51 np0005625203.localdomain systemd-sysv-generator[111327]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:00:51 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:00:51 np0005625203.localdomain systemd[1]: Stopping ovn_controller container...
Feb 20 09:00:52 np0005625203.localdomain systemd[1]: libpod-d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.scope: Deactivated successfully.
Feb 20 09:00:52 np0005625203.localdomain systemd[1]: libpod-d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.scope: Consumed 2.557s CPU time.
Feb 20 09:00:52 np0005625203.localdomain podman[111339]: 2026-02-20 09:00:52.008310647 +0000 UTC m=+0.076888059 container died d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.openshift.expose-services=, container_name=ovn_controller, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, batch=17.1_20260112.1, config_id=tripleo_step4, description=Red Hat OpenStack Platform 17.1 ovn-controller, io.buildah.version=1.41.5, summary=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-01-12T22:36:40Z, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, managed_by=tripleo_ansible, vcs-type=git, com.redhat.component=openstack-ovn-controller-container, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, maintainer=OpenStack TripleO Team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, release=1766032510, url=https://www.redhat.com, build-date=2026-01-12T22:36:40Z, distribution-scope=public, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ovn-controller, cpe=cpe:/a:redhat:openstack:17.1::el9)
Feb 20 09:00:52 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.timer: Deactivated successfully.
Feb 20 09:00:52 np0005625203.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.
Feb 20 09:00:52 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Failed to open /run/systemd/transient/d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: No such file or directory
Feb 20 09:00:52 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933-userdata-shm.mount: Deactivated successfully.
Feb 20 09:00:52 np0005625203.localdomain podman[111339]: 2026-02-20 09:00:52.061733133 +0000 UTC m=+0.130310535 container cleanup d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., url=https://www.redhat.com, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, config_id=tripleo_step4, release=1766032510, vcs-type=git, architecture=x86_64, tcib_managed=true, batch=17.1_20260112.1, version=17.1.13, konflux.additional-tags=17.1.13 17.1_20260112.1, name=rhosp-rhel9/openstack-ovn-controller, org.opencontainers.image.created=2026-01-12T22:36:40Z, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, summary=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ovn-controller, container_name=ovn_controller, managed_by=tripleo_ansible, cpe=cpe:/a:redhat:openstack:17.1::el9, com.redhat.component=openstack-ovn-controller-container, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.buildah.version=1.41.5)
Feb 20 09:00:52 np0005625203.localdomain podman[111339]: ovn_controller
Feb 20 09:00:52 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.timer: Failed to open /run/systemd/transient/d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.timer: No such file or directory
Feb 20 09:00:52 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Failed to open /run/systemd/transient/d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: No such file or directory
Feb 20 09:00:52 np0005625203.localdomain podman[111353]: 2026-02-20 09:00:52.090027485 +0000 UTC m=+0.070018884 container cleanup d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, version=17.1.13, architecture=x86_64, io.buildah.version=1.41.5, io.openshift.expose-services=, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 ovn-controller, maintainer=OpenStack TripleO Team, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, distribution-scope=public, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, url=https://www.redhat.com, name=rhosp-rhel9/openstack-ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, vcs-type=git, description=Red Hat OpenStack Platform 17.1 ovn-controller, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:36:40Z, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:36:40Z, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, vendor=Red Hat, Inc., managed_by=tripleo_ansible, tcib_managed=true, release=1766032510, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_controller, com.redhat.component=openstack-ovn-controller-container)
Feb 20 09:00:52 np0005625203.localdomain systemd[1]: libpod-conmon-d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.scope: Deactivated successfully.
Feb 20 09:00:52 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.timer: Failed to open /run/systemd/transient/d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.timer: No such file or directory
Feb 20 09:00:52 np0005625203.localdomain systemd[1]: d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: Failed to open /run/systemd/transient/d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933.service: No such file or directory
Feb 20 09:00:52 np0005625203.localdomain podman[111367]: 2026-02-20 09:00:52.190140047 +0000 UTC m=+0.065983739 container cleanup d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, build-date=2026-01-12T22:36:40Z, config_id=tripleo_step4, version=17.1.13, com.redhat.component=openstack-ovn-controller-container, description=Red Hat OpenStack Platform 17.1 ovn-controller, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, container_name=ovn_controller, maintainer=OpenStack TripleO Team, tcib_managed=true, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, io.openshift.expose-services=, summary=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, cpe=cpe:/a:redhat:openstack:17.1::el9, release=1766032510, org.opencontainers.image.created=2026-01-12T22:36:40Z, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, architecture=x86_64, name=rhosp-rhel9/openstack-ovn-controller, konflux.additional-tags=17.1.13 17.1_20260112.1, managed_by=tripleo_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://www.redhat.com, vendor=Red Hat, Inc.)
Feb 20 09:00:52 np0005625203.localdomain podman[111367]: ovn_controller
Feb 20 09:00:52 np0005625203.localdomain systemd[1]: tripleo_ovn_controller.service: Deactivated successfully.
Feb 20 09:00:52 np0005625203.localdomain systemd[1]: Stopped ovn_controller container.
Feb 20 09:00:52 np0005625203.localdomain sudo[111234]: pam_unix(sudo:session): session closed for user root
Feb 20 09:00:52 np0005625203.localdomain sudo[111469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hacwwtfxizamrnmvycjyuxlzcdgnqzma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578052.36061-109-152927956299825/AnsiballZ_systemd_service.py
Feb 20 09:00:52 np0005625203.localdomain sudo[111469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:00:52 np0005625203.localdomain python3.9[111471]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ovn_metadata_agent.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:00:52 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:00:53 np0005625203.localdomain systemd-sysv-generator[111501]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:00:53 np0005625203.localdomain systemd-rc-local-generator[111496]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:00:53 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32101 DF PROTO=TCP SPT=44420 DPT=9101 SEQ=2093386438 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EACD000000000001030307) 
Feb 20 09:00:53 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:00:53 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-e8d390892008df4878068c7165ea27144a38c2b77b5a7d1d8c8cfe57dd3f055d-merged.mount: Deactivated successfully.
Feb 20 09:00:53 np0005625203.localdomain systemd[1]: Stopping ovn_metadata_agent container...
Feb 20 09:00:53 np0005625203.localdomain systemd[1]: tmp-crun.eFNKDt.mount: Deactivated successfully.
Feb 20 09:00:53 np0005625203.localdomain systemd[1]: libpod-be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.scope: Deactivated successfully.
Feb 20 09:00:53 np0005625203.localdomain systemd[1]: libpod-be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.scope: Consumed 9.419s CPU time.
Feb 20 09:00:53 np0005625203.localdomain podman[111512]: 2026-02-20 09:00:53.449329212 +0000 UTC m=+0.184721811 container died be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, url=https://www.redhat.com, io.buildah.version=1.41.5, batch=17.1_20260112.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, release=1766032510, container_name=ovn_metadata_agent, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, distribution-scope=public, maintainer=OpenStack TripleO Team, version=17.1.13, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, tcib_managed=true, build-date=2026-01-12T22:56:19Z, vcs-type=git, cpe=cpe:/a:redhat:openstack:17.1::el9, config_id=tripleo_step4, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, architecture=x86_64)
Feb 20 09:00:53 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.timer: Deactivated successfully.
Feb 20 09:00:53 np0005625203.localdomain systemd[1]: Stopped /usr/bin/podman healthcheck run be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.
Feb 20 09:00:53 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Failed to open /run/systemd/transient/be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: No such file or directory
Feb 20 09:00:53 np0005625203.localdomain podman[111512]: 2026-02-20 09:00:53.518241882 +0000 UTC m=+0.253634471 container cleanup be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, maintainer=OpenStack TripleO Team, architecture=x86_64, container_name=ovn_metadata_agent, cpe=cpe:/a:redhat:openstack:17.1::el9, vendor=Red Hat, Inc., managed_by=tripleo_ansible, konflux.additional-tags=17.1.13 17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, version=17.1.13, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T22:56:19Z, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, release=1766032510, config_id=tripleo_step4, tcib_managed=true, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 09:00:53 np0005625203.localdomain podman[111512]: ovn_metadata_agent
Feb 20 09:00:53 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.timer: Failed to open /run/systemd/transient/be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.timer: No such file or directory
Feb 20 09:00:53 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Failed to open /run/systemd/transient/be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: No such file or directory
Feb 20 09:00:53 np0005625203.localdomain podman[111525]: 2026-02-20 09:00:53.547272296 +0000 UTC m=+0.092495535 container cleanup be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, io.openshift.expose-services=, managed_by=tripleo_ansible, distribution-scope=public, org.opencontainers.image.created=2026-01-12T22:56:19Z, config_id=tripleo_step4, batch=17.1_20260112.1, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.buildah.version=1.41.5, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, tcib_managed=true, url=https://www.redhat.com, maintainer=OpenStack TripleO Team, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, build-date=2026-01-12T22:56:19Z, container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, vcs-type=git, version=17.1.13, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, architecture=x86_64)
Feb 20 09:00:53 np0005625203.localdomain systemd[1]: libpod-conmon-be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.scope: Deactivated successfully.
Feb 20 09:00:53 np0005625203.localdomain podman[111557]: error opening file `/run/crun/be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e/status`: No such file or directory
Feb 20 09:00:53 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.timer: Failed to open /run/systemd/transient/be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.timer: No such file or directory
Feb 20 09:00:53 np0005625203.localdomain systemd[1]: be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: Failed to open /run/systemd/transient/be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e.service: No such file or directory
Feb 20 09:00:53 np0005625203.localdomain podman[111545]: 2026-02-20 09:00:53.659973041 +0000 UTC m=+0.078375205 container cleanup be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, tcib_managed=true, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=ovn_metadata_agent, release=1766032510, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, build-date=2026-01-12T22:56:19Z, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.created=2026-01-12T22:56:19Z, version=17.1.13, io.buildah.version=1.41.5, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-type=git, maintainer=OpenStack TripleO Team, distribution-scope=public, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, batch=17.1_20260112.1, io.openshift.expose-services=, url=https://www.redhat.com)
Feb 20 09:00:53 np0005625203.localdomain podman[111545]: ovn_metadata_agent
Feb 20 09:00:53 np0005625203.localdomain systemd[1]: tripleo_ovn_metadata_agent.service: Deactivated successfully.
Feb 20 09:00:53 np0005625203.localdomain systemd[1]: Stopped ovn_metadata_agent container.
Feb 20 09:00:53 np0005625203.localdomain sudo[111469]: pam_unix(sudo:session): session closed for user root
Feb 20 09:00:54 np0005625203.localdomain sudo[111618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:00:54 np0005625203.localdomain sudo[111618]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:00:54 np0005625203.localdomain sudo[111618]: pam_unix(sudo:session): session closed for user root
Feb 20 09:00:54 np0005625203.localdomain sudo[111663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jiwabrfuhhqyidiiivkadksqyedewcgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578053.8476694-109-106814348387136/AnsiballZ_systemd_service.py
Feb 20 09:00:54 np0005625203.localdomain sudo[111663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:00:54 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-e7d6143fe0f43bd2b55aaa133ca6222921bb092a5876f130467bb0649b23807e-merged.mount: Deactivated successfully.
Feb 20 09:00:54 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e-userdata-shm.mount: Deactivated successfully.
Feb 20 09:00:54 np0005625203.localdomain python3.9[111665]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_rsyslog.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:00:54 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:00:54 np0005625203.localdomain systemd-rc-local-generator[111694]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:00:54 np0005625203.localdomain systemd-sysv-generator[111698]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:00:54 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:00:54 np0005625203.localdomain sudo[111663]: pam_unix(sudo:session): session closed for user root
Feb 20 09:00:56 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2238 DF PROTO=TCP SPT=49106 DPT=9100 SEQ=1193619229 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EAD9400000000001030307) 
Feb 20 09:00:59 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29444 DF PROTO=TCP SPT=54224 DPT=9102 SEQ=263468790 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EAE5450000000001030307) 
Feb 20 09:01:01 np0005625203.localdomain CROND[111719]: (root) CMD (run-parts /etc/cron.hourly)
Feb 20 09:01:01 np0005625203.localdomain run-parts[111722]: (/etc/cron.hourly) starting 0anacron
Feb 20 09:01:01 np0005625203.localdomain run-parts[111728]: (/etc/cron.hourly) finished 0anacron
Feb 20 09:01:01 np0005625203.localdomain CROND[111718]: (root) CMDEND (run-parts /etc/cron.hourly)
Feb 20 09:01:02 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44641 DF PROTO=TCP SPT=42076 DPT=9882 SEQ=1104435642 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EAF0000000000001030307) 
Feb 20 09:01:06 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44642 DF PROTO=TCP SPT=42076 DPT=9882 SEQ=1104435642 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EAFFC10000000001030307) 
Feb 20 09:01:08 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51383 DF PROTO=TCP SPT=55240 DPT=9105 SEQ=2219370072 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EB07800000000001030307) 
Feb 20 09:01:14 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51385 DF PROTO=TCP SPT=55240 DPT=9105 SEQ=2219370072 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EB1F400000000001030307) 
Feb 20 09:01:14 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29448 DF PROTO=TCP SPT=54224 DPT=9102 SEQ=263468790 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EB20810000000001030307) 
Feb 20 09:01:17 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33533 DF PROTO=TCP SPT=32840 DPT=9101 SEQ=4046541887 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EB2A800000000001030307) 
Feb 20 09:01:20 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52071 DF PROTO=TCP SPT=39138 DPT=9101 SEQ=2817991505 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EB36800000000001030307) 
Feb 20 09:01:22 np0005625203.localdomain sshd[111729]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:01:23 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33535 DF PROTO=TCP SPT=32840 DPT=9101 SEQ=4046541887 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EB42400000000001030307) 
Feb 20 09:01:23 np0005625203.localdomain sshd[111729]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:01:25 np0005625203.localdomain sshd[111731]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:01:25 np0005625203.localdomain sshd[111731]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:01:26 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43808 DF PROTO=TCP SPT=52118 DPT=9100 SEQ=4145002780 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EB4E800000000001030307) 
Feb 20 09:01:29 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6578 DF PROTO=TCP SPT=48696 DPT=9102 SEQ=1238799612 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EB5A770000000001030307) 
Feb 20 09:01:32 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50236 DF PROTO=TCP SPT=38138 DPT=9882 SEQ=2968288146 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EB65400000000001030307) 
Feb 20 09:01:36 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50237 DF PROTO=TCP SPT=38138 DPT=9882 SEQ=2968288146 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EB75000000000001030307) 
Feb 20 09:01:38 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59751 DF PROTO=TCP SPT=56926 DPT=9105 SEQ=2291121811 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EB7CC10000000001030307) 
Feb 20 09:01:41 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52459 DF PROTO=TCP SPT=51032 DPT=9105 SEQ=2495805621 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EB88800000000001030307) 
Feb 20 09:01:44 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50238 DF PROTO=TCP SPT=38138 DPT=9882 SEQ=2968288146 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EB94800000000001030307) 
Feb 20 09:01:47 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63360 DF PROTO=TCP SPT=33552 DPT=9101 SEQ=3630784758 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EB9FC10000000001030307) 
Feb 20 09:01:50 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14461 DF PROTO=TCP SPT=45500 DPT=9100 SEQ=579367161 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EBABC00000000001030307) 
Feb 20 09:01:53 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63362 DF PROTO=TCP SPT=33552 DPT=9101 SEQ=3630784758 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EBB7800000000001030307) 
Feb 20 09:01:54 np0005625203.localdomain sudo[111733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:01:54 np0005625203.localdomain sudo[111733]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:01:54 np0005625203.localdomain sudo[111733]: pam_unix(sudo:session): session closed for user root
Feb 20 09:01:54 np0005625203.localdomain sudo[111748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 check-host
Feb 20 09:01:54 np0005625203.localdomain sudo[111748]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:01:54 np0005625203.localdomain sudo[111748]: pam_unix(sudo:session): session closed for user root
Feb 20 09:01:54 np0005625203.localdomain sudo[111785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:01:54 np0005625203.localdomain sudo[111785]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:01:54 np0005625203.localdomain sudo[111785]: pam_unix(sudo:session): session closed for user root
Feb 20 09:01:54 np0005625203.localdomain sudo[111800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 09:01:54 np0005625203.localdomain sudo[111800]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:01:55 np0005625203.localdomain sudo[111800]: pam_unix(sudo:session): session closed for user root
Feb 20 09:01:56 np0005625203.localdomain sudo[111846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:01:56 np0005625203.localdomain sudo[111846]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:01:56 np0005625203.localdomain sudo[111846]: pam_unix(sudo:session): session closed for user root
Feb 20 09:01:56 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14463 DF PROTO=TCP SPT=45500 DPT=9100 SEQ=579367161 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EBC3800000000001030307) 
Feb 20 09:01:59 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44273 DF PROTO=TCP SPT=36496 DPT=9102 SEQ=3429863101 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EBCFA50000000001030307) 
Feb 20 09:02:02 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24616 DF PROTO=TCP SPT=38826 DPT=9882 SEQ=3748715892 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EBDA810000000001030307) 
Feb 20 09:02:06 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24617 DF PROTO=TCP SPT=38826 DPT=9882 SEQ=3748715892 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EBEA400000000001030307) 
Feb 20 09:02:08 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1296 DF PROTO=TCP SPT=45524 DPT=9105 SEQ=2990554850 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EBF2000000000001030307) 
Feb 20 09:02:10 np0005625203.localdomain sshd[111861]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:02:10 np0005625203.localdomain sshd[111861]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:02:11 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51388 DF PROTO=TCP SPT=55240 DPT=9105 SEQ=2219370072 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EBFE800000000001030307) 
Feb 20 09:02:14 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1298 DF PROTO=TCP SPT=45524 DPT=9105 SEQ=2990554850 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EC09C00000000001030307) 
Feb 20 09:02:14 np0005625203.localdomain sshd[111863]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:02:15 np0005625203.localdomain sshd[106492]: Received disconnect from 192.168.122.30 port 46740:11: disconnected by user
Feb 20 09:02:15 np0005625203.localdomain sshd[106492]: Disconnected from user zuul 192.168.122.30 port 46740
Feb 20 09:02:15 np0005625203.localdomain sshd[106489]: pam_unix(sshd:session): session closed for user zuul
Feb 20 09:02:15 np0005625203.localdomain systemd[1]: session-36.scope: Deactivated successfully.
Feb 20 09:02:15 np0005625203.localdomain systemd[1]: session-36.scope: Consumed 19.034s CPU time.
Feb 20 09:02:15 np0005625203.localdomain systemd-logind[759]: Session 36 logged out. Waiting for processes to exit.
Feb 20 09:02:15 np0005625203.localdomain systemd-logind[759]: Removed session 36.
Feb 20 09:02:16 np0005625203.localdomain sshd[111863]: Received disconnect from 103.200.25.162 port 58286:11: Bye Bye [preauth]
Feb 20 09:02:16 np0005625203.localdomain sshd[111863]: Disconnected from authenticating user root 103.200.25.162 port 58286 [preauth]
Feb 20 09:02:16 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37561 DF PROTO=TCP SPT=50910 DPT=9101 SEQ=3553616963 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EC14C00000000001030307) 
Feb 20 09:02:20 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33538 DF PROTO=TCP SPT=32840 DPT=9101 SEQ=4046541887 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EC20800000000001030307) 
Feb 20 09:02:23 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43811 DF PROTO=TCP SPT=52118 DPT=9100 SEQ=4145002780 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EC2C810000000001030307) 
Feb 20 09:02:26 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39217 DF PROTO=TCP SPT=54866 DPT=9100 SEQ=2484037534 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EC38C10000000001030307) 
Feb 20 09:02:29 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35366 DF PROTO=TCP SPT=53840 DPT=9102 SEQ=1614718412 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EC44D50000000001030307) 
Feb 20 09:02:32 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62085 DF PROTO=TCP SPT=48310 DPT=9882 SEQ=2834945839 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EC4FC00000000001030307) 
Feb 20 09:02:36 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62086 DF PROTO=TCP SPT=48310 DPT=9882 SEQ=2834945839 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EC5F800000000001030307) 
Feb 20 09:02:38 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47370 DF PROTO=TCP SPT=49800 DPT=9105 SEQ=953672269 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EC67400000000001030307) 
Feb 20 09:02:40 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59756 DF PROTO=TCP SPT=56926 DPT=9105 SEQ=2291121811 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EC72810000000001030307) 
Feb 20 09:02:44 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47372 DF PROTO=TCP SPT=49800 DPT=9105 SEQ=953672269 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EC7F000000000001030307) 
Feb 20 09:02:47 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14964 DF PROTO=TCP SPT=47884 DPT=9101 SEQ=1907717305 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EC8A010000000001030307) 
Feb 20 09:02:50 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9192 DF PROTO=TCP SPT=43696 DPT=9100 SEQ=655736717 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EC96400000000001030307) 
Feb 20 09:02:53 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14966 DF PROTO=TCP SPT=47884 DPT=9101 SEQ=1907717305 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9ECA1C00000000001030307) 
Feb 20 09:02:53 np0005625203.localdomain sshd[111865]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:02:54 np0005625203.localdomain sshd[111865]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:02:55 np0005625203.localdomain sshd[111867]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:02:55 np0005625203.localdomain sshd[111867]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:02:56 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9194 DF PROTO=TCP SPT=43696 DPT=9100 SEQ=655736717 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9ECAE000000000001030307) 
Feb 20 09:02:56 np0005625203.localdomain sudo[111869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:02:56 np0005625203.localdomain sudo[111869]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:02:56 np0005625203.localdomain sudo[111869]: pam_unix(sudo:session): session closed for user root
Feb 20 09:02:56 np0005625203.localdomain sudo[111884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 09:02:56 np0005625203.localdomain sudo[111884]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:02:57 np0005625203.localdomain sudo[111884]: pam_unix(sudo:session): session closed for user root
Feb 20 09:02:57 np0005625203.localdomain sudo[111931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:02:57 np0005625203.localdomain sudo[111931]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:02:57 np0005625203.localdomain sudo[111931]: pam_unix(sudo:session): session closed for user root
Feb 20 09:02:59 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7751 DF PROTO=TCP SPT=40806 DPT=9102 SEQ=1875498672 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9ECBA050000000001030307) 
Feb 20 09:03:02 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13103 DF PROTO=TCP SPT=57396 DPT=9882 SEQ=3763332833 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9ECC4C00000000001030307) 
Feb 20 09:03:06 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13104 DF PROTO=TCP SPT=57396 DPT=9882 SEQ=3763332833 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9ECD4800000000001030307) 
Feb 20 09:03:08 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13290 DF PROTO=TCP SPT=45414 DPT=9105 SEQ=1719007338 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9ECDC410000000001030307) 
Feb 20 09:03:11 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1301 DF PROTO=TCP SPT=45524 DPT=9105 SEQ=2990554850 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9ECE8800000000001030307) 
Feb 20 09:03:14 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13292 DF PROTO=TCP SPT=45414 DPT=9105 SEQ=1719007338 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9ECF4000000000001030307) 
Feb 20 09:03:17 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52881 DF PROTO=TCP SPT=53652 DPT=9101 SEQ=99618278 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9ECFF410000000001030307) 
Feb 20 09:03:20 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34615 DF PROTO=TCP SPT=49044 DPT=9100 SEQ=2290984328 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9ED0B800000000001030307) 
Feb 20 09:03:23 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52883 DF PROTO=TCP SPT=53652 DPT=9101 SEQ=99618278 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9ED17000000000001030307) 
Feb 20 09:03:26 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34617 DF PROTO=TCP SPT=49044 DPT=9100 SEQ=2290984328 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9ED23400000000001030307) 
Feb 20 09:03:28 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64893 DF PROTO=TCP SPT=41650 DPT=9882 SEQ=3784742134 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9ED2DFD0000000001030307) 
Feb 20 09:03:32 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64895 DF PROTO=TCP SPT=41650 DPT=9882 SEQ=3784742134 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9ED3A000000000001030307) 
Feb 20 09:03:36 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64896 DF PROTO=TCP SPT=41650 DPT=9882 SEQ=3784742134 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9ED49C00000000001030307) 
Feb 20 09:03:38 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46051 DF PROTO=TCP SPT=35380 DPT=9105 SEQ=3028066831 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9ED51800000000001030307) 
Feb 20 09:03:40 np0005625203.localdomain sshd[111946]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:03:41 np0005625203.localdomain sshd[111946]: Invalid user oracle from 103.61.123.132 port 58004
Feb 20 09:03:41 np0005625203.localdomain sshd[111948]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:03:41 np0005625203.localdomain sshd[111950]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:03:41 np0005625203.localdomain sshd[111948]: Accepted publickey for zuul from 192.168.122.30 port 35198 ssh2: RSA SHA256:uaxCfdsB9dLyL4znjsLCPDTT6O+6xZzjHRjczJFOA6k
Feb 20 09:03:41 np0005625203.localdomain systemd-logind[759]: New session 37 of user zuul.
Feb 20 09:03:41 np0005625203.localdomain systemd[1]: Started Session 37 of User zuul.
Feb 20 09:03:41 np0005625203.localdomain sshd[111948]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Feb 20 09:03:41 np0005625203.localdomain sshd[111946]: Received disconnect from 103.61.123.132 port 58004:11: Bye Bye [preauth]
Feb 20 09:03:41 np0005625203.localdomain sshd[111946]: Disconnected from invalid user oracle 103.61.123.132 port 58004 [preauth]
Feb 20 09:03:42 np0005625203.localdomain sshd[111950]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:03:42 np0005625203.localdomain sudo[112029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rjqtqeoboswpvmuqjxybztsfgfccpocx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578221.9383748-559-245147037493216/AnsiballZ_file.py
Feb 20 09:03:42 np0005625203.localdomain sudo[112029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:03:42 np0005625203.localdomain python3.9[112031]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:03:42 np0005625203.localdomain sudo[112029]: pam_unix(sudo:session): session closed for user root
Feb 20 09:03:42 np0005625203.localdomain sudo[112121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nzsozepjvdutmiejqhebprwejirmeenk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578222.5052562-559-62578123788635/AnsiballZ_file.py
Feb 20 09:03:42 np0005625203.localdomain sudo[112121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:03:42 np0005625203.localdomain python3.9[112123]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:03:42 np0005625203.localdomain sudo[112121]: pam_unix(sudo:session): session closed for user root
Feb 20 09:03:43 np0005625203.localdomain sudo[112213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-frarxpsnurxlkcfuyjofaiqiswyahqws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578223.0707748-559-68794342954362/AnsiballZ_file.py
Feb 20 09:03:43 np0005625203.localdomain sudo[112213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:03:43 np0005625203.localdomain python3.9[112215]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_collectd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:03:43 np0005625203.localdomain sudo[112213]: pam_unix(sudo:session): session closed for user root
Feb 20 09:03:43 np0005625203.localdomain sudo[112305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cxlwpffhnvaflqwnxbtlzibxsolaiupg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578223.6043475-559-106460921677475/AnsiballZ_file.py
Feb 20 09:03:43 np0005625203.localdomain sudo[112305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:03:44 np0005625203.localdomain python3.9[112307]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_iscsid.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:03:44 np0005625203.localdomain sudo[112305]: pam_unix(sudo:session): session closed for user root
Feb 20 09:03:44 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46053 DF PROTO=TCP SPT=35380 DPT=9105 SEQ=3028066831 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9ED69400000000001030307) 
Feb 20 09:03:44 np0005625203.localdomain sudo[112397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-frklestjgekrsxjeyboudstguiifmtca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578224.161182-559-160630531919069/AnsiballZ_file.py
Feb 20 09:03:44 np0005625203.localdomain sudo[112397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:03:44 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64897 DF PROTO=TCP SPT=41650 DPT=9882 SEQ=3784742134 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9ED6A800000000001030307) 
Feb 20 09:03:44 np0005625203.localdomain python3.9[112399]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_logrotate_crond.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:03:44 np0005625203.localdomain sudo[112397]: pam_unix(sudo:session): session closed for user root
Feb 20 09:03:45 np0005625203.localdomain sudo[112489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yahhyhjjwvjzduohlzlmvwxpccjwyrxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578224.7584581-559-235137938118476/AnsiballZ_file.py
Feb 20 09:03:45 np0005625203.localdomain sudo[112489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:03:45 np0005625203.localdomain python3.9[112491]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_metrics_qdr.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:03:45 np0005625203.localdomain sudo[112489]: pam_unix(sudo:session): session closed for user root
Feb 20 09:03:45 np0005625203.localdomain sudo[112581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jvqvmthshnoxdastjipvofiefaxceqfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578225.375458-559-209969909585380/AnsiballZ_file.py
Feb 20 09:03:45 np0005625203.localdomain sudo[112581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:03:45 np0005625203.localdomain python3.9[112583]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_neutron_dhcp.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:03:45 np0005625203.localdomain sudo[112581]: pam_unix(sudo:session): session closed for user root
Feb 20 09:03:46 np0005625203.localdomain sudo[112673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tmteoiqvffnvkhasewaxfixgvjkkguno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578225.9575498-559-179309315025701/AnsiballZ_file.py
Feb 20 09:03:46 np0005625203.localdomain sudo[112673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:03:46 np0005625203.localdomain python3.9[112675]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_neutron_l3_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:03:46 np0005625203.localdomain sudo[112673]: pam_unix(sudo:session): session closed for user root
Feb 20 09:03:46 np0005625203.localdomain sudo[112765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xtfdqmluioqsxllzejrprublftcbbkim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578226.5151548-559-45286253332157/AnsiballZ_file.py
Feb 20 09:03:46 np0005625203.localdomain sudo[112765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:03:46 np0005625203.localdomain python3.9[112767]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_neutron_ovs_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:03:46 np0005625203.localdomain sudo[112765]: pam_unix(sudo:session): session closed for user root
Feb 20 09:03:47 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37001 DF PROTO=TCP SPT=33684 DPT=9101 SEQ=65978452 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9ED74800000000001030307) 
Feb 20 09:03:47 np0005625203.localdomain auditd[725]: Audit daemon rotating log files
Feb 20 09:03:47 np0005625203.localdomain sudo[112857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jzppxxcjjumzddjkklxpbxhiwcmkuzhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578227.0619917-559-92462467321440/AnsiballZ_file.py
Feb 20 09:03:47 np0005625203.localdomain sudo[112857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:03:47 np0005625203.localdomain python3.9[112859]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:03:47 np0005625203.localdomain sudo[112857]: pam_unix(sudo:session): session closed for user root
Feb 20 09:03:47 np0005625203.localdomain sshd[112919]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:03:47 np0005625203.localdomain sudo[112951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oflwcgmvhaqejluwrtqxoptcklcctbxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578227.6277418-559-101932502815558/AnsiballZ_file.py
Feb 20 09:03:47 np0005625203.localdomain sudo[112951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:03:48 np0005625203.localdomain python3.9[112953]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:03:48 np0005625203.localdomain sudo[112951]: pam_unix(sudo:session): session closed for user root
Feb 20 09:03:48 np0005625203.localdomain sudo[113043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nymnjdailfdyjbikzgskxendjmpvldez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578228.18133-559-132056759713177/AnsiballZ_file.py
Feb 20 09:03:48 np0005625203.localdomain sudo[113043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:03:48 np0005625203.localdomain python3.9[113045]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:03:48 np0005625203.localdomain sudo[113043]: pam_unix(sudo:session): session closed for user root
Feb 20 09:03:49 np0005625203.localdomain sshd[112919]: Invalid user sftp from 102.210.148.92 port 34300
Feb 20 09:03:49 np0005625203.localdomain sudo[113135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-toqcfbrowmvlwxrgiyzkxlfxwystrejo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578228.7901318-559-126782418112346/AnsiballZ_file.py
Feb 20 09:03:49 np0005625203.localdomain sudo[113135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:03:49 np0005625203.localdomain python3.9[113137]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:03:49 np0005625203.localdomain sudo[113135]: pam_unix(sudo:session): session closed for user root
Feb 20 09:03:49 np0005625203.localdomain sshd[112919]: Received disconnect from 102.210.148.92 port 34300:11: Bye Bye [preauth]
Feb 20 09:03:49 np0005625203.localdomain sshd[112919]: Disconnected from invalid user sftp 102.210.148.92 port 34300 [preauth]
Feb 20 09:03:49 np0005625203.localdomain sudo[113227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-udpkwljoyjoczghupcxkbjgrffhcvcja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578229.4095125-559-235341271271978/AnsiballZ_file.py
Feb 20 09:03:49 np0005625203.localdomain sudo[113227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:03:49 np0005625203.localdomain python3.9[113229]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:03:49 np0005625203.localdomain sudo[113227]: pam_unix(sudo:session): session closed for user root
Feb 20 09:03:50 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44756 DF PROTO=TCP SPT=48982 DPT=9100 SEQ=377659183 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9ED80800000000001030307) 
Feb 20 09:03:50 np0005625203.localdomain sudo[113319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cvuqihxdfaxyuhvikjxxbeojxepgvjze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578230.0015216-559-9136130226027/AnsiballZ_file.py
Feb 20 09:03:50 np0005625203.localdomain sudo[113319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:03:50 np0005625203.localdomain python3.9[113321]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:03:50 np0005625203.localdomain sudo[113319]: pam_unix(sudo:session): session closed for user root
Feb 20 09:03:50 np0005625203.localdomain sudo[113411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zjaekmxhevabmvmazyreaauqbxgslbtv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578230.6180046-559-93254694620017/AnsiballZ_file.py
Feb 20 09:03:50 np0005625203.localdomain sudo[113411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:03:51 np0005625203.localdomain python3.9[113413]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud_recover.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:03:51 np0005625203.localdomain sudo[113411]: pam_unix(sudo:session): session closed for user root
Feb 20 09:03:51 np0005625203.localdomain sudo[113503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xekiojiyihvensfejycqxlektzhphtgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578231.191849-559-103006191739344/AnsiballZ_file.py
Feb 20 09:03:51 np0005625203.localdomain sudo[113503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:03:51 np0005625203.localdomain python3.9[113505]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:03:51 np0005625203.localdomain sudo[113503]: pam_unix(sudo:session): session closed for user root
Feb 20 09:03:52 np0005625203.localdomain sudo[113595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uvtjgcjrckphmjpmrkuhigwzyvybjroj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578231.854199-559-137973354134471/AnsiballZ_file.py
Feb 20 09:03:52 np0005625203.localdomain sudo[113595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:03:52 np0005625203.localdomain python3.9[113597]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:03:52 np0005625203.localdomain sudo[113595]: pam_unix(sudo:session): session closed for user root
Feb 20 09:03:52 np0005625203.localdomain sudo[113687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dwsnrsoulmgjnubpuyfegtmykzkbomit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578232.44556-559-16890005002115/AnsiballZ_file.py
Feb 20 09:03:52 np0005625203.localdomain sudo[113687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:03:52 np0005625203.localdomain python3.9[113689]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ovn_controller.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:03:52 np0005625203.localdomain sudo[113687]: pam_unix(sudo:session): session closed for user root
Feb 20 09:03:53 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37003 DF PROTO=TCP SPT=33684 DPT=9101 SEQ=65978452 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9ED8C400000000001030307) 
Feb 20 09:03:53 np0005625203.localdomain sudo[113779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cqkyrhnygsyxkzyswobkbgkeqzkaqgrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578233.0226464-559-11674000774591/AnsiballZ_file.py
Feb 20 09:03:53 np0005625203.localdomain sudo[113779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:03:53 np0005625203.localdomain python3.9[113781]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ovn_metadata_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:03:53 np0005625203.localdomain sudo[113779]: pam_unix(sudo:session): session closed for user root
Feb 20 09:03:53 np0005625203.localdomain sudo[113871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pihebwktxwsiodlejcanlsqrozlucgoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578233.6337507-559-19482738181749/AnsiballZ_file.py
Feb 20 09:03:53 np0005625203.localdomain sudo[113871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:03:54 np0005625203.localdomain python3.9[113873]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_rsyslog.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:03:54 np0005625203.localdomain sudo[113871]: pam_unix(sudo:session): session closed for user root
Feb 20 09:03:54 np0005625203.localdomain sudo[113963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bfnvzcftzktvxezbsdowolblajkevloy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578234.397895-1009-248103070151355/AnsiballZ_file.py
Feb 20 09:03:54 np0005625203.localdomain sudo[113963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:03:54 np0005625203.localdomain python3.9[113965]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:03:54 np0005625203.localdomain sudo[113963]: pam_unix(sudo:session): session closed for user root
Feb 20 09:03:55 np0005625203.localdomain sudo[114055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-trjbhoqvybikdfcpmzkfawpzirddscxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578235.009019-1009-241174534070341/AnsiballZ_file.py
Feb 20 09:03:55 np0005625203.localdomain sudo[114055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:03:55 np0005625203.localdomain sshd[114058]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:03:55 np0005625203.localdomain python3.9[114057]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:03:55 np0005625203.localdomain sudo[114055]: pam_unix(sudo:session): session closed for user root
Feb 20 09:03:55 np0005625203.localdomain sudo[114149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cpvgmwrvcnarpshkqrnijowkvusqoxey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578235.5749815-1009-251190913604904/AnsiballZ_file.py
Feb 20 09:03:55 np0005625203.localdomain sudo[114149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:03:56 np0005625203.localdomain python3.9[114151]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_collectd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:03:56 np0005625203.localdomain sudo[114149]: pam_unix(sudo:session): session closed for user root
Feb 20 09:03:56 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44758 DF PROTO=TCP SPT=48982 DPT=9100 SEQ=377659183 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9ED98400000000001030307) 
Feb 20 09:03:56 np0005625203.localdomain sshd[114058]: Invalid user n8n from 212.154.234.9 port 18786
Feb 20 09:03:56 np0005625203.localdomain sudo[114241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ugqyxztcyzkrwatxpnllfkppakwtjhta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578236.157256-1009-177216172734696/AnsiballZ_file.py
Feb 20 09:03:56 np0005625203.localdomain sudo[114241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:03:56 np0005625203.localdomain sshd[114058]: Received disconnect from 212.154.234.9 port 18786:11: Bye Bye [preauth]
Feb 20 09:03:56 np0005625203.localdomain sshd[114058]: Disconnected from invalid user n8n 212.154.234.9 port 18786 [preauth]
Feb 20 09:03:56 np0005625203.localdomain python3.9[114243]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_iscsid.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:03:56 np0005625203.localdomain sudo[114241]: pam_unix(sudo:session): session closed for user root
Feb 20 09:03:57 np0005625203.localdomain sudo[114333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wunshqnpoezmwjgqbbdqvuncydcxxcmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578236.772034-1009-29939664790211/AnsiballZ_file.py
Feb 20 09:03:57 np0005625203.localdomain sudo[114333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:03:57 np0005625203.localdomain python3.9[114335]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_logrotate_crond.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:03:57 np0005625203.localdomain sudo[114333]: pam_unix(sudo:session): session closed for user root
Feb 20 09:03:57 np0005625203.localdomain sudo[114425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ijfgrsfiqchjezytfvoahslasmzbayjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578237.3480713-1009-243683704329558/AnsiballZ_file.py
Feb 20 09:03:57 np0005625203.localdomain sudo[114425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:03:57 np0005625203.localdomain python3.9[114427]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_metrics_qdr.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:03:57 np0005625203.localdomain sudo[114425]: pam_unix(sudo:session): session closed for user root
Feb 20 09:03:57 np0005625203.localdomain sudo[114446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:03:57 np0005625203.localdomain sudo[114446]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:03:57 np0005625203.localdomain sudo[114446]: pam_unix(sudo:session): session closed for user root
Feb 20 09:03:58 np0005625203.localdomain sudo[114489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Feb 20 09:03:58 np0005625203.localdomain sudo[114489]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:03:58 np0005625203.localdomain sudo[114547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tjedcqsdxquaurvvtmwuaabqlcyinoim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578237.9066207-1009-163727356698811/AnsiballZ_file.py
Feb 20 09:03:58 np0005625203.localdomain sudo[114547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:03:58 np0005625203.localdomain python3.9[114549]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_neutron_dhcp.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:03:58 np0005625203.localdomain sudo[114547]: pam_unix(sudo:session): session closed for user root
Feb 20 09:03:58 np0005625203.localdomain podman[114681]: 2026-02-20 09:03:58.68895239 +0000 UTC m=+0.070938783 container exec f6bf94ad66e54cdd610286b233c639418eb2b995978f1a145d6cfa77a016071d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203, architecture=x86_64, vendor=Red Hat, Inc., version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, GIT_BRANCH=main, release=1770267347, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, RELEASE=main, vcs-type=git, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, name=rhceph, build-date=2026-02-09T10:25:24Z, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Feb 20 09:03:58 np0005625203.localdomain sudo[114729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-inlfqecaqachnddwjfyueumeisibjvva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578238.4806104-1009-273065542695627/AnsiballZ_file.py
Feb 20 09:03:58 np0005625203.localdomain sudo[114729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:03:58 np0005625203.localdomain podman[114681]: 2026-02-20 09:03:58.814447573 +0000 UTC m=+0.196433986 container exec_died f6bf94ad66e54cdd610286b233c639418eb2b995978f1a145d6cfa77a016071d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203, RELEASE=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, name=rhceph, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, release=1770267347, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, GIT_BRANCH=main, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, io.buildah.version=1.42.2)
Feb 20 09:03:58 np0005625203.localdomain python3.9[114731]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_neutron_l3_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:03:58 np0005625203.localdomain sudo[114729]: pam_unix(sudo:session): session closed for user root
Feb 20 09:03:58 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18886 DF PROTO=TCP SPT=59504 DPT=9882 SEQ=65204892 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EDA32D0000000001030307) 
Feb 20 09:03:59 np0005625203.localdomain sudo[114489]: pam_unix(sudo:session): session closed for user root
Feb 20 09:03:59 np0005625203.localdomain sudo[114826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:03:59 np0005625203.localdomain sudo[114826]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:03:59 np0005625203.localdomain sudo[114826]: pam_unix(sudo:session): session closed for user root
Feb 20 09:03:59 np0005625203.localdomain sudo[114854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 09:03:59 np0005625203.localdomain sudo[114854]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:03:59 np0005625203.localdomain sudo[114899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aanrwbkcepggqwrmjhzvpewlmnmyonkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578239.109243-1009-15299217533687/AnsiballZ_file.py
Feb 20 09:03:59 np0005625203.localdomain sudo[114899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:03:59 np0005625203.localdomain python3.9[114901]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_neutron_ovs_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:03:59 np0005625203.localdomain sudo[114899]: pam_unix(sudo:session): session closed for user root
Feb 20 09:03:59 np0005625203.localdomain sudo[114854]: pam_unix(sudo:session): session closed for user root
Feb 20 09:03:59 np0005625203.localdomain sudo[115023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vggjrctpzsxauvpodvznijikbfbyamed ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578239.706339-1009-134754578676093/AnsiballZ_file.py
Feb 20 09:03:59 np0005625203.localdomain sudo[115023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:04:00 np0005625203.localdomain python3.9[115025]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:04:00 np0005625203.localdomain sudo[115023]: pam_unix(sudo:session): session closed for user root
Feb 20 09:04:00 np0005625203.localdomain sudo[115085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:04:00 np0005625203.localdomain sudo[115085]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:04:00 np0005625203.localdomain sudo[115085]: pam_unix(sudo:session): session closed for user root
Feb 20 09:04:00 np0005625203.localdomain sudo[115130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-olaxhfvdtkxnespburykjdsziwcltomk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578240.2704005-1009-223361668162273/AnsiballZ_file.py
Feb 20 09:04:00 np0005625203.localdomain sudo[115130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:04:00 np0005625203.localdomain python3.9[115132]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:04:00 np0005625203.localdomain sudo[115130]: pam_unix(sudo:session): session closed for user root
Feb 20 09:04:01 np0005625203.localdomain sudo[115222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wzvbmfflswzpyjqtaaabknzbsklqrkzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578240.8307738-1009-122344043256829/AnsiballZ_file.py
Feb 20 09:04:01 np0005625203.localdomain sudo[115222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:04:01 np0005625203.localdomain python3.9[115224]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:04:01 np0005625203.localdomain sudo[115222]: pam_unix(sudo:session): session closed for user root
Feb 20 09:04:01 np0005625203.localdomain sudo[115314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sznymufvbedmhhqfuxzjfpoxvkcdlsvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578241.452857-1009-96041517276774/AnsiballZ_file.py
Feb 20 09:04:01 np0005625203.localdomain sudo[115314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:04:01 np0005625203.localdomain python3.9[115316]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:04:01 np0005625203.localdomain sudo[115314]: pam_unix(sudo:session): session closed for user root
Feb 20 09:04:02 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18888 DF PROTO=TCP SPT=59504 DPT=9882 SEQ=65204892 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EDAF400000000001030307) 
Feb 20 09:04:02 np0005625203.localdomain sudo[115406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qgbejssuhrjcemtzomfnoczukejhjeah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578242.0398788-1009-170612716065832/AnsiballZ_file.py
Feb 20 09:04:02 np0005625203.localdomain sudo[115406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:04:02 np0005625203.localdomain python3.9[115408]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:04:02 np0005625203.localdomain sudo[115406]: pam_unix(sudo:session): session closed for user root
Feb 20 09:04:02 np0005625203.localdomain sudo[115498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kldbzyeteuurexyweysswxirflejuyif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578242.6120386-1009-280419752778624/AnsiballZ_file.py
Feb 20 09:04:02 np0005625203.localdomain sudo[115498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:04:03 np0005625203.localdomain python3.9[115500]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:04:03 np0005625203.localdomain sudo[115498]: pam_unix(sudo:session): session closed for user root
Feb 20 09:04:03 np0005625203.localdomain sudo[115590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xaxjgptymzckodjkaxugycllmznhqafu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578243.1766565-1009-273055785778585/AnsiballZ_file.py
Feb 20 09:04:03 np0005625203.localdomain sudo[115590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:04:03 np0005625203.localdomain python3.9[115592]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud_recover.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:04:03 np0005625203.localdomain sudo[115590]: pam_unix(sudo:session): session closed for user root
Feb 20 09:04:04 np0005625203.localdomain sudo[115682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tjxrbdurzxsyiaodekmjwfquamqezjkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578243.7929697-1009-229848859872726/AnsiballZ_file.py
Feb 20 09:04:04 np0005625203.localdomain sudo[115682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:04:04 np0005625203.localdomain python3.9[115684]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:04:04 np0005625203.localdomain sudo[115682]: pam_unix(sudo:session): session closed for user root
Feb 20 09:04:04 np0005625203.localdomain sudo[115774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-irwzuemsqiolmxhynngfgtyvzyxnrlxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578244.4035711-1009-78041451050304/AnsiballZ_file.py
Feb 20 09:04:04 np0005625203.localdomain sudo[115774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:04:04 np0005625203.localdomain python3.9[115776]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:04:04 np0005625203.localdomain sudo[115774]: pam_unix(sudo:session): session closed for user root
Feb 20 09:04:05 np0005625203.localdomain sudo[115866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sekbqhwxatgpvmzxepoxplexrqrxynhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578244.9391634-1009-75217723310378/AnsiballZ_file.py
Feb 20 09:04:05 np0005625203.localdomain sudo[115866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:04:05 np0005625203.localdomain python3.9[115868]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ovn_controller.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:04:05 np0005625203.localdomain sudo[115866]: pam_unix(sudo:session): session closed for user root
Feb 20 09:04:05 np0005625203.localdomain sudo[115958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-itjgzlatakhxxumxzwuaxbdhoswrvsoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578245.4792533-1009-85764253222383/AnsiballZ_file.py
Feb 20 09:04:05 np0005625203.localdomain sudo[115958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:04:05 np0005625203.localdomain python3.9[115960]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ovn_metadata_agent.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:04:05 np0005625203.localdomain sudo[115958]: pam_unix(sudo:session): session closed for user root
Feb 20 09:04:06 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18889 DF PROTO=TCP SPT=59504 DPT=9882 SEQ=65204892 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EDBF010000000001030307) 
Feb 20 09:04:06 np0005625203.localdomain sudo[116050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mghiuezhzdvursatzczfylwdgvsdgapj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578246.019116-1009-272188950747375/AnsiballZ_file.py
Feb 20 09:04:06 np0005625203.localdomain sudo[116050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:04:06 np0005625203.localdomain python3.9[116052]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_rsyslog.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:04:06 np0005625203.localdomain sudo[116050]: pam_unix(sudo:session): session closed for user root
Feb 20 09:04:07 np0005625203.localdomain sudo[116142]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yvhdnzrhnamyirinypzplkpddofblaom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578246.8329368-1456-47047004760061/AnsiballZ_command.py
Feb 20 09:04:07 np0005625203.localdomain sudo[116142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:04:07 np0005625203.localdomain python3.9[116144]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                                              systemctl disable --now certmonger.service
                                                              test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                                            fi
                                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:04:07 np0005625203.localdomain sudo[116142]: pam_unix(sudo:session): session closed for user root
Feb 20 09:04:08 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5196 DF PROTO=TCP SPT=48660 DPT=9105 SEQ=2722283333 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EDC6C00000000001030307) 
Feb 20 09:04:08 np0005625203.localdomain python3.9[116236]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 20 09:04:08 np0005625203.localdomain sshd[116238]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:04:08 np0005625203.localdomain sshd[116238]: error: kex_exchange_identification: Connection closed by remote host
Feb 20 09:04:08 np0005625203.localdomain sshd[116238]: Connection closed by 83.168.78.75 port 52956
Feb 20 09:04:08 np0005625203.localdomain sshd[116252]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:04:08 np0005625203.localdomain sudo[116329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ttpfjjafnnnsbkknriqxntiuektqgjoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578248.5313919-1510-278880709942919/AnsiballZ_systemd_service.py
Feb 20 09:04:08 np0005625203.localdomain sudo[116329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:04:08 np0005625203.localdomain sshd[116252]: Invalid user a from 83.168.78.75 port 52970
Feb 20 09:04:09 np0005625203.localdomain sshd[116252]: Connection closed by invalid user a 83.168.78.75 port 52970 [preauth]
Feb 20 09:04:09 np0005625203.localdomain python3.9[116331]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 20 09:04:09 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:04:09 np0005625203.localdomain systemd-rc-local-generator[116353]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:04:09 np0005625203.localdomain systemd-sysv-generator[116356]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:04:09 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:04:09 np0005625203.localdomain sudo[116329]: pam_unix(sudo:session): session closed for user root
Feb 20 09:04:11 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13295 DF PROTO=TCP SPT=45414 DPT=9105 SEQ=1719007338 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EDD2800000000001030307) 
Feb 20 09:04:11 np0005625203.localdomain sudo[116457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bhmqwzezzlbyptbztxynjrirhoiprbsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578251.3408751-1534-280018653875528/AnsiballZ_command.py
Feb 20 09:04:11 np0005625203.localdomain sudo[116457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:04:11 np0005625203.localdomain python3.9[116459]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:04:12 np0005625203.localdomain sudo[116457]: pam_unix(sudo:session): session closed for user root
Feb 20 09:04:13 np0005625203.localdomain sudo[116550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nscheljihnedircuqcyolfvimlhujart ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578252.9408598-1534-59706256159366/AnsiballZ_command.py
Feb 20 09:04:13 np0005625203.localdomain sudo[116550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:04:13 np0005625203.localdomain python3.9[116552]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_ipmi.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:04:13 np0005625203.localdomain sudo[116550]: pam_unix(sudo:session): session closed for user root
Feb 20 09:04:13 np0005625203.localdomain sudo[116643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vqbwifjitbxjkmwqxyuwdntudbcaadry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578253.5382178-1534-37579122232021/AnsiballZ_command.py
Feb 20 09:04:13 np0005625203.localdomain sudo[116643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:04:14 np0005625203.localdomain python3.9[116645]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_collectd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:04:14 np0005625203.localdomain sudo[116643]: pam_unix(sudo:session): session closed for user root
Feb 20 09:04:14 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5198 DF PROTO=TCP SPT=48660 DPT=9105 SEQ=2722283333 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EDDE800000000001030307) 
Feb 20 09:04:14 np0005625203.localdomain sudo[116736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lrcqgndrpybmrdaqqrxtzyakwagmqzhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578254.1339526-1534-209692565398413/AnsiballZ_command.py
Feb 20 09:04:14 np0005625203.localdomain sudo[116736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:04:14 np0005625203.localdomain python3.9[116738]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_iscsid.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:04:15 np0005625203.localdomain sudo[116736]: pam_unix(sudo:session): session closed for user root
Feb 20 09:04:16 np0005625203.localdomain sudo[116829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fhwcwdgovvprpcdhqheokovpgpabkmsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578255.7471244-1534-137715524700559/AnsiballZ_command.py
Feb 20 09:04:16 np0005625203.localdomain sudo[116829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:04:16 np0005625203.localdomain python3.9[116831]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_logrotate_crond.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:04:16 np0005625203.localdomain sudo[116829]: pam_unix(sudo:session): session closed for user root
Feb 20 09:04:16 np0005625203.localdomain sshd[116857]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:04:16 np0005625203.localdomain sudo[116924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wuiujijybetowfwsxurjksfsjlzrkube ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578256.3350391-1534-202524862665497/AnsiballZ_command.py
Feb 20 09:04:16 np0005625203.localdomain sudo[116924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:04:16 np0005625203.localdomain python3.9[116926]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_metrics_qdr.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:04:16 np0005625203.localdomain sudo[116924]: pam_unix(sudo:session): session closed for user root
Feb 20 09:04:16 np0005625203.localdomain sshd[116857]: Invalid user solana from 80.94.92.168 port 59028
Feb 20 09:04:16 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11305 DF PROTO=TCP SPT=37942 DPT=9101 SEQ=3489794858 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EDE9810000000001030307) 
Feb 20 09:04:17 np0005625203.localdomain sshd[116857]: Connection closed by invalid user solana 80.94.92.168 port 59028 [preauth]
Feb 20 09:04:17 np0005625203.localdomain sudo[117017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hxsxwwgnrskioklgxkfeduotdnegeiyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578256.944323-1534-185668171949175/AnsiballZ_command.py
Feb 20 09:04:17 np0005625203.localdomain sudo[117017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:04:17 np0005625203.localdomain python3.9[117019]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_neutron_dhcp.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:04:17 np0005625203.localdomain sudo[117017]: pam_unix(sudo:session): session closed for user root
Feb 20 09:04:17 np0005625203.localdomain sudo[117110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vszfhseykkqavbhzogkajokkmcfbzhvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578257.4541128-1534-117557853903466/AnsiballZ_command.py
Feb 20 09:04:17 np0005625203.localdomain sudo[117110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:04:17 np0005625203.localdomain python3.9[117112]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_neutron_l3_agent.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:04:17 np0005625203.localdomain sudo[117110]: pam_unix(sudo:session): session closed for user root
Feb 20 09:04:18 np0005625203.localdomain sudo[117203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kmckrzegfrhoqnmsbfkchtesyndwvvyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578258.0154185-1534-98837584078817/AnsiballZ_command.py
Feb 20 09:04:18 np0005625203.localdomain sudo[117203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:04:18 np0005625203.localdomain python3.9[117205]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_neutron_ovs_agent.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:04:18 np0005625203.localdomain sudo[117203]: pam_unix(sudo:session): session closed for user root
Feb 20 09:04:18 np0005625203.localdomain sudo[117296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-znjirfwecpheornplzdfoscxsmnvebqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578258.6271467-1534-154051758406833/AnsiballZ_command.py
Feb 20 09:04:18 np0005625203.localdomain sudo[117296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:04:19 np0005625203.localdomain python3.9[117298]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:04:19 np0005625203.localdomain sudo[117296]: pam_unix(sudo:session): session closed for user root
Feb 20 09:04:19 np0005625203.localdomain sudo[117389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-twwazrnamvsxqsxpoabwarfhiizettin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578259.3007185-1534-173823708626019/AnsiballZ_command.py
Feb 20 09:04:19 np0005625203.localdomain sudo[117389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:04:19 np0005625203.localdomain python3.9[117391]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:04:19 np0005625203.localdomain sudo[117389]: pam_unix(sudo:session): session closed for user root
Feb 20 09:04:20 np0005625203.localdomain sudo[117482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-syxnxziqbxokewwyniizfmfsksdryzpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578259.8320842-1534-181318435460225/AnsiballZ_command.py
Feb 20 09:04:20 np0005625203.localdomain sudo[117482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:04:20 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5044 DF PROTO=TCP SPT=48310 DPT=9100 SEQ=790957514 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EDF5C00000000001030307) 
Feb 20 09:04:20 np0005625203.localdomain python3.9[117484]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:04:20 np0005625203.localdomain sudo[117482]: pam_unix(sudo:session): session closed for user root
Feb 20 09:04:20 np0005625203.localdomain sudo[117575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rcaxqteeewucfaewnrkxbjsjngukvtqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578260.4157655-1534-175848437306365/AnsiballZ_command.py
Feb 20 09:04:20 np0005625203.localdomain sudo[117575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:04:20 np0005625203.localdomain python3.9[117577]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:04:20 np0005625203.localdomain sudo[117575]: pam_unix(sudo:session): session closed for user root
Feb 20 09:04:21 np0005625203.localdomain sudo[117668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-trviyyyaxrudgiyqfccglzqjgxsmugsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578261.046114-1534-81700245510646/AnsiballZ_command.py
Feb 20 09:04:21 np0005625203.localdomain sudo[117668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:04:21 np0005625203.localdomain python3.9[117670]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:04:21 np0005625203.localdomain sudo[117668]: pam_unix(sudo:session): session closed for user root
Feb 20 09:04:21 np0005625203.localdomain sshd[117731]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:04:21 np0005625203.localdomain sudo[117762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nsrjtvfvsactgbazsegwizpxyoejehyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578261.6272457-1534-5671955543101/AnsiballZ_command.py
Feb 20 09:04:21 np0005625203.localdomain sudo[117762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:04:22 np0005625203.localdomain python3.9[117764]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:04:22 np0005625203.localdomain sudo[117762]: pam_unix(sudo:session): session closed for user root
Feb 20 09:04:22 np0005625203.localdomain sudo[117856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aeeontrcnhqyagmkcfbpnveydcllbcnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578262.2282038-1534-87634220873525/AnsiballZ_command.py
Feb 20 09:04:22 np0005625203.localdomain sudo[117856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:04:22 np0005625203.localdomain python3.9[117858]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud_recover.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:04:22 np0005625203.localdomain sudo[117856]: pam_unix(sudo:session): session closed for user root
Feb 20 09:04:22 np0005625203.localdomain sshd[117731]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:04:22 np0005625203.localdomain sudo[117949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-njfaqbppsgclwmoylnrljlpbuurnnfyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578262.731943-1534-85185001432804/AnsiballZ_command.py
Feb 20 09:04:22 np0005625203.localdomain sudo[117949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:04:23 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11307 DF PROTO=TCP SPT=37942 DPT=9101 SEQ=3489794858 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EE01410000000001030307) 
Feb 20 09:04:23 np0005625203.localdomain python3.9[117951]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:04:23 np0005625203.localdomain sudo[117949]: pam_unix(sudo:session): session closed for user root
Feb 20 09:04:23 np0005625203.localdomain sudo[118042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-invnhjelphjujfjcvwshjhdfcnwdghwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578263.27875-1534-78458489346293/AnsiballZ_command.py
Feb 20 09:04:23 np0005625203.localdomain sudo[118042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:04:23 np0005625203.localdomain python3.9[118044]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:04:23 np0005625203.localdomain sudo[118042]: pam_unix(sudo:session): session closed for user root
Feb 20 09:04:24 np0005625203.localdomain sudo[118135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eeahwqtfdomsshjqmrnetnfutcmvqxln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578263.8753076-1534-259362529325342/AnsiballZ_command.py
Feb 20 09:04:24 np0005625203.localdomain sudo[118135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:04:24 np0005625203.localdomain python3.9[118137]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ovn_controller.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:04:24 np0005625203.localdomain sudo[118135]: pam_unix(sudo:session): session closed for user root
Feb 20 09:04:24 np0005625203.localdomain sudo[118228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-arbkzicimmtxkamvwhekmqecjbrrolre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578264.4566417-1534-30787827444054/AnsiballZ_command.py
Feb 20 09:04:24 np0005625203.localdomain sudo[118228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:04:24 np0005625203.localdomain python3.9[118230]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ovn_metadata_agent.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:04:24 np0005625203.localdomain sudo[118228]: pam_unix(sudo:session): session closed for user root
Feb 20 09:04:25 np0005625203.localdomain sudo[118321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-docflmtxbgzebecukjgcjfeiqpzzmmrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578265.047292-1534-244081571969026/AnsiballZ_command.py
Feb 20 09:04:25 np0005625203.localdomain sudo[118321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:04:25 np0005625203.localdomain python3.9[118323]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_rsyslog.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:04:25 np0005625203.localdomain sudo[118321]: pam_unix(sudo:session): session closed for user root
Feb 20 09:04:26 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5046 DF PROTO=TCP SPT=48310 DPT=9100 SEQ=790957514 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EE0D800000000001030307) 
Feb 20 09:04:26 np0005625203.localdomain sshd[111948]: pam_unix(sshd:session): session closed for user zuul
Feb 20 09:04:26 np0005625203.localdomain systemd[1]: session-37.scope: Deactivated successfully.
Feb 20 09:04:26 np0005625203.localdomain systemd[1]: session-37.scope: Consumed 30.137s CPU time.
Feb 20 09:04:26 np0005625203.localdomain systemd-logind[759]: Session 37 logged out. Waiting for processes to exit.
Feb 20 09:04:26 np0005625203.localdomain systemd-logind[759]: Removed session 37.
Feb 20 09:04:28 np0005625203.localdomain sshd[118339]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:04:28 np0005625203.localdomain sshd[118339]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:04:29 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20068 DF PROTO=TCP SPT=33054 DPT=9102 SEQ=1953515532 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EE19950000000001030307) 
Feb 20 09:04:32 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5390 DF PROTO=TCP SPT=56292 DPT=9882 SEQ=1508480366 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EE24810000000001030307) 
Feb 20 09:04:36 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5391 DF PROTO=TCP SPT=56292 DPT=9882 SEQ=1508480366 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EE34400000000001030307) 
Feb 20 09:04:38 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12290 DF PROTO=TCP SPT=39854 DPT=9105 SEQ=1760226797 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EE3C000000000001030307) 
Feb 20 09:04:41 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46056 DF PROTO=TCP SPT=35380 DPT=9105 SEQ=3028066831 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EE48800000000001030307) 
Feb 20 09:04:44 np0005625203.localdomain sshd[118341]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:04:44 np0005625203.localdomain sshd[118341]: Accepted publickey for zuul from 192.168.122.30 port 55058 ssh2: RSA SHA256:uaxCfdsB9dLyL4znjsLCPDTT6O+6xZzjHRjczJFOA6k
Feb 20 09:04:44 np0005625203.localdomain systemd-logind[759]: New session 38 of user zuul.
Feb 20 09:04:44 np0005625203.localdomain systemd[1]: Started Session 38 of User zuul.
Feb 20 09:04:44 np0005625203.localdomain sshd[118341]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Feb 20 09:04:44 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12292 DF PROTO=TCP SPT=39854 DPT=9105 SEQ=1760226797 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EE53C00000000001030307) 
Feb 20 09:04:45 np0005625203.localdomain python3.9[118434]: ansible-ansible.legacy.ping Invoked with data=pong
Feb 20 09:04:46 np0005625203.localdomain python3.9[118538]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 20 09:04:47 np0005625203.localdomain sudo[118628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-chxtruoldihfugbrrkpsdjlzcgxbkuav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578286.6119545-89-16212572727685/AnsiballZ_command.py
Feb 20 09:04:47 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55852 DF PROTO=TCP SPT=58834 DPT=9101 SEQ=1473074107 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EE5EC00000000001030307) 
Feb 20 09:04:47 np0005625203.localdomain sudo[118628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:04:47 np0005625203.localdomain python3.9[118630]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:04:47 np0005625203.localdomain sudo[118628]: pam_unix(sudo:session): session closed for user root
Feb 20 09:04:48 np0005625203.localdomain sudo[118721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-edcungloclpeenqltqjhbrgjkkeiytwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578287.6201143-125-227309618041767/AnsiballZ_stat.py
Feb 20 09:04:48 np0005625203.localdomain sudo[118721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:04:48 np0005625203.localdomain python3.9[118723]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:04:48 np0005625203.localdomain sudo[118721]: pam_unix(sudo:session): session closed for user root
Feb 20 09:04:48 np0005625203.localdomain sudo[118813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-guprmysebvtobaryqlcgyexqiyhmqdno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578288.3889692-149-51378182737836/AnsiballZ_file.py
Feb 20 09:04:48 np0005625203.localdomain sudo[118813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:04:49 np0005625203.localdomain python3.9[118815]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:04:49 np0005625203.localdomain sudo[118813]: pam_unix(sudo:session): session closed for user root
Feb 20 09:04:49 np0005625203.localdomain sudo[118905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lrwanfvkxavxguziykvggnbmlqzwlxck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578289.2268314-173-154583469898237/AnsiballZ_stat.py
Feb 20 09:04:49 np0005625203.localdomain sudo[118905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:04:49 np0005625203.localdomain python3.9[118907]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:04:49 np0005625203.localdomain sudo[118905]: pam_unix(sudo:session): session closed for user root
Feb 20 09:04:50 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37006 DF PROTO=TCP SPT=33684 DPT=9101 SEQ=65978452 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EE6A800000000001030307) 
Feb 20 09:04:50 np0005625203.localdomain sudo[118978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ukqzpslglgeiuywoemkhrqvkgqysbexx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578289.2268314-173-154583469898237/AnsiballZ_copy.py
Feb 20 09:04:50 np0005625203.localdomain sudo[118978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:04:50 np0005625203.localdomain python3.9[118980]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1771578289.2268314-173-154583469898237/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:04:50 np0005625203.localdomain sudo[118978]: pam_unix(sudo:session): session closed for user root
Feb 20 09:04:50 np0005625203.localdomain sudo[119070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cvdtcvhfkelbdfeerdwbnenzxuxykjin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578290.6574805-218-156404774270726/AnsiballZ_setup.py
Feb 20 09:04:50 np0005625203.localdomain sudo[119070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:04:51 np0005625203.localdomain python3.9[119072]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 20 09:04:51 np0005625203.localdomain sudo[119070]: pam_unix(sudo:session): session closed for user root
Feb 20 09:04:51 np0005625203.localdomain sudo[119166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lozhzlxswzzcoihcutqcdpksonlaarbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578291.7374272-242-32648350317211/AnsiballZ_file.py
Feb 20 09:04:51 np0005625203.localdomain sudo[119166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:04:52 np0005625203.localdomain python3.9[119168]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:04:52 np0005625203.localdomain sudo[119166]: pam_unix(sudo:session): session closed for user root
Feb 20 09:04:52 np0005625203.localdomain sudo[119258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qkyfwntlwbargmrckulglekpmlbuefkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578292.487144-269-276926061808791/AnsiballZ_file.py
Feb 20 09:04:52 np0005625203.localdomain sudo[119258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:04:52 np0005625203.localdomain python3.9[119260]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:04:52 np0005625203.localdomain sudo[119258]: pam_unix(sudo:session): session closed for user root
Feb 20 09:04:53 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44761 DF PROTO=TCP SPT=48982 DPT=9100 SEQ=377659183 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EE76800000000001030307) 
Feb 20 09:04:53 np0005625203.localdomain python3.9[119350]: ansible-ansible.builtin.service_facts Invoked
Feb 20 09:04:53 np0005625203.localdomain network[119367]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 20 09:04:53 np0005625203.localdomain network[119368]: 'network-scripts' will be removed from distribution in near future.
Feb 20 09:04:53 np0005625203.localdomain network[119369]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 20 09:04:54 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:04:56 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51255 DF PROTO=TCP SPT=48432 DPT=9100 SEQ=2707387698 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EE82C00000000001030307) 
Feb 20 09:04:57 np0005625203.localdomain python3.9[119567]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:04:58 np0005625203.localdomain python3.9[119657]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 20 09:04:59 np0005625203.localdomain sudo[119751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nkqnyeovlhsrptpeazclvkbigmoofxmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578299.031186-371-7245791077915/AnsiballZ_command.py
Feb 20 09:04:59 np0005625203.localdomain sudo[119751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:04:59 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9029 DF PROTO=TCP SPT=50356 DPT=9102 SEQ=2273072654 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EE8EC50000000001030307) 
Feb 20 09:04:59 np0005625203.localdomain python3.9[119753]: ansible-ansible.legacy.command Invoked with _raw_params=# This is a hack to deploy RDO Delorean repos to RHEL as if it were Centos 9 Stream
                                                            set -euxo pipefail
                                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                                            python3 -m venv ./venv
                                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./repo-setup-main
                                                            # This is required for FIPS enabled until trunk.rdoproject.org
                                                            # is not being served from a centos7 host, tracked by
                                                            # https://issues.redhat.com/browse/RHOSZUUL-1517
                                                            dnf -y install crypto-policies
                                                            update-crypto-policies --set FIPS:NO-ENFORCE-EMS
                                                            ./venv/bin/repo-setup current-podified -b antelope -d centos9 --stream
                                                            
                                                            # Exclude ceph-common-18.2.7 as it's pulling newer openssl not compatible
                                                            # with rhel 9.2 openssh
                                                            dnf config-manager --setopt centos9-storage.exclude="ceph-common-18.2.7" --save
                                                            dnf -y upgrade openstack-selinux
                                                            rm -f /run/virtlogd.pid
                                                            
                                                            rm -rf repo-setup-main
                                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:05:00 np0005625203.localdomain sudo[119764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:05:00 np0005625203.localdomain sudo[119764]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:05:00 np0005625203.localdomain sudo[119764]: pam_unix(sudo:session): session closed for user root
Feb 20 09:05:00 np0005625203.localdomain sudo[119779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 09:05:00 np0005625203.localdomain sudo[119779]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:05:01 np0005625203.localdomain sudo[119779]: pam_unix(sudo:session): session closed for user root
Feb 20 09:05:01 np0005625203.localdomain sudo[119828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:05:01 np0005625203.localdomain sudo[119828]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:05:01 np0005625203.localdomain sudo[119828]: pam_unix(sudo:session): session closed for user root
Feb 20 09:05:02 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19417 DF PROTO=TCP SPT=34550 DPT=9882 SEQ=2532824881 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EE99800000000001030307) 
Feb 20 09:05:06 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19418 DF PROTO=TCP SPT=34550 DPT=9882 SEQ=2532824881 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EEA9400000000001030307) 
Feb 20 09:05:08 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3742 DF PROTO=TCP SPT=56368 DPT=9105 SEQ=2950564560 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EEB1000000000001030307) 
Feb 20 09:05:08 np0005625203.localdomain sshd[119861]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:05:09 np0005625203.localdomain sshd[119861]: Received disconnect from 194.107.115.2 port 29884:11: Bye Bye [preauth]
Feb 20 09:05:09 np0005625203.localdomain sshd[119861]: Disconnected from authenticating user root 194.107.115.2 port 29884 [preauth]
Feb 20 09:05:09 np0005625203.localdomain sshd[45952]: Received signal 15; terminating.
Feb 20 09:05:09 np0005625203.localdomain systemd[1]: Stopping OpenSSH server daemon...
Feb 20 09:05:09 np0005625203.localdomain systemd[1]: sshd.service: Deactivated successfully.
Feb 20 09:05:09 np0005625203.localdomain systemd[1]: Stopped OpenSSH server daemon.
Feb 20 09:05:09 np0005625203.localdomain systemd[1]: sshd.service: Consumed 24.997s CPU time.
Feb 20 09:05:09 np0005625203.localdomain systemd[1]: Stopped target sshd-keygen.target.
Feb 20 09:05:09 np0005625203.localdomain systemd[1]: Stopping sshd-keygen.target...
Feb 20 09:05:09 np0005625203.localdomain systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 20 09:05:09 np0005625203.localdomain systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 20 09:05:09 np0005625203.localdomain systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 20 09:05:09 np0005625203.localdomain systemd[1]: Reached target sshd-keygen.target.
Feb 20 09:05:09 np0005625203.localdomain systemd[1]: Starting OpenSSH server daemon...
Feb 20 09:05:09 np0005625203.localdomain sshd[119875]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:05:09 np0005625203.localdomain sshd[119875]: Server listening on 0.0.0.0 port 22.
Feb 20 09:05:09 np0005625203.localdomain sshd[119875]: Server listening on :: port 22.
Feb 20 09:05:09 np0005625203.localdomain systemd[1]: Started OpenSSH server daemon.
Feb 20 09:05:09 np0005625203.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 20 09:05:09 np0005625203.localdomain systemd[1]: Starting man-db-cache-update.service...
Feb 20 09:05:09 np0005625203.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 20 09:05:10 np0005625203.localdomain systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 20 09:05:10 np0005625203.localdomain systemd[1]: Finished man-db-cache-update.service.
Feb 20 09:05:10 np0005625203.localdomain systemd[1]: run-rce3344043fc343549a1952ee4456c232.service: Deactivated successfully.
Feb 20 09:05:10 np0005625203.localdomain systemd[1]: run-r02743837a1354b27a965ef705f5293a3.service: Deactivated successfully.
Feb 20 09:05:10 np0005625203.localdomain systemd[1]: Stopping OpenSSH server daemon...
Feb 20 09:05:10 np0005625203.localdomain sshd[119875]: Received signal 15; terminating.
Feb 20 09:05:10 np0005625203.localdomain systemd[1]: sshd.service: Deactivated successfully.
Feb 20 09:05:10 np0005625203.localdomain systemd[1]: Stopped OpenSSH server daemon.
Feb 20 09:05:10 np0005625203.localdomain systemd[1]: Stopped target sshd-keygen.target.
Feb 20 09:05:10 np0005625203.localdomain systemd[1]: Stopping sshd-keygen.target...
Feb 20 09:05:10 np0005625203.localdomain systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 20 09:05:10 np0005625203.localdomain systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 20 09:05:10 np0005625203.localdomain systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 20 09:05:10 np0005625203.localdomain systemd[1]: Reached target sshd-keygen.target.
Feb 20 09:05:10 np0005625203.localdomain systemd[1]: Starting OpenSSH server daemon...
Feb 20 09:05:10 np0005625203.localdomain sshd[120046]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:05:10 np0005625203.localdomain sshd[120046]: Server listening on 0.0.0.0 port 22.
Feb 20 09:05:10 np0005625203.localdomain sshd[120046]: Server listening on :: port 22.
Feb 20 09:05:10 np0005625203.localdomain systemd[1]: Started OpenSSH server daemon.
Feb 20 09:05:11 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5201 DF PROTO=TCP SPT=48660 DPT=9105 SEQ=2722283333 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EEBC800000000001030307) 
Feb 20 09:05:14 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3744 DF PROTO=TCP SPT=56368 DPT=9105 SEQ=2950564560 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EEC8C00000000001030307) 
Feb 20 09:05:14 np0005625203.localdomain sshd[120052]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:05:15 np0005625203.localdomain sshd[120052]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:05:17 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37282 DF PROTO=TCP SPT=57190 DPT=9101 SEQ=3856103669 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EED4000000000001030307) 
Feb 20 09:05:20 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30285 DF PROTO=TCP SPT=55956 DPT=9100 SEQ=1497053121 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EEE0400000000001030307) 
Feb 20 09:05:23 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37284 DF PROTO=TCP SPT=57190 DPT=9101 SEQ=3856103669 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EEEBC00000000001030307) 
Feb 20 09:05:26 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30287 DF PROTO=TCP SPT=55956 DPT=9100 SEQ=1497053121 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EEF8000000000001030307) 
Feb 20 09:05:29 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59312 DF PROTO=TCP SPT=60240 DPT=9102 SEQ=1073300938 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EF03F50000000001030307) 
Feb 20 09:05:32 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47299 DF PROTO=TCP SPT=36928 DPT=9882 SEQ=3741819254 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EF0EC00000000001030307) 
Feb 20 09:05:36 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47300 DF PROTO=TCP SPT=36928 DPT=9882 SEQ=3741819254 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EF1E810000000001030307) 
Feb 20 09:05:38 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16531 DF PROTO=TCP SPT=57946 DPT=9105 SEQ=1216846447 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EF26400000000001030307) 
Feb 20 09:05:41 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12295 DF PROTO=TCP SPT=39854 DPT=9105 SEQ=1760226797 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EF32800000000001030307) 
Feb 20 09:05:44 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16533 DF PROTO=TCP SPT=57946 DPT=9105 SEQ=1216846447 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EF3E010000000001030307) 
Feb 20 09:05:47 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8869 DF PROTO=TCP SPT=55524 DPT=9101 SEQ=1329814158 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EF49400000000001030307) 
Feb 20 09:05:48 np0005625203.localdomain sshd[120187]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:05:49 np0005625203.localdomain sshd[120187]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:05:50 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9668 DF PROTO=TCP SPT=46878 DPT=9100 SEQ=795184740 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EF55400000000001030307) 
Feb 20 09:05:52 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51258 DF PROTO=TCP SPT=48432 DPT=9100 SEQ=2707387698 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EF60800000000001030307) 
Feb 20 09:05:56 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9670 DF PROTO=TCP SPT=46878 DPT=9100 SEQ=795184740 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EF6D000000000001030307) 
Feb 20 09:05:59 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54641 DF PROTO=TCP SPT=50850 DPT=9102 SEQ=959923193 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EF79250000000001030307) 
Feb 20 09:05:59 np0005625203.localdomain sshd[120443]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:05:59 np0005625203.localdomain sshd[120443]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:06:02 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7683 DF PROTO=TCP SPT=50180 DPT=9882 SEQ=767918684 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EF84000000000001030307) 
Feb 20 09:06:02 np0005625203.localdomain sudo[120462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:06:02 np0005625203.localdomain sudo[120462]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:06:02 np0005625203.localdomain sudo[120462]: pam_unix(sudo:session): session closed for user root
Feb 20 09:06:02 np0005625203.localdomain sudo[120478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 09:06:02 np0005625203.localdomain sudo[120478]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:06:02 np0005625203.localdomain sudo[120478]: pam_unix(sudo:session): session closed for user root
Feb 20 09:06:03 np0005625203.localdomain sudo[120530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:06:03 np0005625203.localdomain sudo[120530]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:06:03 np0005625203.localdomain sudo[120530]: pam_unix(sudo:session): session closed for user root
Feb 20 09:06:06 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7684 DF PROTO=TCP SPT=50180 DPT=9882 SEQ=767918684 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EF93C00000000001030307) 
Feb 20 09:06:08 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56532 DF PROTO=TCP SPT=46954 DPT=9105 SEQ=517650576 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EF9B800000000001030307) 
Feb 20 09:06:14 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56534 DF PROTO=TCP SPT=46954 DPT=9105 SEQ=517650576 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EFB3400000000001030307) 
Feb 20 09:06:14 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54645 DF PROTO=TCP SPT=50850 DPT=9102 SEQ=959923193 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EFB4810000000001030307) 
Feb 20 09:06:16 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28436 DF PROTO=TCP SPT=60920 DPT=9101 SEQ=3166691404 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EFBE400000000001030307) 
Feb 20 09:06:20 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7902 DF PROTO=TCP SPT=36144 DPT=9100 SEQ=1887955201 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EFCA800000000001030307) 
Feb 20 09:06:21 np0005625203.localdomain kernel: SELinux:  Converting 2741 SID table entries...
Feb 20 09:06:21 np0005625203.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Feb 20 09:06:21 np0005625203.localdomain kernel: SELinux:  policy capability open_perms=1
Feb 20 09:06:21 np0005625203.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Feb 20 09:06:21 np0005625203.localdomain kernel: SELinux:  policy capability always_check_network=0
Feb 20 09:06:21 np0005625203.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 20 09:06:21 np0005625203.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 20 09:06:21 np0005625203.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 20 09:06:22 np0005625203.localdomain sudo[119751]: pam_unix(sudo:session): session closed for user root
Feb 20 09:06:23 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28438 DF PROTO=TCP SPT=60920 DPT=9101 SEQ=3166691404 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EFD6000000000001030307) 
Feb 20 09:06:23 np0005625203.localdomain sudo[120774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bwaieezwmvhfdbkrfwwfpqejgvtfzeav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578383.2728-398-135348031836258/AnsiballZ_file.py
Feb 20 09:06:23 np0005625203.localdomain dbus-broker-launch[754]: avc:  op=load_policy lsm=selinux seqno=17 res=1
Feb 20 09:06:23 np0005625203.localdomain sudo[120774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:06:23 np0005625203.localdomain python3.9[120776]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:06:23 np0005625203.localdomain sudo[120774]: pam_unix(sudo:session): session closed for user root
Feb 20 09:06:24 np0005625203.localdomain sudo[120866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ptwtftvskigcyxwnbavlxzallplqeaum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578383.9591498-422-211620574886003/AnsiballZ_stat.py
Feb 20 09:06:24 np0005625203.localdomain sudo[120866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:06:24 np0005625203.localdomain python3.9[120868]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/edpm.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:06:24 np0005625203.localdomain sudo[120866]: pam_unix(sudo:session): session closed for user root
Feb 20 09:06:24 np0005625203.localdomain sudo[120939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ryujtmanjyhsdkpnakpmdygywkqolbxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578383.9591498-422-211620574886003/AnsiballZ_copy.py
Feb 20 09:06:24 np0005625203.localdomain sudo[120939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:06:24 np0005625203.localdomain python3.9[120941]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/edpm.fact mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771578383.9591498-422-211620574886003/.source.fact _original_basename=.dxnr4m1r follow=False checksum=d686dccd4d8cd0883f3e3bc0a6f664c73290ba68 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:06:24 np0005625203.localdomain sudo[120939]: pam_unix(sudo:session): session closed for user root
Feb 20 09:06:25 np0005625203.localdomain python3.9[121031]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 20 09:06:26 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7904 DF PROTO=TCP SPT=36144 DPT=9100 SEQ=1887955201 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EFE2410000000001030307) 
Feb 20 09:06:26 np0005625203.localdomain sudo[121127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-puexhgofeakpcmajywppjthcfjcrsolc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578386.3440204-497-58056345396210/AnsiballZ_setup.py
Feb 20 09:06:26 np0005625203.localdomain sudo[121127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:06:26 np0005625203.localdomain python3.9[121129]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 20 09:06:27 np0005625203.localdomain sudo[121127]: pam_unix(sudo:session): session closed for user root
Feb 20 09:06:27 np0005625203.localdomain sudo[121181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-menauhskncjdscpextoeikmoaggbfqcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578386.3440204-497-58056345396210/AnsiballZ_dnf.py
Feb 20 09:06:27 np0005625203.localdomain sudo[121181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:06:27 np0005625203.localdomain python3.9[121183]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 20 09:06:29 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10436 DF PROTO=TCP SPT=42522 DPT=9102 SEQ=3268616485 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EFEE550000000001030307) 
Feb 20 09:06:31 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:06:31 np0005625203.localdomain systemd-sysv-generator[121224]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:06:31 np0005625203.localdomain systemd-rc-local-generator[121220]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:06:31 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:06:31 np0005625203.localdomain systemd[1]: Queuing reload/restart jobs for marked units…
Feb 20 09:06:32 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63407 DF PROTO=TCP SPT=51278 DPT=9882 SEQ=2068263019 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9EFF9400000000001030307) 
Feb 20 09:06:32 np0005625203.localdomain sudo[121181]: pam_unix(sudo:session): session closed for user root
Feb 20 09:06:33 np0005625203.localdomain sudo[121321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gtcfbxxraqlnzkteubusmfwctfiratol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578392.8223486-533-231365519895941/AnsiballZ_command.py
Feb 20 09:06:33 np0005625203.localdomain sudo[121321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:06:33 np0005625203.localdomain python3.9[121323]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:06:34 np0005625203.localdomain sudo[121321]: pam_unix(sudo:session): session closed for user root
Feb 20 09:06:34 np0005625203.localdomain sudo[121560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nfqitaoigkiqjrzyxfcxygguwhmuxhqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578394.2817478-557-223615676033636/AnsiballZ_selinux.py
Feb 20 09:06:34 np0005625203.localdomain sudo[121560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:06:35 np0005625203.localdomain python3.9[121562]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Feb 20 09:06:35 np0005625203.localdomain sudo[121560]: pam_unix(sudo:session): session closed for user root
Feb 20 09:06:35 np0005625203.localdomain sudo[121652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zjcnlhygtebhlbhczrqqvbhfcpjxqqyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578395.6229928-590-121609868344984/AnsiballZ_command.py
Feb 20 09:06:35 np0005625203.localdomain sudo[121652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:06:36 np0005625203.localdomain python3.9[121654]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Feb 20 09:06:36 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63408 DF PROTO=TCP SPT=51278 DPT=9882 SEQ=2068263019 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F009010000000001030307) 
Feb 20 09:06:36 np0005625203.localdomain sudo[121652]: pam_unix(sudo:session): session closed for user root
Feb 20 09:06:36 np0005625203.localdomain sudo[121745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-phkhallepgbuvbivcevbdsroizlryvck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578396.7536051-614-205025910783417/AnsiballZ_file.py
Feb 20 09:06:36 np0005625203.localdomain sudo[121745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:06:37 np0005625203.localdomain python3.9[121747]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:06:37 np0005625203.localdomain sudo[121745]: pam_unix(sudo:session): session closed for user root
Feb 20 09:06:37 np0005625203.localdomain sudo[121837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eulushalxplovkfzwyueuqoqieiptzft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578397.3806388-638-212829952089512/AnsiballZ_mount.py
Feb 20 09:06:37 np0005625203.localdomain sudo[121837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:06:38 np0005625203.localdomain python3.9[121839]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Feb 20 09:06:38 np0005625203.localdomain sudo[121837]: pam_unix(sudo:session): session closed for user root
Feb 20 09:06:38 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43999 DF PROTO=TCP SPT=44886 DPT=9105 SEQ=451158133 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F010C00000000001030307) 
Feb 20 09:06:39 np0005625203.localdomain sudo[121929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rfcngdlmeurezfvxmukkgzishvbvuwny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578399.0684915-722-199020247445662/AnsiballZ_file.py
Feb 20 09:06:39 np0005625203.localdomain sudo[121929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:06:39 np0005625203.localdomain python3.9[121931]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:06:39 np0005625203.localdomain sudo[121929]: pam_unix(sudo:session): session closed for user root
Feb 20 09:06:40 np0005625203.localdomain sudo[122021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-klkgbxjopuehpbpwbotrstlxacslojdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578399.7556534-746-177106871983780/AnsiballZ_stat.py
Feb 20 09:06:40 np0005625203.localdomain sudo[122021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:06:40 np0005625203.localdomain python3.9[122023]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:06:40 np0005625203.localdomain sudo[122021]: pam_unix(sudo:session): session closed for user root
Feb 20 09:06:40 np0005625203.localdomain sudo[122094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-odmajjikgnikqanxxkbnoctrnrdenvzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578399.7556534-746-177106871983780/AnsiballZ_copy.py
Feb 20 09:06:40 np0005625203.localdomain sudo[122094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:06:40 np0005625203.localdomain python3.9[122096]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771578399.7556534-746-177106871983780/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=cb143134597e8d09980d1dcc1949f9a4232e36a1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:06:40 np0005625203.localdomain sudo[122094]: pam_unix(sudo:session): session closed for user root
Feb 20 09:06:41 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16536 DF PROTO=TCP SPT=57946 DPT=9105 SEQ=1216846447 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F01C810000000001030307) 
Feb 20 09:06:41 np0005625203.localdomain sudo[122186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xlqrrrwrhionoowpongqjdrglvqqttby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578401.480888-818-93614750462024/AnsiballZ_stat.py
Feb 20 09:06:41 np0005625203.localdomain sudo[122186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:06:41 np0005625203.localdomain python3.9[122188]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:06:41 np0005625203.localdomain sudo[122186]: pam_unix(sudo:session): session closed for user root
Feb 20 09:06:42 np0005625203.localdomain sudo[122280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-plhfwprojpyjnwljndapdyuqfpfnjazv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578402.5440238-857-49598221945672/AnsiballZ_getent.py
Feb 20 09:06:42 np0005625203.localdomain sudo[122280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:06:43 np0005625203.localdomain python3.9[122282]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Feb 20 09:06:43 np0005625203.localdomain sudo[122280]: pam_unix(sudo:session): session closed for user root
Feb 20 09:06:43 np0005625203.localdomain sudo[122373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-djkpopahdtorbppgltxzehjdsbyfbczp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578403.629116-887-200441036237380/AnsiballZ_getent.py
Feb 20 09:06:43 np0005625203.localdomain sudo[122373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:06:44 np0005625203.localdomain python3.9[122375]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Feb 20 09:06:44 np0005625203.localdomain sudo[122373]: pam_unix(sudo:session): session closed for user root
Feb 20 09:06:44 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63409 DF PROTO=TCP SPT=51278 DPT=9882 SEQ=2068263019 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F028800000000001030307) 
Feb 20 09:06:44 np0005625203.localdomain sshd[122391]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:06:44 np0005625203.localdomain sshd[122391]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:06:44 np0005625203.localdomain sudo[122468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vjjpzeuifodbvuihapxagxfbwskmprzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578404.5147274-911-214664410179301/AnsiballZ_group.py
Feb 20 09:06:44 np0005625203.localdomain sudo[122468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:06:45 np0005625203.localdomain python3.9[122470]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb 20 09:06:45 np0005625203.localdomain groupmod[122471]: group changed in /etc/group (group hugetlbfs/985, new gid: 42477)
Feb 20 09:06:45 np0005625203.localdomain groupmod[122471]: group changed in /etc/passwd (group hugetlbfs/985, new gid: 42477)
Feb 20 09:06:45 np0005625203.localdomain sudo[122468]: pam_unix(sudo:session): session closed for user root
Feb 20 09:06:45 np0005625203.localdomain sudo[122566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qwupbjlsardmdkcjhoqpszqonysztmqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578405.4492583-938-71068848918309/AnsiballZ_file.py
Feb 20 09:06:45 np0005625203.localdomain sudo[122566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:06:45 np0005625203.localdomain python3.9[122568]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Feb 20 09:06:45 np0005625203.localdomain sudo[122566]: pam_unix(sudo:session): session closed for user root
Feb 20 09:06:46 np0005625203.localdomain sudo[122658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hbxcettcwzvexxuiqhlmshiyusltnlsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578406.3281803-971-278026960400774/AnsiballZ_dnf.py
Feb 20 09:06:46 np0005625203.localdomain sudo[122658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:06:46 np0005625203.localdomain python3.9[122660]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 20 09:06:47 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40382 DF PROTO=TCP SPT=33134 DPT=9101 SEQ=1064166088 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F033810000000001030307) 
Feb 20 09:06:49 np0005625203.localdomain sudo[122658]: pam_unix(sudo:session): session closed for user root
Feb 20 09:06:50 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41680 DF PROTO=TCP SPT=37952 DPT=9100 SEQ=1363203903 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F03FC00000000001030307) 
Feb 20 09:06:50 np0005625203.localdomain sudo[122752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ymnyjexxbolnquxkaegjisfabimvahzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578410.5389464-995-196335327972279/AnsiballZ_file.py
Feb 20 09:06:50 np0005625203.localdomain sudo[122752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:06:50 np0005625203.localdomain python3.9[122754]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:06:50 np0005625203.localdomain sudo[122752]: pam_unix(sudo:session): session closed for user root
Feb 20 09:06:51 np0005625203.localdomain sudo[122844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dogutjgjdwxywesxrqqzcygtqicrqevb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578411.7291317-1019-167862889225587/AnsiballZ_stat.py
Feb 20 09:06:51 np0005625203.localdomain sudo[122844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:06:52 np0005625203.localdomain python3.9[122846]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:06:52 np0005625203.localdomain sudo[122844]: pam_unix(sudo:session): session closed for user root
Feb 20 09:06:52 np0005625203.localdomain sudo[122917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lfyjutryihhcmqhfkqwlnjztynextpyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578411.7291317-1019-167862889225587/AnsiballZ_copy.py
Feb 20 09:06:52 np0005625203.localdomain sudo[122917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:06:52 np0005625203.localdomain python3.9[122919]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771578411.7291317-1019-167862889225587/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:06:52 np0005625203.localdomain sudo[122917]: pam_unix(sudo:session): session closed for user root
Feb 20 09:06:53 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40384 DF PROTO=TCP SPT=33134 DPT=9101 SEQ=1064166088 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F04B400000000001030307) 
Feb 20 09:06:53 np0005625203.localdomain sudo[123009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hqdfvbeakbfinshlgttgjijvnfkgkclj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578412.929522-1064-94968547728771/AnsiballZ_systemd.py
Feb 20 09:06:53 np0005625203.localdomain sudo[123009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:06:53 np0005625203.localdomain python3.9[123011]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 20 09:06:53 np0005625203.localdomain systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 20 09:06:53 np0005625203.localdomain systemd[1]: Stopped Load Kernel Modules.
Feb 20 09:06:53 np0005625203.localdomain systemd[1]: Stopping Load Kernel Modules...
Feb 20 09:06:53 np0005625203.localdomain systemd[1]: Starting Load Kernel Modules...
Feb 20 09:06:53 np0005625203.localdomain systemd-modules-load[123015]: Module 'msr' is built in
Feb 20 09:06:53 np0005625203.localdomain systemd[1]: Finished Load Kernel Modules.
Feb 20 09:06:53 np0005625203.localdomain sudo[123009]: pam_unix(sudo:session): session closed for user root
Feb 20 09:06:54 np0005625203.localdomain sudo[123107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yhkphnmkanqdrpkropebakstcngmadxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578413.99497-1088-40020664255239/AnsiballZ_stat.py
Feb 20 09:06:54 np0005625203.localdomain sudo[123107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:06:54 np0005625203.localdomain python3.9[123109]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:06:54 np0005625203.localdomain sudo[123107]: pam_unix(sudo:session): session closed for user root
Feb 20 09:06:54 np0005625203.localdomain sudo[123180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mpskozhgcpcnjabkeoritawvfzxkmbge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578413.99497-1088-40020664255239/AnsiballZ_copy.py
Feb 20 09:06:54 np0005625203.localdomain sudo[123180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:06:54 np0005625203.localdomain python3.9[123182]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771578413.99497-1088-40020664255239/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:06:54 np0005625203.localdomain sudo[123180]: pam_unix(sudo:session): session closed for user root
Feb 20 09:06:55 np0005625203.localdomain sudo[123272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kgxmuzqlzavpqymffzliqrwtwvnkejgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578415.5800447-1142-44678723579289/AnsiballZ_dnf.py
Feb 20 09:06:55 np0005625203.localdomain sudo[123272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:06:56 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41682 DF PROTO=TCP SPT=37952 DPT=9100 SEQ=1363203903 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F057800000000001030307) 
Feb 20 09:06:58 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22422 DF PROTO=TCP SPT=36164 DPT=9882 SEQ=1978735383 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F0624D0000000001030307) 
Feb 20 09:06:59 np0005625203.localdomain python3.9[123274]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 20 09:07:02 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22424 DF PROTO=TCP SPT=36164 DPT=9882 SEQ=1978735383 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F06E410000000001030307) 
Feb 20 09:07:03 np0005625203.localdomain sudo[123272]: pam_unix(sudo:session): session closed for user root
Feb 20 09:07:03 np0005625203.localdomain sudo[123325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:07:03 np0005625203.localdomain sudo[123325]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:07:03 np0005625203.localdomain sudo[123325]: pam_unix(sudo:session): session closed for user root
Feb 20 09:07:03 np0005625203.localdomain sudo[123364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 09:07:03 np0005625203.localdomain sudo[123364]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:07:04 np0005625203.localdomain python3.9[123398]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:07:04 np0005625203.localdomain sudo[123364]: pam_unix(sudo:session): session closed for user root
Feb 20 09:07:05 np0005625203.localdomain python3.9[123522]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Feb 20 09:07:05 np0005625203.localdomain sudo[123523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:07:05 np0005625203.localdomain sudo[123523]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:07:05 np0005625203.localdomain sudo[123523]: pam_unix(sudo:session): session closed for user root
Feb 20 09:07:05 np0005625203.localdomain python3.9[123627]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:07:06 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22425 DF PROTO=TCP SPT=36164 DPT=9882 SEQ=1978735383 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F07E000000000001030307) 
Feb 20 09:07:06 np0005625203.localdomain sudo[123717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gpracrtrsauaibpggiiqolixhuxltfpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578426.148171-1265-256943280274404/AnsiballZ_systemd.py
Feb 20 09:07:06 np0005625203.localdomain sudo[123717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:07:06 np0005625203.localdomain python3.9[123719]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:07:06 np0005625203.localdomain systemd[1]: Stopping Dynamic System Tuning Daemon...
Feb 20 09:07:06 np0005625203.localdomain systemd[1]: tuned.service: Deactivated successfully.
Feb 20 09:07:06 np0005625203.localdomain systemd[1]: Stopped Dynamic System Tuning Daemon.
Feb 20 09:07:06 np0005625203.localdomain systemd[1]: tuned.service: Consumed 1.849s CPU time, no IO.
Feb 20 09:07:06 np0005625203.localdomain systemd[1]: Starting Dynamic System Tuning Daemon...
Feb 20 09:07:07 np0005625203.localdomain systemd[1]: Started Dynamic System Tuning Daemon.
Feb 20 09:07:07 np0005625203.localdomain sudo[123717]: pam_unix(sudo:session): session closed for user root
Feb 20 09:07:08 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20385 DF PROTO=TCP SPT=48260 DPT=9105 SEQ=434317176 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F085C00000000001030307) 
Feb 20 09:07:09 np0005625203.localdomain python3.9[123821]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Feb 20 09:07:11 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56537 DF PROTO=TCP SPT=46954 DPT=9105 SEQ=517650576 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F092810000000001030307) 
Feb 20 09:07:12 np0005625203.localdomain sudo[123911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dxrejeimdobnufykejbhcmfxymanzecz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578432.6939409-1436-148757485413076/AnsiballZ_systemd.py
Feb 20 09:07:12 np0005625203.localdomain sudo[123911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:07:13 np0005625203.localdomain python3.9[123913]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:07:13 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:07:13 np0005625203.localdomain systemd-rc-local-generator[123938]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:07:13 np0005625203.localdomain systemd-sysv-generator[123942]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:07:13 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:07:13 np0005625203.localdomain sudo[123911]: pam_unix(sudo:session): session closed for user root
Feb 20 09:07:13 np0005625203.localdomain sshd[124009]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:07:14 np0005625203.localdomain sudo[124042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jgxaplptqotcswmknylqwvagwulwfyir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578433.7458284-1436-169932360477675/AnsiballZ_systemd.py
Feb 20 09:07:14 np0005625203.localdomain sudo[124042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:07:14 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20387 DF PROTO=TCP SPT=48260 DPT=9105 SEQ=434317176 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F09D810000000001030307) 
Feb 20 09:07:14 np0005625203.localdomain sshd[124009]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:07:14 np0005625203.localdomain python3.9[124044]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:07:14 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:07:14 np0005625203.localdomain systemd-sysv-generator[124078]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:07:14 np0005625203.localdomain systemd-rc-local-generator[124072]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:07:14 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:07:14 np0005625203.localdomain sudo[124042]: pam_unix(sudo:session): session closed for user root
Feb 20 09:07:16 np0005625203.localdomain sudo[124173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vbvgqtanoenncewsrphpaqvilqqsxvfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578436.0719876-1484-7465698772827/AnsiballZ_command.py
Feb 20 09:07:16 np0005625203.localdomain sudo[124173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:07:16 np0005625203.localdomain python3.9[124175]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:07:16 np0005625203.localdomain sudo[124173]: pam_unix(sudo:session): session closed for user root
Feb 20 09:07:17 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1343 DF PROTO=TCP SPT=50806 DPT=9101 SEQ=2407975767 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F0A8C10000000001030307) 
Feb 20 09:07:17 np0005625203.localdomain sudo[124266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-twgdbjpkefkttpcogaqzezuuvsuniydz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578436.8477938-1508-10487422924300/AnsiballZ_command.py
Feb 20 09:07:17 np0005625203.localdomain sudo[124266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:07:17 np0005625203.localdomain python3.9[124268]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:07:17 np0005625203.localdomain kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k FS
Feb 20 09:07:17 np0005625203.localdomain sudo[124266]: pam_unix(sudo:session): session closed for user root
Feb 20 09:07:17 np0005625203.localdomain sudo[124359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vdzxwotmhzejavdguufyhrcbqjwnkkjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578437.56007-1532-262639602170998/AnsiballZ_command.py
Feb 20 09:07:17 np0005625203.localdomain sudo[124359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:07:17 np0005625203.localdomain python3.9[124361]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:07:19 np0005625203.localdomain sudo[124359]: pam_unix(sudo:session): session closed for user root
Feb 20 09:07:19 np0005625203.localdomain sudo[124458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vblsfciasykkemslgiqpzdcgxdlbwiyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578439.3064427-1556-75129343618257/AnsiballZ_command.py
Feb 20 09:07:19 np0005625203.localdomain sudo[124458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:07:19 np0005625203.localdomain python3.9[124460]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:07:19 np0005625203.localdomain sudo[124458]: pam_unix(sudo:session): session closed for user root
Feb 20 09:07:20 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28441 DF PROTO=TCP SPT=60920 DPT=9101 SEQ=3166691404 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F0B4800000000001030307) 
Feb 20 09:07:20 np0005625203.localdomain sudo[124551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yxaiyyhtdgkutdrxjskuoxtytruaesha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578439.9746547-1581-163158093119958/AnsiballZ_systemd.py
Feb 20 09:07:20 np0005625203.localdomain sudo[124551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:07:20 np0005625203.localdomain python3.9[124553]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 20 09:07:20 np0005625203.localdomain systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 20 09:07:20 np0005625203.localdomain systemd[1]: Stopped Apply Kernel Variables.
Feb 20 09:07:20 np0005625203.localdomain systemd[1]: Stopping Apply Kernel Variables...
Feb 20 09:07:20 np0005625203.localdomain systemd[1]: Starting Apply Kernel Variables...
Feb 20 09:07:20 np0005625203.localdomain systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 20 09:07:20 np0005625203.localdomain systemd[1]: Finished Apply Kernel Variables.
Feb 20 09:07:20 np0005625203.localdomain sudo[124551]: pam_unix(sudo:session): session closed for user root
Feb 20 09:07:21 np0005625203.localdomain sshd[118341]: pam_unix(sshd:session): session closed for user zuul
Feb 20 09:07:21 np0005625203.localdomain systemd[1]: session-38.scope: Deactivated successfully.
Feb 20 09:07:21 np0005625203.localdomain systemd[1]: session-38.scope: Consumed 1min 57.189s CPU time.
Feb 20 09:07:21 np0005625203.localdomain systemd-logind[759]: Session 38 logged out. Waiting for processes to exit.
Feb 20 09:07:21 np0005625203.localdomain systemd-logind[759]: Removed session 38.
Feb 20 09:07:23 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7907 DF PROTO=TCP SPT=36144 DPT=9100 SEQ=1887955201 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F0C0800000000001030307) 
Feb 20 09:07:26 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19919 DF PROTO=TCP SPT=58832 DPT=9100 SEQ=351581830 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F0CCC00000000001030307) 
Feb 20 09:07:26 np0005625203.localdomain sshd[124573]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:07:26 np0005625203.localdomain sshd[124573]: Accepted publickey for zuul from 192.168.122.30 port 38722 ssh2: RSA SHA256:uaxCfdsB9dLyL4znjsLCPDTT6O+6xZzjHRjczJFOA6k
Feb 20 09:07:26 np0005625203.localdomain systemd-logind[759]: New session 39 of user zuul.
Feb 20 09:07:26 np0005625203.localdomain systemd[1]: Started Session 39 of User zuul.
Feb 20 09:07:26 np0005625203.localdomain sshd[124573]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Feb 20 09:07:26 np0005625203.localdomain sshd[124609]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:07:27 np0005625203.localdomain sshd[124609]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:07:27 np0005625203.localdomain python3.9[124668]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 20 09:07:28 np0005625203.localdomain python3.9[124762]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 20 09:07:29 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31200 DF PROTO=TCP SPT=42636 DPT=9102 SEQ=1421683916 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F0D8B50000000001030307) 
Feb 20 09:07:29 np0005625203.localdomain sudo[124856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-knmrtduitphcpqmloaqdstahwmbhztvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578449.4740427-106-267956283033359/AnsiballZ_command.py
Feb 20 09:07:29 np0005625203.localdomain sudo[124856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:07:30 np0005625203.localdomain python3.9[124858]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:07:30 np0005625203.localdomain sudo[124856]: pam_unix(sudo:session): session closed for user root
Feb 20 09:07:31 np0005625203.localdomain python3.9[124949]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 20 09:07:32 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51718 DF PROTO=TCP SPT=49404 DPT=9882 SEQ=2499024842 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F0E3800000000001030307) 
Feb 20 09:07:32 np0005625203.localdomain sudo[125043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-roynusnrudckfweyjjgmzchbsshuhqyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578451.9219522-166-242391842468115/AnsiballZ_setup.py
Feb 20 09:07:32 np0005625203.localdomain sudo[125043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:07:32 np0005625203.localdomain python3.9[125045]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 20 09:07:32 np0005625203.localdomain sudo[125043]: pam_unix(sudo:session): session closed for user root
Feb 20 09:07:33 np0005625203.localdomain sudo[125097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sednvrovwvrwurvxsavnujkpfqcxjrda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578451.9219522-166-242391842468115/AnsiballZ_dnf.py
Feb 20 09:07:33 np0005625203.localdomain sudo[125097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:07:33 np0005625203.localdomain python3.9[125099]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 20 09:07:36 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51719 DF PROTO=TCP SPT=49404 DPT=9882 SEQ=2499024842 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F0F3410000000001030307) 
Feb 20 09:07:36 np0005625203.localdomain sudo[125097]: pam_unix(sudo:session): session closed for user root
Feb 20 09:07:37 np0005625203.localdomain sudo[125191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-egehhxyvrrubppryqueryqqfflnkipgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578457.140758-202-23623544199851/AnsiballZ_setup.py
Feb 20 09:07:37 np0005625203.localdomain sudo[125191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:07:37 np0005625203.localdomain python3.9[125193]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 20 09:07:37 np0005625203.localdomain sudo[125191]: pam_unix(sudo:session): session closed for user root
Feb 20 09:07:38 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3289 DF PROTO=TCP SPT=47356 DPT=9105 SEQ=2575126548 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F0FB000000000001030307) 
Feb 20 09:07:38 np0005625203.localdomain sudo[125338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-csnrnkzmvnehghanxxtufqvnbgrwrbyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578458.592027-235-215921718162355/AnsiballZ_file.py
Feb 20 09:07:39 np0005625203.localdomain sudo[125338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:07:39 np0005625203.localdomain python3.9[125340]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:07:39 np0005625203.localdomain sudo[125338]: pam_unix(sudo:session): session closed for user root
Feb 20 09:07:39 np0005625203.localdomain sudo[125430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tunknmnckievkjjxbsojmmddpgfgnmtb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578459.429757-259-255464566143560/AnsiballZ_command.py
Feb 20 09:07:39 np0005625203.localdomain sudo[125430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:07:39 np0005625203.localdomain python3.9[125432]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman
                                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:07:39 np0005625203.localdomain sudo[125430]: pam_unix(sudo:session): session closed for user root
Feb 20 09:07:40 np0005625203.localdomain sudo[125533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zjenhuvcmezrzrumupswudppeweubsye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578460.1795003-283-188133882693687/AnsiballZ_stat.py
Feb 20 09:07:40 np0005625203.localdomain sudo[125533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:07:40 np0005625203.localdomain python3.9[125535]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:07:40 np0005625203.localdomain sudo[125533]: pam_unix(sudo:session): session closed for user root
Feb 20 09:07:41 np0005625203.localdomain sudo[125581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aaztmrcmigfyhpcjqsqawitidggrwtin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578460.1795003-283-188133882693687/AnsiballZ_file.py
Feb 20 09:07:41 np0005625203.localdomain sudo[125581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:07:41 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44004 DF PROTO=TCP SPT=44886 DPT=9105 SEQ=451158133 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F106810000000001030307) 
Feb 20 09:07:41 np0005625203.localdomain python3.9[125583]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:07:41 np0005625203.localdomain sudo[125581]: pam_unix(sudo:session): session closed for user root
Feb 20 09:07:41 np0005625203.localdomain sshd[125611]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:07:41 np0005625203.localdomain sudo[125675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rcjuizpoaorgobpbjalyuhbkcdfguyed ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578461.3971558-319-86621884515209/AnsiballZ_stat.py
Feb 20 09:07:41 np0005625203.localdomain sudo[125675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:07:41 np0005625203.localdomain python3.9[125677]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:07:41 np0005625203.localdomain sudo[125675]: pam_unix(sudo:session): session closed for user root
Feb 20 09:07:42 np0005625203.localdomain sudo[125748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jtniijhjwspmgppgbbpiefidbftavbym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578461.3971558-319-86621884515209/AnsiballZ_copy.py
Feb 20 09:07:42 np0005625203.localdomain sudo[125748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:07:42 np0005625203.localdomain python3.9[125750]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771578461.3971558-319-86621884515209/.source.conf follow=False _original_basename=registries.conf.j2 checksum=804a0d01b832e60d20f779a331306df708c87b02 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:07:42 np0005625203.localdomain sudo[125748]: pam_unix(sudo:session): session closed for user root
Feb 20 09:07:42 np0005625203.localdomain sshd[125611]: Invalid user shalini from 102.210.148.92 port 37802
Feb 20 09:07:42 np0005625203.localdomain sshd[125611]: Received disconnect from 102.210.148.92 port 37802:11: Bye Bye [preauth]
Feb 20 09:07:42 np0005625203.localdomain sshd[125611]: Disconnected from invalid user shalini 102.210.148.92 port 37802 [preauth]
Feb 20 09:07:43 np0005625203.localdomain sudo[125840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fhurddlaibnmxicedaznzdpfbrktoogj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578462.7069-367-223597473348230/AnsiballZ_ini_file.py
Feb 20 09:07:43 np0005625203.localdomain sudo[125840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:07:43 np0005625203.localdomain python3.9[125842]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:07:43 np0005625203.localdomain sudo[125840]: pam_unix(sudo:session): session closed for user root
Feb 20 09:07:43 np0005625203.localdomain sudo[125932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tcnnuaywxuddbyrknraegeqdvgcmohns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578463.4767897-367-260934534010408/AnsiballZ_ini_file.py
Feb 20 09:07:43 np0005625203.localdomain sudo[125932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:07:43 np0005625203.localdomain python3.9[125934]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:07:43 np0005625203.localdomain sudo[125932]: pam_unix(sudo:session): session closed for user root
Feb 20 09:07:44 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3291 DF PROTO=TCP SPT=47356 DPT=9105 SEQ=2575126548 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F112C00000000001030307) 
Feb 20 09:07:44 np0005625203.localdomain sudo[126024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-imjwzlcvdvipemgqomkqxdwrvwbaqvuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578464.0785553-367-157549557855884/AnsiballZ_ini_file.py
Feb 20 09:07:44 np0005625203.localdomain sudo[126024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:07:44 np0005625203.localdomain python3.9[126026]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:07:44 np0005625203.localdomain sudo[126024]: pam_unix(sudo:session): session closed for user root
Feb 20 09:07:44 np0005625203.localdomain sudo[126116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eqwgoqlmgmywoilalzqfnxpakyoduhjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578464.6633062-367-150383862032123/AnsiballZ_ini_file.py
Feb 20 09:07:44 np0005625203.localdomain sudo[126116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:07:45 np0005625203.localdomain python3.9[126118]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:07:45 np0005625203.localdomain sudo[126116]: pam_unix(sudo:session): session closed for user root
Feb 20 09:07:46 np0005625203.localdomain python3.9[126208]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 20 09:07:46 np0005625203.localdomain sudo[126300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-orfqntwwlboyttjazpoknimefsopwfow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578466.2467587-487-42887922360502/AnsiballZ_dnf.py
Feb 20 09:07:46 np0005625203.localdomain sudo[126300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:07:46 np0005625203.localdomain python3.9[126302]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 20 09:07:47 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26961 DF PROTO=TCP SPT=39316 DPT=9101 SEQ=3990747267 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F11E000000000001030307) 
Feb 20 09:07:49 np0005625203.localdomain sudo[126300]: pam_unix(sudo:session): session closed for user root
Feb 20 09:07:50 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38732 DF PROTO=TCP SPT=47980 DPT=9100 SEQ=3081755563 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F12A000000000001030307) 
Feb 20 09:07:50 np0005625203.localdomain sudo[126394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kwzfbthwdmendiczywbnzrxofrfsddzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578470.4211786-511-209720940090389/AnsiballZ_dnf.py
Feb 20 09:07:50 np0005625203.localdomain sudo[126394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:07:50 np0005625203.localdomain python3.9[126396]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openstack-network-scripts'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 20 09:07:53 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26963 DF PROTO=TCP SPT=39316 DPT=9101 SEQ=3990747267 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F135C00000000001030307) 
Feb 20 09:07:54 np0005625203.localdomain sudo[126394]: pam_unix(sudo:session): session closed for user root
Feb 20 09:07:54 np0005625203.localdomain sudo[126488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fryrchfsrnztidjvsomhlrccoogkjatz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578474.727254-541-267200625073825/AnsiballZ_dnf.py
Feb 20 09:07:55 np0005625203.localdomain sudo[126488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:07:55 np0005625203.localdomain python3.9[126490]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['podman', 'buildah'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 20 09:07:56 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38734 DF PROTO=TCP SPT=47980 DPT=9100 SEQ=3081755563 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F141C00000000001030307) 
Feb 20 09:07:58 np0005625203.localdomain sudo[126488]: pam_unix(sudo:session): session closed for user root
Feb 20 09:07:59 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6669 DF PROTO=TCP SPT=34508 DPT=9102 SEQ=807187287 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F14DE40000000001030307) 
Feb 20 09:07:59 np0005625203.localdomain sudo[126588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dclwwtwzjwjhkkxftagqinywozipbmdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578479.1186292-568-125527757406104/AnsiballZ_dnf.py
Feb 20 09:07:59 np0005625203.localdomain sudo[126588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:07:59 np0005625203.localdomain python3.9[126590]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['tuned', 'tuned-profiles-cpu-partitioning'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 20 09:08:02 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49407 DF PROTO=TCP SPT=46368 DPT=9882 SEQ=3543608473 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F158C00000000001030307) 
Feb 20 09:08:02 np0005625203.localdomain sudo[126588]: pam_unix(sudo:session): session closed for user root
Feb 20 09:08:03 np0005625203.localdomain sudo[126682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bgsuzxejrcnqfbmjpknvsyyhwjkgbvwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578483.5784147-604-270061921904038/AnsiballZ_dnf.py
Feb 20 09:08:03 np0005625203.localdomain sudo[126682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:08:04 np0005625203.localdomain python3.9[126684]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['os-net-config'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 20 09:08:05 np0005625203.localdomain sudo[126687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:08:05 np0005625203.localdomain sudo[126687]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:08:05 np0005625203.localdomain sudo[126687]: pam_unix(sudo:session): session closed for user root
Feb 20 09:08:05 np0005625203.localdomain sudo[126702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 09:08:05 np0005625203.localdomain sudo[126702]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:08:05 np0005625203.localdomain sudo[126702]: pam_unix(sudo:session): session closed for user root
Feb 20 09:08:06 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49408 DF PROTO=TCP SPT=46368 DPT=9882 SEQ=3543608473 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F168810000000001030307) 
Feb 20 09:08:07 np0005625203.localdomain sudo[126682]: pam_unix(sudo:session): session closed for user root
Feb 20 09:08:08 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38844 DF PROTO=TCP SPT=44334 DPT=9105 SEQ=734514725 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F170410000000001030307) 
Feb 20 09:08:08 np0005625203.localdomain sudo[126838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gbdmbctpbuhysbgnwfndixfglhtxfwsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578487.9518023-631-21712425615384/AnsiballZ_dnf.py
Feb 20 09:08:08 np0005625203.localdomain sudo[126838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:08:08 np0005625203.localdomain python3.9[126840]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openssh-server'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 20 09:08:09 np0005625203.localdomain sshd[126843]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:08:09 np0005625203.localdomain sshd[126843]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:08:09 np0005625203.localdomain sudo[126845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:08:09 np0005625203.localdomain sudo[126845]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:08:09 np0005625203.localdomain sudo[126845]: pam_unix(sudo:session): session closed for user root
Feb 20 09:08:11 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20390 DF PROTO=TCP SPT=48260 DPT=9105 SEQ=434317176 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F17C810000000001030307) 
Feb 20 09:08:11 np0005625203.localdomain sudo[126838]: pam_unix(sudo:session): session closed for user root
Feb 20 09:08:12 np0005625203.localdomain sudo[126949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-acfdihidigbatsmrpkpjbmuziwvpvmuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578492.153454-658-243381043857091/AnsiballZ_dnf.py
Feb 20 09:08:12 np0005625203.localdomain sudo[126949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:08:12 np0005625203.localdomain python3.9[126951]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 20 09:08:14 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38846 DF PROTO=TCP SPT=44334 DPT=9105 SEQ=734514725 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F188000000000001030307) 
Feb 20 09:08:16 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39737 DF PROTO=TCP SPT=47798 DPT=9101 SEQ=2042529800 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F193000000000001030307) 
Feb 20 09:08:20 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46888 DF PROTO=TCP SPT=41040 DPT=9100 SEQ=4174896197 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F19F400000000001030307) 
Feb 20 09:08:23 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19922 DF PROTO=TCP SPT=58832 DPT=9100 SEQ=351581830 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F1AA800000000001030307) 
Feb 20 09:08:26 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46890 DF PROTO=TCP SPT=41040 DPT=9100 SEQ=4174896197 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F1B7000000000001030307) 
Feb 20 09:08:29 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16124 DF PROTO=TCP SPT=38072 DPT=9102 SEQ=294724731 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F1C3150000000001030307) 
Feb 20 09:08:31 np0005625203.localdomain sudo[126949]: pam_unix(sudo:session): session closed for user root
Feb 20 09:08:31 np0005625203.localdomain sudo[127116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dvqxoxgorxfcbgjdwfbbixtvhgmyfzix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578511.6704292-685-126374052340397/AnsiballZ_dnf.py
Feb 20 09:08:31 np0005625203.localdomain sudo[127116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:08:32 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39638 DF PROTO=TCP SPT=48010 DPT=9882 SEQ=1514916977 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F1CE000000000001030307) 
Feb 20 09:08:32 np0005625203.localdomain python3.9[127118]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['iscsi-initiator-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 20 09:08:35 np0005625203.localdomain sudo[127116]: pam_unix(sudo:session): session closed for user root
Feb 20 09:08:36 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39639 DF PROTO=TCP SPT=48010 DPT=9882 SEQ=1514916977 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F1DDC00000000001030307) 
Feb 20 09:08:36 np0005625203.localdomain sudo[127211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cxcffmpmmsoqzofhvpifuhmqsajapljb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578515.9133997-715-59506756592770/AnsiballZ_dnf.py
Feb 20 09:08:36 np0005625203.localdomain sudo[127211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:08:36 np0005625203.localdomain python3.9[127213]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['device-mapper-multipath'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 20 09:08:37 np0005625203.localdomain sshd[127216]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:08:38 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3830 DF PROTO=TCP SPT=52154 DPT=9105 SEQ=2689623165 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F1E5800000000001030307) 
Feb 20 09:08:38 np0005625203.localdomain sshd[127216]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:08:39 np0005625203.localdomain sudo[127211]: pam_unix(sudo:session): session closed for user root
Feb 20 09:08:40 np0005625203.localdomain sudo[127310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zuvkbiuurepxnssgccsjyqqthgkefauy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578520.099989-748-96830399541500/AnsiballZ_file.py
Feb 20 09:08:40 np0005625203.localdomain sudo[127310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:08:40 np0005625203.localdomain python3.9[127312]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:08:40 np0005625203.localdomain sudo[127310]: pam_unix(sudo:session): session closed for user root
Feb 20 09:08:41 np0005625203.localdomain sudo[127415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xfitdsrywdrmfzyzbjdkutwujligtbti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578520.7820823-772-214045666667970/AnsiballZ_stat.py
Feb 20 09:08:41 np0005625203.localdomain sudo[127415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:08:41 np0005625203.localdomain python3.9[127417]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:08:41 np0005625203.localdomain sudo[127415]: pam_unix(sudo:session): session closed for user root
Feb 20 09:08:41 np0005625203.localdomain sudo[127488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yuvbcgqedlwygjteqkbozzlnubknkmqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578520.7820823-772-214045666667970/AnsiballZ_copy.py
Feb 20 09:08:41 np0005625203.localdomain sudo[127488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:08:41 np0005625203.localdomain python3.9[127490]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1771578520.7820823-772-214045666667970/.source.json _original_basename=.8w7lpoyx follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:08:41 np0005625203.localdomain sudo[127488]: pam_unix(sudo:session): session closed for user root
Feb 20 09:08:42 np0005625203.localdomain sudo[127580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-owccgymapxiafmxxljjdvwjkqmrwxjxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578522.252448-826-180359679463547/AnsiballZ_podman_image.py
Feb 20 09:08:42 np0005625203.localdomain sudo[127580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:08:42 np0005625203.localdomain python3.9[127582]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Feb 20 09:08:44 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3832 DF PROTO=TCP SPT=52154 DPT=9105 SEQ=2689623165 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F1FD400000000001030307) 
Feb 20 09:08:44 np0005625203.localdomain systemd-journald[48285]: Field hash table of /run/log/journal/01f46965e72fd8a157841feaa66c8d52/system.journal has a fill level at 77.5 (258 of 333 items), suggesting rotation.
Feb 20 09:08:44 np0005625203.localdomain systemd-journald[48285]: /run/log/journal/01f46965e72fd8a157841feaa66c8d52/system.journal: Journal header limits reached or header out-of-date, rotating.
Feb 20 09:08:44 np0005625203.localdomain rsyslogd[758]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Feb 20 09:08:44 np0005625203.localdomain rsyslogd[758]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Feb 20 09:08:44 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16128 DF PROTO=TCP SPT=38072 DPT=9102 SEQ=294724731 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F1FE810000000001030307) 
Feb 20 09:08:47 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56462 DF PROTO=TCP SPT=42768 DPT=9101 SEQ=431837134 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F208400000000001030307) 
Feb 20 09:08:49 np0005625203.localdomain podman[127595]: 2026-02-20 09:08:42.994631649 +0000 UTC m=+0.050999131 image pull  quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Feb 20 09:08:49 np0005625203.localdomain sudo[127580]: pam_unix(sudo:session): session closed for user root
Feb 20 09:08:50 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10800 DF PROTO=TCP SPT=38454 DPT=9100 SEQ=2379477657 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F214800000000001030307) 
Feb 20 09:08:50 np0005625203.localdomain sudo[127793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uwgscrrjbcojcdqqfjyynfsufglehonl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578530.4790435-859-217522382267691/AnsiballZ_podman_image.py
Feb 20 09:08:50 np0005625203.localdomain sudo[127793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:08:50 np0005625203.localdomain python3.9[127795]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Feb 20 09:08:53 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56464 DF PROTO=TCP SPT=42768 DPT=9101 SEQ=431837134 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F220000000000001030307) 
Feb 20 09:08:53 np0005625203.localdomain sshd[127834]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:08:53 np0005625203.localdomain sshd[127834]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:08:56 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10802 DF PROTO=TCP SPT=38454 DPT=9100 SEQ=2379477657 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F22C400000000001030307) 
Feb 20 09:08:58 np0005625203.localdomain podman[127809]: 2026-02-20 09:08:51.035508681 +0000 UTC m=+0.032017510 image pull  quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 20 09:08:59 np0005625203.localdomain sudo[127793]: pam_unix(sudo:session): session closed for user root
Feb 20 09:08:59 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48852 DF PROTO=TCP SPT=53470 DPT=9102 SEQ=4246375103 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F238450000000001030307) 
Feb 20 09:08:59 np0005625203.localdomain sudo[128006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jvvezaghmajkngqupkclcwpaesajqtup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578539.5759554-889-169072032507944/AnsiballZ_podman_image.py
Feb 20 09:08:59 np0005625203.localdomain sudo[128006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:09:00 np0005625203.localdomain python3.9[128008]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Feb 20 09:09:02 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27151 DF PROTO=TCP SPT=35622 DPT=9882 SEQ=4190926581 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F243000000000001030307) 
Feb 20 09:09:06 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27152 DF PROTO=TCP SPT=35622 DPT=9882 SEQ=4190926581 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F252C00000000001030307) 
Feb 20 09:09:08 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=194 DF PROTO=TCP SPT=33802 DPT=9105 SEQ=1289056758 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F25A800000000001030307) 
Feb 20 09:09:09 np0005625203.localdomain sudo[128668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:09:09 np0005625203.localdomain sudo[128668]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:09:09 np0005625203.localdomain sudo[128668]: pam_unix(sudo:session): session closed for user root
Feb 20 09:09:10 np0005625203.localdomain sudo[128683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 09:09:10 np0005625203.localdomain sudo[128683]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:09:11 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38849 DF PROTO=TCP SPT=44334 DPT=9105 SEQ=734514725 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F266800000000001030307) 
Feb 20 09:09:14 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=196 DF PROTO=TCP SPT=33802 DPT=9105 SEQ=1289056758 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F272400000000001030307) 
Feb 20 09:09:14 np0005625203.localdomain podman[128021]: 2026-02-20 09:09:00.187162909 +0000 UTC m=+0.074486923 image pull  quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Feb 20 09:09:15 np0005625203.localdomain sudo[128683]: pam_unix(sudo:session): session closed for user root
Feb 20 09:09:15 np0005625203.localdomain sudo[128006]: pam_unix(sudo:session): session closed for user root
Feb 20 09:09:15 np0005625203.localdomain sudo[128815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:09:15 np0005625203.localdomain sudo[128815]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:09:15 np0005625203.localdomain sudo[128815]: pam_unix(sudo:session): session closed for user root
Feb 20 09:09:15 np0005625203.localdomain sudo[128844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 list-networks
Feb 20 09:09:15 np0005625203.localdomain sudo[128844]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:09:15 np0005625203.localdomain sudo[128844]: pam_unix(sudo:session): session closed for user root
Feb 20 09:09:17 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54550 DF PROTO=TCP SPT=47740 DPT=9101 SEQ=399701125 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F27D800000000001030307) 
Feb 20 09:09:17 np0005625203.localdomain sudo[128953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kpxcnjwqdihtxjblechdzsfjdvdkanho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578556.8978772-916-210522971766487/AnsiballZ_podman_image.py
Feb 20 09:09:17 np0005625203.localdomain sudo[128953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:09:17 np0005625203.localdomain python3.9[128955]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Feb 20 09:09:17 np0005625203.localdomain sudo[128979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:09:17 np0005625203.localdomain sudo[128979]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:09:17 np0005625203.localdomain sudo[128979]: pam_unix(sudo:session): session closed for user root
Feb 20 09:09:19 np0005625203.localdomain podman[128967]: 2026-02-20 09:09:17.509444076 +0000 UTC m=+0.046372898 image pull  quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified
Feb 20 09:09:19 np0005625203.localdomain sshd[129018]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:09:19 np0005625203.localdomain sudo[128953]: pam_unix(sudo:session): session closed for user root
Feb 20 09:09:19 np0005625203.localdomain sudo[129150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fbhidvolddbgeibcdeqvvzhttgvowlqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578559.6731298-943-97464712062429/AnsiballZ_podman_image.py
Feb 20 09:09:19 np0005625203.localdomain sudo[129150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:09:20 np0005625203.localdomain python3.9[129152]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Feb 20 09:09:20 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57345 DF PROTO=TCP SPT=38944 DPT=9100 SEQ=2070419404 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F289C00000000001030307) 
Feb 20 09:09:20 np0005625203.localdomain sshd[129018]: Invalid user ts3 from 118.99.80.29 port 11430
Feb 20 09:09:20 np0005625203.localdomain sshd[129018]: Received disconnect from 118.99.80.29 port 11430:11: Bye Bye [preauth]
Feb 20 09:09:20 np0005625203.localdomain sshd[129018]: Disconnected from invalid user ts3 118.99.80.29 port 11430 [preauth]
Feb 20 09:09:21 np0005625203.localdomain podman[129165]: 2026-02-20 09:09:20.272152056 +0000 UTC m=+0.047822682 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:09:21 np0005625203.localdomain sudo[129150]: pam_unix(sudo:session): session closed for user root
Feb 20 09:09:22 np0005625203.localdomain sudo[129328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dujnsoxyrhmjsnfpwrwronxhulcqqiue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578562.2319093-970-2919523197680/AnsiballZ_podman_image.py
Feb 20 09:09:22 np0005625203.localdomain sudo[129328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:09:22 np0005625203.localdomain python3.9[129330]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Feb 20 09:09:23 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54552 DF PROTO=TCP SPT=47740 DPT=9101 SEQ=399701125 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F295400000000001030307) 
Feb 20 09:09:26 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57347 DF PROTO=TCP SPT=38944 DPT=9100 SEQ=2070419404 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F2A1800000000001030307) 
Feb 20 09:09:28 np0005625203.localdomain podman[129343]: 2026-02-20 09:09:22.869480569 +0000 UTC m=+0.047861513 image pull  quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified
Feb 20 09:09:28 np0005625203.localdomain sudo[129328]: pam_unix(sudo:session): session closed for user root
Feb 20 09:09:28 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62773 DF PROTO=TCP SPT=38706 DPT=9882 SEQ=3635924828 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F2AC3D0000000001030307) 
Feb 20 09:09:29 np0005625203.localdomain sudo[129519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rqdweyiuimywxtnxbtdwgunhkjyterdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578568.906249-970-171912502121271/AnsiballZ_podman_image.py
Feb 20 09:09:29 np0005625203.localdomain sudo[129519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:09:29 np0005625203.localdomain python3.9[129521]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Feb 20 09:09:31 np0005625203.localdomain podman[129533]: 2026-02-20 09:09:29.50730511 +0000 UTC m=+0.040536576 image pull  quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c
Feb 20 09:09:31 np0005625203.localdomain sudo[129519]: pam_unix(sudo:session): session closed for user root
Feb 20 09:09:32 np0005625203.localdomain sshd[124573]: pam_unix(sshd:session): session closed for user zuul
Feb 20 09:09:32 np0005625203.localdomain systemd[1]: session-39.scope: Deactivated successfully.
Feb 20 09:09:32 np0005625203.localdomain systemd[1]: session-39.scope: Consumed 2min 7.363s CPU time.
Feb 20 09:09:32 np0005625203.localdomain systemd-logind[759]: Session 39 logged out. Waiting for processes to exit.
Feb 20 09:09:32 np0005625203.localdomain systemd-logind[759]: Removed session 39.
Feb 20 09:09:32 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62775 DF PROTO=TCP SPT=38706 DPT=9882 SEQ=3635924828 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F2B8400000000001030307) 
Feb 20 09:09:36 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62776 DF PROTO=TCP SPT=38706 DPT=9882 SEQ=3635924828 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F2C8000000000001030307) 
Feb 20 09:09:37 np0005625203.localdomain sshd[129645]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:09:37 np0005625203.localdomain sshd[129645]: Accepted publickey for zuul from 192.168.122.30 port 41024 ssh2: RSA SHA256:uaxCfdsB9dLyL4znjsLCPDTT6O+6xZzjHRjczJFOA6k
Feb 20 09:09:37 np0005625203.localdomain systemd-logind[759]: New session 40 of user zuul.
Feb 20 09:09:37 np0005625203.localdomain systemd[1]: Started Session 40 of User zuul.
Feb 20 09:09:37 np0005625203.localdomain sshd[129645]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Feb 20 09:09:37 np0005625203.localdomain sshd[129650]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:09:37 np0005625203.localdomain sshd[129650]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:09:38 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61014 DF PROTO=TCP SPT=37094 DPT=9105 SEQ=831469297 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F2CFC10000000001030307) 
Feb 20 09:09:38 np0005625203.localdomain python3.9[129740]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 20 09:09:39 np0005625203.localdomain sudo[129834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ratmyzbuoomgratpnfkcpqoujyzjaddh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578579.3056843-67-81573750615655/AnsiballZ_getent.py
Feb 20 09:09:39 np0005625203.localdomain sudo[129834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:09:39 np0005625203.localdomain python3.9[129836]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Feb 20 09:09:39 np0005625203.localdomain sudo[129834]: pam_unix(sudo:session): session closed for user root
Feb 20 09:09:40 np0005625203.localdomain sudo[129927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-svjgwbvorjcmjccfqlrysnaqyzfowbqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578580.3909109-103-61680999264345/AnsiballZ_setup.py
Feb 20 09:09:40 np0005625203.localdomain sudo[129927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:09:40 np0005625203.localdomain python3.9[129929]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 20 09:09:41 np0005625203.localdomain sudo[129927]: pam_unix(sudo:session): session closed for user root
Feb 20 09:09:41 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3835 DF PROTO=TCP SPT=52154 DPT=9105 SEQ=2689623165 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F2DC800000000001030307) 
Feb 20 09:09:41 np0005625203.localdomain sudo[129981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pmbiwtbyjlclfckuaxkdlcqzdtxifvym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578580.3909109-103-61680999264345/AnsiballZ_dnf.py
Feb 20 09:09:41 np0005625203.localdomain sudo[129981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:09:43 np0005625203.localdomain python3.9[129983]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch3.3'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 20 09:09:44 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61016 DF PROTO=TCP SPT=37094 DPT=9105 SEQ=831469297 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F2E7800000000001030307) 
Feb 20 09:09:46 np0005625203.localdomain sudo[129981]: pam_unix(sudo:session): session closed for user root
Feb 20 09:09:47 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26459 DF PROTO=TCP SPT=60152 DPT=9101 SEQ=3352935879 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F2F2C00000000001030307) 
Feb 20 09:09:47 np0005625203.localdomain sudo[130193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bhnuyahdvtglzptpbwayylgpzcipacuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578586.847524-145-162572421338132/AnsiballZ_dnf.py
Feb 20 09:09:47 np0005625203.localdomain sudo[130193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:09:47 np0005625203.localdomain python3.9[130195]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 20 09:09:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 20 09:09:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 5400.1 total, 600.0 interval
                                                          Cumulative writes: 4844 writes, 21K keys, 4844 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
                                                          Cumulative WAL: 4844 writes, 660 syncs, 7.34 writes per sync, written: 0.02 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 20 09:09:50 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56467 DF PROTO=TCP SPT=42768 DPT=9101 SEQ=431837134 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F2FE800000000001030307) 
Feb 20 09:09:50 np0005625203.localdomain sudo[130193]: pam_unix(sudo:session): session closed for user root
Feb 20 09:09:51 np0005625203.localdomain sudo[130287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aefeyamgbjwudbhywmhbdpmadkrllxht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578590.8288589-169-160899834144365/AnsiballZ_systemd.py
Feb 20 09:09:51 np0005625203.localdomain sudo[130287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:09:51 np0005625203.localdomain python3.9[130289]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 20 09:09:51 np0005625203.localdomain sudo[130287]: pam_unix(sudo:session): session closed for user root
Feb 20 09:09:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 20 09:09:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 5400.1 total, 600.0 interval
                                                          Cumulative writes: 5843 writes, 25K keys, 5843 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
                                                          Cumulative WAL: 5843 writes, 764 syncs, 7.65 writes per sync, written: 0.02 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 20 09:09:52 np0005625203.localdomain python3.9[130382]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 20 09:09:53 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10805 DF PROTO=TCP SPT=38454 DPT=9100 SEQ=2379477657 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F30A870000000001030307) 
Feb 20 09:09:53 np0005625203.localdomain sudo[130472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kbppiaqluczzmnlhskwojltpvfnhftvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578592.8964775-226-114987542167952/AnsiballZ_sefcontext.py
Feb 20 09:09:53 np0005625203.localdomain sudo[130472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:09:53 np0005625203.localdomain python3.9[130474]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Feb 20 09:09:55 np0005625203.localdomain kernel: SELinux:  Converting 2743 SID table entries...
Feb 20 09:09:55 np0005625203.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Feb 20 09:09:55 np0005625203.localdomain kernel: SELinux:  policy capability open_perms=1
Feb 20 09:09:55 np0005625203.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Feb 20 09:09:55 np0005625203.localdomain kernel: SELinux:  policy capability always_check_network=0
Feb 20 09:09:55 np0005625203.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 20 09:09:55 np0005625203.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 20 09:09:55 np0005625203.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 20 09:09:55 np0005625203.localdomain sudo[130472]: pam_unix(sudo:session): session closed for user root
Feb 20 09:09:56 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10340 DF PROTO=TCP SPT=42990 DPT=9100 SEQ=1240806069 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F316800000000001030307) 
Feb 20 09:09:56 np0005625203.localdomain python3.9[130569]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 20 09:09:57 np0005625203.localdomain sudo[130665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yrdcjbrpswulbeunounsqlwuoaqybeqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578597.0519118-280-200374438926803/AnsiballZ_dnf.py
Feb 20 09:09:57 np0005625203.localdomain dbus-broker-launch[754]: avc:  op=load_policy lsm=selinux seqno=18 res=1
Feb 20 09:09:57 np0005625203.localdomain sudo[130665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:09:57 np0005625203.localdomain python3.9[130667]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 20 09:09:59 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3978 DF PROTO=TCP SPT=45956 DPT=9102 SEQ=3715092404 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F322A50000000001030307) 
Feb 20 09:09:59 np0005625203.localdomain sshd[130670]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:10:00 np0005625203.localdomain sshd[130670]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:10:00 np0005625203.localdomain sudo[130665]: pam_unix(sudo:session): session closed for user root
Feb 20 09:10:01 np0005625203.localdomain sudo[130761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-krgquxshkvnvehgrgdtxpajptnriidef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578601.16851-304-175747448827013/AnsiballZ_command.py
Feb 20 09:10:01 np0005625203.localdomain sudo[130761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:10:01 np0005625203.localdomain python3.9[130763]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:10:02 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3243 DF PROTO=TCP SPT=54654 DPT=9882 SEQ=2445195502 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F32D800000000001030307) 
Feb 20 09:10:02 np0005625203.localdomain sudo[130761]: pam_unix(sudo:session): session closed for user root
Feb 20 09:10:03 np0005625203.localdomain sudo[131006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-chljbkkrgmewvmonnqtdrdsvzjrhsbrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578602.7023704-328-187748641366161/AnsiballZ_file.py
Feb 20 09:10:03 np0005625203.localdomain sudo[131006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:10:04 np0005625203.localdomain python3.9[131008]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Feb 20 09:10:04 np0005625203.localdomain sudo[131006]: pam_unix(sudo:session): session closed for user root
Feb 20 09:10:04 np0005625203.localdomain python3.9[131098]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:10:05 np0005625203.localdomain sudo[131190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tsgbdyrjpskjjxfwbhxogugzcsiqcjom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578605.2853687-382-161339446033688/AnsiballZ_dnf.py
Feb 20 09:10:05 np0005625203.localdomain sudo[131190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:10:05 np0005625203.localdomain python3.9[131192]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 20 09:10:06 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3244 DF PROTO=TCP SPT=54654 DPT=9882 SEQ=2445195502 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F33D400000000001030307) 
Feb 20 09:10:08 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13008 DF PROTO=TCP SPT=44970 DPT=9105 SEQ=1817563185 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F345000000000001030307) 
Feb 20 09:10:10 np0005625203.localdomain sudo[131190]: pam_unix(sudo:session): session closed for user root
Feb 20 09:10:10 np0005625203.localdomain sudo[131284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sgtsxlfcxitnzmomilqzfiamfjgscylz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578610.3396137-406-181731338852870/AnsiballZ_dnf.py
Feb 20 09:10:10 np0005625203.localdomain sudo[131284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:10:10 np0005625203.localdomain python3.9[131286]: ansible-ansible.legacy.dnf Invoked with name=['openstack-network-scripts'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 20 09:10:11 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=199 DF PROTO=TCP SPT=33802 DPT=9105 SEQ=1289056758 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F350800000000001030307) 
Feb 20 09:10:13 np0005625203.localdomain sudo[131284]: pam_unix(sudo:session): session closed for user root
Feb 20 09:10:14 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13010 DF PROTO=TCP SPT=44970 DPT=9105 SEQ=1817563185 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F35CC00000000001030307) 
Feb 20 09:10:14 np0005625203.localdomain sudo[131378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xmoqiugdkeskfpxflzcatoamyngazefi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578614.3255117-430-242415231138888/AnsiballZ_systemd.py
Feb 20 09:10:14 np0005625203.localdomain sudo[131378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:10:14 np0005625203.localdomain python3.9[131380]: ansible-ansible.builtin.systemd Invoked with enabled=True name=network daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Feb 20 09:10:14 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:10:15 np0005625203.localdomain systemd-rc-local-generator[131407]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:10:15 np0005625203.localdomain systemd-sysv-generator[131411]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:10:15 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:10:15 np0005625203.localdomain sudo[131378]: pam_unix(sudo:session): session closed for user root
Feb 20 09:10:15 np0005625203.localdomain sudo[131510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fbmavfjyjchpzldzzftrsusqjgjctfza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578615.5882516-460-117580944263423/AnsiballZ_stat.py
Feb 20 09:10:15 np0005625203.localdomain sudo[131510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:10:16 np0005625203.localdomain python3.9[131512]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:10:16 np0005625203.localdomain sudo[131510]: pam_unix(sudo:session): session closed for user root
Feb 20 09:10:16 np0005625203.localdomain sudo[131602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wciofojcsxwxphlqqdblyuizzzozlyzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578616.3388443-487-43435695768689/AnsiballZ_ini_file.py
Feb 20 09:10:16 np0005625203.localdomain sudo[131602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:10:16 np0005625203.localdomain python3.9[131604]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:10:16 np0005625203.localdomain sudo[131602]: pam_unix(sudo:session): session closed for user root
Feb 20 09:10:16 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12321 DF PROTO=TCP SPT=50784 DPT=9101 SEQ=819567956 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F367C00000000001030307) 
Feb 20 09:10:17 np0005625203.localdomain sudo[131696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zpaoggjoykjadtzqieniedbwusophsgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578617.111595-511-109468689745460/AnsiballZ_ini_file.py
Feb 20 09:10:17 np0005625203.localdomain sudo[131696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:10:17 np0005625203.localdomain python3.9[131698]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:10:17 np0005625203.localdomain sudo[131696]: pam_unix(sudo:session): session closed for user root
Feb 20 09:10:17 np0005625203.localdomain sudo[131713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:10:17 np0005625203.localdomain sudo[131713]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:10:17 np0005625203.localdomain sudo[131713]: pam_unix(sudo:session): session closed for user root
Feb 20 09:10:17 np0005625203.localdomain sudo[131741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 09:10:17 np0005625203.localdomain sudo[131741]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:10:18 np0005625203.localdomain sudo[131818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vowlngrsektvdexqwibgggsiplwgjvhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578617.7747312-535-276333057118300/AnsiballZ_ini_file.py
Feb 20 09:10:18 np0005625203.localdomain sudo[131818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:10:18 np0005625203.localdomain python3.9[131820]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:10:18 np0005625203.localdomain sudo[131818]: pam_unix(sudo:session): session closed for user root
Feb 20 09:10:18 np0005625203.localdomain sudo[131741]: pam_unix(sudo:session): session closed for user root
Feb 20 09:10:18 np0005625203.localdomain sudo[131941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-evkvasgqilzfyewudmltdrhlnwktsmgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578618.499087-565-157807237086344/AnsiballZ_stat.py
Feb 20 09:10:18 np0005625203.localdomain sudo[131941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:10:18 np0005625203.localdomain sudo[131942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:10:18 np0005625203.localdomain sudo[131942]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:10:18 np0005625203.localdomain sudo[131942]: pam_unix(sudo:session): session closed for user root
Feb 20 09:10:18 np0005625203.localdomain sudo[131959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ceph-volume --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8 -- inventory --format=json-pretty --filter-for-batch
Feb 20 09:10:18 np0005625203.localdomain sudo[131959]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:10:18 np0005625203.localdomain python3.9[131956]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:10:18 np0005625203.localdomain sudo[131941]: pam_unix(sudo:session): session closed for user root
Feb 20 09:10:19 np0005625203.localdomain sudo[132091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cqpkwvllnashgrigwukwinopqypfqzij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578618.499087-565-157807237086344/AnsiballZ_copy.py
Feb 20 09:10:19 np0005625203.localdomain sudo[132091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:10:19 np0005625203.localdomain podman[132072]: 
Feb 20 09:10:19 np0005625203.localdomain podman[132072]: 2026-02-20 09:10:19.431986643 +0000 UTC m=+0.078431596 container create 5ee3484d5d150517eb5a5e86b62a8e61b2a07d8a178b5b621c03897673a9098f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_haibt, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, architecture=x86_64, distribution-scope=public, com.redhat.component=rhceph-container, io.buildah.version=1.42.2, build-date=2026-02-09T10:25:24Z, vcs-type=git, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, GIT_BRANCH=main)
Feb 20 09:10:19 np0005625203.localdomain systemd[1]: Started libpod-conmon-5ee3484d5d150517eb5a5e86b62a8e61b2a07d8a178b5b621c03897673a9098f.scope.
Feb 20 09:10:19 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:10:19 np0005625203.localdomain podman[132072]: 2026-02-20 09:10:19.39819249 +0000 UTC m=+0.044637483 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 09:10:19 np0005625203.localdomain podman[132072]: 2026-02-20 09:10:19.505824555 +0000 UTC m=+0.152269498 container init 5ee3484d5d150517eb5a5e86b62a8e61b2a07d8a178b5b621c03897673a9098f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_haibt, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-09T10:25:24Z, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, com.redhat.component=rhceph-container, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, distribution-scope=public, release=1770267347, name=rhceph)
Feb 20 09:10:19 np0005625203.localdomain podman[132072]: 2026-02-20 09:10:19.515608651 +0000 UTC m=+0.162053594 container start 5ee3484d5d150517eb5a5e86b62a8e61b2a07d8a178b5b621c03897673a9098f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_haibt, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.42.2, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, architecture=x86_64, ceph=True, distribution-scope=public, GIT_CLEAN=True, build-date=2026-02-09T10:25:24Z, vcs-type=git, com.redhat.component=rhceph-container, RELEASE=main, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 09:10:19 np0005625203.localdomain podman[132072]: 2026-02-20 09:10:19.515810017 +0000 UTC m=+0.162254960 container attach 5ee3484d5d150517eb5a5e86b62a8e61b2a07d8a178b5b621c03897673a9098f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_haibt, build-date=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.42.2, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, RELEASE=main, description=Red Hat Ceph Storage 7, vcs-type=git, vendor=Red Hat, Inc., ceph=True, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, CEPH_POINT_RELEASE=, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public)
Feb 20 09:10:19 np0005625203.localdomain hungry_haibt[132103]: 167 167
Feb 20 09:10:19 np0005625203.localdomain systemd[1]: libpod-5ee3484d5d150517eb5a5e86b62a8e61b2a07d8a178b5b621c03897673a9098f.scope: Deactivated successfully.
Feb 20 09:10:19 np0005625203.localdomain podman[132072]: 2026-02-20 09:10:19.519475601 +0000 UTC m=+0.165920544 container died 5ee3484d5d150517eb5a5e86b62a8e61b2a07d8a178b5b621c03897673a9098f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_haibt, distribution-scope=public, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, vcs-type=git, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, RELEASE=main, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, version=7, build-date=2026-02-09T10:25:24Z, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64)
Feb 20 09:10:19 np0005625203.localdomain python3.9[132100]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1771578618.499087-565-157807237086344/.source _original_basename=.kmet57z2 follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:10:19 np0005625203.localdomain podman[132108]: 2026-02-20 09:10:19.610369476 +0000 UTC m=+0.078566231 container remove 5ee3484d5d150517eb5a5e86b62a8e61b2a07d8a178b5b621c03897673a9098f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=hungry_haibt, ceph=True, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., release=1770267347, name=rhceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, build-date=2026-02-09T10:25:24Z)
Feb 20 09:10:19 np0005625203.localdomain systemd[1]: libpod-conmon-5ee3484d5d150517eb5a5e86b62a8e61b2a07d8a178b5b621c03897673a9098f.scope: Deactivated successfully.
Feb 20 09:10:19 np0005625203.localdomain sudo[132091]: pam_unix(sudo:session): session closed for user root
Feb 20 09:10:19 np0005625203.localdomain podman[132144]: 
Feb 20 09:10:19 np0005625203.localdomain podman[132144]: 2026-02-20 09:10:19.834110342 +0000 UTC m=+0.078389984 container create 875ed4f71083f9908f59c7157b325bc7183489b2788ea8a807ab453a9c85ef5a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_wozniak, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, name=rhceph, release=1770267347, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, distribution-scope=public, GIT_CLEAN=True, GIT_BRANCH=main, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, vendor=Red Hat, Inc., ceph=True, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2026-02-09T10:25:24Z)
Feb 20 09:10:19 np0005625203.localdomain systemd[1]: Started libpod-conmon-875ed4f71083f9908f59c7157b325bc7183489b2788ea8a807ab453a9c85ef5a.scope.
Feb 20 09:10:19 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:10:19 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca148f1bbc31a28ca62b08839ba26584137cd85a496000129d9dbd5b6342bcbb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 20 09:10:19 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca148f1bbc31a28ca62b08839ba26584137cd85a496000129d9dbd5b6342bcbb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 20 09:10:19 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca148f1bbc31a28ca62b08839ba26584137cd85a496000129d9dbd5b6342bcbb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 20 09:10:19 np0005625203.localdomain podman[132144]: 2026-02-20 09:10:19.801841477 +0000 UTC m=+0.046121169 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 09:10:19 np0005625203.localdomain podman[132144]: 2026-02-20 09:10:19.904733775 +0000 UTC m=+0.149013427 container init 875ed4f71083f9908f59c7157b325bc7183489b2788ea8a807ab453a9c85ef5a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_wozniak, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, architecture=x86_64, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, io.buildah.version=1.42.2, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=)
Feb 20 09:10:19 np0005625203.localdomain podman[132144]: 2026-02-20 09:10:19.915692757 +0000 UTC m=+0.159972399 container start 875ed4f71083f9908f59c7157b325bc7183489b2788ea8a807ab453a9c85ef5a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_wozniak, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, vendor=Red Hat, Inc., version=7, release=1770267347, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, io.buildah.version=1.42.2, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, architecture=x86_64, RELEASE=main)
Feb 20 09:10:19 np0005625203.localdomain podman[132144]: 2026-02-20 09:10:19.916019457 +0000 UTC m=+0.160299129 container attach 875ed4f71083f9908f59c7157b325bc7183489b2788ea8a807ab453a9c85ef5a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_wozniak, org.opencontainers.image.created=2026-02-09T10:25:24Z, distribution-scope=public, RELEASE=main, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, name=rhceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.42.2, GIT_BRANCH=main, build-date=2026-02-09T10:25:24Z, ceph=True, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, GIT_CLEAN=True, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1770267347, version=7, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 09:10:20 np0005625203.localdomain sudo[132240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fdqbnwbidaimvsnzhdikzjvzlqtixiej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578619.8278115-610-213696789241188/AnsiballZ_file.py
Feb 20 09:10:20 np0005625203.localdomain sudo[132240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:10:20 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55091 DF PROTO=TCP SPT=33904 DPT=9100 SEQ=3982407775 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F374000000000001030307) 
Feb 20 09:10:20 np0005625203.localdomain python3.9[132242]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:10:20 np0005625203.localdomain sudo[132240]: pam_unix(sudo:session): session closed for user root
Feb 20 09:10:20 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-af1f3fbf2124f7b6900cdbd584b5c5f128e218256884b23248f9d1d69b1d008e-merged.mount: Deactivated successfully.
Feb 20 09:10:20 np0005625203.localdomain vigorous_wozniak[132186]: [
Feb 20 09:10:20 np0005625203.localdomain vigorous_wozniak[132186]:     {
Feb 20 09:10:20 np0005625203.localdomain vigorous_wozniak[132186]:         "available": false,
Feb 20 09:10:20 np0005625203.localdomain vigorous_wozniak[132186]:         "ceph_device": false,
Feb 20 09:10:20 np0005625203.localdomain vigorous_wozniak[132186]:         "device_id": "QEMU_DVD-ROM_QM00001",
Feb 20 09:10:20 np0005625203.localdomain vigorous_wozniak[132186]:         "lsm_data": {},
Feb 20 09:10:20 np0005625203.localdomain vigorous_wozniak[132186]:         "lvs": [],
Feb 20 09:10:20 np0005625203.localdomain vigorous_wozniak[132186]:         "path": "/dev/sr0",
Feb 20 09:10:20 np0005625203.localdomain vigorous_wozniak[132186]:         "rejected_reasons": [
Feb 20 09:10:20 np0005625203.localdomain vigorous_wozniak[132186]:             "Insufficient space (<5GB)",
Feb 20 09:10:20 np0005625203.localdomain vigorous_wozniak[132186]:             "Has a FileSystem"
Feb 20 09:10:20 np0005625203.localdomain vigorous_wozniak[132186]:         ],
Feb 20 09:10:20 np0005625203.localdomain vigorous_wozniak[132186]:         "sys_api": {
Feb 20 09:10:20 np0005625203.localdomain vigorous_wozniak[132186]:             "actuators": null,
Feb 20 09:10:20 np0005625203.localdomain vigorous_wozniak[132186]:             "device_nodes": "sr0",
Feb 20 09:10:20 np0005625203.localdomain vigorous_wozniak[132186]:             "human_readable_size": "482.00 KB",
Feb 20 09:10:20 np0005625203.localdomain vigorous_wozniak[132186]:             "id_bus": "ata",
Feb 20 09:10:20 np0005625203.localdomain vigorous_wozniak[132186]:             "model": "QEMU DVD-ROM",
Feb 20 09:10:20 np0005625203.localdomain vigorous_wozniak[132186]:             "nr_requests": "2",
Feb 20 09:10:20 np0005625203.localdomain vigorous_wozniak[132186]:             "partitions": {},
Feb 20 09:10:20 np0005625203.localdomain vigorous_wozniak[132186]:             "path": "/dev/sr0",
Feb 20 09:10:20 np0005625203.localdomain vigorous_wozniak[132186]:             "removable": "1",
Feb 20 09:10:20 np0005625203.localdomain vigorous_wozniak[132186]:             "rev": "2.5+",
Feb 20 09:10:20 np0005625203.localdomain vigorous_wozniak[132186]:             "ro": "0",
Feb 20 09:10:20 np0005625203.localdomain vigorous_wozniak[132186]:             "rotational": "1",
Feb 20 09:10:20 np0005625203.localdomain vigorous_wozniak[132186]:             "sas_address": "",
Feb 20 09:10:20 np0005625203.localdomain vigorous_wozniak[132186]:             "sas_device_handle": "",
Feb 20 09:10:20 np0005625203.localdomain vigorous_wozniak[132186]:             "scheduler_mode": "mq-deadline",
Feb 20 09:10:20 np0005625203.localdomain vigorous_wozniak[132186]:             "sectors": 0,
Feb 20 09:10:20 np0005625203.localdomain vigorous_wozniak[132186]:             "sectorsize": "2048",
Feb 20 09:10:20 np0005625203.localdomain vigorous_wozniak[132186]:             "size": 493568.0,
Feb 20 09:10:20 np0005625203.localdomain vigorous_wozniak[132186]:             "support_discard": "0",
Feb 20 09:10:20 np0005625203.localdomain vigorous_wozniak[132186]:             "type": "disk",
Feb 20 09:10:20 np0005625203.localdomain vigorous_wozniak[132186]:             "vendor": "QEMU"
Feb 20 09:10:20 np0005625203.localdomain vigorous_wozniak[132186]:         }
Feb 20 09:10:20 np0005625203.localdomain vigorous_wozniak[132186]:     }
Feb 20 09:10:20 np0005625203.localdomain vigorous_wozniak[132186]: ]
Feb 20 09:10:20 np0005625203.localdomain systemd[1]: libpod-875ed4f71083f9908f59c7157b325bc7183489b2788ea8a807ab453a9c85ef5a.scope: Deactivated successfully.
Feb 20 09:10:20 np0005625203.localdomain podman[133563]: 2026-02-20 09:10:20.857799395 +0000 UTC m=+0.050643560 container died 875ed4f71083f9908f59c7157b325bc7183489b2788ea8a807ab453a9c85ef5a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_wozniak, vcs-type=git, architecture=x86_64, ceph=True, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2026-02-09T10:25:24Z, RELEASE=main, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, com.redhat.component=rhceph-container, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, distribution-scope=public, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, vendor=Red Hat, Inc., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Feb 20 09:10:20 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-ca148f1bbc31a28ca62b08839ba26584137cd85a496000129d9dbd5b6342bcbb-merged.mount: Deactivated successfully.
Feb 20 09:10:20 np0005625203.localdomain podman[133563]: 2026-02-20 09:10:20.947986047 +0000 UTC m=+0.140830172 container remove 875ed4f71083f9908f59c7157b325bc7183489b2788ea8a807ab453a9c85ef5a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_wozniak, version=7, vcs-type=git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, release=1770267347, ceph=True, distribution-scope=public, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2026-02-09T10:25:24Z, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.buildah.version=1.42.2, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=)
Feb 20 09:10:20 np0005625203.localdomain systemd[1]: libpod-conmon-875ed4f71083f9908f59c7157b325bc7183489b2788ea8a807ab453a9c85ef5a.scope: Deactivated successfully.
Feb 20 09:10:20 np0005625203.localdomain sudo[131959]: pam_unix(sudo:session): session closed for user root
Feb 20 09:10:21 np0005625203.localdomain sudo[133607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hamqupynhwjexxiknetmvsxoshhwokuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578620.509569-634-52454778337029/AnsiballZ_edpm_os_net_config_mappings.py
Feb 20 09:10:21 np0005625203.localdomain sudo[133607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:10:21 np0005625203.localdomain python3.9[133609]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Feb 20 09:10:21 np0005625203.localdomain sudo[133607]: pam_unix(sudo:session): session closed for user root
Feb 20 09:10:21 np0005625203.localdomain sudo[133624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:10:21 np0005625203.localdomain sudo[133624]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:10:21 np0005625203.localdomain sudo[133624]: pam_unix(sudo:session): session closed for user root
Feb 20 09:10:21 np0005625203.localdomain sudo[133714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wbrijpihputmamlwfzqqgrnwtxzmesrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578621.5692775-661-246573529620229/AnsiballZ_file.py
Feb 20 09:10:21 np0005625203.localdomain sudo[133714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:10:22 np0005625203.localdomain python3.9[133716]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:10:22 np0005625203.localdomain sudo[133714]: pam_unix(sudo:session): session closed for user root
Feb 20 09:10:22 np0005625203.localdomain sudo[133806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cafgqkxnpqoiyznclafefvihbqugogzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578622.3331244-691-129392650109996/AnsiballZ_stat.py
Feb 20 09:10:22 np0005625203.localdomain sudo[133806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:10:22 np0005625203.localdomain python3.9[133808]: ansible-ansible.legacy.stat Invoked with path=/etc/os-net-config/config.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:10:22 np0005625203.localdomain sudo[133806]: pam_unix(sudo:session): session closed for user root
Feb 20 09:10:23 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12323 DF PROTO=TCP SPT=50784 DPT=9101 SEQ=819567956 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F37F800000000001030307) 
Feb 20 09:10:23 np0005625203.localdomain sudo[133879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jwylntqquywinrgscbpxxnlfxwobnjdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578622.3331244-691-129392650109996/AnsiballZ_copy.py
Feb 20 09:10:23 np0005625203.localdomain sudo[133879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:10:23 np0005625203.localdomain sshd[133882]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:10:23 np0005625203.localdomain sshd[133882]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:10:23 np0005625203.localdomain python3.9[133881]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/os-net-config/config.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771578622.3331244-691-129392650109996/.source.yaml _original_basename=.hi7ktpp4 follow=False checksum=06d744ebe702728c19f6d1a8f97158d086012058 force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:10:23 np0005625203.localdomain sudo[133879]: pam_unix(sudo:session): session closed for user root
Feb 20 09:10:23 np0005625203.localdomain sudo[133973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fxjctfugzbzjcbllyzzkdorultwcuwnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578623.5762734-736-115678914060435/AnsiballZ_slurp.py
Feb 20 09:10:23 np0005625203.localdomain sudo[133973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:10:24 np0005625203.localdomain python3.9[133975]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Feb 20 09:10:24 np0005625203.localdomain sudo[133973]: pam_unix(sudo:session): session closed for user root
Feb 20 09:10:25 np0005625203.localdomain sudo[134078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yvwinnsfkwacrduejyvhwvtwdhbaguna ; ANSIBLE_ASYNC_DIR='~/.ansible_async' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578624.570435-763-43337684055075/async_wrapper.py j224878116950 300 /home/zuul/.ansible/tmp/ansible-tmp-1771578624.570435-763-43337684055075/AnsiballZ_edpm_os_net_config.py _
Feb 20 09:10:25 np0005625203.localdomain sudo[134078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:10:25 np0005625203.localdomain ansible-async_wrapper.py[134080]: Invoked with j224878116950 300 /home/zuul/.ansible/tmp/ansible-tmp-1771578624.570435-763-43337684055075/AnsiballZ_edpm_os_net_config.py _
Feb 20 09:10:25 np0005625203.localdomain ansible-async_wrapper.py[134083]: Starting module and watcher
Feb 20 09:10:25 np0005625203.localdomain ansible-async_wrapper.py[134083]: Start watching 134084 (300)
Feb 20 09:10:25 np0005625203.localdomain ansible-async_wrapper.py[134084]: Start module (134084)
Feb 20 09:10:25 np0005625203.localdomain ansible-async_wrapper.py[134080]: Return async_wrapper task started.
Feb 20 09:10:25 np0005625203.localdomain sudo[134078]: pam_unix(sudo:session): session closed for user root
Feb 20 09:10:25 np0005625203.localdomain python3.9[134085]: ansible-edpm_os_net_config Invoked with cleanup=False config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True remove_config=False safe_defaults=False use_nmstate=False purge_provider=
Feb 20 09:10:26 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55093 DF PROTO=TCP SPT=33904 DPT=9100 SEQ=3982407775 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F38BC00000000001030307) 
Feb 20 09:10:26 np0005625203.localdomain ansible-async_wrapper.py[134084]: Module complete (134084)
Feb 20 09:10:28 np0005625203.localdomain sudo[134187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rztkdawgfggsgtonmntrgwjassvqjpoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578628.547483-763-83665116071128/AnsiballZ_async_status.py
Feb 20 09:10:28 np0005625203.localdomain sudo[134187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:10:29 np0005625203.localdomain python3.9[134189]: ansible-ansible.legacy.async_status Invoked with jid=j224878116950.134080 mode=status _async_dir=/root/.ansible_async
Feb 20 09:10:29 np0005625203.localdomain sudo[134187]: pam_unix(sudo:session): session closed for user root
Feb 20 09:10:29 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36399 DF PROTO=TCP SPT=50876 DPT=9102 SEQ=1298010890 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F397D50000000001030307) 
Feb 20 09:10:29 np0005625203.localdomain sudo[134246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pjmtqiczihdluworidtkaokbxnzfzrgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578628.547483-763-83665116071128/AnsiballZ_async_status.py
Feb 20 09:10:29 np0005625203.localdomain sudo[134246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:10:29 np0005625203.localdomain python3.9[134248]: ansible-ansible.legacy.async_status Invoked with jid=j224878116950.134080 mode=cleanup _async_dir=/root/.ansible_async
Feb 20 09:10:29 np0005625203.localdomain sudo[134246]: pam_unix(sudo:session): session closed for user root
Feb 20 09:10:30 np0005625203.localdomain ansible-async_wrapper.py[134083]: Done in kid B.
Feb 20 09:10:30 np0005625203.localdomain sudo[134338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ooyiuflvrnxudizhuofpycjmqiungwhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578630.4617689-829-224485897024237/AnsiballZ_stat.py
Feb 20 09:10:30 np0005625203.localdomain sudo[134338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:10:30 np0005625203.localdomain python3.9[134340]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:10:30 np0005625203.localdomain sudo[134338]: pam_unix(sudo:session): session closed for user root
Feb 20 09:10:31 np0005625203.localdomain sudo[134411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hxivlhqbzjsmorrljnwodmomnyabdgvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578630.4617689-829-224485897024237/AnsiballZ_copy.py
Feb 20 09:10:31 np0005625203.localdomain sudo[134411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:10:31 np0005625203.localdomain python3.9[134413]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771578630.4617689-829-224485897024237/.source.returncode _original_basename=.cl2nyjte follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:10:31 np0005625203.localdomain sudo[134411]: pam_unix(sudo:session): session closed for user root
Feb 20 09:10:31 np0005625203.localdomain sudo[134503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jejooyurpwnnzqbiygyrecevoumtbzcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578631.735612-877-17760395046233/AnsiballZ_stat.py
Feb 20 09:10:31 np0005625203.localdomain sudo[134503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:10:32 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19556 DF PROTO=TCP SPT=37292 DPT=9882 SEQ=2616524588 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F3A2C10000000001030307) 
Feb 20 09:10:32 np0005625203.localdomain python3.9[134505]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:10:32 np0005625203.localdomain sudo[134503]: pam_unix(sudo:session): session closed for user root
Feb 20 09:10:32 np0005625203.localdomain sudo[134576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aeoreyjemxnucdyykcflxtldpsdktakz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578631.735612-877-17760395046233/AnsiballZ_copy.py
Feb 20 09:10:32 np0005625203.localdomain sudo[134576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:10:32 np0005625203.localdomain python3.9[134578]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771578631.735612-877-17760395046233/.source.cfg _original_basename=.de3d4cpv follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:10:32 np0005625203.localdomain sudo[134576]: pam_unix(sudo:session): session closed for user root
Feb 20 09:10:33 np0005625203.localdomain sudo[134668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ohuqknsqjjoipacijrtgdxblubomymss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578633.038613-922-188333919085495/AnsiballZ_systemd.py
Feb 20 09:10:33 np0005625203.localdomain sudo[134668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:10:33 np0005625203.localdomain python3.9[134670]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 20 09:10:33 np0005625203.localdomain systemd[1]: Reloading Network Manager...
Feb 20 09:10:33 np0005625203.localdomain NetworkManager[5968]: <info>  [1771578633.6636] audit: op="reload" arg="0" pid=134674 uid=0 result="success"
Feb 20 09:10:33 np0005625203.localdomain NetworkManager[5968]: <info>  [1771578633.6642] config: signal: SIGHUP (no changes from disk)
Feb 20 09:10:33 np0005625203.localdomain systemd[1]: Reloaded Network Manager.
Feb 20 09:10:33 np0005625203.localdomain sudo[134668]: pam_unix(sudo:session): session closed for user root
Feb 20 09:10:34 np0005625203.localdomain sshd[129645]: pam_unix(sshd:session): session closed for user zuul
Feb 20 09:10:34 np0005625203.localdomain systemd-logind[759]: Session 40 logged out. Waiting for processes to exit.
Feb 20 09:10:34 np0005625203.localdomain systemd[1]: session-40.scope: Deactivated successfully.
Feb 20 09:10:34 np0005625203.localdomain systemd[1]: session-40.scope: Consumed 35.113s CPU time.
Feb 20 09:10:34 np0005625203.localdomain systemd-logind[759]: Removed session 40.
Feb 20 09:10:36 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19557 DF PROTO=TCP SPT=37292 DPT=9882 SEQ=2616524588 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F3B2800000000001030307) 
Feb 20 09:10:38 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40593 DF PROTO=TCP SPT=39148 DPT=9105 SEQ=103453842 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F3BA400000000001030307) 
Feb 20 09:10:39 np0005625203.localdomain sshd[134689]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:10:39 np0005625203.localdomain sshd[134689]: Accepted publickey for zuul from 192.168.122.30 port 57792 ssh2: RSA SHA256:uaxCfdsB9dLyL4znjsLCPDTT6O+6xZzjHRjczJFOA6k
Feb 20 09:10:39 np0005625203.localdomain systemd-logind[759]: New session 41 of user zuul.
Feb 20 09:10:39 np0005625203.localdomain systemd[1]: Started Session 41 of User zuul.
Feb 20 09:10:39 np0005625203.localdomain sshd[134689]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Feb 20 09:10:40 np0005625203.localdomain python3.9[134782]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 20 09:10:41 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61019 DF PROTO=TCP SPT=37094 DPT=9105 SEQ=831469297 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F3C6800000000001030307) 
Feb 20 09:10:41 np0005625203.localdomain python3.9[134876]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 20 09:10:44 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40595 DF PROTO=TCP SPT=39148 DPT=9105 SEQ=103453842 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F3D2010000000001030307) 
Feb 20 09:10:44 np0005625203.localdomain python3.9[135021]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:10:45 np0005625203.localdomain sshd[134689]: pam_unix(sshd:session): session closed for user zuul
Feb 20 09:10:45 np0005625203.localdomain systemd[1]: session-41.scope: Deactivated successfully.
Feb 20 09:10:45 np0005625203.localdomain systemd[1]: session-41.scope: Consumed 2.125s CPU time.
Feb 20 09:10:45 np0005625203.localdomain systemd-logind[759]: Session 41 logged out. Waiting for processes to exit.
Feb 20 09:10:45 np0005625203.localdomain systemd-logind[759]: Removed session 41.
Feb 20 09:10:47 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13403 DF PROTO=TCP SPT=60016 DPT=9101 SEQ=928388840 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F3DD000000000001030307) 
Feb 20 09:10:50 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56412 DF PROTO=TCP SPT=35366 DPT=9100 SEQ=1400215322 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F3E9400000000001030307) 
Feb 20 09:10:51 np0005625203.localdomain sshd[135037]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:10:51 np0005625203.localdomain sshd[135037]: Accepted publickey for zuul from 192.168.122.30 port 38448 ssh2: RSA SHA256:uaxCfdsB9dLyL4znjsLCPDTT6O+6xZzjHRjczJFOA6k
Feb 20 09:10:51 np0005625203.localdomain systemd-logind[759]: New session 42 of user zuul.
Feb 20 09:10:51 np0005625203.localdomain systemd[1]: Started Session 42 of User zuul.
Feb 20 09:10:51 np0005625203.localdomain sshd[135037]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Feb 20 09:10:52 np0005625203.localdomain python3.9[135130]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 20 09:10:53 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10343 DF PROTO=TCP SPT=42990 DPT=9100 SEQ=1240806069 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F3F4810000000001030307) 
Feb 20 09:10:53 np0005625203.localdomain python3.9[135224]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 20 09:10:54 np0005625203.localdomain sudo[135318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yjulvjaawmzzelwxdbeimafkniwpounb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578653.8270066-76-78337801629022/AnsiballZ_setup.py
Feb 20 09:10:54 np0005625203.localdomain sudo[135318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:10:54 np0005625203.localdomain python3.9[135320]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 20 09:10:54 np0005625203.localdomain sudo[135318]: pam_unix(sudo:session): session closed for user root
Feb 20 09:10:55 np0005625203.localdomain sudo[135372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wjfogkuyzbqfwksjxahluorzsxyhjihh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578653.8270066-76-78337801629022/AnsiballZ_dnf.py
Feb 20 09:10:55 np0005625203.localdomain sudo[135372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:10:55 np0005625203.localdomain python3.9[135374]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 20 09:10:56 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56414 DF PROTO=TCP SPT=35366 DPT=9100 SEQ=1400215322 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F401000000000001030307) 
Feb 20 09:10:58 np0005625203.localdomain sudo[135372]: pam_unix(sudo:session): session closed for user root
Feb 20 09:10:59 np0005625203.localdomain sudo[135466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rcllbjgjatqyptlniotktolkzndewvlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578658.956508-112-203566495733072/AnsiballZ_setup.py
Feb 20 09:10:59 np0005625203.localdomain sudo[135466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:10:59 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25935 DF PROTO=TCP SPT=38228 DPT=9102 SEQ=1591738217 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F40D050000000001030307) 
Feb 20 09:10:59 np0005625203.localdomain python3.9[135468]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 20 09:10:59 np0005625203.localdomain sudo[135466]: pam_unix(sudo:session): session closed for user root
Feb 20 09:11:00 np0005625203.localdomain sudo[135613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cmythsjheaafjtofdyhednwhrphyuytw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578660.220524-145-55469519036223/AnsiballZ_file.py
Feb 20 09:11:00 np0005625203.localdomain sudo[135613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:11:00 np0005625203.localdomain python3.9[135615]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:11:00 np0005625203.localdomain sudo[135613]: pam_unix(sudo:session): session closed for user root
Feb 20 09:11:01 np0005625203.localdomain sudo[135705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-afntsazulyeudhliqfptseacogazjnav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578661.0771604-169-249492545591461/AnsiballZ_command.py
Feb 20 09:11:01 np0005625203.localdomain sudo[135705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:11:01 np0005625203.localdomain python3.9[135707]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:11:01 np0005625203.localdomain sudo[135705]: pam_unix(sudo:session): session closed for user root
Feb 20 09:11:02 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9499 DF PROTO=TCP SPT=37430 DPT=9882 SEQ=2516251810 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F417C00000000001030307) 
Feb 20 09:11:02 np0005625203.localdomain sudo[135808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jsqgypvaaeebfatqgulwsiyefbdeposz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578662.0172486-193-206698362313884/AnsiballZ_stat.py
Feb 20 09:11:02 np0005625203.localdomain sudo[135808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:11:02 np0005625203.localdomain python3.9[135810]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:11:02 np0005625203.localdomain sudo[135808]: pam_unix(sudo:session): session closed for user root
Feb 20 09:11:02 np0005625203.localdomain sudo[135856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ivzhpiqnibwhrsooowrimsflccjugpuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578662.0172486-193-206698362313884/AnsiballZ_file.py
Feb 20 09:11:02 np0005625203.localdomain sudo[135856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:11:03 np0005625203.localdomain python3.9[135858]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:11:03 np0005625203.localdomain sudo[135856]: pam_unix(sudo:session): session closed for user root
Feb 20 09:11:03 np0005625203.localdomain sudo[135948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kcyrdgfhtfxaagmijhjkyucyanbtdjjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578663.369143-229-228657767116665/AnsiballZ_stat.py
Feb 20 09:11:03 np0005625203.localdomain sudo[135948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:11:04 np0005625203.localdomain python3.9[135950]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:11:04 np0005625203.localdomain sudo[135948]: pam_unix(sudo:session): session closed for user root
Feb 20 09:11:04 np0005625203.localdomain sudo[135996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uqhiaajnnhsxioyrigdaqkaetzdzowrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578663.369143-229-228657767116665/AnsiballZ_file.py
Feb 20 09:11:04 np0005625203.localdomain sudo[135996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:11:04 np0005625203.localdomain python3.9[135998]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:11:04 np0005625203.localdomain sudo[135996]: pam_unix(sudo:session): session closed for user root
Feb 20 09:11:05 np0005625203.localdomain sudo[136088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rbxjuibqbjwcxwocjwxtdmaqkimcbsqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578664.890299-268-86422572259538/AnsiballZ_ini_file.py
Feb 20 09:11:05 np0005625203.localdomain sudo[136088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:11:05 np0005625203.localdomain python3.9[136090]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:11:05 np0005625203.localdomain sudo[136088]: pam_unix(sudo:session): session closed for user root
Feb 20 09:11:06 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9500 DF PROTO=TCP SPT=37430 DPT=9882 SEQ=2516251810 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F427810000000001030307) 
Feb 20 09:11:06 np0005625203.localdomain sudo[136180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dhbjwcsgvcvchaqeupopquedtchcjmok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578665.9140835-268-99726621915445/AnsiballZ_ini_file.py
Feb 20 09:11:06 np0005625203.localdomain sudo[136180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:11:06 np0005625203.localdomain python3.9[136182]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:11:06 np0005625203.localdomain sudo[136180]: pam_unix(sudo:session): session closed for user root
Feb 20 09:11:06 np0005625203.localdomain sudo[136272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sviypwasimclzqmhqcnjxojjxghppswp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578666.5010371-268-60302323167128/AnsiballZ_ini_file.py
Feb 20 09:11:06 np0005625203.localdomain sudo[136272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:11:06 np0005625203.localdomain python3.9[136274]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:11:06 np0005625203.localdomain sudo[136272]: pam_unix(sudo:session): session closed for user root
Feb 20 09:11:07 np0005625203.localdomain sudo[136364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-skhoepsqnxadbaamvoggkbpmyjuhjisx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578667.0533783-268-6664964894211/AnsiballZ_ini_file.py
Feb 20 09:11:07 np0005625203.localdomain sudo[136364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:11:07 np0005625203.localdomain python3.9[136366]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:11:07 np0005625203.localdomain sudo[136364]: pam_unix(sudo:session): session closed for user root
Feb 20 09:11:08 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48201 DF PROTO=TCP SPT=51830 DPT=9105 SEQ=1266363376 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F42F400000000001030307) 
Feb 20 09:11:08 np0005625203.localdomain sudo[136456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dttlmzmfshdudmvattkznelealuaapgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578667.8432982-361-77599420744446/AnsiballZ_dnf.py
Feb 20 09:11:08 np0005625203.localdomain sudo[136456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:11:08 np0005625203.localdomain python3.9[136458]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 20 09:11:09 np0005625203.localdomain sshd[136461]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:11:09 np0005625203.localdomain sshd[136461]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:11:11 np0005625203.localdomain sudo[136456]: pam_unix(sudo:session): session closed for user root
Feb 20 09:11:12 np0005625203.localdomain sudo[136552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xeouzxpuiiofcowbgsmdiuaagrjkptwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578672.2334986-394-190147887016309/AnsiballZ_setup.py
Feb 20 09:11:12 np0005625203.localdomain sudo[136552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:11:12 np0005625203.localdomain python3.9[136554]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 20 09:11:12 np0005625203.localdomain sudo[136552]: pam_unix(sudo:session): session closed for user root
Feb 20 09:11:13 np0005625203.localdomain sudo[136646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-orpjoshfprvdoptoqfmcgvwwuvgcqmzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578673.0102415-418-74317803263855/AnsiballZ_stat.py
Feb 20 09:11:13 np0005625203.localdomain sudo[136646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:11:13 np0005625203.localdomain python3.9[136648]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:11:13 np0005625203.localdomain sudo[136646]: pam_unix(sudo:session): session closed for user root
Feb 20 09:11:14 np0005625203.localdomain sudo[136738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hcmliwubbemzzlzwjatjduxjcvkashvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578673.7139497-445-9258673910830/AnsiballZ_stat.py
Feb 20 09:11:14 np0005625203.localdomain sudo[136738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:11:14 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48203 DF PROTO=TCP SPT=51830 DPT=9105 SEQ=1266363376 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F447010000000001030307) 
Feb 20 09:11:14 np0005625203.localdomain python3.9[136740]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:11:14 np0005625203.localdomain sudo[136738]: pam_unix(sudo:session): session closed for user root
Feb 20 09:11:14 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9501 DF PROTO=TCP SPT=37430 DPT=9882 SEQ=2516251810 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F448810000000001030307) 
Feb 20 09:11:14 np0005625203.localdomain sudo[136830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jhlfpfzusqzutfvwxsqotizpcdfdqwbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578674.5874605-475-165235381114680/AnsiballZ_command.py
Feb 20 09:11:14 np0005625203.localdomain sudo[136830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:11:15 np0005625203.localdomain python3.9[136832]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:11:15 np0005625203.localdomain sudo[136830]: pam_unix(sudo:session): session closed for user root
Feb 20 09:11:15 np0005625203.localdomain sudo[136923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zmvjkfujphqchqjjiesuxloyinuwsjff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578675.3580315-505-113183970318940/AnsiballZ_service_facts.py
Feb 20 09:11:15 np0005625203.localdomain sudo[136923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:11:15 np0005625203.localdomain python3.9[136925]: ansible-service_facts Invoked
Feb 20 09:11:16 np0005625203.localdomain network[136942]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 20 09:11:16 np0005625203.localdomain network[136943]: 'network-scripts' will be removed from distribution in near future.
Feb 20 09:11:16 np0005625203.localdomain network[136944]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 20 09:11:17 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54091 DF PROTO=TCP SPT=33944 DPT=9101 SEQ=2165838472 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F452410000000001030307) 
Feb 20 09:11:17 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:11:19 np0005625203.localdomain sudo[136923]: pam_unix(sudo:session): session closed for user root
Feb 20 09:11:20 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64662 DF PROTO=TCP SPT=43422 DPT=9100 SEQ=2601941510 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F45E800000000001030307) 
Feb 20 09:11:20 np0005625203.localdomain sshd[137069]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:11:20 np0005625203.localdomain sshd[137069]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:11:21 np0005625203.localdomain sudo[137137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:11:21 np0005625203.localdomain sudo[137137]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:11:21 np0005625203.localdomain sudo[137137]: pam_unix(sudo:session): session closed for user root
Feb 20 09:11:21 np0005625203.localdomain sudo[137178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vveptmuqftdjwicjjyruqhiawrhcdelq ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1771578681.3203452-550-25266935544002/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1771578681.3203452-550-25266935544002/args
Feb 20 09:11:21 np0005625203.localdomain sudo[137178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:11:21 np0005625203.localdomain sudo[137172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 09:11:21 np0005625203.localdomain sudo[137172]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:11:21 np0005625203.localdomain sudo[137178]: pam_unix(sudo:session): session closed for user root
Feb 20 09:11:22 np0005625203.localdomain sudo[137316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mpjiotegihydxqkibtbryzcordzcwtav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578682.0247977-583-139563887159625/AnsiballZ_dnf.py
Feb 20 09:11:22 np0005625203.localdomain sudo[137316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:11:22 np0005625203.localdomain sudo[137172]: pam_unix(sudo:session): session closed for user root
Feb 20 09:11:22 np0005625203.localdomain python3.9[137326]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 20 09:11:23 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54093 DF PROTO=TCP SPT=33944 DPT=9101 SEQ=2165838472 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F46A010000000001030307) 
Feb 20 09:11:25 np0005625203.localdomain sudo[137316]: pam_unix(sudo:session): session closed for user root
Feb 20 09:11:25 np0005625203.localdomain sudo[137347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:11:25 np0005625203.localdomain sudo[137347]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:11:25 np0005625203.localdomain sudo[137347]: pam_unix(sudo:session): session closed for user root
Feb 20 09:11:26 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64664 DF PROTO=TCP SPT=43422 DPT=9100 SEQ=2601941510 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F476400000000001030307) 
Feb 20 09:11:26 np0005625203.localdomain sudo[137437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hbklbajwqypggwfghsrravmyijzjbspk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578686.3234725-622-127183669317777/AnsiballZ_package_facts.py
Feb 20 09:11:26 np0005625203.localdomain sudo[137437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:11:27 np0005625203.localdomain python3.9[137439]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Feb 20 09:11:27 np0005625203.localdomain sudo[137437]: pam_unix(sudo:session): session closed for user root
Feb 20 09:11:28 np0005625203.localdomain sudo[137529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-trwuxyjofqlvtqzedcoaqkwgglviyour ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578688.2209024-653-254849100627171/AnsiballZ_stat.py
Feb 20 09:11:28 np0005625203.localdomain sudo[137529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:11:28 np0005625203.localdomain python3.9[137531]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:11:28 np0005625203.localdomain sudo[137529]: pam_unix(sudo:session): session closed for user root
Feb 20 09:11:29 np0005625203.localdomain sudo[137604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xbnuzdgpxoxdprgfubjszrxiddamctqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578688.2209024-653-254849100627171/AnsiballZ_copy.py
Feb 20 09:11:29 np0005625203.localdomain sudo[137604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:11:29 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61274 DF PROTO=TCP SPT=50460 DPT=9102 SEQ=3053792029 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F482350000000001030307) 
Feb 20 09:11:29 np0005625203.localdomain python3.9[137606]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771578688.2209024-653-254849100627171/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:11:29 np0005625203.localdomain sudo[137604]: pam_unix(sudo:session): session closed for user root
Feb 20 09:11:30 np0005625203.localdomain sudo[137698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mlryoetnxwcpqssdgwkdxfkqybxgvahl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578689.7595274-698-233059806858764/AnsiballZ_stat.py
Feb 20 09:11:30 np0005625203.localdomain sudo[137698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:11:30 np0005625203.localdomain python3.9[137700]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:11:30 np0005625203.localdomain sudo[137698]: pam_unix(sudo:session): session closed for user root
Feb 20 09:11:30 np0005625203.localdomain sudo[137773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yqqrtzwkozgqykmtchlbvizpzqmtghey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578689.7595274-698-233059806858764/AnsiballZ_copy.py
Feb 20 09:11:30 np0005625203.localdomain sudo[137773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:11:30 np0005625203.localdomain python3.9[137775]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771578689.7595274-698-233059806858764/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:11:30 np0005625203.localdomain sudo[137773]: pam_unix(sudo:session): session closed for user root
Feb 20 09:11:32 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52430 DF PROTO=TCP SPT=50050 DPT=9882 SEQ=181705517 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F48D000000000001030307) 
Feb 20 09:11:32 np0005625203.localdomain sudo[137867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-egiojykwmqpzgfhmfetsnhmzcmjjyvcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578691.78294-761-187803834390000/AnsiballZ_lineinfile.py
Feb 20 09:11:32 np0005625203.localdomain sudo[137867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:11:32 np0005625203.localdomain python3.9[137869]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:11:32 np0005625203.localdomain sudo[137867]: pam_unix(sudo:session): session closed for user root
Feb 20 09:11:33 np0005625203.localdomain sudo[137961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tencotpxrdjkbttlmczkgtkzokxvwvuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578693.4427643-806-132922790255359/AnsiballZ_setup.py
Feb 20 09:11:33 np0005625203.localdomain sudo[137961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:11:34 np0005625203.localdomain python3.9[137963]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 20 09:11:34 np0005625203.localdomain sudo[137961]: pam_unix(sudo:session): session closed for user root
Feb 20 09:11:34 np0005625203.localdomain sudo[138015]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-azdnypqmqllivjprmlccbvhjlswyhnhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578693.4427643-806-132922790255359/AnsiballZ_systemd.py
Feb 20 09:11:34 np0005625203.localdomain sudo[138015]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:11:35 np0005625203.localdomain python3.9[138017]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:11:36 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52431 DF PROTO=TCP SPT=50050 DPT=9882 SEQ=181705517 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F49CC00000000001030307) 
Feb 20 09:11:36 np0005625203.localdomain sudo[138015]: pam_unix(sudo:session): session closed for user root
Feb 20 09:11:37 np0005625203.localdomain sudo[138109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lkywfiinyaaxycxxeozrnjhhmivwphmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578696.981441-853-50758632735478/AnsiballZ_setup.py
Feb 20 09:11:37 np0005625203.localdomain sudo[138109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:11:37 np0005625203.localdomain python3.9[138111]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 20 09:11:37 np0005625203.localdomain sudo[138109]: pam_unix(sudo:session): session closed for user root
Feb 20 09:11:38 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18033 DF PROTO=TCP SPT=34540 DPT=9105 SEQ=1797338293 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F4A4800000000001030307) 
Feb 20 09:11:38 np0005625203.localdomain sudo[138163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aztexreakhptuasyawazskubrpxurlbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578696.981441-853-50758632735478/AnsiballZ_systemd.py
Feb 20 09:11:38 np0005625203.localdomain sudo[138163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:11:38 np0005625203.localdomain python3.9[138165]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 20 09:11:38 np0005625203.localdomain chronyd[26395]: chronyd exiting
Feb 20 09:11:38 np0005625203.localdomain systemd[1]: Stopping NTP client/server...
Feb 20 09:11:38 np0005625203.localdomain systemd[1]: chronyd.service: Deactivated successfully.
Feb 20 09:11:38 np0005625203.localdomain systemd[1]: Stopped NTP client/server.
Feb 20 09:11:38 np0005625203.localdomain systemd[1]: Starting NTP client/server...
Feb 20 09:11:38 np0005625203.localdomain chronyd[138173]: chronyd version 4.3 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Feb 20 09:11:38 np0005625203.localdomain chronyd[138173]: Frequency -30.675 +/- 0.256 ppm read from /var/lib/chrony/drift
Feb 20 09:11:38 np0005625203.localdomain chronyd[138173]: Loaded seccomp filter (level 2)
Feb 20 09:11:38 np0005625203.localdomain systemd[1]: Started NTP client/server.
Feb 20 09:11:38 np0005625203.localdomain sudo[138163]: pam_unix(sudo:session): session closed for user root
Feb 20 09:11:38 np0005625203.localdomain sshd[135037]: pam_unix(sshd:session): session closed for user zuul
Feb 20 09:11:38 np0005625203.localdomain systemd[1]: session-42.scope: Deactivated successfully.
Feb 20 09:11:38 np0005625203.localdomain systemd[1]: session-42.scope: Consumed 28.662s CPU time.
Feb 20 09:11:38 np0005625203.localdomain systemd-logind[759]: Session 42 logged out. Waiting for processes to exit.
Feb 20 09:11:38 np0005625203.localdomain systemd-logind[759]: Removed session 42.
Feb 20 09:11:41 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40598 DF PROTO=TCP SPT=39148 DPT=9105 SEQ=103453842 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F4B0800000000001030307) 
Feb 20 09:11:44 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18035 DF PROTO=TCP SPT=34540 DPT=9105 SEQ=1797338293 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F4BC410000000001030307) 
Feb 20 09:11:44 np0005625203.localdomain sshd[138189]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:11:44 np0005625203.localdomain sshd[138189]: Accepted publickey for zuul from 192.168.122.30 port 56726 ssh2: RSA SHA256:uaxCfdsB9dLyL4znjsLCPDTT6O+6xZzjHRjczJFOA6k
Feb 20 09:11:44 np0005625203.localdomain systemd-logind[759]: New session 43 of user zuul.
Feb 20 09:11:44 np0005625203.localdomain systemd[1]: Started Session 43 of User zuul.
Feb 20 09:11:44 np0005625203.localdomain sshd[138189]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Feb 20 09:11:45 np0005625203.localdomain python3.9[138282]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 20 09:11:46 np0005625203.localdomain sudo[138376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bumxpxxxxojwirzpuymangnfqwzmryip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578706.1428127-55-206194323798654/AnsiballZ_file.py
Feb 20 09:11:46 np0005625203.localdomain sudo[138376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:11:46 np0005625203.localdomain python3.9[138378]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:11:46 np0005625203.localdomain sudo[138376]: pam_unix(sudo:session): session closed for user root
Feb 20 09:11:47 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29240 DF PROTO=TCP SPT=50142 DPT=9101 SEQ=2466999040 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F4C7800000000001030307) 
Feb 20 09:11:47 np0005625203.localdomain sudo[138481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jcnwwhwnqhxgfgnqmkazxwcxsaptagdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578706.9254668-79-111144298263844/AnsiballZ_stat.py
Feb 20 09:11:47 np0005625203.localdomain sudo[138481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:11:47 np0005625203.localdomain python3.9[138483]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:11:47 np0005625203.localdomain sudo[138481]: pam_unix(sudo:session): session closed for user root
Feb 20 09:11:47 np0005625203.localdomain sudo[138529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kiulxrzbwjsfmgagptophujbantgvuqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578706.9254668-79-111144298263844/AnsiballZ_file.py
Feb 20 09:11:47 np0005625203.localdomain sudo[138529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:11:47 np0005625203.localdomain python3.9[138531]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.70go9gx5 recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:11:48 np0005625203.localdomain sudo[138529]: pam_unix(sudo:session): session closed for user root
Feb 20 09:11:48 np0005625203.localdomain sudo[138621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ksxcrimftbwcljezohnyxcyvbyzdwrkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578708.4556372-139-32110706111982/AnsiballZ_stat.py
Feb 20 09:11:48 np0005625203.localdomain sudo[138621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:11:48 np0005625203.localdomain python3.9[138623]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:11:48 np0005625203.localdomain sudo[138621]: pam_unix(sudo:session): session closed for user root
Feb 20 09:11:49 np0005625203.localdomain sudo[138696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nphbtwjcshfxzdrrkvqcewrvrrxjweuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578708.4556372-139-32110706111982/AnsiballZ_copy.py
Feb 20 09:11:49 np0005625203.localdomain sudo[138696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:11:49 np0005625203.localdomain python3.9[138698]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771578708.4556372-139-32110706111982/.source _original_basename=.mksvge4a follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:11:49 np0005625203.localdomain sudo[138696]: pam_unix(sudo:session): session closed for user root
Feb 20 09:11:50 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11903 DF PROTO=TCP SPT=36948 DPT=9100 SEQ=102799012 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F4D3800000000001030307) 
Feb 20 09:11:50 np0005625203.localdomain sudo[138788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kohyfpbdusywfpdiqdxhqxhjmkuqtejx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578709.8781893-187-46551615347477/AnsiballZ_file.py
Feb 20 09:11:50 np0005625203.localdomain sudo[138788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:11:50 np0005625203.localdomain python3.9[138790]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:11:50 np0005625203.localdomain sudo[138788]: pam_unix(sudo:session): session closed for user root
Feb 20 09:11:50 np0005625203.localdomain sudo[138880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rpagwntzsbtabfyxvxvqpuacmueefmjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578710.5123236-211-223430266239454/AnsiballZ_stat.py
Feb 20 09:11:50 np0005625203.localdomain sudo[138880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:11:50 np0005625203.localdomain python3.9[138882]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:11:50 np0005625203.localdomain sudo[138880]: pam_unix(sudo:session): session closed for user root
Feb 20 09:11:51 np0005625203.localdomain sudo[138953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cstkhacfawdvntkitoqyxovgudhbhwib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578710.5123236-211-223430266239454/AnsiballZ_copy.py
Feb 20 09:11:51 np0005625203.localdomain sudo[138953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:11:51 np0005625203.localdomain python3.9[138955]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771578710.5123236-211-223430266239454/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:11:51 np0005625203.localdomain sudo[138953]: pam_unix(sudo:session): session closed for user root
Feb 20 09:11:51 np0005625203.localdomain sudo[139045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wcsjmqlwnubibcccwnhrzwfqnaxnaykl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578711.5635219-211-149733354026884/AnsiballZ_stat.py
Feb 20 09:11:51 np0005625203.localdomain sudo[139045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:11:52 np0005625203.localdomain python3.9[139047]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:11:52 np0005625203.localdomain sudo[139045]: pam_unix(sudo:session): session closed for user root
Feb 20 09:11:52 np0005625203.localdomain sudo[139118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zhwgilpgxslnypuuiwesuzalkagtqzps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578711.5635219-211-149733354026884/AnsiballZ_copy.py
Feb 20 09:11:52 np0005625203.localdomain sudo[139118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:11:52 np0005625203.localdomain python3.9[139120]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771578711.5635219-211-149733354026884/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:11:52 np0005625203.localdomain sudo[139118]: pam_unix(sudo:session): session closed for user root
Feb 20 09:11:53 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29242 DF PROTO=TCP SPT=50142 DPT=9101 SEQ=2466999040 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F4DF400000000001030307) 
Feb 20 09:11:53 np0005625203.localdomain sudo[139210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gdoxsgkixdtwphadfhwvdhjkvmycdhwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578712.9024415-298-188447726858383/AnsiballZ_file.py
Feb 20 09:11:53 np0005625203.localdomain sudo[139210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:11:53 np0005625203.localdomain python3.9[139212]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:11:53 np0005625203.localdomain sudo[139210]: pam_unix(sudo:session): session closed for user root
Feb 20 09:11:53 np0005625203.localdomain sudo[139302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bjtamtvttrnlyjjafdkwovatcsugxqjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578713.528878-322-118778614850105/AnsiballZ_stat.py
Feb 20 09:11:53 np0005625203.localdomain sudo[139302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:11:54 np0005625203.localdomain python3.9[139304]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:11:54 np0005625203.localdomain sudo[139302]: pam_unix(sudo:session): session closed for user root
Feb 20 09:11:54 np0005625203.localdomain sshd[139357]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:11:54 np0005625203.localdomain sudo[139377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qsqscoejfsflpifxtczlmiviglwzidso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578713.528878-322-118778614850105/AnsiballZ_copy.py
Feb 20 09:11:54 np0005625203.localdomain sudo[139377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:11:54 np0005625203.localdomain sshd[139357]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:11:54 np0005625203.localdomain python3.9[139379]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771578713.528878-322-118778614850105/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:11:54 np0005625203.localdomain sudo[139377]: pam_unix(sudo:session): session closed for user root
Feb 20 09:11:55 np0005625203.localdomain sudo[139469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ilonbowsfiypyewkpsamsmetpwjjtylm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578714.7908237-367-226152449809592/AnsiballZ_stat.py
Feb 20 09:11:55 np0005625203.localdomain sudo[139469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:11:55 np0005625203.localdomain python3.9[139471]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:11:55 np0005625203.localdomain sudo[139469]: pam_unix(sudo:session): session closed for user root
Feb 20 09:11:55 np0005625203.localdomain sudo[139542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-svdrfzshpcqgauludjqyjlvvsdssqnrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578714.7908237-367-226152449809592/AnsiballZ_copy.py
Feb 20 09:11:55 np0005625203.localdomain sudo[139542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:11:55 np0005625203.localdomain python3.9[139544]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771578714.7908237-367-226152449809592/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:11:55 np0005625203.localdomain sudo[139542]: pam_unix(sudo:session): session closed for user root
Feb 20 09:11:56 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11905 DF PROTO=TCP SPT=36948 DPT=9100 SEQ=102799012 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F4EB400000000001030307) 
Feb 20 09:11:56 np0005625203.localdomain sudo[139634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ffpkeydorvquvikuaejclzlvmgersvbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578716.0754328-412-263488729214711/AnsiballZ_systemd.py
Feb 20 09:11:56 np0005625203.localdomain sudo[139634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:11:56 np0005625203.localdomain python3.9[139636]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:11:56 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:11:57 np0005625203.localdomain systemd-rc-local-generator[139659]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:11:57 np0005625203.localdomain systemd-sysv-generator[139664]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:11:57 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:11:57 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:11:57 np0005625203.localdomain systemd-rc-local-generator[139699]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:11:57 np0005625203.localdomain systemd-sysv-generator[139702]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:11:57 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:11:57 np0005625203.localdomain systemd[1]: Starting EDPM Container Shutdown...
Feb 20 09:11:57 np0005625203.localdomain systemd[1]: Finished EDPM Container Shutdown.
Feb 20 09:11:57 np0005625203.localdomain sudo[139634]: pam_unix(sudo:session): session closed for user root
Feb 20 09:11:58 np0005625203.localdomain sudo[139803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-geltsspatgtprewxeyzczhoftrmsjqga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578717.795789-436-73194552798185/AnsiballZ_stat.py
Feb 20 09:11:58 np0005625203.localdomain sudo[139803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:11:58 np0005625203.localdomain python3.9[139805]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:11:58 np0005625203.localdomain sudo[139803]: pam_unix(sudo:session): session closed for user root
Feb 20 09:11:58 np0005625203.localdomain sudo[139876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jmbtufbrvavyeuazyxfhgrfnvkyjxtgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578717.795789-436-73194552798185/AnsiballZ_copy.py
Feb 20 09:11:58 np0005625203.localdomain sudo[139876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:11:58 np0005625203.localdomain python3.9[139878]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771578717.795789-436-73194552798185/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:11:58 np0005625203.localdomain sudo[139876]: pam_unix(sudo:session): session closed for user root
Feb 20 09:11:59 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5387 DF PROTO=TCP SPT=34866 DPT=9102 SEQ=1174776284 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F4F7650000000001030307) 
Feb 20 09:11:59 np0005625203.localdomain sudo[139968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-psfxvxtbkrzmaciwjrvahivipanpydml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578719.0783498-481-193233824233250/AnsiballZ_stat.py
Feb 20 09:11:59 np0005625203.localdomain sudo[139968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:11:59 np0005625203.localdomain python3.9[139970]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:11:59 np0005625203.localdomain sudo[139968]: pam_unix(sudo:session): session closed for user root
Feb 20 09:11:59 np0005625203.localdomain sudo[140041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jejjcnwjsxmfehdfqoeilskhchldyqhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578719.0783498-481-193233824233250/AnsiballZ_copy.py
Feb 20 09:11:59 np0005625203.localdomain sudo[140041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:12:00 np0005625203.localdomain python3.9[140043]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771578719.0783498-481-193233824233250/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:12:00 np0005625203.localdomain sudo[140041]: pam_unix(sudo:session): session closed for user root
Feb 20 09:12:00 np0005625203.localdomain sudo[140133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-arqhgpjuwqtsjdejsresjglnbzhahgdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578720.2476387-526-78566258535302/AnsiballZ_systemd.py
Feb 20 09:12:00 np0005625203.localdomain sudo[140133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:12:00 np0005625203.localdomain python3.9[140135]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:12:00 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:12:00 np0005625203.localdomain systemd-rc-local-generator[140159]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:12:00 np0005625203.localdomain systemd-sysv-generator[140165]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:12:00 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:12:01 np0005625203.localdomain systemd[1]: Starting Create netns directory...
Feb 20 09:12:01 np0005625203.localdomain systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Feb 20 09:12:01 np0005625203.localdomain systemd[1]: netns-placeholder.service: Deactivated successfully.
Feb 20 09:12:01 np0005625203.localdomain systemd[1]: Finished Create netns directory.
Feb 20 09:12:01 np0005625203.localdomain sudo[140133]: pam_unix(sudo:session): session closed for user root
Feb 20 09:12:02 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46499 DF PROTO=TCP SPT=52354 DPT=9882 SEQ=1573805171 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F502410000000001030307) 
Feb 20 09:12:02 np0005625203.localdomain python3.9[140267]: ansible-ansible.builtin.service_facts Invoked
Feb 20 09:12:03 np0005625203.localdomain network[140284]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 20 09:12:03 np0005625203.localdomain network[140285]: 'network-scripts' will be removed from distribution in near future.
Feb 20 09:12:03 np0005625203.localdomain network[140286]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 20 09:12:04 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:12:06 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46500 DF PROTO=TCP SPT=52354 DPT=9882 SEQ=1573805171 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F512010000000001030307) 
Feb 20 09:12:07 np0005625203.localdomain sudo[140487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qboiqihsftzwlbwabuhvpshduvdqlnwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578727.580611-604-69428084926060/AnsiballZ_stat.py
Feb 20 09:12:07 np0005625203.localdomain sudo[140487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:12:08 np0005625203.localdomain python3.9[140489]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:12:08 np0005625203.localdomain sudo[140487]: pam_unix(sudo:session): session closed for user root
Feb 20 09:12:08 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60316 DF PROTO=TCP SPT=34360 DPT=9105 SEQ=4052389491 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F519C10000000001030307) 
Feb 20 09:12:08 np0005625203.localdomain sudo[140562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mrudududgozvzrbttesqirjyeodopqmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578727.580611-604-69428084926060/AnsiballZ_copy.py
Feb 20 09:12:08 np0005625203.localdomain sudo[140562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:12:08 np0005625203.localdomain python3.9[140564]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1771578727.580611-604-69428084926060/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:12:08 np0005625203.localdomain sudo[140562]: pam_unix(sudo:session): session closed for user root
Feb 20 09:12:09 np0005625203.localdomain sudo[140655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vbltzdipbtbryeahoxloqmclahdtktgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578728.9100502-649-110532304035490/AnsiballZ_systemd.py
Feb 20 09:12:09 np0005625203.localdomain sudo[140655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:12:09 np0005625203.localdomain python3.9[140657]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 20 09:12:09 np0005625203.localdomain systemd[1]: Reloading OpenSSH server daemon...
Feb 20 09:12:09 np0005625203.localdomain systemd[1]: Reloaded OpenSSH server daemon.
Feb 20 09:12:09 np0005625203.localdomain sshd[120046]: Received SIGHUP; restarting.
Feb 20 09:12:09 np0005625203.localdomain sshd[120046]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:12:09 np0005625203.localdomain sshd[120046]: Server listening on 0.0.0.0 port 22.
Feb 20 09:12:09 np0005625203.localdomain sshd[120046]: Server listening on :: port 22.
Feb 20 09:12:09 np0005625203.localdomain sudo[140655]: pam_unix(sudo:session): session closed for user root
Feb 20 09:12:10 np0005625203.localdomain sudo[140751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sxkzlquiadwwdlcmvwdbqsrnjvpjppet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578729.7721434-673-167134243130403/AnsiballZ_file.py
Feb 20 09:12:10 np0005625203.localdomain sudo[140751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:12:10 np0005625203.localdomain python3.9[140753]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:12:10 np0005625203.localdomain sudo[140751]: pam_unix(sudo:session): session closed for user root
Feb 20 09:12:10 np0005625203.localdomain sudo[140843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qdlshqjetvdlhmmsrllspzuczxkhexqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578730.4886441-698-211870671547148/AnsiballZ_stat.py
Feb 20 09:12:10 np0005625203.localdomain sudo[140843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:12:10 np0005625203.localdomain python3.9[140845]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:12:10 np0005625203.localdomain sudo[140843]: pam_unix(sudo:session): session closed for user root
Feb 20 09:12:11 np0005625203.localdomain sudo[140916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gjgblhabhyheewwotinxdrpktvyxirac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578730.4886441-698-211870671547148/AnsiballZ_copy.py
Feb 20 09:12:11 np0005625203.localdomain sudo[140916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:12:11 np0005625203.localdomain python3.9[140918]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771578730.4886441-698-211870671547148/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:12:11 np0005625203.localdomain sudo[140916]: pam_unix(sudo:session): session closed for user root
Feb 20 09:12:12 np0005625203.localdomain sudo[141008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ydllriyiftccfmcdwxlrlbzuxgrdvxvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578731.8255696-751-57632035696791/AnsiballZ_timezone.py
Feb 20 09:12:12 np0005625203.localdomain sudo[141008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:12:12 np0005625203.localdomain python3.9[141010]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Feb 20 09:12:12 np0005625203.localdomain systemd[1]: Starting Time & Date Service...
Feb 20 09:12:12 np0005625203.localdomain systemd[1]: Started Time & Date Service.
Feb 20 09:12:12 np0005625203.localdomain sudo[141008]: pam_unix(sudo:session): session closed for user root
Feb 20 09:12:13 np0005625203.localdomain sudo[141104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wsebdtnbyjvgrfrnvavjrrjdrsjmvskw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578732.8576498-778-230526952693191/AnsiballZ_file.py
Feb 20 09:12:13 np0005625203.localdomain sudo[141104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:12:13 np0005625203.localdomain python3.9[141106]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:12:13 np0005625203.localdomain sudo[141104]: pam_unix(sudo:session): session closed for user root
Feb 20 09:12:13 np0005625203.localdomain sudo[141196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vwjynxwmxxrbnlirkxtxpsrchvxxdofj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578733.4863625-802-136866009323290/AnsiballZ_stat.py
Feb 20 09:12:13 np0005625203.localdomain sudo[141196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:12:13 np0005625203.localdomain python3.9[141198]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:12:13 np0005625203.localdomain sudo[141196]: pam_unix(sudo:session): session closed for user root
Feb 20 09:12:14 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60318 DF PROTO=TCP SPT=34360 DPT=9105 SEQ=4052389491 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F531810000000001030307) 
Feb 20 09:12:14 np0005625203.localdomain sudo[141269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rmtjdrorhiqnfsorsieikxouylsmyels ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578733.4863625-802-136866009323290/AnsiballZ_copy.py
Feb 20 09:12:14 np0005625203.localdomain sudo[141269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:12:14 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46501 DF PROTO=TCP SPT=52354 DPT=9882 SEQ=1573805171 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F532810000000001030307) 
Feb 20 09:12:14 np0005625203.localdomain python3.9[141271]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771578733.4863625-802-136866009323290/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:12:14 np0005625203.localdomain sudo[141269]: pam_unix(sudo:session): session closed for user root
Feb 20 09:12:14 np0005625203.localdomain sudo[141361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dqtlguymokzaoruzdhaheiiikqmriluk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578734.674458-847-213102464560828/AnsiballZ_stat.py
Feb 20 09:12:14 np0005625203.localdomain sudo[141361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:12:15 np0005625203.localdomain python3.9[141363]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:12:15 np0005625203.localdomain sudo[141361]: pam_unix(sudo:session): session closed for user root
Feb 20 09:12:15 np0005625203.localdomain sudo[141434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pgcobsmegsusndwhmphhsgarwwjeppfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578734.674458-847-213102464560828/AnsiballZ_copy.py
Feb 20 09:12:15 np0005625203.localdomain sudo[141434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:12:15 np0005625203.localdomain python3.9[141436]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771578734.674458-847-213102464560828/.source.yaml _original_basename=.ooc89e0u follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:12:15 np0005625203.localdomain sudo[141434]: pam_unix(sudo:session): session closed for user root
Feb 20 09:12:16 np0005625203.localdomain sudo[141526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wipoapidhxtcizppgjtjydvicbwwjelz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578735.9083846-893-64965727474273/AnsiballZ_stat.py
Feb 20 09:12:16 np0005625203.localdomain sudo[141526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:12:16 np0005625203.localdomain python3.9[141528]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:12:16 np0005625203.localdomain sudo[141526]: pam_unix(sudo:session): session closed for user root
Feb 20 09:12:16 np0005625203.localdomain sudo[141601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jzkvsbqriqhtqxwiramgsewldqbyusli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578735.9083846-893-64965727474273/AnsiballZ_copy.py
Feb 20 09:12:16 np0005625203.localdomain sudo[141601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:12:16 np0005625203.localdomain python3.9[141603]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771578735.9083846-893-64965727474273/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:12:16 np0005625203.localdomain sudo[141601]: pam_unix(sudo:session): session closed for user root
Feb 20 09:12:16 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34084 DF PROTO=TCP SPT=60540 DPT=9101 SEQ=1358584751 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F53C810000000001030307) 
Feb 20 09:12:17 np0005625203.localdomain sudo[141693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gzobximyfhbsqtjptbzutbepsblejshy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578737.174195-937-66245931388696/AnsiballZ_command.py
Feb 20 09:12:17 np0005625203.localdomain sudo[141693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:12:17 np0005625203.localdomain python3.9[141695]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:12:17 np0005625203.localdomain sudo[141693]: pam_unix(sudo:session): session closed for user root
Feb 20 09:12:18 np0005625203.localdomain sudo[141786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vzspomphxxwryrtcguhzywwxkraqlcup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578738.002356-961-277417284636656/AnsiballZ_command.py
Feb 20 09:12:18 np0005625203.localdomain sudo[141786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:12:18 np0005625203.localdomain python3.9[141788]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:12:18 np0005625203.localdomain sudo[141786]: pam_unix(sudo:session): session closed for user root
Feb 20 09:12:19 np0005625203.localdomain sudo[141879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zgxaqvhrgwscaiaasmdmefnclvxydala ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771578738.7482445-985-231784975393300/AnsiballZ_edpm_nftables_from_files.py
Feb 20 09:12:19 np0005625203.localdomain sudo[141879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:12:19 np0005625203.localdomain python3[141881]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Feb 20 09:12:19 np0005625203.localdomain sudo[141879]: pam_unix(sudo:session): session closed for user root
Feb 20 09:12:20 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54096 DF PROTO=TCP SPT=33944 DPT=9101 SEQ=2165838472 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F548810000000001030307) 
Feb 20 09:12:20 np0005625203.localdomain sudo[141971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cmorbrinbqfautdbohpsxrrfpjkfqhyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578740.0010939-1009-266435779823094/AnsiballZ_stat.py
Feb 20 09:12:20 np0005625203.localdomain sudo[141971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:12:20 np0005625203.localdomain python3.9[141973]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:12:20 np0005625203.localdomain sudo[141971]: pam_unix(sudo:session): session closed for user root
Feb 20 09:12:21 np0005625203.localdomain sudo[142044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hfrrazxbxhveynozusojjfjjbedhlfsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578740.0010939-1009-266435779823094/AnsiballZ_copy.py
Feb 20 09:12:21 np0005625203.localdomain sudo[142044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:12:21 np0005625203.localdomain python3.9[142046]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771578740.0010939-1009-266435779823094/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:12:21 np0005625203.localdomain sudo[142044]: pam_unix(sudo:session): session closed for user root
Feb 20 09:12:21 np0005625203.localdomain sudo[142136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ilebkeiuryoavmyelioovfrjelkxtdqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578741.5073225-1054-140455582249691/AnsiballZ_stat.py
Feb 20 09:12:21 np0005625203.localdomain sudo[142136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:12:21 np0005625203.localdomain python3.9[142138]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:12:22 np0005625203.localdomain sudo[142136]: pam_unix(sudo:session): session closed for user root
Feb 20 09:12:22 np0005625203.localdomain sudo[142209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gaoqggofggfnzvonwdibpkanqizfcnuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578741.5073225-1054-140455582249691/AnsiballZ_copy.py
Feb 20 09:12:22 np0005625203.localdomain sudo[142209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:12:22 np0005625203.localdomain python3.9[142211]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771578741.5073225-1054-140455582249691/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:12:22 np0005625203.localdomain sudo[142209]: pam_unix(sudo:session): session closed for user root
Feb 20 09:12:22 np0005625203.localdomain sudo[142301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tagkyzqpunwrtzihynjhrwyeqrrokarf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578742.7255225-1099-135783563962717/AnsiballZ_stat.py
Feb 20 09:12:22 np0005625203.localdomain sudo[142301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:12:23 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64667 DF PROTO=TCP SPT=43422 DPT=9100 SEQ=2601941510 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F554800000000001030307) 
Feb 20 09:12:23 np0005625203.localdomain python3.9[142303]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:12:23 np0005625203.localdomain sudo[142301]: pam_unix(sudo:session): session closed for user root
Feb 20 09:12:23 np0005625203.localdomain sudo[142374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ipuydzrvduzjiyfjybnkusgqvebarglk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578742.7255225-1099-135783563962717/AnsiballZ_copy.py
Feb 20 09:12:23 np0005625203.localdomain sudo[142374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:12:23 np0005625203.localdomain python3.9[142376]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771578742.7255225-1099-135783563962717/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:12:23 np0005625203.localdomain sudo[142374]: pam_unix(sudo:session): session closed for user root
Feb 20 09:12:24 np0005625203.localdomain sudo[142466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lwsngujsorzdvahmmbxdtujeewifjqfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578743.8704088-1144-183517877125546/AnsiballZ_stat.py
Feb 20 09:12:24 np0005625203.localdomain sudo[142466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:12:24 np0005625203.localdomain python3.9[142468]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:12:24 np0005625203.localdomain sudo[142466]: pam_unix(sudo:session): session closed for user root
Feb 20 09:12:24 np0005625203.localdomain sudo[142539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qslyuyelsqgjrevvaqwmlsehwgkzvujn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578743.8704088-1144-183517877125546/AnsiballZ_copy.py
Feb 20 09:12:24 np0005625203.localdomain sudo[142539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:12:24 np0005625203.localdomain python3.9[142541]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771578743.8704088-1144-183517877125546/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:12:24 np0005625203.localdomain sudo[142539]: pam_unix(sudo:session): session closed for user root
Feb 20 09:12:25 np0005625203.localdomain sudo[142631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xarjcnudnsnqzqsicxvtfxrimeorqsdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578745.073518-1189-19867764102138/AnsiballZ_stat.py
Feb 20 09:12:25 np0005625203.localdomain sudo[142631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:12:25 np0005625203.localdomain python3.9[142633]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:12:25 np0005625203.localdomain sudo[142631]: pam_unix(sudo:session): session closed for user root
Feb 20 09:12:25 np0005625203.localdomain sudo[142704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-comkitrjzatsnpdkibewegnudwiuafjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578745.073518-1189-19867764102138/AnsiballZ_copy.py
Feb 20 09:12:25 np0005625203.localdomain sudo[142704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:12:25 np0005625203.localdomain sudo[142706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:12:26 np0005625203.localdomain sudo[142706]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:12:26 np0005625203.localdomain sudo[142706]: pam_unix(sudo:session): session closed for user root
Feb 20 09:12:26 np0005625203.localdomain sudo[142722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 check-host
Feb 20 09:12:26 np0005625203.localdomain sudo[142722]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:12:26 np0005625203.localdomain python3.9[142716]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771578745.073518-1189-19867764102138/.source.nft follow=False _original_basename=ruleset.j2 checksum=15a82a0dc61abfd6aa593407582b5b950437eb80 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:12:26 np0005625203.localdomain sudo[142704]: pam_unix(sudo:session): session closed for user root
Feb 20 09:12:26 np0005625203.localdomain sudo[142722]: pam_unix(sudo:session): session closed for user root
Feb 20 09:12:26 np0005625203.localdomain sudo[142830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:12:26 np0005625203.localdomain sudo[142830]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:12:26 np0005625203.localdomain sudo[142830]: pam_unix(sudo:session): session closed for user root
Feb 20 09:12:26 np0005625203.localdomain sudo[142859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jjpcohmeuzlfhomwigsnlxclbscpmjfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578746.4052994-1234-164794758907387/AnsiballZ_file.py
Feb 20 09:12:26 np0005625203.localdomain sudo[142859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:12:26 np0005625203.localdomain sudo[142864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 09:12:26 np0005625203.localdomain sudo[142864]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:12:26 np0005625203.localdomain python3.9[142863]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:12:26 np0005625203.localdomain sudo[142859]: pam_unix(sudo:session): session closed for user root
Feb 20 09:12:27 np0005625203.localdomain sudo[142987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-asccfgyfahuwdfhlcphrzvwokoxmeizr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578747.0945752-1258-252466410916419/AnsiballZ_command.py
Feb 20 09:12:27 np0005625203.localdomain sudo[142987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:12:27 np0005625203.localdomain sudo[142864]: pam_unix(sudo:session): session closed for user root
Feb 20 09:12:27 np0005625203.localdomain python3.9[142995]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:12:27 np0005625203.localdomain sudo[142987]: pam_unix(sudo:session): session closed for user root
Feb 20 09:12:28 np0005625203.localdomain sudo[143094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rdazvulgjwxspabmdbbsmwkutknrygfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578747.7262208-1282-103329490884404/AnsiballZ_blockinfile.py
Feb 20 09:12:28 np0005625203.localdomain sudo[143094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:12:28 np0005625203.localdomain python3.9[143096]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                                            include "/etc/nftables/edpm-chains.nft"
                                                            include "/etc/nftables/edpm-rules.nft"
                                                            include "/etc/nftables/edpm-jumps.nft"
                                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:12:28 np0005625203.localdomain sudo[143094]: pam_unix(sudo:session): session closed for user root
Feb 20 09:12:28 np0005625203.localdomain sudo[143112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:12:28 np0005625203.localdomain sudo[143112]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:12:28 np0005625203.localdomain sudo[143112]: pam_unix(sudo:session): session closed for user root
Feb 20 09:12:28 np0005625203.localdomain sudo[143202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ddvpsxbcqumsnqbdoxoktklyxeelmflp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578748.6142204-1309-10051321886111/AnsiballZ_file.py
Feb 20 09:12:28 np0005625203.localdomain sudo[143202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:12:28 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35665 DF PROTO=TCP SPT=40350 DPT=9882 SEQ=1705264573 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F56B5C0000000001030307) 
Feb 20 09:12:29 np0005625203.localdomain python3.9[143204]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:12:29 np0005625203.localdomain sudo[143202]: pam_unix(sudo:session): session closed for user root
Feb 20 09:12:29 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20328 DF PROTO=TCP SPT=35958 DPT=9102 SEQ=1897074498 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F56C950000000001030307) 
Feb 20 09:12:29 np0005625203.localdomain sudo[143294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xakvmdavgvlwkekyhdissjgukzjbptbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578749.1746864-1309-163062491000447/AnsiballZ_file.py
Feb 20 09:12:29 np0005625203.localdomain sudo[143294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:12:29 np0005625203.localdomain sshd[143297]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:12:29 np0005625203.localdomain python3.9[143296]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:12:29 np0005625203.localdomain sudo[143294]: pam_unix(sudo:session): session closed for user root
Feb 20 09:12:30 np0005625203.localdomain sudo[143388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xteszrmehiiobfotcxtevjxvydxgbxxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578749.9900925-1354-275251563502712/AnsiballZ_mount.py
Feb 20 09:12:30 np0005625203.localdomain sudo[143388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:12:30 np0005625203.localdomain python3.9[143390]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Feb 20 09:12:30 np0005625203.localdomain sudo[143388]: pam_unix(sudo:session): session closed for user root
Feb 20 09:12:31 np0005625203.localdomain sudo[143481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-olmedovidfkkyfuskpuzajojrwpajtap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578750.764899-1354-131042304262473/AnsiballZ_mount.py
Feb 20 09:12:31 np0005625203.localdomain sudo[143481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:12:31 np0005625203.localdomain sshd[143297]: Received disconnect from 103.61.123.132 port 50496:11: Bye Bye [preauth]
Feb 20 09:12:31 np0005625203.localdomain sshd[143297]: Disconnected from authenticating user root 103.61.123.132 port 50496 [preauth]
Feb 20 09:12:31 np0005625203.localdomain python3.9[143483]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Feb 20 09:12:31 np0005625203.localdomain sudo[143481]: pam_unix(sudo:session): session closed for user root
Feb 20 09:12:31 np0005625203.localdomain sshd[138189]: pam_unix(sshd:session): session closed for user zuul
Feb 20 09:12:31 np0005625203.localdomain systemd[1]: session-43.scope: Deactivated successfully.
Feb 20 09:12:31 np0005625203.localdomain systemd[1]: session-43.scope: Consumed 27.992s CPU time.
Feb 20 09:12:31 np0005625203.localdomain systemd-logind[759]: Session 43 logged out. Waiting for processes to exit.
Feb 20 09:12:31 np0005625203.localdomain systemd-logind[759]: Removed session 43.
Feb 20 09:12:32 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52434 DF PROTO=TCP SPT=50050 DPT=9882 SEQ=181705517 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F57A800000000001030307) 
Feb 20 09:12:34 np0005625203.localdomain sshd[143499]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:12:34 np0005625203.localdomain sshd[143499]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:12:35 np0005625203.localdomain sshd[143501]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:12:36 np0005625203.localdomain sshd[143501]: Invalid user develop from 194.107.115.2 port 19532
Feb 20 09:12:36 np0005625203.localdomain sshd[143501]: Received disconnect from 194.107.115.2 port 19532:11: Bye Bye [preauth]
Feb 20 09:12:36 np0005625203.localdomain sshd[143501]: Disconnected from invalid user develop 194.107.115.2 port 19532 [preauth]
Feb 20 09:12:36 np0005625203.localdomain sshd[143503]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:12:37 np0005625203.localdomain sshd[143503]: Accepted publickey for zuul from 192.168.122.30 port 41072 ssh2: RSA SHA256:uaxCfdsB9dLyL4znjsLCPDTT6O+6xZzjHRjczJFOA6k
Feb 20 09:12:37 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27670 DF PROTO=TCP SPT=32876 DPT=9105 SEQ=3642097321 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F58ADA0000000001030307) 
Feb 20 09:12:37 np0005625203.localdomain systemd-logind[759]: New session 44 of user zuul.
Feb 20 09:12:37 np0005625203.localdomain systemd[1]: Started Session 44 of User zuul.
Feb 20 09:12:37 np0005625203.localdomain sshd[143503]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Feb 20 09:12:37 np0005625203.localdomain sudo[143596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-judkhbraatwlqijgbwoakpyvhvwnwyrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578757.1568868-22-105755359448548/AnsiballZ_tempfile.py
Feb 20 09:12:37 np0005625203.localdomain sudo[143596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:12:37 np0005625203.localdomain python3.9[143598]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Feb 20 09:12:37 np0005625203.localdomain sudo[143596]: pam_unix(sudo:session): session closed for user root
Feb 20 09:12:38 np0005625203.localdomain sshd[143613]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:12:38 np0005625203.localdomain sshd[143613]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:12:39 np0005625203.localdomain sudo[143690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-upldgcwcxfxaunfsmpvlhbsxfrfjcqpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578758.7008815-94-77245839330239/AnsiballZ_stat.py
Feb 20 09:12:39 np0005625203.localdomain sudo[143690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:12:39 np0005625203.localdomain python3.9[143692]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:12:39 np0005625203.localdomain sudo[143690]: pam_unix(sudo:session): session closed for user root
Feb 20 09:12:40 np0005625203.localdomain sudo[143784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yoaghfdlghmqdqattfsqyaohhighomwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578760.0529647-142-256452493499632/AnsiballZ_slurp.py
Feb 20 09:12:40 np0005625203.localdomain sudo[143784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:12:40 np0005625203.localdomain python3.9[143786]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Feb 20 09:12:40 np0005625203.localdomain sudo[143784]: pam_unix(sudo:session): session closed for user root
Feb 20 09:12:41 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18038 DF PROTO=TCP SPT=34540 DPT=9105 SEQ=1797338293 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F59A800000000001030307) 
Feb 20 09:12:41 np0005625203.localdomain sudo[143876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nybeubonvqlbpguexjwbwbbscnnszajh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578761.4210992-190-162264339308654/AnsiballZ_stat.py
Feb 20 09:12:41 np0005625203.localdomain sudo[143876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:12:41 np0005625203.localdomain python3.9[143878]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.iief8mvw follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:12:41 np0005625203.localdomain sudo[143876]: pam_unix(sudo:session): session closed for user root
Feb 20 09:12:42 np0005625203.localdomain sudo[143951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nyggethobiimqoahswyjrawvbjgjlzdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578761.4210992-190-162264339308654/AnsiballZ_copy.py
Feb 20 09:12:42 np0005625203.localdomain sudo[143951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:12:42 np0005625203.localdomain python3.9[143953]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.iief8mvw mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771578761.4210992-190-162264339308654/.source.iief8mvw _original_basename=.ngnlgxqx follow=False checksum=831757da1f03f9732785943fa2a05c0d9424aa2f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:12:42 np0005625203.localdomain sudo[143951]: pam_unix(sudo:session): session closed for user root
Feb 20 09:12:42 np0005625203.localdomain systemd[1]: systemd-timedated.service: Deactivated successfully.
Feb 20 09:12:44 np0005625203.localdomain sudo[144045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ilhkzkpmlmulutesjxvavelzporpdqfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578763.8432949-280-102612106456549/AnsiballZ_setup.py
Feb 20 09:12:44 np0005625203.localdomain sudo[144045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:12:44 np0005625203.localdomain python3.9[144047]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 20 09:12:44 np0005625203.localdomain sudo[144045]: pam_unix(sudo:session): session closed for user root
Feb 20 09:12:45 np0005625203.localdomain sudo[144137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xodmnrrsryicxyjksuzpcuswfpnzjrlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578765.559096-329-181460532660263/AnsiballZ_blockinfile.py
Feb 20 09:12:45 np0005625203.localdomain sudo[144137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:12:45 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15661 DF PROTO=TCP SPT=34038 DPT=9101 SEQ=1938099010 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F5ADBF0000000001030307) 
Feb 20 09:12:46 np0005625203.localdomain python3.9[144139]: ansible-ansible.builtin.blockinfile Invoked with block=np0005625201.localdomain,192.168.122.105,np0005625201* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCyGkX26ECIsvqnvJegedSF6KicDAAqjaifawEd//OuK9zdHIWqO3XmlEszZqWPsdQhPFkelfzXR+sy3gbPNv+yjT7phsw1sq7zHXeogQFlP5iOQZrf6hCnfXxVk2ckIXMT0UJVZ8FCTwsQi+HKkR/IEj08pR7EjrXGWxHkjv5wNj76spF3FJxtwycS4+KzY3UFy7gYWVn2jB0ha966YgjHMPhzQnT33W9myxGH33M1L5ZCGlfH19hLnqTUNMfzIfw3afxHkL5BFZbhthUPmIfLdLtKmZEkpSTBO/CrNA6CmMfY6xnT78hmwXytEQ+jeiRdKXdr9xQ2j6wVmPzckFKBsBYRe4DprKGt93fnKS9Z6A3Sv626DyZgDa8/NXbtAaBxtyix5Vdt872hYvCzYyB/OuSV6PR5DOq8z3fquOwgtka3rA6qL5gxhFJcO5TqtBM76DzOLd9OLM9bIO1yK9sCmbYynMojkXylzhDfcI8kytS5xs9FJEfwTElZRHkEIQE=
                                                            np0005625201.localdomain,192.168.122.105,np0005625201* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINiFV2XLGVf9PGXF0NE4rbupw+vH23sDv10vB3wGrrmN
                                                            np0005625201.localdomain,192.168.122.105,np0005625201* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM/mxytSzwSYcezRRSD4AjPi1j6Bxso/MLXC/NAewzvKThRznoUobc02vzGaO4FrwuZIZ/YHJyAHrQRbtdSPUTU=
                                                            np0005625202.localdomain,192.168.122.106,np0005625202* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDDr8sejencX7nSCX6AegGtTuiZL3yclu/L7ZVN4B6dKPdmHqVr33QJD40sEk28GHpx8BrkPU2Qj1de9H6mGtrlwhmJr7Pccg/YqzKoTCQD5rZQ4youU8H70As6YX5ZlXyulwI1SH70XjMm37x4ptKALFOjRnHg0WIXah/tAmzrY/orh+/eCcns7APVjN9B1o+MqP4r47WrWrGU/KxtsHc6dflWxZW7BWUCCNS0e3C4yWLRjy8Hhj7Qkpssv/UBcj+olVHadUUOYiaQZ5Y33MjxwIg8o1MuC7C1dNIn8eXOXXiA8jd/lJd9kImrCGUtkVqj8VQgsMh4vRYMD+0SNLYRDVwxdemOzJYgwQhgiWZ0G+cVhnTBpMmXyIws2OpOKU8R3HjTC3jz+BxvjwEvMDoQfpGgsHB9NCXnkQzs2F8EA8LpA823Ef1SMgPdDCaQzvN5oQPZkWAPMVHvq31xpN9q+KXg/bg0uDaIZXUxW2rGnem7pFS78rRUGL6MfSMn1zs=
                                                            np0005625202.localdomain,192.168.122.106,np0005625202* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHIvGY3AHSeC6TXoQUOT+qZPpfcpbcCaqWpewY2PaUdr
                                                            np0005625202.localdomain,192.168.122.106,np0005625202* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNhJMOoHTPuI+cufoglj5k5xopCSTjiletXnoJ15KnCBclkNCXy9DqMn/ZeknN3AqFVQZhJfknnRkCXvgtRg7lc=
                                                            np0005625200.localdomain,192.168.122.104,np0005625200* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDW88346W6zU6nxCpqapHtIr5nRG8Jn9LFit3r5klBfauCkmAGONb4X8IwKjo8MD9etebUVbo6aX9gBMBMSs7bSoHzsEQuMLpBDrweSbahQj+gqZ5TmQ/xvwbhws04z3/IJxapAk2xWu7khVGjvOPUE1CROkP+1LiGktQ6Xj1ar1TbLNud2Dq/R5ZalbpK0OT3+no3x0oAJT3W649tW4nmCWcNaxykPsLREsUlH2qVoceAzLEDCSde9/1TONc/URyB4acVqmEwJDHeX51bh31tpQwp/WSe0vKQ6eUw63Tmpn+dRI9xbnFhc6mgGAPcEw7cAUkM7oM6bYMSvVxYDmzMhuXUU/9i3mdMnDBkMyZ5Oed6ZSmFQIJe5k7cz3783d35ZXfl/HsYMqoZ3lmDgbeS59pQrI+BldKyv3sTnoCDahfcmzmiHssxqa7tT5KOuR444q7Nj6wJEIZMEEJEHtMlh1iSBRJZOEOaKjo7h+jV7KMe75aPRasvu9K1v0dqyG6U=
                                                            np0005625200.localdomain,192.168.122.104,np0005625200* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKZd4BJQ7FPHukFUlQ3fRSVsRqMpZA9FFzC98e6Nz+hC
                                                            np0005625200.localdomain,192.168.122.104,np0005625200* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJgelHBDBResuC/7QDQA12qTpLPW1xHX6eUvY/QfQ0s1DYziYEKuSHQhUQMzxPcUq9IVVPnxkoRvZdWPxsh2Cmk=
                                                            np0005625199.localdomain,192.168.122.103,np0005625199* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDrnsozeOPJKYg9sx2Tj6QOLRhujK5RVh5RZQ3sb0pk+DbWHQKqS1YvJUg2hV4WxbxPnNUCBtJ+RZ8lVm6RLM+hc3ffe2sOMOz5upO/hTlIpBSfJpQORkiNW+XIXdDVxgE418veFd2hASFmiCmKoFSKXsvnmFU9oTEpja1plcXSqCobFMVYKlhcRo66O0ySlGOR+o3Ar2yNJQjFErEGvZLoDEa/VlA6zreYmTaIsnlUDie0gbm5teTlsCcEYkvWcTzcfOEX2kXQRQbS5qlPtGg7c+KMv5e40rE+2QOigLmOOPVGwNYuLuhb/EHT0C8hK8otW4tiXxBlSZ5ONKY6YYQOpy7krNkWRxNXzK0LfXo2bt2apDaMzebPOvuBj1YyBiLpa6/aLvS/dtGolQNPDpFivPbP/mSpat1qTs0W3/2HyBovwWSGJDW8MMYxbZJ0Z6tnuOwdrPTdkhIibfW9wxgL7EHrDYrGx5CvA2vUM4KDKRntz/cCMGE/zKacSJ48nNk=
                                                            np0005625199.localdomain,192.168.122.103,np0005625199* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIENpQQgr9IVl8UWbQ9CANzH6ET+G2aHJkzVgu9ObE0o0
                                                            np0005625199.localdomain,192.168.122.103,np0005625199* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJUcn4Y73wlRXKxRegM8lRt5GQ//hAORn8IqrcrC5ZJyjHCZmp+wutQeuPqPsTK4OVK+uH/93l/3Av8AKvpXG3A=
                                                            np0005625203.localdomain,192.168.122.107,np0005625203* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtf1NXQ3EGQGdpLLLxuODKBdTGwqsiHL2QZ6zcfpGAa7EhDIxuEcLboqOGjQO0FM3u+kl2gIgKF0UsY5Vjcv4mDCMp7A7srq7TVo5lE5cCppbbXr0/PH2L/naHU3W+W83aT5RE17XPJ0Acn3W51WFBoICCCc4jjWTGmkNEgurKBJmdr0n8NeIcUWZ7Abrs/N2xzNftEFIjAPwebxgEwgCx0hMbdjTFhKbB/V7CjKaCU/UjirWMW5aDQJQEfrCM9u4NHuGaWKzJgar4/shNHaRvkCDbVrRPTCyfNebE04J/R42X3yWmvww4TMZVpRROd/u6Pgg1P2tbPGfQ0XvS0rfY6W4/VnHcyRDqxILH5BoeCAbTuVFmR0hbQu9fNbNxTP+o+na9mHEbNxbhcREnkal8+M0l11YftCRkr4132JITxe7y93gN/dwxE3nJLHLXRuRskWc3GTDT2MVU2Sj64yizD9KOM3oiMBXdPbNbgZywu3hqQvpO00GVg6QRjEJoiFc=
                                                            np0005625203.localdomain,192.168.122.107,np0005625203* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPIEBJz4VBziYqCcr9UT9NnbvRxFLoAcnVJLavCpXqHm
                                                            np0005625203.localdomain,192.168.122.107,np0005625203* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBM9k0T2/IFyFrBAAoi3QqwBKC9bi/bemQO6MNZhrO12MSG3WZcjS1bhOFPw5LuM+f11BFCm5wNyBNY/QmALZTgE=
                                                            np0005625204.localdomain,192.168.122.108,np0005625204* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDAo6exxFtNk/Y5qEGYenJyhnCsS7iZmCGsFaQtJElNSeTTX9a1P0P2EmjtHolRxnljCZ2X8HgWx/irhJvWLoS+dzF5l+KcyQy83+048h51mbnj7zV2uG9i8LkO0egs1uBBp5E+hauHMsuf0nIDFl45W86ZXuf+MfFEKCInhjB5gfE9tTjwmKwKhgO1DE7Vpx3OYy1FHkq0YDBCqQHuuhYPrLZPjfVv3vGOaHH/XCsxX3h8/ixsZbobD56dDBKF/8CFyC/guH8pNUhZHG0dEhz5BT8PcE2Q/M9pPttzmRQksfg9+q7lVy9eCoOVpzqfTgjE1cm5yISwuMZzaNxwjJKB54EWpfl5xxnkC14B+xdvowxpl1PcMNZ0q1fWofJF4TrJAwWCUYZf45aUV2yb5R8WavUT0pX32xmd4zFbXusoafiw2FcgnxoGz3N4ZgIxTPPmgUe13blr1SK44huXWPioaolFBo82xVVFHc+01vfLF3xvs86d6EpqpLH+yaCeUjE=
                                                            np0005625204.localdomain,192.168.122.108,np0005625204* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDTY+/nqIDkr9+7jl3LUu4apuQeFzQYkXiSihEezHlEw
                                                            np0005625204.localdomain,192.168.122.108,np0005625204* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPuq/q6JwPgXzS/TgJ6dhP0gZvq89Vk1r9Ou051lEnMdt+NHYUjJx2Tv1oS9A+wQXivor03/iqWU5nj5QHdvHx4=
                                                             create=True mode=0644 path=/tmp/ansible.iief8mvw state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:12:46 np0005625203.localdomain sudo[144137]: pam_unix(sudo:session): session closed for user root
Feb 20 09:12:47 np0005625203.localdomain sudo[144229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zhyayeslufothcsxrnimwwcmkqacpyjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578766.9005861-377-121325448980178/AnsiballZ_command.py
Feb 20 09:12:47 np0005625203.localdomain sudo[144229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:12:47 np0005625203.localdomain python3.9[144231]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.iief8mvw' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:12:47 np0005625203.localdomain sudo[144229]: pam_unix(sudo:session): session closed for user root
Feb 20 09:12:48 np0005625203.localdomain sudo[144323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-euntipwngaroyxzgkpvuhagtxumdqlka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578768.2503572-425-65482604557619/AnsiballZ_file.py
Feb 20 09:12:48 np0005625203.localdomain sudo[144323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:12:48 np0005625203.localdomain python3.9[144325]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.iief8mvw state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:12:48 np0005625203.localdomain sudo[144323]: pam_unix(sudo:session): session closed for user root
Feb 20 09:12:49 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43614 DF PROTO=TCP SPT=51650 DPT=9100 SEQ=4277648089 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F5B9F00000000001030307) 
Feb 20 09:12:49 np0005625203.localdomain sshd[143503]: pam_unix(sshd:session): session closed for user zuul
Feb 20 09:12:49 np0005625203.localdomain systemd-logind[759]: Session 44 logged out. Waiting for processes to exit.
Feb 20 09:12:49 np0005625203.localdomain systemd[1]: session-44.scope: Deactivated successfully.
Feb 20 09:12:49 np0005625203.localdomain systemd[1]: session-44.scope: Consumed 4.083s CPU time.
Feb 20 09:12:49 np0005625203.localdomain systemd-logind[759]: Removed session 44.
Feb 20 09:12:55 np0005625203.localdomain sshd[144340]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:12:55 np0005625203.localdomain sshd[144340]: Accepted publickey for zuul from 192.168.122.30 port 50882 ssh2: RSA SHA256:uaxCfdsB9dLyL4znjsLCPDTT6O+6xZzjHRjczJFOA6k
Feb 20 09:12:55 np0005625203.localdomain systemd-logind[759]: New session 45 of user zuul.
Feb 20 09:12:55 np0005625203.localdomain systemd[1]: Started Session 45 of User zuul.
Feb 20 09:12:55 np0005625203.localdomain sshd[144340]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Feb 20 09:12:56 np0005625203.localdomain python3.9[144433]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 20 09:12:57 np0005625203.localdomain sudo[144527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iniuvoxhzougxnyumsggidgojlxxsawx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578776.728513-52-31605562108834/AnsiballZ_systemd.py
Feb 20 09:12:57 np0005625203.localdomain sudo[144527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:12:57 np0005625203.localdomain python3.9[144529]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Feb 20 09:12:57 np0005625203.localdomain sudo[144527]: pam_unix(sudo:session): session closed for user root
Feb 20 09:12:58 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19006 DF PROTO=TCP SPT=39644 DPT=9882 SEQ=2893642318 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F5E08C0000000001030307) 
Feb 20 09:12:59 np0005625203.localdomain sudo[144621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bkdxeypqxwouvnxrqqpuyyaoqzpnusmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578778.8828447-76-92098326213313/AnsiballZ_systemd.py
Feb 20 09:12:59 np0005625203.localdomain sudo[144621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:12:59 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3977 DF PROTO=TCP SPT=38280 DPT=9102 SEQ=1171553561 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F5E1C60000000001030307) 
Feb 20 09:12:59 np0005625203.localdomain python3.9[144623]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 20 09:13:00 np0005625203.localdomain sudo[144621]: pam_unix(sudo:session): session closed for user root
Feb 20 09:13:01 np0005625203.localdomain sudo[144714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wiywdyoakdqnneasnjxohjtipfrmbgrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578780.7417386-103-268810569730007/AnsiballZ_command.py
Feb 20 09:13:01 np0005625203.localdomain sudo[144714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:13:01 np0005625203.localdomain python3.9[144716]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:13:01 np0005625203.localdomain sudo[144714]: pam_unix(sudo:session): session closed for user root
Feb 20 09:13:01 np0005625203.localdomain sudo[144807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wstdadpfrltoesirmjtkremuxzqsclyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578781.5376315-127-180642796906723/AnsiballZ_stat.py
Feb 20 09:13:01 np0005625203.localdomain sudo[144807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:13:02 np0005625203.localdomain python3.9[144809]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:13:02 np0005625203.localdomain sudo[144807]: pam_unix(sudo:session): session closed for user root
Feb 20 09:13:02 np0005625203.localdomain sudo[144901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xnbbctoxriqftwhazmyrwuhrmuovmwpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578782.3275094-151-120675434016668/AnsiballZ_command.py
Feb 20 09:13:02 np0005625203.localdomain sudo[144901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:13:02 np0005625203.localdomain python3.9[144903]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:13:02 np0005625203.localdomain sudo[144901]: pam_unix(sudo:session): session closed for user root
Feb 20 09:13:03 np0005625203.localdomain sudo[144996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jfswfatopbigdfqkiqoxrgtahgpnsksf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578783.0330465-175-214021914462220/AnsiballZ_file.py
Feb 20 09:13:03 np0005625203.localdomain sudo[144996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:13:03 np0005625203.localdomain python3.9[144998]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:13:03 np0005625203.localdomain sudo[144996]: pam_unix(sudo:session): session closed for user root
Feb 20 09:13:03 np0005625203.localdomain sshd[144340]: pam_unix(sshd:session): session closed for user zuul
Feb 20 09:13:03 np0005625203.localdomain systemd-logind[759]: Session 45 logged out. Waiting for processes to exit.
Feb 20 09:13:03 np0005625203.localdomain systemd[1]: session-45.scope: Deactivated successfully.
Feb 20 09:13:03 np0005625203.localdomain systemd[1]: session-45.scope: Consumed 3.906s CPU time.
Feb 20 09:13:03 np0005625203.localdomain systemd-logind[759]: Removed session 45.
Feb 20 09:13:07 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8225 DF PROTO=TCP SPT=58730 DPT=9105 SEQ=532740398 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F600090000000001030307) 
Feb 20 09:13:08 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8226 DF PROTO=TCP SPT=58730 DPT=9105 SEQ=532740398 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F604010000000001030307) 
Feb 20 09:13:09 np0005625203.localdomain sshd[145013]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:13:09 np0005625203.localdomain sshd[145013]: Accepted publickey for zuul from 192.168.122.30 port 55184 ssh2: RSA SHA256:uaxCfdsB9dLyL4znjsLCPDTT6O+6xZzjHRjczJFOA6k
Feb 20 09:13:09 np0005625203.localdomain systemd-logind[759]: New session 46 of user zuul.
Feb 20 09:13:09 np0005625203.localdomain systemd[1]: Started Session 46 of User zuul.
Feb 20 09:13:09 np0005625203.localdomain sshd[145013]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Feb 20 09:13:10 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8227 DF PROTO=TCP SPT=58730 DPT=9105 SEQ=532740398 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F60C000000000001030307) 
Feb 20 09:13:10 np0005625203.localdomain python3.9[145106]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 20 09:13:11 np0005625203.localdomain sudo[145200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-avcxjdptmrwvvshtnoursznpqtzgecxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578791.466972-58-63035887956135/AnsiballZ_setup.py
Feb 20 09:13:11 np0005625203.localdomain sudo[145200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:13:12 np0005625203.localdomain python3.9[145202]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 20 09:13:12 np0005625203.localdomain sudo[145200]: pam_unix(sudo:session): session closed for user root
Feb 20 09:13:12 np0005625203.localdomain sudo[145254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hrwdvahqqiofkzbusbdwkbveahvskwwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578791.466972-58-63035887956135/AnsiballZ_dnf.py
Feb 20 09:13:12 np0005625203.localdomain sudo[145254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:13:13 np0005625203.localdomain python3.9[145256]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Feb 20 09:13:14 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8228 DF PROTO=TCP SPT=58730 DPT=9105 SEQ=532740398 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F61BC00000000001030307) 
Feb 20 09:13:15 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7527 DF PROTO=TCP SPT=34820 DPT=9101 SEQ=1449341093 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F622ED0000000001030307) 
Feb 20 09:13:16 np0005625203.localdomain sudo[145254]: pam_unix(sudo:session): session closed for user root
Feb 20 09:13:17 np0005625203.localdomain python3.9[145348]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:13:17 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7528 DF PROTO=TCP SPT=34820 DPT=9101 SEQ=1449341093 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F627010000000001030307) 
Feb 20 09:13:18 np0005625203.localdomain sudo[145439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ujkizuvancpbalymyuxjrrrsreyzefok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578797.909288-121-28622349287302/AnsiballZ_file.py
Feb 20 09:13:18 np0005625203.localdomain sudo[145439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:13:18 np0005625203.localdomain python3.9[145441]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/reboot_required/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:13:18 np0005625203.localdomain sudo[145439]: pam_unix(sudo:session): session closed for user root
Feb 20 09:13:19 np0005625203.localdomain sudo[145531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fbwvgbjyzzusmimurktwjejjafzichxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578798.754393-145-9258121028550/AnsiballZ_file.py
Feb 20 09:13:19 np0005625203.localdomain sudo[145531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:13:19 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7529 DF PROTO=TCP SPT=34820 DPT=9101 SEQ=1449341093 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F62F000000000001030307) 
Feb 20 09:13:19 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62290 DF PROTO=TCP SPT=40672 DPT=9100 SEQ=1974699355 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F62F1F0000000001030307) 
Feb 20 09:13:19 np0005625203.localdomain python3.9[145533]: ansible-ansible.builtin.file Invoked with mode=0600 path=/var/lib/openstack/reboot_required/needs_restarting state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:13:19 np0005625203.localdomain sudo[145531]: pam_unix(sudo:session): session closed for user root
Feb 20 09:13:19 np0005625203.localdomain sudo[145623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-exatrmwgtlopadtnksvvaptipenccihs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578799.4012034-169-225668807115370/AnsiballZ_lineinfile.py
Feb 20 09:13:19 np0005625203.localdomain sudo[145623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:13:20 np0005625203.localdomain python3.9[145625]: ansible-ansible.builtin.lineinfile Invoked with dest=/var/lib/openstack/reboot_required/needs_restarting line=Not root, Subscription Management repositories not updated
                                                            Core libraries or services have been updated since boot-up:
                                                              * systemd
                                                            
                                                            Reboot is required to fully utilize these updates.
                                                            More information: https://access.redhat.com/solutions/27943 path=/var/lib/openstack/reboot_required/needs_restarting state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:13:20 np0005625203.localdomain sudo[145623]: pam_unix(sudo:session): session closed for user root
Feb 20 09:13:20 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62291 DF PROTO=TCP SPT=40672 DPT=9100 SEQ=1974699355 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F633400000000001030307) 
Feb 20 09:13:20 np0005625203.localdomain python3.9[145715]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 20 09:13:21 np0005625203.localdomain python3.9[145805]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:13:22 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62292 DF PROTO=TCP SPT=40672 DPT=9100 SEQ=1974699355 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F63B400000000001030307) 
Feb 20 09:13:22 np0005625203.localdomain python3.9[145897]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:13:22 np0005625203.localdomain sshd[145914]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:13:22 np0005625203.localdomain sshd[145914]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:13:22 np0005625203.localdomain sshd[145013]: pam_unix(sshd:session): session closed for user zuul
Feb 20 09:13:22 np0005625203.localdomain systemd[1]: session-46.scope: Deactivated successfully.
Feb 20 09:13:22 np0005625203.localdomain systemd[1]: session-46.scope: Consumed 8.652s CPU time.
Feb 20 09:13:22 np0005625203.localdomain systemd-logind[759]: Session 46 logged out. Waiting for processes to exit.
Feb 20 09:13:22 np0005625203.localdomain systemd-logind[759]: Removed session 46.
Feb 20 09:13:23 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7530 DF PROTO=TCP SPT=34820 DPT=9101 SEQ=1449341093 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F63EC10000000001030307) 
Feb 20 09:13:26 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62293 DF PROTO=TCP SPT=40672 DPT=9100 SEQ=1974699355 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F64B010000000001030307) 
Feb 20 09:13:28 np0005625203.localdomain sshd[145916]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:13:28 np0005625203.localdomain sshd[145916]: Accepted publickey for zuul from 192.168.122.30 port 36984 ssh2: RSA SHA256:uaxCfdsB9dLyL4znjsLCPDTT6O+6xZzjHRjczJFOA6k
Feb 20 09:13:28 np0005625203.localdomain systemd-logind[759]: New session 47 of user zuul.
Feb 20 09:13:28 np0005625203.localdomain systemd[1]: Started Session 47 of User zuul.
Feb 20 09:13:28 np0005625203.localdomain sudo[145918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:13:28 np0005625203.localdomain sshd[145916]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Feb 20 09:13:28 np0005625203.localdomain sudo[145918]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:13:28 np0005625203.localdomain sudo[145918]: pam_unix(sudo:session): session closed for user root
Feb 20 09:13:28 np0005625203.localdomain sudo[145935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 09:13:28 np0005625203.localdomain sudo[145935]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:13:28 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32829 DF PROTO=TCP SPT=33672 DPT=9882 SEQ=4131526098 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F655BD0000000001030307) 
Feb 20 09:13:29 np0005625203.localdomain sudo[145935]: pam_unix(sudo:session): session closed for user root
Feb 20 09:13:29 np0005625203.localdomain python3.9[146059]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 20 09:13:30 np0005625203.localdomain sudo[146090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:13:30 np0005625203.localdomain sudo[146090]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:13:30 np0005625203.localdomain sudo[146090]: pam_unix(sudo:session): session closed for user root
Feb 20 09:13:31 np0005625203.localdomain sudo[146180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zlzpqwynnsuahpfmjhjyeylilzvlorlx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578811.2707386-154-73653978136326/AnsiballZ_file.py
Feb 20 09:13:31 np0005625203.localdomain sudo[146180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:13:31 np0005625203.localdomain python3.9[146182]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:13:31 np0005625203.localdomain sudo[146180]: pam_unix(sudo:session): session closed for user root
Feb 20 09:13:32 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32831 DF PROTO=TCP SPT=33672 DPT=9882 SEQ=4131526098 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F661C00000000001030307) 
Feb 20 09:13:32 np0005625203.localdomain sudo[146272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jtiqhqifdxsjpyzqzkrwgdhjijdppnyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578812.0310085-178-199991854671083/AnsiballZ_stat.py
Feb 20 09:13:32 np0005625203.localdomain sudo[146272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:13:32 np0005625203.localdomain python3.9[146274]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:13:32 np0005625203.localdomain sudo[146272]: pam_unix(sudo:session): session closed for user root
Feb 20 09:13:33 np0005625203.localdomain sudo[146345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pprdpucpromjpmvwhybfuddbzvhwnduq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578812.0310085-178-199991854671083/AnsiballZ_copy.py
Feb 20 09:13:33 np0005625203.localdomain sudo[146345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:13:33 np0005625203.localdomain python3.9[146347]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771578812.0310085-178-199991854671083/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=cb143134597e8d09980d1dcc1949f9a4232e36a1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:13:33 np0005625203.localdomain sudo[146345]: pam_unix(sudo:session): session closed for user root
Feb 20 09:13:33 np0005625203.localdomain sudo[146437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cphzvkhmzbwenigpgwkgkxaxrimzyrmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578813.4339583-227-65559740639936/AnsiballZ_file.py
Feb 20 09:13:33 np0005625203.localdomain sudo[146437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:13:33 np0005625203.localdomain python3.9[146439]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-sriov setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:13:33 np0005625203.localdomain sudo[146437]: pam_unix(sudo:session): session closed for user root
Feb 20 09:13:34 np0005625203.localdomain sudo[146529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ortqudranxnyhqviuadqgeoxffysblcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578814.052609-252-35976784661256/AnsiballZ_stat.py
Feb 20 09:13:34 np0005625203.localdomain sudo[146529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:13:34 np0005625203.localdomain python3.9[146531]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:13:34 np0005625203.localdomain sudo[146529]: pam_unix(sudo:session): session closed for user root
Feb 20 09:13:34 np0005625203.localdomain sudo[146602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-urouxcrxejukzwrqrizyqrelnnelllpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578814.052609-252-35976784661256/AnsiballZ_copy.py
Feb 20 09:13:34 np0005625203.localdomain sudo[146602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:13:35 np0005625203.localdomain python3.9[146604]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771578814.052609-252-35976784661256/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=cb143134597e8d09980d1dcc1949f9a4232e36a1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:13:35 np0005625203.localdomain sudo[146602]: pam_unix(sudo:session): session closed for user root
Feb 20 09:13:35 np0005625203.localdomain sudo[146694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pbkdrmlnkcgabbbegquspojchqupiuso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578815.277916-301-10436740155104/AnsiballZ_file.py
Feb 20 09:13:35 np0005625203.localdomain sudo[146694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:13:35 np0005625203.localdomain python3.9[146696]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-dhcp setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:13:35 np0005625203.localdomain sudo[146694]: pam_unix(sudo:session): session closed for user root
Feb 20 09:13:36 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32832 DF PROTO=TCP SPT=33672 DPT=9882 SEQ=4131526098 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F671800000000001030307) 
Feb 20 09:13:36 np0005625203.localdomain sudo[146786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qqfmcciwiovdujfmekibupwbqimoonye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578815.9507825-327-214653832505793/AnsiballZ_stat.py
Feb 20 09:13:36 np0005625203.localdomain sudo[146786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:13:36 np0005625203.localdomain python3.9[146788]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:13:36 np0005625203.localdomain sudo[146786]: pam_unix(sudo:session): session closed for user root
Feb 20 09:13:36 np0005625203.localdomain sudo[146859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zrnebtmlivwgpgnsdazhuveagitrzngq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578815.9507825-327-214653832505793/AnsiballZ_copy.py
Feb 20 09:13:36 np0005625203.localdomain sudo[146859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:13:36 np0005625203.localdomain python3.9[146861]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771578815.9507825-327-214653832505793/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=cb143134597e8d09980d1dcc1949f9a4232e36a1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:13:36 np0005625203.localdomain sudo[146859]: pam_unix(sudo:session): session closed for user root
Feb 20 09:13:37 np0005625203.localdomain sudo[146951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kobaqvzpkcgkfmqwaabwgmukzayvzvph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578817.1353264-374-207080138097487/AnsiballZ_file.py
Feb 20 09:13:37 np0005625203.localdomain sudo[146951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:13:37 np0005625203.localdomain python3.9[146953]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:13:37 np0005625203.localdomain sudo[146951]: pam_unix(sudo:session): session closed for user root
Feb 20 09:13:37 np0005625203.localdomain sudo[147043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ltrckvkwnumywtjpnuwbermthdijarrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578817.7122786-396-248325107014985/AnsiballZ_stat.py
Feb 20 09:13:37 np0005625203.localdomain sudo[147043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:13:38 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12648 DF PROTO=TCP SPT=37962 DPT=9105 SEQ=3565801478 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F679400000000001030307) 
Feb 20 09:13:38 np0005625203.localdomain python3.9[147045]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:13:38 np0005625203.localdomain sudo[147043]: pam_unix(sudo:session): session closed for user root
Feb 20 09:13:38 np0005625203.localdomain sudo[147116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sdinleebvcanrplbrvbsmbdffadlzcyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578817.7122786-396-248325107014985/AnsiballZ_copy.py
Feb 20 09:13:38 np0005625203.localdomain sudo[147116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:13:38 np0005625203.localdomain python3.9[147118]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771578817.7122786-396-248325107014985/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=cb143134597e8d09980d1dcc1949f9a4232e36a1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:13:38 np0005625203.localdomain sudo[147116]: pam_unix(sudo:session): session closed for user root
Feb 20 09:13:39 np0005625203.localdomain sudo[147208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ivgoymxvssctzdimjudclodvddfopeel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578818.8388753-441-271630694411392/AnsiballZ_file.py
Feb 20 09:13:39 np0005625203.localdomain sudo[147208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:13:39 np0005625203.localdomain python3.9[147210]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:13:39 np0005625203.localdomain sudo[147208]: pam_unix(sudo:session): session closed for user root
Feb 20 09:13:39 np0005625203.localdomain sudo[147300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ocnnoofppxkgvoegbxhaphjjihnxwnwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578819.4383156-465-68805125207585/AnsiballZ_stat.py
Feb 20 09:13:39 np0005625203.localdomain sudo[147300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:13:39 np0005625203.localdomain python3.9[147302]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:13:39 np0005625203.localdomain sudo[147300]: pam_unix(sudo:session): session closed for user root
Feb 20 09:13:40 np0005625203.localdomain sudo[147373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fiydnvnbwpsbytlrfzngtihzehzqyaor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578819.4383156-465-68805125207585/AnsiballZ_copy.py
Feb 20 09:13:40 np0005625203.localdomain sudo[147373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:13:40 np0005625203.localdomain python3.9[147375]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771578819.4383156-465-68805125207585/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=cb143134597e8d09980d1dcc1949f9a4232e36a1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:13:40 np0005625203.localdomain sudo[147373]: pam_unix(sudo:session): session closed for user root
Feb 20 09:13:40 np0005625203.localdomain sudo[147465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zujzjkolezvdbphxgdmckpdnqyrqqgru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578820.639403-511-214496100393231/AnsiballZ_file.py
Feb 20 09:13:40 np0005625203.localdomain sudo[147465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:13:41 np0005625203.localdomain python3.9[147467]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:13:41 np0005625203.localdomain sudo[147465]: pam_unix(sudo:session): session closed for user root
Feb 20 09:13:41 np0005625203.localdomain sudo[147557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-razzzmjbhyurxlqwijfocyngcjkfisqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578821.2203326-536-235270939816618/AnsiballZ_stat.py
Feb 20 09:13:41 np0005625203.localdomain sudo[147557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:13:41 np0005625203.localdomain python3.9[147559]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:13:41 np0005625203.localdomain sudo[147557]: pam_unix(sudo:session): session closed for user root
Feb 20 09:13:42 np0005625203.localdomain sudo[147630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fiwoctlmjeutxxshrjsquzpzjspynpyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578821.2203326-536-235270939816618/AnsiballZ_copy.py
Feb 20 09:13:42 np0005625203.localdomain sudo[147630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:13:42 np0005625203.localdomain python3.9[147632]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771578821.2203326-536-235270939816618/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=cb143134597e8d09980d1dcc1949f9a4232e36a1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:13:42 np0005625203.localdomain sudo[147630]: pam_unix(sudo:session): session closed for user root
Feb 20 09:13:42 np0005625203.localdomain sudo[147722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jkmqssbumzvzhhaztuaplkfqqrkkmchr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578822.404557-583-125004013227132/AnsiballZ_file.py
Feb 20 09:13:42 np0005625203.localdomain sudo[147722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:13:42 np0005625203.localdomain python3.9[147724]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:13:42 np0005625203.localdomain sudo[147722]: pam_unix(sudo:session): session closed for user root
Feb 20 09:13:43 np0005625203.localdomain sudo[147814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qbrekynmhatqmpygjxfllydwtfredwex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578823.0319254-610-234042100182973/AnsiballZ_stat.py
Feb 20 09:13:43 np0005625203.localdomain sudo[147814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:13:43 np0005625203.localdomain python3.9[147816]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:13:43 np0005625203.localdomain sudo[147814]: pam_unix(sudo:session): session closed for user root
Feb 20 09:13:43 np0005625203.localdomain sshd[147830]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:13:43 np0005625203.localdomain sshd[147830]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:13:43 np0005625203.localdomain sudo[147889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cgqxcruojyskqxjvorcjelylwxitbitn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578823.0319254-610-234042100182973/AnsiballZ_copy.py
Feb 20 09:13:43 np0005625203.localdomain sudo[147889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:13:43 np0005625203.localdomain python3.9[147891]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771578823.0319254-610-234042100182973/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=cb143134597e8d09980d1dcc1949f9a4232e36a1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:13:43 np0005625203.localdomain sudo[147889]: pam_unix(sudo:session): session closed for user root
Feb 20 09:13:44 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12650 DF PROTO=TCP SPT=37962 DPT=9105 SEQ=3565801478 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F691000000000001030307) 
Feb 20 09:13:44 np0005625203.localdomain sudo[147981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ockctyimntqgdmqcypskqepwwfzgtgvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578824.1581638-657-237109772412610/AnsiballZ_file.py
Feb 20 09:13:44 np0005625203.localdomain sudo[147981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:13:44 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45640 DF PROTO=TCP SPT=59594 DPT=9102 SEQ=1550390657 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F692800000000001030307) 
Feb 20 09:13:44 np0005625203.localdomain python3.9[147983]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:13:44 np0005625203.localdomain sudo[147981]: pam_unix(sudo:session): session closed for user root
Feb 20 09:13:45 np0005625203.localdomain sudo[148073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-omfdsrfuaewqkawxdauyelzrcllcgxsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578824.7648683-681-151568803818700/AnsiballZ_stat.py
Feb 20 09:13:45 np0005625203.localdomain sudo[148073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:13:45 np0005625203.localdomain python3.9[148075]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:13:45 np0005625203.localdomain sudo[148073]: pam_unix(sudo:session): session closed for user root
Feb 20 09:13:45 np0005625203.localdomain sudo[148146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uwoppxpbwlzchunrswnwrvpaxykaaygd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578824.7648683-681-151568803818700/AnsiballZ_copy.py
Feb 20 09:13:45 np0005625203.localdomain sudo[148146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:13:45 np0005625203.localdomain python3.9[148148]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771578824.7648683-681-151568803818700/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=cb143134597e8d09980d1dcc1949f9a4232e36a1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:13:45 np0005625203.localdomain sudo[148146]: pam_unix(sudo:session): session closed for user root
Feb 20 09:13:46 np0005625203.localdomain sshd[145916]: pam_unix(sshd:session): session closed for user zuul
Feb 20 09:13:46 np0005625203.localdomain systemd[1]: session-47.scope: Deactivated successfully.
Feb 20 09:13:46 np0005625203.localdomain systemd[1]: session-47.scope: Consumed 11.391s CPU time.
Feb 20 09:13:46 np0005625203.localdomain systemd-logind[759]: Session 47 logged out. Waiting for processes to exit.
Feb 20 09:13:46 np0005625203.localdomain systemd-logind[759]: Removed session 47.
Feb 20 09:13:47 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57867 DF PROTO=TCP SPT=35284 DPT=9101 SEQ=710917554 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F69C400000000001030307) 
Feb 20 09:13:47 np0005625203.localdomain chronyd[138173]: Selected source 149.56.19.163 (pool.ntp.org)
Feb 20 09:13:50 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10247 DF PROTO=TCP SPT=44932 DPT=9100 SEQ=4186917260 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F6A8400000000001030307) 
Feb 20 09:13:51 np0005625203.localdomain sshd[148163]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:13:51 np0005625203.localdomain sshd[148163]: Accepted publickey for zuul from 192.168.122.30 port 60452 ssh2: RSA SHA256:uaxCfdsB9dLyL4znjsLCPDTT6O+6xZzjHRjczJFOA6k
Feb 20 09:13:51 np0005625203.localdomain systemd-logind[759]: New session 48 of user zuul.
Feb 20 09:13:51 np0005625203.localdomain systemd[1]: Started Session 48 of User zuul.
Feb 20 09:13:51 np0005625203.localdomain sshd[148163]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Feb 20 09:13:52 np0005625203.localdomain sudo[148256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lvnnzsrgbnozfbrputzriohbkusdxzcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578831.9986098-22-244182309005463/AnsiballZ_file.py
Feb 20 09:13:52 np0005625203.localdomain sudo[148256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:13:52 np0005625203.localdomain python3.9[148258]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:13:52 np0005625203.localdomain sudo[148256]: pam_unix(sudo:session): session closed for user root
Feb 20 09:13:53 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57869 DF PROTO=TCP SPT=35284 DPT=9101 SEQ=710917554 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F6B4010000000001030307) 
Feb 20 09:13:53 np0005625203.localdomain sudo[148348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nbiubbskweomnuscrgpcrsaipxmouplm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578832.8338172-58-154036241820719/AnsiballZ_stat.py
Feb 20 09:13:53 np0005625203.localdomain sudo[148348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:13:53 np0005625203.localdomain python3.9[148350]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:13:53 np0005625203.localdomain sudo[148348]: pam_unix(sudo:session): session closed for user root
Feb 20 09:13:53 np0005625203.localdomain sudo[148421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rjfjtgudpztbhqnzoyepbdjzwyispkis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578832.8338172-58-154036241820719/AnsiballZ_copy.py
Feb 20 09:13:53 np0005625203.localdomain sudo[148421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:13:54 np0005625203.localdomain python3.9[148423]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1771578832.8338172-58-154036241820719/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=8e2004121a34320613d32710ae37702da8d027e6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:13:54 np0005625203.localdomain sudo[148421]: pam_unix(sudo:session): session closed for user root
Feb 20 09:13:54 np0005625203.localdomain sudo[148513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-srdqrtbcnjcidgxgsysguqoxvewptmpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578834.2012792-58-50913862101083/AnsiballZ_stat.py
Feb 20 09:13:54 np0005625203.localdomain sudo[148513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:13:54 np0005625203.localdomain python3.9[148515]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:13:54 np0005625203.localdomain sudo[148513]: pam_unix(sudo:session): session closed for user root
Feb 20 09:13:54 np0005625203.localdomain sudo[148586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mijyosxbqnhyywxjefkmzurehczwdhfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578834.2012792-58-50913862101083/AnsiballZ_copy.py
Feb 20 09:13:54 np0005625203.localdomain sudo[148586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:13:55 np0005625203.localdomain python3.9[148588]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771578834.2012792-58-50913862101083/.source.conf _original_basename=ceph.conf follow=False checksum=936d449f31af670125791fe297b02d275b2ba4b7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:13:55 np0005625203.localdomain sudo[148586]: pam_unix(sudo:session): session closed for user root
Feb 20 09:13:55 np0005625203.localdomain sshd[148163]: pam_unix(sshd:session): session closed for user zuul
Feb 20 09:13:55 np0005625203.localdomain systemd[1]: session-48.scope: Deactivated successfully.
Feb 20 09:13:55 np0005625203.localdomain systemd[1]: session-48.scope: Consumed 2.227s CPU time.
Feb 20 09:13:55 np0005625203.localdomain systemd-logind[759]: Session 48 logged out. Waiting for processes to exit.
Feb 20 09:13:55 np0005625203.localdomain systemd-logind[759]: Removed session 48.
Feb 20 09:13:56 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=10249 DF PROTO=TCP SPT=44932 DPT=9100 SEQ=4186917260 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F6C0010000000001030307) 
Feb 20 09:13:59 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39051 DF PROTO=TCP SPT=33944 DPT=9102 SEQ=1738981179 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F6CC250000000001030307) 
Feb 20 09:14:00 np0005625203.localdomain sshd[148603]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:14:00 np0005625203.localdomain sshd[148603]: Accepted publickey for zuul from 192.168.122.30 port 60456 ssh2: RSA SHA256:uaxCfdsB9dLyL4znjsLCPDTT6O+6xZzjHRjczJFOA6k
Feb 20 09:14:00 np0005625203.localdomain systemd-logind[759]: New session 49 of user zuul.
Feb 20 09:14:00 np0005625203.localdomain systemd[1]: Started Session 49 of User zuul.
Feb 20 09:14:00 np0005625203.localdomain sshd[148603]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Feb 20 09:14:01 np0005625203.localdomain python3.9[148696]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 20 09:14:02 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1792 DF PROTO=TCP SPT=42086 DPT=9882 SEQ=2528612278 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F6D7000000000001030307) 
Feb 20 09:14:02 np0005625203.localdomain sudo[148790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ecrbdnzhizdegovwstbodaeapjnzcujm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578842.406373-58-67013331201900/AnsiballZ_file.py
Feb 20 09:14:02 np0005625203.localdomain sudo[148790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:03 np0005625203.localdomain python3.9[148792]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:14:03 np0005625203.localdomain sudo[148790]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:03 np0005625203.localdomain sudo[148882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kuklimvmjaivbbtovebyfxcsrylccoji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578843.3464663-58-83883063088274/AnsiballZ_file.py
Feb 20 09:14:03 np0005625203.localdomain sudo[148882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:04 np0005625203.localdomain python3.9[148884]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:14:04 np0005625203.localdomain sudo[148882]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:04 np0005625203.localdomain python3.9[148974]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 20 09:14:05 np0005625203.localdomain sudo[149064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rjmcfurzfqtfdczlyzvgoeafhzfimrsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578845.0132587-127-103992269975015/AnsiballZ_seboolean.py
Feb 20 09:14:05 np0005625203.localdomain sudo[149064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:05 np0005625203.localdomain python3.9[149066]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Feb 20 09:14:05 np0005625203.localdomain sudo[149064]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:06 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1793 DF PROTO=TCP SPT=42086 DPT=9882 SEQ=2528612278 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F6E6C00000000001030307) 
Feb 20 09:14:06 np0005625203.localdomain sudo[149156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pqqxxnsuznrarwkzmppofuwqfhcynmav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578846.1608143-157-236067032285314/AnsiballZ_setup.py
Feb 20 09:14:06 np0005625203.localdomain sudo[149156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:06 np0005625203.localdomain python3.9[149158]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 20 09:14:06 np0005625203.localdomain sshd[149163]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:14:07 np0005625203.localdomain sudo[149156]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:07 np0005625203.localdomain sshd[149163]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:14:07 np0005625203.localdomain sudo[149212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rpwjgaeaisxlxgjibzqgzmxlvkzpsewg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578846.1608143-157-236067032285314/AnsiballZ_dnf.py
Feb 20 09:14:07 np0005625203.localdomain sudo[149212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:07 np0005625203.localdomain python3.9[149214]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 20 09:14:08 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38010 DF PROTO=TCP SPT=41744 DPT=9105 SEQ=3210094760 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F6EE810000000001030307) 
Feb 20 09:14:10 np0005625203.localdomain sudo[149212]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:11 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8231 DF PROTO=TCP SPT=58730 DPT=9105 SEQ=532740398 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F6FA800000000001030307) 
Feb 20 09:14:12 np0005625203.localdomain sudo[149306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ondotbbvvzalpczexkqqmqkhlafcdkhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578851.6983478-193-66498418896910/AnsiballZ_systemd.py
Feb 20 09:14:12 np0005625203.localdomain sudo[149306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:12 np0005625203.localdomain python3.9[149308]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 20 09:14:12 np0005625203.localdomain sudo[149306]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:13 np0005625203.localdomain sudo[149401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uclytambrsmswtneagvqwnsqwzwvdlyh ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771578852.7944715-217-58685407280166/AnsiballZ_edpm_nftables_snippet.py
Feb 20 09:14:13 np0005625203.localdomain sudo[149401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:13 np0005625203.localdomain python3[149403]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                                            rule:
                                                              proto: udp
                                                              dport: 4789
                                                          - rule_name: 119 neutron geneve networks
                                                            rule:
                                                              proto: udp
                                                              dport: 6081
                                                              state: ["UNTRACKED"]
                                                          - rule_name: 120 neutron geneve networks no conntrack
                                                            rule:
                                                              proto: udp
                                                              dport: 6081
                                                              table: raw
                                                              chain: OUTPUT
                                                              jump: NOTRACK
                                                              action: append
                                                              state: []
                                                          - rule_name: 121 neutron geneve networks no conntrack
                                                            rule:
                                                              proto: udp
                                                              dport: 6081
                                                              table: raw
                                                              chain: PREROUTING
                                                              jump: NOTRACK
                                                              action: append
                                                              state: []
                                                           dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Feb 20 09:14:13 np0005625203.localdomain sudo[149401]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:13 np0005625203.localdomain sudo[149493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fxrxqojbhfjgbkdgkuwybkhaazztxtly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578853.6708963-244-56643917051123/AnsiballZ_file.py
Feb 20 09:14:13 np0005625203.localdomain sudo[149493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:14 np0005625203.localdomain python3.9[149495]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:14:14 np0005625203.localdomain sudo[149493]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:14 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38012 DF PROTO=TCP SPT=41744 DPT=9105 SEQ=3210094760 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F706410000000001030307) 
Feb 20 09:14:14 np0005625203.localdomain sudo[149585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-izqjmcxznxvtsonstrtqypojddnykxfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578854.3271854-268-1417810455490/AnsiballZ_stat.py
Feb 20 09:14:14 np0005625203.localdomain sudo[149585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:14 np0005625203.localdomain python3.9[149587]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:14:14 np0005625203.localdomain sudo[149585]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:15 np0005625203.localdomain sudo[149633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wmxtgxrefkywvhhdxfvuieshjqtzekct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578854.3271854-268-1417810455490/AnsiballZ_file.py
Feb 20 09:14:15 np0005625203.localdomain sudo[149633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:15 np0005625203.localdomain python3.9[149635]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:14:15 np0005625203.localdomain sudo[149633]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:15 np0005625203.localdomain sudo[149725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qzjyenxpowsabhrfozggokgdzgupgolf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578855.5023048-304-214963443797848/AnsiballZ_stat.py
Feb 20 09:14:15 np0005625203.localdomain sudo[149725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:15 np0005625203.localdomain python3.9[149727]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:14:15 np0005625203.localdomain sudo[149725]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:16 np0005625203.localdomain sudo[149773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lqqyyalsroczkncmrjgcjmbwnadaghwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578855.5023048-304-214963443797848/AnsiballZ_file.py
Feb 20 09:14:16 np0005625203.localdomain sudo[149773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:16 np0005625203.localdomain python3.9[149775]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.6_ppnhob recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:14:16 np0005625203.localdomain sudo[149773]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:16 np0005625203.localdomain sudo[149865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xarjbwnamvxlibajehodqiqqjsenhare ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578856.553743-340-208172698047051/AnsiballZ_stat.py
Feb 20 09:14:16 np0005625203.localdomain sudo[149865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:16 np0005625203.localdomain python3.9[149867]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:14:16 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2445 DF PROTO=TCP SPT=37984 DPT=9101 SEQ=3143231733 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F711400000000001030307) 
Feb 20 09:14:18 np0005625203.localdomain sudo[149865]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:18 np0005625203.localdomain sudo[149913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-whnfcoismnxyxhffkfmsyedzgpyjltsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578856.553743-340-208172698047051/AnsiballZ_file.py
Feb 20 09:14:18 np0005625203.localdomain sudo[149913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:18 np0005625203.localdomain python3.9[149915]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:14:18 np0005625203.localdomain sudo[149913]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:18 np0005625203.localdomain sshd[149935]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:14:19 np0005625203.localdomain sudo[150007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xoijzlzbkmhlhbfngxsshdrfoyadmgvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578858.6692545-379-226466542196321/AnsiballZ_command.py
Feb 20 09:14:19 np0005625203.localdomain sudo[150007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:19 np0005625203.localdomain python3.9[150009]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:14:19 np0005625203.localdomain sudo[150007]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:19 np0005625203.localdomain sudo[150100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fmyzqtnjvcyywuatyljuhohugkoxjgcr ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771578859.458445-403-4398497281183/AnsiballZ_edpm_nftables_from_files.py
Feb 20 09:14:19 np0005625203.localdomain sudo[150100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:20 np0005625203.localdomain sshd[149935]: Invalid user claude from 118.99.80.29 port 12748
Feb 20 09:14:20 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3183 DF PROTO=TCP SPT=33614 DPT=9100 SEQ=2330337088 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F71D810000000001030307) 
Feb 20 09:14:20 np0005625203.localdomain python3[150102]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Feb 20 09:14:20 np0005625203.localdomain sudo[150100]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:20 np0005625203.localdomain sshd[149935]: Received disconnect from 118.99.80.29 port 12748:11: Bye Bye [preauth]
Feb 20 09:14:20 np0005625203.localdomain sshd[149935]: Disconnected from invalid user claude 118.99.80.29 port 12748 [preauth]
Feb 20 09:14:20 np0005625203.localdomain sudo[150192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hmvmyspgcgazofeurpjwtcnlwhzgmykm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578860.2910972-427-256164813292959/AnsiballZ_stat.py
Feb 20 09:14:20 np0005625203.localdomain sudo[150192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:20 np0005625203.localdomain python3.9[150194]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:14:20 np0005625203.localdomain sudo[150192]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:21 np0005625203.localdomain sudo[150267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lmonqjocnhgmwfwtnkycvqblqijmzpnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578860.2910972-427-256164813292959/AnsiballZ_copy.py
Feb 20 09:14:21 np0005625203.localdomain sudo[150267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:21 np0005625203.localdomain python3.9[150269]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771578860.2910972-427-256164813292959/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:14:21 np0005625203.localdomain sudo[150267]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:21 np0005625203.localdomain sudo[150359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nutvqrhmedkvfbhyamwbxryjbaclchgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578861.589553-472-158392639156065/AnsiballZ_stat.py
Feb 20 09:14:21 np0005625203.localdomain sudo[150359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:22 np0005625203.localdomain python3.9[150361]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:14:22 np0005625203.localdomain sudo[150359]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:22 np0005625203.localdomain sudo[150434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vgzuyxkgxejyldquatgyueiucbbjvduh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578861.589553-472-158392639156065/AnsiballZ_copy.py
Feb 20 09:14:22 np0005625203.localdomain sudo[150434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:22 np0005625203.localdomain python3.9[150436]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771578861.589553-472-158392639156065/.source.nft follow=False _original_basename=jump-chain.j2 checksum=ac8dea350c18f51f54d48dacc09613cda4c5540c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:14:22 np0005625203.localdomain sudo[150434]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:23 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=2447 DF PROTO=TCP SPT=37984 DPT=9101 SEQ=3143231733 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F729000000000001030307) 
Feb 20 09:14:23 np0005625203.localdomain sudo[150526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xobvjqfhpmptzvllqcidtdjxojyrjqsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578862.8429053-517-21970750522888/AnsiballZ_stat.py
Feb 20 09:14:23 np0005625203.localdomain sudo[150526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:23 np0005625203.localdomain python3.9[150528]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:14:23 np0005625203.localdomain sudo[150526]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:23 np0005625203.localdomain sudo[150601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dgrhidelmqrnxqztyhhmjxzbqvgswlnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578862.8429053-517-21970750522888/AnsiballZ_copy.py
Feb 20 09:14:23 np0005625203.localdomain sudo[150601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:23 np0005625203.localdomain python3.9[150603]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771578862.8429053-517-21970750522888/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:14:23 np0005625203.localdomain sudo[150601]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:24 np0005625203.localdomain sudo[150693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xgapifhdzzzjfhizelrfzclmlscoajil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578863.983377-562-218958530232444/AnsiballZ_stat.py
Feb 20 09:14:24 np0005625203.localdomain sudo[150693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:24 np0005625203.localdomain python3.9[150695]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:14:24 np0005625203.localdomain sudo[150693]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:24 np0005625203.localdomain sudo[150768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xuiffzknzkbyhgciatefklygpqlfoeea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578863.983377-562-218958530232444/AnsiballZ_copy.py
Feb 20 09:14:24 np0005625203.localdomain sudo[150768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:24 np0005625203.localdomain python3.9[150770]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771578863.983377-562-218958530232444/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:14:25 np0005625203.localdomain sudo[150768]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:25 np0005625203.localdomain sudo[150860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dgemlncerasqcxwnjviwyxnhseshyxjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578865.1960468-607-166906726075298/AnsiballZ_stat.py
Feb 20 09:14:25 np0005625203.localdomain sudo[150860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:25 np0005625203.localdomain python3.9[150862]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:14:25 np0005625203.localdomain sudo[150860]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:26 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3185 DF PROTO=TCP SPT=33614 DPT=9100 SEQ=2330337088 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F735400000000001030307) 
Feb 20 09:14:26 np0005625203.localdomain sudo[150935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ycfbuyphuicarykzyyabxtuwrsnnbuek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578865.1960468-607-166906726075298/AnsiballZ_copy.py
Feb 20 09:14:26 np0005625203.localdomain sudo[150935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:26 np0005625203.localdomain python3.9[150937]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771578865.1960468-607-166906726075298/.source.nft follow=False _original_basename=ruleset.j2 checksum=eb691bdb7d792c5f8ff0d719e807fe1c95b09438 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:14:26 np0005625203.localdomain sudo[150935]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:26 np0005625203.localdomain sudo[151027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nlczxevgtbfthexjgyffxclpamsjwoqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578866.630558-652-122280847618664/AnsiballZ_file.py
Feb 20 09:14:26 np0005625203.localdomain sudo[151027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:27 np0005625203.localdomain python3.9[151029]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:14:27 np0005625203.localdomain sudo[151027]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:27 np0005625203.localdomain sudo[151119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gelgkpdepzpovrrnamynicxxhifiqiym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578867.254617-676-89519020110486/AnsiballZ_command.py
Feb 20 09:14:27 np0005625203.localdomain sudo[151119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:27 np0005625203.localdomain python3.9[151121]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:14:27 np0005625203.localdomain sudo[151119]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:28 np0005625203.localdomain sudo[151214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rbvptorscvfimqugvwqaiagrfazbvsrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578867.9404826-700-17203610045392/AnsiballZ_blockinfile.py
Feb 20 09:14:28 np0005625203.localdomain sudo[151214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:28 np0005625203.localdomain python3.9[151216]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                                            include "/etc/nftables/edpm-chains.nft"
                                                            include "/etc/nftables/edpm-rules.nft"
                                                            include "/etc/nftables/edpm-jumps.nft"
                                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:14:28 np0005625203.localdomain sudo[151214]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:29 np0005625203.localdomain sudo[151306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gtoobxxlhjcunpuzslpvudipfwunvgzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578868.8426123-727-56629624877955/AnsiballZ_command.py
Feb 20 09:14:29 np0005625203.localdomain sudo[151306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:29 np0005625203.localdomain python3.9[151308]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:14:29 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34548 DF PROTO=TCP SPT=50376 DPT=9102 SEQ=236940475 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F741550000000001030307) 
Feb 20 09:14:29 np0005625203.localdomain sudo[151306]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:29 np0005625203.localdomain sudo[151399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cgsrhkswzzxulmthcfcpwawvtpcxtyyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578869.506352-751-161631424767293/AnsiballZ_stat.py
Feb 20 09:14:29 np0005625203.localdomain sudo[151399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:29 np0005625203.localdomain python3.9[151401]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:14:29 np0005625203.localdomain sudo[151399]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:30 np0005625203.localdomain sudo[151431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:14:30 np0005625203.localdomain sudo[151431]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:14:30 np0005625203.localdomain sudo[151431]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:30 np0005625203.localdomain sudo[151478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Feb 20 09:14:30 np0005625203.localdomain sudo[151478]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:14:30 np0005625203.localdomain sudo[151523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aldtnbsqsifrvpzbgylrjhzuwzimqsba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578870.1055129-775-7383001782045/AnsiballZ_command.py
Feb 20 09:14:30 np0005625203.localdomain sudo[151523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:30 np0005625203.localdomain python3.9[151525]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:14:30 np0005625203.localdomain sudo[151523]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:31 np0005625203.localdomain podman[151645]: 2026-02-20 09:14:31.126543372 +0000 UTC m=+0.093995037 container exec f6bf94ad66e54cdd610286b233c639418eb2b995978f1a145d6cfa77a016071d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203, vcs-type=git, name=rhceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, release=1770267347, vendor=Red Hat, Inc., GIT_CLEAN=True, build-date=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, ceph=True, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main)
Feb 20 09:14:31 np0005625203.localdomain sudo[151707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mcaykajgeipksrgkfwiqwomwhaaxxlvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578870.9492402-799-52480732478454/AnsiballZ_file.py
Feb 20 09:14:31 np0005625203.localdomain sudo[151707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:31 np0005625203.localdomain podman[151645]: 2026-02-20 09:14:31.232245722 +0000 UTC m=+0.199697327 container exec_died f6bf94ad66e54cdd610286b233c639418eb2b995978f1a145d6cfa77a016071d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, vcs-type=git, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, version=7, CEPH_POINT_RELEASE=, GIT_BRANCH=main, vendor=Red Hat, Inc., io.buildah.version=1.42.2, release=1770267347, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, name=rhceph, architecture=x86_64, GIT_CLEAN=True, RELEASE=main, distribution-scope=public)
Feb 20 09:14:31 np0005625203.localdomain python3.9[151709]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:14:31 np0005625203.localdomain sudo[151707]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:31 np0005625203.localdomain sudo[151478]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:31 np0005625203.localdomain sudo[151770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:14:31 np0005625203.localdomain sudo[151770]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:14:31 np0005625203.localdomain sudo[151770]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:31 np0005625203.localdomain sudo[151785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 09:14:31 np0005625203.localdomain sudo[151785]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:14:32 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4632 DF PROTO=TCP SPT=52684 DPT=9882 SEQ=1917045596 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F74C400000000001030307) 
Feb 20 09:14:32 np0005625203.localdomain sudo[151785]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:32 np0005625203.localdomain python3.9[151906]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 20 09:14:32 np0005625203.localdomain sudo[151922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:14:32 np0005625203.localdomain sudo[151922]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:14:32 np0005625203.localdomain sudo[151922]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:33 np0005625203.localdomain sudo[152012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jwaqbhdschfgburfudskkmijtwebvfyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578873.373549-922-31112191406005/AnsiballZ_command.py
Feb 20 09:14:33 np0005625203.localdomain sudo[152012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:33 np0005625203.localdomain python3.9[152014]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=np0005625203.localdomain external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:2e:0a:e8:77:41:0b" external_ids:ovn-encap-ip=172.19.0.107 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=tcp:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:14:33 np0005625203.localdomain ovs-vsctl[152015]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=np0005625203.localdomain external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:2e:0a:e8:77:41:0b external_ids:ovn-encap-ip=172.19.0.107 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=tcp:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Feb 20 09:14:33 np0005625203.localdomain sudo[152012]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:34 np0005625203.localdomain sudo[152105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hnacdkesblmiksbyfxthjdmljeturwkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578874.1739192-949-101476334902328/AnsiballZ_command.py
Feb 20 09:14:34 np0005625203.localdomain sudo[152105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:34 np0005625203.localdomain python3.9[152107]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                                            ovs-vsctl show | grep -q "Manager"
                                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:14:34 np0005625203.localdomain sudo[152105]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:35 np0005625203.localdomain python3.9[152200]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:14:36 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4633 DF PROTO=TCP SPT=52684 DPT=9882 SEQ=1917045596 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F75C010000000001030307) 
Feb 20 09:14:36 np0005625203.localdomain sudo[152292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rfrwhzelxqyavdfayskmaplyeakdklac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578876.2026398-1003-180654657136555/AnsiballZ_file.py
Feb 20 09:14:36 np0005625203.localdomain sudo[152292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:36 np0005625203.localdomain python3.9[152294]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:14:36 np0005625203.localdomain sudo[152292]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:37 np0005625203.localdomain sudo[152384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kelyvbuqxenveljaouxiamshbgjziknp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578876.9394546-1027-227469955881965/AnsiballZ_stat.py
Feb 20 09:14:37 np0005625203.localdomain sudo[152384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:37 np0005625203.localdomain python3.9[152386]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:14:37 np0005625203.localdomain sudo[152384]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:37 np0005625203.localdomain sudo[152432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gthghidieszulleokirwgcciyeuoojbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578876.9394546-1027-227469955881965/AnsiballZ_file.py
Feb 20 09:14:37 np0005625203.localdomain sudo[152432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:38 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9690 DF PROTO=TCP SPT=42674 DPT=9105 SEQ=4169940105 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F763C10000000001030307) 
Feb 20 09:14:38 np0005625203.localdomain python3.9[152434]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:14:38 np0005625203.localdomain sudo[152432]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:38 np0005625203.localdomain sudo[152524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-swzfpiuyuqroaodxtslpvukzxokpzirm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578878.2718112-1027-63500945480814/AnsiballZ_stat.py
Feb 20 09:14:38 np0005625203.localdomain sudo[152524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:38 np0005625203.localdomain python3.9[152526]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:14:38 np0005625203.localdomain sudo[152524]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:38 np0005625203.localdomain sudo[152572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-axlkaqdycpjfodfwijymxupehudsznsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578878.2718112-1027-63500945480814/AnsiballZ_file.py
Feb 20 09:14:38 np0005625203.localdomain sudo[152572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:39 np0005625203.localdomain python3.9[152574]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:14:39 np0005625203.localdomain sudo[152572]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:39 np0005625203.localdomain sudo[152664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rzjgogmzgxtvbiayuvrggobxrpylmhdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578879.3528862-1096-130402450467287/AnsiballZ_file.py
Feb 20 09:14:39 np0005625203.localdomain sudo[152664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:39 np0005625203.localdomain python3.9[152666]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:14:39 np0005625203.localdomain sudo[152664]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:40 np0005625203.localdomain sudo[152756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bihjsttfnkajlhnjxjrroaawkqxdxiyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578879.9914682-1120-133182409588910/AnsiballZ_stat.py
Feb 20 09:14:40 np0005625203.localdomain sudo[152756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:40 np0005625203.localdomain python3.9[152758]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:14:40 np0005625203.localdomain sudo[152756]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:40 np0005625203.localdomain sudo[152804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yocdhtfxitlkuvsnpirrjavorbvwsvju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578879.9914682-1120-133182409588910/AnsiballZ_file.py
Feb 20 09:14:40 np0005625203.localdomain sudo[152804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:40 np0005625203.localdomain python3.9[152806]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:14:40 np0005625203.localdomain sudo[152804]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:41 np0005625203.localdomain sudo[152896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-olwmdwblupjwhjqzyuvjeklapxbmhaxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578881.1006687-1156-122164775632718/AnsiballZ_stat.py
Feb 20 09:14:41 np0005625203.localdomain sudo[152896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:41 np0005625203.localdomain python3.9[152898]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:14:41 np0005625203.localdomain sudo[152896]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:41 np0005625203.localdomain sudo[152944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pzmduzodldjbximigqctgkfqvuhlewpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578881.1006687-1156-122164775632718/AnsiballZ_file.py
Feb 20 09:14:41 np0005625203.localdomain sudo[152944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:41 np0005625203.localdomain python3.9[152946]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:14:41 np0005625203.localdomain sudo[152944]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:42 np0005625203.localdomain sudo[153036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wcymwwystuobwawggrahliyeotfakqpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578882.1556904-1192-255856447385381/AnsiballZ_systemd.py
Feb 20 09:14:42 np0005625203.localdomain sudo[153036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:42 np0005625203.localdomain python3.9[153038]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:14:42 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:14:42 np0005625203.localdomain systemd-sysv-generator[153064]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:14:42 np0005625203.localdomain systemd-rc-local-generator[153060]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:14:42 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:14:43 np0005625203.localdomain sudo[153036]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:43 np0005625203.localdomain sudo[153166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tvlblvgbvxxdspytgfecvidnfqhkheez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578883.6207895-1216-57127288461622/AnsiballZ_stat.py
Feb 20 09:14:43 np0005625203.localdomain sudo[153166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:44 np0005625203.localdomain python3.9[153168]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:14:44 np0005625203.localdomain sudo[153166]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:44 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9692 DF PROTO=TCP SPT=42674 DPT=9105 SEQ=4169940105 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F77B800000000001030307) 
Feb 20 09:14:44 np0005625203.localdomain sudo[153214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-piisdofqbmlbfbogpxmejtpixebmzcye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578883.6207895-1216-57127288461622/AnsiballZ_file.py
Feb 20 09:14:44 np0005625203.localdomain sudo[153214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:44 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34552 DF PROTO=TCP SPT=50376 DPT=9102 SEQ=236940475 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F77C800000000001030307) 
Feb 20 09:14:44 np0005625203.localdomain python3.9[153216]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:14:44 np0005625203.localdomain sudo[153214]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:44 np0005625203.localdomain sudo[153306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fhmdcfxxwofxowtbmepzompmntpiuafs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578884.7360787-1252-32084767542227/AnsiballZ_stat.py
Feb 20 09:14:45 np0005625203.localdomain sudo[153306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:45 np0005625203.localdomain python3.9[153308]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:14:45 np0005625203.localdomain sudo[153306]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:45 np0005625203.localdomain sudo[153354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mheykqbehdbduyokolpbuhsjpjjsmmop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578884.7360787-1252-32084767542227/AnsiballZ_file.py
Feb 20 09:14:45 np0005625203.localdomain sudo[153354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:45 np0005625203.localdomain python3.9[153356]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:14:45 np0005625203.localdomain sudo[153354]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:47 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46006 DF PROTO=TCP SPT=57466 DPT=9101 SEQ=3128467287 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F786800000000001030307) 
Feb 20 09:14:47 np0005625203.localdomain sudo[153446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-quorjumggqjulsqhgcsjdiomvmtfatvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578886.8661296-1288-229395172173669/AnsiballZ_systemd.py
Feb 20 09:14:47 np0005625203.localdomain sudo[153446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:47 np0005625203.localdomain python3.9[153448]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:14:47 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:14:47 np0005625203.localdomain systemd-rc-local-generator[153474]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:14:47 np0005625203.localdomain systemd-sysv-generator[153479]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:14:47 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:14:47 np0005625203.localdomain systemd[1]: Starting Create netns directory...
Feb 20 09:14:47 np0005625203.localdomain systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Feb 20 09:14:47 np0005625203.localdomain systemd[1]: netns-placeholder.service: Deactivated successfully.
Feb 20 09:14:47 np0005625203.localdomain systemd[1]: Finished Create netns directory.
Feb 20 09:14:47 np0005625203.localdomain sudo[153446]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:49 np0005625203.localdomain sudo[153584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wagfofsctoiqioqxswwcwambydpdcrgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578888.835166-1318-223668921218110/AnsiballZ_file.py
Feb 20 09:14:49 np0005625203.localdomain sudo[153584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:49 np0005625203.localdomain python3.9[153586]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:14:49 np0005625203.localdomain sudo[153584]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:49 np0005625203.localdomain sudo[153676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uxlvdmmrizuegifvbhfdorjwwukywtnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578889.4580545-1342-229589897345578/AnsiballZ_stat.py
Feb 20 09:14:49 np0005625203.localdomain sudo[153676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:49 np0005625203.localdomain python3.9[153678]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:14:49 np0005625203.localdomain sudo[153676]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:50 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57872 DF PROTO=TCP SPT=35284 DPT=9101 SEQ=710917554 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F792800000000001030307) 
Feb 20 09:14:50 np0005625203.localdomain sshd[153740]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:14:50 np0005625203.localdomain sudo[153750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xqtnhoyhrrrevvwhtjfdmbqevnpjdisy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578889.4580545-1342-229589897345578/AnsiballZ_copy.py
Feb 20 09:14:50 np0005625203.localdomain sudo[153750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:50 np0005625203.localdomain python3.9[153752]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771578889.4580545-1342-229589897345578/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:14:50 np0005625203.localdomain sudo[153750]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:50 np0005625203.localdomain sshd[153768]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:14:50 np0005625203.localdomain sshd[153740]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:14:50 np0005625203.localdomain sshd[153768]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:14:51 np0005625203.localdomain sudo[153845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-shqfvnudopgjoaqvzxphdivatalxjyio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578890.9579175-1393-140059121574409/AnsiballZ_file.py
Feb 20 09:14:51 np0005625203.localdomain sudo[153845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:51 np0005625203.localdomain python3.9[153847]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:14:51 np0005625203.localdomain sudo[153845]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:51 np0005625203.localdomain sudo[153937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pexxziackudxvopgjcuyluzhvlunjpyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578891.624831-1417-153978922335359/AnsiballZ_file.py
Feb 20 09:14:51 np0005625203.localdomain sudo[153937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:52 np0005625203.localdomain python3.9[153939]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:14:52 np0005625203.localdomain sudo[153937]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:52 np0005625203.localdomain sudo[154029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rdtyykhtxzerkqyfjhirgbawzbwhtrfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578892.3897812-1441-239362039517936/AnsiballZ_stat.py
Feb 20 09:14:52 np0005625203.localdomain sudo[154029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:52 np0005625203.localdomain python3.9[154031]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:14:52 np0005625203.localdomain sudo[154029]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:53 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46008 DF PROTO=TCP SPT=57466 DPT=9101 SEQ=3128467287 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F79E400000000001030307) 
Feb 20 09:14:53 np0005625203.localdomain sudo[154104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mitcochsyrhdoahajwyvusyimuretuyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578892.3897812-1441-239362039517936/AnsiballZ_copy.py
Feb 20 09:14:53 np0005625203.localdomain sudo[154104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:53 np0005625203.localdomain python3.9[154106]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1771578892.3897812-1441-239362039517936/.source.json _original_basename=.aj3v0btf follow=False checksum=38f75f59f5c2ef6b5da12297bfd31cd1e97012ac backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:14:53 np0005625203.localdomain sudo[154104]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:54 np0005625203.localdomain python3.9[154196]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:14:56 np0005625203.localdomain sudo[154447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wlqppcdksrlheeuggutivdmyvofwlzvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578895.67284-1561-230254551126351/AnsiballZ_container_config_data.py
Feb 20 09:14:56 np0005625203.localdomain sudo[154447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:56 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23579 DF PROTO=TCP SPT=39042 DPT=9100 SEQ=3052421995 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F7AA800000000001030307) 
Feb 20 09:14:56 np0005625203.localdomain python3.9[154449]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Feb 20 09:14:56 np0005625203.localdomain sudo[154447]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:57 np0005625203.localdomain sudo[154539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gxxjszlypeivhhjxrqyvfbhipweiaoih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578896.7020144-1594-217715369107766/AnsiballZ_container_config_hash.py
Feb 20 09:14:57 np0005625203.localdomain sudo[154539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:57 np0005625203.localdomain python3.9[154541]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb 20 09:14:57 np0005625203.localdomain sudo[154539]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:58 np0005625203.localdomain sudo[154631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wqsshfmdssvvpzdmbuokswviqnnhpvah ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771578897.9455378-1624-95944907307819/AnsiballZ_edpm_container_manage.py
Feb 20 09:14:58 np0005625203.localdomain sudo[154631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:58 np0005625203.localdomain python3[154633]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False
Feb 20 09:14:58 np0005625203.localdomain python3[154633]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
                                                               {
                                                                    "Id": "9f8c6308802db66f6c1100257e3fa9593740e85d82f038b4185cf756493dc94e",
                                                                    "Digest": "sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e",
                                                                    "RepoTags": [
                                                                         "quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified"
                                                                    ],
                                                                    "RepoDigests": [
                                                                         "quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:099d88ae13fa2b3409da5310cdcba7fa01d2c87a8bc98296299a57054b9a075e"
                                                                    ],
                                                                    "Parent": "",
                                                                    "Comment": "",
                                                                    "Created": "2026-01-30T06:38:56.623500445Z",
                                                                    "Config": {
                                                                         "User": "root",
                                                                         "Env": [
                                                                              "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                                                                              "LANG=en_US.UTF-8",
                                                                              "TZ=UTC",
                                                                              "container=oci"
                                                                         ],
                                                                         "Entrypoint": [
                                                                              "dumb-init",
                                                                              "--single-child",
                                                                              "--"
                                                                         ],
                                                                         "Cmd": [
                                                                              "kolla_start"
                                                                         ],
                                                                         "Labels": {
                                                                              "io.buildah.version": "1.41.3",
                                                                              "maintainer": "OpenStack Kubernetes Operator team",
                                                                              "org.label-schema.build-date": "20260127",
                                                                              "org.label-schema.license": "GPLv2",
                                                                              "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                              "org.label-schema.schema-version": "1.0",
                                                                              "org.label-schema.vendor": "CentOS",
                                                                              "tcib_build_tag": "b85d0548925081ae8c6bdd697658cec4",
                                                                              "tcib_managed": "true"
                                                                         },
                                                                         "StopSignal": "SIGTERM"
                                                                    },
                                                                    "Version": "",
                                                                    "Author": "",
                                                                    "Architecture": "amd64",
                                                                    "Os": "linux",
                                                                    "Size": 346422728,
                                                                    "VirtualSize": 346422728,
                                                                    "GraphDriver": {
                                                                         "Name": "overlay",
                                                                         "Data": {
                                                                              "LowerDir": "/var/lib/containers/storage/overlay/5f30d5cd30916d88e24f21a5c8313738088a285d6d2d0efec09cc705e86eb786/diff:/var/lib/containers/storage/overlay/1ad843ea4b31b05bcf49ccd6faa74bd0d6976ffabe60466fd78caf7ec41bf4ac/diff:/var/lib/containers/storage/overlay/57c9a356b8a6d9095c1e6bfd1bb5d3b87c9d1b944c2c5d8a1da6e61dd690c595/diff",
                                                                              "UpperDir": "/var/lib/containers/storage/overlay/ba6f0be74a40197166410c33403600ee466dbd9d2ddae7d7f49f78c9646720b2/diff",
                                                                              "WorkDir": "/var/lib/containers/storage/overlay/ba6f0be74a40197166410c33403600ee466dbd9d2ddae7d7f49f78c9646720b2/work"
                                                                         }
                                                                    },
                                                                    "RootFS": {
                                                                         "Type": "layers",
                                                                         "Layers": [
                                                                              "sha256:57c9a356b8a6d9095c1e6bfd1bb5d3b87c9d1b944c2c5d8a1da6e61dd690c595",
                                                                              "sha256:315008a247098d7a6218ae8aaacc68c9c19036e3778f3bb6313e5d0200cfa613",
                                                                              "sha256:033e0289d512b27a678c3feb7195acb9c5f2fbb27c9b2d8c8b5b5f6156f0d11f",
                                                                              "sha256:f848a534c5dfe59c31c3da34c3d2466bdea7e8da7def4225acdd3ffef1544d2f"
                                                                         ]
                                                                    },
                                                                    "Labels": {
                                                                         "io.buildah.version": "1.41.3",
                                                                         "maintainer": "OpenStack Kubernetes Operator team",
                                                                         "org.label-schema.build-date": "20260127",
                                                                         "org.label-schema.license": "GPLv2",
                                                                         "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                         "org.label-schema.schema-version": "1.0",
                                                                         "org.label-schema.vendor": "CentOS",
                                                                         "tcib_build_tag": "b85d0548925081ae8c6bdd697658cec4",
                                                                         "tcib_managed": "true"
                                                                    },
                                                                    "Annotations": {},
                                                                    "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",
                                                                    "User": "root",
                                                                    "History": [
                                                                         {
                                                                              "created": "2026-01-28T05:56:51.126388624Z",
                                                                              "created_by": "/bin/sh -c #(nop) ADD file:54935d5b0598cdb1451aeae3c8627aade8d55dcef2e876b35185c8e36be64256 in / ",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-28T05:56:51.126459235Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 9 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20260127\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-28T05:56:53.726938221Z",
                                                                              "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:18.890429494Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",
                                                                              "comment": "FROM quay.io/centos/centos:stream9",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:18.890534417Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:18.890553228Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:18.890570688Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:18.890616649Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:18.890659121Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:19.232761948Z",
                                                                              "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:52.670543613Z",
                                                                              "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:55.650316471Z",
                                                                              "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which python-tcib-containers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:55.970652058Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/uid_gid_manage.sh /usr/local/bin/uid_gid_manage",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:56.274301506Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/uid_gid_manage",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:56.82928237Z",
                                                                              "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage kolla hugetlbfs libvirt qemu",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:57.134416869Z",
                                                                              "created_by": "/bin/sh -c touch /usr/local/bin/kolla_extend_start && chmod 755 /usr/local/bin/kolla_extend_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:57.444274899Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/set_configs.py /usr/local/bin/kolla_set_configs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:57.746599531Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_set_configs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:58.041383545Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/start.sh /usr/local/bin/kolla_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:58.352119949Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:58.671042058Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/httpd_setup.sh /usr/local/bin/kolla_httpd_setup",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:58.969834612Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_httpd_setup",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:59.264649297Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/copy_cacerts.sh /usr/local/bin/kolla_copy_cacerts",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:59.518696627Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_copy_cacerts",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:59.800434902Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/sudoers /etc/sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:00.115933627Z",
                                                                              "created_by": "/bin/sh -c chmod 440 /etc/sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:00.41398479Z",
                                                                              "created_by": "/bin/sh -c sed -ri '/^(passwd:|group:)/ s/systemd//g' /etc/nsswitch.conf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:03.414738437Z",
                                                                              "created_by": "/bin/sh -c dnf -y reinstall which && rpm -e --nodeps tzdata && dnf -y install tzdata",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:03.709666444Z",
                                                                              "created_by": "/bin/sh -c if [ ! -f \"/etc/localtime\" ]; then ln -s /usr/share/zoneinfo/Etc/UTC /etc/localtime; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:04.019868523Z",
                                                                              "created_by": "/bin/sh -c mkdir -p /openstack",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:05.41751141Z",
                                                                              "created_by": "/bin/sh -c if [ 'centos' == 'centos' ];then if [ -n \"$(rpm -qa redhat-release)\" ];then rpm -e --nodeps redhat-release; fi ; dnf -y install centos-stream-release; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:07.124324267Z",
                                                                              "created_by": "/bin/sh -c dnf update --excludepkgs redhat-release -y && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:07.124384329Z",
                                                                              "created_by": "/bin/sh -c #(nop) STOPSIGNAL SIGTERM",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:07.124399349Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENTRYPOINT [\"dumb-init\", \"--single-child\", \"--\"]",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:07.124410339Z",
                                                                              "created_by": "/bin/sh -c #(nop) CMD [\"kolla_start\"]",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:08.028503475Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"b85d0548925081ae8c6bdd697658cec4\""
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:13:00.623406883Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-base:b85d0548925081ae8c6bdd697658cec4",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:13:55.918991169Z",
                                                                              "created_by": "/bin/sh -c dnf -y install openvswitch openvswitch-ovn-common python3-netifaces python3-openvswitch tcpdump && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:13:57.814850041Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"b85d0548925081ae8c6bdd697658cec4\""
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:38:21.443386852Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-ovn-base:b85d0548925081ae8c6bdd697658cec4",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:38:56.622512308Z",
                                                                              "created_by": "/bin/sh -c dnf -y install openvswitch-ovn-host && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:38:57.466949121Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"b85d0548925081ae8c6bdd697658cec4\""
                                                                         }
                                                                    ],
                                                                    "NamesHistory": [
                                                                         "quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified"
                                                                    ]
                                                               }
                                                          ]
                                                          : quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Feb 20 09:14:59 np0005625203.localdomain podman[154682]: 2026-02-20 09:14:59.096218598 +0000 UTC m=+0.088095626 container remove d870110a511a1bb728db1dfbe9467dd904954bfe13b03cdd10de8315cd192933 (image=registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1, name=ovn_controller, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=openstack-ovn-controller-container, io.buildah.version=1.41.5, distribution-scope=public, config_id=tripleo_step4, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ovn-controller, io.k8s.description=Red Hat OpenStack Platform 17.1 ovn-controller, build-date=2026-01-12T22:36:40Z, description=Red Hat OpenStack Platform 17.1 ovn-controller, org.opencontainers.image.revision=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, managed_by=tripleo_ansible, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ovn-controller, version=17.1.13, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 ovn-controller, tcib_managed=true, io.openshift.expose-services=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'test': '/openstack/healthcheck 6642'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ovn-controller:17.1', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'user': 'root', 'volumes': ['/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/log/containers/openvswitch:/var/log/openvswitch:z', '/var/log/containers/openvswitch:/var/log/ovn:z']}, container_name=ovn_controller, vendor=Red Hat, Inc., architecture=x86_64, release=1766032510, vcs-type=git, name=rhosp-rhel9/openstack-ovn-controller, maintainer=OpenStack TripleO Team, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, vcs-ref=ac1441bb241fd7ea83ec557f8c760d89937b0b0c, org.opencontainers.image.created=2026-01-12T22:36:40Z)
Feb 20 09:14:59 np0005625203.localdomain python3[154633]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force ovn_controller
Feb 20 09:14:59 np0005625203.localdomain podman[154696]: 
Feb 20 09:14:59 np0005625203.localdomain podman[154696]: 2026-02-20 09:14:59.194650822 +0000 UTC m=+0.080606918 container create efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:14:59 np0005625203.localdomain podman[154696]: 2026-02-20 09:14:59.157845425 +0000 UTC m=+0.043801541 image pull  quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Feb 20 09:14:59 np0005625203.localdomain python3[154633]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Feb 20 09:14:59 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5140 DF PROTO=TCP SPT=47082 DPT=9102 SEQ=4143847480 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F7B6850000000001030307) 
Feb 20 09:14:59 np0005625203.localdomain sudo[154631]: pam_unix(sudo:session): session closed for user root
Feb 20 09:14:59 np0005625203.localdomain sudo[154823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qvgsudndcxbrkoqxuaycpsjpumkhpyup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578899.5703523-1648-207794715183038/AnsiballZ_stat.py
Feb 20 09:14:59 np0005625203.localdomain sudo[154823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:14:59 np0005625203.localdomain python3.9[154825]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:15:00 np0005625203.localdomain sudo[154823]: pam_unix(sudo:session): session closed for user root
Feb 20 09:15:00 np0005625203.localdomain sudo[154917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ypoahcqgdjdpqsmbaauigugarscxkakx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578900.264103-1675-19483183540841/AnsiballZ_file.py
Feb 20 09:15:00 np0005625203.localdomain sudo[154917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:15:00 np0005625203.localdomain python3.9[154919]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:15:00 np0005625203.localdomain sudo[154917]: pam_unix(sudo:session): session closed for user root
Feb 20 09:15:00 np0005625203.localdomain sudo[154963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-diukpgqhzqktorowjvisxvegdidnumhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578900.264103-1675-19483183540841/AnsiballZ_stat.py
Feb 20 09:15:00 np0005625203.localdomain sudo[154963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:15:01 np0005625203.localdomain python3.9[154965]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:15:01 np0005625203.localdomain sudo[154963]: pam_unix(sudo:session): session closed for user root
Feb 20 09:15:01 np0005625203.localdomain sudo[155054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zycovcuudgefwcwasvdnhwzpjmienxtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578901.1388052-1675-161954359856550/AnsiballZ_copy.py
Feb 20 09:15:01 np0005625203.localdomain sudo[155054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:15:01 np0005625203.localdomain python3.9[155056]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771578901.1388052-1675-161954359856550/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:15:01 np0005625203.localdomain sudo[155054]: pam_unix(sudo:session): session closed for user root
Feb 20 09:15:02 np0005625203.localdomain sudo[155100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bmhsporwniyyxwapuwbkcwkaqyudcxqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578901.1388052-1675-161954359856550/AnsiballZ_systemd.py
Feb 20 09:15:02 np0005625203.localdomain sudo[155100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:15:02 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30185 DF PROTO=TCP SPT=54666 DPT=9882 SEQ=1722845691 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F7C1400000000001030307) 
Feb 20 09:15:02 np0005625203.localdomain python3.9[155102]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 20 09:15:02 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:15:02 np0005625203.localdomain systemd-rc-local-generator[155129]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:15:02 np0005625203.localdomain systemd-sysv-generator[155132]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:15:02 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:15:02 np0005625203.localdomain sudo[155100]: pam_unix(sudo:session): session closed for user root
Feb 20 09:15:02 np0005625203.localdomain sudo[155182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vksjowhfrxjjrwczxlddcbayuuclvyiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578901.1388052-1675-161954359856550/AnsiballZ_systemd.py
Feb 20 09:15:02 np0005625203.localdomain sudo[155182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:15:03 np0005625203.localdomain python3.9[155184]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:15:03 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:15:03 np0005625203.localdomain systemd-rc-local-generator[155212]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:15:03 np0005625203.localdomain systemd-sysv-generator[155215]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:15:03 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:15:03 np0005625203.localdomain systemd[1]: Starting ovn_controller container...
Feb 20 09:15:03 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:15:03 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cec84c6398d162d6fcd8e404f18cb61929280006fa76eed68b767be40830cd69/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Feb 20 09:15:03 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:15:03 np0005625203.localdomain podman[155227]: 2026-02-20 09:15:03.79203724 +0000 UTC m=+0.147841509 container init efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:15:03 np0005625203.localdomain ovn_controller[155241]: + sudo -E kolla_set_configs
Feb 20 09:15:03 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:15:03 np0005625203.localdomain podman[155227]: 2026-02-20 09:15:03.823253745 +0000 UTC m=+0.179058014 container start efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:15:03 np0005625203.localdomain edpm-start-podman-container[155227]: ovn_controller
Feb 20 09:15:03 np0005625203.localdomain systemd[1]: Created slice User Slice of UID 0.
Feb 20 09:15:03 np0005625203.localdomain systemd[1]: Starting User Runtime Directory /run/user/0...
Feb 20 09:15:03 np0005625203.localdomain systemd[1]: Finished User Runtime Directory /run/user/0.
Feb 20 09:15:03 np0005625203.localdomain systemd[1]: Starting User Manager for UID 0...
Feb 20 09:15:03 np0005625203.localdomain systemd[155274]: pam_unix(systemd-user:session): session opened for user root(uid=0) by (uid=0)
Feb 20 09:15:03 np0005625203.localdomain edpm-start-podman-container[155226]: Creating additional drop-in dependency for "ovn_controller" (efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41)
Feb 20 09:15:03 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:15:03 np0005625203.localdomain podman[155248]: 2026-02-20 09:15:03.987345243 +0000 UTC m=+0.157728134 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 20 09:15:04 np0005625203.localdomain systemd[155274]: Queued start job for default target Main User Target.
Feb 20 09:15:04 np0005625203.localdomain systemd[155274]: Created slice User Application Slice.
Feb 20 09:15:04 np0005625203.localdomain systemd[155274]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Feb 20 09:15:04 np0005625203.localdomain systemd-journald[48285]: Field hash table of /run/log/journal/01f46965e72fd8a157841feaa66c8d52/system.journal has a fill level at 75.1 (250 of 333 items), suggesting rotation.
Feb 20 09:15:04 np0005625203.localdomain systemd-journald[48285]: /run/log/journal/01f46965e72fd8a157841feaa66c8d52/system.journal: Journal header limits reached or header out-of-date, rotating.
Feb 20 09:15:04 np0005625203.localdomain rsyslogd[758]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Feb 20 09:15:04 np0005625203.localdomain systemd[155274]: Started Daily Cleanup of User's Temporary Directories.
Feb 20 09:15:04 np0005625203.localdomain systemd[155274]: Reached target Paths.
Feb 20 09:15:04 np0005625203.localdomain systemd[155274]: Reached target Timers.
Feb 20 09:15:04 np0005625203.localdomain systemd[155274]: Starting D-Bus User Message Bus Socket...
Feb 20 09:15:04 np0005625203.localdomain systemd[155274]: Starting Create User's Volatile Files and Directories...
Feb 20 09:15:04 np0005625203.localdomain podman[155248]: 2026-02-20 09:15:04.035357666 +0000 UTC m=+0.205740617 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Feb 20 09:15:04 np0005625203.localdomain systemd[155274]: Listening on D-Bus User Message Bus Socket.
Feb 20 09:15:04 np0005625203.localdomain systemd[155274]: Reached target Sockets.
Feb 20 09:15:04 np0005625203.localdomain podman[155248]: unhealthy
Feb 20 09:15:04 np0005625203.localdomain systemd[155274]: Finished Create User's Volatile Files and Directories.
Feb 20 09:15:04 np0005625203.localdomain systemd[155274]: Reached target Basic System.
Feb 20 09:15:04 np0005625203.localdomain systemd[155274]: Reached target Main User Target.
Feb 20 09:15:04 np0005625203.localdomain systemd[155274]: Startup finished in 130ms.
Feb 20 09:15:04 np0005625203.localdomain rsyslogd[758]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Feb 20 09:15:04 np0005625203.localdomain systemd-rc-local-generator[155327]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:15:04 np0005625203.localdomain systemd-sysv-generator[155330]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:15:04 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:15:04 np0005625203.localdomain systemd[1]: Started User Manager for UID 0.
Feb 20 09:15:04 np0005625203.localdomain systemd[1]: Started ovn_controller container.
Feb 20 09:15:04 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 09:15:04 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Failed with result 'exit-code'.
Feb 20 09:15:04 np0005625203.localdomain systemd[1]: Started Session c11 of User root.
Feb 20 09:15:04 np0005625203.localdomain sudo[155182]: pam_unix(sudo:session): session closed for user root
Feb 20 09:15:04 np0005625203.localdomain ovn_controller[155241]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb 20 09:15:04 np0005625203.localdomain ovn_controller[155241]: INFO:__main__:Validating config file
Feb 20 09:15:04 np0005625203.localdomain ovn_controller[155241]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb 20 09:15:04 np0005625203.localdomain ovn_controller[155241]: INFO:__main__:Writing out command to execute
Feb 20 09:15:04 np0005625203.localdomain systemd[1]: session-c11.scope: Deactivated successfully.
Feb 20 09:15:04 np0005625203.localdomain ovn_controller[155241]: ++ cat /run_command
Feb 20 09:15:04 np0005625203.localdomain ovn_controller[155241]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock '
Feb 20 09:15:04 np0005625203.localdomain ovn_controller[155241]: + ARGS=
Feb 20 09:15:04 np0005625203.localdomain ovn_controller[155241]: + sudo kolla_copy_cacerts
Feb 20 09:15:04 np0005625203.localdomain systemd[1]: Started Session c12 of User root.
Feb 20 09:15:04 np0005625203.localdomain systemd[1]: session-c12.scope: Deactivated successfully.
Feb 20 09:15:04 np0005625203.localdomain ovn_controller[155241]: + [[ ! -n '' ]]
Feb 20 09:15:04 np0005625203.localdomain ovn_controller[155241]: + . kolla_extend_start
Feb 20 09:15:04 np0005625203.localdomain ovn_controller[155241]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock '\'''
Feb 20 09:15:04 np0005625203.localdomain ovn_controller[155241]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock '
Feb 20 09:15:04 np0005625203.localdomain ovn_controller[155241]: + umask 0022
Feb 20 09:15:04 np0005625203.localdomain ovn_controller[155241]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock
Feb 20 09:15:04 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:15:04Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Feb 20 09:15:04 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:15:04Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Feb 20 09:15:04 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:15:04Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Feb 20 09:15:04 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:15:04Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Feb 20 09:15:04 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:15:04Z|00005|reconnect|INFO|tcp:ovsdbserver-sb.openstack.svc:6642: connecting...
Feb 20 09:15:04 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:15:04Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Feb 20 09:15:04 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:15:04Z|00007|reconnect|INFO|tcp:ovsdbserver-sb.openstack.svc:6642: connected
Feb 20 09:15:04 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:15:04Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Feb 20 09:15:04 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:15:04Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Feb 20 09:15:04 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:15:04Z|00010|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Feb 20 09:15:04 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:15:04Z|00011|features|INFO|OVS Feature: ct_zero_snat, state: supported
Feb 20 09:15:04 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:15:04Z|00012|features|INFO|OVS Feature: ct_flush, state: supported
Feb 20 09:15:04 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:15:04Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Feb 20 09:15:04 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:15:04Z|00014|main|INFO|OVS feature set changed, force recompute.
Feb 20 09:15:04 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:15:04Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Feb 20 09:15:04 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:15:04Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Feb 20 09:15:04 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:15:04Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Feb 20 09:15:04 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:15:04Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Feb 20 09:15:04 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:15:04Z|00019|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Feb 20 09:15:04 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:15:04Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Feb 20 09:15:04 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:15:04Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Feb 20 09:15:04 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:15:04Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Feb 20 09:15:04 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:15:04Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Feb 20 09:15:04 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:15:04Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Feb 20 09:15:04 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:15:04Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Feb 20 09:15:04 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:15:04Z|00020|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Feb 20 09:15:04 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:15:04Z|00021|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Feb 20 09:15:05 np0005625203.localdomain python3.9[155435]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb 20 09:15:06 np0005625203.localdomain sudo[155525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-strzvozxtuaqhxeoqkjhwnhfhgmmmbst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578905.7886655-1810-171426520165782/AnsiballZ_stat.py
Feb 20 09:15:06 np0005625203.localdomain sudo[155525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:15:06 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30186 DF PROTO=TCP SPT=54666 DPT=9882 SEQ=1722845691 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F7D1000000000001030307) 
Feb 20 09:15:06 np0005625203.localdomain python3.9[155527]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:15:06 np0005625203.localdomain sudo[155525]: pam_unix(sudo:session): session closed for user root
Feb 20 09:15:06 np0005625203.localdomain sudo[155598]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bklghwyfacwqpwscwlnxyoqkhriwbwme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578905.7886655-1810-171426520165782/AnsiballZ_copy.py
Feb 20 09:15:06 np0005625203.localdomain sudo[155598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:15:06 np0005625203.localdomain python3.9[155600]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771578905.7886655-1810-171426520165782/.source.yaml _original_basename=.x3x6y8z2 follow=False checksum=035aea7be6ab20b22f84818c544954f904d1fea8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:15:06 np0005625203.localdomain sudo[155598]: pam_unix(sudo:session): session closed for user root
Feb 20 09:15:07 np0005625203.localdomain sudo[155690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-stihobvmdxxaxlgnohmzwobxruxnkndx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578906.9918988-1855-116185998552066/AnsiballZ_command.py
Feb 20 09:15:07 np0005625203.localdomain sudo[155690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:15:07 np0005625203.localdomain python3.9[155692]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:15:07 np0005625203.localdomain ovs-vsctl[155693]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Feb 20 09:15:07 np0005625203.localdomain sudo[155690]: pam_unix(sudo:session): session closed for user root
Feb 20 09:15:07 np0005625203.localdomain sudo[155783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nrsnqlqxaywagveyijpnqxcyeolcbuyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578907.7085307-1879-176524669587042/AnsiballZ_command.py
Feb 20 09:15:07 np0005625203.localdomain sudo[155783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:15:08 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=35477 DF PROTO=TCP SPT=49102 DPT=9105 SEQ=602867338 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F7D8C10000000001030307) 
Feb 20 09:15:08 np0005625203.localdomain python3.9[155785]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:15:08 np0005625203.localdomain ovs-vsctl[155787]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Feb 20 09:15:08 np0005625203.localdomain sudo[155783]: pam_unix(sudo:session): session closed for user root
Feb 20 09:15:09 np0005625203.localdomain sudo[155878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dxkixcvlrjnyyvorwnsxgfzfnjpgueka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578908.8293333-1921-38550350467288/AnsiballZ_command.py
Feb 20 09:15:09 np0005625203.localdomain sudo[155878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:15:09 np0005625203.localdomain python3.9[155880]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:15:09 np0005625203.localdomain ovs-vsctl[155881]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Feb 20 09:15:09 np0005625203.localdomain sudo[155878]: pam_unix(sudo:session): session closed for user root
Feb 20 09:15:09 np0005625203.localdomain sshd[148603]: pam_unix(sshd:session): session closed for user zuul
Feb 20 09:15:09 np0005625203.localdomain systemd[1]: session-49.scope: Deactivated successfully.
Feb 20 09:15:09 np0005625203.localdomain systemd[1]: session-49.scope: Consumed 40.562s CPU time.
Feb 20 09:15:09 np0005625203.localdomain systemd-logind[759]: Session 49 logged out. Waiting for processes to exit.
Feb 20 09:15:09 np0005625203.localdomain systemd-logind[759]: Removed session 49.
Feb 20 09:15:11 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38015 DF PROTO=TCP SPT=41744 DPT=9105 SEQ=3210094760 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F7E4810000000001030307) 
Feb 20 09:15:14 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30187 DF PROTO=TCP SPT=54666 DPT=9882 SEQ=1722845691 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F7F0800000000001030307) 
Feb 20 09:15:14 np0005625203.localdomain sshd[155896]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:15:14 np0005625203.localdomain systemd[1]: Stopping User Manager for UID 0...
Feb 20 09:15:14 np0005625203.localdomain systemd[155274]: Activating special unit Exit the Session...
Feb 20 09:15:14 np0005625203.localdomain systemd[155274]: Stopped target Main User Target.
Feb 20 09:15:14 np0005625203.localdomain systemd[155274]: Stopped target Basic System.
Feb 20 09:15:14 np0005625203.localdomain systemd[155274]: Stopped target Paths.
Feb 20 09:15:14 np0005625203.localdomain systemd[155274]: Stopped target Sockets.
Feb 20 09:15:14 np0005625203.localdomain systemd[155274]: Stopped target Timers.
Feb 20 09:15:14 np0005625203.localdomain systemd[155274]: Stopped Daily Cleanup of User's Temporary Directories.
Feb 20 09:15:14 np0005625203.localdomain systemd[155274]: Closed D-Bus User Message Bus Socket.
Feb 20 09:15:14 np0005625203.localdomain systemd[155274]: Stopped Create User's Volatile Files and Directories.
Feb 20 09:15:14 np0005625203.localdomain systemd[155274]: Removed slice User Application Slice.
Feb 20 09:15:14 np0005625203.localdomain systemd[155274]: Reached target Shutdown.
Feb 20 09:15:14 np0005625203.localdomain systemd[155274]: Finished Exit the Session.
Feb 20 09:15:14 np0005625203.localdomain systemd[155274]: Reached target Exit the Session.
Feb 20 09:15:14 np0005625203.localdomain systemd[1]: user@0.service: Deactivated successfully.
Feb 20 09:15:14 np0005625203.localdomain systemd[1]: Stopped User Manager for UID 0.
Feb 20 09:15:14 np0005625203.localdomain systemd[1]: Stopping User Runtime Directory /run/user/0...
Feb 20 09:15:14 np0005625203.localdomain systemd[1]: run-user-0.mount: Deactivated successfully.
Feb 20 09:15:14 np0005625203.localdomain systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Feb 20 09:15:14 np0005625203.localdomain systemd[1]: Stopped User Runtime Directory /run/user/0.
Feb 20 09:15:14 np0005625203.localdomain systemd[1]: Removed slice User Slice of UID 0.
Feb 20 09:15:14 np0005625203.localdomain sshd[155896]: Accepted publickey for zuul from 192.168.122.30 port 50418 ssh2: RSA SHA256:uaxCfdsB9dLyL4znjsLCPDTT6O+6xZzjHRjczJFOA6k
Feb 20 09:15:14 np0005625203.localdomain systemd-logind[759]: New session 51 of user zuul.
Feb 20 09:15:14 np0005625203.localdomain sshd[155901]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:15:14 np0005625203.localdomain systemd[1]: Started Session 51 of User zuul.
Feb 20 09:15:14 np0005625203.localdomain sshd[155896]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Feb 20 09:15:15 np0005625203.localdomain python3.9[155994]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 20 09:15:16 np0005625203.localdomain sshd[155901]: Invalid user systemd from 152.32.129.236 port 45642
Feb 20 09:15:16 np0005625203.localdomain sshd[155901]: Received disconnect from 152.32.129.236 port 45642:11: Bye Bye [preauth]
Feb 20 09:15:16 np0005625203.localdomain sshd[155901]: Disconnected from invalid user systemd 152.32.129.236 port 45642 [preauth]
Feb 20 09:15:16 np0005625203.localdomain sudo[156088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xvmmofgkyrqkoocmhbjvmhwgwketnipz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578916.4647644-58-107286628165178/AnsiballZ_file.py
Feb 20 09:15:16 np0005625203.localdomain sudo[156088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:15:17 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30532 DF PROTO=TCP SPT=42240 DPT=9101 SEQ=3397848419 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F7FBC00000000001030307) 
Feb 20 09:15:17 np0005625203.localdomain python3.9[156090]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:15:17 np0005625203.localdomain sudo[156088]: pam_unix(sudo:session): session closed for user root
Feb 20 09:15:17 np0005625203.localdomain sudo[156180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-btmzjvvroaqejhwylqduzspldgrkmubt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578917.1711438-58-167499436434668/AnsiballZ_file.py
Feb 20 09:15:17 np0005625203.localdomain sudo[156180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:15:17 np0005625203.localdomain python3.9[156182]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:15:17 np0005625203.localdomain sudo[156180]: pam_unix(sudo:session): session closed for user root
Feb 20 09:15:17 np0005625203.localdomain sudo[156272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fctjlpfnbwhjomcdnejlkkkmdqectagp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578917.7329834-58-215800422922862/AnsiballZ_file.py
Feb 20 09:15:17 np0005625203.localdomain sudo[156272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:15:18 np0005625203.localdomain python3.9[156274]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:15:18 np0005625203.localdomain sudo[156272]: pam_unix(sudo:session): session closed for user root
Feb 20 09:15:18 np0005625203.localdomain sudo[156364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ugmlxrqgqvgpfhqtqidsypgrghhikxrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578918.333977-58-268909171650211/AnsiballZ_file.py
Feb 20 09:15:18 np0005625203.localdomain sudo[156364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:15:18 np0005625203.localdomain python3.9[156366]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:15:18 np0005625203.localdomain sudo[156364]: pam_unix(sudo:session): session closed for user root
Feb 20 09:15:19 np0005625203.localdomain sudo[156456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-biuhtmzfxunouahrticvsyunajtsqcxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578918.9511263-58-20403897072834/AnsiballZ_file.py
Feb 20 09:15:19 np0005625203.localdomain sudo[156456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:15:19 np0005625203.localdomain python3.9[156458]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:15:19 np0005625203.localdomain sudo[156456]: pam_unix(sudo:session): session closed for user root
Feb 20 09:15:20 np0005625203.localdomain python3.9[156548]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 20 09:15:20 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36090 DF PROTO=TCP SPT=58750 DPT=9100 SEQ=1839977412 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F808000000000001030307) 
Feb 20 09:15:20 np0005625203.localdomain sudo[156638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jtljnyswwpvpqtdvbbjicawbftjkkdpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578920.3491511-190-109404622644890/AnsiballZ_seboolean.py
Feb 20 09:15:20 np0005625203.localdomain sudo[156638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:15:20 np0005625203.localdomain python3.9[156640]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Feb 20 09:15:21 np0005625203.localdomain sudo[156638]: pam_unix(sudo:session): session closed for user root
Feb 20 09:15:21 np0005625203.localdomain python3.9[156730]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:15:22 np0005625203.localdomain python3.9[156804]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771578921.3163042-214-44198814849863/.source follow=False _original_basename=haproxy.j2 checksum=a5072e7b19ca96a1f495d94f97f31903737cfd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:15:23 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30534 DF PROTO=TCP SPT=42240 DPT=9101 SEQ=3397848419 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F813800000000001030307) 
Feb 20 09:15:23 np0005625203.localdomain python3.9[156894]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:15:23 np0005625203.localdomain python3.9[156967]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771578922.9912727-259-131556114267010/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:15:24 np0005625203.localdomain sudo[157057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hitmyimquayddsyxadwrooxltujklxws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578924.419906-310-21609561292414/AnsiballZ_setup.py
Feb 20 09:15:24 np0005625203.localdomain sudo[157057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:15:25 np0005625203.localdomain python3.9[157059]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 20 09:15:25 np0005625203.localdomain sudo[157057]: pam_unix(sudo:session): session closed for user root
Feb 20 09:15:25 np0005625203.localdomain sudo[157111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-avzxsqdsubtdzkahqqhcysvdtzgdrcfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578924.419906-310-21609561292414/AnsiballZ_dnf.py
Feb 20 09:15:25 np0005625203.localdomain sudo[157111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:15:25 np0005625203.localdomain python3.9[157113]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 20 09:15:26 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36092 DF PROTO=TCP SPT=58750 DPT=9100 SEQ=1839977412 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F81FC00000000001030307) 
Feb 20 09:15:29 np0005625203.localdomain sudo[157111]: pam_unix(sudo:session): session closed for user root
Feb 20 09:15:29 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50867 DF PROTO=TCP SPT=43990 DPT=9102 SEQ=4122273435 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F82BB50000000001030307) 
Feb 20 09:15:30 np0005625203.localdomain sudo[157205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yrtpjjqvedwdatvpmrqdqzuaefvutvus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578929.4941454-346-37753373677622/AnsiballZ_systemd.py
Feb 20 09:15:30 np0005625203.localdomain sudo[157205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:15:30 np0005625203.localdomain python3.9[157207]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 20 09:15:31 np0005625203.localdomain sudo[157205]: pam_unix(sudo:session): session closed for user root
Feb 20 09:15:32 np0005625203.localdomain python3.9[157300]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:15:32 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42401 DF PROTO=TCP SPT=46812 DPT=9882 SEQ=1402124375 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F836810000000001030307) 
Feb 20 09:15:32 np0005625203.localdomain python3.9[157371]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771578931.6431737-370-107173171277886/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:15:33 np0005625203.localdomain sudo[157459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:15:33 np0005625203.localdomain sudo[157459]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:15:33 np0005625203.localdomain sudo[157459]: pam_unix(sudo:session): session closed for user root
Feb 20 09:15:33 np0005625203.localdomain sudo[157477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 09:15:33 np0005625203.localdomain sudo[157477]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:15:33 np0005625203.localdomain python3.9[157465]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:15:33 np0005625203.localdomain python3.9[157577]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771578932.7056935-370-248134958616071/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:15:33 np0005625203.localdomain sudo[157477]: pam_unix(sudo:session): session closed for user root
Feb 20 09:15:34 np0005625203.localdomain sudo[157610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:15:34 np0005625203.localdomain sudo[157610]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:15:34 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:15:34 np0005625203.localdomain sudo[157610]: pam_unix(sudo:session): session closed for user root
Feb 20 09:15:34 np0005625203.localdomain podman[157655]: 2026-02-20 09:15:34.565590916 +0000 UTC m=+0.084619666 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 20 09:15:34 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:15:34Z|00022|memory|INFO|13392 kB peak resident set size after 30.1 seconds
Feb 20 09:15:34 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:15:34Z|00023|memory|INFO|idl-cells-OVN_Southbound:4072 idl-cells-Open_vSwitch:813 ofctrl_desired_flow_usage-KB:9 ofctrl_installed_flow_usage-KB:7 ofctrl_sb_flow_ref_usage-KB:3
Feb 20 09:15:34 np0005625203.localdomain podman[157655]: 2026-02-20 09:15:34.62535809 +0000 UTC m=+0.144386820 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2)
Feb 20 09:15:34 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:15:34 np0005625203.localdomain python3.9[157724]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:15:35 np0005625203.localdomain sshd[157772]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:15:35 np0005625203.localdomain sshd[157772]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:15:35 np0005625203.localdomain python3.9[157797]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771578934.414523-502-112159636809828/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=aa9e89725fbcebf7a5c773d7b97083445b7b7759 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:15:35 np0005625203.localdomain python3.9[157887]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:15:36 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42402 DF PROTO=TCP SPT=46812 DPT=9882 SEQ=1402124375 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F846400000000001030307) 
Feb 20 09:15:36 np0005625203.localdomain python3.9[157958]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771578935.5146408-502-72194598867786/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=979187b925479d81d0609f4188e5b95fe1f92c18 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:15:36 np0005625203.localdomain python3.9[158048]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:15:37 np0005625203.localdomain sudo[158140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aiaqbidvfzfnuukezahilxgzbanopkhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578937.2376225-616-255907652206504/AnsiballZ_file.py
Feb 20 09:15:37 np0005625203.localdomain sudo[158140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:15:37 np0005625203.localdomain python3.9[158142]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:15:37 np0005625203.localdomain sudo[158140]: pam_unix(sudo:session): session closed for user root
Feb 20 09:15:38 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11128 DF PROTO=TCP SPT=56684 DPT=9105 SEQ=2739171354 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F84E000000000001030307) 
Feb 20 09:15:38 np0005625203.localdomain sudo[158232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-croxdpnytlwoupjwrdjivnultjvgcmfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578938.048974-640-278747573637432/AnsiballZ_stat.py
Feb 20 09:15:38 np0005625203.localdomain sudo[158232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:15:38 np0005625203.localdomain python3.9[158234]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:15:38 np0005625203.localdomain sudo[158232]: pam_unix(sudo:session): session closed for user root
Feb 20 09:15:38 np0005625203.localdomain sudo[158280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dhsqsirhfedtugqvvshwxlltruvlimbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578938.048974-640-278747573637432/AnsiballZ_file.py
Feb 20 09:15:38 np0005625203.localdomain sudo[158280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:15:38 np0005625203.localdomain python3.9[158282]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:15:38 np0005625203.localdomain sudo[158280]: pam_unix(sudo:session): session closed for user root
Feb 20 09:15:39 np0005625203.localdomain sudo[158372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hdmhddpgaosxnzndyzadjcywscixhlge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578939.039572-640-79863187582801/AnsiballZ_stat.py
Feb 20 09:15:39 np0005625203.localdomain sudo[158372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:15:39 np0005625203.localdomain python3.9[158374]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:15:39 np0005625203.localdomain sudo[158372]: pam_unix(sudo:session): session closed for user root
Feb 20 09:15:39 np0005625203.localdomain sudo[158420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tygyciqpcvmjuflgyhbnqyjzazvbyfbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578939.039572-640-79863187582801/AnsiballZ_file.py
Feb 20 09:15:39 np0005625203.localdomain sudo[158420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:15:39 np0005625203.localdomain python3.9[158422]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:15:39 np0005625203.localdomain sudo[158420]: pam_unix(sudo:session): session closed for user root
Feb 20 09:15:40 np0005625203.localdomain sudo[158512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dlqvsbvccodzhzkpwriahbiqrzsptwck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578940.2204797-709-266654466102962/AnsiballZ_file.py
Feb 20 09:15:40 np0005625203.localdomain sudo[158512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:15:40 np0005625203.localdomain python3.9[158514]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:15:40 np0005625203.localdomain sudo[158512]: pam_unix(sudo:session): session closed for user root
Feb 20 09:15:41 np0005625203.localdomain sudo[158604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vfvyvwkxcnvkppaqajdlmdgeciwknohf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578940.929125-733-191969260837818/AnsiballZ_stat.py
Feb 20 09:15:41 np0005625203.localdomain sudo[158604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:15:41 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9695 DF PROTO=TCP SPT=42674 DPT=9105 SEQ=4169940105 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F85A800000000001030307) 
Feb 20 09:15:41 np0005625203.localdomain python3.9[158606]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:15:41 np0005625203.localdomain sudo[158604]: pam_unix(sudo:session): session closed for user root
Feb 20 09:15:41 np0005625203.localdomain sudo[158652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-amhdxnwpyrvsqblmzisjdmejbthrdhsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578940.929125-733-191969260837818/AnsiballZ_file.py
Feb 20 09:15:41 np0005625203.localdomain sudo[158652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:15:41 np0005625203.localdomain python3.9[158654]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:15:41 np0005625203.localdomain sudo[158652]: pam_unix(sudo:session): session closed for user root
Feb 20 09:15:42 np0005625203.localdomain sudo[158744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-asvykwpwghuctjoroloxrpylkqyglgba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578941.9708855-769-91993262258403/AnsiballZ_stat.py
Feb 20 09:15:42 np0005625203.localdomain sudo[158744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:15:42 np0005625203.localdomain python3.9[158746]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:15:42 np0005625203.localdomain sudo[158744]: pam_unix(sudo:session): session closed for user root
Feb 20 09:15:42 np0005625203.localdomain sudo[158792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vtlgqdtjvbekgpsahjmkxjulnatxbgfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578941.9708855-769-91993262258403/AnsiballZ_file.py
Feb 20 09:15:42 np0005625203.localdomain sudo[158792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:15:42 np0005625203.localdomain python3.9[158794]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:15:42 np0005625203.localdomain sudo[158792]: pam_unix(sudo:session): session closed for user root
Feb 20 09:15:43 np0005625203.localdomain sudo[158884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-prjbicpdupcmdjqjzxclnatfvngmyliv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578943.3046105-805-51449385913769/AnsiballZ_systemd.py
Feb 20 09:15:43 np0005625203.localdomain sudo[158884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:15:44 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11130 DF PROTO=TCP SPT=56684 DPT=9105 SEQ=2739171354 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F865C00000000001030307) 
Feb 20 09:15:44 np0005625203.localdomain python3.9[158886]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:15:44 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:15:44 np0005625203.localdomain systemd-sysv-generator[158917]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:15:44 np0005625203.localdomain systemd-rc-local-generator[158913]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:15:44 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:15:44 np0005625203.localdomain sudo[158884]: pam_unix(sudo:session): session closed for user root
Feb 20 09:15:45 np0005625203.localdomain sudo[159014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wtycbeibasdliegwovnlvlvygrcmsnfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578944.9701748-829-5309824561318/AnsiballZ_stat.py
Feb 20 09:15:45 np0005625203.localdomain sudo[159014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:15:45 np0005625203.localdomain python3.9[159016]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:15:45 np0005625203.localdomain sudo[159014]: pam_unix(sudo:session): session closed for user root
Feb 20 09:15:45 np0005625203.localdomain sudo[159062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pspmnqrsqclejevjdngubjnoekdbjbqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578944.9701748-829-5309824561318/AnsiballZ_file.py
Feb 20 09:15:45 np0005625203.localdomain sudo[159062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:15:45 np0005625203.localdomain python3.9[159064]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:15:45 np0005625203.localdomain sudo[159062]: pam_unix(sudo:session): session closed for user root
Feb 20 09:15:46 np0005625203.localdomain sudo[159154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yjpfmixnnmgbdwqzwidjwoarrqaxpuji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578946.07311-865-178692443569447/AnsiballZ_stat.py
Feb 20 09:15:46 np0005625203.localdomain sudo[159154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:15:46 np0005625203.localdomain python3.9[159156]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:15:46 np0005625203.localdomain sudo[159154]: pam_unix(sudo:session): session closed for user root
Feb 20 09:15:46 np0005625203.localdomain sudo[159202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ybqwzrrqlwtroukqsukuiusvitoetjcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578946.07311-865-178692443569447/AnsiballZ_file.py
Feb 20 09:15:46 np0005625203.localdomain sudo[159202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:15:46 np0005625203.localdomain python3.9[159204]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:15:46 np0005625203.localdomain sudo[159202]: pam_unix(sudo:session): session closed for user root
Feb 20 09:15:47 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61059 DF PROTO=TCP SPT=48162 DPT=9101 SEQ=100224223 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F871010000000001030307) 
Feb 20 09:15:47 np0005625203.localdomain sudo[159294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mqhmqykxgcfdtoezaeanqzcxetmdqikk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578947.1732078-901-180534239575063/AnsiballZ_systemd.py
Feb 20 09:15:47 np0005625203.localdomain sudo[159294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:15:47 np0005625203.localdomain python3.9[159296]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:15:47 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:15:47 np0005625203.localdomain systemd-rc-local-generator[159320]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:15:47 np0005625203.localdomain systemd-sysv-generator[159325]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:15:47 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:15:48 np0005625203.localdomain systemd[1]: Starting Create netns directory...
Feb 20 09:15:48 np0005625203.localdomain systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Feb 20 09:15:48 np0005625203.localdomain systemd[1]: netns-placeholder.service: Deactivated successfully.
Feb 20 09:15:48 np0005625203.localdomain systemd[1]: Finished Create netns directory.
Feb 20 09:15:48 np0005625203.localdomain sudo[159294]: pam_unix(sudo:session): session closed for user root
Feb 20 09:15:48 np0005625203.localdomain sudo[159428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-scsomlwljvmjuuuhjtrqgmizzwctynmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578948.4959793-931-172863642941170/AnsiballZ_file.py
Feb 20 09:15:48 np0005625203.localdomain sudo[159428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:15:48 np0005625203.localdomain python3.9[159430]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:15:48 np0005625203.localdomain sudo[159428]: pam_unix(sudo:session): session closed for user root
Feb 20 09:15:49 np0005625203.localdomain sudo[159520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cxuhztugxzbuuaolyiwedklrcuytcpcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578949.1094515-955-192201625806075/AnsiballZ_stat.py
Feb 20 09:15:49 np0005625203.localdomain sudo[159520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:15:49 np0005625203.localdomain python3.9[159522]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:15:49 np0005625203.localdomain sudo[159520]: pam_unix(sudo:session): session closed for user root
Feb 20 09:15:49 np0005625203.localdomain sudo[159593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uwfyscegqnhegekkuxvdizdwrlnflpnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578949.1094515-955-192201625806075/AnsiballZ_copy.py
Feb 20 09:15:49 np0005625203.localdomain sudo[159593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:15:50 np0005625203.localdomain python3.9[159595]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771578949.1094515-955-192201625806075/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:15:50 np0005625203.localdomain sudo[159593]: pam_unix(sudo:session): session closed for user root
Feb 20 09:15:50 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38398 DF PROTO=TCP SPT=45568 DPT=9100 SEQ=2186059758 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F87D010000000001030307) 
Feb 20 09:15:50 np0005625203.localdomain sudo[159685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yalzxrxyxptrxkcsydgkcmhbtkpnitqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578950.4305782-1006-5168931155726/AnsiballZ_file.py
Feb 20 09:15:50 np0005625203.localdomain sudo[159685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:15:51 np0005625203.localdomain python3.9[159687]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:15:51 np0005625203.localdomain sudo[159685]: pam_unix(sudo:session): session closed for user root
Feb 20 09:15:51 np0005625203.localdomain sudo[159777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mmewliujszdavwfhyglxxqgueratgkyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578951.2161362-1030-32681949706524/AnsiballZ_file.py
Feb 20 09:15:51 np0005625203.localdomain sudo[159777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:15:51 np0005625203.localdomain python3.9[159779]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:15:51 np0005625203.localdomain sudo[159777]: pam_unix(sudo:session): session closed for user root
Feb 20 09:15:52 np0005625203.localdomain sudo[159869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dmsfacezoeldfjkvsmelofqyoahhswub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578951.8567557-1054-89959531500989/AnsiballZ_stat.py
Feb 20 09:15:52 np0005625203.localdomain sudo[159869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:15:52 np0005625203.localdomain python3.9[159871]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:15:52 np0005625203.localdomain sudo[159869]: pam_unix(sudo:session): session closed for user root
Feb 20 09:15:52 np0005625203.localdomain sudo[159944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aqzgxsneyxzzartgzjzquxikpxbwdzuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578951.8567557-1054-89959531500989/AnsiballZ_copy.py
Feb 20 09:15:52 np0005625203.localdomain sudo[159944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:15:52 np0005625203.localdomain python3.9[159946]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1771578951.8567557-1054-89959531500989/.source.json _original_basename=.3245hc0j follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:15:52 np0005625203.localdomain sudo[159944]: pam_unix(sudo:session): session closed for user root
Feb 20 09:15:53 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23582 DF PROTO=TCP SPT=39042 DPT=9100 SEQ=3052421995 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F888800000000001030307) 
Feb 20 09:15:53 np0005625203.localdomain python3.9[160036]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:15:55 np0005625203.localdomain sudo[160287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uzlgznhgbqrwzgsriwjtgodkfdbftsvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578955.1903815-1174-135932342572384/AnsiballZ_container_config_data.py
Feb 20 09:15:55 np0005625203.localdomain sudo[160287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:15:55 np0005625203.localdomain python3.9[160289]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Feb 20 09:15:55 np0005625203.localdomain sudo[160287]: pam_unix(sudo:session): session closed for user root
Feb 20 09:15:56 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38400 DF PROTO=TCP SPT=45568 DPT=9100 SEQ=2186059758 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F894C10000000001030307) 
Feb 20 09:15:56 np0005625203.localdomain sudo[160379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zzrootrhgjhzcdclalqmcynrpnywaqbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578956.42113-1207-223413076785674/AnsiballZ_container_config_hash.py
Feb 20 09:15:56 np0005625203.localdomain sudo[160379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:15:57 np0005625203.localdomain python3.9[160381]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb 20 09:15:57 np0005625203.localdomain sudo[160379]: pam_unix(sudo:session): session closed for user root
Feb 20 09:15:57 np0005625203.localdomain sudo[160471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-znmrhevcuywwbeshakfeupnjhnyjpxsc ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771578957.4138365-1237-145572896986675/AnsiballZ_edpm_container_manage.py
Feb 20 09:15:57 np0005625203.localdomain sudo[160471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:15:58 np0005625203.localdomain python3[160473]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Feb 20 09:15:58 np0005625203.localdomain python3[160473]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
                                                               {
                                                                    "Id": "19964fda6b912d3d57e21b0bcc221725d936e513025030cb508474fe04b06af8",
                                                                    "Digest": "sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc",
                                                                    "RepoTags": [
                                                                         "quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified"
                                                                    ],
                                                                    "RepoDigests": [
                                                                         "quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:7c305a77ab65247f0dc2ea1616c427b173cb95f37bb37e34c631d9615a73d2cc"
                                                                    ],
                                                                    "Parent": "",
                                                                    "Comment": "",
                                                                    "Created": "2026-01-30T06:29:34.446261637Z",
                                                                    "Config": {
                                                                         "User": "neutron",
                                                                         "Env": [
                                                                              "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                                                                              "LANG=en_US.UTF-8",
                                                                              "TZ=UTC",
                                                                              "container=oci"
                                                                         ],
                                                                         "Entrypoint": [
                                                                              "dumb-init",
                                                                              "--single-child",
                                                                              "--"
                                                                         ],
                                                                         "Cmd": [
                                                                              "kolla_start"
                                                                         ],
                                                                         "Labels": {
                                                                              "io.buildah.version": "1.41.3",
                                                                              "maintainer": "OpenStack Kubernetes Operator team",
                                                                              "org.label-schema.build-date": "20260127",
                                                                              "org.label-schema.license": "GPLv2",
                                                                              "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                              "org.label-schema.schema-version": "1.0",
                                                                              "org.label-schema.vendor": "CentOS",
                                                                              "tcib_build_tag": "b85d0548925081ae8c6bdd697658cec4",
                                                                              "tcib_managed": "true"
                                                                         },
                                                                         "StopSignal": "SIGTERM"
                                                                    },
                                                                    "Version": "",
                                                                    "Author": "",
                                                                    "Architecture": "amd64",
                                                                    "Os": "linux",
                                                                    "Size": 785500417,
                                                                    "VirtualSize": 785500417,
                                                                    "GraphDriver": {
                                                                         "Name": "overlay",
                                                                         "Data": {
                                                                              "LowerDir": "/var/lib/containers/storage/overlay/4e4217686394af7a9122b2b81585c3ad5207fe018f230f20d139fff3e54ac3cc/diff:/var/lib/containers/storage/overlay/33f73751efe606c7233470249b676223e1b26b870cc49c3dbfbe2c7691e9f3fe/diff:/var/lib/containers/storage/overlay/1d7b7d3208029afb8b179e48c365354efe7c39d41194e42a7d13168820ab51ad/diff:/var/lib/containers/storage/overlay/1ad843ea4b31b05bcf49ccd6faa74bd0d6976ffabe60466fd78caf7ec41bf4ac/diff:/var/lib/containers/storage/overlay/57c9a356b8a6d9095c1e6bfd1bb5d3b87c9d1b944c2c5d8a1da6e61dd690c595/diff",
                                                                              "UpperDir": "/var/lib/containers/storage/overlay/3105551fde90ad87a79816e708b2cc4b7af2f50432ce26b439bbd7707bc89976/diff",
                                                                              "WorkDir": "/var/lib/containers/storage/overlay/3105551fde90ad87a79816e708b2cc4b7af2f50432ce26b439bbd7707bc89976/work"
                                                                         }
                                                                    },
                                                                    "RootFS": {
                                                                         "Type": "layers",
                                                                         "Layers": [
                                                                              "sha256:57c9a356b8a6d9095c1e6bfd1bb5d3b87c9d1b944c2c5d8a1da6e61dd690c595",
                                                                              "sha256:315008a247098d7a6218ae8aaacc68c9c19036e3778f3bb6313e5d0200cfa613",
                                                                              "sha256:d3142d7a25f00adc375557623676c786baeb2b8fec29945db7fe79212198a495",
                                                                              "sha256:d3cc9cdab7e3e7c1a0a6c80e61bbd8cc5eeeba7069bab1cc064ed2e6cc28ed58",
                                                                              "sha256:d5cbf3016eca6267717119e8ebab3c6c083cae6c589c6961ae23bfa93ef3afa4",
                                                                              "sha256:0096ee5d07436ac5b94d9d58b8b2407cc5e6854d70de5e7f89b9a7a1ad4912ad"
                                                                         ]
                                                                    },
                                                                    "Labels": {
                                                                         "io.buildah.version": "1.41.3",
                                                                         "maintainer": "OpenStack Kubernetes Operator team",
                                                                         "org.label-schema.build-date": "20260127",
                                                                         "org.label-schema.license": "GPLv2",
                                                                         "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                         "org.label-schema.schema-version": "1.0",
                                                                         "org.label-schema.vendor": "CentOS",
                                                                         "tcib_build_tag": "b85d0548925081ae8c6bdd697658cec4",
                                                                         "tcib_managed": "true"
                                                                    },
                                                                    "Annotations": {},
                                                                    "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",
                                                                    "User": "neutron",
                                                                    "History": [
                                                                         {
                                                                              "created": "2026-01-28T05:56:51.126388624Z",
                                                                              "created_by": "/bin/sh -c #(nop) ADD file:54935d5b0598cdb1451aeae3c8627aade8d55dcef2e876b35185c8e36be64256 in / ",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-28T05:56:51.126459235Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 9 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20260127\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-28T05:56:53.726938221Z",
                                                                              "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:18.890429494Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",
                                                                              "comment": "FROM quay.io/centos/centos:stream9",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:18.890534417Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:18.890553228Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:18.890570688Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:18.890616649Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:18.890659121Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:19.232761948Z",
                                                                              "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:52.670543613Z",
                                                                              "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:55.650316471Z",
                                                                              "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which python-tcib-containers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:55.970652058Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/uid_gid_manage.sh /usr/local/bin/uid_gid_manage",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:56.274301506Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/uid_gid_manage",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:56.82928237Z",
                                                                              "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage kolla hugetlbfs libvirt qemu",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:57.134416869Z",
                                                                              "created_by": "/bin/sh -c touch /usr/local/bin/kolla_extend_start && chmod 755 /usr/local/bin/kolla_extend_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:57.444274899Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/set_configs.py /usr/local/bin/kolla_set_configs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:57.746599531Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_set_configs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:58.041383545Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/start.sh /usr/local/bin/kolla_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:58.352119949Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:58.671042058Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/httpd_setup.sh /usr/local/bin/kolla_httpd_setup",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:58.969834612Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_httpd_setup",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:59.264649297Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/copy_cacerts.sh /usr/local/bin/kolla_copy_cacerts",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:59.518696627Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_copy_cacerts",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:59.800434902Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/sudoers /etc/sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:00.115933627Z",
                                                                              "created_by": "/bin/sh -c chmod 440 /etc/sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:00.41398479Z",
                                                                              "created_by": "/bin/sh -c sed -ri '/^(passwd:|group:)/ s/systemd//g' /etc/nsswitch.conf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:03.414738437Z",
                                                                              "created_by": "/bin/sh -c dnf -y reinstall which && rpm -e --nodeps tzdata && dnf -y install tzdata",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:03.709666444Z",
                                                                              "created_by": "/bin/sh -c if [ ! -f \"/etc/localtime\" ]; then ln -s /usr/share/zoneinfo/Etc/UTC /etc/localtime; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:04.019868523Z",
                                                                              "created_by": "/bin/sh -c mkdir -p /openstack",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:05.41751141Z",
                                                                              "created_by": "/bin/sh -c if [ 'centos' == 'centos' ];then if [ -n \"$(rpm -qa redhat-release)\" ];then rpm -e --nodeps redhat-release; fi ; dnf -y install centos-stream-release; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:07.124324267Z",
                                                                              "created_by": "/bin/sh -c dnf update --excludepkgs redhat-release -y && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:07.124384329Z",
                                                                              "created_by": "/bin/sh -c #(nop) STOPSIGNAL SIGTERM",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:07.124399349Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENTRYPOINT [\"dumb-init\", \"--single-child\", \"--\"]",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:07.124410339Z",
                                                                              "created_by": "/bin/sh -c #(nop) CMD [\"kolla_start\"]",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:08.028503475Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"b85d0548925081ae8c6bdd697658cec4\""
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:12:56.089921987Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-base:b85d0548925081ae8c6bdd697658cec4",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:13:34.524252589Z",
                                                                              "created_by": "/bin/sh -c dnf install -y python3-barbicanclient python3-cinderclient python3-designateclient python3-glanceclient python3-ironicclient python3-keystoneclient python3-manilaclient python3-neutronclient python3-novaclient python3-observabilityclient python3-octaviaclient python3-openstackclient python3-swiftclient python3-pymemcache && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:13:37.262239859Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"b85d0548925081ae8c6bdd697658cec4\""
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:16:21.310836362Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-os:b85d0548925081ae8c6bdd697658cec4",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:16:46.153105676Z",
                                                                              "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage neutron",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:18:23.560707988Z",
                                                                              "created_by": "/bin/sh -c dnf -y install iputils net-tools openstack-neutron openstack-neutron-rpc-server openstack-neutron-ml2 openvswitch python3-networking-baremetal python3-openvswitch python3-unbound && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:18:41.849131913Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/neutron-base/neutron_sudoers /etc/sudoers.d/neutron_sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:18:42.744796961Z",
                                                                              "created_by": "/bin/sh -c chmod 440 /etc/sudoers.d/neutron_sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:18:54.044382348Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"b85d0548925081ae8c6bdd697658cec4\""
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:27:49.126765909Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-base:b85d0548925081ae8c6bdd697658cec4",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:28:47.079155224Z",
                                                                              "created_by": "/bin/sh -c dnf -y install libseccomp podman && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:28:49.983056567Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"b85d0548925081ae8c6bdd697658cec4\""
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:28:56.370338178Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-agent-base:b85d0548925081ae8c6bdd697658cec4",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:29:34.44483218Z",
                                                                              "created_by": "/bin/sh -c dnf -y install python3-networking-ovn-metadata-agent && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:29:34.444891241Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER neutron",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:29:36.920021505Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"b85d0548925081ae8c6bdd697658cec4\""
                                                                         }
                                                                    ],
                                                                    "NamesHistory": [
                                                                         "quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified"
                                                                    ]
                                                               }
                                                          ]
                                                          : quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 20 09:15:58 np0005625203.localdomain podman[160523]: 2026-02-20 09:15:58.492722372 +0000 UTC m=+0.091133802 container remove be83a3138c7df5234a9cc56d1f7974df3ab2bd60cb1f3766ef11d88f4e1fa59e (image=registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1, name=ovn_metadata_agent, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, vcs-ref=b1428c69debc6a325367957538a456c4ca8d9a0f, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, org.opencontainers.image.revision=b1428c69debc6a325367957538a456c4ca8d9a0f, vcs-type=git, distribution-scope=public, tcib_managed=true, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, build-date=2026-01-12T22:56:19Z, url=https://www.redhat.com, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-neutron-metadata-agent-ovn, io.buildah.version=1.41.5, name=rhosp-rhel9/openstack-neutron-metadata-agent-ovn, com.redhat.component=openstack-neutron-metadata-agent-ovn-container, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': 'ef7731a1bdeb8ee7875974b29f2e34e6'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-neutron-metadata-agent-ovn:17.1', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'start_order': 1, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/neutron:/var/log/neutron:z', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/neutron:/var/lib/kolla/config_files/src:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/run/netns:/run/netns:shared', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro']}, managed_by=tripleo_ansible, vendor=Red Hat, Inc., container_name=ovn_metadata_agent, description=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, io.openshift.expose-services=, architecture=x86_64, config_id=tripleo_step4, io.k8s.display-name=Red Hat OpenStack Platform 17.1 neutron-metadata-agent-ovn, release=1766032510, maintainer=OpenStack TripleO Team, org.opencontainers.image.created=2026-01-12T22:56:19Z, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0)
Feb 20 09:15:58 np0005625203.localdomain python3[160473]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force ovn_metadata_agent
Feb 20 09:15:58 np0005625203.localdomain podman[160538]: 
Feb 20 09:15:58 np0005625203.localdomain podman[160538]: 2026-02-20 09:15:58.599110426 +0000 UTC m=+0.087990816 container create 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent)
Feb 20 09:15:58 np0005625203.localdomain podman[160538]: 2026-02-20 09:15:58.556352435 +0000 UTC m=+0.045232845 image pull  quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 20 09:15:58 np0005625203.localdomain python3[160473]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311 --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 20 09:15:58 np0005625203.localdomain sshd[160562]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:15:58 np0005625203.localdomain sudo[160471]: pam_unix(sudo:session): session closed for user root
Feb 20 09:15:59 np0005625203.localdomain sudo[160664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rlbcczdiulgxzwhmyltbxfmqeywcrxcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578958.9586601-1261-91683890529492/AnsiballZ_stat.py
Feb 20 09:15:59 np0005625203.localdomain sudo[160664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:15:59 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=36227 DF PROTO=TCP SPT=54206 DPT=9102 SEQ=3322868557 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F8A0E40000000001030307) 
Feb 20 09:15:59 np0005625203.localdomain python3.9[160666]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:15:59 np0005625203.localdomain sshd[160562]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:16:00 np0005625203.localdomain sudo[160664]: pam_unix(sudo:session): session closed for user root
Feb 20 09:16:00 np0005625203.localdomain sudo[160758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dpwzwqbedzcgungkzgejdlgfdlfulsuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578960.7172487-1288-240352557795377/AnsiballZ_file.py
Feb 20 09:16:00 np0005625203.localdomain sudo[160758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:16:01 np0005625203.localdomain python3.9[160760]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:16:01 np0005625203.localdomain sudo[160758]: pam_unix(sudo:session): session closed for user root
Feb 20 09:16:01 np0005625203.localdomain sudo[160804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nprnpjmymbjdkifwqlntqglrlgtjfqoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578960.7172487-1288-240352557795377/AnsiballZ_stat.py
Feb 20 09:16:01 np0005625203.localdomain sudo[160804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:16:01 np0005625203.localdomain python3.9[160806]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:16:01 np0005625203.localdomain sudo[160804]: pam_unix(sudo:session): session closed for user root
Feb 20 09:16:02 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14262 DF PROTO=TCP SPT=42742 DPT=9882 SEQ=956250484 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F8ABC10000000001030307) 
Feb 20 09:16:02 np0005625203.localdomain sudo[160895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-idtmnxvdqksfqqgatinnfxmtopqbxymi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578961.8095455-1288-101543288671255/AnsiballZ_copy.py
Feb 20 09:16:02 np0005625203.localdomain sudo[160895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:16:02 np0005625203.localdomain python3.9[160897]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771578961.8095455-1288-101543288671255/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:16:02 np0005625203.localdomain sudo[160895]: pam_unix(sudo:session): session closed for user root
Feb 20 09:16:02 np0005625203.localdomain sudo[160941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-slbqljqergfjdbkfxyphqhoslhtbqsuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578961.8095455-1288-101543288671255/AnsiballZ_systemd.py
Feb 20 09:16:02 np0005625203.localdomain sudo[160941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:16:02 np0005625203.localdomain python3.9[160943]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 20 09:16:02 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:16:03 np0005625203.localdomain systemd-sysv-generator[160971]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:16:03 np0005625203.localdomain systemd-rc-local-generator[160965]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:16:03 np0005625203.localdomain sshd[160979]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:16:03 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:16:03 np0005625203.localdomain sudo[160941]: pam_unix(sudo:session): session closed for user root
Feb 20 09:16:04 np0005625203.localdomain sudo[161024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ckzbkopqqehyzvmkmaeqtcpbgiwajpsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578961.8095455-1288-101543288671255/AnsiballZ_systemd.py
Feb 20 09:16:04 np0005625203.localdomain sudo[161024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:16:04 np0005625203.localdomain sshd[160979]: Received disconnect from 194.107.115.2 port 26704:11: Bye Bye [preauth]
Feb 20 09:16:04 np0005625203.localdomain sshd[160979]: Disconnected from authenticating user root 194.107.115.2 port 26704 [preauth]
Feb 20 09:16:04 np0005625203.localdomain python3.9[161026]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:16:04 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:16:04 np0005625203.localdomain systemd[1]: tmp-crun.ac474M.mount: Deactivated successfully.
Feb 20 09:16:04 np0005625203.localdomain podman[161028]: 2026-02-20 09:16:04.773489829 +0000 UTC m=+0.087419958 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Feb 20 09:16:04 np0005625203.localdomain podman[161028]: 2026-02-20 09:16:04.81529985 +0000 UTC m=+0.129229979 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 20 09:16:04 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:16:05 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:16:05 np0005625203.localdomain systemd-rc-local-generator[161080]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:16:05 np0005625203.localdomain systemd-sysv-generator[161083]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:16:05 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:16:05 np0005625203.localdomain systemd[1]: Starting ovn_metadata_agent container...
Feb 20 09:16:05 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:16:05 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a88044ebc49c7030c8bc48c83d655faa044682e4660f6491b206a0041b67bd0f/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Feb 20 09:16:05 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a88044ebc49c7030c8bc48c83d655faa044682e4660f6491b206a0041b67bd0f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:16:05 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:16:05 np0005625203.localdomain podman[161093]: 2026-02-20 09:16:05.892824573 +0000 UTC m=+0.148770853 container init 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 20 09:16:05 np0005625203.localdomain ovn_metadata_agent[161107]: + sudo -E kolla_set_configs
Feb 20 09:16:05 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:16:05 np0005625203.localdomain podman[161093]: 2026-02-20 09:16:05.944153093 +0000 UTC m=+0.200099373 container start 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Feb 20 09:16:05 np0005625203.localdomain edpm-start-podman-container[161093]: ovn_metadata_agent
Feb 20 09:16:05 np0005625203.localdomain ovn_metadata_agent[161107]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb 20 09:16:05 np0005625203.localdomain ovn_metadata_agent[161107]: INFO:__main__:Validating config file
Feb 20 09:16:05 np0005625203.localdomain ovn_metadata_agent[161107]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb 20 09:16:05 np0005625203.localdomain ovn_metadata_agent[161107]: INFO:__main__:Copying service configuration files
Feb 20 09:16:05 np0005625203.localdomain ovn_metadata_agent[161107]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Feb 20 09:16:05 np0005625203.localdomain ovn_metadata_agent[161107]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Feb 20 09:16:05 np0005625203.localdomain ovn_metadata_agent[161107]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Feb 20 09:16:05 np0005625203.localdomain ovn_metadata_agent[161107]: INFO:__main__:Writing out command to execute
Feb 20 09:16:05 np0005625203.localdomain ovn_metadata_agent[161107]: INFO:__main__:Setting permission for /var/lib/neutron
Feb 20 09:16:05 np0005625203.localdomain ovn_metadata_agent[161107]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Feb 20 09:16:05 np0005625203.localdomain ovn_metadata_agent[161107]: INFO:__main__:Setting permission for /var/lib/neutron/.cache
Feb 20 09:16:05 np0005625203.localdomain ovn_metadata_agent[161107]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Feb 20 09:16:05 np0005625203.localdomain ovn_metadata_agent[161107]: INFO:__main__:Setting permission for /var/lib/neutron/external
Feb 20 09:16:05 np0005625203.localdomain ovn_metadata_agent[161107]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Feb 20 09:16:05 np0005625203.localdomain ovn_metadata_agent[161107]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy
Feb 20 09:16:06 np0005625203.localdomain ovn_metadata_agent[161107]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Feb 20 09:16:06 np0005625203.localdomain ovn_metadata_agent[161107]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints
Feb 20 09:16:06 np0005625203.localdomain ovn_metadata_agent[161107]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/b9146dd2a0dc3e0bc3fee7bb1b53fa22a55af280b3a177d7a47b63f92e7ebd29
Feb 20 09:16:06 np0005625203.localdomain ovn_metadata_agent[161107]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Feb 20 09:16:06 np0005625203.localdomain ovn_metadata_agent[161107]: ++ cat /run_command
Feb 20 09:16:06 np0005625203.localdomain ovn_metadata_agent[161107]: + CMD=neutron-ovn-metadata-agent
Feb 20 09:16:06 np0005625203.localdomain ovn_metadata_agent[161107]: + ARGS=
Feb 20 09:16:06 np0005625203.localdomain ovn_metadata_agent[161107]: + sudo kolla_copy_cacerts
Feb 20 09:16:06 np0005625203.localdomain ovn_metadata_agent[161107]: Running command: 'neutron-ovn-metadata-agent'
Feb 20 09:16:06 np0005625203.localdomain ovn_metadata_agent[161107]: + [[ ! -n '' ]]
Feb 20 09:16:06 np0005625203.localdomain ovn_metadata_agent[161107]: + . kolla_extend_start
Feb 20 09:16:06 np0005625203.localdomain ovn_metadata_agent[161107]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Feb 20 09:16:06 np0005625203.localdomain ovn_metadata_agent[161107]: + umask 0022
Feb 20 09:16:06 np0005625203.localdomain ovn_metadata_agent[161107]: + exec neutron-ovn-metadata-agent
Feb 20 09:16:06 np0005625203.localdomain podman[161116]: 2026-02-20 09:16:06.033003974 +0000 UTC m=+0.083557491 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=starting, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Feb 20 09:16:06 np0005625203.localdomain edpm-start-podman-container[161092]: Creating additional drop-in dependency for "ovn_metadata_agent" (379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d)
Feb 20 09:16:06 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14263 DF PROTO=TCP SPT=42742 DPT=9882 SEQ=956250484 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F8BB800000000001030307) 
Feb 20 09:16:06 np0005625203.localdomain podman[161116]: 2026-02-20 09:16:06.117343647 +0000 UTC m=+0.167897144 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 20 09:16:06 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:16:06 np0005625203.localdomain systemd-sysv-generator[161182]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:16:06 np0005625203.localdomain systemd-rc-local-generator[161178]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:16:06 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:16:06 np0005625203.localdomain systemd[1]: tmp-crun.YRtvdO.mount: Deactivated successfully.
Feb 20 09:16:06 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:16:06 np0005625203.localdomain systemd[1]: Started ovn_metadata_agent container.
Feb 20 09:16:06 np0005625203.localdomain sudo[161024]: pam_unix(sudo:session): session closed for user root
Feb 20 09:16:07 np0005625203.localdomain python3.9[161284]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.587 161112 INFO neutron.common.config [-] Logging enabled!
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.587 161112 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev44
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.588 161112 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.588 161112 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.588 161112 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.588 161112 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.588 161112 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.589 161112 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.589 161112 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.589 161112 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.589 161112 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.589 161112 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.589 161112 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.589 161112 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.589 161112 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.590 161112 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.590 161112 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.590 161112 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.590 161112 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.590 161112 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.590 161112 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.590 161112 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.591 161112 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.591 161112 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.591 161112 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.591 161112 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.591 161112 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.591 161112 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.591 161112 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.591 161112 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.592 161112 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.592 161112 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.592 161112 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.592 161112 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.592 161112 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.592 161112 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.592 161112 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.593 161112 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.593 161112 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = np0005625203.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.593 161112 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.593 161112 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.593 161112 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.593 161112 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.593 161112 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.594 161112 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.594 161112 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.594 161112 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.594 161112 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.594 161112 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.594 161112 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.594 161112 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.594 161112 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.595 161112 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.595 161112 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.595 161112 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.595 161112 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.595 161112 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.595 161112 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.595 161112 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.595 161112 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.596 161112 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.596 161112 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.596 161112 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.596 161112 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.596 161112 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.596 161112 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.596 161112 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.597 161112 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.597 161112 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.597 161112 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.597 161112 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.597 161112 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.597 161112 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.597 161112 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.598 161112 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.598 161112 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.598 161112 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = http log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.598 161112 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.598 161112 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.598 161112 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.598 161112 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.598 161112 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.599 161112 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.599 161112 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.599 161112 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.599 161112 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.599 161112 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.599 161112 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.599 161112 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.600 161112 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.600 161112 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.600 161112 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.600 161112 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.600 161112 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.600 161112 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.600 161112 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.600 161112 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.601 161112 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.601 161112 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.601 161112 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.601 161112 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.601 161112 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.601 161112 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.601 161112 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.601 161112 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.602 161112 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.602 161112 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.602 161112 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.602 161112 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.602 161112 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.602 161112 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.602 161112 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.602 161112 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.603 161112 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.603 161112 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.603 161112 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.603 161112 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.603 161112 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.603 161112 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.603 161112 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.604 161112 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.604 161112 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.604 161112 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.604 161112 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.604 161112 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.604 161112 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.604 161112 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.605 161112 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.605 161112 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.605 161112 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.605 161112 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.605 161112 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.605 161112 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.605 161112 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.606 161112 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.606 161112 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.606 161112 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.606 161112 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.606 161112 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.606 161112 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.606 161112 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.607 161112 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.607 161112 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.607 161112 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.607 161112 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.607 161112 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.607 161112 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.607 161112 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.607 161112 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.608 161112 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.608 161112 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.608 161112 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.608 161112 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.608 161112 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.608 161112 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.608 161112 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.609 161112 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.609 161112 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.609 161112 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.609 161112 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.609 161112 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.609 161112 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.609 161112 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.609 161112 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.610 161112 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.610 161112 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.610 161112 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.610 161112 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.610 161112 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.610 161112 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.610 161112 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.611 161112 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.611 161112 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.611 161112 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.611 161112 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.611 161112 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.611 161112 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.611 161112 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.612 161112 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.612 161112 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.612 161112 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.612 161112 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.612 161112 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.612 161112 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.612 161112 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.613 161112 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.613 161112 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.613 161112 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.613 161112 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.613 161112 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.613 161112 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.613 161112 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.613 161112 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.614 161112 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.614 161112 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.614 161112 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.614 161112 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.614 161112 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.614 161112 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.614 161112 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.615 161112 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.615 161112 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.615 161112 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.615 161112 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.615 161112 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.615 161112 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.615 161112 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.615 161112 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.616 161112 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.616 161112 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.616 161112 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.616 161112 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.616 161112 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.616 161112 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.616 161112 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.616 161112 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.617 161112 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.617 161112 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.617 161112 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.617 161112 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.617 161112 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.617 161112 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.617 161112 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.617 161112 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.617 161112 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.617 161112 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.618 161112 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.618 161112 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.618 161112 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.618 161112 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.618 161112 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.618 161112 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.618 161112 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.618 161112 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.618 161112 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.618 161112 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.619 161112 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.619 161112 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.619 161112 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.619 161112 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.619 161112 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.619 161112 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.619 161112 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.619 161112 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.619 161112 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.619 161112 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.620 161112 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.620 161112 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.620 161112 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.620 161112 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.620 161112 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.620 161112 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.620 161112 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.620 161112 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.620 161112 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.620 161112 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = tcp:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.621 161112 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.621 161112 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.621 161112 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.621 161112 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.621 161112 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.621 161112 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.621 161112 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.621 161112 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.621 161112 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.621 161112 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.622 161112 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.622 161112 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.622 161112 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.622 161112 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.622 161112 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.622 161112 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.622 161112 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.622 161112 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.622 161112 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.622 161112 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.623 161112 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.623 161112 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.623 161112 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.623 161112 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.623 161112 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.623 161112 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.623 161112 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.623 161112 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.623 161112 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.623 161112 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.624 161112 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.624 161112 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.624 161112 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.624 161112 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.624 161112 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.624 161112 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.624 161112 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.624 161112 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.624 161112 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.624 161112 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.625 161112 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.625 161112 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.625 161112 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.625 161112 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.625 161112 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.625 161112 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.625 161112 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.625 161112 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.633 161112 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.633 161112 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.633 161112 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.634 161112 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.634 161112 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.647 161112 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 1e4d60e6-0be0-4143-b488-1b391fbc71ef (UUID: 1e4d60e6-0be0-4143-b488-1b391fbc71ef) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.667 161112 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.668 161112 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.668 161112 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.668 161112 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.669 161112 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connecting...
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.671 161112 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connected
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.678 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '1e4d60e6-0be0-4143-b488-1b391fbc71ef'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], external_ids={'neutron:ovn-metadata-id': 'ecd5a56b-931b-530c-bd99-073654d19a29', 'neutron:ovn-metadata-sb-cfg': '1'}, name=1e4d60e6-0be0-4143-b488-1b391fbc71ef, nb_cfg_timestamp=1771578913267, nb_cfg=4) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.679 161112 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f52a5083b20>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.679 161112 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.679 161112 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.680 161112 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.680 161112 INFO oslo_service.service [-] Starting 1 workers
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.682 161112 DEBUG oslo_service.service [-] Started child 161299 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.685 161112 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpz27cifem/privsep.sock']
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.686 161299 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-159130'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.710 161299 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.711 161299 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.711 161299 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.713 161299 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connecting...
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.715 161299 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:ovsdbserver-sb.openstack.svc:6642: connected
Feb 20 09:16:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:07.727 161299 INFO eventlet.wsgi.server [-] (161299) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Feb 20 09:16:08 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50594 DF PROTO=TCP SPT=51606 DPT=9105 SEQ=3936669896 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F8C3400000000001030307) 
Feb 20 09:16:08 np0005625203.localdomain sudo[161380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hzpagyirsfqylurvyhznjsbshcymqfiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578967.9885128-1423-160454821714884/AnsiballZ_stat.py
Feb 20 09:16:08 np0005625203.localdomain sudo[161380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:16:08 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:08.269 161112 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Feb 20 09:16:08 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:08.270 161112 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpz27cifem/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Feb 20 09:16:08 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:08.173 161363 INFO oslo.privsep.daemon [-] privsep daemon starting
Feb 20 09:16:08 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:08.178 161363 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Feb 20 09:16:08 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:08.181 161363 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Feb 20 09:16:08 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:08.181 161363 INFO oslo.privsep.daemon [-] privsep daemon running as pid 161363
Feb 20 09:16:08 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:08.272 161363 DEBUG oslo.privsep.daemon [-] privsep: reply[9ad8d743-c39f-4153-a89d-062fb214096a]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:16:08 np0005625203.localdomain python3.9[161382]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:16:08 np0005625203.localdomain sudo[161380]: pam_unix(sudo:session): session closed for user root
Feb 20 09:16:08 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:08.697 161363 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:16:08 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:08.697 161363 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:16:08 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:08.697 161363 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:16:08 np0005625203.localdomain sudo[161459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pdzeihzrdvuwqgxumrowpwsyjvgoivsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578967.9885128-1423-160454821714884/AnsiballZ_copy.py
Feb 20 09:16:08 np0005625203.localdomain sudo[161459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:16:09 np0005625203.localdomain python3.9[161461]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771578967.9885128-1423-160454821714884/.source.yaml _original_basename=.5hrk6lyl follow=False checksum=00f5f1349c1b2f1d82b680e3efe9b7b384555dee backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:16:09 np0005625203.localdomain sudo[161459]: pam_unix(sudo:session): session closed for user root
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.135 161363 DEBUG oslo.privsep.daemon [-] privsep: reply[30157b59-1bed-4ab1-b0e2-53eae732895d]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.137 161112 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=1e4d60e6-0be0-4143-b488-1b391fbc71ef, column=external_ids, values=({'neutron:ovn-metadata-id': 'ecd5a56b-931b-530c-bd99-073654d19a29'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.137 161112 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.138 161112 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1e4d60e6-0be0-4143-b488-1b391fbc71ef, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.152 161112 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.152 161112 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.152 161112 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.153 161112 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.153 161112 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.153 161112 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.153 161112 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.153 161112 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.153 161112 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.153 161112 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.153 161112 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.153 161112 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.153 161112 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.154 161112 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.154 161112 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.154 161112 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.154 161112 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.154 161112 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.154 161112 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.154 161112 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.154 161112 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.154 161112 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.154 161112 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.155 161112 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.155 161112 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.155 161112 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.155 161112 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.155 161112 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.155 161112 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.155 161112 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.155 161112 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.155 161112 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.156 161112 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.156 161112 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.156 161112 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.156 161112 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.156 161112 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.156 161112 DEBUG oslo_service.service [-] host                           = np0005625203.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.156 161112 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.156 161112 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.156 161112 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.157 161112 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.157 161112 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.157 161112 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.157 161112 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.157 161112 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.157 161112 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.157 161112 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.157 161112 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.157 161112 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.157 161112 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.158 161112 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.158 161112 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.158 161112 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.158 161112 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.158 161112 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.158 161112 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.158 161112 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.158 161112 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.158 161112 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.158 161112 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.158 161112 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.159 161112 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.159 161112 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.159 161112 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.159 161112 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.159 161112 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.159 161112 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.159 161112 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.159 161112 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.159 161112 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.159 161112 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.159 161112 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.160 161112 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.160 161112 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.160 161112 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.160 161112 DEBUG oslo_service.service [-] nova_metadata_protocol         = http log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.160 161112 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.160 161112 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.160 161112 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.160 161112 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.160 161112 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.160 161112 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.161 161112 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.161 161112 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.161 161112 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.161 161112 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.161 161112 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.161 161112 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.161 161112 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.161 161112 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.161 161112 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.161 161112 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.161 161112 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.162 161112 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.162 161112 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.162 161112 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.162 161112 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.162 161112 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.162 161112 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.162 161112 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.162 161112 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.162 161112 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.162 161112 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.162 161112 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.163 161112 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.163 161112 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.163 161112 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.163 161112 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.163 161112 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.163 161112 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.163 161112 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.163 161112 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.163 161112 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.164 161112 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.164 161112 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.164 161112 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.164 161112 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.164 161112 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.164 161112 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.164 161112 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.164 161112 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.164 161112 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.164 161112 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.165 161112 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.165 161112 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.165 161112 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.165 161112 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.165 161112 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.165 161112 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.165 161112 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.165 161112 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.165 161112 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.165 161112 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.166 161112 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.166 161112 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.166 161112 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.166 161112 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.166 161112 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.166 161112 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.166 161112 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.166 161112 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.166 161112 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.166 161112 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.167 161112 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.167 161112 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.167 161112 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.167 161112 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.167 161112 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.167 161112 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.167 161112 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.167 161112 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.167 161112 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.167 161112 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.167 161112 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.168 161112 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.168 161112 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.168 161112 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.168 161112 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.168 161112 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.168 161112 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.168 161112 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.168 161112 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.168 161112 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.168 161112 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.168 161112 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.169 161112 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.169 161112 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.169 161112 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.169 161112 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.169 161112 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.169 161112 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.169 161112 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.169 161112 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.169 161112 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.169 161112 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.169 161112 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.170 161112 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.170 161112 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.170 161112 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.170 161112 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.170 161112 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.170 161112 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.170 161112 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.170 161112 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.170 161112 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.170 161112 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.171 161112 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.171 161112 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.171 161112 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.171 161112 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.171 161112 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.171 161112 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.171 161112 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.171 161112 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.171 161112 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.171 161112 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.172 161112 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.172 161112 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.172 161112 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.172 161112 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.172 161112 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.172 161112 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.172 161112 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.172 161112 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.172 161112 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.172 161112 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.172 161112 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.173 161112 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.173 161112 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.173 161112 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.173 161112 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.173 161112 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.173 161112 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.173 161112 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.173 161112 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.173 161112 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.173 161112 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.173 161112 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.174 161112 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.174 161112 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.174 161112 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.174 161112 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.174 161112 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.174 161112 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.174 161112 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.174 161112 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.174 161112 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.174 161112 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.174 161112 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.175 161112 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.175 161112 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.175 161112 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.175 161112 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.175 161112 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.175 161112 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.175 161112 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.175 161112 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.175 161112 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.175 161112 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.176 161112 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.176 161112 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.176 161112 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.176 161112 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.176 161112 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.176 161112 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.176 161112 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.176 161112 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.176 161112 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.176 161112 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.176 161112 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.177 161112 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.177 161112 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.177 161112 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.177 161112 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.177 161112 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = tcp:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.177 161112 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.177 161112 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.177 161112 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.177 161112 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.177 161112 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.178 161112 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.178 161112 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.178 161112 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.178 161112 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.178 161112 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.178 161112 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.178 161112 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.178 161112 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.178 161112 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.178 161112 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.179 161112 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.179 161112 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.179 161112 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.179 161112 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.179 161112 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.179 161112 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.179 161112 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.179 161112 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.179 161112 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.179 161112 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.179 161112 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.180 161112 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.180 161112 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.180 161112 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.180 161112 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.180 161112 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.180 161112 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.180 161112 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.180 161112 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.180 161112 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.180 161112 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.181 161112 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.181 161112 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.181 161112 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.181 161112 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.181 161112 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.181 161112 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.181 161112 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.181 161112 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.181 161112 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.181 161112 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.182 161112 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:16:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:16:09.182 161112 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Feb 20 09:16:09 np0005625203.localdomain sshd[155896]: pam_unix(sshd:session): session closed for user zuul
Feb 20 09:16:09 np0005625203.localdomain systemd[1]: session-51.scope: Deactivated successfully.
Feb 20 09:16:09 np0005625203.localdomain systemd[1]: session-51.scope: Consumed 32.300s CPU time.
Feb 20 09:16:09 np0005625203.localdomain systemd-logind[759]: Session 51 logged out. Waiting for processes to exit.
Feb 20 09:16:09 np0005625203.localdomain systemd-logind[759]: Removed session 51.
Feb 20 09:16:14 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50596 DF PROTO=TCP SPT=51606 DPT=9105 SEQ=3936669896 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F8DB000000000001030307) 
Feb 20 09:16:14 np0005625203.localdomain sshd[161476]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:16:14 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14264 DF PROTO=TCP SPT=42742 DPT=9882 SEQ=956250484 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F8DC800000000001030307) 
Feb 20 09:16:14 np0005625203.localdomain sshd[161476]: Accepted publickey for zuul from 192.168.122.30 port 45682 ssh2: RSA SHA256:uaxCfdsB9dLyL4znjsLCPDTT6O+6xZzjHRjczJFOA6k
Feb 20 09:16:14 np0005625203.localdomain systemd-logind[759]: New session 52 of user zuul.
Feb 20 09:16:14 np0005625203.localdomain systemd[1]: Started Session 52 of User zuul.
Feb 20 09:16:14 np0005625203.localdomain sshd[161476]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Feb 20 09:16:15 np0005625203.localdomain python3.9[161569]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 20 09:16:16 np0005625203.localdomain sudo[161663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jahwqijkxmirevvgwqjlwzwfydlxzkzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578976.113133-58-229889805266804/AnsiballZ_command.py
Feb 20 09:16:16 np0005625203.localdomain sudo[161663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:16:16 np0005625203.localdomain python3.9[161665]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:16:16 np0005625203.localdomain sudo[161663]: pam_unix(sudo:session): session closed for user root
Feb 20 09:16:16 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44701 DF PROTO=TCP SPT=55852 DPT=9101 SEQ=2714111965 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F8E6000000000001030307) 
Feb 20 09:16:17 np0005625203.localdomain sshd[161751]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:16:17 np0005625203.localdomain sudo[161770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vtcplfwbhxbyzgagsogutonyjofefdsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578976.9608884-82-4155114821874/AnsiballZ_command.py
Feb 20 09:16:17 np0005625203.localdomain sudo[161770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:16:17 np0005625203.localdomain python3.9[161772]: ansible-ansible.legacy.command Invoked with _raw_params=podman stop nova_virtlogd _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:16:17 np0005625203.localdomain systemd[1]: libpod-e44638810041aae6c932c01f4e4fb5e56658e5232a39c0d9fa7699f76e050660.scope: Deactivated successfully.
Feb 20 09:16:17 np0005625203.localdomain podman[161773]: 2026-02-20 09:16:17.510952207 +0000 UTC m=+0.074209477 container died e44638810041aae6c932c01f4e4fb5e56658e5232a39c0d9fa7699f76e050660 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, io.buildah.version=1.41.5, konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, cpe=cpe:/a:redhat:openstack:17.1::el9, url=https://www.redhat.com, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, batch=17.1_20260112.1, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, description=Red Hat OpenStack Platform 17.1 nova-libvirt, release=1766032510, vcs-type=git, architecture=x86_64, maintainer=OpenStack TripleO Team, com.redhat.component=openstack-nova-libvirt-container, distribution-scope=public, name=rhosp-rhel9/openstack-nova-libvirt, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, version=17.1.13, build-date=2026-01-12T23:31:49Z, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 09:16:17 np0005625203.localdomain systemd[1]: tmp-crun.t1vJxh.mount: Deactivated successfully.
Feb 20 09:16:17 np0005625203.localdomain podman[161773]: 2026-02-20 09:16:17.548192619 +0000 UTC m=+0.111449889 container cleanup e44638810041aae6c932c01f4e4fb5e56658e5232a39c0d9fa7699f76e050660 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-libvirt, batch=17.1_20260112.1, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, tcib_managed=true, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:31:49Z, name=rhosp-rhel9/openstack-nova-libvirt, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, konflux.additional-tags=17.1.13 17.1_20260112.1, cpe=cpe:/a:redhat:openstack:17.1::el9, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, vendor=Red Hat, Inc., version=17.1.13, com.redhat.component=openstack-nova-libvirt-container, io.buildah.version=1.41.5, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt, io.openshift.expose-services=, distribution-scope=public, org.opencontainers.image.created=2026-01-12T23:31:49Z, release=1766032510, summary=Red Hat OpenStack Platform 17.1 nova-libvirt)
Feb 20 09:16:17 np0005625203.localdomain sudo[161770]: pam_unix(sudo:session): session closed for user root
Feb 20 09:16:17 np0005625203.localdomain podman[161788]: 2026-02-20 09:16:17.603697756 +0000 UTC m=+0.083746436 container remove e44638810041aae6c932c01f4e4fb5e56658e5232a39c0d9fa7699f76e050660 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-libvirt:17.1, name=nova_virtlogd, tcib_managed=true, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-libvirt, org.opencontainers.image.created=2026-01-12T23:31:49Z, build-date=2026-01-12T23:31:49Z, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-libvirt, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, batch=17.1_20260112.1, konflux.additional-tags=17.1.13 17.1_20260112.1, summary=Red Hat OpenStack Platform 17.1 nova-libvirt, maintainer=OpenStack TripleO Team, url=https://www.redhat.com, io.buildah.version=1.41.5, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, description=Red Hat OpenStack Platform 17.1 nova-libvirt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=17.1.13, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, release=1766032510, name=rhosp-rhel9/openstack-nova-libvirt, vendor=Red Hat, Inc., com.redhat.component=openstack-nova-libvirt-container, io.openshift.expose-services=, cpe=cpe:/a:redhat:openstack:17.1::el9, architecture=x86_64, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-libvirt)
Feb 20 09:16:17 np0005625203.localdomain systemd[1]: libpod-conmon-e44638810041aae6c932c01f4e4fb5e56658e5232a39c0d9fa7699f76e050660.scope: Deactivated successfully.
Feb 20 09:16:18 np0005625203.localdomain sshd[161751]: Invalid user vps from 103.61.123.132 port 59738
Feb 20 09:16:18 np0005625203.localdomain systemd[1]: tmp-crun.1xuqZM.mount: Deactivated successfully.
Feb 20 09:16:18 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-8f7dd65b6e00d2007afbafb03dbcb7c5d21b53f2ae8aaceb2ca2bcb487184bc6-merged.mount: Deactivated successfully.
Feb 20 09:16:18 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e44638810041aae6c932c01f4e4fb5e56658e5232a39c0d9fa7699f76e050660-userdata-shm.mount: Deactivated successfully.
Feb 20 09:16:18 np0005625203.localdomain sudo[161891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aubcibzffheibvljmwvvxkfpztjkpdli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578978.0678368-112-53379852803945/AnsiballZ_systemd_service.py
Feb 20 09:16:18 np0005625203.localdomain sudo[161891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:16:18 np0005625203.localdomain sshd[161751]: Received disconnect from 103.61.123.132 port 59738:11: Bye Bye [preauth]
Feb 20 09:16:18 np0005625203.localdomain sshd[161751]: Disconnected from invalid user vps 103.61.123.132 port 59738 [preauth]
Feb 20 09:16:19 np0005625203.localdomain python3.9[161893]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 20 09:16:19 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:16:19 np0005625203.localdomain systemd-rc-local-generator[161918]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:16:19 np0005625203.localdomain systemd-sysv-generator[161921]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:16:19 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:16:19 np0005625203.localdomain sudo[161891]: pam_unix(sudo:session): session closed for user root
Feb 20 09:16:20 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50858 DF PROTO=TCP SPT=57082 DPT=9100 SEQ=4226040268 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F8F2400000000001030307) 
Feb 20 09:16:20 np0005625203.localdomain python3.9[162020]: ansible-ansible.builtin.service_facts Invoked
Feb 20 09:16:20 np0005625203.localdomain network[162037]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 20 09:16:20 np0005625203.localdomain network[162038]: 'network-scripts' will be removed from distribution in near future.
Feb 20 09:16:20 np0005625203.localdomain network[162039]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 20 09:16:21 np0005625203.localdomain sshd[162065]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:16:21 np0005625203.localdomain sshd[162065]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:16:21 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:16:23 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44703 DF PROTO=TCP SPT=55852 DPT=9101 SEQ=2714111965 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F8FDC00000000001030307) 
Feb 20 09:16:24 np0005625203.localdomain sudo[162241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hiusecircdmbjjxapbttoymctbrrkdoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578984.1555717-169-139904517966058/AnsiballZ_systemd_service.py
Feb 20 09:16:24 np0005625203.localdomain sudo[162241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:16:24 np0005625203.localdomain python3.9[162243]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:16:24 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:16:24 np0005625203.localdomain systemd-rc-local-generator[162270]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:16:24 np0005625203.localdomain systemd-sysv-generator[162275]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:16:24 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:16:25 np0005625203.localdomain systemd[1]: Stopped target tripleo_nova_libvirt.target.
Feb 20 09:16:25 np0005625203.localdomain sudo[162241]: pam_unix(sudo:session): session closed for user root
Feb 20 09:16:25 np0005625203.localdomain sudo[162373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cnmdenbqxrbrzmkoljssgsrdzqypuedh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578985.2655563-169-152497019046402/AnsiballZ_systemd_service.py
Feb 20 09:16:25 np0005625203.localdomain sudo[162373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:16:25 np0005625203.localdomain python3.9[162375]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:16:26 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50860 DF PROTO=TCP SPT=57082 DPT=9100 SEQ=4226040268 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F90A010000000001030307) 
Feb 20 09:16:26 np0005625203.localdomain sudo[162373]: pam_unix(sudo:session): session closed for user root
Feb 20 09:16:27 np0005625203.localdomain sudo[162466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yweqmjfiabgclyuqgczoukvionpkchpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578986.9623504-169-61280522950917/AnsiballZ_systemd_service.py
Feb 20 09:16:27 np0005625203.localdomain sudo[162466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:16:27 np0005625203.localdomain python3.9[162468]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:16:27 np0005625203.localdomain sudo[162466]: pam_unix(sudo:session): session closed for user root
Feb 20 09:16:27 np0005625203.localdomain sudo[162559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yxfzebtpgnycachfcscjgifqfodhdnsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578987.6326823-169-111662748152256/AnsiballZ_systemd_service.py
Feb 20 09:16:27 np0005625203.localdomain sudo[162559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:16:28 np0005625203.localdomain python3.9[162561]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:16:28 np0005625203.localdomain sudo[162559]: pam_unix(sudo:session): session closed for user root
Feb 20 09:16:28 np0005625203.localdomain sudo[162652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ttkvgfptcwovxwgnyzedobofxgqqizuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578988.3338025-169-84401534608060/AnsiballZ_systemd_service.py
Feb 20 09:16:28 np0005625203.localdomain sudo[162652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:16:28 np0005625203.localdomain python3.9[162654]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:16:28 np0005625203.localdomain sudo[162652]: pam_unix(sudo:session): session closed for user root
Feb 20 09:16:29 np0005625203.localdomain sudo[162745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-utiznuhpkawkoxwxyxsinlwqojxciaod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578988.997548-169-136429662494469/AnsiballZ_systemd_service.py
Feb 20 09:16:29 np0005625203.localdomain sudo[162745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:16:29 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=593 DF PROTO=TCP SPT=57724 DPT=9102 SEQ=3153575939 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F916150000000001030307) 
Feb 20 09:16:29 np0005625203.localdomain python3.9[162747]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:16:30 np0005625203.localdomain sudo[162745]: pam_unix(sudo:session): session closed for user root
Feb 20 09:16:30 np0005625203.localdomain sudo[162838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-widmvdxkuplcaqtkvdmokpswqvbwjhpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578990.691206-169-164949378833497/AnsiballZ_systemd_service.py
Feb 20 09:16:30 np0005625203.localdomain sudo[162838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:16:31 np0005625203.localdomain python3.9[162840]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:16:31 np0005625203.localdomain sudo[162838]: pam_unix(sudo:session): session closed for user root
Feb 20 09:16:32 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8486 DF PROTO=TCP SPT=53932 DPT=9882 SEQ=257332066 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F921000000000001030307) 
Feb 20 09:16:32 np0005625203.localdomain sudo[162931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zygsywkhxsnfowglirbnapcocrttfhuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578991.9532082-325-174154687792956/AnsiballZ_file.py
Feb 20 09:16:32 np0005625203.localdomain sudo[162931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:16:32 np0005625203.localdomain python3.9[162933]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:16:32 np0005625203.localdomain sudo[162931]: pam_unix(sudo:session): session closed for user root
Feb 20 09:16:32 np0005625203.localdomain sudo[163023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wsiezgyeqgbwdqgofycnxzrrrxdhalel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578992.695232-325-263741649133093/AnsiballZ_file.py
Feb 20 09:16:32 np0005625203.localdomain sudo[163023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:16:33 np0005625203.localdomain python3.9[163025]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:16:33 np0005625203.localdomain sudo[163023]: pam_unix(sudo:session): session closed for user root
Feb 20 09:16:33 np0005625203.localdomain sudo[163115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ykitlhvpfxkqoadiiciuxgwlhcosglmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578993.2548912-325-139260482927223/AnsiballZ_file.py
Feb 20 09:16:33 np0005625203.localdomain sudo[163115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:16:33 np0005625203.localdomain python3.9[163117]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:16:33 np0005625203.localdomain sudo[163115]: pam_unix(sudo:session): session closed for user root
Feb 20 09:16:34 np0005625203.localdomain sudo[163207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bdaqxockdijhgqbwyzznhmnohauddirw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578993.81449-325-130204436998354/AnsiballZ_file.py
Feb 20 09:16:34 np0005625203.localdomain sudo[163207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:16:34 np0005625203.localdomain python3.9[163209]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:16:34 np0005625203.localdomain sudo[163207]: pam_unix(sudo:session): session closed for user root
Feb 20 09:16:34 np0005625203.localdomain sudo[163299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zxbypinpvloelpsrjyvhverrtudykwjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578994.3513558-325-166670446523048/AnsiballZ_file.py
Feb 20 09:16:34 np0005625203.localdomain sudo[163299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:16:34 np0005625203.localdomain sudo[163302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:16:34 np0005625203.localdomain sudo[163302]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:16:34 np0005625203.localdomain sudo[163302]: pam_unix(sudo:session): session closed for user root
Feb 20 09:16:34 np0005625203.localdomain sudo[163317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 09:16:34 np0005625203.localdomain sudo[163317]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:16:34 np0005625203.localdomain python3.9[163301]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:16:34 np0005625203.localdomain sudo[163299]: pam_unix(sudo:session): session closed for user root
Feb 20 09:16:35 np0005625203.localdomain sudo[163439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lgkjgnvnjpachsprrvjkqstsuzqkxmxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578994.9322147-325-27972267947962/AnsiballZ_file.py
Feb 20 09:16:35 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:16:35 np0005625203.localdomain sudo[163439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:16:35 np0005625203.localdomain systemd[1]: tmp-crun.9soDSE.mount: Deactivated successfully.
Feb 20 09:16:35 np0005625203.localdomain podman[163443]: 2026-02-20 09:16:35.349088129 +0000 UTC m=+0.102217148 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 20 09:16:35 np0005625203.localdomain sudo[163317]: pam_unix(sudo:session): session closed for user root
Feb 20 09:16:35 np0005625203.localdomain podman[163443]: 2026-02-20 09:16:35.387986111 +0000 UTC m=+0.141115200 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:16:35 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:16:35 np0005625203.localdomain python3.9[163444]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:16:35 np0005625203.localdomain sudo[163439]: pam_unix(sudo:session): session closed for user root
Feb 20 09:16:35 np0005625203.localdomain sudo[163570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pdjpzdfleualolxzvbbjzqrdrotsxjny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578995.5975368-325-264148778964256/AnsiballZ_file.py
Feb 20 09:16:35 np0005625203.localdomain sudo[163570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:16:36 np0005625203.localdomain sudo[163573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:16:36 np0005625203.localdomain sudo[163573]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:16:36 np0005625203.localdomain sudo[163573]: pam_unix(sudo:session): session closed for user root
Feb 20 09:16:36 np0005625203.localdomain python3.9[163572]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:16:36 np0005625203.localdomain sudo[163570]: pam_unix(sudo:session): session closed for user root
Feb 20 09:16:36 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=8487 DF PROTO=TCP SPT=53932 DPT=9882 SEQ=257332066 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F930C00000000001030307) 
Feb 20 09:16:36 np0005625203.localdomain sudo[163677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-thtvffmkbonsmfrwdwrchakncroxdpxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578996.2881343-475-242878735523012/AnsiballZ_file.py
Feb 20 09:16:36 np0005625203.localdomain sudo[163677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:16:36 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:16:36 np0005625203.localdomain systemd[1]: tmp-crun.PMrKkF.mount: Deactivated successfully.
Feb 20 09:16:36 np0005625203.localdomain podman[163679]: 2026-02-20 09:16:36.642129664 +0000 UTC m=+0.082907272 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 20 09:16:36 np0005625203.localdomain podman[163679]: 2026-02-20 09:16:36.675196699 +0000 UTC m=+0.115974307 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Feb 20 09:16:36 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:16:36 np0005625203.localdomain python3.9[163680]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:16:36 np0005625203.localdomain sudo[163677]: pam_unix(sudo:session): session closed for user root
Feb 20 09:16:37 np0005625203.localdomain sudo[163787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hcjbpcrptskqfivxfavhutpmraoqytik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578996.8394709-475-161072496925845/AnsiballZ_file.py
Feb 20 09:16:37 np0005625203.localdomain sudo[163787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:16:37 np0005625203.localdomain python3.9[163789]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:16:37 np0005625203.localdomain sudo[163787]: pam_unix(sudo:session): session closed for user root
Feb 20 09:16:37 np0005625203.localdomain sudo[163879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-clpnmmikaojxhzdhboxxnnnzmtihwuox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578997.4574142-475-274795875365136/AnsiballZ_file.py
Feb 20 09:16:37 np0005625203.localdomain sudo[163879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:16:37 np0005625203.localdomain python3.9[163881]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:16:37 np0005625203.localdomain sudo[163879]: pam_unix(sudo:session): session closed for user root
Feb 20 09:16:38 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25092 DF PROTO=TCP SPT=39552 DPT=9105 SEQ=3447352545 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F938810000000001030307) 
Feb 20 09:16:38 np0005625203.localdomain sudo[163971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lcdyetdcreilcbkxmmulkgvbhllzufjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578998.1236079-475-276104440589009/AnsiballZ_file.py
Feb 20 09:16:38 np0005625203.localdomain sudo[163971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:16:38 np0005625203.localdomain python3.9[163973]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:16:38 np0005625203.localdomain sudo[163971]: pam_unix(sudo:session): session closed for user root
Feb 20 09:16:38 np0005625203.localdomain sudo[164063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jtyjvfdjreklohvoeoylzhvvkbkitsfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578998.681319-475-234440246435147/AnsiballZ_file.py
Feb 20 09:16:38 np0005625203.localdomain sudo[164063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:16:39 np0005625203.localdomain python3.9[164065]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:16:39 np0005625203.localdomain sudo[164063]: pam_unix(sudo:session): session closed for user root
Feb 20 09:16:39 np0005625203.localdomain sudo[164155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qadqjggwdyyirxyhhmginbiphdfvscig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578999.2813845-475-131569419107077/AnsiballZ_file.py
Feb 20 09:16:39 np0005625203.localdomain sudo[164155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:16:39 np0005625203.localdomain python3.9[164157]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:16:39 np0005625203.localdomain sudo[164155]: pam_unix(sudo:session): session closed for user root
Feb 20 09:16:40 np0005625203.localdomain sudo[164247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fizdxyerughmcectdcrssypdmhkayclh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771578999.8381195-475-161441408946641/AnsiballZ_file.py
Feb 20 09:16:40 np0005625203.localdomain sudo[164247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:16:40 np0005625203.localdomain python3.9[164249]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:16:40 np0005625203.localdomain sudo[164247]: pam_unix(sudo:session): session closed for user root
Feb 20 09:16:40 np0005625203.localdomain sudo[164339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gejveglbxclrgbeltnojzjqebzngpgvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579000.6464293-628-122721629553275/AnsiballZ_command.py
Feb 20 09:16:40 np0005625203.localdomain sudo[164339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:16:41 np0005625203.localdomain python3.9[164341]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                                              systemctl disable --now certmonger.service
                                                              test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                                            fi
                                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:16:41 np0005625203.localdomain sudo[164339]: pam_unix(sudo:session): session closed for user root
Feb 20 09:16:41 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11133 DF PROTO=TCP SPT=56684 DPT=9105 SEQ=2739171354 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F944800000000001030307) 
Feb 20 09:16:41 np0005625203.localdomain python3.9[164433]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 20 09:16:42 np0005625203.localdomain sudo[164523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lsthvubjarvnksqeeddcupurotvwfvwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579002.2007298-682-78437074732347/AnsiballZ_systemd_service.py
Feb 20 09:16:42 np0005625203.localdomain sudo[164523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:16:42 np0005625203.localdomain python3.9[164525]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 20 09:16:42 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:16:42 np0005625203.localdomain systemd-sysv-generator[164551]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:16:42 np0005625203.localdomain systemd-rc-local-generator[164548]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:16:42 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:16:43 np0005625203.localdomain sudo[164523]: pam_unix(sudo:session): session closed for user root
Feb 20 09:16:43 np0005625203.localdomain sudo[164650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tnsadjvuqclhjoeogmolzhnqchriodhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579003.473089-706-23561338801906/AnsiballZ_command.py
Feb 20 09:16:43 np0005625203.localdomain sudo[164650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:16:43 np0005625203.localdomain python3.9[164652]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:16:43 np0005625203.localdomain sudo[164650]: pam_unix(sudo:session): session closed for user root
Feb 20 09:16:44 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25094 DF PROTO=TCP SPT=39552 DPT=9105 SEQ=3447352545 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F950400000000001030307) 
Feb 20 09:16:44 np0005625203.localdomain sudo[164743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-htzeutavtkcujsreawdlnsbesbcpqtim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579004.088622-706-238095874071379/AnsiballZ_command.py
Feb 20 09:16:44 np0005625203.localdomain sudo[164743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:16:44 np0005625203.localdomain python3.9[164745]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:16:44 np0005625203.localdomain sudo[164743]: pam_unix(sudo:session): session closed for user root
Feb 20 09:16:44 np0005625203.localdomain sudo[164836]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sjzfehchqntsrqdmjapmukuplofgexfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579004.679209-706-199210894846623/AnsiballZ_command.py
Feb 20 09:16:44 np0005625203.localdomain sudo[164836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:16:45 np0005625203.localdomain python3.9[164838]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:16:45 np0005625203.localdomain sudo[164836]: pam_unix(sudo:session): session closed for user root
Feb 20 09:16:45 np0005625203.localdomain sudo[164929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mrhqfkmisijrevluiwdqdfkyzklvwzgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579005.2491708-706-141823100787318/AnsiballZ_command.py
Feb 20 09:16:45 np0005625203.localdomain sudo[164929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:16:45 np0005625203.localdomain python3.9[164931]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:16:45 np0005625203.localdomain sudo[164929]: pam_unix(sudo:session): session closed for user root
Feb 20 09:16:46 np0005625203.localdomain sudo[165022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qmrjmylgxupiewthsjlrtzwgypmeonxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579005.8080873-706-149232426729579/AnsiballZ_command.py
Feb 20 09:16:46 np0005625203.localdomain sudo[165022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:16:46 np0005625203.localdomain python3.9[165024]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:16:46 np0005625203.localdomain sudo[165022]: pam_unix(sudo:session): session closed for user root
Feb 20 09:16:46 np0005625203.localdomain sudo[165115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hegcikyqbzddbdytwvliavyklfpgsvqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579006.424986-706-122776319010266/AnsiballZ_command.py
Feb 20 09:16:46 np0005625203.localdomain sudo[165115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:16:46 np0005625203.localdomain python3.9[165117]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:16:46 np0005625203.localdomain sudo[165115]: pam_unix(sudo:session): session closed for user root
Feb 20 09:16:47 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26732 DF PROTO=TCP SPT=42614 DPT=9101 SEQ=2178652605 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F95B410000000001030307) 
Feb 20 09:16:47 np0005625203.localdomain sudo[165208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ozvkwjvapidaqcvaignihbibnvblnxrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579006.9510443-706-57714877257627/AnsiballZ_command.py
Feb 20 09:16:47 np0005625203.localdomain sudo[165208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:16:47 np0005625203.localdomain python3.9[165210]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:16:47 np0005625203.localdomain sudo[165208]: pam_unix(sudo:session): session closed for user root
Feb 20 09:16:49 np0005625203.localdomain sudo[165301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pnwfwfzpnuctfjwckemoeeucbkusxycp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579009.07602-868-57938384653277/AnsiballZ_getent.py
Feb 20 09:16:49 np0005625203.localdomain sudo[165301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:16:49 np0005625203.localdomain python3.9[165303]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Feb 20 09:16:49 np0005625203.localdomain sudo[165301]: pam_unix(sudo:session): session closed for user root
Feb 20 09:16:50 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33666 DF PROTO=TCP SPT=39202 DPT=9100 SEQ=2506806287 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F967810000000001030307) 
Feb 20 09:16:50 np0005625203.localdomain sudo[165394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qosahrxsubbuqhccnvwtuggazsjbkmdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579009.9015174-892-100705891300784/AnsiballZ_group.py
Feb 20 09:16:50 np0005625203.localdomain sudo[165394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:16:50 np0005625203.localdomain python3.9[165396]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb 20 09:16:50 np0005625203.localdomain groupadd[165397]: group added to /etc/group: name=libvirt, GID=42473
Feb 20 09:16:50 np0005625203.localdomain groupadd[165397]: group added to /etc/gshadow: name=libvirt
Feb 20 09:16:50 np0005625203.localdomain groupadd[165397]: new group: name=libvirt, GID=42473
Feb 20 09:16:50 np0005625203.localdomain sudo[165394]: pam_unix(sudo:session): session closed for user root
Feb 20 09:16:51 np0005625203.localdomain sudo[165492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iuarcwxxxlquewpzplysjmyixjvhpchf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579010.7950966-916-157904168127561/AnsiballZ_user.py
Feb 20 09:16:51 np0005625203.localdomain sudo[165492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:16:51 np0005625203.localdomain python3.9[165494]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005625203.localdomain update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Feb 20 09:16:51 np0005625203.localdomain useradd[165496]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Feb 20 09:16:51 np0005625203.localdomain sudo[165492]: pam_unix(sudo:session): session closed for user root
Feb 20 09:16:52 np0005625203.localdomain sudo[165592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pizzppvpzcoddixftcyfltofxoavsngh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579012.0244803-949-170146970222683/AnsiballZ_setup.py
Feb 20 09:16:52 np0005625203.localdomain sudo[165592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:16:52 np0005625203.localdomain python3.9[165594]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 20 09:16:52 np0005625203.localdomain sudo[165592]: pam_unix(sudo:session): session closed for user root
Feb 20 09:16:53 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26734 DF PROTO=TCP SPT=42614 DPT=9101 SEQ=2178652605 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F973000000000001030307) 
Feb 20 09:16:53 np0005625203.localdomain sudo[165646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uywvgewxwnfdioglbdwghrkshtkvildw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579012.0244803-949-170146970222683/AnsiballZ_dnf.py
Feb 20 09:16:53 np0005625203.localdomain sudo[165646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:16:53 np0005625203.localdomain python3.9[165648]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 20 09:16:56 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33668 DF PROTO=TCP SPT=39202 DPT=9100 SEQ=2506806287 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F97F410000000001030307) 
Feb 20 09:16:59 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57404 DF PROTO=TCP SPT=34042 DPT=9102 SEQ=1771507977 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F98B460000000001030307) 
Feb 20 09:17:02 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=295 DF PROTO=TCP SPT=49684 DPT=9882 SEQ=2481225535 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F996000000000001030307) 
Feb 20 09:17:05 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:17:05 np0005625203.localdomain podman[165719]: 2026-02-20 09:17:05.774830389 +0000 UTC m=+0.087991122 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible)
Feb 20 09:17:05 np0005625203.localdomain podman[165719]: 2026-02-20 09:17:05.852292665 +0000 UTC m=+0.165453408 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Feb 20 09:17:05 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:17:06 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=296 DF PROTO=TCP SPT=49684 DPT=9882 SEQ=2481225535 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F9A5C00000000001030307) 
Feb 20 09:17:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:17:07.628 161112 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:17:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:17:07.628 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:17:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:17:07.629 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:17:07 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:17:07 np0005625203.localdomain sshd[165752]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:17:07 np0005625203.localdomain systemd[1]: tmp-crun.2ifcGg.mount: Deactivated successfully.
Feb 20 09:17:07 np0005625203.localdomain podman[165745]: 2026-02-20 09:17:07.774291473 +0000 UTC m=+0.092252448 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 20 09:17:07 np0005625203.localdomain podman[165745]: 2026-02-20 09:17:07.781236733 +0000 UTC m=+0.099197758 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Feb 20 09:17:07 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:17:07 np0005625203.localdomain sshd[165752]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:17:08 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25901 DF PROTO=TCP SPT=37798 DPT=9105 SEQ=2528737337 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F9AD800000000001030307) 
Feb 20 09:17:08 np0005625203.localdomain sshd[165765]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:17:08 np0005625203.localdomain sshd[165765]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:17:14 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25903 DF PROTO=TCP SPT=37798 DPT=9105 SEQ=2528737337 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F9C5400000000001030307) 
Feb 20 09:17:14 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57408 DF PROTO=TCP SPT=34042 DPT=9102 SEQ=1771507977 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F9C6800000000001030307) 
Feb 20 09:17:17 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33483 DF PROTO=TCP SPT=52118 DPT=9101 SEQ=399829122 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F9D0800000000001030307) 
Feb 20 09:17:17 np0005625203.localdomain kernel: SELinux:  Converting 2746 SID table entries...
Feb 20 09:17:17 np0005625203.localdomain kernel: SELinux:  Context system_u:object_r:insights_client_cache_t:s0 became invalid (unmapped).
Feb 20 09:17:17 np0005625203.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Feb 20 09:17:17 np0005625203.localdomain kernel: SELinux:  policy capability open_perms=1
Feb 20 09:17:17 np0005625203.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Feb 20 09:17:17 np0005625203.localdomain kernel: SELinux:  policy capability always_check_network=0
Feb 20 09:17:17 np0005625203.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 20 09:17:17 np0005625203.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 20 09:17:17 np0005625203.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 20 09:17:20 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44706 DF PROTO=TCP SPT=55852 DPT=9101 SEQ=2714111965 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F9DC810000000001030307) 
Feb 20 09:17:23 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33485 DF PROTO=TCP SPT=52118 DPT=9101 SEQ=399829122 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F9E8400000000001030307) 
Feb 20 09:17:26 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9471 DF PROTO=TCP SPT=44672 DPT=9100 SEQ=942249817 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9F9F4800000000001030307) 
Feb 20 09:17:28 np0005625203.localdomain kernel: SELinux:  Converting 2749 SID table entries...
Feb 20 09:17:28 np0005625203.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Feb 20 09:17:28 np0005625203.localdomain kernel: SELinux:  policy capability open_perms=1
Feb 20 09:17:28 np0005625203.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Feb 20 09:17:28 np0005625203.localdomain kernel: SELinux:  policy capability always_check_network=0
Feb 20 09:17:28 np0005625203.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 20 09:17:28 np0005625203.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 20 09:17:28 np0005625203.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 20 09:17:29 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44835 DF PROTO=TCP SPT=45718 DPT=9102 SEQ=3129645797 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FA00750000000001030307) 
Feb 20 09:17:32 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39592 DF PROTO=TCP SPT=38152 DPT=9882 SEQ=556686058 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FA0B400000000001030307) 
Feb 20 09:17:36 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39593 DF PROTO=TCP SPT=38152 DPT=9882 SEQ=556686058 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FA1B000000000001030307) 
Feb 20 09:17:36 np0005625203.localdomain sudo[166821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:17:36 np0005625203.localdomain dbus-broker-launch[754]: avc:  op=load_policy lsm=selinux seqno=20 res=1
Feb 20 09:17:36 np0005625203.localdomain sudo[166821]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:17:36 np0005625203.localdomain sudo[166821]: pam_unix(sudo:session): session closed for user root
Feb 20 09:17:36 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:17:36 np0005625203.localdomain sudo[166840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 09:17:36 np0005625203.localdomain sudo[166840]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:17:36 np0005625203.localdomain podman[166839]: 2026-02-20 09:17:36.340908607 +0000 UTC m=+0.094542531 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 20 09:17:36 np0005625203.localdomain podman[166839]: 2026-02-20 09:17:36.417182235 +0000 UTC m=+0.170816119 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 20 09:17:36 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:17:36 np0005625203.localdomain sudo[166840]: pam_unix(sudo:session): session closed for user root
Feb 20 09:17:37 np0005625203.localdomain sudo[166919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:17:37 np0005625203.localdomain sudo[166919]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:17:37 np0005625203.localdomain sudo[166919]: pam_unix(sudo:session): session closed for user root
Feb 20 09:17:37 np0005625203.localdomain kernel: SELinux:  Converting 2752 SID table entries...
Feb 20 09:17:37 np0005625203.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Feb 20 09:17:37 np0005625203.localdomain kernel: SELinux:  policy capability open_perms=1
Feb 20 09:17:37 np0005625203.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Feb 20 09:17:37 np0005625203.localdomain kernel: SELinux:  policy capability always_check_network=0
Feb 20 09:17:37 np0005625203.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 20 09:17:37 np0005625203.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 20 09:17:37 np0005625203.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 20 09:17:38 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27854 DF PROTO=TCP SPT=39042 DPT=9105 SEQ=1585392998 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FA22C00000000001030307) 
Feb 20 09:17:38 np0005625203.localdomain dbus-broker-launch[754]: avc:  op=load_policy lsm=selinux seqno=21 res=1
Feb 20 09:17:38 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:17:38 np0005625203.localdomain podman[166940]: 2026-02-20 09:17:38.797555603 +0000 UTC m=+0.088383204 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Feb 20 09:17:38 np0005625203.localdomain podman[166940]: 2026-02-20 09:17:38.830325636 +0000 UTC m=+0.121153227 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Feb 20 09:17:38 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:17:41 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25097 DF PROTO=TCP SPT=39552 DPT=9105 SEQ=3447352545 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FA2E800000000001030307) 
Feb 20 09:17:44 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39594 DF PROTO=TCP SPT=38152 DPT=9882 SEQ=556686058 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FA3A800000000001030307) 
Feb 20 09:17:45 np0005625203.localdomain kernel: SELinux:  Converting 2752 SID table entries...
Feb 20 09:17:45 np0005625203.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Feb 20 09:17:45 np0005625203.localdomain kernel: SELinux:  policy capability open_perms=1
Feb 20 09:17:45 np0005625203.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Feb 20 09:17:45 np0005625203.localdomain kernel: SELinux:  policy capability always_check_network=0
Feb 20 09:17:45 np0005625203.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 20 09:17:45 np0005625203.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 20 09:17:45 np0005625203.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 20 09:17:46 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:17:46 np0005625203.localdomain dbus-broker-launch[754]: avc:  op=load_policy lsm=selinux seqno=22 res=1
Feb 20 09:17:46 np0005625203.localdomain systemd-rc-local-generator[166995]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:17:46 np0005625203.localdomain systemd-sysv-generator[166998]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:17:46 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:17:46 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:17:47 np0005625203.localdomain systemd-rc-local-generator[167027]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:17:47 np0005625203.localdomain systemd-sysv-generator[167031]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:17:47 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29499 DF PROTO=TCP SPT=46260 DPT=9101 SEQ=2162521400 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FA45C00000000001030307) 
Feb 20 09:17:47 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:17:50 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26189 DF PROTO=TCP SPT=54810 DPT=9100 SEQ=974116460 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FA51C00000000001030307) 
Feb 20 09:17:53 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=29501 DF PROTO=TCP SPT=46260 DPT=9101 SEQ=2162521400 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FA5D800000000001030307) 
Feb 20 09:17:53 np0005625203.localdomain sshd[167048]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:17:53 np0005625203.localdomain sshd[167048]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:17:55 np0005625203.localdomain kernel: SELinux:  Converting 2753 SID table entries...
Feb 20 09:17:55 np0005625203.localdomain kernel: SELinux:  policy capability network_peer_controls=1
Feb 20 09:17:55 np0005625203.localdomain kernel: SELinux:  policy capability open_perms=1
Feb 20 09:17:55 np0005625203.localdomain kernel: SELinux:  policy capability extended_socket_class=1
Feb 20 09:17:55 np0005625203.localdomain kernel: SELinux:  policy capability always_check_network=0
Feb 20 09:17:55 np0005625203.localdomain kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 20 09:17:55 np0005625203.localdomain kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 20 09:17:55 np0005625203.localdomain kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Feb 20 09:17:56 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26191 DF PROTO=TCP SPT=54810 DPT=9100 SEQ=974116460 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FA69800000000001030307) 
Feb 20 09:17:56 np0005625203.localdomain groupadd[167060]: group added to /etc/group: name=clevis, GID=985
Feb 20 09:17:56 np0005625203.localdomain groupadd[167060]: group added to /etc/gshadow: name=clevis
Feb 20 09:17:56 np0005625203.localdomain groupadd[167060]: new group: name=clevis, GID=985
Feb 20 09:17:56 np0005625203.localdomain useradd[167067]: new user: name=clevis, UID=985, GID=985, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Feb 20 09:17:56 np0005625203.localdomain usermod[167077]: add 'clevis' to group 'tss'
Feb 20 09:17:56 np0005625203.localdomain usermod[167077]: add 'clevis' to shadow group 'tss'
Feb 20 09:17:58 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42144 DF PROTO=TCP SPT=35742 DPT=9882 SEQ=17414181 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FA746D0000000001030307) 
Feb 20 09:17:59 np0005625203.localdomain groupadd[167102]: group added to /etc/group: name=dnsmasq, GID=984
Feb 20 09:17:59 np0005625203.localdomain groupadd[167102]: group added to /etc/gshadow: name=dnsmasq
Feb 20 09:17:59 np0005625203.localdomain groupadd[167102]: new group: name=dnsmasq, GID=984
Feb 20 09:17:59 np0005625203.localdomain useradd[167109]: new user: name=dnsmasq, UID=984, GID=984, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Feb 20 09:17:59 np0005625203.localdomain dbus-broker-launch[750]: Noticed file-system modification, trigger reload.
Feb 20 09:17:59 np0005625203.localdomain dbus-broker-launch[754]: avc:  op=load_policy lsm=selinux seqno=23 res=1
Feb 20 09:17:59 np0005625203.localdomain dbus-broker-launch[750]: Noticed file-system modification, trigger reload.
Feb 20 09:18:02 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42146 DF PROTO=TCP SPT=35742 DPT=9882 SEQ=17414181 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FA80800000000001030307) 
Feb 20 09:18:06 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=42147 DF PROTO=TCP SPT=35742 DPT=9882 SEQ=17414181 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FA90400000000001030307) 
Feb 20 09:18:06 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:18:06 np0005625203.localdomain systemd[1]: tmp-crun.31zU8L.mount: Deactivated successfully.
Feb 20 09:18:06 np0005625203.localdomain podman[167123]: 2026-02-20 09:18:06.794812522 +0000 UTC m=+0.109854028 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127)
Feb 20 09:18:06 np0005625203.localdomain podman[167123]: 2026-02-20 09:18:06.874299996 +0000 UTC m=+0.189341512 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 20 09:18:06 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:18:07 np0005625203.localdomain sshd[167149]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:18:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:18:07.628 161112 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:18:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:18:07.629 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:18:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:18:07.630 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:18:08 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22289 DF PROTO=TCP SPT=45560 DPT=9105 SEQ=1733277404 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FA98010000000001030307) 
Feb 20 09:18:08 np0005625203.localdomain sshd[167149]: Invalid user n8n from 118.99.80.29 port 12650
Feb 20 09:18:08 np0005625203.localdomain sshd[167149]: Received disconnect from 118.99.80.29 port 12650:11: Bye Bye [preauth]
Feb 20 09:18:08 np0005625203.localdomain sshd[167149]: Disconnected from invalid user n8n 118.99.80.29 port 12650 [preauth]
Feb 20 09:18:08 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:18:08 np0005625203.localdomain systemd[1]: tmp-crun.ZgDiQb.mount: Deactivated successfully.
Feb 20 09:18:08 np0005625203.localdomain podman[167151]: 2026-02-20 09:18:08.989783601 +0000 UTC m=+0.082688189 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2)
Feb 20 09:18:09 np0005625203.localdomain podman[167151]: 2026-02-20 09:18:09.025243069 +0000 UTC m=+0.118147607 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 20 09:18:09 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:18:11 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25906 DF PROTO=TCP SPT=37798 DPT=9105 SEQ=2528737337 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FAA4800000000001030307) 
Feb 20 09:18:14 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22291 DF PROTO=TCP SPT=45560 DPT=9105 SEQ=1733277404 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FAAFC00000000001030307) 
Feb 20 09:18:15 np0005625203.localdomain sshd[169149]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:18:15 np0005625203.localdomain sshd[169149]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:18:16 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53717 DF PROTO=TCP SPT=32912 DPT=9101 SEQ=1572338886 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FABAC00000000001030307) 
Feb 20 09:18:20 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33488 DF PROTO=TCP SPT=52118 DPT=9101 SEQ=399829122 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FAC6810000000001030307) 
Feb 20 09:18:23 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=9474 DF PROTO=TCP SPT=44672 DPT=9100 SEQ=942249817 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FAD2800000000001030307) 
Feb 20 09:18:26 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=55593 DF PROTO=TCP SPT=57884 DPT=9100 SEQ=984004986 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FADEC00000000001030307) 
Feb 20 09:18:28 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62484 DF PROTO=TCP SPT=51438 DPT=9882 SEQ=2157995020 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FAE99D0000000001030307) 
Feb 20 09:18:32 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62486 DF PROTO=TCP SPT=51438 DPT=9882 SEQ=2157995020 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FAF5C00000000001030307) 
Feb 20 09:18:36 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=62487 DF PROTO=TCP SPT=51438 DPT=9882 SEQ=2157995020 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FB05800000000001030307) 
Feb 20 09:18:37 np0005625203.localdomain polkitd[1028]: Reloading rules
Feb 20 09:18:37 np0005625203.localdomain polkitd[1028]: Collecting garbage unconditionally...
Feb 20 09:18:37 np0005625203.localdomain polkitd[1028]: Loading rules from directory /etc/polkit-1/rules.d
Feb 20 09:18:37 np0005625203.localdomain polkitd[1028]: Loading rules from directory /usr/share/polkit-1/rules.d
Feb 20 09:18:37 np0005625203.localdomain polkitd[1028]: Finished loading, compiling and executing 5 rules
Feb 20 09:18:37 np0005625203.localdomain polkitd[1028]: Reloading rules
Feb 20 09:18:37 np0005625203.localdomain polkitd[1028]: Collecting garbage unconditionally...
Feb 20 09:18:37 np0005625203.localdomain polkitd[1028]: Loading rules from directory /etc/polkit-1/rules.d
Feb 20 09:18:37 np0005625203.localdomain polkitd[1028]: Loading rules from directory /usr/share/polkit-1/rules.d
Feb 20 09:18:37 np0005625203.localdomain polkitd[1028]: Finished loading, compiling and executing 5 rules
Feb 20 09:18:37 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:18:37 np0005625203.localdomain podman[184279]: 2026-02-20 09:18:37.765128102 +0000 UTC m=+0.077627024 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, container_name=ovn_controller, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 20 09:18:37 np0005625203.localdomain podman[184279]: 2026-02-20 09:18:37.834329763 +0000 UTC m=+0.146828695 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 20 09:18:37 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:18:37 np0005625203.localdomain sudo[184327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:18:37 np0005625203.localdomain sudo[184327]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:18:37 np0005625203.localdomain sudo[184327]: pam_unix(sudo:session): session closed for user root
Feb 20 09:18:37 np0005625203.localdomain sudo[184367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 09:18:37 np0005625203.localdomain sudo[184367]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:18:38 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20120 DF PROTO=TCP SPT=40502 DPT=9105 SEQ=3504947489 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FB0D400000000001030307) 
Feb 20 09:18:38 np0005625203.localdomain sudo[184367]: pam_unix(sudo:session): session closed for user root
Feb 20 09:18:38 np0005625203.localdomain sshd[184465]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:18:38 np0005625203.localdomain groupadd[184468]: group added to /etc/group: name=ceph, GID=167
Feb 20 09:18:38 np0005625203.localdomain groupadd[184468]: group added to /etc/gshadow: name=ceph
Feb 20 09:18:38 np0005625203.localdomain groupadd[184468]: new group: name=ceph, GID=167
Feb 20 09:18:38 np0005625203.localdomain sshd[184465]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:18:38 np0005625203.localdomain useradd[184474]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Feb 20 09:18:39 np0005625203.localdomain sudo[184481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:18:39 np0005625203.localdomain sudo[184481]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:18:39 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:18:39 np0005625203.localdomain sudo[184481]: pam_unix(sudo:session): session closed for user root
Feb 20 09:18:39 np0005625203.localdomain podman[184499]: 2026-02-20 09:18:39.453856818 +0000 UTC m=+0.086712019 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Feb 20 09:18:39 np0005625203.localdomain podman[184499]: 2026-02-20 09:18:39.461593029 +0000 UTC m=+0.094448240 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:18:39 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:18:40 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27859 DF PROTO=TCP SPT=39042 DPT=9105 SEQ=1585392998 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FB18800000000001030307) 
Feb 20 09:18:42 np0005625203.localdomain systemd[1]: Stopping OpenSSH server daemon...
Feb 20 09:18:42 np0005625203.localdomain sshd[120046]: Received signal 15; terminating.
Feb 20 09:18:42 np0005625203.localdomain systemd[1]: sshd.service: Deactivated successfully.
Feb 20 09:18:42 np0005625203.localdomain systemd[1]: Stopped OpenSSH server daemon.
Feb 20 09:18:42 np0005625203.localdomain systemd[1]: sshd.service: Consumed 2.293s CPU time, read 32.0K from disk, written 0B to disk.
Feb 20 09:18:42 np0005625203.localdomain systemd[1]: Stopped target sshd-keygen.target.
Feb 20 09:18:42 np0005625203.localdomain systemd[1]: Stopping sshd-keygen.target...
Feb 20 09:18:42 np0005625203.localdomain systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 20 09:18:42 np0005625203.localdomain systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 20 09:18:42 np0005625203.localdomain systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Feb 20 09:18:42 np0005625203.localdomain systemd[1]: Reached target sshd-keygen.target.
Feb 20 09:18:42 np0005625203.localdomain systemd[1]: Starting OpenSSH server daemon...
Feb 20 09:18:42 np0005625203.localdomain sshd[185224]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:18:42 np0005625203.localdomain sshd[185224]: Server listening on 0.0.0.0 port 22.
Feb 20 09:18:42 np0005625203.localdomain sshd[185224]: Server listening on :: port 22.
Feb 20 09:18:42 np0005625203.localdomain systemd[1]: Started OpenSSH server daemon.
Feb 20 09:18:42 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:42 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:42 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:42 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:42 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:42 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:42 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:42 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:42 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:42 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:42 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:43 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:43 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:43 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:43 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:43 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:43 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:43 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:43 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:43 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:43 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:43 np0005625203.localdomain sshd[185412]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:18:44 np0005625203.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 20 09:18:44 np0005625203.localdomain systemd[1]: Starting man-db-cache-update.service...
Feb 20 09:18:44 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:18:44 np0005625203.localdomain systemd-rc-local-generator[185452]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:18:44 np0005625203.localdomain systemd-sysv-generator[185458]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:18:44 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=20122 DF PROTO=TCP SPT=40502 DPT=9105 SEQ=3504947489 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FB25000000000001030307) 
Feb 20 09:18:44 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:44 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/libvirtd.service:29: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:44 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:18:44 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:44 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:44 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:44 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:44 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:44 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:44 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:44 np0005625203.localdomain systemd[1]: Queuing reload/restart jobs for marked units…
Feb 20 09:18:44 np0005625203.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 20 09:18:45 np0005625203.localdomain sshd[185412]: Received disconnect from 103.48.192.48 port 57353:11: Bye Bye [preauth]
Feb 20 09:18:45 np0005625203.localdomain sshd[185412]: Disconnected from authenticating user root 103.48.192.48 port 57353 [preauth]
Feb 20 09:18:46 np0005625203.localdomain sudo[165646]: pam_unix(sudo:session): session closed for user root
Feb 20 09:18:47 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45860 DF PROTO=TCP SPT=59420 DPT=9101 SEQ=3767655687 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FB30010000000001030307) 
Feb 20 09:18:48 np0005625203.localdomain sudo[190470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-doarndqhczwkymehkkebggnplbbheeac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579127.6808572-985-147908148440972/AnsiballZ_systemd.py
Feb 20 09:18:48 np0005625203.localdomain sudo[190470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:18:48 np0005625203.localdomain python3.9[190511]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 20 09:18:48 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:18:48 np0005625203.localdomain systemd-rc-local-generator[190939]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:18:48 np0005625203.localdomain systemd-sysv-generator[190944]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:18:48 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:48 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:18:48 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:48 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:48 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:48 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:48 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:48 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:48 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:48 np0005625203.localdomain sudo[190470]: pam_unix(sudo:session): session closed for user root
Feb 20 09:18:49 np0005625203.localdomain sudo[191394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-afrkyybqrrmorvnpouganvldzkbbssmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579129.1043167-985-10466038596300/AnsiballZ_systemd.py
Feb 20 09:18:49 np0005625203.localdomain sudo[191394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:18:49 np0005625203.localdomain python3.9[191404]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 20 09:18:49 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:18:49 np0005625203.localdomain systemd-rc-local-generator[191490]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:18:49 np0005625203.localdomain systemd-sysv-generator[191494]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:18:49 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:49 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:18:49 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:49 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:49 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:49 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:49 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:49 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:49 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:50 np0005625203.localdomain sudo[191394]: pam_unix(sudo:session): session closed for user root
Feb 20 09:18:50 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12627 DF PROTO=TCP SPT=33562 DPT=9100 SEQ=2971503102 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FB3C410000000001030307) 
Feb 20 09:18:50 np0005625203.localdomain sudo[191980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oifdfxqorjfunzhtfapfpentfrxelpfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579130.4736197-985-45781589841961/AnsiballZ_systemd.py
Feb 20 09:18:50 np0005625203.localdomain sudo[191980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:18:51 np0005625203.localdomain python3.9[191999]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 20 09:18:51 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:18:51 np0005625203.localdomain systemd-sysv-generator[192257]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:18:51 np0005625203.localdomain systemd-rc-local-generator[192254]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:18:51 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:51 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:18:51 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:51 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:51 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:51 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:51 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:51 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:51 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:51 np0005625203.localdomain sudo[191980]: pam_unix(sudo:session): session closed for user root
Feb 20 09:18:52 np0005625203.localdomain sudo[192977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vymqxdhpflajyegplztrpuzpoyecfojq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579131.5853977-985-213722374683780/AnsiballZ_systemd.py
Feb 20 09:18:52 np0005625203.localdomain sudo[192977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:18:52 np0005625203.localdomain python3.9[192997]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 20 09:18:52 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:18:52 np0005625203.localdomain systemd-rc-local-generator[193232]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:18:52 np0005625203.localdomain systemd-sysv-generator[193237]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:18:52 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:52 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:18:52 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:52 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:52 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:52 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:52 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:52 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:52 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:53 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45862 DF PROTO=TCP SPT=59420 DPT=9101 SEQ=3767655687 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FB47C00000000001030307) 
Feb 20 09:18:53 np0005625203.localdomain sudo[192977]: pam_unix(sudo:session): session closed for user root
Feb 20 09:18:54 np0005625203.localdomain sudo[194272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yeufcgwzjjymqxhewxkywvfkdxdodzmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579134.4210086-1072-92456795951255/AnsiballZ_systemd.py
Feb 20 09:18:54 np0005625203.localdomain sudo[194272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:18:55 np0005625203.localdomain python3.9[194295]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 20 09:18:55 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:18:55 np0005625203.localdomain systemd-rc-local-generator[194539]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:18:55 np0005625203.localdomain systemd-sysv-generator[194545]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:18:55 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:55 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:18:55 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:55 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:55 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:55 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:55 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:55 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:55 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:55 np0005625203.localdomain sudo[194272]: pam_unix(sudo:session): session closed for user root
Feb 20 09:18:55 np0005625203.localdomain systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 20 09:18:55 np0005625203.localdomain systemd[1]: Finished man-db-cache-update.service.
Feb 20 09:18:55 np0005625203.localdomain systemd[1]: man-db-cache-update.service: Consumed 14.047s CPU time.
Feb 20 09:18:55 np0005625203.localdomain systemd[1]: run-rbc04cce4bcd44b9e853509ed0ac8fe53.service: Deactivated successfully.
Feb 20 09:18:55 np0005625203.localdomain systemd[1]: run-r6574d50285684162a57e084db5727c65.service: Deactivated successfully.
Feb 20 09:18:55 np0005625203.localdomain sudo[194778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sqjmacsbouwsrcovncaubsxupeyzgued ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579135.5184891-1072-40626826797743/AnsiballZ_systemd.py
Feb 20 09:18:55 np0005625203.localdomain sudo[194778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:18:56 np0005625203.localdomain python3.9[194780]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 20 09:18:56 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:18:56 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12629 DF PROTO=TCP SPT=33562 DPT=9100 SEQ=2971503102 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FB54000000000001030307) 
Feb 20 09:18:56 np0005625203.localdomain systemd-sysv-generator[194813]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:18:56 np0005625203.localdomain systemd-rc-local-generator[194808]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:18:56 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:56 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:56 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:18:56 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:56 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:56 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:56 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:56 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:56 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:56 np0005625203.localdomain sudo[194778]: pam_unix(sudo:session): session closed for user root
Feb 20 09:18:56 np0005625203.localdomain sudo[194927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jusjyutddwgyugnjfygwgwaggrptopqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579136.6122775-1072-222234851940846/AnsiballZ_systemd.py
Feb 20 09:18:56 np0005625203.localdomain sudo[194927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:18:57 np0005625203.localdomain python3.9[194929]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 20 09:18:58 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:18:58 np0005625203.localdomain systemd-sysv-generator[194959]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:18:58 np0005625203.localdomain systemd-rc-local-generator[194955]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:18:58 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:58 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:58 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:58 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:18:58 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:58 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:58 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:58 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:58 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:18:58 np0005625203.localdomain sudo[194927]: pam_unix(sudo:session): session closed for user root
Feb 20 09:18:58 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59713 DF PROTO=TCP SPT=34934 DPT=9882 SEQ=3010404031 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FB5ECD0000000001030307) 
Feb 20 09:18:58 np0005625203.localdomain sudo[195076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hohedthqdjtukothpskdresbxckvtmdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579138.7308736-1072-139361300207158/AnsiballZ_systemd.py
Feb 20 09:18:59 np0005625203.localdomain sudo[195076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:18:59 np0005625203.localdomain python3.9[195078]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 20 09:19:00 np0005625203.localdomain sudo[195076]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:01 np0005625203.localdomain sudo[195189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xlcvuhwxysqwtrcksslqmgfcjjqhnqgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579140.515656-1072-18244734282740/AnsiballZ_systemd.py
Feb 20 09:19:01 np0005625203.localdomain sudo[195189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:01 np0005625203.localdomain python3.9[195191]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 20 09:19:02 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59715 DF PROTO=TCP SPT=34934 DPT=9882 SEQ=3010404031 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FB6AC00000000001030307) 
Feb 20 09:19:02 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:19:02 np0005625203.localdomain systemd-rc-local-generator[195216]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:19:02 np0005625203.localdomain systemd-sysv-generator[195222]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:19:02 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:19:02 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 09:19:02 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:19:02 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:19:02 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:19:02 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 09:19:02 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:19:02 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:19:02 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:19:02 np0005625203.localdomain sudo[195189]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:03 np0005625203.localdomain sudo[195338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eqbzpmssivufediemvfepsowqiacwtbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579143.2179208-1180-274675842049608/AnsiballZ_systemd.py
Feb 20 09:19:03 np0005625203.localdomain sudo[195338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:03 np0005625203.localdomain python3.9[195340]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 20 09:19:03 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:19:03 np0005625203.localdomain systemd-rc-local-generator[195368]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:19:03 np0005625203.localdomain systemd-sysv-generator[195373]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:19:03 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:19:03 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 09:19:03 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:19:03 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:19:03 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:19:04 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 09:19:04 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:19:04 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:19:04 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:19:04 np0005625203.localdomain sudo[195338]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:04 np0005625203.localdomain sudo[195487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dhsldrxuabhomxcfbqinwfduiutzucqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579144.4482095-1204-273249027299907/AnsiballZ_systemd.py
Feb 20 09:19:04 np0005625203.localdomain sudo[195487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:05 np0005625203.localdomain python3.9[195489]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 20 09:19:05 np0005625203.localdomain sudo[195487]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:05 np0005625203.localdomain sudo[195600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-famqufrowqgnoiamkkozdiypyiglrtbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579145.2414331-1204-125921732074368/AnsiballZ_systemd.py
Feb 20 09:19:05 np0005625203.localdomain sudo[195600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:05 np0005625203.localdomain python3.9[195602]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 20 09:19:05 np0005625203.localdomain sudo[195600]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:06 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59716 DF PROTO=TCP SPT=34934 DPT=9882 SEQ=3010404031 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FB7A800000000001030307) 
Feb 20 09:19:06 np0005625203.localdomain sudo[195713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dxhzhniutyyihblmcduexxaksqarpsju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579146.048684-1204-47839875103720/AnsiballZ_systemd.py
Feb 20 09:19:06 np0005625203.localdomain sudo[195713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:06 np0005625203.localdomain python3.9[195715]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 20 09:19:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:19:07.631 161112 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:19:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:19:07.632 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:19:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:19:07.632 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:19:07 np0005625203.localdomain sudo[195713]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:08 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61487 DF PROTO=TCP SPT=34092 DPT=9105 SEQ=3351343978 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FB82400000000001030307) 
Feb 20 09:19:08 np0005625203.localdomain sudo[195826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ythcyysnnrgnnfleizdqsijsuewwawvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579147.8483553-1204-35284377300242/AnsiballZ_systemd.py
Feb 20 09:19:08 np0005625203.localdomain sudo[195826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:08 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:19:08 np0005625203.localdomain podman[195829]: 2026-02-20 09:19:08.314253963 +0000 UTC m=+0.086869414 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Feb 20 09:19:08 np0005625203.localdomain podman[195829]: 2026-02-20 09:19:08.388261721 +0000 UTC m=+0.160877192 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 20 09:19:08 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:19:08 np0005625203.localdomain python3.9[195828]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 20 09:19:09 np0005625203.localdomain sudo[195826]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:09 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:19:09 np0005625203.localdomain systemd[1]: tmp-crun.y2YN6P.mount: Deactivated successfully.
Feb 20 09:19:09 np0005625203.localdomain podman[195875]: 2026-02-20 09:19:09.80908921 +0000 UTC m=+0.091485714 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Feb 20 09:19:09 np0005625203.localdomain podman[195875]: 2026-02-20 09:19:09.842138221 +0000 UTC m=+0.124534755 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 20 09:19:09 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:19:09 np0005625203.localdomain sudo[195981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kgewzfdtzkumdykslaiuxfzliznolbnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579149.7395554-1204-179732212685198/AnsiballZ_systemd.py
Feb 20 09:19:09 np0005625203.localdomain sudo[195981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:10 np0005625203.localdomain python3.9[195983]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 20 09:19:11 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22294 DF PROTO=TCP SPT=45560 DPT=9105 SEQ=1733277404 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FB8E800000000001030307) 
Feb 20 09:19:11 np0005625203.localdomain sudo[195981]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:11 np0005625203.localdomain sudo[196094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-whdsvzgexqkxkgbmygcjlsjejkirctme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579151.4812841-1204-209692796899541/AnsiballZ_systemd.py
Feb 20 09:19:11 np0005625203.localdomain sudo[196094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:12 np0005625203.localdomain python3.9[196096]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 20 09:19:12 np0005625203.localdomain sudo[196094]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:12 np0005625203.localdomain sudo[196207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lahuoguqpgiwktelkwuoxigwhalvnrcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579152.2142549-1204-63086788197603/AnsiballZ_systemd.py
Feb 20 09:19:12 np0005625203.localdomain sudo[196207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:12 np0005625203.localdomain python3.9[196209]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 20 09:19:12 np0005625203.localdomain sudo[196207]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:13 np0005625203.localdomain sudo[196320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mnxvvwbbzvgzvapsqhuvgicsxdgwcywp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579153.4581704-1204-212868346359670/AnsiballZ_systemd.py
Feb 20 09:19:13 np0005625203.localdomain sudo[196320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:14 np0005625203.localdomain python3.9[196322]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 20 09:19:14 np0005625203.localdomain sudo[196320]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:14 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61489 DF PROTO=TCP SPT=34092 DPT=9105 SEQ=3351343978 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FB9A000000000001030307) 
Feb 20 09:19:14 np0005625203.localdomain sudo[196433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yfsphvunwelakrsthnpruubvgmvrxzrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579154.2383602-1204-71865340594182/AnsiballZ_systemd.py
Feb 20 09:19:14 np0005625203.localdomain sudo[196433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:14 np0005625203.localdomain python3.9[196435]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 20 09:19:14 np0005625203.localdomain sudo[196433]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:16 np0005625203.localdomain sudo[196546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mdfuekuavpcjfaohgsvriweozrsiijtd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579155.094932-1204-199517155879316/AnsiballZ_systemd.py
Feb 20 09:19:16 np0005625203.localdomain sudo[196546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:16 np0005625203.localdomain python3.9[196548]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 20 09:19:16 np0005625203.localdomain sudo[196546]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:17 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3699 DF PROTO=TCP SPT=57598 DPT=9101 SEQ=1168880026 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FBA5400000000001030307) 
Feb 20 09:19:17 np0005625203.localdomain sudo[196659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wlxzdcpoxgefxbihahnrhudkoyxcfsqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579156.8358884-1204-234500075802236/AnsiballZ_systemd.py
Feb 20 09:19:17 np0005625203.localdomain sudo[196659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:17 np0005625203.localdomain python3.9[196661]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 20 09:19:17 np0005625203.localdomain sudo[196659]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:17 np0005625203.localdomain sudo[196772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iwruogouqjqvjucreooxtfetwuvugsvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579157.615576-1204-212984347226464/AnsiballZ_systemd.py
Feb 20 09:19:17 np0005625203.localdomain sudo[196772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:18 np0005625203.localdomain python3.9[196774]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 20 09:19:19 np0005625203.localdomain sudo[196772]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:19 np0005625203.localdomain sudo[196885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ddnjjalpnqxxtahrdufmsxqxliphvzds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579159.4225957-1204-105811054477444/AnsiballZ_systemd.py
Feb 20 09:19:19 np0005625203.localdomain sudo[196885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:19 np0005625203.localdomain python3.9[196887]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 20 09:19:20 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17991 DF PROTO=TCP SPT=39880 DPT=9100 SEQ=1880786971 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FBB1800000000001030307) 
Feb 20 09:19:21 np0005625203.localdomain sudo[196885]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:21 np0005625203.localdomain sudo[196998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wepaatfnpictnexincdorhrqijlvjdzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579161.1721957-1204-234325867218748/AnsiballZ_systemd.py
Feb 20 09:19:21 np0005625203.localdomain sudo[196998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:21 np0005625203.localdomain python3.9[197000]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Feb 20 09:19:21 np0005625203.localdomain sudo[196998]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:22 np0005625203.localdomain sudo[197111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kjijexfzelewxrllujmsmzevtudpwehi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579162.2549548-1510-229781684356281/AnsiballZ_file.py
Feb 20 09:19:22 np0005625203.localdomain sudo[197111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:22 np0005625203.localdomain python3.9[197113]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:19:22 np0005625203.localdomain sudo[197111]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:22 np0005625203.localdomain sshd[197144]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:19:23 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3701 DF PROTO=TCP SPT=57598 DPT=9101 SEQ=1168880026 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FBBD000000000001030307) 
Feb 20 09:19:23 np0005625203.localdomain sudo[197223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zfylgcxotlnexkciubyqqplbsocnlnmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579162.865878-1510-216097228889787/AnsiballZ_file.py
Feb 20 09:19:23 np0005625203.localdomain sudo[197223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:23 np0005625203.localdomain sshd[197144]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:19:23 np0005625203.localdomain python3.9[197225]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:19:23 np0005625203.localdomain sudo[197223]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:23 np0005625203.localdomain sshd[197260]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:19:23 np0005625203.localdomain sshd[197260]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:19:23 np0005625203.localdomain sudo[197335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iqqvhjgxcfvzdhpgkduvsptedkysgiao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579163.5236752-1510-247484813149394/AnsiballZ_file.py
Feb 20 09:19:23 np0005625203.localdomain sudo[197335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:23 np0005625203.localdomain python3.9[197337]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:19:23 np0005625203.localdomain sudo[197335]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:25 np0005625203.localdomain sudo[197445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eezrjbedwwtprbbvbpxqfhkvequycodu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579164.1107426-1510-228939918547897/AnsiballZ_file.py
Feb 20 09:19:25 np0005625203.localdomain sudo[197445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:25 np0005625203.localdomain python3.9[197447]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:19:25 np0005625203.localdomain sudo[197445]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:25 np0005625203.localdomain sudo[197555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nseutlcxrplvsruojnrvpopghbrcqurn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579165.419066-1510-22760281231569/AnsiballZ_file.py
Feb 20 09:19:25 np0005625203.localdomain sudo[197555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:25 np0005625203.localdomain python3.9[197557]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:19:25 np0005625203.localdomain sudo[197555]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:26 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17993 DF PROTO=TCP SPT=39880 DPT=9100 SEQ=1880786971 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FBC9400000000001030307) 
Feb 20 09:19:26 np0005625203.localdomain sudo[197665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tdqvgqmsvhufckexhawcftnsrnmfpbhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579166.0299501-1510-125197619489321/AnsiballZ_file.py
Feb 20 09:19:26 np0005625203.localdomain sudo[197665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:26 np0005625203.localdomain python3.9[197667]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:19:26 np0005625203.localdomain sudo[197665]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:27 np0005625203.localdomain python3.9[197775]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 20 09:19:28 np0005625203.localdomain sudo[197883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rqoyqtakulvsisbxgklmamfjbpboqqjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579167.7600393-1663-10693881090138/AnsiballZ_stat.py
Feb 20 09:19:28 np0005625203.localdomain sudo[197883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:28 np0005625203.localdomain python3.9[197885]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:19:28 np0005625203.localdomain sudo[197883]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:29 np0005625203.localdomain sudo[197973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ykjqrnkvfwehvmegrcxoahxqlsphopbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579167.7600393-1663-10693881090138/AnsiballZ_copy.py
Feb 20 09:19:29 np0005625203.localdomain sudo[197973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:29 np0005625203.localdomain python3.9[197975]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1771579167.7600393-1663-10693881090138/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:19:29 np0005625203.localdomain sudo[197973]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:29 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24219 DF PROTO=TCP SPT=40700 DPT=9102 SEQ=852664082 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FBD5350000000001030307) 
Feb 20 09:19:29 np0005625203.localdomain sudo[198083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mgmcqqwozpgyndqzzcuweeyldnnqbbva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579169.40626-1663-243147223854062/AnsiballZ_stat.py
Feb 20 09:19:29 np0005625203.localdomain sudo[198083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:29 np0005625203.localdomain python3.9[198085]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:19:29 np0005625203.localdomain sudo[198083]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:30 np0005625203.localdomain sudo[198173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-okwwvegayrfpdcmdregzcaojujoxhpva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579169.40626-1663-243147223854062/AnsiballZ_copy.py
Feb 20 09:19:30 np0005625203.localdomain sudo[198173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:30 np0005625203.localdomain python3.9[198175]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1771579169.40626-1663-243147223854062/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:19:30 np0005625203.localdomain sudo[198173]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:30 np0005625203.localdomain sudo[198283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bsjtxwbxyubrtnnhudvjfdjumzodxyny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579170.55257-1663-25650369019952/AnsiballZ_stat.py
Feb 20 09:19:30 np0005625203.localdomain sudo[198283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:31 np0005625203.localdomain python3.9[198285]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:19:31 np0005625203.localdomain sudo[198283]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:31 np0005625203.localdomain sudo[198373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-upurxpktxewgpyvqjrdbqdmcvvedzout ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579170.55257-1663-25650369019952/AnsiballZ_copy.py
Feb 20 09:19:31 np0005625203.localdomain sudo[198373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:31 np0005625203.localdomain python3.9[198375]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1771579170.55257-1663-25650369019952/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:19:31 np0005625203.localdomain sudo[198373]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:32 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57470 DF PROTO=TCP SPT=43302 DPT=9882 SEQ=1866985418 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FBE0000000000001030307) 
Feb 20 09:19:32 np0005625203.localdomain sudo[198483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dvfsrypwonjqqafvumjgblbhsligrdtw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579171.93636-1663-258903456416418/AnsiballZ_stat.py
Feb 20 09:19:32 np0005625203.localdomain sudo[198483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:32 np0005625203.localdomain python3.9[198485]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:19:32 np0005625203.localdomain sudo[198483]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:32 np0005625203.localdomain sudo[198573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xmjlwcnjqyrkpvxcdvxntpojfloqqhok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579171.93636-1663-258903456416418/AnsiballZ_copy.py
Feb 20 09:19:32 np0005625203.localdomain sudo[198573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:32 np0005625203.localdomain python3.9[198575]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1771579171.93636-1663-258903456416418/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:19:32 np0005625203.localdomain sudo[198573]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:33 np0005625203.localdomain sudo[198683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rovbguqcnrcghhrxxeapknmzjmbngzcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579173.1140838-1663-7979421064702/AnsiballZ_stat.py
Feb 20 09:19:33 np0005625203.localdomain sudo[198683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:33 np0005625203.localdomain python3.9[198685]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:19:33 np0005625203.localdomain sudo[198683]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:33 np0005625203.localdomain sudo[198773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ovlsxsxfgqmqfaqyceuorffnrzaqehbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579173.1140838-1663-7979421064702/AnsiballZ_copy.py
Feb 20 09:19:33 np0005625203.localdomain sudo[198773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:34 np0005625203.localdomain python3.9[198775]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1771579173.1140838-1663-7979421064702/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=8d9b2057482987a531d808ceb2ac4bc7d43bf17c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:19:34 np0005625203.localdomain sudo[198773]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:35 np0005625203.localdomain sudo[198883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tqdyjdcziuhtcaafnktousqkapgbuzsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579174.2680836-1663-247160465493336/AnsiballZ_stat.py
Feb 20 09:19:35 np0005625203.localdomain sudo[198883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:35 np0005625203.localdomain python3.9[198885]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:19:35 np0005625203.localdomain sudo[198883]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:35 np0005625203.localdomain sudo[198973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zvccrextcemnpcpustyaepnncadfxobj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579174.2680836-1663-247160465493336/AnsiballZ_copy.py
Feb 20 09:19:35 np0005625203.localdomain sudo[198973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:35 np0005625203.localdomain python3.9[198975]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1771579174.2680836-1663-247160465493336/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:19:35 np0005625203.localdomain sudo[198973]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:36 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57471 DF PROTO=TCP SPT=43302 DPT=9882 SEQ=1866985418 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FBEFC00000000001030307) 
Feb 20 09:19:36 np0005625203.localdomain sudo[199083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iqrclgzskumjpiwrcmckmxgfxaggxylg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579176.0170465-1663-146940163042770/AnsiballZ_stat.py
Feb 20 09:19:36 np0005625203.localdomain sudo[199083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:36 np0005625203.localdomain python3.9[199085]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:19:36 np0005625203.localdomain sudo[199083]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:36 np0005625203.localdomain sshd[199086]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:19:37 np0005625203.localdomain sudo[199173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xrdoatnahxgggkyocibexqkvthddjcpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579176.0170465-1663-146940163042770/AnsiballZ_copy.py
Feb 20 09:19:37 np0005625203.localdomain sudo[199173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:37 np0005625203.localdomain sshd[199086]: Received disconnect from 194.107.115.2 port 33908:11: Bye Bye [preauth]
Feb 20 09:19:37 np0005625203.localdomain sshd[199086]: Disconnected from authenticating user root 194.107.115.2 port 33908 [preauth]
Feb 20 09:19:37 np0005625203.localdomain python3.9[199175]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1771579176.0170465-1663-146940163042770/.source.conf follow=False _original_basename=auth.conf checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:19:37 np0005625203.localdomain sudo[199173]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:38 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14534 DF PROTO=TCP SPT=49888 DPT=9105 SEQ=1427152165 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FBF7810000000001030307) 
Feb 20 09:19:38 np0005625203.localdomain sudo[199283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lnhpqltnxtmrqbbcutsfdyaldzxzlavp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579177.920832-1663-148499651547321/AnsiballZ_stat.py
Feb 20 09:19:38 np0005625203.localdomain sudo[199283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:38 np0005625203.localdomain python3.9[199285]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:19:38 np0005625203.localdomain sudo[199283]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:38 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:19:38 np0005625203.localdomain podman[199338]: 2026-02-20 09:19:38.776472434 +0000 UTC m=+0.090084017 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:19:38 np0005625203.localdomain sudo[199385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-duqsnmkobfjxecdhzkfkjngcjpzcptzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579177.920832-1663-148499651547321/AnsiballZ_copy.py
Feb 20 09:19:38 np0005625203.localdomain sudo[199385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:38 np0005625203.localdomain podman[199338]: 2026-02-20 09:19:38.841705926 +0000 UTC m=+0.155317529 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.build-date=20260127, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible)
Feb 20 09:19:38 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:19:38 np0005625203.localdomain python3.9[199396]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1771579177.920832-1663-148499651547321/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:19:39 np0005625203.localdomain sudo[199385]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:39 np0005625203.localdomain sudo[199457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:19:39 np0005625203.localdomain sudo[199457]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:19:39 np0005625203.localdomain sudo[199457]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:39 np0005625203.localdomain sudo[199504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 09:19:39 np0005625203.localdomain sudo[199504]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:19:39 np0005625203.localdomain sudo[199543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rjbyprzwggcgwgxpdxoeittryzisibsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579179.3870175-2005-69448864211703/AnsiballZ_file.py
Feb 20 09:19:39 np0005625203.localdomain sudo[199543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:39 np0005625203.localdomain python3.9[199546]: ansible-ansible.builtin.file Invoked with path=/etc/libvirt/passwd.db state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:19:39 np0005625203.localdomain sudo[199543]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:40 np0005625203.localdomain sudo[199504]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:40 np0005625203.localdomain sudo[199685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ravixvftkgczdbsjaywmbsmdzncxabom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579180.0596085-2029-96210322885728/AnsiballZ_file.py
Feb 20 09:19:40 np0005625203.localdomain sudo[199685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:40 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:19:40 np0005625203.localdomain podman[199688]: 2026-02-20 09:19:40.420857043 +0000 UTC m=+0.076682528 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Feb 20 09:19:40 np0005625203.localdomain podman[199688]: 2026-02-20 09:19:40.456331897 +0000 UTC m=+0.112157382 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 20 09:19:40 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:19:40 np0005625203.localdomain python3.9[199687]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:19:40 np0005625203.localdomain sudo[199685]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:40 np0005625203.localdomain sudo[199828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-trcxibgdcgtdsbbtvomgbccevaaqccgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579180.6830237-2029-238114448066459/AnsiballZ_file.py
Feb 20 09:19:40 np0005625203.localdomain sudo[199800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:19:40 np0005625203.localdomain sudo[199828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:40 np0005625203.localdomain sudo[199800]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:19:40 np0005625203.localdomain sudo[199800]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:41 np0005625203.localdomain python3.9[199831]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:19:41 np0005625203.localdomain sudo[199828]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:41 np0005625203.localdomain sudo[199940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fznyyqupxhtfjjmoienahzezswpanduh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579181.2901232-2029-226446972927485/AnsiballZ_file.py
Feb 20 09:19:41 np0005625203.localdomain sudo[199940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:41 np0005625203.localdomain python3.9[199942]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:19:41 np0005625203.localdomain sudo[199940]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:42 np0005625203.localdomain sudo[200050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uyhunezplrbqnziybxkshkbqafdlcxnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579181.8813274-2029-279660330642022/AnsiballZ_file.py
Feb 20 09:19:42 np0005625203.localdomain sudo[200050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:42 np0005625203.localdomain python3.9[200052]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:19:42 np0005625203.localdomain sudo[200050]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:42 np0005625203.localdomain sudo[200160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kkitfplpxzttdlpmjyuxgqxjiorzocsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579182.4587882-2029-47417439155322/AnsiballZ_file.py
Feb 20 09:19:42 np0005625203.localdomain sudo[200160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:42 np0005625203.localdomain python3.9[200162]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:19:42 np0005625203.localdomain sudo[200160]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:43 np0005625203.localdomain sudo[200270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zjxywdlgtrbhroflrmltfkcuxpblqkqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579183.1088645-2029-271693918848258/AnsiballZ_file.py
Feb 20 09:19:43 np0005625203.localdomain sudo[200270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:43 np0005625203.localdomain python3.9[200272]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:19:43 np0005625203.localdomain sudo[200270]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:43 np0005625203.localdomain sudo[200380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wuomuisjfffhsuyltwcriuesaodmptkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579183.7413247-2029-49636037604737/AnsiballZ_file.py
Feb 20 09:19:43 np0005625203.localdomain sudo[200380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:44 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14536 DF PROTO=TCP SPT=49888 DPT=9105 SEQ=1427152165 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FC0F410000000001030307) 
Feb 20 09:19:44 np0005625203.localdomain python3.9[200382]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:19:44 np0005625203.localdomain sudo[200380]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:44 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24223 DF PROTO=TCP SPT=40700 DPT=9102 SEQ=852664082 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FC10800000000001030307) 
Feb 20 09:19:44 np0005625203.localdomain sudo[200490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wxntcitonvcxgiqnancboauuadnzuvun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579184.3109818-2029-25771852198368/AnsiballZ_file.py
Feb 20 09:19:44 np0005625203.localdomain sudo[200490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:44 np0005625203.localdomain python3.9[200492]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:19:44 np0005625203.localdomain sudo[200490]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:45 np0005625203.localdomain sudo[200600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uzgehidrwultvukroyxldfkizwrhcnbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579184.916005-2029-166166525130952/AnsiballZ_file.py
Feb 20 09:19:45 np0005625203.localdomain sudo[200600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:45 np0005625203.localdomain python3.9[200602]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:19:45 np0005625203.localdomain sudo[200600]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:45 np0005625203.localdomain sudo[200710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ztbyachaeyrynxqnfoabumabyhgrwvdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579185.507855-2029-111255071615489/AnsiballZ_file.py
Feb 20 09:19:45 np0005625203.localdomain sudo[200710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:45 np0005625203.localdomain python3.9[200712]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:19:45 np0005625203.localdomain sudo[200710]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:46 np0005625203.localdomain sudo[200820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cdincajzpugczcaecqewzdpvcblysbci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579186.0821185-2029-22735326563950/AnsiballZ_file.py
Feb 20 09:19:46 np0005625203.localdomain sudo[200820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:47 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54953 DF PROTO=TCP SPT=49328 DPT=9101 SEQ=505562078 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FC1A800000000001030307) 
Feb 20 09:19:47 np0005625203.localdomain python3.9[200822]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:19:47 np0005625203.localdomain sudo[200820]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:47 np0005625203.localdomain sudo[200930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-otesguipezuxytxlqqsfxyuwcaawwnbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579187.3021872-2029-191021524665546/AnsiballZ_file.py
Feb 20 09:19:47 np0005625203.localdomain sudo[200930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 20 09:19:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 6000.1 total, 600.0 interval
                                                          Cumulative writes: 4844 writes, 21K keys, 4844 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
                                                          Cumulative WAL: 4844 writes, 660 syncs, 7.34 writes per sync, written: 0.02 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          
                                                          ** Compaction Stats [default] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      2/0    2.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                                           Sum      2/0    2.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [default] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55f8e79fa2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [default] **
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55f8e79fa2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-0] **
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55f8e79fa2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-1] **
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55f8e79fa2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-2] **
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.57 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Sum      1/0    1.57 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55f8e79fa2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-0] **
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55f8e79fa2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-1] **
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55f8e79fa2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-2] **
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55f8e79fb610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-0] **
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55f8e79fb610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-1] **
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.26 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                           Sum      1/0    1.26 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55f8e79fb610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-2] **
                                                          
                                                          ** Compaction Stats [L] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [L] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55f8e79fa2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [L] **
                                                          
                                                          ** Compaction Stats [P] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [P] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x55f8e79fa2d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [P] **
Feb 20 09:19:47 np0005625203.localdomain python3.9[200932]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:19:47 np0005625203.localdomain sudo[200930]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:48 np0005625203.localdomain sudo[201040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bgqnungmdgcvnufttprspnqmapauljek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579187.889601-2029-245900911148674/AnsiballZ_file.py
Feb 20 09:19:48 np0005625203.localdomain sudo[201040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:48 np0005625203.localdomain python3.9[201042]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:19:48 np0005625203.localdomain sudo[201040]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:49 np0005625203.localdomain sudo[201150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kpsmzmyxxifqtpbhqiofhksavaxpvyol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579189.074261-2029-187809664412742/AnsiballZ_file.py
Feb 20 09:19:49 np0005625203.localdomain sudo[201150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:49 np0005625203.localdomain python3.9[201152]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:19:49 np0005625203.localdomain sudo[201150]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:50 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30031 DF PROTO=TCP SPT=58580 DPT=9100 SEQ=2106536304 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FC26800000000001030307) 
Feb 20 09:19:50 np0005625203.localdomain sudo[201260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rgdkznbehvbskenyalcpsdadpdngfxrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579190.0181563-2326-39624375211713/AnsiballZ_stat.py
Feb 20 09:19:50 np0005625203.localdomain sudo[201260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:50 np0005625203.localdomain python3.9[201262]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:19:50 np0005625203.localdomain sudo[201260]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:50 np0005625203.localdomain sudo[201348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-letereztyakpouxcsvfiixecmdxxmkvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579190.0181563-2326-39624375211713/AnsiballZ_copy.py
Feb 20 09:19:50 np0005625203.localdomain sudo[201348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:51 np0005625203.localdomain python3.9[201350]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771579190.0181563-2326-39624375211713/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:19:51 np0005625203.localdomain sudo[201348]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:51 np0005625203.localdomain sudo[201458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-znuguqtcwetihkdhmcpjaugdfcuravdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579191.177597-2326-152393227300966/AnsiballZ_stat.py
Feb 20 09:19:51 np0005625203.localdomain sudo[201458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:51 np0005625203.localdomain python3.9[201460]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:19:51 np0005625203.localdomain sudo[201458]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 20 09:19:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 6000.1 total, 600.0 interval
                                                          Cumulative writes: 5843 writes, 25K keys, 5843 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
                                                          Cumulative WAL: 5843 writes, 764 syncs, 7.65 writes per sync, written: 0.02 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          
                                                          ** Compaction Stats [default] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      2/0    2.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                                           Sum      2/0    2.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [default] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x558a5e9c22d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 6.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [default] **
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x558a5e9c22d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 6.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-0] **
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x558a5e9c22d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 6.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-1] **
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [m-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x558a5e9c22d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 6.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [m-2] **
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.57 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                                           Sum      1/0    1.57 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x558a5e9c22d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 6.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-0] **
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x558a5e9c22d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 6.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-1] **
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [p-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x558a5e9c22d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 6.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [p-2] **
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-0] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x558a5e9c3610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 1.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-0] **
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-1] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x558a5e9c3610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 1.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-1] **
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      1/0    1.26 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                                           Sum      1/0    1.26 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [O-2] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x558a5e9c3610#2 capacity: 272.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 1.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,7.2928e-05%) FilterBlock(1,0.11 KB,3.92689e-05%) IndexBlock(1,0.14 KB,5.04886e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [O-2] **
                                                          
                                                          ** Compaction Stats [L] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [L] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x558a5e9c22d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 6.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [L] **
                                                          
                                                          ** Compaction Stats [P] **
                                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                                          
                                                          ** Compaction Stats [P] **
                                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                          
                                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                          
                                                          Uptime(secs): 6000.1 total, 4800.0 interval
                                                          Flush(GB): cumulative 0.000, interval 0.000
                                                          AddFile(GB): cumulative 0.000, interval 0.000
                                                          AddFile(Total Files): cumulative 0, interval 0
                                                          AddFile(L0 Files): cumulative 0, interval 0
                                                          AddFile(Keys): cumulative 0, interval 0
                                                          Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                          Block cache BinnedLRUCache@0x558a5e9c22d0#2 capacity: 1.62 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 6.2e-05 secs_since: 0
                                                          Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,8.34465e-05%) FilterBlock(3,0.33 KB,1.92569e-05%) IndexBlock(3,0.34 KB,2.01739e-05%) Misc(1,0.00 KB,0%)
                                                          
                                                          ** File Read Latency Histogram By Level [P] **
Feb 20 09:19:52 np0005625203.localdomain sudo[201546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-glctzlzdkweippsrvwzpafyzobqktamc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579191.177597-2326-152393227300966/AnsiballZ_copy.py
Feb 20 09:19:52 np0005625203.localdomain sudo[201546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:52 np0005625203.localdomain python3.9[201548]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771579191.177597-2326-152393227300966/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:19:52 np0005625203.localdomain sudo[201546]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:52 np0005625203.localdomain sudo[201656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mzkzdblwamxfbgjsmybmtascscvjpxlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579192.3589227-2326-195558608760614/AnsiballZ_stat.py
Feb 20 09:19:52 np0005625203.localdomain sudo[201656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:52 np0005625203.localdomain python3.9[201658]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:19:52 np0005625203.localdomain sudo[201656]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:53 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54955 DF PROTO=TCP SPT=49328 DPT=9101 SEQ=505562078 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FC32400000000001030307) 
Feb 20 09:19:53 np0005625203.localdomain sudo[201744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cawifuasztumwukekxdqfsevcieyhffi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579192.3589227-2326-195558608760614/AnsiballZ_copy.py
Feb 20 09:19:53 np0005625203.localdomain sudo[201744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:53 np0005625203.localdomain python3.9[201746]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771579192.3589227-2326-195558608760614/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:19:53 np0005625203.localdomain sudo[201744]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:53 np0005625203.localdomain sudo[201854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gnwjzhxcebidtjhvlewkpmhmbpievvxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579193.5391479-2326-163927722162689/AnsiballZ_stat.py
Feb 20 09:19:53 np0005625203.localdomain sudo[201854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:54 np0005625203.localdomain python3.9[201856]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:19:54 np0005625203.localdomain sudo[201854]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:54 np0005625203.localdomain sudo[201942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eecptcciwneybiqinafmfatrcrqyexuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579193.5391479-2326-163927722162689/AnsiballZ_copy.py
Feb 20 09:19:54 np0005625203.localdomain sudo[201942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:54 np0005625203.localdomain python3.9[201944]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771579193.5391479-2326-163927722162689/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:19:54 np0005625203.localdomain sudo[201942]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:54 np0005625203.localdomain sudo[202052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-frptntqfqallhkjsethhvczjjkbauzwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579194.6970813-2326-54007675019313/AnsiballZ_stat.py
Feb 20 09:19:54 np0005625203.localdomain sudo[202052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:55 np0005625203.localdomain python3.9[202054]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:19:55 np0005625203.localdomain sudo[202052]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:55 np0005625203.localdomain sudo[202140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aqwvzdckyajgxbjzgmrwlxrcoemvhiwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579194.6970813-2326-54007675019313/AnsiballZ_copy.py
Feb 20 09:19:55 np0005625203.localdomain sudo[202140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:55 np0005625203.localdomain python3.9[202142]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771579194.6970813-2326-54007675019313/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:19:55 np0005625203.localdomain sudo[202140]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:56 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30033 DF PROTO=TCP SPT=58580 DPT=9100 SEQ=2106536304 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FC3E410000000001030307) 
Feb 20 09:19:56 np0005625203.localdomain sudo[202250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hdskernwwbplcrhllxqkuzujsujjdyth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579196.011082-2326-193595627374622/AnsiballZ_stat.py
Feb 20 09:19:56 np0005625203.localdomain sudo[202250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:56 np0005625203.localdomain python3.9[202252]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:19:56 np0005625203.localdomain sudo[202250]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:56 np0005625203.localdomain sudo[202338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gqbxpocmgzxspscawiavmhlxtzahjiku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579196.011082-2326-193595627374622/AnsiballZ_copy.py
Feb 20 09:19:56 np0005625203.localdomain sudo[202338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:57 np0005625203.localdomain python3.9[202340]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771579196.011082-2326-193595627374622/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:19:57 np0005625203.localdomain sudo[202338]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:57 np0005625203.localdomain sudo[202448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ybyxaaacdazjuhjkkhfvtncbxkbgzeku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579197.1380618-2326-98675565104700/AnsiballZ_stat.py
Feb 20 09:19:57 np0005625203.localdomain sudo[202448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:57 np0005625203.localdomain python3.9[202450]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:19:57 np0005625203.localdomain sudo[202448]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:57 np0005625203.localdomain sudo[202536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zznijymosfnkwnrwfdcapmqiufexyixr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579197.1380618-2326-98675565104700/AnsiballZ_copy.py
Feb 20 09:19:57 np0005625203.localdomain sudo[202536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:58 np0005625203.localdomain python3.9[202538]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771579197.1380618-2326-98675565104700/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:19:58 np0005625203.localdomain sudo[202536]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:58 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19945 DF PROTO=TCP SPT=49164 DPT=9882 SEQ=1670029319 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FC492D0000000001030307) 
Feb 20 09:19:59 np0005625203.localdomain sudo[202646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pqczkuwoiiixkwqfboqdbwfittgtxjsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579198.76122-2326-82199161373125/AnsiballZ_stat.py
Feb 20 09:19:59 np0005625203.localdomain sudo[202646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:59 np0005625203.localdomain python3.9[202648]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:19:59 np0005625203.localdomain sudo[202646]: pam_unix(sudo:session): session closed for user root
Feb 20 09:19:59 np0005625203.localdomain sudo[202734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xqwhcxghfagxkuwbfnqzwhwlblrnnlxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579198.76122-2326-82199161373125/AnsiballZ_copy.py
Feb 20 09:19:59 np0005625203.localdomain sudo[202734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:19:59 np0005625203.localdomain python3.9[202736]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771579198.76122-2326-82199161373125/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:19:59 np0005625203.localdomain sudo[202734]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:00 np0005625203.localdomain sudo[202844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sceqceupyhqimhsknwjqqazvlrzjloui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579199.9035878-2326-242262438997258/AnsiballZ_stat.py
Feb 20 09:20:00 np0005625203.localdomain sudo[202844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:00 np0005625203.localdomain python3.9[202846]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:20:00 np0005625203.localdomain sudo[202844]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:01 np0005625203.localdomain sudo[202932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lstlhbikmmgwtrjfztjmxwfbxjhaobqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579199.9035878-2326-242262438997258/AnsiballZ_copy.py
Feb 20 09:20:01 np0005625203.localdomain sudo[202932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:01 np0005625203.localdomain python3.9[202934]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771579199.9035878-2326-242262438997258/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:20:01 np0005625203.localdomain sudo[202932]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:01 np0005625203.localdomain sudo[203042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gpbgzxbdzfdqzgcrtxirspnqkmyqmngk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579201.481789-2326-148641002651150/AnsiballZ_stat.py
Feb 20 09:20:01 np0005625203.localdomain sudo[203042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:01 np0005625203.localdomain python3.9[203044]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:20:01 np0005625203.localdomain sudo[203042]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:02 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19947 DF PROTO=TCP SPT=49164 DPT=9882 SEQ=1670029319 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FC55400000000001030307) 
Feb 20 09:20:02 np0005625203.localdomain sudo[203130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eaoxcvslstsoaokoggerayauwerrattn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579201.481789-2326-148641002651150/AnsiballZ_copy.py
Feb 20 09:20:02 np0005625203.localdomain sudo[203130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:02 np0005625203.localdomain python3.9[203132]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771579201.481789-2326-148641002651150/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:20:02 np0005625203.localdomain sudo[203130]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:02 np0005625203.localdomain sudo[203240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-higccquinrfkbftfzqsadykysemowekl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579202.6111765-2326-111511055555924/AnsiballZ_stat.py
Feb 20 09:20:02 np0005625203.localdomain sudo[203240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:03 np0005625203.localdomain python3.9[203242]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:20:03 np0005625203.localdomain sudo[203240]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:03 np0005625203.localdomain sudo[203328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-atmebccwxzvopgdwywxgfqoutfzwlzdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579202.6111765-2326-111511055555924/AnsiballZ_copy.py
Feb 20 09:20:03 np0005625203.localdomain sudo[203328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:04 np0005625203.localdomain python3.9[203330]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771579202.6111765-2326-111511055555924/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:20:04 np0005625203.localdomain sudo[203328]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:04 np0005625203.localdomain sshd[203364]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:20:04 np0005625203.localdomain sudo[203440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-afkbyavckowunwdkvwgxxbvatnvuigcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579204.5661573-2326-142217266322447/AnsiballZ_stat.py
Feb 20 09:20:04 np0005625203.localdomain sudo[203440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:05 np0005625203.localdomain python3.9[203442]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:20:05 np0005625203.localdomain sudo[203440]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:05 np0005625203.localdomain sudo[203528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jmarfetqrepowbuhajgartahkqapqgbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579204.5661573-2326-142217266322447/AnsiballZ_copy.py
Feb 20 09:20:05 np0005625203.localdomain sudo[203528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:05 np0005625203.localdomain python3.9[203530]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771579204.5661573-2326-142217266322447/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:20:05 np0005625203.localdomain sudo[203528]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:05 np0005625203.localdomain sshd[203364]: Invalid user claude from 103.61.123.132 port 44336
Feb 20 09:20:06 np0005625203.localdomain sudo[203638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cmffidzyhewryrhorgftobxeuknjlije ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579205.7379928-2326-105551898698107/AnsiballZ_stat.py
Feb 20 09:20:06 np0005625203.localdomain sudo[203638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:06 np0005625203.localdomain sshd[203640]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:20:06 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19948 DF PROTO=TCP SPT=49164 DPT=9882 SEQ=1670029319 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FC65000000000001030307) 
Feb 20 09:20:06 np0005625203.localdomain python3.9[203641]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:20:06 np0005625203.localdomain sudo[203638]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:06 np0005625203.localdomain sshd[203364]: Received disconnect from 103.61.123.132 port 44336:11: Bye Bye [preauth]
Feb 20 09:20:06 np0005625203.localdomain sshd[203364]: Disconnected from invalid user claude 103.61.123.132 port 44336 [preauth]
Feb 20 09:20:06 np0005625203.localdomain sshd[203659]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:20:06 np0005625203.localdomain sudo[203730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-shputytnupjvxcxeiyqdkdcjmfgcyrke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579205.7379928-2326-105551898698107/AnsiballZ_copy.py
Feb 20 09:20:06 np0005625203.localdomain sudo[203730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:06 np0005625203.localdomain sshd[203659]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:20:06 np0005625203.localdomain python3.9[203732]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771579205.7379928-2326-105551898698107/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:20:06 np0005625203.localdomain sudo[203730]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:07 np0005625203.localdomain sudo[203840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pfcodekhxtqqtrtgbqgxjwxcgokreobv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579206.9041612-2326-144683635416301/AnsiballZ_stat.py
Feb 20 09:20:07 np0005625203.localdomain sudo[203840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:07 np0005625203.localdomain python3.9[203842]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:20:07 np0005625203.localdomain sudo[203840]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:20:07.631 161112 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:20:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:20:07.632 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:20:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:20:07.632 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:20:07 np0005625203.localdomain sudo[203928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wbasxyxrpqarwhizimgrtkbcoapauuoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579206.9041612-2326-144683635416301/AnsiballZ_copy.py
Feb 20 09:20:07 np0005625203.localdomain sudo[203928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:07 np0005625203.localdomain python3.9[203930]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771579206.9041612-2326-144683635416301/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:20:07 np0005625203.localdomain sudo[203928]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:08 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52750 DF PROTO=TCP SPT=56776 DPT=9105 SEQ=545263210 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FC6CC00000000001030307) 
Feb 20 09:20:08 np0005625203.localdomain sshd[203640]: Received disconnect from 152.32.129.236 port 37096:11: Bye Bye [preauth]
Feb 20 09:20:08 np0005625203.localdomain sshd[203640]: Disconnected from authenticating user root 152.32.129.236 port 37096 [preauth]
Feb 20 09:20:08 np0005625203.localdomain python3.9[204038]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                                            ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:20:09 np0005625203.localdomain sudo[204149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yhsebklppuxubzdbrwodmczbfepvegol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579208.8509755-2944-157041725773371/AnsiballZ_seboolean.py
Feb 20 09:20:09 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:20:09 np0005625203.localdomain sudo[204149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:09 np0005625203.localdomain podman[204151]: 2026-02-20 09:20:09.467753315 +0000 UTC m=+0.087027257 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127)
Feb 20 09:20:09 np0005625203.localdomain podman[204151]: 2026-02-20 09:20:09.504620445 +0000 UTC m=+0.123894377 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 20 09:20:09 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:20:09 np0005625203.localdomain python3.9[204152]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Feb 20 09:20:09 np0005625203.localdomain sudo[204149]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:10 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:20:10 np0005625203.localdomain sudo[204285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kbuvuyrcizvlwqosvilacrtddkxtsibo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579210.4344149-2974-10568667022989/AnsiballZ_systemd.py
Feb 20 09:20:10 np0005625203.localdomain sudo[204285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:10 np0005625203.localdomain podman[204279]: 2026-02-20 09:20:10.779611351 +0000 UTC m=+0.089645603 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:20:10 np0005625203.localdomain podman[204279]: 2026-02-20 09:20:10.812336675 +0000 UTC m=+0.122370947 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Feb 20 09:20:10 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:20:11 np0005625203.localdomain python3.9[204298]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 20 09:20:11 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:20:11 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61492 DF PROTO=TCP SPT=34092 DPT=9105 SEQ=3351343978 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FC78800000000001030307) 
Feb 20 09:20:11 np0005625203.localdomain systemd-sysv-generator[204334]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:20:11 np0005625203.localdomain systemd-rc-local-generator[204326]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:20:11 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:11 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:11 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:11 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:11 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:20:11 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:11 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:11 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:11 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:11 np0005625203.localdomain systemd[1]: Starting libvirt logging daemon socket...
Feb 20 09:20:11 np0005625203.localdomain systemd[1]: Listening on libvirt logging daemon socket.
Feb 20 09:20:11 np0005625203.localdomain systemd[1]: Starting libvirt logging daemon admin socket...
Feb 20 09:20:11 np0005625203.localdomain systemd[1]: Listening on libvirt logging daemon admin socket.
Feb 20 09:20:11 np0005625203.localdomain systemd[1]: Starting libvirt logging daemon...
Feb 20 09:20:11 np0005625203.localdomain systemd[1]: Started libvirt logging daemon.
Feb 20 09:20:11 np0005625203.localdomain sudo[204285]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:11 np0005625203.localdomain sudo[204454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qgdonftvtusuwvsmdnmcqqlbpfcdhwrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579211.659007-2974-105636995579470/AnsiballZ_systemd.py
Feb 20 09:20:11 np0005625203.localdomain sudo[204454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:12 np0005625203.localdomain python3.9[204456]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 20 09:20:12 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:20:12 np0005625203.localdomain systemd-rc-local-generator[204479]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:20:12 np0005625203.localdomain systemd-sysv-generator[204483]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:20:12 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:12 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:12 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:12 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:12 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:20:12 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:12 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:12 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:12 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:12 np0005625203.localdomain systemd[1]: Starting libvirt nodedev daemon socket...
Feb 20 09:20:12 np0005625203.localdomain systemd[1]: Listening on libvirt nodedev daemon socket.
Feb 20 09:20:12 np0005625203.localdomain systemd[1]: Starting libvirt nodedev daemon admin socket...
Feb 20 09:20:12 np0005625203.localdomain systemd[1]: Starting libvirt nodedev daemon read-only socket...
Feb 20 09:20:12 np0005625203.localdomain systemd[1]: Listening on libvirt nodedev daemon admin socket.
Feb 20 09:20:12 np0005625203.localdomain systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Feb 20 09:20:12 np0005625203.localdomain systemd[1]: Started libvirt nodedev daemon.
Feb 20 09:20:12 np0005625203.localdomain sudo[204454]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:13 np0005625203.localdomain systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Feb 20 09:20:13 np0005625203.localdomain sudo[204630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-octfpboltzsqjirnrnxiyudnncyyaiyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579212.748066-2974-95118272858863/AnsiballZ_systemd.py
Feb 20 09:20:13 np0005625203.localdomain sudo[204630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:14 np0005625203.localdomain systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Feb 20 09:20:14 np0005625203.localdomain python3.9[204632]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 20 09:20:14 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:20:14 np0005625203.localdomain systemd-rc-local-generator[204659]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:20:14 np0005625203.localdomain systemd-sysv-generator[204662]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:20:14 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=19949 DF PROTO=TCP SPT=49164 DPT=9882 SEQ=1670029319 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FC84800000000001030307) 
Feb 20 09:20:14 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:14 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:14 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:14 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:14 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:20:14 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:14 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:14 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:14 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:14 np0005625203.localdomain systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Feb 20 09:20:14 np0005625203.localdomain systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Feb 20 09:20:14 np0005625203.localdomain systemd[1]: Starting libvirt proxy daemon socket...
Feb 20 09:20:14 np0005625203.localdomain systemd[1]: Listening on libvirt proxy daemon socket.
Feb 20 09:20:14 np0005625203.localdomain systemd[1]: Starting libvirt proxy daemon admin socket...
Feb 20 09:20:14 np0005625203.localdomain systemd[1]: Starting libvirt proxy daemon read-only socket...
Feb 20 09:20:14 np0005625203.localdomain systemd[1]: Listening on libvirt proxy daemon admin socket.
Feb 20 09:20:14 np0005625203.localdomain systemd[1]: Listening on libvirt proxy daemon read-only socket.
Feb 20 09:20:14 np0005625203.localdomain systemd[1]: Started libvirt proxy daemon.
Feb 20 09:20:14 np0005625203.localdomain sudo[204630]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:14 np0005625203.localdomain sudo[204809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yoycmnwbvalbgngdyghcyjqeitypbiwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579214.5862432-2974-61893502464269/AnsiballZ_systemd.py
Feb 20 09:20:14 np0005625203.localdomain sudo[204809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:15 np0005625203.localdomain python3.9[204811]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 20 09:20:15 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:20:15 np0005625203.localdomain systemd-rc-local-generator[204830]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:20:15 np0005625203.localdomain systemd-sysv-generator[204837]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:20:15 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:15 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:15 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:15 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:15 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:20:15 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:15 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:15 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:15 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:15 np0005625203.localdomain setroubleshoot[204620]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 0082937b-3098-4e29-90cc-9a4d44003533
Feb 20 09:20:15 np0005625203.localdomain setroubleshoot[204620]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                                 
                                                                 *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                                 
                                                                 If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                                 Then turn on full auditing to get path information about the offending file and generate the error again.
                                                                 Do
                                                                 
                                                                 Turn on full auditing
                                                                 # auditctl -w /etc/shadow -p w
                                                                 Try to recreate AVC. Then execute
                                                                 # ausearch -m avc -ts recent
                                                                 If you see PATH record check ownership/permissions on file, and fix it,
                                                                 otherwise report as a bugzilla.
                                                                 
                                                                 *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                                 
                                                                 If you believe that virtlogd should have the dac_read_search capability by default.
                                                                 Then you should report this as a bug.
                                                                 You can generate a local policy module to allow this access.
                                                                 Do
                                                                 allow this access for now by executing:
                                                                 # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                                 # semodule -X 300 -i my-virtlogd.pp
                                                                 
Feb 20 09:20:15 np0005625203.localdomain systemd[1]: Listening on libvirt locking daemon socket.
Feb 20 09:20:15 np0005625203.localdomain systemd[1]: Starting libvirt QEMU daemon socket...
Feb 20 09:20:15 np0005625203.localdomain systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 20 09:20:15 np0005625203.localdomain systemd[1]: Starting Virtual Machine and Container Registration Service...
Feb 20 09:20:15 np0005625203.localdomain systemd[1]: Listening on libvirt QEMU daemon socket.
Feb 20 09:20:15 np0005625203.localdomain systemd[1]: Starting libvirt QEMU daemon admin socket...
Feb 20 09:20:15 np0005625203.localdomain systemd[1]: Starting libvirt QEMU daemon read-only socket...
Feb 20 09:20:15 np0005625203.localdomain systemd[1]: Listening on libvirt QEMU daemon admin socket.
Feb 20 09:20:15 np0005625203.localdomain systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Feb 20 09:20:15 np0005625203.localdomain systemd[1]: Started Virtual Machine and Container Registration Service.
Feb 20 09:20:15 np0005625203.localdomain systemd[1]: Started libvirt QEMU daemon.
Feb 20 09:20:15 np0005625203.localdomain sudo[204809]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:15 np0005625203.localdomain sudo[204984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hedwerivfkccyovlohpeuykihviusrkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579215.6888852-2974-270075664996169/AnsiballZ_systemd.py
Feb 20 09:20:15 np0005625203.localdomain sudo[204984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:16 np0005625203.localdomain python3.9[204986]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 20 09:20:16 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:20:16 np0005625203.localdomain systemd-rc-local-generator[205011]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:20:16 np0005625203.localdomain systemd-sysv-generator[205015]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:20:16 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:16 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:16 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:16 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:16 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:20:16 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:16 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:16 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:16 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:16 np0005625203.localdomain systemd[1]: Starting libvirt secret daemon socket...
Feb 20 09:20:16 np0005625203.localdomain systemd[1]: Listening on libvirt secret daemon socket.
Feb 20 09:20:16 np0005625203.localdomain systemd[1]: Starting libvirt secret daemon admin socket...
Feb 20 09:20:16 np0005625203.localdomain systemd[1]: Starting libvirt secret daemon read-only socket...
Feb 20 09:20:16 np0005625203.localdomain systemd[1]: Listening on libvirt secret daemon admin socket.
Feb 20 09:20:16 np0005625203.localdomain systemd[1]: Listening on libvirt secret daemon read-only socket.
Feb 20 09:20:16 np0005625203.localdomain systemd[1]: Started libvirt secret daemon.
Feb 20 09:20:16 np0005625203.localdomain sudo[204984]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:16 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23837 DF PROTO=TCP SPT=32968 DPT=9101 SEQ=3046730796 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FC8F810000000001030307) 
Feb 20 09:20:17 np0005625203.localdomain sudo[205155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zstoanijddxnyfbmfyaqskxwfymznbfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579217.2068672-3085-111720453931140/AnsiballZ_file.py
Feb 20 09:20:17 np0005625203.localdomain sudo[205155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:17 np0005625203.localdomain python3.9[205157]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:20:17 np0005625203.localdomain sudo[205155]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:18 np0005625203.localdomain sudo[205265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ixlsceoapqbrjgpiigixavpgagetgfpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579217.9204173-3109-32938310545759/AnsiballZ_find.py
Feb 20 09:20:18 np0005625203.localdomain sudo[205265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:18 np0005625203.localdomain python3.9[205267]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 20 09:20:18 np0005625203.localdomain sudo[205265]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:18 np0005625203.localdomain sudo[205375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rgzqggyelruyjhfbqnioevemwrwebbou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579218.5641565-3133-233297319135504/AnsiballZ_command.py
Feb 20 09:20:18 np0005625203.localdomain sudo[205375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:19 np0005625203.localdomain python3.9[205377]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                                            echo ceph
                                                            awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:20:19 np0005625203.localdomain sudo[205375]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:20 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31307 DF PROTO=TCP SPT=43416 DPT=9100 SEQ=3934864249 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FC9BC00000000001030307) 
Feb 20 09:20:20 np0005625203.localdomain python3.9[205489]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 20 09:20:21 np0005625203.localdomain python3.9[205597]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:20:22 np0005625203.localdomain python3.9[205683]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1771579220.8465989-3190-31736966080552/.source.xml follow=False _original_basename=secret.xml.j2 checksum=e299a5f369c62c832b857708260504de70ea24e6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:20:22 np0005625203.localdomain sudo[205791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-klozrxbxwnexuaomkijeylxibogrogth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579222.4907715-3235-65486163299879/AnsiballZ_command.py
Feb 20 09:20:22 np0005625203.localdomain sudo[205791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:22 np0005625203.localdomain python3.9[205793]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine a8557ee9-b55d-5519-942c-cf8f6172f1d8
                                                            virsh secret-define --file /tmp/secret.xml
                                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:20:22 np0005625203.localdomain polkitd[1028]: Registered Authentication Agent for unix-process:205795:978124 (system bus name :1.2840 [pkttyagent --process 205795 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale C.UTF-8)
Feb 20 09:20:23 np0005625203.localdomain polkitd[1028]: Unregistered Authentication Agent for unix-process:205795:978124 (system bus name :1.2840, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale C.UTF-8) (disconnected from bus)
Feb 20 09:20:23 np0005625203.localdomain polkitd[1028]: Registered Authentication Agent for unix-process:205794:978123 (system bus name :1.2841 [pkttyagent --process 205794 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale C.UTF-8)
Feb 20 09:20:23 np0005625203.localdomain polkitd[1028]: Unregistered Authentication Agent for unix-process:205794:978123 (system bus name :1.2841, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale C.UTF-8) (disconnected from bus)
Feb 20 09:20:23 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23839 DF PROTO=TCP SPT=32968 DPT=9101 SEQ=3046730796 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FCA7410000000001030307) 
Feb 20 09:20:23 np0005625203.localdomain sudo[205791]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:24 np0005625203.localdomain python3.9[205913]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:20:24 np0005625203.localdomain sudo[206021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vigsuforgwczaepcqilcemphsecfyhzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579224.5745063-3283-220692467099951/AnsiballZ_command.py
Feb 20 09:20:24 np0005625203.localdomain sudo[206021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:25 np0005625203.localdomain sudo[206021]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:25 np0005625203.localdomain systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Feb 20 09:20:25 np0005625203.localdomain systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.016s CPU time.
Feb 20 09:20:25 np0005625203.localdomain systemd[1]: setroubleshootd.service: Deactivated successfully.
Feb 20 09:20:25 np0005625203.localdomain sudo[206132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-colsrxcjchwsxjxfbwqpaupgdeghzubl ; FSID=a8557ee9-b55d-5519-942c-cf8f6172f1d8 KEY=AQDtD5hpAAAAABAA3WyXm9j+KcpKUe+kDHkLgg== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579225.220446-3307-10014987199187/AnsiballZ_command.py
Feb 20 09:20:25 np0005625203.localdomain sudo[206132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:25 np0005625203.localdomain polkitd[1028]: Registered Authentication Agent for unix-process:206135:978402 (system bus name :1.2844 [pkttyagent --process 206135 --notify-fd 5 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale C.UTF-8)
Feb 20 09:20:25 np0005625203.localdomain polkitd[1028]: Unregistered Authentication Agent for unix-process:206135:978402 (system bus name :1.2844, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale C.UTF-8) (disconnected from bus)
Feb 20 09:20:25 np0005625203.localdomain sudo[206132]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:26 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31309 DF PROTO=TCP SPT=43416 DPT=9100 SEQ=3934864249 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FCB3810000000001030307) 
Feb 20 09:20:26 np0005625203.localdomain sudo[206248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nmdeiehqzoracnewbllrteiyuczlhqle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579226.0860417-3331-23777015848386/AnsiballZ_copy.py
Feb 20 09:20:26 np0005625203.localdomain sudo[206248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:26 np0005625203.localdomain python3.9[206250]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:20:26 np0005625203.localdomain sudo[206248]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:27 np0005625203.localdomain sudo[206358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ypmnbrjppsryiniwoestpejpgvdcorpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579226.76066-3355-15160015592018/AnsiballZ_stat.py
Feb 20 09:20:27 np0005625203.localdomain sudo[206358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:27 np0005625203.localdomain python3.9[206360]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:20:27 np0005625203.localdomain sudo[206358]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:27 np0005625203.localdomain sudo[206446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mapnpfglmmxufyyacxabswuktknxmykw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579226.76066-3355-15160015592018/AnsiballZ_copy.py
Feb 20 09:20:27 np0005625203.localdomain sudo[206446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:27 np0005625203.localdomain python3.9[206448]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1771579226.76066-3355-15160015592018/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=dc5ee7162311c27a6084cbee4052b901d56cb1ba backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:20:27 np0005625203.localdomain sudo[206446]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:28 np0005625203.localdomain sudo[206556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tplhipheifjqfiopdjvildbreuqrwdnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579228.167155-3403-160999404131785/AnsiballZ_file.py
Feb 20 09:20:28 np0005625203.localdomain sudo[206556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:28 np0005625203.localdomain sshd[206559]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:20:28 np0005625203.localdomain python3.9[206558]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:20:28 np0005625203.localdomain sudo[206556]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:28 np0005625203.localdomain sshd[206559]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:20:28 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44746 DF PROTO=TCP SPT=59538 DPT=9882 SEQ=3239891562 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FCBE5F0000000001030307) 
Feb 20 09:20:29 np0005625203.localdomain sudo[206668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cfydvbtuumficozjfrxrharkssvnvvzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579229.1270828-3427-94484126925050/AnsiballZ_stat.py
Feb 20 09:20:29 np0005625203.localdomain sudo[206668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:29 np0005625203.localdomain python3.9[206670]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:20:29 np0005625203.localdomain sudo[206668]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:29 np0005625203.localdomain sudo[206725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uetoldexppwpcmngukewhcfwtsswlfed ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579229.1270828-3427-94484126925050/AnsiballZ_file.py
Feb 20 09:20:29 np0005625203.localdomain sudo[206725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:30 np0005625203.localdomain python3.9[206727]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:20:30 np0005625203.localdomain sudo[206725]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:30 np0005625203.localdomain sudo[206835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-udpxwwrvnczuqlubfhomghluixjuarzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579230.2486696-3463-62864479639056/AnsiballZ_stat.py
Feb 20 09:20:30 np0005625203.localdomain sudo[206835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:30 np0005625203.localdomain python3.9[206837]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:20:30 np0005625203.localdomain sudo[206835]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:30 np0005625203.localdomain sudo[206892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bwsyqeivlwjcycfpexhyubkjzbruwrzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579230.2486696-3463-62864479639056/AnsiballZ_file.py
Feb 20 09:20:30 np0005625203.localdomain sudo[206892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:31 np0005625203.localdomain python3.9[206894]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.8y0e7aec recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:20:31 np0005625203.localdomain sudo[206892]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:31 np0005625203.localdomain sudo[207002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-blafgdnvvicqsayegzccodarbvprldqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579231.39093-3499-136513436316899/AnsiballZ_stat.py
Feb 20 09:20:31 np0005625203.localdomain sudo[207002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:31 np0005625203.localdomain python3.9[207004]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:20:31 np0005625203.localdomain sudo[207002]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:32 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44748 DF PROTO=TCP SPT=59538 DPT=9882 SEQ=3239891562 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FCCA810000000001030307) 
Feb 20 09:20:32 np0005625203.localdomain sudo[207059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-alqevrckwggcpwwpndcarlmacwkumtsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579231.39093-3499-136513436316899/AnsiballZ_file.py
Feb 20 09:20:32 np0005625203.localdomain sudo[207059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:32 np0005625203.localdomain python3.9[207061]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:20:32 np0005625203.localdomain sudo[207059]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:33 np0005625203.localdomain sudo[207169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uxnlsswnjnfrqvgkltfkxrwbqvwcvvfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579233.2682447-3538-4876800154643/AnsiballZ_command.py
Feb 20 09:20:33 np0005625203.localdomain sudo[207169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:33 np0005625203.localdomain python3.9[207171]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:20:33 np0005625203.localdomain sudo[207169]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:35 np0005625203.localdomain sudo[207280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yqmhkkxddwqvwdolnspvqctprcgsefbg ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771579234.6096163-3562-127338247718737/AnsiballZ_edpm_nftables_from_files.py
Feb 20 09:20:35 np0005625203.localdomain sudo[207280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:35 np0005625203.localdomain python3[207282]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Feb 20 09:20:35 np0005625203.localdomain sudo[207280]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:37 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44749 DF PROTO=TCP SPT=59538 DPT=9882 SEQ=3239891562 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FCDA400000000001030307) 
Feb 20 09:20:38 np0005625203.localdomain sudo[207390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vhuabzsikkdpjvdsycujaclgidnrmgol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579237.609944-3587-94541428982476/AnsiballZ_stat.py
Feb 20 09:20:38 np0005625203.localdomain sudo[207390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:38 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57849 DF PROTO=TCP SPT=32966 DPT=9105 SEQ=3511011336 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FCE2010000000001030307) 
Feb 20 09:20:38 np0005625203.localdomain python3.9[207392]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:20:38 np0005625203.localdomain sudo[207390]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:38 np0005625203.localdomain sudo[207447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yjqkapbishhbccrhozpddbxqtorlfdjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579237.609944-3587-94541428982476/AnsiballZ_file.py
Feb 20 09:20:38 np0005625203.localdomain sudo[207447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:38 np0005625203.localdomain python3.9[207449]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:20:38 np0005625203.localdomain sudo[207447]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:39 np0005625203.localdomain sudo[207557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tsfaikirkijmzjztrxhxhxbsleyqfwrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579238.9031646-3622-223593599798993/AnsiballZ_stat.py
Feb 20 09:20:39 np0005625203.localdomain sudo[207557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:39 np0005625203.localdomain python3.9[207559]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:20:39 np0005625203.localdomain sudo[207557]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:39 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:20:39 np0005625203.localdomain sudo[207658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wkvrodwqbjktppzncdwmzamddaetavtv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579238.9031646-3622-223593599798993/AnsiballZ_copy.py
Feb 20 09:20:39 np0005625203.localdomain podman[207617]: 2026-02-20 09:20:39.778689147 +0000 UTC m=+0.090465725 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true)
Feb 20 09:20:39 np0005625203.localdomain sudo[207658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:39 np0005625203.localdomain podman[207617]: 2026-02-20 09:20:39.844358244 +0000 UTC m=+0.156134822 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 20 09:20:39 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:20:40 np0005625203.localdomain python3.9[207665]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771579238.9031646-3622-223593599798993/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:20:40 np0005625203.localdomain sudo[207658]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:40 np0005625203.localdomain sudo[207780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dpareyvrlueulkwrwozphalwimtgxhjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579240.5144305-3667-200900034760556/AnsiballZ_stat.py
Feb 20 09:20:40 np0005625203.localdomain sudo[207780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:41 np0005625203.localdomain python3.9[207782]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:20:41 np0005625203.localdomain sudo[207780]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:41 np0005625203.localdomain sudo[207785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:20:41 np0005625203.localdomain sudo[207785]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:20:41 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:20:41 np0005625203.localdomain sudo[207785]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:41 np0005625203.localdomain podman[207811]: 2026-02-20 09:20:41.274869858 +0000 UTC m=+0.082973771 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.vendor=CentOS)
Feb 20 09:20:41 np0005625203.localdomain podman[207811]: 2026-02-20 09:20:41.310110563 +0000 UTC m=+0.118214396 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Feb 20 09:20:41 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14539 DF PROTO=TCP SPT=49888 DPT=9105 SEQ=1427152165 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FCEE800000000001030307) 
Feb 20 09:20:41 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:20:41 np0005625203.localdomain sudo[207831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 09:20:41 np0005625203.localdomain sudo[207831]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:20:41 np0005625203.localdomain sudo[207891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-exnldsmmgdovxilbjcxmlrsiocwczpon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579240.5144305-3667-200900034760556/AnsiballZ_file.py
Feb 20 09:20:41 np0005625203.localdomain sudo[207891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:41 np0005625203.localdomain python3.9[207893]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:20:41 np0005625203.localdomain sudo[207891]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:41 np0005625203.localdomain sudo[207831]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:42 np0005625203.localdomain sudo[208032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-locmthmbsyargymbcncleiupvxdksjyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579241.8282914-3703-148429378560919/AnsiballZ_stat.py
Feb 20 09:20:42 np0005625203.localdomain sudo[208032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:42 np0005625203.localdomain python3.9[208034]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:20:42 np0005625203.localdomain sudo[208032]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:42 np0005625203.localdomain sudo[208089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hggxrhzfmrccoewkxrbsgokiggiwfohv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579241.8282914-3703-148429378560919/AnsiballZ_file.py
Feb 20 09:20:42 np0005625203.localdomain sudo[208089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:42 np0005625203.localdomain sudo[208091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:20:42 np0005625203.localdomain sudo[208091]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:20:42 np0005625203.localdomain sudo[208091]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:42 np0005625203.localdomain python3.9[208102]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:20:42 np0005625203.localdomain sudo[208089]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:44 np0005625203.localdomain sudo[208217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-arxxnhfgdmetxiqkxjnobzoyeqnydbpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579243.030026-3739-126578170522060/AnsiballZ_stat.py
Feb 20 09:20:44 np0005625203.localdomain sudo[208217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:44 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57851 DF PROTO=TCP SPT=32966 DPT=9105 SEQ=3511011336 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FCF9C00000000001030307) 
Feb 20 09:20:44 np0005625203.localdomain python3.9[208219]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:20:44 np0005625203.localdomain sudo[208217]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:44 np0005625203.localdomain sudo[208307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zhxjnhqgfvslcmpjgjpxwvjrtwdhtvpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579243.030026-3739-126578170522060/AnsiballZ_copy.py
Feb 20 09:20:44 np0005625203.localdomain sudo[208307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:44 np0005625203.localdomain python3.9[208309]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771579243.030026-3739-126578170522060/.source.nft follow=False _original_basename=ruleset.j2 checksum=e2e2635f27347d386f310e86d2b40c40289835bb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:20:44 np0005625203.localdomain sudo[208307]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:45 np0005625203.localdomain sudo[208417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vunjjtbpgioirxdksrqkwyfjjaibfswl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579245.142994-3784-108725464953997/AnsiballZ_file.py
Feb 20 09:20:45 np0005625203.localdomain sudo[208417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:45 np0005625203.localdomain python3.9[208419]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:20:45 np0005625203.localdomain sudo[208417]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:46 np0005625203.localdomain sudo[208527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dowytztywltiqmufdboquqbwwtmfddwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579246.5535505-3808-85678545350721/AnsiballZ_command.py
Feb 20 09:20:46 np0005625203.localdomain sudo[208527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:46 np0005625203.localdomain python3.9[208529]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:20:47 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14860 DF PROTO=TCP SPT=38220 DPT=9101 SEQ=928703589 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FD04C00000000001030307) 
Feb 20 09:20:47 np0005625203.localdomain sudo[208527]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:47 np0005625203.localdomain sudo[208640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dlathvfidftzafqnewthmppgnwccvmup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579247.365952-3832-31795505489258/AnsiballZ_blockinfile.py
Feb 20 09:20:47 np0005625203.localdomain sudo[208640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:47 np0005625203.localdomain python3.9[208642]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                                            include "/etc/nftables/edpm-chains.nft"
                                                            include "/etc/nftables/edpm-rules.nft"
                                                            include "/etc/nftables/edpm-jumps.nft"
                                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:20:48 np0005625203.localdomain sudo[208640]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:48 np0005625203.localdomain sshd[208656]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:20:48 np0005625203.localdomain sshd[208656]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:20:48 np0005625203.localdomain sudo[208752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vdapymitxnqktjvuxyugzffdpivpfkpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579248.3484952-3859-37919571286167/AnsiballZ_command.py
Feb 20 09:20:48 np0005625203.localdomain sudo[208752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:48 np0005625203.localdomain python3.9[208754]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:20:48 np0005625203.localdomain sudo[208752]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:49 np0005625203.localdomain sudo[208863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pptflglydwcuwcerpifohmknugiqagaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579249.1160717-3883-103870687188679/AnsiballZ_stat.py
Feb 20 09:20:49 np0005625203.localdomain sudo[208863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:49 np0005625203.localdomain python3.9[208865]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:20:49 np0005625203.localdomain sudo[208863]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:50 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54958 DF PROTO=TCP SPT=49328 DPT=9101 SEQ=505562078 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FD10800000000001030307) 
Feb 20 09:20:50 np0005625203.localdomain sudo[208975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bvwedscdrpkkjtsypxufhbwqxhbvicbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579249.8374124-3907-221962199967237/AnsiballZ_command.py
Feb 20 09:20:50 np0005625203.localdomain sudo[208975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:50 np0005625203.localdomain python3.9[208977]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:20:50 np0005625203.localdomain sudo[208975]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:50 np0005625203.localdomain sudo[209088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dqrgttebqiezehsodaavoiemwphykhpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579250.565393-3932-252896956223982/AnsiballZ_file.py
Feb 20 09:20:50 np0005625203.localdomain sudo[209088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:51 np0005625203.localdomain python3.9[209090]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:20:51 np0005625203.localdomain sudo[209088]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:51 np0005625203.localdomain sudo[209198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hbzdcvsygafedeehiqfqyptlyzehtqsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579251.260622-3955-87110778938224/AnsiballZ_stat.py
Feb 20 09:20:51 np0005625203.localdomain sudo[209198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:51 np0005625203.localdomain python3.9[209200]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:20:51 np0005625203.localdomain sudo[209198]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:52 np0005625203.localdomain sudo[209286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xqiglrehjsecauzasjlcvpphxexmlszj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579251.260622-3955-87110778938224/AnsiballZ_copy.py
Feb 20 09:20:52 np0005625203.localdomain sudo[209286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:52 np0005625203.localdomain python3.9[209288]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771579251.260622-3955-87110778938224/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:20:52 np0005625203.localdomain sudo[209286]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:52 np0005625203.localdomain sudo[209396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bodvhacxmcvjjozqvbsyllnsjroynbdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579252.4743204-4000-31382061021686/AnsiballZ_stat.py
Feb 20 09:20:52 np0005625203.localdomain sudo[209396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:52 np0005625203.localdomain python3.9[209398]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:20:52 np0005625203.localdomain sudo[209396]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:53 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30036 DF PROTO=TCP SPT=58580 DPT=9100 SEQ=2106536304 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FD1C800000000001030307) 
Feb 20 09:20:53 np0005625203.localdomain sudo[209484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pxvzgytdnbyqcvgismmvnuwomtkejtdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579252.4743204-4000-31382061021686/AnsiballZ_copy.py
Feb 20 09:20:53 np0005625203.localdomain sudo[209484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:53 np0005625203.localdomain python3.9[209486]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771579252.4743204-4000-31382061021686/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:20:53 np0005625203.localdomain sudo[209484]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:53 np0005625203.localdomain sudo[209594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fktzmsctybydppjjgvzfiliswhjmkcxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579253.6798034-4045-115017229548508/AnsiballZ_stat.py
Feb 20 09:20:53 np0005625203.localdomain sudo[209594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:54 np0005625203.localdomain python3.9[209596]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:20:54 np0005625203.localdomain sudo[209594]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:54 np0005625203.localdomain sudo[209682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vrcuvwwscgwmnscaxorctkeafxgrfkgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579253.6798034-4045-115017229548508/AnsiballZ_copy.py
Feb 20 09:20:54 np0005625203.localdomain sudo[209682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:54 np0005625203.localdomain python3.9[209684]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771579253.6798034-4045-115017229548508/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:20:54 np0005625203.localdomain sudo[209682]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:55 np0005625203.localdomain sudo[209792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fhmkafpzkcklujxjxpqwchimfibeblfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579255.1593933-4090-41401656115457/AnsiballZ_systemd.py
Feb 20 09:20:55 np0005625203.localdomain sudo[209792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:55 np0005625203.localdomain python3.9[209794]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:20:55 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:20:55 np0005625203.localdomain systemd-rc-local-generator[209816]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:20:55 np0005625203.localdomain systemd-sysv-generator[209823]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:20:55 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:55 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:55 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:55 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:55 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:20:55 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:55 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:55 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:55 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:56 np0005625203.localdomain systemd[1]: Reached target edpm_libvirt.target.
Feb 20 09:20:56 np0005625203.localdomain sudo[209792]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:56 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28164 DF PROTO=TCP SPT=39870 DPT=9100 SEQ=2263165006 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FD28C00000000001030307) 
Feb 20 09:20:56 np0005625203.localdomain sudo[209942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tiakvbtexgnpftfilcbvykgyujraqeyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579256.3460107-4115-25927699717543/AnsiballZ_systemd.py
Feb 20 09:20:56 np0005625203.localdomain sudo[209942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:20:57 np0005625203.localdomain python3.9[209944]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Feb 20 09:20:57 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:20:57 np0005625203.localdomain systemd-rc-local-generator[209972]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:20:57 np0005625203.localdomain systemd-sysv-generator[209976]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:20:57 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:57 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:57 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:57 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:57 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:20:57 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:57 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:57 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:57 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:57 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:20:57 np0005625203.localdomain systemd-sysv-generator[210012]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:20:57 np0005625203.localdomain systemd-rc-local-generator[210006]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:20:57 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:57 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:57 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:57 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:57 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:20:57 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:57 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:57 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:57 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:20:57 np0005625203.localdomain sudo[209942]: pam_unix(sudo:session): session closed for user root
Feb 20 09:20:58 np0005625203.localdomain sshd[161476]: pam_unix(sshd:session): session closed for user zuul
Feb 20 09:20:58 np0005625203.localdomain systemd[1]: session-52.scope: Deactivated successfully.
Feb 20 09:20:58 np0005625203.localdomain systemd[1]: session-52.scope: Consumed 3min 23.049s CPU time.
Feb 20 09:20:58 np0005625203.localdomain systemd-logind[759]: Session 52 logged out. Waiting for processes to exit.
Feb 20 09:20:58 np0005625203.localdomain systemd-logind[759]: Removed session 52.
Feb 20 09:20:58 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26157 DF PROTO=TCP SPT=51936 DPT=9882 SEQ=867600807 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FD338C0000000001030307) 
Feb 20 09:21:02 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26159 DF PROTO=TCP SPT=51936 DPT=9882 SEQ=867600807 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FD3F810000000001030307) 
Feb 20 09:21:03 np0005625203.localdomain sshd[210036]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:21:04 np0005625203.localdomain sshd[210036]: Accepted publickey for zuul from 192.168.122.30 port 59030 ssh2: RSA SHA256:uaxCfdsB9dLyL4znjsLCPDTT6O+6xZzjHRjczJFOA6k
Feb 20 09:21:04 np0005625203.localdomain systemd-logind[759]: New session 53 of user zuul.
Feb 20 09:21:04 np0005625203.localdomain systemd[1]: Started Session 53 of User zuul.
Feb 20 09:21:04 np0005625203.localdomain sshd[210036]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Feb 20 09:21:05 np0005625203.localdomain python3.9[210147]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 20 09:21:06 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=26160 DF PROTO=TCP SPT=51936 DPT=9882 SEQ=867600807 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FD4F400000000001030307) 
Feb 20 09:21:06 np0005625203.localdomain python3.9[210259]: ansible-ansible.builtin.service_facts Invoked
Feb 20 09:21:06 np0005625203.localdomain network[210276]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 20 09:21:06 np0005625203.localdomain network[210277]: 'network-scripts' will be removed from distribution in near future.
Feb 20 09:21:06 np0005625203.localdomain network[210278]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 20 09:21:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:21:07.632 161112 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:21:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:21:07.633 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:21:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:21:07.633 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:21:08 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1995 DF PROTO=TCP SPT=51302 DPT=9105 SEQ=206175585 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FD57000000000001030307) 
Feb 20 09:21:09 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:21:09 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:21:10 np0005625203.localdomain podman[210361]: 2026-02-20 09:21:10.026814632 +0000 UTC m=+0.101883793 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Feb 20 09:21:10 np0005625203.localdomain podman[210361]: 2026-02-20 09:21:10.065457943 +0000 UTC m=+0.140527094 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller)
Feb 20 09:21:10 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:21:11 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52755 DF PROTO=TCP SPT=56776 DPT=9105 SEQ=545263210 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FD62810000000001030307) 
Feb 20 09:21:11 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:21:11 np0005625203.localdomain podman[210410]: 2026-02-20 09:21:11.45615582 +0000 UTC m=+0.084946553 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:21:11 np0005625203.localdomain podman[210410]: 2026-02-20 09:21:11.491166787 +0000 UTC m=+0.119957540 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 20 09:21:11 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:21:13 np0005625203.localdomain sudo[210552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-shdpowvmeewgmrfhosuikglbxlvwevnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579272.9665012-97-214571452172864/AnsiballZ_setup.py
Feb 20 09:21:13 np0005625203.localdomain sudo[210552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:21:13 np0005625203.localdomain python3.9[210554]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 20 09:21:13 np0005625203.localdomain sudo[210552]: pam_unix(sudo:session): session closed for user root
Feb 20 09:21:14 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1997 DF PROTO=TCP SPT=51302 DPT=9105 SEQ=206175585 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FD6EC00000000001030307) 
Feb 20 09:21:14 np0005625203.localdomain sudo[210615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xoozwufqzfgugbjptedymudvdubbnlpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579272.9665012-97-214571452172864/AnsiballZ_dnf.py
Feb 20 09:21:14 np0005625203.localdomain sudo[210615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:21:14 np0005625203.localdomain python3.9[210617]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 20 09:21:17 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18767 DF PROTO=TCP SPT=51018 DPT=9101 SEQ=2725740534 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FD7A010000000001030307) 
Feb 20 09:21:20 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51577 DF PROTO=TCP SPT=48614 DPT=9100 SEQ=1897257399 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FD86400000000001030307) 
Feb 20 09:21:21 np0005625203.localdomain sudo[210615]: pam_unix(sudo:session): session closed for user root
Feb 20 09:21:22 np0005625203.localdomain sudo[210727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dcdgvjgkriygvdtoiarfmbhgotywdqqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579281.9718492-133-266622387577098/AnsiballZ_stat.py
Feb 20 09:21:22 np0005625203.localdomain sudo[210727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:21:22 np0005625203.localdomain python3.9[210729]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:21:22 np0005625203.localdomain sudo[210727]: pam_unix(sudo:session): session closed for user root
Feb 20 09:21:23 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18769 DF PROTO=TCP SPT=51018 DPT=9101 SEQ=2725740534 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FD91C10000000001030307) 
Feb 20 09:21:23 np0005625203.localdomain sudo[210839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yztzccwiazswzihgmepkteqmbksoiqas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579282.7878141-157-273372280885262/AnsiballZ_copy.py
Feb 20 09:21:23 np0005625203.localdomain sudo[210839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:21:23 np0005625203.localdomain python3.9[210841]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi mode=preserve remote_src=True src=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi/ backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:21:23 np0005625203.localdomain sudo[210839]: pam_unix(sudo:session): session closed for user root
Feb 20 09:21:24 np0005625203.localdomain sudo[210949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ffhqnymmiblukvkpjhpccbqvdtxjhoix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579283.6661656-181-154572079338921/AnsiballZ_command.py
Feb 20 09:21:24 np0005625203.localdomain sudo[210949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:21:24 np0005625203.localdomain python3.9[210951]: ansible-ansible.legacy.command Invoked with _raw_params=mv "/var/lib/config-data/puppet-generated/iscsid/etc/iscsi" "/var/lib/config-data/puppet-generated/iscsid/etc/iscsi.adopted" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:21:24 np0005625203.localdomain sudo[210949]: pam_unix(sudo:session): session closed for user root
Feb 20 09:21:24 np0005625203.localdomain sudo[211060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-thxmeerqcqnunpkekvyxpjcjgylqiqun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579284.4094074-205-17806076230996/AnsiballZ_command.py
Feb 20 09:21:24 np0005625203.localdomain sudo[211060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:21:24 np0005625203.localdomain python3.9[211062]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:21:24 np0005625203.localdomain sudo[211060]: pam_unix(sudo:session): session closed for user root
Feb 20 09:21:25 np0005625203.localdomain sudo[211171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uqsdgouwrixpnuzsmqctsynorrimupqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579285.010805-229-136914457970564/AnsiballZ_command.py
Feb 20 09:21:25 np0005625203.localdomain sudo[211171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:21:25 np0005625203.localdomain python3.9[211173]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -rF /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:21:25 np0005625203.localdomain sudo[211171]: pam_unix(sudo:session): session closed for user root
Feb 20 09:21:26 np0005625203.localdomain sudo[211282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wkrphqcstnvjzvkjzenaiqbbtbddoxgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579285.7602544-256-512704830977/AnsiballZ_stat.py
Feb 20 09:21:26 np0005625203.localdomain sudo[211282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:21:26 np0005625203.localdomain python3.9[211284]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:21:26 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51579 DF PROTO=TCP SPT=48614 DPT=9100 SEQ=1897257399 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FD9E010000000001030307) 
Feb 20 09:21:26 np0005625203.localdomain sudo[211282]: pam_unix(sudo:session): session closed for user root
Feb 20 09:21:26 np0005625203.localdomain sudo[211394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xvxsirvuptzwhstdpcvotcvqgmrhkavq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579286.6142378-289-100013265630436/AnsiballZ_lineinfile.py
Feb 20 09:21:26 np0005625203.localdomain sudo[211394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:21:27 np0005625203.localdomain python3.9[211396]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:21:27 np0005625203.localdomain sudo[211394]: pam_unix(sudo:session): session closed for user root
Feb 20 09:21:28 np0005625203.localdomain sudo[211504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xutjnqkrkzjoytpermqefqqizddjpool ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579287.46814-316-115393258940025/AnsiballZ_systemd_service.py
Feb 20 09:21:28 np0005625203.localdomain sudo[211504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:21:28 np0005625203.localdomain python3.9[211506]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:21:28 np0005625203.localdomain systemd[1]: Listening on Open-iSCSI iscsid Socket.
Feb 20 09:21:28 np0005625203.localdomain sudo[211504]: pam_unix(sudo:session): session closed for user root
Feb 20 09:21:28 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1007 DF PROTO=TCP SPT=50732 DPT=9882 SEQ=3188833898 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FDA8BC0000000001030307) 
Feb 20 09:21:29 np0005625203.localdomain sudo[211618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bniinzmhvvinvvfidtmvyfbaahqljyvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579289.6673753-340-127884992645943/AnsiballZ_systemd_service.py
Feb 20 09:21:29 np0005625203.localdomain sudo[211618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:21:30 np0005625203.localdomain python3.9[211620]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:21:30 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:21:30 np0005625203.localdomain systemd-sysv-generator[211652]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:21:30 np0005625203.localdomain systemd-rc-local-generator[211648]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:21:30 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:21:30 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 09:21:30 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:21:30 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:21:30 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:21:30 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 09:21:30 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:21:30 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:21:30 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:21:30 np0005625203.localdomain systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Feb 20 09:21:30 np0005625203.localdomain systemd[1]: Starting Open-iSCSI...
Feb 20 09:21:30 np0005625203.localdomain iscsid[211661]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 20 09:21:30 np0005625203.localdomain iscsid[211661]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Feb 20 09:21:30 np0005625203.localdomain iscsid[211661]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 20 09:21:30 np0005625203.localdomain iscsid[211661]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 20 09:21:30 np0005625203.localdomain iscsid[211661]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 20 09:21:30 np0005625203.localdomain iscsid[211661]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 20 09:21:30 np0005625203.localdomain iscsid[211661]: iscsid: can't open iscsid.ipc_auth_uid configuration file /etc/iscsi/iscsid.conf
Feb 20 09:21:30 np0005625203.localdomain systemd[1]: Started Open-iSCSI.
Feb 20 09:21:30 np0005625203.localdomain systemd[1]: Starting Logout of all iSCSI sessions on shutdown...
Feb 20 09:21:30 np0005625203.localdomain systemd[1]: Finished Logout of all iSCSI sessions on shutdown.
Feb 20 09:21:30 np0005625203.localdomain sudo[211618]: pam_unix(sudo:session): session closed for user root
Feb 20 09:21:31 np0005625203.localdomain sshd[211680]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:21:31 np0005625203.localdomain sshd[211680]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:21:32 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1009 DF PROTO=TCP SPT=50732 DPT=9882 SEQ=3188833898 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FDB4C00000000001030307) 
Feb 20 09:21:32 np0005625203.localdomain systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Feb 20 09:21:32 np0005625203.localdomain systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Feb 20 09:21:33 np0005625203.localdomain python3.9[211773]: ansible-ansible.builtin.service_facts Invoked
Feb 20 09:21:33 np0005625203.localdomain network[211799]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 20 09:21:33 np0005625203.localdomain network[211800]: 'network-scripts' will be removed from distribution in near future.
Feb 20 09:21:33 np0005625203.localdomain network[211802]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 20 09:21:33 np0005625203.localdomain systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@1.service.
Feb 20 09:21:33 np0005625203.localdomain sshd[211813]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:21:33 np0005625203.localdomain sshd[211813]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:21:34 np0005625203.localdomain setroubleshoot[211698]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi. For complete SELinux messages run: sealert -l 0ab38efe-d00d-4b6e-b2be-b646609de0cf
Feb 20 09:21:34 np0005625203.localdomain setroubleshoot[211698]: SELinux is preventing /usr/sbin/iscsid from search access on the directory iscsi.
                                                                 
                                                                 *****  Plugin catchall (100. confidence) suggests   **************************
                                                                 
                                                                 If you believe that iscsid should be allowed search access on the iscsi directory by default,
                                                                 then you should report this as a bug.
                                                                 You can generate a local policy module to allow this access.
                                                                 Do allow this access for now by executing:
                                                                 # ausearch -c 'iscsid' --raw | audit2allow -M my-iscsid
                                                                 # semodule -X 300 -i my-iscsid.pp
                                                                 
Feb 20 09:21:34 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:21:36 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1010 DF PROTO=TCP SPT=50732 DPT=9882 SEQ=3188833898 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FDC4800000000001030307) 
Feb 20 09:21:37 np0005625203.localdomain sshd[212030]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:21:37 np0005625203.localdomain sudo[212040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rrvmkjycgfidkrmzwjvcahaojdlfbipu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579297.565798-409-252016121836658/AnsiballZ_dnf.py
Feb 20 09:21:37 np0005625203.localdomain sudo[212040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:21:38 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51649 DF PROTO=TCP SPT=54368 DPT=9105 SEQ=1349084042 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FDCC400000000001030307) 
Feb 20 09:21:38 np0005625203.localdomain python3.9[212043]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 20 09:21:40 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:21:40 np0005625203.localdomain sshd[212030]: Invalid user titu from 101.126.88.203 port 50626
Feb 20 09:21:40 np0005625203.localdomain systemd[1]: tmp-crun.56f3px.mount: Deactivated successfully.
Feb 20 09:21:40 np0005625203.localdomain podman[212046]: 2026-02-20 09:21:40.758525508 +0000 UTC m=+0.076213264 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 20 09:21:40 np0005625203.localdomain podman[212046]: 2026-02-20 09:21:40.791951449 +0000 UTC m=+0.109639205 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 20 09:21:40 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:21:40 np0005625203.localdomain sshd[212072]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:21:41 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57854 DF PROTO=TCP SPT=32966 DPT=9105 SEQ=3511011336 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FDD8800000000001030307) 
Feb 20 09:21:41 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:21:41 np0005625203.localdomain podman[212077]: 2026-02-20 09:21:41.765085153 +0000 UTC m=+0.081394444 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 20 09:21:41 np0005625203.localdomain podman[212077]: 2026-02-20 09:21:41.79929661 +0000 UTC m=+0.115605891 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 20 09:21:41 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:21:41 np0005625203.localdomain sshd[212030]: Received disconnect from 101.126.88.203 port 50626:11: Bye Bye [preauth]
Feb 20 09:21:41 np0005625203.localdomain sshd[212030]: Disconnected from invalid user titu 101.126.88.203 port 50626 [preauth]
Feb 20 09:21:41 np0005625203.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 20 09:21:42 np0005625203.localdomain systemd[1]: Starting man-db-cache-update.service...
Feb 20 09:21:42 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:21:42 np0005625203.localdomain systemd-rc-local-generator[212131]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:21:42 np0005625203.localdomain systemd-sysv-generator[212135]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:21:42 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:21:42 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 09:21:42 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:21:42 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:21:42 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:21:42 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 09:21:42 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:21:42 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:21:42 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:21:42 np0005625203.localdomain systemd[1]: Queuing reload/restart jobs for marked units…
Feb 20 09:21:42 np0005625203.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 20 09:21:42 np0005625203.localdomain systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 20 09:21:42 np0005625203.localdomain systemd[1]: Finished man-db-cache-update.service.
Feb 20 09:21:42 np0005625203.localdomain systemd[1]: run-rbea1b20533344f70804edfddb932c86b.service: Deactivated successfully.
Feb 20 09:21:42 np0005625203.localdomain systemd[1]: Starting man-db-cache-update.service...
Feb 20 09:21:42 np0005625203.localdomain sshd[212072]: Received disconnect from 118.99.80.29 port 16928:11: Bye Bye [preauth]
Feb 20 09:21:42 np0005625203.localdomain sshd[212072]: Disconnected from authenticating user root 118.99.80.29 port 16928 [preauth]
Feb 20 09:21:42 np0005625203.localdomain sudo[212273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:21:42 np0005625203.localdomain sudo[212273]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:21:42 np0005625203.localdomain sudo[212273]: pam_unix(sudo:session): session closed for user root
Feb 20 09:21:42 np0005625203.localdomain systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 20 09:21:42 np0005625203.localdomain systemd[1]: Finished man-db-cache-update.service.
Feb 20 09:21:42 np0005625203.localdomain systemd[1]: run-rc86a8ce4bcee4474a6d192c901d915be.service: Deactivated successfully.
Feb 20 09:21:42 np0005625203.localdomain sudo[212294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 09:21:42 np0005625203.localdomain sudo[212294]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:21:43 np0005625203.localdomain sudo[212040]: pam_unix(sudo:session): session closed for user root
Feb 20 09:21:43 np0005625203.localdomain sudo[212294]: pam_unix(sudo:session): session closed for user root
Feb 20 09:21:44 np0005625203.localdomain systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@1.service: Deactivated successfully.
Feb 20 09:21:44 np0005625203.localdomain systemd[1]: setroubleshootd.service: Deactivated successfully.
Feb 20 09:21:44 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51651 DF PROTO=TCP SPT=54368 DPT=9105 SEQ=1349084042 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FDE4000000000001030307) 
Feb 20 09:21:44 np0005625203.localdomain sudo[212451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dmpcwobttucuhenrgsfgmgrhpwtmyyhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579303.9754806-436-112615184180143/AnsiballZ_file.py
Feb 20 09:21:44 np0005625203.localdomain sudo[212451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:21:44 np0005625203.localdomain python3.9[212453]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Feb 20 09:21:44 np0005625203.localdomain sudo[212451]: pam_unix(sudo:session): session closed for user root
Feb 20 09:21:45 np0005625203.localdomain sudo[212561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jbthkwujitdktaxesbhvecgxcfgzbmco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579304.8394036-460-253689564329468/AnsiballZ_modprobe.py
Feb 20 09:21:45 np0005625203.localdomain sudo[212561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:21:45 np0005625203.localdomain python3.9[212563]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Feb 20 09:21:45 np0005625203.localdomain sudo[212561]: pam_unix(sudo:session): session closed for user root
Feb 20 09:21:46 np0005625203.localdomain sudo[212675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xlygtysfrftcgiicetikjwaffsahwlvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579305.9164207-484-132108921009496/AnsiballZ_stat.py
Feb 20 09:21:46 np0005625203.localdomain sudo[212675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:21:46 np0005625203.localdomain python3.9[212677]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:21:46 np0005625203.localdomain sudo[212675]: pam_unix(sudo:session): session closed for user root
Feb 20 09:21:46 np0005625203.localdomain sudo[212689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:21:46 np0005625203.localdomain sudo[212689]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:21:46 np0005625203.localdomain sudo[212689]: pam_unix(sudo:session): session closed for user root
Feb 20 09:21:46 np0005625203.localdomain sudo[212781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mewwjcqcfjusnnmseibzsllxbkepvrvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579305.9164207-484-132108921009496/AnsiballZ_copy.py
Feb 20 09:21:46 np0005625203.localdomain sudo[212781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:21:46 np0005625203.localdomain python3.9[212783]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771579305.9164207-484-132108921009496/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:21:46 np0005625203.localdomain sudo[212781]: pam_unix(sudo:session): session closed for user root
Feb 20 09:21:47 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54120 DF PROTO=TCP SPT=60108 DPT=9101 SEQ=538779568 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FDEF400000000001030307) 
Feb 20 09:21:47 np0005625203.localdomain sudo[212891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wbosofzwssivyzzrjxwtvuhtjkoqqxww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579307.2113645-532-38113823081023/AnsiballZ_lineinfile.py
Feb 20 09:21:47 np0005625203.localdomain sudo[212891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:21:47 np0005625203.localdomain python3.9[212893]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:21:47 np0005625203.localdomain sudo[212891]: pam_unix(sudo:session): session closed for user root
Feb 20 09:21:48 np0005625203.localdomain sudo[213001]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cozuglxftuxqauqvlmgsmybbyprdtmkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579308.3103373-556-77426340939241/AnsiballZ_systemd.py
Feb 20 09:21:48 np0005625203.localdomain sudo[213001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:21:49 np0005625203.localdomain python3.9[213003]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 20 09:21:49 np0005625203.localdomain systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 20 09:21:49 np0005625203.localdomain systemd[1]: Stopped Load Kernel Modules.
Feb 20 09:21:49 np0005625203.localdomain systemd[1]: Stopping Load Kernel Modules...
Feb 20 09:21:49 np0005625203.localdomain systemd[1]: Starting Load Kernel Modules...
Feb 20 09:21:49 np0005625203.localdomain systemd-modules-load[213007]: Module 'msr' is built in
Feb 20 09:21:49 np0005625203.localdomain systemd[1]: Finished Load Kernel Modules.
Feb 20 09:21:49 np0005625203.localdomain sudo[213001]: pam_unix(sudo:session): session closed for user root
Feb 20 09:21:50 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49285 DF PROTO=TCP SPT=52600 DPT=9100 SEQ=766774194 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FDFB400000000001030307) 
Feb 20 09:21:50 np0005625203.localdomain sudo[213115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yrbempjjxmnategjwqmihsitmzxruhcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579310.528872-580-225053105625685/AnsiballZ_command.py
Feb 20 09:21:50 np0005625203.localdomain sudo[213115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:21:50 np0005625203.localdomain python3.9[213117]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:21:51 np0005625203.localdomain sudo[213115]: pam_unix(sudo:session): session closed for user root
Feb 20 09:21:51 np0005625203.localdomain sudo[213226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qmbqodpghtyvosokbmoehkyrusxgmimt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579311.7114875-610-100974872370744/AnsiballZ_stat.py
Feb 20 09:21:51 np0005625203.localdomain sudo[213226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:21:52 np0005625203.localdomain python3.9[213228]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:21:52 np0005625203.localdomain sudo[213226]: pam_unix(sudo:session): session closed for user root
Feb 20 09:21:52 np0005625203.localdomain sudo[213336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dyqxrmycjckxlzhoebigbmfqkeqbvhmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579312.449765-637-164641461236332/AnsiballZ_stat.py
Feb 20 09:21:52 np0005625203.localdomain sudo[213336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:21:52 np0005625203.localdomain python3.9[213338]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:21:52 np0005625203.localdomain sudo[213336]: pam_unix(sudo:session): session closed for user root
Feb 20 09:21:52 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=28167 DF PROTO=TCP SPT=39870 DPT=9100 SEQ=2263165006 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FE06810000000001030307) 
Feb 20 09:21:53 np0005625203.localdomain sudo[213424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vpfzyzkifkymeiznwxeforjvcmvmhgmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579312.449765-637-164641461236332/AnsiballZ_copy.py
Feb 20 09:21:53 np0005625203.localdomain sudo[213424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:21:53 np0005625203.localdomain python3.9[213426]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771579312.449765-637-164641461236332/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:21:53 np0005625203.localdomain sudo[213424]: pam_unix(sudo:session): session closed for user root
Feb 20 09:21:54 np0005625203.localdomain sudo[213534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-whzbcmjmeiprgajcnrwyedhcqhovehca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579313.7266674-682-272970966095163/AnsiballZ_command.py
Feb 20 09:21:54 np0005625203.localdomain sudo[213534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:21:54 np0005625203.localdomain python3.9[213536]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:21:54 np0005625203.localdomain sudo[213534]: pam_unix(sudo:session): session closed for user root
Feb 20 09:21:54 np0005625203.localdomain sudo[213645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-exegdusvbnmjtvftedxvyzseogrxufcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579314.414369-706-181358482232940/AnsiballZ_lineinfile.py
Feb 20 09:21:54 np0005625203.localdomain sudo[213645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:21:54 np0005625203.localdomain python3.9[213647]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:21:54 np0005625203.localdomain sudo[213645]: pam_unix(sudo:session): session closed for user root
Feb 20 09:21:55 np0005625203.localdomain sudo[213755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ethamdamqpodfvbskqvgsmjtifvsqsue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579315.144111-730-2319589440592/AnsiballZ_replace.py
Feb 20 09:21:55 np0005625203.localdomain sudo[213755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:21:55 np0005625203.localdomain python3.9[213757]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:21:55 np0005625203.localdomain sudo[213755]: pam_unix(sudo:session): session closed for user root
Feb 20 09:21:55 np0005625203.localdomain systemd-journald[48285]: Field hash table of /run/log/journal/01f46965e72fd8a157841feaa66c8d52/system.journal has a fill level at 75.4 (251 of 333 items), suggesting rotation.
Feb 20 09:21:55 np0005625203.localdomain systemd-journald[48285]: /run/log/journal/01f46965e72fd8a157841feaa66c8d52/system.journal: Journal header limits reached or header out-of-date, rotating.
Feb 20 09:21:55 np0005625203.localdomain rsyslogd[758]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Feb 20 09:21:55 np0005625203.localdomain rsyslogd[758]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Feb 20 09:21:56 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49287 DF PROTO=TCP SPT=52600 DPT=9100 SEQ=766774194 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FE13000000000001030307) 
Feb 20 09:21:56 np0005625203.localdomain sudo[213866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-efiifbrelikhkekcmcwidrvbgvxnvaqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579315.9472437-754-241643631066082/AnsiballZ_replace.py
Feb 20 09:21:56 np0005625203.localdomain sudo[213866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:21:56 np0005625203.localdomain python3.9[213868]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:21:56 np0005625203.localdomain sudo[213866]: pam_unix(sudo:session): session closed for user root
Feb 20 09:21:57 np0005625203.localdomain sudo[213976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kxbjqokzcpscpaqtfsvzjhaqgeorronx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579316.7223053-781-170577031336665/AnsiballZ_lineinfile.py
Feb 20 09:21:57 np0005625203.localdomain sudo[213976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:21:57 np0005625203.localdomain python3.9[213978]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:21:57 np0005625203.localdomain sudo[213976]: pam_unix(sudo:session): session closed for user root
Feb 20 09:21:57 np0005625203.localdomain sudo[214086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rhgpvswepiduhqbkwrweanhmvgmvlhsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579317.3246253-781-183548663895966/AnsiballZ_lineinfile.py
Feb 20 09:21:57 np0005625203.localdomain sudo[214086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:21:57 np0005625203.localdomain python3.9[214088]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:21:57 np0005625203.localdomain sudo[214086]: pam_unix(sudo:session): session closed for user root
Feb 20 09:21:58 np0005625203.localdomain sudo[214196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xsomlosykoiuyjtplsgisgutnuiqasbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579318.233276-781-11362433078791/AnsiballZ_lineinfile.py
Feb 20 09:21:58 np0005625203.localdomain sudo[214196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:21:58 np0005625203.localdomain python3.9[214198]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:21:58 np0005625203.localdomain sudo[214196]: pam_unix(sudo:session): session closed for user root
Feb 20 09:21:59 np0005625203.localdomain sudo[214306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pzmmeonlwgdfcwvnyanxzpbglrapsvnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579318.8716567-781-261029055087807/AnsiballZ_lineinfile.py
Feb 20 09:21:59 np0005625203.localdomain sudo[214306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:21:59 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34465 DF PROTO=TCP SPT=56996 DPT=9102 SEQ=1784729263 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FE1F250000000001030307) 
Feb 20 09:21:59 np0005625203.localdomain python3.9[214308]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:21:59 np0005625203.localdomain sudo[214306]: pam_unix(sudo:session): session closed for user root
Feb 20 09:22:00 np0005625203.localdomain sudo[214416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zfoqfmavifbzioiprjfktkogpuabocwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579320.287202-868-222927447967393/AnsiballZ_stat.py
Feb 20 09:22:00 np0005625203.localdomain sudo[214416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:22:00 np0005625203.localdomain python3.9[214418]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:22:00 np0005625203.localdomain sudo[214416]: pam_unix(sudo:session): session closed for user root
Feb 20 09:22:01 np0005625203.localdomain sudo[214528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qciyugpjavzyffxearcqybqvnswislnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579320.9521163-892-122448367299301/AnsiballZ_command.py
Feb 20 09:22:01 np0005625203.localdomain sudo[214528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:22:01 np0005625203.localdomain python3.9[214530]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:22:01 np0005625203.localdomain sudo[214528]: pam_unix(sudo:session): session closed for user root
Feb 20 09:22:02 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3742 DF PROTO=TCP SPT=46438 DPT=9882 SEQ=2009212028 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FE2A000000000001030307) 
Feb 20 09:22:02 np0005625203.localdomain sudo[214639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fiuvmkzhifcqrtdjulqykwbsqeuxijhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579321.8209336-919-273845950711115/AnsiballZ_systemd_service.py
Feb 20 09:22:02 np0005625203.localdomain sudo[214639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:22:02 np0005625203.localdomain python3.9[214641]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:22:02 np0005625203.localdomain systemd[1]: Listening on multipathd control socket.
Feb 20 09:22:02 np0005625203.localdomain sudo[214639]: pam_unix(sudo:session): session closed for user root
Feb 20 09:22:04 np0005625203.localdomain sudo[214753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cpyeytmyomjfnyehrfcvfftvejgqsrjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579323.714688-943-27838381809201/AnsiballZ_systemd_service.py
Feb 20 09:22:04 np0005625203.localdomain sudo[214753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:22:04 np0005625203.localdomain python3.9[214755]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:22:05 np0005625203.localdomain systemd[1]: Starting Wait for udev To Complete Device Initialization...
Feb 20 09:22:05 np0005625203.localdomain udevadm[214760]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Feb 20 09:22:05 np0005625203.localdomain systemd[1]: Finished Wait for udev To Complete Device Initialization.
Feb 20 09:22:05 np0005625203.localdomain systemd[1]: Starting Device-Mapper Multipath Device Controller...
Feb 20 09:22:05 np0005625203.localdomain multipathd[214763]: --------start up--------
Feb 20 09:22:05 np0005625203.localdomain multipathd[214763]: read /etc/multipath.conf
Feb 20 09:22:05 np0005625203.localdomain multipathd[214763]: path checkers start up
Feb 20 09:22:05 np0005625203.localdomain systemd[1]: Started Device-Mapper Multipath Device Controller.
Feb 20 09:22:05 np0005625203.localdomain sudo[214753]: pam_unix(sudo:session): session closed for user root
Feb 20 09:22:06 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3743 DF PROTO=TCP SPT=46438 DPT=9882 SEQ=2009212028 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FE39C00000000001030307) 
Feb 20 09:22:06 np0005625203.localdomain sudo[214879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jrsfcdgxfsposlmghjspuheiuxxdmktb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579326.3600068-979-31969945159241/AnsiballZ_file.py
Feb 20 09:22:06 np0005625203.localdomain sudo[214879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:22:06 np0005625203.localdomain python3.9[214881]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Feb 20 09:22:06 np0005625203.localdomain sudo[214879]: pam_unix(sudo:session): session closed for user root
Feb 20 09:22:07 np0005625203.localdomain sudo[214989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bqwnjskcmimjfwozqxeqyuefusenycxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579327.065453-1003-3939771029989/AnsiballZ_modprobe.py
Feb 20 09:22:07 np0005625203.localdomain sudo[214989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:22:07 np0005625203.localdomain python3.9[214991]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Feb 20 09:22:07 np0005625203.localdomain sudo[214989]: pam_unix(sudo:session): session closed for user root
Feb 20 09:22:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:22:07.633 161112 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:22:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:22:07.634 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:22:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:22:07.634 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:22:08 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52766 DF PROTO=TCP SPT=41524 DPT=9105 SEQ=2360808995 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FE41810000000001030307) 
Feb 20 09:22:08 np0005625203.localdomain sudo[215106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kfihqbggvzyueefmsyduetbhcxpamquj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579327.8514576-1027-7447220501662/AnsiballZ_stat.py
Feb 20 09:22:08 np0005625203.localdomain sudo[215106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:22:08 np0005625203.localdomain python3.9[215108]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:22:08 np0005625203.localdomain sudo[215106]: pam_unix(sudo:session): session closed for user root
Feb 20 09:22:08 np0005625203.localdomain sudo[215194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kuwoitkhmeygtnbdhkoohvxhjclpodwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579327.8514576-1027-7447220501662/AnsiballZ_copy.py
Feb 20 09:22:08 np0005625203.localdomain sudo[215194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:22:08 np0005625203.localdomain python3.9[215196]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771579327.8514576-1027-7447220501662/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:22:08 np0005625203.localdomain sudo[215194]: pam_unix(sudo:session): session closed for user root
Feb 20 09:22:09 np0005625203.localdomain sudo[215304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ieszabxnbyqrxvkrbhhnttluladxbelg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579329.7164073-1075-31996914653504/AnsiballZ_lineinfile.py
Feb 20 09:22:09 np0005625203.localdomain sudo[215304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:22:10 np0005625203.localdomain python3.9[215306]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:22:10 np0005625203.localdomain sudo[215304]: pam_unix(sudo:session): session closed for user root
Feb 20 09:22:10 np0005625203.localdomain sudo[215414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fxskuryvtuckpjlsdoyiqagoohpcowgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579330.3882942-1099-27813830056197/AnsiballZ_systemd.py
Feb 20 09:22:10 np0005625203.localdomain sudo[215414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:22:10 np0005625203.localdomain python3.9[215416]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 20 09:22:11 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:22:11 np0005625203.localdomain systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 20 09:22:11 np0005625203.localdomain systemd[1]: Stopped Load Kernel Modules.
Feb 20 09:22:11 np0005625203.localdomain systemd[1]: Stopping Load Kernel Modules...
Feb 20 09:22:11 np0005625203.localdomain systemd[1]: Starting Load Kernel Modules...
Feb 20 09:22:11 np0005625203.localdomain systemd-modules-load[215431]: Module 'msr' is built in
Feb 20 09:22:11 np0005625203.localdomain systemd[1]: Finished Load Kernel Modules.
Feb 20 09:22:11 np0005625203.localdomain sudo[215414]: pam_unix(sudo:session): session closed for user root
Feb 20 09:22:11 np0005625203.localdomain podman[215418]: 2026-02-20 09:22:11.107054123 +0000 UTC m=+0.093965548 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 20 09:22:11 np0005625203.localdomain podman[215418]: 2026-02-20 09:22:11.183436061 +0000 UTC m=+0.170347436 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 20 09:22:11 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:22:12 np0005625203.localdomain sudo[215551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ylpzxcyadzgxxobdfkdynnsiviayquya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579331.8315096-1123-263123784906383/AnsiballZ_dnf.py
Feb 20 09:22:12 np0005625203.localdomain sudo[215551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:22:12 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:22:12 np0005625203.localdomain systemd[1]: tmp-crun.A3dqr9.mount: Deactivated successfully.
Feb 20 09:22:12 np0005625203.localdomain podman[215554]: 2026-02-20 09:22:12.201333139 +0000 UTC m=+0.087738973 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Feb 20 09:22:12 np0005625203.localdomain podman[215554]: 2026-02-20 09:22:12.234246434 +0000 UTC m=+0.120652298 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent)
Feb 20 09:22:12 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:22:12 np0005625203.localdomain python3.9[215553]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 20 09:22:12 np0005625203.localdomain systemd[1]: virtnodedevd.service: Deactivated successfully.
Feb 20 09:22:14 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52768 DF PROTO=TCP SPT=41524 DPT=9105 SEQ=2360808995 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FE59410000000001030307) 
Feb 20 09:22:14 np0005625203.localdomain systemd[1]: virtproxyd.service: Deactivated successfully.
Feb 20 09:22:14 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3744 DF PROTO=TCP SPT=46438 DPT=9882 SEQ=2009212028 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FE5A810000000001030307) 
Feb 20 09:22:15 np0005625203.localdomain sshd[215577]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:22:15 np0005625203.localdomain sshd[215577]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:22:16 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:22:16 np0005625203.localdomain systemd-rc-local-generator[215606]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:22:16 np0005625203.localdomain systemd-sysv-generator[215611]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:22:16 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:22:16 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 09:22:16 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:22:16 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:22:16 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:22:16 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 09:22:16 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:22:16 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:22:16 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:22:16 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:22:16 np0005625203.localdomain systemd-rc-local-generator[215642]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:22:16 np0005625203.localdomain systemd-sysv-generator[215647]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:22:16 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:22:16 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 09:22:16 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:22:16 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:22:16 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:22:16 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 09:22:16 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:22:16 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:22:16 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:22:16 np0005625203.localdomain systemd-logind[759]: Watching system buttons on /dev/input/event0 (Power Button)
Feb 20 09:22:16 np0005625203.localdomain systemd-logind[759]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Feb 20 09:22:16 np0005625203.localdomain lvm[215693]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 20 09:22:16 np0005625203.localdomain lvm[215693]: VG ceph_vg1 finished
Feb 20 09:22:16 np0005625203.localdomain lvm[215694]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 20 09:22:16 np0005625203.localdomain lvm[215694]: VG ceph_vg0 finished
Feb 20 09:22:16 np0005625203.localdomain systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Feb 20 09:22:16 np0005625203.localdomain systemd[1]: Starting man-db-cache-update.service...
Feb 20 09:22:16 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41505 DF PROTO=TCP SPT=39344 DPT=9101 SEQ=2072273638 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FE64400000000001030307) 
Feb 20 09:22:17 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:22:17 np0005625203.localdomain systemd-rc-local-generator[215745]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:22:17 np0005625203.localdomain systemd-sysv-generator[215750]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:22:17 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:22:17 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 09:22:17 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:22:17 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:22:17 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:22:17 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 09:22:17 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:22:17 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:22:17 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:22:17 np0005625203.localdomain systemd[1]: Queuing reload/restart jobs for marked units…
Feb 20 09:22:17 np0005625203.localdomain sshd[216070]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:22:18 np0005625203.localdomain sshd[216070]: Received disconnect from 5.253.59.68 port 42638:11: Bye Bye [preauth]
Feb 20 09:22:18 np0005625203.localdomain sshd[216070]: Disconnected from authenticating user root 5.253.59.68 port 42638 [preauth]
Feb 20 09:22:18 np0005625203.localdomain systemd[1]: man-db-cache-update.service: Deactivated successfully.
Feb 20 09:22:18 np0005625203.localdomain systemd[1]: Finished man-db-cache-update.service.
Feb 20 09:22:18 np0005625203.localdomain systemd[1]: man-db-cache-update.service: Consumed 1.299s CPU time.
Feb 20 09:22:18 np0005625203.localdomain systemd[1]: run-r96ddcbfdb5844d3980a3805fd4ed9412.service: Deactivated successfully.
Feb 20 09:22:18 np0005625203.localdomain sudo[215551]: pam_unix(sudo:session): session closed for user root
Feb 20 09:22:18 np0005625203.localdomain sudo[217005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-whfrxpamloppomaldyojzunxopylgwld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579338.62943-1147-259398521768184/AnsiballZ_systemd_service.py
Feb 20 09:22:18 np0005625203.localdomain sudo[217005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:22:19 np0005625203.localdomain python3.9[217007]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 20 09:22:19 np0005625203.localdomain systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Feb 20 09:22:19 np0005625203.localdomain multipathd[214763]: exit (signal)
Feb 20 09:22:19 np0005625203.localdomain multipathd[214763]: --------shut down-------
Feb 20 09:22:19 np0005625203.localdomain systemd[1]: multipathd.service: Deactivated successfully.
Feb 20 09:22:19 np0005625203.localdomain systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Feb 20 09:22:19 np0005625203.localdomain systemd[1]: Starting Device-Mapper Multipath Device Controller...
Feb 20 09:22:19 np0005625203.localdomain multipathd[217013]: --------start up--------
Feb 20 09:22:19 np0005625203.localdomain multipathd[217013]: read /etc/multipath.conf
Feb 20 09:22:19 np0005625203.localdomain multipathd[217013]: path checkers start up
Feb 20 09:22:19 np0005625203.localdomain systemd[1]: Started Device-Mapper Multipath Device Controller.
Feb 20 09:22:19 np0005625203.localdomain sudo[217005]: pam_unix(sudo:session): session closed for user root
Feb 20 09:22:20 np0005625203.localdomain python3.9[217129]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 20 09:22:20 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18772 DF PROTO=TCP SPT=51018 DPT=9101 SEQ=2725740534 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FE70800000000001030307) 
Feb 20 09:22:21 np0005625203.localdomain sudo[217241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mpcujboqkbpptzlvwpfzuxyqtiwaemcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579341.063357-1199-211331743909340/AnsiballZ_file.py
Feb 20 09:22:21 np0005625203.localdomain sudo[217241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:22:21 np0005625203.localdomain python3.9[217243]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:22:21 np0005625203.localdomain sudo[217241]: pam_unix(sudo:session): session closed for user root
Feb 20 09:22:23 np0005625203.localdomain sudo[217351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-voyzcqzmsdipmikfugbazddorygwkets ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579342.040615-1232-86823496660077/AnsiballZ_systemd_service.py
Feb 20 09:22:23 np0005625203.localdomain sudo[217351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:22:23 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41507 DF PROTO=TCP SPT=39344 DPT=9101 SEQ=2072273638 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FE7C010000000001030307) 
Feb 20 09:22:23 np0005625203.localdomain python3.9[217353]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 20 09:22:23 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:22:23 np0005625203.localdomain systemd-rc-local-generator[217375]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:22:23 np0005625203.localdomain systemd-sysv-generator[217381]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:22:23 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:22:23 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 09:22:23 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:22:23 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:22:23 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:22:23 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 09:22:23 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:22:23 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:22:23 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:22:23 np0005625203.localdomain sudo[217351]: pam_unix(sudo:session): session closed for user root
Feb 20 09:22:24 np0005625203.localdomain python3.9[217497]: ansible-ansible.builtin.service_facts Invoked
Feb 20 09:22:24 np0005625203.localdomain network[217514]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 20 09:22:24 np0005625203.localdomain network[217515]: 'network-scripts' will be removed from distribution in near future.
Feb 20 09:22:24 np0005625203.localdomain network[217516]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 20 09:22:25 np0005625203.localdomain sshd[217550]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:22:25 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:22:25 np0005625203.localdomain systemd[1]: virtsecretd.service: Deactivated successfully.
Feb 20 09:22:25 np0005625203.localdomain systemd[1]: virtqemud.service: Deactivated successfully.
Feb 20 09:22:26 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24634 DF PROTO=TCP SPT=49976 DPT=9100 SEQ=2172774708 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FE88400000000001030307) 
Feb 20 09:22:26 np0005625203.localdomain sshd[217550]: Invalid user ts1 from 34.131.211.42 port 40540
Feb 20 09:22:27 np0005625203.localdomain sshd[217550]: Received disconnect from 34.131.211.42 port 40540:11: Bye Bye [preauth]
Feb 20 09:22:27 np0005625203.localdomain sshd[217550]: Disconnected from invalid user ts1 34.131.211.42 port 40540 [preauth]
Feb 20 09:22:28 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37367 DF PROTO=TCP SPT=59764 DPT=9882 SEQ=503338527 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FE931D0000000001030307) 
Feb 20 09:22:29 np0005625203.localdomain sudo[217751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-blcijhglxupobhsdhastgbhtztcpqyfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579349.628348-1289-108248125808407/AnsiballZ_systemd_service.py
Feb 20 09:22:29 np0005625203.localdomain sudo[217751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:22:30 np0005625203.localdomain python3.9[217753]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:22:30 np0005625203.localdomain sudo[217751]: pam_unix(sudo:session): session closed for user root
Feb 20 09:22:30 np0005625203.localdomain sudo[217862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bjhwxmitnslcybgecnmeilfgzhcgpcxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579350.337313-1289-200908710412739/AnsiballZ_systemd_service.py
Feb 20 09:22:30 np0005625203.localdomain sudo[217862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:22:30 np0005625203.localdomain python3.9[217864]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:22:30 np0005625203.localdomain sudo[217862]: pam_unix(sudo:session): session closed for user root
Feb 20 09:22:31 np0005625203.localdomain sudo[217973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gmpghxnfxzhmoqcfzvubfgmypjwlwtjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579351.051206-1289-103411253764563/AnsiballZ_systemd_service.py
Feb 20 09:22:31 np0005625203.localdomain sudo[217973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:22:31 np0005625203.localdomain python3.9[217975]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:22:31 np0005625203.localdomain sudo[217973]: pam_unix(sudo:session): session closed for user root
Feb 20 09:22:31 np0005625203.localdomain sudo[218084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gptdzktftrkjvfumvkxcokleayuijmwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579351.7420692-1289-39635852590508/AnsiballZ_systemd_service.py
Feb 20 09:22:31 np0005625203.localdomain sudo[218084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:22:32 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37369 DF PROTO=TCP SPT=59764 DPT=9882 SEQ=503338527 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FE9F400000000001030307) 
Feb 20 09:22:32 np0005625203.localdomain python3.9[218086]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:22:32 np0005625203.localdomain sudo[218084]: pam_unix(sudo:session): session closed for user root
Feb 20 09:22:33 np0005625203.localdomain sudo[218195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cfudagojgzfsgdfspgbspgraaysvvqgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579352.8992383-1289-223535464139209/AnsiballZ_systemd_service.py
Feb 20 09:22:33 np0005625203.localdomain sudo[218195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:22:33 np0005625203.localdomain python3.9[218197]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:22:34 np0005625203.localdomain sudo[218195]: pam_unix(sudo:session): session closed for user root
Feb 20 09:22:35 np0005625203.localdomain sudo[218306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uipiluwjwaotfphfjdlwvufgfiozrggj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579354.7725844-1289-97339600682706/AnsiballZ_systemd_service.py
Feb 20 09:22:35 np0005625203.localdomain sudo[218306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:22:35 np0005625203.localdomain python3.9[218308]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:22:35 np0005625203.localdomain sudo[218306]: pam_unix(sudo:session): session closed for user root
Feb 20 09:22:35 np0005625203.localdomain sudo[218417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rdpvaxdljhkozlccsijqogzjuogzuaky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579355.4806335-1289-229335292151610/AnsiballZ_systemd_service.py
Feb 20 09:22:35 np0005625203.localdomain sudo[218417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:22:36 np0005625203.localdomain python3.9[218419]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:22:36 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37370 DF PROTO=TCP SPT=59764 DPT=9882 SEQ=503338527 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FEAF010000000001030307) 
Feb 20 09:22:37 np0005625203.localdomain sudo[218417]: pam_unix(sudo:session): session closed for user root
Feb 20 09:22:37 np0005625203.localdomain sudo[218528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wvpyshqogkdxhnhbmuoqlnmkiifbblte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579357.2117717-1289-53737537967203/AnsiballZ_systemd_service.py
Feb 20 09:22:37 np0005625203.localdomain sudo[218528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:22:37 np0005625203.localdomain python3.9[218530]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:22:37 np0005625203.localdomain sudo[218528]: pam_unix(sudo:session): session closed for user root
Feb 20 09:22:38 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1031 DF PROTO=TCP SPT=60198 DPT=9105 SEQ=922119022 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FEB6C00000000001030307) 
Feb 20 09:22:40 np0005625203.localdomain sudo[218639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vfcdjlzzhtgkwdezuolgenkdmodjgzmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579360.1802282-1466-237734933807029/AnsiballZ_file.py
Feb 20 09:22:40 np0005625203.localdomain sudo[218639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:22:40 np0005625203.localdomain python3.9[218641]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:22:40 np0005625203.localdomain sudo[218639]: pam_unix(sudo:session): session closed for user root
Feb 20 09:22:41 np0005625203.localdomain sudo[218749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-apchkhlgywafuwyumpwdvddexqhkxpwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579360.80637-1466-206497543309293/AnsiballZ_file.py
Feb 20 09:22:41 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=51654 DF PROTO=TCP SPT=54368 DPT=9105 SEQ=1349084042 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FEC2800000000001030307) 
Feb 20 09:22:41 np0005625203.localdomain sudo[218749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:22:41 np0005625203.localdomain python3.9[218751]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:22:41 np0005625203.localdomain sudo[218749]: pam_unix(sudo:session): session closed for user root
Feb 20 09:22:41 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:22:41 np0005625203.localdomain sudo[218865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qrfrwohrkuxqrdxznikxibcitegyvsho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579361.4752748-1466-147781347931130/AnsiballZ_file.py
Feb 20 09:22:41 np0005625203.localdomain sudo[218865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:22:41 np0005625203.localdomain podman[218840]: 2026-02-20 09:22:41.797111461 +0000 UTC m=+0.104101299 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller)
Feb 20 09:22:41 np0005625203.localdomain podman[218840]: 2026-02-20 09:22:41.877673852 +0000 UTC m=+0.184663680 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 20 09:22:41 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:22:41 np0005625203.localdomain python3.9[218872]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:22:41 np0005625203.localdomain sudo[218865]: pam_unix(sudo:session): session closed for user root
Feb 20 09:22:42 np0005625203.localdomain sudo[218994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vskusmosbziphsjvgwtyljbovmcptfmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579362.081124-1466-155139133300583/AnsiballZ_file.py
Feb 20 09:22:42 np0005625203.localdomain sudo[218994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:22:42 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:22:42 np0005625203.localdomain podman[218997]: 2026-02-20 09:22:42.483260414 +0000 UTC m=+0.081934397 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 20 09:22:42 np0005625203.localdomain podman[218997]: 2026-02-20 09:22:42.490265696 +0000 UTC m=+0.088939709 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:22:42 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:22:42 np0005625203.localdomain python3.9[218996]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:22:42 np0005625203.localdomain sudo[218994]: pam_unix(sudo:session): session closed for user root
Feb 20 09:22:43 np0005625203.localdomain sudo[219123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xumijckpekdwowvuszkzrlfwidhoabpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579362.764609-1466-265042475020782/AnsiballZ_file.py
Feb 20 09:22:43 np0005625203.localdomain sudo[219123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:22:43 np0005625203.localdomain python3.9[219125]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:22:43 np0005625203.localdomain sudo[219123]: pam_unix(sudo:session): session closed for user root
Feb 20 09:22:43 np0005625203.localdomain sudo[219233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ydagfmbtalphchbpibksxtrqgpykrkto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579363.3997383-1466-195720160326159/AnsiballZ_file.py
Feb 20 09:22:43 np0005625203.localdomain sudo[219233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:22:43 np0005625203.localdomain python3.9[219235]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:22:43 np0005625203.localdomain sudo[219233]: pam_unix(sudo:session): session closed for user root
Feb 20 09:22:44 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37371 DF PROTO=TCP SPT=59764 DPT=9882 SEQ=503338527 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FECE800000000001030307) 
Feb 20 09:22:44 np0005625203.localdomain sudo[219343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qrxareubwavjayttstnkqxpzfkehmeyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579364.5629423-1466-221534108117103/AnsiballZ_file.py
Feb 20 09:22:44 np0005625203.localdomain sudo[219343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:22:45 np0005625203.localdomain python3.9[219345]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:22:45 np0005625203.localdomain sudo[219343]: pam_unix(sudo:session): session closed for user root
Feb 20 09:22:45 np0005625203.localdomain sudo[219453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-elewocemppnjorubmstztnccqfpnxabc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579365.1374645-1466-144113690817360/AnsiballZ_file.py
Feb 20 09:22:45 np0005625203.localdomain sudo[219453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:22:45 np0005625203.localdomain python3.9[219455]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:22:45 np0005625203.localdomain sudo[219453]: pam_unix(sudo:session): session closed for user root
Feb 20 09:22:46 np0005625203.localdomain sudo[219489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:22:46 np0005625203.localdomain sudo[219489]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:22:46 np0005625203.localdomain sudo[219489]: pam_unix(sudo:session): session closed for user root
Feb 20 09:22:46 np0005625203.localdomain sudo[219531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 check-host
Feb 20 09:22:46 np0005625203.localdomain sudo[219531]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:22:46 np0005625203.localdomain sudo[219599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gjzowzojoqgpvtenliulebugfxzigqvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579366.6151166-1637-259031908977556/AnsiballZ_file.py
Feb 20 09:22:46 np0005625203.localdomain sudo[219599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:22:47 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63264 DF PROTO=TCP SPT=48572 DPT=9101 SEQ=2162004073 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FED9810000000001030307) 
Feb 20 09:22:47 np0005625203.localdomain python3.9[219601]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:22:47 np0005625203.localdomain sudo[219599]: pam_unix(sudo:session): session closed for user root
Feb 20 09:22:47 np0005625203.localdomain sudo[219531]: pam_unix(sudo:session): session closed for user root
Feb 20 09:22:47 np0005625203.localdomain sudo[219693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:22:47 np0005625203.localdomain sudo[219693]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:22:47 np0005625203.localdomain sudo[219693]: pam_unix(sudo:session): session closed for user root
Feb 20 09:22:47 np0005625203.localdomain sudo[219725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 09:22:47 np0005625203.localdomain sudo[219725]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:22:47 np0005625203.localdomain sudo[219765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-njelkwtssmfileotnivfnmajpuqplljk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579367.1760888-1637-136360762097855/AnsiballZ_file.py
Feb 20 09:22:47 np0005625203.localdomain sudo[219765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:22:47 np0005625203.localdomain python3.9[219767]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:22:47 np0005625203.localdomain sudo[219765]: pam_unix(sudo:session): session closed for user root
Feb 20 09:22:48 np0005625203.localdomain sudo[219725]: pam_unix(sudo:session): session closed for user root
Feb 20 09:22:48 np0005625203.localdomain sudo[219906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fkdvouqnxvjvqupzxzhzcrkqmllkoihy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579367.7795513-1637-181314366288270/AnsiballZ_file.py
Feb 20 09:22:48 np0005625203.localdomain sudo[219906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:22:48 np0005625203.localdomain python3.9[219908]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:22:48 np0005625203.localdomain sudo[219906]: pam_unix(sudo:session): session closed for user root
Feb 20 09:22:48 np0005625203.localdomain sudo[219996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:22:48 np0005625203.localdomain sudo[219996]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:22:48 np0005625203.localdomain sudo[219996]: pam_unix(sudo:session): session closed for user root
Feb 20 09:22:48 np0005625203.localdomain sudo[220034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sxjnnatzzodlvghhyjktytwfbxggmxgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579368.355537-1637-159272127567183/AnsiballZ_file.py
Feb 20 09:22:48 np0005625203.localdomain sudo[220034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:22:48 np0005625203.localdomain python3.9[220036]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:22:48 np0005625203.localdomain sudo[220034]: pam_unix(sudo:session): session closed for user root
Feb 20 09:22:49 np0005625203.localdomain sudo[220144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fjzpoqvbxpgtxblauueirbtevcveawcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579369.0027714-1637-57795028070781/AnsiballZ_file.py
Feb 20 09:22:49 np0005625203.localdomain sudo[220144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:22:49 np0005625203.localdomain python3.9[220146]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:22:49 np0005625203.localdomain sudo[220144]: pam_unix(sudo:session): session closed for user root
Feb 20 09:22:49 np0005625203.localdomain sudo[220254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ncvfzyfckjbzulcmzoebjbpfsnhmujhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579369.60212-1637-253305258715443/AnsiballZ_file.py
Feb 20 09:22:49 np0005625203.localdomain sudo[220254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:22:50 np0005625203.localdomain python3.9[220256]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:22:50 np0005625203.localdomain sudo[220254]: pam_unix(sudo:session): session closed for user root
Feb 20 09:22:50 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64534 DF PROTO=TCP SPT=36622 DPT=9100 SEQ=257076617 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FEE5C00000000001030307) 
Feb 20 09:22:50 np0005625203.localdomain sudo[220364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gyqbprxooztrvdxtpnvefhnkcyiajmfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579370.188912-1637-205327992453165/AnsiballZ_file.py
Feb 20 09:22:50 np0005625203.localdomain sudo[220364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:22:50 np0005625203.localdomain python3.9[220366]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:22:50 np0005625203.localdomain sudo[220364]: pam_unix(sudo:session): session closed for user root
Feb 20 09:22:51 np0005625203.localdomain sudo[220474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qqzyfbiogxenzjkrzqqtjegtqsmyrype ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579370.809399-1637-197086443103988/AnsiballZ_file.py
Feb 20 09:22:51 np0005625203.localdomain sudo[220474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:22:51 np0005625203.localdomain python3.9[220476]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:22:51 np0005625203.localdomain sudo[220474]: pam_unix(sudo:session): session closed for user root
Feb 20 09:22:51 np0005625203.localdomain sudo[220584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zxscdkpupuhryvwszgninhphhlbauqjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579371.6278796-1811-22970522939111/AnsiballZ_command.py
Feb 20 09:22:51 np0005625203.localdomain sudo[220584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:22:52 np0005625203.localdomain python3.9[220586]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                                              systemctl disable --now certmonger.service
                                                              test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                                            fi
                                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:22:52 np0005625203.localdomain sudo[220584]: pam_unix(sudo:session): session closed for user root
Feb 20 09:22:52 np0005625203.localdomain sshd[220593]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:22:52 np0005625203.localdomain sshd[220593]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:22:52 np0005625203.localdomain python3.9[220698]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 20 09:22:53 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63266 DF PROTO=TCP SPT=48572 DPT=9101 SEQ=2162004073 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FEF1410000000001030307) 
Feb 20 09:22:53 np0005625203.localdomain sudo[220806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fuxrcsewaxfzszvqpdblbkgzwqosbzyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579373.2283106-1865-61885481333729/AnsiballZ_systemd_service.py
Feb 20 09:22:53 np0005625203.localdomain sudo[220806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:22:53 np0005625203.localdomain python3.9[220808]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 20 09:22:53 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:22:53 np0005625203.localdomain systemd-rc-local-generator[220835]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:22:53 np0005625203.localdomain systemd-sysv-generator[220839]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:22:53 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:22:53 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 09:22:54 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:22:54 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:22:54 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:22:54 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 09:22:54 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:22:54 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:22:54 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:22:54 np0005625203.localdomain sudo[220806]: pam_unix(sudo:session): session closed for user root
Feb 20 09:22:54 np0005625203.localdomain sudo[220952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ckiwidohcksnoqgmxymfmuyuuobiunao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579374.4033058-1889-105668741904540/AnsiballZ_command.py
Feb 20 09:22:54 np0005625203.localdomain sudo[220952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:22:54 np0005625203.localdomain python3.9[220954]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:22:54 np0005625203.localdomain sudo[220952]: pam_unix(sudo:session): session closed for user root
Feb 20 09:22:55 np0005625203.localdomain sudo[221063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-urnkbgquqennyoushariwlzgxnjuujsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579375.0002947-1889-46658332529085/AnsiballZ_command.py
Feb 20 09:22:55 np0005625203.localdomain sudo[221063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:22:55 np0005625203.localdomain python3.9[221065]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:22:55 np0005625203.localdomain sudo[221063]: pam_unix(sudo:session): session closed for user root
Feb 20 09:22:56 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64536 DF PROTO=TCP SPT=36622 DPT=9100 SEQ=257076617 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FEFD800000000001030307) 
Feb 20 09:22:56 np0005625203.localdomain sudo[221174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-feeirhhifstimmzrtlfrbbfileiqjhtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579375.6088169-1889-135594249483238/AnsiballZ_command.py
Feb 20 09:22:56 np0005625203.localdomain sudo[221174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:22:56 np0005625203.localdomain python3.9[221176]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:22:56 np0005625203.localdomain sudo[221174]: pam_unix(sudo:session): session closed for user root
Feb 20 09:22:57 np0005625203.localdomain sudo[221285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xkasxdsnfkzcgyilifdtorypezngfwss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579377.0255344-1889-266266589170546/AnsiballZ_command.py
Feb 20 09:22:57 np0005625203.localdomain sudo[221285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:22:57 np0005625203.localdomain python3.9[221287]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:22:57 np0005625203.localdomain sudo[221285]: pam_unix(sudo:session): session closed for user root
Feb 20 09:22:57 np0005625203.localdomain sudo[221396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sydochotideruakwbbuaiglhoxpltqhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579377.608576-1889-245314198483579/AnsiballZ_command.py
Feb 20 09:22:57 np0005625203.localdomain sudo[221396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:22:58 np0005625203.localdomain python3.9[221398]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:22:58 np0005625203.localdomain sudo[221396]: pam_unix(sudo:session): session closed for user root
Feb 20 09:22:58 np0005625203.localdomain sudo[221507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oyawkhrkdtoqjvyskvwgatmxfwxbipmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579378.6612747-1889-172988953063777/AnsiballZ_command.py
Feb 20 09:22:58 np0005625203.localdomain sudo[221507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:22:59 np0005625203.localdomain python3.9[221509]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:22:59 np0005625203.localdomain sudo[221507]: pam_unix(sudo:session): session closed for user root
Feb 20 09:22:59 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=40092 DF PROTO=TCP SPT=48134 DPT=9102 SEQ=901573462 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FF09850000000001030307) 
Feb 20 09:22:59 np0005625203.localdomain sudo[221618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-grxozvfeswwsjdeobcnzrbqirvxjfwth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579379.2398071-1889-63855935469145/AnsiballZ_command.py
Feb 20 09:22:59 np0005625203.localdomain sudo[221618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:22:59 np0005625203.localdomain sshd[221621]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:22:59 np0005625203.localdomain python3.9[221620]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:22:59 np0005625203.localdomain sudo[221618]: pam_unix(sudo:session): session closed for user root
Feb 20 09:22:59 np0005625203.localdomain sshd[221621]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:23:00 np0005625203.localdomain sudo[221731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sidemgkpsjhxakpmhpnhaaaqfrabuprk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579379.8641486-1889-141073439187172/AnsiballZ_command.py
Feb 20 09:23:00 np0005625203.localdomain sudo[221731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:23:00 np0005625203.localdomain python3.9[221733]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:23:00 np0005625203.localdomain sshd[221735]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:23:00 np0005625203.localdomain sudo[221731]: pam_unix(sudo:session): session closed for user root
Feb 20 09:23:01 np0005625203.localdomain sshd[221735]: Received disconnect from 194.107.115.2 port 41074:11: Bye Bye [preauth]
Feb 20 09:23:01 np0005625203.localdomain sshd[221735]: Disconnected from authenticating user root 194.107.115.2 port 41074 [preauth]
Feb 20 09:23:02 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4410 DF PROTO=TCP SPT=55858 DPT=9882 SEQ=3927015288 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FF14410000000001030307) 
Feb 20 09:23:02 np0005625203.localdomain sudo[221844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rheoapvnuqvnacehrsrgzxfifjqqcgof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579382.513549-2096-271444178950761/AnsiballZ_file.py
Feb 20 09:23:02 np0005625203.localdomain sudo[221844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:23:02 np0005625203.localdomain python3.9[221846]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:23:03 np0005625203.localdomain sudo[221844]: pam_unix(sudo:session): session closed for user root
Feb 20 09:23:03 np0005625203.localdomain sudo[221954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dosdssnstysluyckqglnddccnnljdvje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579383.1277964-2096-11005717641865/AnsiballZ_file.py
Feb 20 09:23:03 np0005625203.localdomain sudo[221954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:23:03 np0005625203.localdomain python3.9[221956]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:23:03 np0005625203.localdomain sudo[221954]: pam_unix(sudo:session): session closed for user root
Feb 20 09:23:04 np0005625203.localdomain sudo[222064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ugkrokcagwzwoawyrkimthayirlssofw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579383.8625681-2141-43281674100487/AnsiballZ_file.py
Feb 20 09:23:04 np0005625203.localdomain sudo[222064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:23:04 np0005625203.localdomain python3.9[222066]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:23:04 np0005625203.localdomain sudo[222064]: pam_unix(sudo:session): session closed for user root
Feb 20 09:23:04 np0005625203.localdomain sudo[222174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ikcaexrenwkoilsbpyxijwwptwphbbkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579384.4062765-2141-149838528944545/AnsiballZ_file.py
Feb 20 09:23:04 np0005625203.localdomain sudo[222174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:23:04 np0005625203.localdomain python3.9[222176]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:23:04 np0005625203.localdomain sudo[222174]: pam_unix(sudo:session): session closed for user root
Feb 20 09:23:05 np0005625203.localdomain sudo[222284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hfuetwmvgxlhfgqzegpoetpukqdioyqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579385.0219374-2141-43883443927881/AnsiballZ_file.py
Feb 20 09:23:05 np0005625203.localdomain sudo[222284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:23:05 np0005625203.localdomain python3.9[222286]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:23:05 np0005625203.localdomain sudo[222284]: pam_unix(sudo:session): session closed for user root
Feb 20 09:23:05 np0005625203.localdomain sudo[222394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dcpybkojmadudozpvmajfikxphldmhhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579385.6219802-2141-156980395877315/AnsiballZ_file.py
Feb 20 09:23:05 np0005625203.localdomain sudo[222394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:23:06 np0005625203.localdomain python3.9[222396]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:23:06 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=4411 DF PROTO=TCP SPT=55858 DPT=9882 SEQ=3927015288 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FF24010000000001030307) 
Feb 20 09:23:06 np0005625203.localdomain sudo[222394]: pam_unix(sudo:session): session closed for user root
Feb 20 09:23:06 np0005625203.localdomain sudo[222504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-glcsyfssjuldstnmoqogvkevypuduvlg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579386.1987867-2141-253658941496941/AnsiballZ_file.py
Feb 20 09:23:06 np0005625203.localdomain sudo[222504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:23:06 np0005625203.localdomain python3.9[222506]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:23:06 np0005625203.localdomain sudo[222504]: pam_unix(sudo:session): session closed for user root
Feb 20 09:23:07 np0005625203.localdomain sudo[222614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zndmzjnikzdmyydczqbxfwbjfbtyikgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579386.8141706-2141-135243091361442/AnsiballZ_file.py
Feb 20 09:23:07 np0005625203.localdomain sudo[222614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:23:07 np0005625203.localdomain python3.9[222616]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:23:07 np0005625203.localdomain sudo[222614]: pam_unix(sudo:session): session closed for user root
Feb 20 09:23:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:23:07.635 161112 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:23:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:23:07.636 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:23:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:23:07.636 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:23:08 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43729 DF PROTO=TCP SPT=56904 DPT=9105 SEQ=1765691620 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FF2BC00000000001030307) 
Feb 20 09:23:08 np0005625203.localdomain sudo[222724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xwrckrfjmbdnoysvwbgoedcdbrhmehju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579387.849105-2141-205637972295871/AnsiballZ_file.py
Feb 20 09:23:08 np0005625203.localdomain sudo[222724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:23:08 np0005625203.localdomain python3.9[222726]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:23:08 np0005625203.localdomain sudo[222724]: pam_unix(sudo:session): session closed for user root
Feb 20 09:23:11 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52771 DF PROTO=TCP SPT=41524 DPT=9105 SEQ=2360808995 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FF38810000000001030307) 
Feb 20 09:23:12 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:23:12 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:23:12 np0005625203.localdomain podman[222745]: 2026-02-20 09:23:12.772519625 +0000 UTC m=+0.084936005 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Feb 20 09:23:12 np0005625203.localdomain podman[222744]: 2026-02-20 09:23:12.836766157 +0000 UTC m=+0.150106247 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 20 09:23:12 np0005625203.localdomain podman[222744]: 2026-02-20 09:23:12.845210196 +0000 UTC m=+0.158550326 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:23:12 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:23:12 np0005625203.localdomain podman[222745]: 2026-02-20 09:23:12.871276747 +0000 UTC m=+0.183693087 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:23:12 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:23:14 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43731 DF PROTO=TCP SPT=56904 DPT=9105 SEQ=1765691620 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FF43800000000001030307) 
Feb 20 09:23:16 np0005625203.localdomain sudo[222879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pdpnxquuovzvaqshxztpghcsbkgnkpsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579396.2260008-2506-210392043332661/AnsiballZ_getent.py
Feb 20 09:23:16 np0005625203.localdomain sudo[222879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:23:16 np0005625203.localdomain python3.9[222881]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Feb 20 09:23:16 np0005625203.localdomain sudo[222879]: pam_unix(sudo:session): session closed for user root
Feb 20 09:23:16 np0005625203.localdomain sshd[222900]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:23:17 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12664 DF PROTO=TCP SPT=49396 DPT=9101 SEQ=2760633743 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FF4EC00000000001030307) 
Feb 20 09:23:17 np0005625203.localdomain auditd[725]: Audit daemon rotating log files
Feb 20 09:23:17 np0005625203.localdomain sudo[222992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-thdilisljheughfjngouasjdveqjxlax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579397.0454373-2530-130197311847786/AnsiballZ_group.py
Feb 20 09:23:17 np0005625203.localdomain sudo[222992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:23:17 np0005625203.localdomain python3.9[222994]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb 20 09:23:17 np0005625203.localdomain groupadd[222995]: group added to /etc/group: name=nova, GID=42436
Feb 20 09:23:17 np0005625203.localdomain groupadd[222995]: group added to /etc/gshadow: name=nova
Feb 20 09:23:17 np0005625203.localdomain groupadd[222995]: new group: name=nova, GID=42436
Feb 20 09:23:17 np0005625203.localdomain sudo[222992]: pam_unix(sudo:session): session closed for user root
Feb 20 09:23:18 np0005625203.localdomain sshd[222900]: Invalid user ubuntu from 103.48.192.48 port 38761
Feb 20 09:23:18 np0005625203.localdomain sudo[223108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bayhzrhsvpflctpeneizjzvpafkuhosk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579397.9430678-2554-57624435953893/AnsiballZ_user.py
Feb 20 09:23:18 np0005625203.localdomain sudo[223108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:23:18 np0005625203.localdomain sshd[222900]: Received disconnect from 103.48.192.48 port 38761:11: Bye Bye [preauth]
Feb 20 09:23:18 np0005625203.localdomain sshd[222900]: Disconnected from invalid user ubuntu 103.48.192.48 port 38761 [preauth]
Feb 20 09:23:18 np0005625203.localdomain python3.9[223110]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005625203.localdomain update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Feb 20 09:23:18 np0005625203.localdomain useradd[223112]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Feb 20 09:23:18 np0005625203.localdomain useradd[223112]: add 'nova' to group 'libvirt'
Feb 20 09:23:18 np0005625203.localdomain useradd[223112]: add 'nova' to shadow group 'libvirt'
Feb 20 09:23:18 np0005625203.localdomain sudo[223108]: pam_unix(sudo:session): session closed for user root
Feb 20 09:23:20 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=41510 DF PROTO=TCP SPT=39344 DPT=9101 SEQ=2072273638 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FF5A800000000001030307) 
Feb 20 09:23:20 np0005625203.localdomain sshd[223136]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:23:20 np0005625203.localdomain sshd[223136]: Accepted publickey for zuul from 192.168.122.30 port 56784 ssh2: RSA SHA256:uaxCfdsB9dLyL4znjsLCPDTT6O+6xZzjHRjczJFOA6k
Feb 20 09:23:20 np0005625203.localdomain systemd-logind[759]: New session 54 of user zuul.
Feb 20 09:23:20 np0005625203.localdomain systemd[1]: Started Session 54 of User zuul.
Feb 20 09:23:20 np0005625203.localdomain sshd[223136]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Feb 20 09:23:20 np0005625203.localdomain sshd[223139]: Received disconnect from 192.168.122.30 port 56784:11: disconnected by user
Feb 20 09:23:20 np0005625203.localdomain sshd[223139]: Disconnected from user zuul 192.168.122.30 port 56784
Feb 20 09:23:20 np0005625203.localdomain sshd[223136]: pam_unix(sshd:session): session closed for user zuul
Feb 20 09:23:20 np0005625203.localdomain systemd[1]: session-54.scope: Deactivated successfully.
Feb 20 09:23:20 np0005625203.localdomain systemd-logind[759]: Session 54 logged out. Waiting for processes to exit.
Feb 20 09:23:20 np0005625203.localdomain systemd-logind[759]: Removed session 54.
Feb 20 09:23:21 np0005625203.localdomain sshd[223211]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:23:21 np0005625203.localdomain python3.9[223249]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:23:22 np0005625203.localdomain python3.9[223304]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:23:22 np0005625203.localdomain sshd[223211]: Invalid user bitrix from 152.32.129.236 port 34592
Feb 20 09:23:22 np0005625203.localdomain sshd[223211]: Received disconnect from 152.32.129.236 port 34592:11: Bye Bye [preauth]
Feb 20 09:23:22 np0005625203.localdomain sshd[223211]: Disconnected from invalid user bitrix 152.32.129.236 port 34592 [preauth]
Feb 20 09:23:22 np0005625203.localdomain python3.9[223412]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:23:23 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=24637 DF PROTO=TCP SPT=49976 DPT=9100 SEQ=2172774708 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FF66800000000001030307) 
Feb 20 09:23:23 np0005625203.localdomain python3.9[223498]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771579402.2727869-2629-5654801947994/.source _original_basename=ssh-config follow=False checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:23:23 np0005625203.localdomain python3.9[223606]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:23:24 np0005625203.localdomain python3.9[223692]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771579403.419083-2629-116234685155831/.source.py _original_basename=nova_statedir_ownership.py follow=False checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:23:24 np0005625203.localdomain python3.9[223800]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:23:25 np0005625203.localdomain python3.9[223886]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771579404.5077724-2629-74408386499442/.source _original_basename=run-on-host follow=False checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:23:26 np0005625203.localdomain python3.9[223994]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:23:26 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=193 DF PROTO=TCP SPT=33912 DPT=9100 SEQ=4160601369 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FF72C00000000001030307) 
Feb 20 09:23:26 np0005625203.localdomain python3.9[224080]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771579405.7316566-2791-36872248471787/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=5c86faa791c2b2de3923873eeab6b1f262f557b2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:23:27 np0005625203.localdomain sudo[224188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dpvsfynrzzbisjicvyjhjduoxnmdihkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579407.2322173-2836-83503507122755/AnsiballZ_file.py
Feb 20 09:23:27 np0005625203.localdomain sudo[224188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:23:27 np0005625203.localdomain python3.9[224190]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:23:27 np0005625203.localdomain sudo[224188]: pam_unix(sudo:session): session closed for user root
Feb 20 09:23:28 np0005625203.localdomain sudo[224298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ulmxyqnlieyvcfsgywiibzxkvgyqqmse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579407.9165988-2860-137683795607083/AnsiballZ_copy.py
Feb 20 09:23:28 np0005625203.localdomain sudo[224298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:23:28 np0005625203.localdomain python3.9[224300]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:23:28 np0005625203.localdomain sudo[224298]: pam_unix(sudo:session): session closed for user root
Feb 20 09:23:28 np0005625203.localdomain sudo[224408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ptqyhjmygzihkmybkwowjjyftvgcwqhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579408.6043568-2884-197794027803170/AnsiballZ_stat.py
Feb 20 09:23:28 np0005625203.localdomain sudo[224408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:23:28 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30684 DF PROTO=TCP SPT=36366 DPT=9882 SEQ=3242990750 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FF7D7C0000000001030307) 
Feb 20 09:23:29 np0005625203.localdomain python3.9[224410]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:23:29 np0005625203.localdomain sudo[224408]: pam_unix(sudo:session): session closed for user root
Feb 20 09:23:29 np0005625203.localdomain sudo[224520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uolvlffbwikpkxvdipvtrmnknneiwfdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579409.4186563-2911-204336305415975/AnsiballZ_file.py
Feb 20 09:23:29 np0005625203.localdomain sudo[224520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:23:29 np0005625203.localdomain python3.9[224522]: ansible-ansible.builtin.file Invoked with group=nova mode=0400 owner=nova path=/var/lib/nova/compute_id state=file recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:23:29 np0005625203.localdomain sudo[224520]: pam_unix(sudo:session): session closed for user root
Feb 20 09:23:30 np0005625203.localdomain python3.9[224630]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:23:31 np0005625203.localdomain sudo[224740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gimdwyvsyqlhzmchgcofirusxegneojv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579411.6281567-2968-22726681548665/AnsiballZ_file.py
Feb 20 09:23:31 np0005625203.localdomain sudo[224740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:23:32 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30686 DF PROTO=TCP SPT=36366 DPT=9882 SEQ=3242990750 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FF89800000000001030307) 
Feb 20 09:23:32 np0005625203.localdomain python3.9[224742]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:23:32 np0005625203.localdomain sudo[224740]: pam_unix(sudo:session): session closed for user root
Feb 20 09:23:32 np0005625203.localdomain sudo[224850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ahhfdlzumalzcgyamwygkprxxfdiymip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579412.3033195-2993-196545078617769/AnsiballZ_file.py
Feb 20 09:23:32 np0005625203.localdomain sudo[224850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:23:32 np0005625203.localdomain python3.9[224852]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:23:32 np0005625203.localdomain sudo[224850]: pam_unix(sudo:session): session closed for user root
Feb 20 09:23:33 np0005625203.localdomain python3.9[224960]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/nova_compute_init state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:23:36 np0005625203.localdomain sudo[225262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eimiqorafnnqdmoeicgbetksvkmbvqyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579415.601591-3094-21929638393053/AnsiballZ_container_config_data.py
Feb 20 09:23:36 np0005625203.localdomain sudo[225262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:23:36 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=30687 DF PROTO=TCP SPT=36366 DPT=9882 SEQ=3242990750 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FF99400000000001030307) 
Feb 20 09:23:36 np0005625203.localdomain python3.9[225264]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/nova_compute_init config_pattern=*.json debug=False
Feb 20 09:23:36 np0005625203.localdomain sudo[225262]: pam_unix(sudo:session): session closed for user root
Feb 20 09:23:36 np0005625203.localdomain sshd[225282]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:23:37 np0005625203.localdomain sudo[225374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cgpjwtihfpnfirkzxtghqefbozvimxet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579416.70023-3127-152141768939430/AnsiballZ_container_config_hash.py
Feb 20 09:23:37 np0005625203.localdomain sudo[225374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:23:37 np0005625203.localdomain python3.9[225376]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb 20 09:23:37 np0005625203.localdomain sudo[225374]: pam_unix(sudo:session): session closed for user root
Feb 20 09:23:38 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38328 DF PROTO=TCP SPT=45168 DPT=9105 SEQ=1349808251 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FFA1010000000001030307) 
Feb 20 09:23:38 np0005625203.localdomain sshd[225282]: Received disconnect from 103.61.123.132 port 37674:11: Bye Bye [preauth]
Feb 20 09:23:38 np0005625203.localdomain sshd[225282]: Disconnected from authenticating user root 103.61.123.132 port 37674 [preauth]
Feb 20 09:23:38 np0005625203.localdomain sudo[225484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fxvpnticicrrhuhmhvplcbwgzuxjxkru ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771579417.7141433-3157-52758310792618/AnsiballZ_edpm_container_manage.py
Feb 20 09:23:38 np0005625203.localdomain sudo[225484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:23:38 np0005625203.localdomain python3[225486]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/nova_compute_init config_id=nova_compute_init config_overrides={} config_patterns=*.json containers=['nova_compute_init'] log_base_path=/var/log/containers/stdouts debug=False
Feb 20 09:23:38 np0005625203.localdomain podman[225524]: 
Feb 20 09:23:38 np0005625203.localdomain podman[225524]: 2026-02-20 09:23:38.727460132 +0000 UTC m=+0.060369101 container create 77edbe1d945e2b38c97d8a18f93ee06ae61ca7889352e35af1b678b276f7cbeb (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.schema-version=1.0, config_id=nova_compute_init, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, container_name=nova_compute_init, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': 'ceca6e0a7e6c49fe059956dd2c05951172f310c281831cfd96e02226e564c84f'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_managed=true)
Feb 20 09:23:38 np0005625203.localdomain podman[225524]: 2026-02-20 09:23:38.695362167 +0000 UTC m=+0.028271106 image pull  quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Feb 20 09:23:38 np0005625203.localdomain python3[225486]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --env EDPM_CONFIG_HASH=ceca6e0a7e6c49fe059956dd2c05951172f310c281831cfd96e02226e564c84f --label config_id=nova_compute_init --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': 'ceca6e0a7e6c49fe059956dd2c05951172f310c281831cfd96e02226e564c84f'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Feb 20 09:23:38 np0005625203.localdomain sudo[225484]: pam_unix(sudo:session): session closed for user root
Feb 20 09:23:39 np0005625203.localdomain sudo[225669]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ybredgucgsutycdnuaiohgfxbxndylmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579419.0955155-3181-50207491387758/AnsiballZ_stat.py
Feb 20 09:23:39 np0005625203.localdomain sudo[225669]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:23:39 np0005625203.localdomain python3.9[225671]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:23:39 np0005625203.localdomain sudo[225669]: pam_unix(sudo:session): session closed for user root
Feb 20 09:23:40 np0005625203.localdomain python3.9[225781]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb 20 09:23:41 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=1036 DF PROTO=TCP SPT=60198 DPT=9105 SEQ=922119022 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FFAC800000000001030307) 
Feb 20 09:23:42 np0005625203.localdomain sudo[225889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ireiedjmimauejuczayynliyrtaowpma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579421.3673046-3262-270951539473317/AnsiballZ_stat.py
Feb 20 09:23:42 np0005625203.localdomain sudo[225889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:23:42 np0005625203.localdomain python3.9[225891]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:23:42 np0005625203.localdomain sudo[225889]: pam_unix(sudo:session): session closed for user root
Feb 20 09:23:42 np0005625203.localdomain sudo[225979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-glijcpcusprnbunmsfolmcrewxhakbup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579421.3673046-3262-270951539473317/AnsiballZ_copy.py
Feb 20 09:23:42 np0005625203.localdomain sudo[225979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:23:42 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:23:42 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:23:43 np0005625203.localdomain systemd[1]: tmp-crun.CWELdR.mount: Deactivated successfully.
Feb 20 09:23:43 np0005625203.localdomain podman[225982]: 2026-02-20 09:23:43.074133248 +0000 UTC m=+0.095067300 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Feb 20 09:23:43 np0005625203.localdomain podman[225982]: 2026-02-20 09:23:43.079952052 +0000 UTC m=+0.100886074 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Feb 20 09:23:43 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:23:43 np0005625203.localdomain podman[225983]: 2026-02-20 09:23:43.161163153 +0000 UTC m=+0.181840157 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Feb 20 09:23:43 np0005625203.localdomain python3.9[225981]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771579421.3673046-3262-270951539473317/.source.yaml _original_basename=.5bhjqf55 follow=False checksum=201984e070e9869531933fce67c78d3ce61bb83b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:23:43 np0005625203.localdomain sudo[225979]: pam_unix(sudo:session): session closed for user root
Feb 20 09:23:43 np0005625203.localdomain podman[225983]: 2026-02-20 09:23:43.208278765 +0000 UTC m=+0.228955759 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:23:43 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:23:44 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38330 DF PROTO=TCP SPT=45168 DPT=9105 SEQ=1349808251 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FFB8C00000000001030307) 
Feb 20 09:23:45 np0005625203.localdomain sudo[226132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ggemxfmeunbshvekdsniplvqkomyjuhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579424.8996112-3313-111246363956071/AnsiballZ_file.py
Feb 20 09:23:45 np0005625203.localdomain sudo[226132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:23:45 np0005625203.localdomain python3.9[226134]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:23:45 np0005625203.localdomain sudo[226132]: pam_unix(sudo:session): session closed for user root
Feb 20 09:23:45 np0005625203.localdomain sudo[226242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yynvqlhbattciwwlhknshdxabijcdind ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579425.5612335-3337-11786721687522/AnsiballZ_file.py
Feb 20 09:23:45 np0005625203.localdomain sudo[226242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:23:46 np0005625203.localdomain python3.9[226244]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:23:46 np0005625203.localdomain sudo[226242]: pam_unix(sudo:session): session closed for user root
Feb 20 09:23:46 np0005625203.localdomain sudo[226352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wokizhqpzlknymfligdtgjcqbxexveio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579426.60115-3361-132673716495082/AnsiballZ_stat.py
Feb 20 09:23:46 np0005625203.localdomain sudo[226352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:23:47 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=14 DF PROTO=TCP SPT=44702 DPT=9101 SEQ=3337074015 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FFC4000000000001030307) 
Feb 20 09:23:47 np0005625203.localdomain python3.9[226354]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:23:47 np0005625203.localdomain sudo[226352]: pam_unix(sudo:session): session closed for user root
Feb 20 09:23:47 np0005625203.localdomain sudo[226442]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hjrpazczegzpcsxvqgiqgligzbtiiire ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579426.60115-3361-132673716495082/AnsiballZ_copy.py
Feb 20 09:23:47 np0005625203.localdomain sudo[226442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:23:47 np0005625203.localdomain python3.9[226444]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/nova_compute.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1771579426.60115-3361-132673716495082/.source.json _original_basename=.34arc_4q follow=False checksum=0018389a48392615f4a8869cad43008a907328ff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:23:47 np0005625203.localdomain sudo[226442]: pam_unix(sudo:session): session closed for user root
Feb 20 09:23:48 np0005625203.localdomain python3.9[226552]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/nova_compute state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:23:48 np0005625203.localdomain sudo[226624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:23:48 np0005625203.localdomain sudo[226624]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:23:48 np0005625203.localdomain sudo[226624]: pam_unix(sudo:session): session closed for user root
Feb 20 09:23:48 np0005625203.localdomain sudo[226676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 09:23:48 np0005625203.localdomain sudo[226676]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:23:49 np0005625203.localdomain sudo[226676]: pam_unix(sudo:session): session closed for user root
Feb 20 09:23:50 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5051 DF PROTO=TCP SPT=59280 DPT=9100 SEQ=2697206795 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FFD0000000000001030307) 
Feb 20 09:23:50 np0005625203.localdomain sudo[226831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:23:50 np0005625203.localdomain sudo[226831]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:23:50 np0005625203.localdomain sudo[226831]: pam_unix(sudo:session): session closed for user root
Feb 20 09:23:51 np0005625203.localdomain sudo[226939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zpqdeogkroaqujyrreafhijlkvccojyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579430.788526-3481-162596972468731/AnsiballZ_container_config_data.py
Feb 20 09:23:51 np0005625203.localdomain sudo[226939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:23:51 np0005625203.localdomain python3.9[226941]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/nova_compute config_pattern=*.json debug=False
Feb 20 09:23:51 np0005625203.localdomain sudo[226939]: pam_unix(sudo:session): session closed for user root
Feb 20 09:23:51 np0005625203.localdomain sudo[227049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-omkomrecmtxgnpilgxfmamrstpzulhyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579431.7135575-3514-61864850924044/AnsiballZ_container_config_hash.py
Feb 20 09:23:51 np0005625203.localdomain sudo[227049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:23:52 np0005625203.localdomain python3.9[227051]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb 20 09:23:52 np0005625203.localdomain sudo[227049]: pam_unix(sudo:session): session closed for user root
Feb 20 09:23:53 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16 DF PROTO=TCP SPT=44702 DPT=9101 SEQ=3337074015 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FFDBC00000000001030307) 
Feb 20 09:23:53 np0005625203.localdomain sudo[227159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dulqdkxmkyufmevuhourvmlsraqycbtq ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771579432.609609-3544-82382065589877/AnsiballZ_edpm_container_manage.py
Feb 20 09:23:53 np0005625203.localdomain sudo[227159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:23:53 np0005625203.localdomain python3[227161]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/nova_compute config_id=nova_compute config_overrides={} config_patterns=*.json containers=['nova_compute'] log_base_path=/var/log/containers/stdouts debug=False
Feb 20 09:23:53 np0005625203.localdomain python3[227161]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
                                                               {
                                                                    "Id": "f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd",
                                                                    "Digest": "sha256:f96bd21c79ae0d7e8e17010c5e2573637d6c0f47f03e63134c477edd8ad73d83",
                                                                    "RepoTags": [
                                                                         "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"
                                                                    ],
                                                                    "RepoDigests": [
                                                                         "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:f96bd21c79ae0d7e8e17010c5e2573637d6c0f47f03e63134c477edd8ad73d83"
                                                                    ],
                                                                    "Parent": "",
                                                                    "Comment": "",
                                                                    "Created": "2026-01-30T06:31:38.534497001Z",
                                                                    "Config": {
                                                                         "User": "nova",
                                                                         "Env": [
                                                                              "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                                                                              "LANG=en_US.UTF-8",
                                                                              "TZ=UTC",
                                                                              "container=oci"
                                                                         ],
                                                                         "Entrypoint": [
                                                                              "dumb-init",
                                                                              "--single-child",
                                                                              "--"
                                                                         ],
                                                                         "Cmd": [
                                                                              "kolla_start"
                                                                         ],
                                                                         "Labels": {
                                                                              "io.buildah.version": "1.41.3",
                                                                              "maintainer": "OpenStack Kubernetes Operator team",
                                                                              "org.label-schema.build-date": "20260127",
                                                                              "org.label-schema.license": "GPLv2",
                                                                              "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                              "org.label-schema.schema-version": "1.0",
                                                                              "org.label-schema.vendor": "CentOS",
                                                                              "tcib_build_tag": "b85d0548925081ae8c6bdd697658cec4",
                                                                              "tcib_managed": "true"
                                                                         },
                                                                         "StopSignal": "SIGTERM"
                                                                    },
                                                                    "Version": "",
                                                                    "Author": "",
                                                                    "Architecture": "amd64",
                                                                    "Os": "linux",
                                                                    "Size": 1214548351,
                                                                    "VirtualSize": 1214548351,
                                                                    "GraphDriver": {
                                                                         "Name": "overlay",
                                                                         "Data": {
                                                                              "LowerDir": "/var/lib/containers/storage/overlay/f4838a4ef132546976a08c48bf55f89a91b54cc7f0728a84d5c77d24ba7a8992/diff:/var/lib/containers/storage/overlay/1d7b7d3208029afb8b179e48c365354efe7c39d41194e42a7d13168820ab51ad/diff:/var/lib/containers/storage/overlay/1ad843ea4b31b05bcf49ccd6faa74bd0d6976ffabe60466fd78caf7ec41bf4ac/diff:/var/lib/containers/storage/overlay/57c9a356b8a6d9095c1e6bfd1bb5d3b87c9d1b944c2c5d8a1da6e61dd690c595/diff",
                                                                              "UpperDir": "/var/lib/containers/storage/overlay/426448257cd6d6837b598e532a79ac3a86475cfca86b72c882b04ab6e3f65424/diff",
                                                                              "WorkDir": "/var/lib/containers/storage/overlay/426448257cd6d6837b598e532a79ac3a86475cfca86b72c882b04ab6e3f65424/work"
                                                                         }
                                                                    },
                                                                    "RootFS": {
                                                                         "Type": "layers",
                                                                         "Layers": [
                                                                              "sha256:57c9a356b8a6d9095c1e6bfd1bb5d3b87c9d1b944c2c5d8a1da6e61dd690c595",
                                                                              "sha256:315008a247098d7a6218ae8aaacc68c9c19036e3778f3bb6313e5d0200cfa613",
                                                                              "sha256:d3142d7a25f00adc375557623676c786baeb2b8fec29945db7fe79212198a495",
                                                                              "sha256:6cac2e473d63cf2a9b8ef2ea3f4fbc7fb780c57021c3588efd56da3aa8cf8843",
                                                                              "sha256:927dd86a09392106af537557be80232b7e8ca154daa00857c24fe20f9e550a50"
                                                                         ]
                                                                    },
                                                                    "Labels": {
                                                                         "io.buildah.version": "1.41.3",
                                                                         "maintainer": "OpenStack Kubernetes Operator team",
                                                                         "org.label-schema.build-date": "20260127",
                                                                         "org.label-schema.license": "GPLv2",
                                                                         "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                         "org.label-schema.schema-version": "1.0",
                                                                         "org.label-schema.vendor": "CentOS",
                                                                         "tcib_build_tag": "b85d0548925081ae8c6bdd697658cec4",
                                                                         "tcib_managed": "true"
                                                                    },
                                                                    "Annotations": {},
                                                                    "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",
                                                                    "User": "nova",
                                                                    "History": [
                                                                         {
                                                                              "created": "2026-01-28T05:56:51.126388624Z",
                                                                              "created_by": "/bin/sh -c #(nop) ADD file:54935d5b0598cdb1451aeae3c8627aade8d55dcef2e876b35185c8e36be64256 in / ",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-28T05:56:51.126459235Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 9 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20260127\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-28T05:56:53.726938221Z",
                                                                              "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:18.890429494Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",
                                                                              "comment": "FROM quay.io/centos/centos:stream9",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:18.890534417Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:18.890553228Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:18.890570688Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:18.890616649Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:18.890659121Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:19.232761948Z",
                                                                              "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:52.670543613Z",
                                                                              "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:55.650316471Z",
                                                                              "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which python-tcib-containers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:55.970652058Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/uid_gid_manage.sh /usr/local/bin/uid_gid_manage",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:56.274301506Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/uid_gid_manage",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:56.82928237Z",
                                                                              "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage kolla hugetlbfs libvirt qemu",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:57.134416869Z",
                                                                              "created_by": "/bin/sh -c touch /usr/local/bin/kolla_extend_start && chmod 755 /usr/local/bin/kolla_extend_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:57.444274899Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/set_configs.py /usr/local/bin/kolla_set_configs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:57.746599531Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_set_configs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:58.041383545Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/start.sh /usr/local/bin/kolla_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:58.352119949Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:58.671042058Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/httpd_setup.sh /usr/local/bin/kolla_httpd_setup",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:58.969834612Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_httpd_setup",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:59.264649297Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/copy_cacerts.sh /usr/local/bin/kolla_copy_cacerts",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:59.518696627Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_copy_cacerts",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:59.800434902Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/sudoers /etc/sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:00.115933627Z",
                                                                              "created_by": "/bin/sh -c chmod 440 /etc/sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:00.41398479Z",
                                                                              "created_by": "/bin/sh -c sed -ri '/^(passwd:|group:)/ s/systemd//g' /etc/nsswitch.conf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:03.414738437Z",
                                                                              "created_by": "/bin/sh -c dnf -y reinstall which && rpm -e --nodeps tzdata && dnf -y install tzdata",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:03.709666444Z",
                                                                              "created_by": "/bin/sh -c if [ ! -f \"/etc/localtime\" ]; then ln -s /usr/share/zoneinfo/Etc/UTC /etc/localtime; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:04.019868523Z",
                                                                              "created_by": "/bin/sh -c mkdir -p /openstack",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:05.41751141Z",
                                                                              "created_by": "/bin/sh -c if [ 'centos' == 'centos' ];then if [ -n \"$(rpm -qa redhat-release)\" ];then rpm -e --nodeps redhat-release; fi ; dnf -y install centos-stream-release; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:07.124324267Z",
                                                                              "created_by": "/bin/sh -c dnf update --excludepkgs redhat-release -y && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:07.124384329Z",
                                                                              "created_by": "/bin/sh -c #(nop) STOPSIGNAL SIGTERM",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:07.124399349Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENTRYPOINT [\"dumb-init\", \"--single-child\", \"--\"]",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:07.124410339Z",
                                                                              "created_by": "/bin/sh -c #(nop) CMD [\"kolla_start\"]",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:08.028503475Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"b85d0548925081ae8c6bdd697658cec4\""
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:12:56.089921987Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-base:b85d0548925081ae8c6bdd697658cec4",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:13:34.524252589Z",
                                                                              "created_by": "/bin/sh -c dnf install -y python3-barbicanclient python3-cinderclient python3-designateclient python3-glanceclient python3-ironicclient python3-keystoneclient python3-manilaclient python3-neutronclient python3-novaclient python3-observabilityclient python3-octaviaclient python3-openstackclient python3-swiftclient python3-pymemcache && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:13:37.262239859Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"b85d0548925081ae8c6bdd697658cec4\""
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:18:39.234075496Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-os:b85d0548925081ae8c6bdd697658cec4",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:18:42.686286019Z",
                                                                              "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage nova",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:18:54.133364958Z",
                                                                              "created_by": "/bin/sh -c mkdir -p /etc/ssh && touch /etc/ssh/ssh_known_host",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:20:10.283411186Z",
                                                                              "created_by": "/bin/sh -c dnf install -y openstack-nova-common && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:20:19.407054412Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"b85d0548925081ae8c6bdd697658cec4\""
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:29:42.656365894Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-nova-base:b85d0548925081ae8c6bdd697658cec4",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:31:37.451289936Z",
                                                                              "created_by": "/bin/sh -c dnf -y install e2fsprogs xfsprogs xorriso iscsi-initiator-utils nfs-utils targetcli nvme-cli device-mapper-multipath ceph-common openssh-clients openstack-nova-compute openvswitch swtpm swtpm-tools && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:31:38.151652427Z",
                                                                              "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage nova",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:31:38.532191009Z",
                                                                              "created_by": "/bin/sh -c rm -f /etc/machine-id",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:31:38.532298572Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER nova",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:31:44.609081717Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"b85d0548925081ae8c6bdd697658cec4\""
                                                                         }
                                                                    ],
                                                                    "NamesHistory": [
                                                                         "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"
                                                                    ]
                                                               }
                                                          ]
                                                          : quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Feb 20 09:23:53 np0005625203.localdomain podman[227212]: 2026-02-20 09:23:53.992070864 +0000 UTC m=+0.094470402 container remove 31ac299c8d38a97a7f018ebad9a994b277cd67513dccd41d5719e8750ed321c4 (image=registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1, name=nova_compute, name=rhosp-rhel9/openstack-nova-compute, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-nova-compute, vcs-ref=fb12bb7868525ee01e6b8233454fd15b039a7ffe, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, vcs-type=git, container_name=nova_compute, vendor=Red Hat, Inc., build-date=2026-01-12T23:32:04Z, konflux.additional-tags=17.1.13 17.1_20260112.1, release=1766032510, version=17.1.13, batch=17.1_20260112.1, org.opencontainers.image.created=2026-01-12T23:32:04Z, cpe=cpe:/a:redhat:openstack:17.1::el9, description=Red Hat OpenStack Platform 17.1 nova-compute, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'LIBGUESTFS_BACKEND': 'direct', 'TRIPLEO_CONFIG_HASH': 'ea40a11d6c51260bfa854053d924f0d3-2eb7e8e9794eebaba92e1ff8facc8868'}, 'healthcheck': {'test': '/openstack/healthcheck 5672'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-nova-compute:17.1', 'ipc': 'host', 'net': 'host', 'privileged': True, 'restart': 'always', 'start_order': 3, 'ulimit': ['nofile=131072', 'memlock=67108864'], 'user': 'nova', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/log/containers/nova:/var/log/nova', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro', '/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/nova_libvirt:/var/lib/kolla/config_files/src:ro', '/var/lib/config-data/puppet-generated/iscsid/etc/iscsi:/var/lib/kolla/config_files/src-iscsid:ro', '/var/lib/tripleo-config/ceph:/var/lib/kolla/config_files/src-ceph:z', '/dev:/dev', '/lib/modules:/lib/modules:ro', '/run:/run', '/run/nova:/run/nova:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/libvirt:/var/lib/libvirt:shared', '/sys/class/net:/sys/class/net', '/sys/bus/pci:/sys/bus/pci', '/boot:/boot:ro', '/var/lib/nova:/var/lib/nova:shared']}, io.buildah.version=1.41.5, io.openshift.expose-services=, tcib_managed=true, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=tripleo_ansible, architecture=x86_64, io.k8s.display-name=Red Hat OpenStack Platform 17.1 nova-compute, url=https://www.redhat.com, com.redhat.component=openstack-nova-compute-container, maintainer=OpenStack TripleO Team, org.opencontainers.image.revision=fb12bb7868525ee01e6b8233454fd15b039a7ffe, config_id=tripleo_step5, io.k8s.description=Red Hat OpenStack Platform 17.1 nova-compute, summary=Red Hat OpenStack Platform 17.1 nova-compute)
Feb 20 09:23:53 np0005625203.localdomain python3[227161]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force nova_compute
Feb 20 09:23:54 np0005625203.localdomain podman[227225]: 
Feb 20 09:23:54 np0005625203.localdomain podman[227225]: 2026-02-20 09:23:54.098417261 +0000 UTC m=+0.088259145 container create 6e8885cc5a65c95462bf5deb083556a1dda2bba1c8e7634587477c8f801eac9f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-30a66d126d7b377a2bbf9bc0d57e51bdf55bb85eee6d379d83e3d3aa2e3d7293-ceca6e0a7e6c49fe059956dd2c05951172f310c281831cfd96e02226e564c84f'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, config_id=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0)
Feb 20 09:23:54 np0005625203.localdomain podman[227225]: 2026-02-20 09:23:54.055999078 +0000 UTC m=+0.045841032 image pull  quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Feb 20 09:23:54 np0005625203.localdomain python3[227161]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-30a66d126d7b377a2bbf9bc0d57e51bdf55bb85eee6d379d83e3d3aa2e3d7293-ceca6e0a7e6c49fe059956dd2c05951172f310c281831cfd96e02226e564c84f --label config_id=nova_compute --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-30a66d126d7b377a2bbf9bc0d57e51bdf55bb85eee6d379d83e3d3aa2e3d7293-ceca6e0a7e6c49fe059956dd2c05951172f310c281831cfd96e02226e564c84f'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Feb 20 09:23:54 np0005625203.localdomain sudo[227159]: pam_unix(sudo:session): session closed for user root
Feb 20 09:23:55 np0005625203.localdomain sudo[227367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gzxnzzvpljncfyptohilanwqfvhpfgen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579434.436754-3568-251955197369769/AnsiballZ_stat.py
Feb 20 09:23:55 np0005625203.localdomain sudo[227367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:23:55 np0005625203.localdomain python3.9[227369]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:23:55 np0005625203.localdomain sudo[227367]: pam_unix(sudo:session): session closed for user root
Feb 20 09:23:56 np0005625203.localdomain sudo[227479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zsyzohwdqartclvxhhglffyxawqcupvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579435.7686756-3595-114651940424927/AnsiballZ_file.py
Feb 20 09:23:56 np0005625203.localdomain sudo[227479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:23:56 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5053 DF PROTO=TCP SPT=59280 DPT=9100 SEQ=2697206795 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FFE7C00000000001030307) 
Feb 20 09:23:56 np0005625203.localdomain python3.9[227481]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:23:56 np0005625203.localdomain sudo[227479]: pam_unix(sudo:session): session closed for user root
Feb 20 09:23:56 np0005625203.localdomain sudo[227534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zannqlkmozbzkxiqdhjqwtgcccjkzrit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579435.7686756-3595-114651940424927/AnsiballZ_stat.py
Feb 20 09:23:56 np0005625203.localdomain sudo[227534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:23:56 np0005625203.localdomain python3.9[227536]: ansible-stat Invoked with path=/etc/systemd/system/edpm_nova_compute_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:23:56 np0005625203.localdomain sudo[227534]: pam_unix(sudo:session): session closed for user root
Feb 20 09:23:57 np0005625203.localdomain sudo[227643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ukwxmtfcsjxokzgeprnapfuvfyrkuyqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579436.7565084-3595-256804606692241/AnsiballZ_copy.py
Feb 20 09:23:57 np0005625203.localdomain sudo[227643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:23:57 np0005625203.localdomain python3.9[227645]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771579436.7565084-3595-256804606692241/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:23:57 np0005625203.localdomain sudo[227643]: pam_unix(sudo:session): session closed for user root
Feb 20 09:23:57 np0005625203.localdomain sudo[227698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wtfmvyemljqyhegfhxqxubqosxjiuuer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579436.7565084-3595-256804606692241/AnsiballZ_systemd.py
Feb 20 09:23:57 np0005625203.localdomain sudo[227698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:23:57 np0005625203.localdomain python3.9[227700]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 20 09:23:57 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:23:58 np0005625203.localdomain systemd-rc-local-generator[227724]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:23:58 np0005625203.localdomain systemd-sysv-generator[227729]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:23:58 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:23:58 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 09:23:58 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:23:58 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:23:58 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:23:58 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 09:23:58 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:23:58 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:23:58 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:23:58 np0005625203.localdomain sudo[227698]: pam_unix(sudo:session): session closed for user root
Feb 20 09:23:58 np0005625203.localdomain sudo[227789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xihhmdtcvzqxchotinjfhnyhwchowujz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579436.7565084-3595-256804606692241/AnsiballZ_systemd.py
Feb 20 09:23:58 np0005625203.localdomain sudo[227789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:23:58 np0005625203.localdomain python3.9[227791]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:23:58 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47595 DF PROTO=TCP SPT=54584 DPT=9882 SEQ=4143302830 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FFF2AD0000000001030307) 
Feb 20 09:23:59 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:24:00 np0005625203.localdomain systemd-sysv-generator[227824]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:24:00 np0005625203.localdomain systemd-rc-local-generator[227821]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:24:00 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:24:00 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 09:24:00 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:24:00 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:24:00 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:24:00 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 09:24:00 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:24:00 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:24:00 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:24:00 np0005625203.localdomain systemd[1]: Starting nova_compute container...
Feb 20 09:24:00 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:24:00 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f3e0db24b6063b24bb4df15bb988811b5bcf9e6789b213c4536d36feae343ea/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Feb 20 09:24:00 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f3e0db24b6063b24bb4df15bb988811b5bcf9e6789b213c4536d36feae343ea/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Feb 20 09:24:00 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f3e0db24b6063b24bb4df15bb988811b5bcf9e6789b213c4536d36feae343ea/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Feb 20 09:24:00 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f3e0db24b6063b24bb4df15bb988811b5bcf9e6789b213c4536d36feae343ea/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Feb 20 09:24:00 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f3e0db24b6063b24bb4df15bb988811b5bcf9e6789b213c4536d36feae343ea/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Feb 20 09:24:00 np0005625203.localdomain podman[227833]: 2026-02-20 09:24:00.4143965 +0000 UTC m=+0.111461980 container init 6e8885cc5a65c95462bf5deb083556a1dda2bba1c8e7634587477c8f801eac9f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=nova_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-30a66d126d7b377a2bbf9bc0d57e51bdf55bb85eee6d379d83e3d3aa2e3d7293-ceca6e0a7e6c49fe059956dd2c05951172f310c281831cfd96e02226e564c84f'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 20 09:24:00 np0005625203.localdomain podman[227833]: 2026-02-20 09:24:00.423970983 +0000 UTC m=+0.121036463 container start 6e8885cc5a65c95462bf5deb083556a1dda2bba1c8e7634587477c8f801eac9f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-30a66d126d7b377a2bbf9bc0d57e51bdf55bb85eee6d379d83e3d3aa2e3d7293-ceca6e0a7e6c49fe059956dd2c05951172f310c281831cfd96e02226e564c84f'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 20 09:24:00 np0005625203.localdomain podman[227833]: nova_compute
Feb 20 09:24:00 np0005625203.localdomain nova_compute[227848]: + sudo -E kolla_set_configs
Feb 20 09:24:00 np0005625203.localdomain systemd[1]: Started nova_compute container.
Feb 20 09:24:00 np0005625203.localdomain sudo[227789]: pam_unix(sudo:session): session closed for user root
Feb 20 09:24:00 np0005625203.localdomain nova_compute[227848]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb 20 09:24:00 np0005625203.localdomain nova_compute[227848]: INFO:__main__:Validating config file
Feb 20 09:24:00 np0005625203.localdomain nova_compute[227848]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb 20 09:24:00 np0005625203.localdomain nova_compute[227848]: INFO:__main__:Copying service configuration files
Feb 20 09:24:00 np0005625203.localdomain nova_compute[227848]: INFO:__main__:Deleting /etc/nova/nova.conf
Feb 20 09:24:00 np0005625203.localdomain nova_compute[227848]: INFO:__main__:Copying /var/lib/kolla/config_files/src/nova-blank.conf to /etc/nova/nova.conf
Feb 20 09:24:00 np0005625203.localdomain nova_compute[227848]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Feb 20 09:24:00 np0005625203.localdomain nova_compute[227848]: INFO:__main__:Copying /var/lib/kolla/config_files/src/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Feb 20 09:24:00 np0005625203.localdomain nova_compute[227848]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Feb 20 09:24:00 np0005625203.localdomain nova_compute[227848]: INFO:__main__:Copying /var/lib/kolla/config_files/src/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb 20 09:24:00 np0005625203.localdomain nova_compute[227848]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb 20 09:24:00 np0005625203.localdomain nova_compute[227848]: INFO:__main__:Copying /var/lib/kolla/config_files/src/99-nova-compute-cells-workarounds.conf to /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Feb 20 09:24:00 np0005625203.localdomain nova_compute[227848]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Feb 20 09:24:00 np0005625203.localdomain nova_compute[227848]: INFO:__main__:Copying /var/lib/kolla/config_files/src/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Feb 20 09:24:00 np0005625203.localdomain nova_compute[227848]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Feb 20 09:24:00 np0005625203.localdomain nova_compute[227848]: INFO:__main__:Copying /var/lib/kolla/config_files/src/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb 20 09:24:00 np0005625203.localdomain nova_compute[227848]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb 20 09:24:00 np0005625203.localdomain nova_compute[227848]: INFO:__main__:Deleting /etc/ceph
Feb 20 09:24:00 np0005625203.localdomain nova_compute[227848]: INFO:__main__:Creating directory /etc/ceph
Feb 20 09:24:00 np0005625203.localdomain nova_compute[227848]: INFO:__main__:Setting permission for /etc/ceph
Feb 20 09:24:00 np0005625203.localdomain nova_compute[227848]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Feb 20 09:24:00 np0005625203.localdomain nova_compute[227848]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Feb 20 09:24:00 np0005625203.localdomain nova_compute[227848]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceph/ceph.conf to /etc/ceph/ceph.conf
Feb 20 09:24:00 np0005625203.localdomain nova_compute[227848]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Feb 20 09:24:00 np0005625203.localdomain nova_compute[227848]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Feb 20 09:24:00 np0005625203.localdomain nova_compute[227848]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb 20 09:24:00 np0005625203.localdomain nova_compute[227848]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Feb 20 09:24:00 np0005625203.localdomain nova_compute[227848]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ssh-config to /var/lib/nova/.ssh/config
Feb 20 09:24:00 np0005625203.localdomain nova_compute[227848]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb 20 09:24:00 np0005625203.localdomain nova_compute[227848]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Feb 20 09:24:00 np0005625203.localdomain nova_compute[227848]: INFO:__main__:Copying /var/lib/kolla/config_files/src/run-on-host to /usr/sbin/iscsiadm
Feb 20 09:24:00 np0005625203.localdomain nova_compute[227848]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Feb 20 09:24:00 np0005625203.localdomain nova_compute[227848]: INFO:__main__:Writing out command to execute
Feb 20 09:24:00 np0005625203.localdomain nova_compute[227848]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Feb 20 09:24:00 np0005625203.localdomain nova_compute[227848]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Feb 20 09:24:00 np0005625203.localdomain nova_compute[227848]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Feb 20 09:24:00 np0005625203.localdomain nova_compute[227848]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb 20 09:24:00 np0005625203.localdomain nova_compute[227848]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb 20 09:24:00 np0005625203.localdomain nova_compute[227848]: ++ cat /run_command
Feb 20 09:24:00 np0005625203.localdomain nova_compute[227848]: + CMD=nova-compute
Feb 20 09:24:00 np0005625203.localdomain nova_compute[227848]: + ARGS=
Feb 20 09:24:00 np0005625203.localdomain nova_compute[227848]: + sudo kolla_copy_cacerts
Feb 20 09:24:00 np0005625203.localdomain nova_compute[227848]: + [[ ! -n '' ]]
Feb 20 09:24:00 np0005625203.localdomain nova_compute[227848]: + . kolla_extend_start
Feb 20 09:24:00 np0005625203.localdomain nova_compute[227848]: + echo 'Running command: '\''nova-compute'\'''
Feb 20 09:24:00 np0005625203.localdomain nova_compute[227848]: Running command: 'nova-compute'
Feb 20 09:24:00 np0005625203.localdomain nova_compute[227848]: + umask 0022
Feb 20 09:24:00 np0005625203.localdomain nova_compute[227848]: + exec nova-compute
Feb 20 09:24:01 np0005625203.localdomain python3.9[227967]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb 20 09:24:02 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47597 DF PROTO=TCP SPT=54584 DPT=9882 SEQ=4143302830 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AE9FFFEC10000000001030307) 
Feb 20 09:24:02 np0005625203.localdomain sudo[228078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vjpguppblcplyhpbnoevdprnblkrmqsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579441.9342146-3730-93717199814373/AnsiballZ_stat.py
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.224 227852 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Feb 20 09:24:02 np0005625203.localdomain sudo[228078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.225 227852 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.225 227852 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.225 227852 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.339 227852 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.360 227852 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.360 227852 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Feb 20 09:24:02 np0005625203.localdomain python3.9[228080]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:24:02 np0005625203.localdomain sudo[228078]: pam_unix(sudo:session): session closed for user root
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.753 227852 INFO nova.virt.driver [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Feb 20 09:24:02 np0005625203.localdomain sudo[228170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bwigrohftrxeegwqjijkbvfrykswcmld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579441.9342146-3730-93717199814373/AnsiballZ_copy.py
Feb 20 09:24:02 np0005625203.localdomain sudo[228170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.870 227852 INFO nova.compute.provider_config [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.878 227852 WARNING nova.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.: nova.exception.TooOldComputeService: Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.879 227852 DEBUG oslo_concurrency.lockutils [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.879 227852 DEBUG oslo_concurrency.lockutils [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.879 227852 DEBUG oslo_concurrency.lockutils [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.879 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.880 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.880 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.880 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.880 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.880 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.880 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.881 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.881 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.881 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.881 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.881 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.881 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.881 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.882 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.882 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.882 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.882 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.882 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.882 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] console_host                   = np0005625203.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.882 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.883 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.883 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.883 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.883 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.883 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.883 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.883 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.884 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.884 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.884 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.885 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.886 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.886 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.886 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.887 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.887 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.887 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.888 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] host                           = np0005625203.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.888 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.888 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.889 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.889 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.889 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.889 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.889 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.890 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.890 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.890 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.890 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.890 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.891 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.891 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.891 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.891 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.891 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.891 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.892 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.892 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.892 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.892 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.892 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.893 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.893 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.893 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.893 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.893 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.893 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.894 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.894 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.894 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.894 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.894 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.895 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.895 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.895 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.895 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.895 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.895 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.896 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.896 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] my_block_storage_ip            = 192.168.122.107 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.896 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] my_ip                          = 192.168.122.107 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.896 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.896 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.897 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.897 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.897 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.897 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.897 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.898 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.898 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.898 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.898 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.898 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.898 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.899 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.899 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.899 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.899 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.899 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.900 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.900 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.900 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.900 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.900 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.900 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.901 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.901 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.901 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.901 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.901 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.901 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.901 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.901 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.902 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.902 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.902 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.902 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.902 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.902 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.902 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.903 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.903 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.903 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.903 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.903 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.903 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.903 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.903 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.904 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.904 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.904 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.904 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.904 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.904 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.904 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.905 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.905 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.905 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.905 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.905 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.905 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.905 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.905 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.906 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.906 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.906 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.906 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.906 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.906 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.906 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.907 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.907 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.907 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.907 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.907 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.907 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.907 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.908 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.908 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.908 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.908 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.908 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.908 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.908 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.908 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.909 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.909 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.909 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.909 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.909 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.909 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.909 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.910 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.910 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.910 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.910 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.910 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.910 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.910 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.911 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.911 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.911 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.911 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.911 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.911 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.911 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.911 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.912 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.912 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.912 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.912 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.912 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.912 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.912 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.913 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.913 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.913 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.913 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.913 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.913 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.913 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.913 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.914 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.914 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.914 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.914 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.914 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.914 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.914 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.915 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.915 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.915 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.915 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.915 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.915 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.915 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.916 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.916 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.916 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.916 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.916 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.916 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cinder.os_region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.916 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.917 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.917 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.917 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.917 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.917 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.917 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.917 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.918 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.918 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.918 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.918 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.918 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.918 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.918 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.919 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.919 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.919 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.919 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.919 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.919 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.919 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.919 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.920 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.920 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.920 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.920 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.920 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.920 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.920 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.921 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.921 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.921 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.921 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.921 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.921 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.921 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.921 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.922 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.922 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.922 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.922 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.922 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.922 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.922 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.923 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.923 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.923 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.923 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.923 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.923 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.923 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.923 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.924 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.924 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.924 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.924 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.924 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.924 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.924 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.925 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.925 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.925 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.925 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.925 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.925 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.925 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.926 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.926 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.926 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.926 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.926 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.926 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.926 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.926 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.927 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.927 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.927 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.927 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.927 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.927 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.927 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.927 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.928 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.928 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.928 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.928 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.928 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.928 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.928 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.929 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.929 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.929 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.929 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.929 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.929 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.929 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.929 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.930 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.930 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.930 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.930 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.930 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.930 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.930 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.931 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.931 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.931 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.931 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.931 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.931 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.931 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.931 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.932 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.932 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.932 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.932 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.932 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.932 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.932 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.933 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.933 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.933 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.933 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.933 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.933 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.933 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.934 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.934 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.934 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.934 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.934 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.934 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.934 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.935 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.935 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.935 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.935 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.935 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.935 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.935 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.936 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.936 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.936 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.936 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.936 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.936 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.936 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.937 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.937 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.937 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.937 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.937 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.937 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.937 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.937 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.938 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.938 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.938 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.938 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.938 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.938 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.938 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.938 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.939 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.939 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.939 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.939 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.939 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.939 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.939 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.940 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] barbican.barbican_region_name  = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.940 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.940 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.940 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.940 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.940 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.940 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.941 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.941 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.941 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.941 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.941 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.941 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.941 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.941 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.942 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.942 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.942 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.942 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.942 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.942 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.942 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.942 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.943 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.943 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.943 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.943 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.943 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.943 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.943 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.944 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.944 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.944 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.944 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.944 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.944 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.944 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.945 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.945 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.945 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.945 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.945 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.945 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.945 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.945 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.946 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.946 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.946 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.946 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.946 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.946 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.946 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.946 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.947 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.947 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.947 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.947 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.947 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.947 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.947 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.948 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.948 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.948 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.948 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.948 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.948 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.948 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.949 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.949 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.949 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.949 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.949 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.949 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.949 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.950 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.950 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.950 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.950 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.950 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.950 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.950 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.950 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.951 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.951 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.951 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.951 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.951 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.951 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.951 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.952 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.952 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.952 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.952 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.952 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.952 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.952 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.952 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.953 227852 WARNING oslo_config.cfg [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: live_migration_uri is deprecated for removal in favor of two other options that
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: and ``live_migration_inbound_addr`` respectively.
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: ).  Its value may be silently ignored in the future.
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.953 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.live_migration_uri     = qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.953 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.live_migration_with_native_tls = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.953 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.953 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.953 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.954 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.954 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.954 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.954 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.954 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.954 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.954 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.955 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.955 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.955 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.955 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.955 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.955 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.955 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.rbd_secret_uuid        = a8557ee9-b55d-5519-942c-cf8f6172f1d8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.956 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.956 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.956 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.956 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.956 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.956 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.956 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.956 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.957 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.957 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.957 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.957 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.957 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.957 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.957 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.958 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.958 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.958 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.958 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.958 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.958 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.958 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.959 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.959 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.959 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.959 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.959 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.959 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.959 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.959 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.960 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.960 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.960 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.960 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.960 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.960 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.960 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.961 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.961 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.961 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.961 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.961 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.961 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.961 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.962 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.962 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.962 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.962 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.962 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.962 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.962 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.962 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.963 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.963 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.963 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.963 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.963 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.963 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.963 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.963 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.964 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.964 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.964 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.964 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.964 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.964 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.964 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.965 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.965 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.965 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.965 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.965 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] placement.auth_url             = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.965 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.965 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.966 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.966 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.966 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.966 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.966 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.966 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.966 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.966 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.967 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.967 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.967 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.967 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.967 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.967 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.967 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.967 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.968 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.968 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.968 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.968 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.968 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.968 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.968 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.969 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.969 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.969 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.969 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.969 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.969 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.969 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.969 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.970 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.970 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.970 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.970 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.970 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.970 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.970 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.971 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.971 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.971 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.971 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.971 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.971 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.971 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.972 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.972 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.972 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.972 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.972 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.972 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.972 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.973 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.973 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.973 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.973 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.973 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.973 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.973 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.973 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.974 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.974 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.974 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.974 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.974 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.974 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.974 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.975 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.975 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.975 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.975 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.975 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.975 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.975 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.976 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.976 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.976 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.976 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.976 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.976 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.976 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.976 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.977 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.977 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.977 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.977 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.977 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.977 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.977 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.978 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.978 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.978 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.978 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.978 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.978 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.978 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.979 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.979 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.979 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.979 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.979 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.979 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.979 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.979 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.980 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.980 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.980 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.980 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.980 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.980 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.981 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.981 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.981 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.981 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.981 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.981 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.981 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.981 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.982 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.982 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.982 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.982 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.982 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.982 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.982 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.982 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.983 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.983 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.983 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.983 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.983 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.983 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.983 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.984 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.984 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.984 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.984 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.984 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.984 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.984 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.984 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.985 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.985 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.985 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.985 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.985 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.985 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.986 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.986 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.986 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.986 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.986 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.986 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.986 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.986 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.987 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vnc.novncproxy_base_url        = http://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.987 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.987 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.987 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.987 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vnc.server_proxyclient_address = 192.168.122.107 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.987 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.988 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.988 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.988 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] workarounds.disable_compute_service_check_for_ffu = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.988 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.988 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.988 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.988 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.989 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.989 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.989 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.989 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.989 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.989 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.989 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.989 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.990 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.990 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.990 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.990 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.990 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.990 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.990 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.991 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.991 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.991 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.991 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.991 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.991 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.991 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.991 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.992 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.992 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.992 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.992 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.992 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.992 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.992 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.993 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.993 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.993 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.993 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.993 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.993 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.993 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.994 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.994 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.994 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.994 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.994 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.994 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.994 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.994 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.995 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.995 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.995 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.995 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.995 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.995 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.995 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.996 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.996 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.996 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.996 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.996 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.996 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.996 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.996 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.997 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.997 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.997 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.997 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.997 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.997 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.997 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.998 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.998 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.998 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.998 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.998 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.998 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.998 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.999 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.999 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.999 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.999 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.999 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.999 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:02 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.999 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:02.999 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.000 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_limit.auth_url            = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.000 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.000 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.000 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.000 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.000 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.000 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.001 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.001 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.001 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.001 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.001 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.001 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.001 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.001 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.002 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.002 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.002 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.002 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.002 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.002 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.002 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.002 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.003 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.003 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.003 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.003 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.003 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.003 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.003 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.003 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.004 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.004 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.004 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.004 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.004 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.004 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.004 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.005 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.005 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.005 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.005 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.005 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.005 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.005 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.006 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.006 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain python3.9[228172]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771579441.9342146-3730-93717199814373/.source.yaml _original_basename=.pv52uuc0 follow=False checksum=a8e9a640ed2d11815875c8a03dd8e15172eb268a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.006 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.006 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.006 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.006 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.006 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.006 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.007 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.007 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.007 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.007 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.007 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.007 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.007 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.008 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.008 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.008 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.008 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.008 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.008 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.008 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.009 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.009 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.009 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.009 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.009 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.009 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.009 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.009 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.010 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.010 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.010 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.010 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.010 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.010 227852 DEBUG oslo_service.service [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.011 227852 INFO nova.service [-] Starting compute node (version 27.5.2-0.20260127144738.eaa65f0.el9)
Feb 20 09:24:03 np0005625203.localdomain sudo[228170]: pam_unix(sudo:session): session closed for user root
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.028 227852 INFO nova.virt.node [None req-72786067-f2d8-46dc-976a-584437415957 - - - - - -] Determined node identity e5d5157a-2df2-4f51-b5fb-cd2da3a8584e from /var/lib/nova/compute_id
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.028 227852 DEBUG nova.virt.libvirt.host [None req-72786067-f2d8-46dc-976a-584437415957 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.029 227852 DEBUG nova.virt.libvirt.host [None req-72786067-f2d8-46dc-976a-584437415957 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.029 227852 DEBUG nova.virt.libvirt.host [None req-72786067-f2d8-46dc-976a-584437415957 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.029 227852 DEBUG nova.virt.libvirt.host [None req-72786067-f2d8-46dc-976a-584437415957 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Feb 20 09:24:03 np0005625203.localdomain systemd[1]: Started libvirt QEMU daemon.
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.097 227852 DEBUG nova.virt.libvirt.host [None req-72786067-f2d8-46dc-976a-584437415957 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f0c6e688400> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.099 227852 DEBUG nova.virt.libvirt.host [None req-72786067-f2d8-46dc-976a-584437415957 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f0c6e688400> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.100 227852 INFO nova.virt.libvirt.driver [None req-72786067-f2d8-46dc-976a-584437415957 - - - - - -] Connection event '1' reason 'None'
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.113 227852 DEBUG nova.virt.libvirt.volume.mount [None req-72786067-f2d8-46dc-976a-584437415957 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.982 227852 INFO nova.virt.libvirt.host [None req-72786067-f2d8-46dc-976a-584437415957 - - - - - -] Libvirt host capabilities <capabilities>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:   <host>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:     <uuid>a53ba227-4db8-45ed-bb70-5a295cbaca1c</uuid>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:     <cpu>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <arch>x86_64</arch>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <model>EPYC-Rome-v4</model>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <vendor>AMD</vendor>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <microcode version='16777317'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <signature family='23' model='49' stepping='0'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <maxphysaddr mode='emulate' bits='40'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <feature name='x2apic'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <feature name='tsc-deadline'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <feature name='osxsave'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <feature name='hypervisor'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <feature name='tsc_adjust'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <feature name='spec-ctrl'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <feature name='stibp'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <feature name='arch-capabilities'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <feature name='ssbd'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <feature name='cmp_legacy'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <feature name='topoext'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <feature name='virt-ssbd'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <feature name='lbrv'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <feature name='tsc-scale'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <feature name='vmcb-clean'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <feature name='pause-filter'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <feature name='pfthreshold'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <feature name='svme-addr-chk'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <feature name='rdctl-no'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <feature name='skip-l1dfl-vmentry'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <feature name='mds-no'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <feature name='pschange-mc-no'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <pages unit='KiB' size='4'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <pages unit='KiB' size='2048'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <pages unit='KiB' size='1048576'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:     </cpu>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:     <power_management>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <suspend_mem/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <suspend_disk/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <suspend_hybrid/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:     </power_management>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:     <iommu support='no'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:     <migration_features>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <live/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <uri_transports>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:         <uri_transport>tcp</uri_transport>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:         <uri_transport>rdma</uri_transport>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       </uri_transports>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:     </migration_features>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:     <topology>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <cells num='1'>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:         <cell id='0'>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:           <memory unit='KiB'>16116612</memory>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:           <pages unit='KiB' size='4'>4029153</pages>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:           <pages unit='KiB' size='2048'>0</pages>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:           <pages unit='KiB' size='1048576'>0</pages>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:           <distances>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:             <sibling id='0' value='10'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:           </distances>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:           <cpus num='8'>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:           </cpus>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:         </cell>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       </cells>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:     </topology>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:     <cache>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:     </cache>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:     <secmodel>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <model>selinux</model>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <doi>0</doi>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:     </secmodel>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:     <secmodel>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <model>dac</model>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <doi>0</doi>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <baselabel type='kvm'>+107:+107</baselabel>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <baselabel type='qemu'>+107:+107</baselabel>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:     </secmodel>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:   </host>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:   <guest>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:     <os_type>hvm</os_type>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:     <arch name='i686'>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <wordsize>32</wordsize>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <domain type='qemu'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <domain type='kvm'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:     </arch>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:     <features>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <pae/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <nonpae/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <acpi default='on' toggle='yes'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <apic default='on' toggle='no'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <cpuselection/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <deviceboot/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <disksnapshot default='on' toggle='no'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <externalSnapshot/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:     </features>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:   </guest>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:   <guest>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:     <os_type>hvm</os_type>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:     <arch name='x86_64'>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <wordsize>64</wordsize>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <domain type='qemu'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <domain type='kvm'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:     </arch>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:     <features>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <acpi default='on' toggle='yes'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <apic default='on' toggle='no'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <cpuselection/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <deviceboot/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <disksnapshot default='on' toggle='no'/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:       <externalSnapshot/>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:     </features>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]:   </guest>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: </capabilities>
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 
Feb 20 09:24:03 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:03.995 227852 DEBUG nova.virt.libvirt.host [None req-72786067-f2d8-46dc-976a-584437415957 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:04.013 227852 DEBUG nova.virt.libvirt.host [None req-72786067-f2d8-46dc-976a-584437415957 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]: <domainCapabilities>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   <path>/usr/libexec/qemu-kvm</path>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   <domain>kvm</domain>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   <machine>pc-q35-rhel9.8.0</machine>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   <arch>i686</arch>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   <vcpu max='1024'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   <iothreads supported='yes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   <os supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <enum name='firmware'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <loader supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='type'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>rom</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>pflash</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='readonly'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>yes</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>no</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='secure'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>no</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </loader>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   </os>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   <cpu>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <mode name='host-passthrough' supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='hostPassthroughMigratable'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>on</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>off</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </mode>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <mode name='maximum' supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='maximumMigratable'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>on</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>off</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </mode>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <mode name='host-model' supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model fallback='forbid'>EPYC-Rome</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <vendor>AMD</vendor>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <maxphysaddr mode='passthrough' limit='40'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='x2apic'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='tsc-deadline'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='hypervisor'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='tsc_adjust'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='spec-ctrl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='stibp'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='ssbd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='cmp_legacy'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='overflow-recov'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='succor'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='ibrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='amd-ssbd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='virt-ssbd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='lbrv'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='tsc-scale'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='vmcb-clean'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='pause-filter'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='pfthreshold'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='svme-addr-chk'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='lfence-always-serializing'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='disable' name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </mode>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <mode name='custom' supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Broadwell'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Broadwell-IBRS'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Broadwell-noTSX'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Broadwell-noTSX-IBRS'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Broadwell-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Broadwell-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Broadwell-v3'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Broadwell-v4'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Cascadelake-Server'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Cascadelake-Server-noTSX'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Cascadelake-Server-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Cascadelake-Server-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Cascadelake-Server-v3'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Cascadelake-Server-v4'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Cascadelake-Server-v5'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='ClearwaterForest'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-ne-convert'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni-int16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni-int8'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bhi-ctrl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bhi-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cldemote'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cmpccxadd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ddpd-u'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fbsdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gds-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='intel-psfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ipred-ctrl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='lam'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='mcdt-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdir64b'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdiri'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pbrsb-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='prefetchiti'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='psdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rfds-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rrsba-ctrl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='serialize'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sha512'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sm3'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sm4'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='ClearwaterForest-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-ne-convert'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni-int16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni-int8'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bhi-ctrl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bhi-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cldemote'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cmpccxadd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ddpd-u'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fbsdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gds-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='intel-psfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ipred-ctrl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='lam'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='mcdt-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdir64b'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdiri'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pbrsb-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='prefetchiti'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='psdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rfds-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rrsba-ctrl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='serialize'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sha512'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sm3'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sm4'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Cooperlake'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Cooperlake-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Cooperlake-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Denverton'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='mpx'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Denverton-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='mpx'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Denverton-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Denverton-v3'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Dhyana-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-Genoa'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amd-psfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='auto-ibrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='no-nested-data-bp'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='null-sel-clr-base'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='stibp-always-on'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-Genoa-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amd-psfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='auto-ibrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='no-nested-data-bp'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='null-sel-clr-base'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='stibp-always-on'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-Genoa-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amd-psfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='auto-ibrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fs-gs-base-ns'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='no-nested-data-bp'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='null-sel-clr-base'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='perfmon-v2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='stibp-always-on'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-Milan'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-Milan-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-Milan-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amd-psfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='no-nested-data-bp'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='null-sel-clr-base'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='stibp-always-on'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-Milan-v3'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amd-psfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='no-nested-data-bp'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='null-sel-clr-base'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='stibp-always-on'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-Rome'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-Rome-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-Rome-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-Rome-v3'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-Turin'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amd-psfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='auto-ibrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vp2intersect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fs-gs-base-ns'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibpb-brtype'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdir64b'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdiri'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='no-nested-data-bp'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='null-sel-clr-base'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='perfmon-v2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='prefetchi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sbpb'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='srso-user-kernel-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='stibp-always-on'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-Turin-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amd-psfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='auto-ibrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vp2intersect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fs-gs-base-ns'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibpb-brtype'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdir64b'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdiri'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='no-nested-data-bp'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='null-sel-clr-base'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='perfmon-v2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='prefetchi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sbpb'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='srso-user-kernel-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='stibp-always-on'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-v3'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-v4'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-v5'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='GraniteRapids'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-fp16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-int8'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-tile'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-fp16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fbsdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrc'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fzrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='mcdt-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pbrsb-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='prefetchiti'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='psdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='serialize'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='GraniteRapids-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-fp16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-int8'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-tile'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-fp16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fbsdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrc'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fzrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='mcdt-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pbrsb-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='prefetchiti'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='psdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='serialize'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='GraniteRapids-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-fp16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-int8'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-tile'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx10'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx10-128'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx10-256'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx10-512'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-fp16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cldemote'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fbsdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrc'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fzrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='mcdt-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdir64b'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdiri'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pbrsb-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='prefetchiti'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='psdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='serialize'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='GraniteRapids-v3'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-fp16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-int8'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-tile'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx10'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx10-128'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx10-256'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx10-512'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-fp16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cldemote'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fbsdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrc'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fzrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='mcdt-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdir64b'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdiri'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pbrsb-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='prefetchiti'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='psdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='serialize'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Haswell'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Haswell-IBRS'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Haswell-noTSX'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Haswell-noTSX-IBRS'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Haswell-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Haswell-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Haswell-v3'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Haswell-v4'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Icelake-Server'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Icelake-Server-noTSX'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Icelake-Server-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Icelake-Server-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Icelake-Server-v3'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Icelake-Server-v4'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Icelake-Server-v5'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Icelake-Server-v6'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Icelake-Server-v7'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='IvyBridge'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='IvyBridge-IBRS'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='IvyBridge-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='IvyBridge-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='KnightsMill'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-4fmaps'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-4vnniw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512er'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512pf'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='KnightsMill-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-4fmaps'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-4vnniw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512er'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512pf'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Opteron_G4'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fma4'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xop'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Opteron_G4-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fma4'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xop'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Opteron_G5'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fma4'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='tbm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xop'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Opteron_G5-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fma4'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='tbm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xop'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='SapphireRapids'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-int8'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-tile'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-fp16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrc'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fzrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='serialize'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='SapphireRapids-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-int8'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-tile'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-fp16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrc'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fzrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='serialize'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='SapphireRapids-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-int8'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-tile'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-fp16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fbsdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrc'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fzrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='psdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='serialize'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='SapphireRapids-v3'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-int8'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-tile'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-fp16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cldemote'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fbsdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrc'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fzrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdir64b'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdiri'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='psdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='serialize'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='SapphireRapids-v4'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-int8'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-tile'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-fp16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cldemote'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fbsdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrc'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fzrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdir64b'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdiri'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='psdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='serialize'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='SierraForest'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-ne-convert'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni-int8'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cmpccxadd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fbsdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='mcdt-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pbrsb-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='psdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='serialize'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='SierraForest-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-ne-convert'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni-int8'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cmpccxadd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fbsdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='mcdt-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pbrsb-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='psdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='serialize'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='SierraForest-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-ne-convert'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni-int8'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bhi-ctrl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cldemote'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cmpccxadd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fbsdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gds-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='intel-psfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ipred-ctrl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='lam'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='mcdt-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdir64b'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdiri'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pbrsb-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='psdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rfds-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rrsba-ctrl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='serialize'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='SierraForest-v3'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-ne-convert'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni-int8'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bhi-ctrl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cldemote'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cmpccxadd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fbsdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gds-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='intel-psfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ipred-ctrl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='lam'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='mcdt-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdir64b'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdiri'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pbrsb-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='psdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rfds-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rrsba-ctrl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='serialize'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Skylake-Client'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Skylake-Client-IBRS'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Skylake-Client-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Skylake-Client-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Skylake-Client-v3'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Skylake-Client-v4'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Skylake-Server'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Skylake-Server-IBRS'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Skylake-Server-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Skylake-Server-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Skylake-Server-v3'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Skylake-Server-v4'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Skylake-Server-v5'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Snowridge'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cldemote'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='core-capability'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdir64b'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdiri'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='mpx'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='split-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Snowridge-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cldemote'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='core-capability'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdir64b'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdiri'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='mpx'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='split-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Snowridge-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cldemote'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='core-capability'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdir64b'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdiri'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='split-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Snowridge-v3'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cldemote'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='core-capability'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdir64b'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdiri'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='split-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Snowridge-v4'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cldemote'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdir64b'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdiri'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='athlon'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='3dnow'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='3dnowext'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='athlon-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='3dnow'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='3dnowext'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='core2duo'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='core2duo-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='coreduo'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='coreduo-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='n270'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='n270-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='phenom'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='3dnow'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='3dnowext'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='phenom-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='3dnow'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='3dnowext'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </mode>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   </cpu>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   <memoryBacking supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <enum name='sourceType'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <value>file</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <value>anonymous</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <value>memfd</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   </memoryBacking>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   <devices>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <disk supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='diskDevice'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>disk</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>cdrom</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>floppy</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>lun</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='bus'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>fdc</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>scsi</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>virtio</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>usb</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>sata</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='model'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>virtio</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>virtio-transitional</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>virtio-non-transitional</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </disk>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <graphics supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='type'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>vnc</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>egl-headless</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>dbus</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </graphics>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <video supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='modelType'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>vga</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>cirrus</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>virtio</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>none</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>bochs</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>ramfb</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </video>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <hostdev supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='mode'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>subsystem</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='startupPolicy'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>default</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>mandatory</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>requisite</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>optional</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='subsysType'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>usb</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>pci</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>scsi</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='capsType'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='pciBackend'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </hostdev>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <rng supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='model'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>virtio</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>virtio-transitional</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>virtio-non-transitional</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='backendModel'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>random</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>egd</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>builtin</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </rng>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <filesystem supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='driverType'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>path</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>handle</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>virtiofs</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </filesystem>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <tpm supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='model'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>tpm-tis</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>tpm-crb</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='backendModel'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>emulator</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>external</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='backendVersion'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>2.0</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </tpm>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <redirdev supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='bus'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>usb</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </redirdev>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <channel supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='type'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>pty</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>unix</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </channel>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <crypto supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='model'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='type'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>qemu</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='backendModel'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>builtin</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </crypto>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <interface supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='backendType'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>default</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>passt</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </interface>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <panic supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='model'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>isa</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>hyperv</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </panic>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <console supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='type'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>null</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>vc</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>pty</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>dev</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>file</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>pipe</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>stdio</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>udp</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>tcp</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>unix</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>qemu-vdagent</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>dbus</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </console>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   </devices>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   <features>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <gic supported='no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <vmcoreinfo supported='yes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <genid supported='yes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <backingStoreInput supported='yes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <backup supported='yes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <async-teardown supported='yes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <s390-pv supported='no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <ps2 supported='yes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <tdx supported='no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <sev supported='no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <sgx supported='no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <hyperv supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='features'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>relaxed</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>vapic</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>spinlocks</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>vpindex</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>runtime</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>synic</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>stimer</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>reset</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>vendor_id</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>frequencies</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>reenlightenment</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>tlbflush</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>ipi</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>avic</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>emsr_bitmap</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>xmm_input</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <defaults>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <spinlocks>4095</spinlocks>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <stimer_direct>on</stimer_direct>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <tlbflush_direct>off</tlbflush_direct>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <tlbflush_extended>off</tlbflush_extended>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <vendor_id>Linux KVM Hv</vendor_id>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </defaults>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </hyperv>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <launchSecurity supported='no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   </features>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]: </domainCapabilities>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:04.022 227852 DEBUG nova.virt.libvirt.host [None req-72786067-f2d8-46dc-976a-584437415957 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]: <domainCapabilities>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   <path>/usr/libexec/qemu-kvm</path>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   <domain>kvm</domain>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   <machine>pc-i440fx-rhel7.6.0</machine>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   <arch>i686</arch>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   <vcpu max='240'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   <iothreads supported='yes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   <os supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <enum name='firmware'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <loader supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='type'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>rom</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>pflash</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='readonly'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>yes</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>no</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='secure'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>no</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </loader>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   </os>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   <cpu>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <mode name='host-passthrough' supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='hostPassthroughMigratable'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>on</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>off</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </mode>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <mode name='maximum' supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='maximumMigratable'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>on</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>off</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </mode>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <mode name='host-model' supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model fallback='forbid'>EPYC-Rome</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <vendor>AMD</vendor>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <maxphysaddr mode='passthrough' limit='40'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='x2apic'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='tsc-deadline'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='hypervisor'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='tsc_adjust'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='spec-ctrl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='stibp'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='ssbd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='cmp_legacy'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='overflow-recov'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='succor'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='ibrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='amd-ssbd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='virt-ssbd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='lbrv'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='tsc-scale'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='vmcb-clean'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='pause-filter'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='pfthreshold'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='svme-addr-chk'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='lfence-always-serializing'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='disable' name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </mode>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <mode name='custom' supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Broadwell'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Broadwell-IBRS'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Broadwell-noTSX'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Broadwell-noTSX-IBRS'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Broadwell-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Broadwell-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Broadwell-v3'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Broadwell-v4'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Cascadelake-Server'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Cascadelake-Server-noTSX'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Cascadelake-Server-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Cascadelake-Server-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Cascadelake-Server-v3'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Cascadelake-Server-v4'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Cascadelake-Server-v5'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='ClearwaterForest'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-ne-convert'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni-int16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni-int8'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bhi-ctrl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bhi-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cldemote'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cmpccxadd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ddpd-u'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fbsdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gds-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='intel-psfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ipred-ctrl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='lam'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='mcdt-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdir64b'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdiri'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pbrsb-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='prefetchiti'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='psdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rfds-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rrsba-ctrl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='serialize'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sha512'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sm3'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sm4'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='ClearwaterForest-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-ne-convert'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni-int16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni-int8'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bhi-ctrl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bhi-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cldemote'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cmpccxadd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ddpd-u'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fbsdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gds-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='intel-psfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ipred-ctrl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='lam'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='mcdt-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdir64b'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdiri'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pbrsb-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='prefetchiti'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='psdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rfds-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rrsba-ctrl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='serialize'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sha512'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sm3'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sm4'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Cooperlake'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Cooperlake-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Cooperlake-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Denverton'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='mpx'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Denverton-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='mpx'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Denverton-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Denverton-v3'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Dhyana-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-Genoa'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amd-psfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='auto-ibrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='no-nested-data-bp'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='null-sel-clr-base'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='stibp-always-on'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-Genoa-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amd-psfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='auto-ibrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='no-nested-data-bp'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='null-sel-clr-base'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='stibp-always-on'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-Genoa-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amd-psfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='auto-ibrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fs-gs-base-ns'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='no-nested-data-bp'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='null-sel-clr-base'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='perfmon-v2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='stibp-always-on'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-Milan'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-Milan-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-Milan-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amd-psfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='no-nested-data-bp'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='null-sel-clr-base'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='stibp-always-on'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-Milan-v3'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amd-psfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='no-nested-data-bp'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='null-sel-clr-base'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='stibp-always-on'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-Rome'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-Rome-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-Rome-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-Rome-v3'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-Turin'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amd-psfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='auto-ibrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vp2intersect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fs-gs-base-ns'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibpb-brtype'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdir64b'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdiri'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='no-nested-data-bp'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='null-sel-clr-base'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='perfmon-v2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='prefetchi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sbpb'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='srso-user-kernel-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='stibp-always-on'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-Turin-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amd-psfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='auto-ibrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vp2intersect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fs-gs-base-ns'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibpb-brtype'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdir64b'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdiri'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='no-nested-data-bp'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='null-sel-clr-base'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='perfmon-v2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='prefetchi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sbpb'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='srso-user-kernel-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='stibp-always-on'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-v3'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-v4'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-v5'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='GraniteRapids'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-fp16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-int8'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-tile'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-fp16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fbsdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrc'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fzrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='mcdt-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pbrsb-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='prefetchiti'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='psdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='serialize'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='GraniteRapids-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-fp16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-int8'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-tile'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-fp16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fbsdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrc'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fzrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='mcdt-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pbrsb-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='prefetchiti'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='psdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='serialize'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='GraniteRapids-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-fp16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-int8'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-tile'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx10'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx10-128'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx10-256'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx10-512'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-fp16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cldemote'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fbsdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrc'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fzrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='mcdt-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdir64b'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdiri'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pbrsb-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='prefetchiti'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='psdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='serialize'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='GraniteRapids-v3'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-fp16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-int8'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-tile'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx10'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx10-128'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx10-256'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx10-512'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-fp16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cldemote'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fbsdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrc'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fzrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='mcdt-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdir64b'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdiri'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pbrsb-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='prefetchiti'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='psdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='serialize'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Haswell'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Haswell-IBRS'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Haswell-noTSX'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Haswell-noTSX-IBRS'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Haswell-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Haswell-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Haswell-v3'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Haswell-v4'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Icelake-Server'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Icelake-Server-noTSX'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Icelake-Server-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Icelake-Server-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Icelake-Server-v3'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Icelake-Server-v4'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Icelake-Server-v5'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Icelake-Server-v6'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Icelake-Server-v7'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='IvyBridge'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='IvyBridge-IBRS'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='IvyBridge-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='IvyBridge-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='KnightsMill'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-4fmaps'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-4vnniw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512er'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512pf'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='KnightsMill-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-4fmaps'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-4vnniw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512er'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512pf'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Opteron_G4'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fma4'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xop'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Opteron_G4-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fma4'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xop'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Opteron_G5'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fma4'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='tbm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xop'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Opteron_G5-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fma4'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='tbm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xop'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='SapphireRapids'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-int8'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-tile'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-fp16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrc'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fzrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='serialize'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='SapphireRapids-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-int8'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-tile'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-fp16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrc'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fzrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='serialize'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='SapphireRapids-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-int8'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-tile'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-fp16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fbsdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrc'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fzrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='psdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='serialize'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='SapphireRapids-v3'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-int8'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-tile'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-fp16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cldemote'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fbsdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrc'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fzrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdir64b'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdiri'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='psdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='serialize'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='SapphireRapids-v4'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-int8'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-tile'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-fp16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cldemote'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fbsdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrc'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fzrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdir64b'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdiri'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='psdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='serialize'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='SierraForest'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-ne-convert'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni-int8'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cmpccxadd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fbsdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='mcdt-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pbrsb-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='psdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='serialize'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='SierraForest-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-ne-convert'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni-int8'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cmpccxadd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fbsdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='mcdt-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pbrsb-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='psdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='serialize'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='SierraForest-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-ne-convert'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni-int8'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bhi-ctrl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cldemote'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cmpccxadd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fbsdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gds-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='intel-psfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ipred-ctrl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='lam'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='mcdt-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdir64b'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdiri'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pbrsb-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='psdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rfds-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rrsba-ctrl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='serialize'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='SierraForest-v3'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-ne-convert'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni-int8'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bhi-ctrl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cldemote'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cmpccxadd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fbsdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gds-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='intel-psfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ipred-ctrl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='lam'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='mcdt-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdir64b'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdiri'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pbrsb-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='psdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rfds-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rrsba-ctrl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='serialize'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Skylake-Client'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Skylake-Client-IBRS'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Skylake-Client-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Skylake-Client-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Skylake-Client-v3'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Skylake-Client-v4'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Skylake-Server'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Skylake-Server-IBRS'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Skylake-Server-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Skylake-Server-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Skylake-Server-v3'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Skylake-Server-v4'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Skylake-Server-v5'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Snowridge'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cldemote'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='core-capability'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdir64b'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdiri'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='mpx'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='split-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Snowridge-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cldemote'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='core-capability'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdir64b'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdiri'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='mpx'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='split-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Snowridge-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cldemote'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='core-capability'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdir64b'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdiri'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='split-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Snowridge-v3'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cldemote'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='core-capability'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdir64b'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdiri'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='split-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Snowridge-v4'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cldemote'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdir64b'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdiri'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='athlon'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='3dnow'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='3dnowext'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='athlon-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='3dnow'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='3dnowext'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='core2duo'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='core2duo-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='coreduo'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='coreduo-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='n270'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='n270-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='phenom'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='3dnow'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='3dnowext'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='phenom-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='3dnow'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='3dnowext'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </mode>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   </cpu>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   <memoryBacking supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <enum name='sourceType'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <value>file</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <value>anonymous</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <value>memfd</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   </memoryBacking>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   <devices>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <disk supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='diskDevice'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>disk</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>cdrom</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>floppy</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>lun</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='bus'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>ide</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>fdc</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>scsi</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>virtio</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>usb</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>sata</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='model'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>virtio</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>virtio-transitional</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>virtio-non-transitional</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </disk>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <graphics supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='type'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>vnc</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>egl-headless</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>dbus</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </graphics>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <video supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='modelType'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>vga</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>cirrus</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>virtio</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>none</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>bochs</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>ramfb</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </video>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <hostdev supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='mode'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>subsystem</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='startupPolicy'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>default</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>mandatory</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>requisite</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>optional</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='subsysType'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>usb</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>pci</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>scsi</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='capsType'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='pciBackend'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </hostdev>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <rng supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='model'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>virtio</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>virtio-transitional</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>virtio-non-transitional</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='backendModel'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>random</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>egd</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>builtin</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </rng>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <filesystem supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='driverType'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>path</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>handle</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>virtiofs</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </filesystem>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <tpm supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='model'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>tpm-tis</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>tpm-crb</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='backendModel'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>emulator</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>external</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='backendVersion'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>2.0</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </tpm>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <redirdev supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='bus'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>usb</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </redirdev>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <channel supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='type'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>pty</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>unix</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </channel>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <crypto supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='model'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='type'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>qemu</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='backendModel'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>builtin</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </crypto>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <interface supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='backendType'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>default</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>passt</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </interface>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <panic supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='model'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>isa</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>hyperv</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </panic>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <console supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='type'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>null</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>vc</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>pty</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>dev</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>file</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>pipe</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>stdio</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>udp</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>tcp</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>unix</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>qemu-vdagent</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>dbus</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </console>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   </devices>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   <features>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <gic supported='no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <vmcoreinfo supported='yes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <genid supported='yes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <backingStoreInput supported='yes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <backup supported='yes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <async-teardown supported='yes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <s390-pv supported='no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <ps2 supported='yes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <tdx supported='no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <sev supported='no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <sgx supported='no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <hyperv supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='features'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>relaxed</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>vapic</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>spinlocks</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>vpindex</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>runtime</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>synic</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>stimer</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>reset</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>vendor_id</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>frequencies</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>reenlightenment</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>tlbflush</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>ipi</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>avic</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>emsr_bitmap</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>xmm_input</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <defaults>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <spinlocks>4095</spinlocks>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <stimer_direct>on</stimer_direct>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <tlbflush_direct>off</tlbflush_direct>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <tlbflush_extended>off</tlbflush_extended>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <vendor_id>Linux KVM Hv</vendor_id>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </defaults>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </hyperv>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <launchSecurity supported='no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   </features>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]: </domainCapabilities>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:04.209 227852 DEBUG nova.virt.libvirt.host [None req-72786067-f2d8-46dc-976a-584437415957 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:04.213 227852 DEBUG nova.virt.libvirt.host [None req-72786067-f2d8-46dc-976a-584437415957 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]: <domainCapabilities>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   <path>/usr/libexec/qemu-kvm</path>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   <domain>kvm</domain>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   <machine>pc-q35-rhel9.8.0</machine>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   <arch>x86_64</arch>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   <vcpu max='1024'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   <iothreads supported='yes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   <os supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <enum name='firmware'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <value>efi</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <loader supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='type'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>rom</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>pflash</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='readonly'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>yes</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>no</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='secure'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>yes</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>no</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </loader>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   </os>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   <cpu>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <mode name='host-passthrough' supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='hostPassthroughMigratable'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>on</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>off</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </mode>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <mode name='maximum' supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='maximumMigratable'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>on</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>off</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </mode>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <mode name='host-model' supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model fallback='forbid'>EPYC-Rome</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <vendor>AMD</vendor>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <maxphysaddr mode='passthrough' limit='40'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='x2apic'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='tsc-deadline'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='hypervisor'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='tsc_adjust'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='spec-ctrl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='stibp'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='ssbd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='cmp_legacy'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='overflow-recov'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='succor'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='ibrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='amd-ssbd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='virt-ssbd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='lbrv'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='tsc-scale'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='vmcb-clean'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='pause-filter'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='pfthreshold'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='svme-addr-chk'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='lfence-always-serializing'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='disable' name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </mode>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <mode name='custom' supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Broadwell'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Broadwell-IBRS'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Broadwell-noTSX'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Broadwell-noTSX-IBRS'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Broadwell-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Broadwell-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Broadwell-v3'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Broadwell-v4'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Cascadelake-Server'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Cascadelake-Server-noTSX'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Cascadelake-Server-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Cascadelake-Server-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Cascadelake-Server-v3'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Cascadelake-Server-v4'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Cascadelake-Server-v5'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='ClearwaterForest'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-ne-convert'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni-int16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni-int8'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bhi-ctrl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bhi-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cldemote'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cmpccxadd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ddpd-u'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fbsdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gds-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='intel-psfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ipred-ctrl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='lam'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='mcdt-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdir64b'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdiri'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pbrsb-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='prefetchiti'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='psdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rfds-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rrsba-ctrl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='serialize'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sha512'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sm3'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sm4'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='ClearwaterForest-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-ne-convert'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni-int16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni-int8'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bhi-ctrl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bhi-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cldemote'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cmpccxadd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ddpd-u'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fbsdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gds-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='intel-psfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ipred-ctrl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='lam'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='mcdt-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdir64b'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdiri'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pbrsb-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='prefetchiti'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='psdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rfds-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rrsba-ctrl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='serialize'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sha512'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sm3'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sm4'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Cooperlake'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Cooperlake-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Cooperlake-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Denverton'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='mpx'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Denverton-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='mpx'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Denverton-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Denverton-v3'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Dhyana-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-Genoa'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amd-psfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='auto-ibrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='no-nested-data-bp'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='null-sel-clr-base'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='stibp-always-on'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-Genoa-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amd-psfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='auto-ibrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='no-nested-data-bp'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='null-sel-clr-base'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='stibp-always-on'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-Genoa-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amd-psfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='auto-ibrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fs-gs-base-ns'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='no-nested-data-bp'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='null-sel-clr-base'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='perfmon-v2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='stibp-always-on'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-Milan'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-Milan-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-Milan-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amd-psfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='no-nested-data-bp'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='null-sel-clr-base'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='stibp-always-on'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-Milan-v3'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amd-psfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='no-nested-data-bp'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='null-sel-clr-base'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='stibp-always-on'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-Rome'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-Rome-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-Rome-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-Rome-v3'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-Turin'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amd-psfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='auto-ibrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vp2intersect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fs-gs-base-ns'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibpb-brtype'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdir64b'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdiri'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='no-nested-data-bp'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='null-sel-clr-base'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='perfmon-v2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='prefetchi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sbpb'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='srso-user-kernel-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='stibp-always-on'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-Turin-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amd-psfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='auto-ibrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vp2intersect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fs-gs-base-ns'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibpb-brtype'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdir64b'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdiri'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='no-nested-data-bp'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='null-sel-clr-base'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='perfmon-v2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='prefetchi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sbpb'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='srso-user-kernel-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='stibp-always-on'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-v3'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-v4'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-v5'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='GraniteRapids'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-fp16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-int8'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-tile'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-fp16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fbsdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrc'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fzrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='mcdt-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pbrsb-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='prefetchiti'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='psdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='serialize'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='GraniteRapids-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-fp16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-int8'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-tile'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-fp16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fbsdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrc'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fzrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='mcdt-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pbrsb-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='prefetchiti'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='psdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='serialize'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='GraniteRapids-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-fp16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-int8'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-tile'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx10'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx10-128'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx10-256'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx10-512'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-fp16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cldemote'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fbsdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrc'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fzrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='mcdt-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdir64b'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdiri'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pbrsb-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='prefetchiti'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='psdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='serialize'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='GraniteRapids-v3'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-fp16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-int8'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-tile'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx10'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx10-128'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx10-256'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx10-512'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-fp16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cldemote'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fbsdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrc'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fzrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='mcdt-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdir64b'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdiri'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pbrsb-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='prefetchiti'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='psdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='serialize'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Haswell'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Haswell-IBRS'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Haswell-noTSX'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Haswell-noTSX-IBRS'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Haswell-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Haswell-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Haswell-v3'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Haswell-v4'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Icelake-Server'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Icelake-Server-noTSX'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Icelake-Server-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Icelake-Server-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Icelake-Server-v3'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Icelake-Server-v4'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Icelake-Server-v5'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Icelake-Server-v6'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Icelake-Server-v7'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='IvyBridge'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='IvyBridge-IBRS'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='IvyBridge-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='IvyBridge-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='KnightsMill'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-4fmaps'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-4vnniw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512er'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512pf'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='KnightsMill-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-4fmaps'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-4vnniw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512er'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512pf'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Opteron_G4'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fma4'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xop'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Opteron_G4-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fma4'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xop'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Opteron_G5'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fma4'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='tbm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xop'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Opteron_G5-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fma4'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='tbm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xop'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='SapphireRapids'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-int8'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-tile'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-fp16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrc'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fzrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='serialize'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='SapphireRapids-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-int8'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-tile'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-fp16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrc'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fzrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='serialize'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='SapphireRapids-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-int8'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-tile'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-fp16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fbsdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrc'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fzrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='psdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='serialize'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='SapphireRapids-v3'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-int8'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-tile'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-fp16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cldemote'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fbsdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrc'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fzrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdir64b'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdiri'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='psdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='serialize'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='SapphireRapids-v4'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-int8'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-tile'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-fp16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cldemote'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fbsdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrc'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fzrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdir64b'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdiri'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='psdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='serialize'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='SierraForest'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-ne-convert'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni-int8'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cmpccxadd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fbsdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='mcdt-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pbrsb-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='psdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='serialize'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='SierraForest-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-ne-convert'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni-int8'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cmpccxadd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fbsdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='mcdt-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pbrsb-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='psdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='serialize'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='SierraForest-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-ne-convert'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni-int8'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bhi-ctrl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cldemote'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cmpccxadd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fbsdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gds-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='intel-psfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ipred-ctrl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='lam'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='mcdt-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdir64b'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdiri'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pbrsb-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='psdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rfds-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rrsba-ctrl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='serialize'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='SierraForest-v3'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-ne-convert'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni-int8'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bhi-ctrl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cldemote'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cmpccxadd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fbsdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gds-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='intel-psfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ipred-ctrl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='lam'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='mcdt-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdir64b'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdiri'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pbrsb-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='psdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rfds-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rrsba-ctrl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='serialize'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Skylake-Client'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Skylake-Client-IBRS'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Skylake-Client-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Skylake-Client-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Skylake-Client-v3'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Skylake-Client-v4'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Skylake-Server'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Skylake-Server-IBRS'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Skylake-Server-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Skylake-Server-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Skylake-Server-v3'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Skylake-Server-v4'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Skylake-Server-v5'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Snowridge'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cldemote'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='core-capability'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdir64b'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdiri'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='mpx'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='split-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Snowridge-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cldemote'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='core-capability'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdir64b'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdiri'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='mpx'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='split-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Snowridge-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cldemote'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='core-capability'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdir64b'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdiri'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='split-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Snowridge-v3'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cldemote'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='core-capability'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdir64b'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdiri'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='split-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Snowridge-v4'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cldemote'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdir64b'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdiri'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='athlon'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='3dnow'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='3dnowext'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='athlon-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='3dnow'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='3dnowext'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='core2duo'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='core2duo-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='coreduo'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='coreduo-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='n270'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='n270-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='phenom'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='3dnow'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='3dnowext'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='phenom-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='3dnow'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='3dnowext'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </mode>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   </cpu>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   <memoryBacking supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <enum name='sourceType'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <value>file</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <value>anonymous</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <value>memfd</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   </memoryBacking>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   <devices>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <disk supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='diskDevice'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>disk</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>cdrom</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>floppy</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>lun</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='bus'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>fdc</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>scsi</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>virtio</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>usb</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>sata</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='model'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>virtio</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>virtio-transitional</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>virtio-non-transitional</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </disk>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <graphics supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='type'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>vnc</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>egl-headless</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>dbus</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </graphics>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <video supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='modelType'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>vga</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>cirrus</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>virtio</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>none</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>bochs</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>ramfb</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </video>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <hostdev supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='mode'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>subsystem</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='startupPolicy'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>default</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>mandatory</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>requisite</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>optional</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='subsysType'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>usb</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>pci</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>scsi</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='capsType'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='pciBackend'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </hostdev>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <rng supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='model'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>virtio</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>virtio-transitional</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>virtio-non-transitional</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='backendModel'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>random</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>egd</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>builtin</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </rng>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <filesystem supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='driverType'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>path</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>handle</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>virtiofs</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </filesystem>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <tpm supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='model'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>tpm-tis</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>tpm-crb</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='backendModel'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>emulator</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>external</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='backendVersion'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>2.0</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </tpm>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <redirdev supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='bus'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>usb</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </redirdev>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <channel supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='type'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>pty</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>unix</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </channel>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <crypto supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='model'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='type'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>qemu</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='backendModel'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>builtin</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </crypto>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <interface supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='backendType'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>default</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>passt</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </interface>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <panic supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='model'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>isa</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>hyperv</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </panic>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <console supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='type'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>null</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>vc</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>pty</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>dev</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>file</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>pipe</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>stdio</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>udp</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>tcp</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>unix</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>qemu-vdagent</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>dbus</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </console>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   </devices>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   <features>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <gic supported='no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <vmcoreinfo supported='yes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <genid supported='yes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <backingStoreInput supported='yes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <backup supported='yes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <async-teardown supported='yes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <s390-pv supported='no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <ps2 supported='yes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <tdx supported='no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <sev supported='no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <sgx supported='no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <hyperv supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='features'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>relaxed</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>vapic</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>spinlocks</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>vpindex</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>runtime</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>synic</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>stimer</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>reset</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>vendor_id</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>frequencies</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>reenlightenment</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>tlbflush</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>ipi</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>avic</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>emsr_bitmap</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>xmm_input</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <defaults>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <spinlocks>4095</spinlocks>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <stimer_direct>on</stimer_direct>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <tlbflush_direct>off</tlbflush_direct>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <tlbflush_extended>off</tlbflush_extended>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <vendor_id>Linux KVM Hv</vendor_id>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </defaults>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </hyperv>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <launchSecurity supported='no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   </features>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]: </domainCapabilities>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:04.240 227852 DEBUG nova.virt.libvirt.host [None req-72786067-f2d8-46dc-976a-584437415957 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]: <domainCapabilities>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   <path>/usr/libexec/qemu-kvm</path>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   <domain>kvm</domain>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   <machine>pc-i440fx-rhel7.6.0</machine>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   <arch>x86_64</arch>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   <vcpu max='240'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   <iothreads supported='yes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   <os supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <enum name='firmware'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <loader supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='type'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>rom</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>pflash</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='readonly'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>yes</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>no</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='secure'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>no</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </loader>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   </os>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   <cpu>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <mode name='host-passthrough' supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='hostPassthroughMigratable'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>on</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>off</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </mode>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <mode name='maximum' supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='maximumMigratable'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>on</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>off</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </mode>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <mode name='host-model' supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model fallback='forbid'>EPYC-Rome</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <vendor>AMD</vendor>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <maxphysaddr mode='passthrough' limit='40'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='x2apic'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='tsc-deadline'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='hypervisor'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='tsc_adjust'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='spec-ctrl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='stibp'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='ssbd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='cmp_legacy'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='overflow-recov'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='succor'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='ibrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='amd-ssbd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='virt-ssbd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='lbrv'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='tsc-scale'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='vmcb-clean'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='pause-filter'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='pfthreshold'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='svme-addr-chk'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='require' name='lfence-always-serializing'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <feature policy='disable' name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </mode>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <mode name='custom' supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Broadwell'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Broadwell-IBRS'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Broadwell-noTSX'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Broadwell-noTSX-IBRS'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Broadwell-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Broadwell-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Broadwell-v3'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Broadwell-v4'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Cascadelake-Server'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Cascadelake-Server-noTSX'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Cascadelake-Server-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Cascadelake-Server-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Cascadelake-Server-v3'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Cascadelake-Server-v4'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Cascadelake-Server-v5'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='ClearwaterForest'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-ne-convert'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni-int16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni-int8'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bhi-ctrl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bhi-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cldemote'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cmpccxadd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ddpd-u'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fbsdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gds-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='intel-psfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ipred-ctrl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='lam'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='mcdt-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdir64b'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdiri'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pbrsb-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='prefetchiti'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='psdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rfds-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rrsba-ctrl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='serialize'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sha512'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sm3'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sm4'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='ClearwaterForest-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-ne-convert'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni-int16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni-int8'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bhi-ctrl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bhi-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cldemote'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cmpccxadd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ddpd-u'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fbsdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gds-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='intel-psfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ipred-ctrl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='lam'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='mcdt-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdir64b'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdiri'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pbrsb-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='prefetchiti'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='psdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rfds-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rrsba-ctrl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='serialize'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sha512'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sm3'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sm4'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Cooperlake'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Cooperlake-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Cooperlake-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Denverton'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='mpx'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Denverton-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='mpx'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Denverton-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Denverton-v3'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Dhyana-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-Genoa'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amd-psfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='auto-ibrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='no-nested-data-bp'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='null-sel-clr-base'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='stibp-always-on'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-Genoa-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amd-psfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='auto-ibrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='no-nested-data-bp'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='null-sel-clr-base'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='stibp-always-on'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-Genoa-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amd-psfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='auto-ibrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fs-gs-base-ns'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='no-nested-data-bp'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='null-sel-clr-base'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='perfmon-v2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='stibp-always-on'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-Milan'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-Milan-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-Milan-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amd-psfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='no-nested-data-bp'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='null-sel-clr-base'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='stibp-always-on'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-Milan-v3'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amd-psfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='no-nested-data-bp'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='null-sel-clr-base'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='stibp-always-on'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-Rome'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-Rome-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-Rome-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-Rome-v3'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-Turin'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amd-psfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='auto-ibrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vp2intersect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fs-gs-base-ns'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibpb-brtype'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdir64b'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdiri'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='no-nested-data-bp'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='null-sel-clr-base'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='perfmon-v2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='prefetchi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sbpb'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='srso-user-kernel-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='stibp-always-on'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-Turin-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amd-psfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='auto-ibrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vp2intersect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fs-gs-base-ns'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibpb-brtype'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdir64b'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdiri'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='no-nested-data-bp'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='null-sel-clr-base'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='perfmon-v2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='prefetchi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sbpb'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='srso-user-kernel-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='stibp-always-on'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-v3'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-v4'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='EPYC-v5'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='GraniteRapids'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-fp16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-int8'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-tile'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-fp16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fbsdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrc'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fzrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='mcdt-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pbrsb-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='prefetchiti'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='psdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='serialize'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='GraniteRapids-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-fp16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-int8'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-tile'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-fp16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fbsdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrc'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fzrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='mcdt-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pbrsb-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='prefetchiti'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='psdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='serialize'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='GraniteRapids-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-fp16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-int8'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-tile'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx10'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx10-128'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx10-256'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx10-512'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-fp16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cldemote'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fbsdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrc'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fzrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='mcdt-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdir64b'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdiri'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pbrsb-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='prefetchiti'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='psdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='serialize'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='GraniteRapids-v3'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-fp16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-int8'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-tile'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx10'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx10-128'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx10-256'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx10-512'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-fp16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cldemote'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fbsdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrc'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fzrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='mcdt-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdir64b'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdiri'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pbrsb-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='prefetchiti'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='psdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='serialize'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Haswell'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Haswell-IBRS'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Haswell-noTSX'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Haswell-noTSX-IBRS'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Haswell-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Haswell-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Haswell-v3'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Haswell-v4'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Icelake-Server'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Icelake-Server-noTSX'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Icelake-Server-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Icelake-Server-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Icelake-Server-v3'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Icelake-Server-v4'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Icelake-Server-v5'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Icelake-Server-v6'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Icelake-Server-v7'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='IvyBridge'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='IvyBridge-IBRS'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='IvyBridge-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='IvyBridge-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='KnightsMill'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-4fmaps'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-4vnniw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512er'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512pf'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='KnightsMill-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-4fmaps'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-4vnniw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512er'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512pf'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Opteron_G4'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fma4'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xop'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Opteron_G4-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fma4'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xop'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Opteron_G5'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fma4'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='tbm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xop'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Opteron_G5-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fma4'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='tbm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xop'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='SapphireRapids'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-int8'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-tile'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-fp16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrc'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fzrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='serialize'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='SapphireRapids-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-int8'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-tile'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-fp16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrc'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fzrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='serialize'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='SapphireRapids-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-int8'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-tile'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-fp16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fbsdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrc'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fzrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='psdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='serialize'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='SapphireRapids-v3'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-int8'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-tile'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-fp16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cldemote'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fbsdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrc'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fzrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdir64b'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdiri'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='psdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='serialize'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='SapphireRapids-v4'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-int8'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='amx-tile'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-bf16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-fp16'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bitalg'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cldemote'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fbsdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrc'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fzrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='la57'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdir64b'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdiri'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='psdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='serialize'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='taa-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='SierraForest'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-ne-convert'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni-int8'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cmpccxadd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fbsdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='mcdt-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pbrsb-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='psdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='serialize'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='SierraForest-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-ne-convert'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni-int8'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cmpccxadd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fbsdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='mcdt-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pbrsb-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='psdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='serialize'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='SierraForest-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-ne-convert'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni-int8'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bhi-ctrl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cldemote'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cmpccxadd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fbsdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gds-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='intel-psfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ipred-ctrl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='lam'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='mcdt-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdir64b'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdiri'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pbrsb-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='psdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rfds-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rrsba-ctrl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='serialize'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='SierraForest-v3'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-ifma'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-ne-convert'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx-vnni-int8'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bhi-ctrl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cldemote'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cmpccxadd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fbsdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='fsrs'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gds-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ibrs-all'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='intel-psfd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ipred-ctrl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='lam'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='mcdt-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdir64b'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdiri'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pbrsb-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='psdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rfds-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rrsba-ctrl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='serialize'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vaes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Skylake-Client'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Skylake-Client-IBRS'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Skylake-Client-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Skylake-Client-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Skylake-Client-v3'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Skylake-Client-v4'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Skylake-Server'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Skylake-Server-IBRS'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Skylake-Server-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Skylake-Server-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='hle'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='rtm'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Skylake-Server-v3'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Skylake-Server-v4'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Skylake-Server-v5'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512bw'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512cd'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512dq'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512f'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='avx512vl'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='invpcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pcid'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='pku'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Snowridge'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cldemote'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='core-capability'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdir64b'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdiri'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='mpx'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='split-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Snowridge-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cldemote'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='core-capability'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdir64b'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdiri'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='mpx'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='split-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Snowridge-v2'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cldemote'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='core-capability'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdir64b'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdiri'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='split-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Snowridge-v3'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cldemote'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='core-capability'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdir64b'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdiri'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='split-lock-detect'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='Snowridge-v4'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='cldemote'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='erms'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='gfni'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdir64b'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='movdiri'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='xsaves'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='athlon'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='3dnow'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='3dnowext'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='athlon-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='3dnow'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='3dnowext'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='core2duo'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='core2duo-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='coreduo'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='coreduo-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='n270'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='n270-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='ss'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='phenom'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='3dnow'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='3dnowext'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <blockers model='phenom-v1'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='3dnow'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <feature name='3dnowext'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </blockers>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </mode>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   </cpu>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   <memoryBacking supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <enum name='sourceType'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <value>file</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <value>anonymous</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <value>memfd</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   </memoryBacking>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   <devices>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <disk supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='diskDevice'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>disk</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>cdrom</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>floppy</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>lun</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='bus'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>ide</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>fdc</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>scsi</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>virtio</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>usb</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>sata</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='model'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>virtio</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>virtio-transitional</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>virtio-non-transitional</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </disk>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <graphics supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='type'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>vnc</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>egl-headless</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>dbus</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </graphics>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <video supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='modelType'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>vga</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>cirrus</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>virtio</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>none</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>bochs</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>ramfb</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </video>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <hostdev supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='mode'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>subsystem</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='startupPolicy'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>default</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>mandatory</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>requisite</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>optional</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='subsysType'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>usb</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>pci</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>scsi</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='capsType'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='pciBackend'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </hostdev>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <rng supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='model'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>virtio</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>virtio-transitional</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>virtio-non-transitional</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='backendModel'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>random</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>egd</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>builtin</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </rng>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <filesystem supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='driverType'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>path</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>handle</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>virtiofs</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </filesystem>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <tpm supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='model'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>tpm-tis</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>tpm-crb</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='backendModel'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>emulator</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>external</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='backendVersion'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>2.0</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </tpm>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <redirdev supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='bus'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>usb</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </redirdev>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <channel supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='type'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>pty</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>unix</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </channel>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <crypto supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='model'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='type'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>qemu</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='backendModel'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>builtin</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </crypto>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <interface supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='backendType'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>default</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>passt</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </interface>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <panic supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='model'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>isa</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>hyperv</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </panic>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <console supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='type'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>null</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>vc</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>pty</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>dev</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>file</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>pipe</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>stdio</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>udp</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>tcp</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>unix</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>qemu-vdagent</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>dbus</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </console>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   </devices>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   <features>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <gic supported='no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <vmcoreinfo supported='yes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <genid supported='yes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <backingStoreInput supported='yes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <backup supported='yes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <async-teardown supported='yes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <s390-pv supported='no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <ps2 supported='yes'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <tdx supported='no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <sev supported='no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <sgx supported='no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <hyperv supported='yes'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <enum name='features'>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>relaxed</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>vapic</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>spinlocks</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>vpindex</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>runtime</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>synic</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>stimer</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>reset</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>vendor_id</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>frequencies</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>reenlightenment</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>tlbflush</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>ipi</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>avic</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>emsr_bitmap</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <value>xmm_input</value>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </enum>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       <defaults>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <spinlocks>4095</spinlocks>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <stimer_direct>on</stimer_direct>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <tlbflush_direct>off</tlbflush_direct>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <tlbflush_extended>off</tlbflush_extended>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:         <vendor_id>Linux KVM Hv</vendor_id>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:       </defaults>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     </hyperv>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:     <launchSecurity supported='no'/>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:   </features>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]: </domainCapabilities>
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:04.298 227852 DEBUG nova.virt.libvirt.host [None req-72786067-f2d8-46dc-976a-584437415957 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:04.298 227852 INFO nova.virt.libvirt.host [None req-72786067-f2d8-46dc-976a-584437415957 - - - - - -] Secure Boot support detected
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:04.300 227852 INFO nova.virt.libvirt.driver [None req-72786067-f2d8-46dc-976a-584437415957 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:04.300 227852 INFO nova.virt.libvirt.driver [None req-72786067-f2d8-46dc-976a-584437415957 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:04.312 227852 DEBUG nova.virt.libvirt.driver [None req-72786067-f2d8-46dc-976a-584437415957 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:04.329 227852 INFO nova.virt.node [None req-72786067-f2d8-46dc-976a-584437415957 - - - - - -] Determined node identity e5d5157a-2df2-4f51-b5fb-cd2da3a8584e from /var/lib/nova/compute_id
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:04.347 227852 DEBUG nova.compute.manager [None req-72786067-f2d8-46dc-976a-584437415957 - - - - - -] Verified node e5d5157a-2df2-4f51-b5fb-cd2da3a8584e matches my host np0005625203.localdomain _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568
Feb 20 09:24:04 np0005625203.localdomain python3.9[228341]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:04.379 227852 INFO nova.compute.manager [None req-72786067-f2d8-46dc-976a-584437415957 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:04.815 227852 INFO nova.service [None req-72786067-f2d8-46dc-976a-584437415957 - - - - - -] Updating service version for nova-compute on np0005625203.localdomain from 57 to 66
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:04.860 227852 DEBUG oslo_concurrency.lockutils [None req-72786067-f2d8-46dc-976a-584437415957 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:04.860 227852 DEBUG oslo_concurrency.lockutils [None req-72786067-f2d8-46dc-976a-584437415957 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:04.861 227852 DEBUG oslo_concurrency.lockutils [None req-72786067-f2d8-46dc-976a-584437415957 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:04.861 227852 DEBUG nova.compute.resource_tracker [None req-72786067-f2d8-46dc-976a-584437415957 - - - - - -] Auditing locally available compute resources for np0005625203.localdomain (node: np0005625203.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 20 09:24:04 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:04.862 227852 DEBUG oslo_concurrency.processutils [None req-72786067-f2d8-46dc-976a-584437415957 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:24:05 np0005625203.localdomain sshd[228474]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:24:05 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:05.327 227852 DEBUG oslo_concurrency.processutils [None req-72786067-f2d8-46dc-976a-584437415957 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:24:05 np0005625203.localdomain python3.9[228473]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:24:05 np0005625203.localdomain systemd[1]: Started libvirt nodedev daemon.
Feb 20 09:24:05 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:05.644 227852 WARNING nova.virt.libvirt.driver [None req-72786067-f2d8-46dc-976a-584437415957 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 20 09:24:05 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:05.646 227852 DEBUG nova.compute.resource_tracker [None req-72786067-f2d8-46dc-976a-584437415957 - - - - - -] Hypervisor/Node resource view: name=np0005625203.localdomain free_ram=13605MB free_disk=41.83720779418945GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 20 09:24:05 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:05.646 227852 DEBUG oslo_concurrency.lockutils [None req-72786067-f2d8-46dc-976a-584437415957 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:24:05 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:05.647 227852 DEBUG oslo_concurrency.lockutils [None req-72786067-f2d8-46dc-976a-584437415957 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:24:05 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:05.755 227852 DEBUG nova.compute.resource_tracker [None req-72786067-f2d8-46dc-976a-584437415957 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 20 09:24:05 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:05.756 227852 DEBUG nova.compute.resource_tracker [None req-72786067-f2d8-46dc-976a-584437415957 - - - - - -] Final resource view: name=np0005625203.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 20 09:24:05 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:05.772 227852 DEBUG nova.scheduler.client.report [None req-72786067-f2d8-46dc-976a-584437415957 - - - - - -] Refreshing inventories for resource provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 20 09:24:05 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:05.830 227852 DEBUG nova.scheduler.client.report [None req-72786067-f2d8-46dc-976a-584437415957 - - - - - -] Updating ProviderTree inventory for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 20 09:24:05 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:05.830 227852 DEBUG nova.compute.provider_tree [None req-72786067-f2d8-46dc-976a-584437415957 - - - - - -] Updating inventory in ProviderTree for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 20 09:24:05 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:05.845 227852 DEBUG nova.scheduler.client.report [None req-72786067-f2d8-46dc-976a-584437415957 - - - - - -] Refreshing aggregate associations for resource provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 20 09:24:05 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:05.874 227852 DEBUG nova.scheduler.client.report [None req-72786067-f2d8-46dc-976a-584437415957 - - - - - -] Refreshing trait associations for resource provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e, traits: COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_FMA3,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_AMD_SVM,HW_CPU_X86_BMI2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE2,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_AESNI,HW_CPU_X86_SSE,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE41,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_F16C,HW_CPU_X86_MMX,COMPUTE_ACCELERATORS,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_RESCUE_BFV,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NODE,HW_CPU_X86_BMI,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_LAN9118,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_ABM,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSSE3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 20 09:24:05 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:05.892 227852 DEBUG oslo_concurrency.processutils [None req-72786067-f2d8-46dc-976a-584437415957 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:24:05 np0005625203.localdomain sshd[228474]: Received disconnect from 185.196.11.208 port 47810:11: Bye Bye [preauth]
Feb 20 09:24:05 np0005625203.localdomain sshd[228474]: Disconnected from authenticating user root 185.196.11.208 port 47810 [preauth]
Feb 20 09:24:06 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47598 DF PROTO=TCP SPT=54584 DPT=9882 SEQ=4143302830 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA000E800000000001030307) 
Feb 20 09:24:06 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:06.401 227852 DEBUG oslo_concurrency.processutils [None req-72786067-f2d8-46dc-976a-584437415957 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:24:06 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:06.407 227852 DEBUG nova.virt.libvirt.host [None req-72786067-f2d8-46dc-976a-584437415957 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Feb 20 09:24:06 np0005625203.localdomain nova_compute[227848]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Feb 20 09:24:06 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:06.407 227852 INFO nova.virt.libvirt.host [None req-72786067-f2d8-46dc-976a-584437415957 - - - - - -] kernel doesn't support AMD SEV
Feb 20 09:24:06 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:06.409 227852 DEBUG nova.compute.provider_tree [None req-72786067-f2d8-46dc-976a-584437415957 - - - - - -] Inventory has not changed in ProviderTree for provider: e5d5157a-2df2-4f51-b5fb-cd2da3a8584e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 20 09:24:06 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:06.410 227852 DEBUG nova.virt.libvirt.driver [None req-72786067-f2d8-46dc-976a-584437415957 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 20 09:24:06 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:06.433 227852 DEBUG nova.scheduler.client.report [None req-72786067-f2d8-46dc-976a-584437415957 - - - - - -] Inventory has not changed for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 20 09:24:06 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:06.486 227852 DEBUG nova.compute.provider_tree [None req-72786067-f2d8-46dc-976a-584437415957 - - - - - -] Updating resource provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e generation from 2 to 3 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Feb 20 09:24:06 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:06.507 227852 DEBUG nova.compute.resource_tracker [None req-72786067-f2d8-46dc-976a-584437415957 - - - - - -] Compute_service record updated for np0005625203.localdomain:np0005625203.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 20 09:24:06 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:06.507 227852 DEBUG oslo_concurrency.lockutils [None req-72786067-f2d8-46dc-976a-584437415957 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.861s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:24:06 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:06.507 227852 DEBUG nova.service [None req-72786067-f2d8-46dc-976a-584437415957 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Feb 20 09:24:06 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:06.554 227852 DEBUG nova.service [None req-72786067-f2d8-46dc-976a-584437415957 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Feb 20 09:24:06 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:06.554 227852 DEBUG nova.servicegroup.drivers.db [None req-72786067-f2d8-46dc-976a-584437415957 - - - - - -] DB_Driver: join new ServiceGroup member np0005625203.localdomain to the compute group, service = <Service: host=np0005625203.localdomain, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Feb 20 09:24:06 np0005625203.localdomain python3.9[228630]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:24:07 np0005625203.localdomain sudo[228738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ckpbkbrkncqrqmcopwtroxlhfvtnjdmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579447.0906646-3880-158729722442224/AnsiballZ_podman_container.py
Feb 20 09:24:07 np0005625203.localdomain sudo[228738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:24:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:24:07.636 161112 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:24:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:24:07.636 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:24:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:24:07.637 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:24:07 np0005625203.localdomain python3.9[228740]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Feb 20 09:24:08 np0005625203.localdomain sudo[228738]: pam_unix(sudo:session): session closed for user root
Feb 20 09:24:08 np0005625203.localdomain systemd-journald[48285]: Field hash table of /run/log/journal/01f46965e72fd8a157841feaa66c8d52/system.journal has a fill level at 115.0 (383 of 333 items), suggesting rotation.
Feb 20 09:24:08 np0005625203.localdomain systemd-journald[48285]: /run/log/journal/01f46965e72fd8a157841feaa66c8d52/system.journal: Journal header limits reached or header out-of-date, rotating.
Feb 20 09:24:08 np0005625203.localdomain rsyslogd[758]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Feb 20 09:24:08 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3950 DF PROTO=TCP SPT=59440 DPT=9105 SEQ=1194372364 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0016410000000001030307) 
Feb 20 09:24:08 np0005625203.localdomain sudo[228872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rzoxvnasnpeqjlsntjlgozdgauyhzcyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579448.3296227-3904-18069962048755/AnsiballZ_systemd.py
Feb 20 09:24:08 np0005625203.localdomain sudo[228872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:24:08 np0005625203.localdomain python3.9[228874]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 20 09:24:09 np0005625203.localdomain systemd[1]: Stopping nova_compute container...
Feb 20 09:24:10 np0005625203.localdomain systemd[1]: tmp-crun.3WBdCW.mount: Deactivated successfully.
Feb 20 09:24:10 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:10.672 227852 WARNING amqp [-] Received method (60, 30) during closing channel 1. This method will be ignored
Feb 20 09:24:10 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:10.675 227852 DEBUG oslo_concurrency.lockutils [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 20 09:24:10 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:10.675 227852 DEBUG oslo_concurrency.lockutils [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 20 09:24:10 np0005625203.localdomain nova_compute[227848]: 2026-02-20 09:24:10.676 227852 DEBUG oslo_concurrency.lockutils [None req-b9840400-cfec-47ec-8154-e81e8357a053 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 20 09:24:11 np0005625203.localdomain virtqemud[228198]: libvirt version: 11.10.0, package: 4.el9 (builder@centos.org, 2026-01-29-15:25:17, )
Feb 20 09:24:11 np0005625203.localdomain systemd[1]: libpod-6e8885cc5a65c95462bf5deb083556a1dda2bba1c8e7634587477c8f801eac9f.scope: Deactivated successfully.
Feb 20 09:24:11 np0005625203.localdomain virtqemud[228198]: hostname: np0005625203.localdomain
Feb 20 09:24:11 np0005625203.localdomain virtqemud[228198]: End of file while reading data: Input/output error
Feb 20 09:24:11 np0005625203.localdomain systemd[1]: libpod-6e8885cc5a65c95462bf5deb083556a1dda2bba1c8e7634587477c8f801eac9f.scope: Consumed 3.853s CPU time.
Feb 20 09:24:11 np0005625203.localdomain podman[228878]: 2026-02-20 09:24:11.042515402 +0000 UTC m=+1.084181854 container died 6e8885cc5a65c95462bf5deb083556a1dda2bba1c8e7634587477c8f801eac9f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-30a66d126d7b377a2bbf9bc0d57e51bdf55bb85eee6d379d83e3d3aa2e3d7293-ceca6e0a7e6c49fe059956dd2c05951172f310c281831cfd96e02226e564c84f'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=nova_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 20 09:24:11 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6e8885cc5a65c95462bf5deb083556a1dda2bba1c8e7634587477c8f801eac9f-userdata-shm.mount: Deactivated successfully.
Feb 20 09:24:11 np0005625203.localdomain podman[228878]: 2026-02-20 09:24:11.092405721 +0000 UTC m=+1.134072143 container cleanup 6e8885cc5a65c95462bf5deb083556a1dda2bba1c8e7634587477c8f801eac9f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_id=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-30a66d126d7b377a2bbf9bc0d57e51bdf55bb85eee6d379d83e3d3aa2e3d7293-ceca6e0a7e6c49fe059956dd2c05951172f310c281831cfd96e02226e564c84f'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Feb 20 09:24:11 np0005625203.localdomain podman[228878]: nova_compute
Feb 20 09:24:11 np0005625203.localdomain podman[228918]: error opening file `/run/crun/6e8885cc5a65c95462bf5deb083556a1dda2bba1c8e7634587477c8f801eac9f/status`: No such file or directory
Feb 20 09:24:11 np0005625203.localdomain podman[228907]: 2026-02-20 09:24:11.186217791 +0000 UTC m=+0.065740213 container cleanup 6e8885cc5a65c95462bf5deb083556a1dda2bba1c8e7634587477c8f801eac9f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, config_id=nova_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-30a66d126d7b377a2bbf9bc0d57e51bdf55bb85eee6d379d83e3d3aa2e3d7293-ceca6e0a7e6c49fe059956dd2c05951172f310c281831cfd96e02226e564c84f'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:24:11 np0005625203.localdomain podman[228907]: nova_compute
Feb 20 09:24:11 np0005625203.localdomain systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Feb 20 09:24:11 np0005625203.localdomain systemd[1]: Stopped nova_compute container.
Feb 20 09:24:11 np0005625203.localdomain systemd[1]: Starting nova_compute container...
Feb 20 09:24:11 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=43734 DF PROTO=TCP SPT=56904 DPT=9105 SEQ=1765691620 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0022800000000001030307) 
Feb 20 09:24:11 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:24:11 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f3e0db24b6063b24bb4df15bb988811b5bcf9e6789b213c4536d36feae343ea/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Feb 20 09:24:11 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f3e0db24b6063b24bb4df15bb988811b5bcf9e6789b213c4536d36feae343ea/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Feb 20 09:24:11 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f3e0db24b6063b24bb4df15bb988811b5bcf9e6789b213c4536d36feae343ea/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Feb 20 09:24:11 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f3e0db24b6063b24bb4df15bb988811b5bcf9e6789b213c4536d36feae343ea/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Feb 20 09:24:11 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f3e0db24b6063b24bb4df15bb988811b5bcf9e6789b213c4536d36feae343ea/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Feb 20 09:24:11 np0005625203.localdomain podman[228922]: 2026-02-20 09:24:11.323124375 +0000 UTC m=+0.107310379 container init 6e8885cc5a65c95462bf5deb083556a1dda2bba1c8e7634587477c8f801eac9f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=nova_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-30a66d126d7b377a2bbf9bc0d57e51bdf55bb85eee6d379d83e3d3aa2e3d7293-ceca6e0a7e6c49fe059956dd2c05951172f310c281831cfd96e02226e564c84f'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=nova_compute, org.label-schema.build-date=20260127)
Feb 20 09:24:11 np0005625203.localdomain podman[228922]: 2026-02-20 09:24:11.331751448 +0000 UTC m=+0.115937472 container start 6e8885cc5a65c95462bf5deb083556a1dda2bba1c8e7634587477c8f801eac9f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=nova_compute, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-30a66d126d7b377a2bbf9bc0d57e51bdf55bb85eee6d379d83e3d3aa2e3d7293-ceca6e0a7e6c49fe059956dd2c05951172f310c281831cfd96e02226e564c84f'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=nova_compute, org.label-schema.license=GPLv2)
Feb 20 09:24:11 np0005625203.localdomain podman[228922]: nova_compute
Feb 20 09:24:11 np0005625203.localdomain nova_compute[228937]: + sudo -E kolla_set_configs
Feb 20 09:24:11 np0005625203.localdomain systemd[1]: Started nova_compute container.
Feb 20 09:24:11 np0005625203.localdomain sudo[228872]: pam_unix(sudo:session): session closed for user root
Feb 20 09:24:11 np0005625203.localdomain nova_compute[228937]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb 20 09:24:11 np0005625203.localdomain nova_compute[228937]: INFO:__main__:Validating config file
Feb 20 09:24:11 np0005625203.localdomain nova_compute[228937]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb 20 09:24:11 np0005625203.localdomain nova_compute[228937]: INFO:__main__:Copying service configuration files
Feb 20 09:24:11 np0005625203.localdomain nova_compute[228937]: INFO:__main__:Deleting /etc/nova/nova.conf
Feb 20 09:24:11 np0005625203.localdomain nova_compute[228937]: INFO:__main__:Copying /var/lib/kolla/config_files/src/nova-blank.conf to /etc/nova/nova.conf
Feb 20 09:24:11 np0005625203.localdomain nova_compute[228937]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Feb 20 09:24:11 np0005625203.localdomain nova_compute[228937]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Feb 20 09:24:11 np0005625203.localdomain nova_compute[228937]: INFO:__main__:Copying /var/lib/kolla/config_files/src/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Feb 20 09:24:11 np0005625203.localdomain nova_compute[228937]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Feb 20 09:24:11 np0005625203.localdomain nova_compute[228937]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb 20 09:24:11 np0005625203.localdomain nova_compute[228937]: INFO:__main__:Copying /var/lib/kolla/config_files/src/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb 20 09:24:11 np0005625203.localdomain nova_compute[228937]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb 20 09:24:11 np0005625203.localdomain nova_compute[228937]: INFO:__main__:Deleting /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Feb 20 09:24:11 np0005625203.localdomain nova_compute[228937]: INFO:__main__:Copying /var/lib/kolla/config_files/src/99-nova-compute-cells-workarounds.conf to /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Feb 20 09:24:11 np0005625203.localdomain nova_compute[228937]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Feb 20 09:24:11 np0005625203.localdomain nova_compute[228937]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Feb 20 09:24:11 np0005625203.localdomain nova_compute[228937]: INFO:__main__:Copying /var/lib/kolla/config_files/src/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Feb 20 09:24:11 np0005625203.localdomain nova_compute[228937]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Feb 20 09:24:11 np0005625203.localdomain nova_compute[228937]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb 20 09:24:11 np0005625203.localdomain nova_compute[228937]: INFO:__main__:Copying /var/lib/kolla/config_files/src/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb 20 09:24:11 np0005625203.localdomain nova_compute[228937]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb 20 09:24:11 np0005625203.localdomain nova_compute[228937]: INFO:__main__:Deleting /etc/ceph
Feb 20 09:24:11 np0005625203.localdomain nova_compute[228937]: INFO:__main__:Creating directory /etc/ceph
Feb 20 09:24:11 np0005625203.localdomain nova_compute[228937]: INFO:__main__:Setting permission for /etc/ceph
Feb 20 09:24:11 np0005625203.localdomain nova_compute[228937]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Feb 20 09:24:11 np0005625203.localdomain nova_compute[228937]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Feb 20 09:24:11 np0005625203.localdomain nova_compute[228937]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceph/ceph.conf to /etc/ceph/ceph.conf
Feb 20 09:24:11 np0005625203.localdomain nova_compute[228937]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Feb 20 09:24:11 np0005625203.localdomain nova_compute[228937]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Feb 20 09:24:11 np0005625203.localdomain nova_compute[228937]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Feb 20 09:24:11 np0005625203.localdomain nova_compute[228937]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb 20 09:24:11 np0005625203.localdomain nova_compute[228937]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Feb 20 09:24:11 np0005625203.localdomain nova_compute[228937]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ssh-config to /var/lib/nova/.ssh/config
Feb 20 09:24:11 np0005625203.localdomain nova_compute[228937]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb 20 09:24:11 np0005625203.localdomain nova_compute[228937]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Feb 20 09:24:11 np0005625203.localdomain nova_compute[228937]: INFO:__main__:Copying /var/lib/kolla/config_files/src/run-on-host to /usr/sbin/iscsiadm
Feb 20 09:24:11 np0005625203.localdomain nova_compute[228937]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Feb 20 09:24:11 np0005625203.localdomain nova_compute[228937]: INFO:__main__:Writing out command to execute
Feb 20 09:24:11 np0005625203.localdomain nova_compute[228937]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Feb 20 09:24:11 np0005625203.localdomain nova_compute[228937]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Feb 20 09:24:11 np0005625203.localdomain nova_compute[228937]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Feb 20 09:24:11 np0005625203.localdomain nova_compute[228937]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb 20 09:24:11 np0005625203.localdomain nova_compute[228937]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb 20 09:24:11 np0005625203.localdomain nova_compute[228937]: ++ cat /run_command
Feb 20 09:24:11 np0005625203.localdomain nova_compute[228937]: + CMD=nova-compute
Feb 20 09:24:11 np0005625203.localdomain nova_compute[228937]: + ARGS=
Feb 20 09:24:11 np0005625203.localdomain nova_compute[228937]: + sudo kolla_copy_cacerts
Feb 20 09:24:11 np0005625203.localdomain nova_compute[228937]: + [[ ! -n '' ]]
Feb 20 09:24:11 np0005625203.localdomain nova_compute[228937]: + . kolla_extend_start
Feb 20 09:24:11 np0005625203.localdomain nova_compute[228937]: Running command: 'nova-compute'
Feb 20 09:24:11 np0005625203.localdomain nova_compute[228937]: + echo 'Running command: '\''nova-compute'\'''
Feb 20 09:24:11 np0005625203.localdomain nova_compute[228937]: + umask 0022
Feb 20 09:24:11 np0005625203.localdomain nova_compute[228937]: + exec nova-compute
Feb 20 09:24:11 np0005625203.localdomain sudo[229056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kpudkchralgecponpdrlawkxdugehyuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579451.5762992-3931-274111737591736/AnsiballZ_podman_container.py
Feb 20 09:24:11 np0005625203.localdomain sudo[229056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:24:12 np0005625203.localdomain python3.9[229058]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Feb 20 09:24:12 np0005625203.localdomain systemd[1]: Started libpod-conmon-77edbe1d945e2b38c97d8a18f93ee06ae61ca7889352e35af1b678b276f7cbeb.scope.
Feb 20 09:24:12 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:24:12 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd350f63fa5cb4de5fd35b027e0dd392fc094bab6ce558ae5afdf1311f190a48/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Feb 20 09:24:12 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd350f63fa5cb4de5fd35b027e0dd392fc094bab6ce558ae5afdf1311f190a48/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Feb 20 09:24:12 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd350f63fa5cb4de5fd35b027e0dd392fc094bab6ce558ae5afdf1311f190a48/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Feb 20 09:24:12 np0005625203.localdomain podman[229083]: 2026-02-20 09:24:12.325477018 +0000 UTC m=+0.110643465 container init 77edbe1d945e2b38c97d8a18f93ee06ae61ca7889352e35af1b678b276f7cbeb (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=nova_compute_init, org.label-schema.build-date=20260127, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': 'ceca6e0a7e6c49fe059956dd2c05951172f310c281831cfd96e02226e564c84f'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=nova_compute_init, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:24:12 np0005625203.localdomain podman[229083]: 2026-02-20 09:24:12.336494366 +0000 UTC m=+0.121660813 container start 77edbe1d945e2b38c97d8a18f93ee06ae61ca7889352e35af1b678b276f7cbeb (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_managed=true, config_id=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': 'ceca6e0a7e6c49fe059956dd2c05951172f310c281831cfd96e02226e564c84f'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Feb 20 09:24:12 np0005625203.localdomain python3.9[229058]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Feb 20 09:24:12 np0005625203.localdomain nova_compute_init[229104]: INFO:nova_statedir:Applying nova statedir ownership
Feb 20 09:24:12 np0005625203.localdomain nova_compute_init[229104]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Feb 20 09:24:12 np0005625203.localdomain nova_compute_init[229104]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Feb 20 09:24:12 np0005625203.localdomain nova_compute_init[229104]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Feb 20 09:24:12 np0005625203.localdomain nova_compute_init[229104]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Feb 20 09:24:12 np0005625203.localdomain nova_compute_init[229104]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Feb 20 09:24:12 np0005625203.localdomain nova_compute_init[229104]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Feb 20 09:24:12 np0005625203.localdomain nova_compute_init[229104]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Feb 20 09:24:12 np0005625203.localdomain nova_compute_init[229104]: INFO:nova_statedir:Checking uid: 0 gid: 0 path: /var/lib/nova/delay-nova-compute
Feb 20 09:24:12 np0005625203.localdomain nova_compute_init[229104]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Feb 20 09:24:12 np0005625203.localdomain nova_compute_init[229104]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Feb 20 09:24:12 np0005625203.localdomain nova_compute_init[229104]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Feb 20 09:24:12 np0005625203.localdomain nova_compute_init[229104]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Feb 20 09:24:12 np0005625203.localdomain nova_compute_init[229104]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Feb 20 09:24:12 np0005625203.localdomain nova_compute_init[229104]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/
Feb 20 09:24:12 np0005625203.localdomain nova_compute_init[229104]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache already 42436:42436
Feb 20 09:24:12 np0005625203.localdomain nova_compute_init[229104]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache to system_u:object_r:container_file_t:s0
Feb 20 09:24:12 np0005625203.localdomain nova_compute_init[229104]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/
Feb 20 09:24:12 np0005625203.localdomain nova_compute_init[229104]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache/python-entrypoints already 42436:42436
Feb 20 09:24:12 np0005625203.localdomain nova_compute_init[229104]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache/python-entrypoints to system_u:object_r:container_file_t:s0
Feb 20 09:24:12 np0005625203.localdomain nova_compute_init[229104]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/fc52238ffcbdcb325c6bf3fe6412477fc4bdb6cd9151f39289b74f25e08e0db9
Feb 20 09:24:12 np0005625203.localdomain nova_compute_init[229104]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/d301d14069645d8c23fee2987984776b3e88a570e1aa96d6cf3e31fa880385fd
Feb 20 09:24:12 np0005625203.localdomain nova_compute_init[229104]: INFO:nova_statedir:Nova statedir ownership complete
Feb 20 09:24:12 np0005625203.localdomain systemd[1]: libpod-77edbe1d945e2b38c97d8a18f93ee06ae61ca7889352e35af1b678b276f7cbeb.scope: Deactivated successfully.
Feb 20 09:24:12 np0005625203.localdomain podman[229105]: 2026-02-20 09:24:12.41022736 +0000 UTC m=+0.054049912 container died 77edbe1d945e2b38c97d8a18f93ee06ae61ca7889352e35af1b678b276f7cbeb (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=nova_compute_init, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_id=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': 'ceca6e0a7e6c49fe059956dd2c05951172f310c281831cfd96e02226e564c84f'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.vendor=CentOS)
Feb 20 09:24:12 np0005625203.localdomain podman[229117]: 2026-02-20 09:24:12.491461142 +0000 UTC m=+0.081056597 container cleanup 77edbe1d945e2b38c97d8a18f93ee06ae61ca7889352e35af1b678b276f7cbeb (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=nova_compute_init, io.buildah.version=1.41.3, tcib_managed=true, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': 'ceca6e0a7e6c49fe059956dd2c05951172f310c281831cfd96e02226e564c84f'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_id=nova_compute_init)
Feb 20 09:24:12 np0005625203.localdomain systemd[1]: libpod-conmon-77edbe1d945e2b38c97d8a18f93ee06ae61ca7889352e35af1b678b276f7cbeb.scope: Deactivated successfully.
Feb 20 09:24:12 np0005625203.localdomain sudo[229056]: pam_unix(sudo:session): session closed for user root
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.050 228941 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.051 228941 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.051 228941 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.051 228941 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Feb 20 09:24:13 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-cd350f63fa5cb4de5fd35b027e0dd392fc094bab6ce558ae5afdf1311f190a48-merged.mount: Deactivated successfully.
Feb 20 09:24:13 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-77edbe1d945e2b38c97d8a18f93ee06ae61ca7889352e35af1b678b276f7cbeb-userdata-shm.mount: Deactivated successfully.
Feb 20 09:24:13 np0005625203.localdomain sshd[210036]: pam_unix(sshd:session): session closed for user zuul
Feb 20 09:24:13 np0005625203.localdomain systemd[1]: session-53.scope: Deactivated successfully.
Feb 20 09:24:13 np0005625203.localdomain systemd[1]: session-53.scope: Consumed 1min 38.015s CPU time.
Feb 20 09:24:13 np0005625203.localdomain systemd-logind[759]: Session 53 logged out. Waiting for processes to exit.
Feb 20 09:24:13 np0005625203.localdomain systemd-logind[759]: Removed session 53.
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.168 228941 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.190 228941 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.191 228941 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.617 228941 INFO nova.virt.driver [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Feb 20 09:24:13 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:24:13 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.735 228941 INFO nova.compute.provider_config [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.746 228941 WARNING nova.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.: nova.exception.TooOldComputeService: Current Nova version does not support computes older than Yoga but the minimum compute service level in your cell is 57 and the oldest supported service level is 61.
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.746 228941 DEBUG oslo_concurrency.lockutils [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.746 228941 DEBUG oslo_concurrency.lockutils [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.747 228941 DEBUG oslo_concurrency.lockutils [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.747 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.747 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.747 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.747 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.747 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.748 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.748 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.748 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.748 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.748 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.748 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.748 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.748 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.749 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.749 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.749 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.749 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.749 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.749 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.749 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] console_host                   = np0005625203.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.750 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.750 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.750 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.750 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.750 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.750 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.750 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.751 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.751 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.751 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.751 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.751 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.751 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.751 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.751 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.752 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.752 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.752 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.752 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] host                           = np0005625203.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.752 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.752 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.752 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.753 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.753 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.753 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.753 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.753 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.753 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.753 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.754 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.754 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.754 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.754 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.754 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.754 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.754 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.755 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.755 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.755 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.755 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.755 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.755 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.755 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.755 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.756 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.756 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.756 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.756 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.756 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.756 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.756 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.756 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.757 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.757 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.757 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.757 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.757 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.757 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.757 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.757 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.758 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] my_block_storage_ip            = 192.168.122.107 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.758 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] my_ip                          = 192.168.122.107 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.758 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.758 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.758 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.758 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.758 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.759 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.759 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.759 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.759 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.759 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.759 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.759 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.759 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.760 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.760 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.760 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.760 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.760 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.760 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.760 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.760 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.761 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.761 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.761 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.761 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.761 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.761 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.761 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.761 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.762 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.762 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.762 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.762 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.762 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.762 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.762 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.763 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.763 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.763 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.763 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.763 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.763 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.763 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.763 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.764 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.764 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.764 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.764 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.764 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.764 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.764 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.764 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.765 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.765 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.765 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.765 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.765 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.765 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.765 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.766 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.766 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.766 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.766 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.766 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.766 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.766 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.766 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.767 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.767 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.767 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.767 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.767 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.767 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.767 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.768 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.768 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.768 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.768 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.768 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.768 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.768 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.769 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.769 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.769 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.769 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.769 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.769 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.769 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.769 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.770 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.770 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.770 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.770 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.770 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.770 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.770 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.771 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.771 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.771 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.771 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.771 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.771 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.771 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.772 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.772 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.772 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.772 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.772 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.772 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.772 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.772 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.773 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.773 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.773 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.773 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.773 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.773 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.773 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.774 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.774 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.774 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.774 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.774 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.774 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.775 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.775 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.775 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.775 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.775 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.775 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.775 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.776 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.776 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.776 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.776 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.776 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.776 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.776 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.776 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.777 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.777 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cinder.os_region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.777 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.777 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.780 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.780 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.780 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.781 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.781 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.781 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.781 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.781 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.781 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.781 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.782 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.782 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.782 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.782 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.782 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.782 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.782 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.783 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.783 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.783 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.783 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.783 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.783 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.783 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.784 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.784 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.784 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.784 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.784 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.784 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.784 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.785 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.785 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.785 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.785 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.785 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.785 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.786 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.786 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.786 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.786 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.786 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.787 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.787 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.787 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.787 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.787 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.788 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.788 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.788 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.788 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.788 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.788 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.789 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.789 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.789 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.789 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.789 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.789 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.790 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.790 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.790 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.790 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.790 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.790 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.791 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.791 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.791 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.791 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.791 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.791 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.791 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.792 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.792 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.792 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.792 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.792 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain podman[229162]: 2026-02-20 09:24:13.788145742 +0000 UTC m=+0.099205612 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent)
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.792 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.793 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.793 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.793 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.793 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.793 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.793 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.793 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.794 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.794 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.794 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.794 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.794 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.794 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.795 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.795 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.795 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.795 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.795 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.795 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.795 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.795 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.796 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.796 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.796 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.796 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.796 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.796 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.796 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.797 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.797 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.797 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.797 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.797 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.797 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.798 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.798 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.798 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.798 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.798 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.798 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.798 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.798 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.799 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.799 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.799 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.799 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.799 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.799 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.799 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.800 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.800 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.800 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.800 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.800 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.800 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.801 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.801 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.801 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.801 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.801 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.801 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.802 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.802 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.802 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.802 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.802 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.803 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.803 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.803 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.803 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.803 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.803 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.803 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.804 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.804 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.804 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.804 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.804 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.804 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.804 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.805 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.805 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.805 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.805 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.805 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.805 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.805 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.806 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.806 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.806 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.806 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.806 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] barbican.barbican_region_name  = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.806 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.806 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.807 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.807 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.807 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.807 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.807 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.807 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.807 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.807 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.808 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.808 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.808 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.808 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.808 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.808 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.808 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.809 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.809 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.809 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.809 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.809 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.809 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.809 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.810 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.810 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.810 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.810 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.810 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.810 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.810 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.810 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.811 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.811 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.811 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.811 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.811 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.811 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.811 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.812 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.812 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.812 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.812 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.812 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.812 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.812 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.813 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.813 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.813 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.813 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.813 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.813 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.813 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.813 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.814 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.814 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.814 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.814 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.814 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.814 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.814 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.815 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.815 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.815 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.815 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.815 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.815 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.815 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.816 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.816 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.816 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.816 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.816 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.816 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.816 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.817 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.817 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.817 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.817 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.817 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.817 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.817 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.817 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.818 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.818 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.818 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.818 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.818 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.818 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.818 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.819 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.819 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.819 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.819 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.819 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.819 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.819 228941 WARNING oslo_config.cfg [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: live_migration_uri is deprecated for removal in favor of two other options that
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: and ``live_migration_inbound_addr`` respectively.
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: ).  Its value may be silently ignored in the future.
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.820 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.live_migration_uri     = qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.820 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.live_migration_with_native_tls = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.820 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.820 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.820 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.820 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.821 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.821 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.821 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.821 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.821 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.821 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.821 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.822 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.822 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.822 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.822 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.822 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.822 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.rbd_secret_uuid        = a8557ee9-b55d-5519-942c-cf8f6172f1d8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.822 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.823 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.823 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.823 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.823 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.823 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.823 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.823 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.823 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.824 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.824 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.824 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.824 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.824 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.824 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.824 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.825 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.825 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain podman[229162]: 2026-02-20 09:24:13.825255287 +0000 UTC m=+0.136315177 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.825 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.825 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.825 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.825 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.825 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.826 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.826 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.826 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.826 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.826 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.826 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.826 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.827 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.827 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.827 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.827 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.827 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.827 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.827 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.828 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.828 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.828 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.828 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.828 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.828 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.828 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.829 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.829 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.829 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.829 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.829 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.829 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.829 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.829 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.830 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.830 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.830 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.830 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.830 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.830 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.830 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.831 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.831 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.831 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.831 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.831 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.831 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.831 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.832 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.832 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.832 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.832 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.832 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.832 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] placement.auth_url             = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.832 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.833 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.833 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.833 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.833 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.833 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.833 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.833 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.834 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.834 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.834 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.834 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.834 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.834 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.835 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.835 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.835 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.835 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.835 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.835 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.836 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.836 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.836 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.836 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.836 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.836 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.837 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.837 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.837 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.837 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.837 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.837 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.837 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.838 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.838 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.838 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.838 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.838 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.838 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.838 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.839 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.839 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.839 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.839 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.839 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.839 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.839 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.839 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.840 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.840 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.840 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.840 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.840 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.840 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.841 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.841 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.841 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.841 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.841 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.841 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.841 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.842 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.842 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.842 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.842 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.842 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.842 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.842 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.843 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.843 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.843 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.843 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.843 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.843 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.843 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.844 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.844 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.844 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.844 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.844 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.844 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.844 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.844 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.845 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.845 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.845 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.845 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.845 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.845 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.846 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.846 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.846 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.846 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.846 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.846 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.846 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.846 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.847 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.847 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.847 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.847 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.847 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.847 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.847 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.847 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.848 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.848 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.848 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.848 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.848 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.848 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.849 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.849 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.849 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.849 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.849 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.849 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.849 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.849 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.850 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.850 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.850 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.850 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.850 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.850 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.850 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.851 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.851 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.851 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.851 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.851 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.851 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.851 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.851 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.852 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.852 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.852 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.852 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.852 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.852 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.852 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.852 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.853 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.853 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.853 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.853 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.853 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.853 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.853 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.854 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.854 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.854 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.854 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.854 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.854 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.854 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.855 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.855 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vnc.novncproxy_base_url        = http://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.855 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.855 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.855 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.855 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vnc.server_proxyclient_address = 192.168.122.107 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.856 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.856 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.856 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.856 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] workarounds.disable_compute_service_check_for_ffu = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.856 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.856 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.856 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.856 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.857 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.857 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.857 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.857 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.857 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.857 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.857 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.858 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.858 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.858 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.858 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.858 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.858 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.858 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.859 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.859 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.859 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.859 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.859 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.859 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.859 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.859 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.860 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.860 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.860 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.860 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.860 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.860 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.860 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.861 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.861 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.861 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.861 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.861 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.862 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.862 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.862 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.862 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.862 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.862 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.862 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.863 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.863 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.863 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.863 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.863 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.863 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.863 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.864 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.864 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.864 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.864 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.866 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.866 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.867 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.867 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.867 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.867 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.867 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.867 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.868 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.868 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.868 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.868 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.868 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.868 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.868 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.869 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.869 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.869 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.869 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.869 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.869 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.869 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.870 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.870 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.870 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.870 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.870 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.870 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.870 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.871 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.871 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_limit.auth_url            = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.871 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.871 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.871 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain podman[229163]: 2026-02-20 09:24:13.871529891 +0000 UTC m=+0.182436816 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.871 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.871 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.872 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.872 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.872 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.872 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.872 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.872 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.872 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.873 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.873 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.873 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.873 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.873 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.873 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.874 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.874 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.874 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.874 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.874 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.874 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.874 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.875 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.875 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.875 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.875 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.875 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.875 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.875 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.876 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.876 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.876 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.876 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.876 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.876 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.876 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.876 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.877 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.877 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.877 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.877 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.877 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.877 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.877 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.878 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.878 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.878 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.878 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.878 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.878 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.878 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.879 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.879 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.879 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.879 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.879 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.879 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.880 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.880 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.880 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.880 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.880 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.880 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.880 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.881 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.881 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.881 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.881 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.881 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.881 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.881 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.882 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.882 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.882 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.882 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.882 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.882 228941 DEBUG oslo_service.service [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.884 228941 INFO nova.service [-] Starting compute node (version 27.5.2-0.20260127144738.eaa65f0.el9)
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.897 228941 INFO nova.virt.node [None req-db5ae08e-aeac-447b-8df7-01ada35025b3 - - - - - -] Determined node identity e5d5157a-2df2-4f51-b5fb-cd2da3a8584e from /var/lib/nova/compute_id
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.898 228941 DEBUG nova.virt.libvirt.host [None req-db5ae08e-aeac-447b-8df7-01ada35025b3 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.898 228941 DEBUG nova.virt.libvirt.host [None req-db5ae08e-aeac-447b-8df7-01ada35025b3 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.898 228941 DEBUG nova.virt.libvirt.host [None req-db5ae08e-aeac-447b-8df7-01ada35025b3 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.898 228941 DEBUG nova.virt.libvirt.host [None req-db5ae08e-aeac-447b-8df7-01ada35025b3 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.907 228941 DEBUG nova.virt.libvirt.host [None req-db5ae08e-aeac-447b-8df7-01ada35025b3 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f8f99259070> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.909 228941 DEBUG nova.virt.libvirt.host [None req-db5ae08e-aeac-447b-8df7-01ada35025b3 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f8f99259070> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.910 228941 INFO nova.virt.libvirt.driver [None req-db5ae08e-aeac-447b-8df7-01ada35025b3 - - - - - -] Connection event '1' reason 'None'
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.918 228941 INFO nova.virt.libvirt.host [None req-db5ae08e-aeac-447b-8df7-01ada35025b3 - - - - - -] Libvirt host capabilities <capabilities>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:   <host>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     <uuid>a53ba227-4db8-45ed-bb70-5a295cbaca1c</uuid>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     <cpu>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <arch>x86_64</arch>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model>EPYC-Rome-v4</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <vendor>AMD</vendor>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <microcode version='16777317'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <signature family='23' model='49' stepping='0'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <maxphysaddr mode='emulate' bits='40'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature name='x2apic'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature name='tsc-deadline'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature name='osxsave'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature name='hypervisor'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature name='tsc_adjust'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature name='spec-ctrl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature name='stibp'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature name='arch-capabilities'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature name='ssbd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature name='cmp_legacy'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature name='topoext'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature name='virt-ssbd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature name='lbrv'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature name='tsc-scale'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature name='vmcb-clean'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature name='pause-filter'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature name='pfthreshold'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature name='svme-addr-chk'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature name='rdctl-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature name='skip-l1dfl-vmentry'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature name='mds-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature name='pschange-mc-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <pages unit='KiB' size='4'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <pages unit='KiB' size='2048'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <pages unit='KiB' size='1048576'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     </cpu>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     <power_management>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <suspend_mem/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <suspend_disk/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <suspend_hybrid/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     </power_management>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     <iommu support='no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     <migration_features>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <live/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <uri_transports>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <uri_transport>tcp</uri_transport>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <uri_transport>rdma</uri_transport>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </uri_transports>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     </migration_features>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     <topology>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <cells num='1'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <cell id='0'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:           <memory unit='KiB'>16116612</memory>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:           <pages unit='KiB' size='4'>4029153</pages>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:           <pages unit='KiB' size='2048'>0</pages>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:           <pages unit='KiB' size='1048576'>0</pages>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:           <distances>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:             <sibling id='0' value='10'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:           </distances>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:           <cpus num='8'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:           </cpus>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         </cell>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </cells>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     </topology>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     <cache>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     </cache>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     <secmodel>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model>selinux</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <doi>0</doi>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     </secmodel>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     <secmodel>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model>dac</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <doi>0</doi>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <baselabel type='kvm'>+107:+107</baselabel>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <baselabel type='qemu'>+107:+107</baselabel>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     </secmodel>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:   </host>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:   <guest>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     <os_type>hvm</os_type>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     <arch name='i686'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <wordsize>32</wordsize>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <domain type='qemu'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <domain type='kvm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     </arch>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     <features>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <pae/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <nonpae/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <acpi default='on' toggle='yes'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <apic default='on' toggle='no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <cpuselection/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <deviceboot/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <disksnapshot default='on' toggle='no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <externalSnapshot/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     </features>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:   </guest>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:   <guest>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     <os_type>hvm</os_type>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     <arch name='x86_64'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <wordsize>64</wordsize>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <domain type='qemu'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <domain type='kvm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     </arch>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     <features>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <acpi default='on' toggle='yes'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <apic default='on' toggle='no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <cpuselection/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <deviceboot/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <disksnapshot default='on' toggle='no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <externalSnapshot/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     </features>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:   </guest>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: </capabilities>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.923 228941 DEBUG nova.virt.libvirt.host [None req-db5ae08e-aeac-447b-8df7-01ada35025b3 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Feb 20 09:24:13 np0005625203.localdomain podman[229163]: 2026-02-20 09:24:13.927669099 +0000 UTC m=+0.238576034 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.930 228941 DEBUG nova.virt.libvirt.host [None req-db5ae08e-aeac-447b-8df7-01ada35025b3 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: <domainCapabilities>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:   <path>/usr/libexec/qemu-kvm</path>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:   <domain>kvm</domain>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:   <machine>pc-i440fx-rhel7.6.0</machine>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:   <arch>i686</arch>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:   <vcpu max='240'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:   <iothreads supported='yes'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:   <os supported='yes'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     <enum name='firmware'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     <loader supported='yes'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <enum name='type'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>rom</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>pflash</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <enum name='readonly'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>yes</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>no</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <enum name='secure'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>no</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     </loader>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:   </os>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:   <cpu>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     <mode name='host-passthrough' supported='yes'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <enum name='hostPassthroughMigratable'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>on</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>off</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     </mode>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     <mode name='maximum' supported='yes'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <enum name='maximumMigratable'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>on</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>off</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     </mode>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     <mode name='host-model' supported='yes'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model fallback='forbid'>EPYC-Rome</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <vendor>AMD</vendor>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <maxphysaddr mode='passthrough' limit='40'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='x2apic'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='tsc-deadline'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='hypervisor'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='tsc_adjust'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='spec-ctrl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='stibp'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='ssbd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='cmp_legacy'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='overflow-recov'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='succor'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='ibrs'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='amd-ssbd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='virt-ssbd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='lbrv'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='tsc-scale'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='vmcb-clean'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='pause-filter'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='pfthreshold'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='svme-addr-chk'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='lfence-always-serializing'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature policy='disable' name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     </mode>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     <mode name='custom' supported='yes'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Broadwell'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Broadwell-IBRS'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Broadwell-noTSX'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Broadwell-noTSX-IBRS'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Broadwell-v1'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Broadwell-v2'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Broadwell-v3'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Broadwell-v4'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Cascadelake-Server'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Cascadelake-Server-noTSX'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Cascadelake-Server-v1'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Cascadelake-Server-v2'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Cascadelake-Server-v3'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Cascadelake-Server-v4'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Cascadelake-Server-v5'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='ClearwaterForest'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-ifma'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-ne-convert'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni-int16'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni-int8'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='bhi-ctrl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='bhi-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='cldemote'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='cmpccxadd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ddpd-u'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fbsdp-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrs'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='gds-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='intel-psfd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ipred-ctrl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='lam'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='mcdt-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='movdir64b'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='movdiri'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pbrsb-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='prefetchiti'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='psdp-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='rfds-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='rrsba-ctrl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='serialize'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='sha512'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='sm3'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='sm4'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='ClearwaterForest-v1'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-ifma'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-ne-convert'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni-int16'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni-int8'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='bhi-ctrl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='bhi-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='cldemote'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='cmpccxadd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ddpd-u'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fbsdp-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrs'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='gds-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='intel-psfd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ipred-ctrl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='lam'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='mcdt-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='movdir64b'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='movdiri'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pbrsb-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='prefetchiti'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='psdp-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='rfds-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='rrsba-ctrl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='serialize'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='sha512'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='sm3'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='sm4'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Cooperlake'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Cooperlake-v1'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Cooperlake-v2'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Denverton'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='mpx'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Denverton-v1'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='mpx'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Denverton-v2'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Denverton-v3'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Dhyana-v2'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-Genoa'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='amd-psfd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='auto-ibrs'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='no-nested-data-bp'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='null-sel-clr-base'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='stibp-always-on'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-Genoa-v1'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='amd-psfd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='auto-ibrs'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='no-nested-data-bp'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='null-sel-clr-base'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='stibp-always-on'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-Genoa-v2'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='amd-psfd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='auto-ibrs'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fs-gs-base-ns'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='no-nested-data-bp'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='null-sel-clr-base'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='perfmon-v2'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='stibp-always-on'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-Milan'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-Milan-v1'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-Milan-v2'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='amd-psfd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='no-nested-data-bp'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='null-sel-clr-base'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='stibp-always-on'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-Milan-v3'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='amd-psfd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='no-nested-data-bp'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='null-sel-clr-base'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='stibp-always-on'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-Rome'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-Rome-v1'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-Rome-v2'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-Rome-v3'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-Turin'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='amd-psfd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='auto-ibrs'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vp2intersect'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fs-gs-base-ns'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ibpb-brtype'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='movdir64b'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='movdiri'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='no-nested-data-bp'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='null-sel-clr-base'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='perfmon-v2'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='prefetchi'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='sbpb'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='srso-user-kernel-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='stibp-always-on'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-Turin-v1'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='amd-psfd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='auto-ibrs'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vp2intersect'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fs-gs-base-ns'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ibpb-brtype'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='movdir64b'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='movdiri'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='no-nested-data-bp'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='null-sel-clr-base'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='perfmon-v2'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='prefetchi'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='sbpb'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='srso-user-kernel-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='stibp-always-on'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-v3'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-v4'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-v5'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='GraniteRapids'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-bf16'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-fp16'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-int8'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-tile'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-fp16'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fbsdp-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrc'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrs'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fzrm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='mcdt-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pbrsb-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='prefetchiti'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='psdp-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='serialize'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xfd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='GraniteRapids-v1'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-bf16'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-fp16'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-int8'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-tile'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-fp16'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fbsdp-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrc'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrs'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fzrm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='mcdt-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pbrsb-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='prefetchiti'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='psdp-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='serialize'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xfd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='GraniteRapids-v2'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-bf16'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-fp16'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-int8'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-tile'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx10'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx10-128'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx10-256'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx10-512'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-fp16'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='cldemote'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fbsdp-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrc'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrs'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fzrm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='mcdt-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='movdir64b'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='movdiri'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pbrsb-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='prefetchiti'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='psdp-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='serialize'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xfd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='GraniteRapids-v3'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-bf16'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-fp16'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-int8'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-tile'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx10'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx10-128'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx10-256'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx10-512'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-fp16'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='cldemote'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fbsdp-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrc'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrs'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fzrm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='mcdt-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='movdir64b'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='movdiri'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pbrsb-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='prefetchiti'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='psdp-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='serialize'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xfd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Haswell'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Haswell-IBRS'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Haswell-noTSX'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Haswell-noTSX-IBRS'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Haswell-v1'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Haswell-v2'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Haswell-v3'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Haswell-v4'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Icelake-Server'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Icelake-Server-noTSX'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:13 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Icelake-Server-v1'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Icelake-Server-v2'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Icelake-Server-v3'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Icelake-Server-v4'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Icelake-Server-v5'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Icelake-Server-v6'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Icelake-Server-v7'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='IvyBridge'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='IvyBridge-IBRS'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='IvyBridge-v1'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='IvyBridge-v2'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='KnightsMill'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-4fmaps'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-4vnniw'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512er'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512pf'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='KnightsMill-v1'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-4fmaps'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-4vnniw'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512er'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512pf'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Opteron_G4'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fma4'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xop'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Opteron_G4-v1'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fma4'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xop'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Opteron_G5'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fma4'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='tbm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xop'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Opteron_G5-v1'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fma4'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='tbm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xop'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='SapphireRapids'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-bf16'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-int8'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-tile'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-fp16'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrc'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrs'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fzrm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='serialize'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xfd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='SapphireRapids-v1'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-bf16'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-int8'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-tile'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-fp16'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrc'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrs'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fzrm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='serialize'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xfd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='SapphireRapids-v2'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-bf16'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-int8'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-tile'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-fp16'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fbsdp-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrc'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrs'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fzrm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='psdp-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='serialize'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xfd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='SapphireRapids-v3'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-bf16'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-int8'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-tile'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-fp16'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='cldemote'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fbsdp-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrc'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrs'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fzrm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='movdir64b'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='movdiri'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='psdp-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='serialize'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xfd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='SapphireRapids-v4'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-bf16'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-int8'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-tile'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-fp16'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='cldemote'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fbsdp-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrc'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrs'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fzrm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='movdir64b'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='movdiri'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='psdp-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='serialize'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xfd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='SierraForest'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-ifma'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-ne-convert'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni-int8'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='cmpccxadd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fbsdp-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrs'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='mcdt-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pbrsb-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='psdp-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='serialize'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='SierraForest-v1'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-ifma'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-ne-convert'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni-int8'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='cmpccxadd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fbsdp-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrs'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='mcdt-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pbrsb-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='psdp-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='serialize'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='SierraForest-v2'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-ifma'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-ne-convert'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni-int8'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='bhi-ctrl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='cldemote'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='cmpccxadd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fbsdp-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrs'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='gds-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='intel-psfd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ipred-ctrl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='lam'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='mcdt-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='movdir64b'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='movdiri'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pbrsb-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='psdp-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='rfds-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='rrsba-ctrl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='serialize'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='SierraForest-v3'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-ifma'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-ne-convert'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni-int8'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='bhi-ctrl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='cldemote'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='cmpccxadd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fbsdp-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrs'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='gds-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='intel-psfd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ipred-ctrl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='lam'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='mcdt-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='movdir64b'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='movdiri'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pbrsb-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='psdp-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='rfds-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='rrsba-ctrl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='serialize'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Skylake-Client'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Skylake-Client-IBRS'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Skylake-Client-v1'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Skylake-Client-v2'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Skylake-Client-v3'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Skylake-Client-v4'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Skylake-Server'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Skylake-Server-IBRS'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Skylake-Server-v1'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Skylake-Server-v2'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Skylake-Server-v3'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Skylake-Server-v4'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Skylake-Server-v5'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Snowridge'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='cldemote'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='core-capability'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='movdir64b'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='movdiri'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='mpx'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='split-lock-detect'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Snowridge-v1'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='cldemote'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='core-capability'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='movdir64b'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='movdiri'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='mpx'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='split-lock-detect'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Snowridge-v2'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='cldemote'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='core-capability'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='movdir64b'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='movdiri'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='split-lock-detect'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Snowridge-v3'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='cldemote'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='core-capability'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='movdir64b'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='movdiri'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='split-lock-detect'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Snowridge-v4'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='cldemote'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='movdir64b'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='movdiri'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='athlon'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='3dnow'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='3dnowext'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='athlon-v1'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='3dnow'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='3dnowext'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='core2duo'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='core2duo-v1'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='coreduo'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='coreduo-v1'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='n270'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='n270-v1'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='phenom'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='3dnow'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='3dnowext'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='phenom-v1'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='3dnow'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='3dnowext'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     </mode>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:   </cpu>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:   <memoryBacking supported='yes'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     <enum name='sourceType'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <value>file</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <value>anonymous</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <value>memfd</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     </enum>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:   </memoryBacking>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:   <devices>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     <disk supported='yes'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <enum name='diskDevice'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>disk</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>cdrom</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>floppy</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>lun</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <enum name='bus'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>ide</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>fdc</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>scsi</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>virtio</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>usb</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>sata</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <enum name='model'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>virtio</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>virtio-transitional</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>virtio-non-transitional</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     </disk>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     <graphics supported='yes'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <enum name='type'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>vnc</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>egl-headless</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>dbus</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     </graphics>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     <video supported='yes'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <enum name='modelType'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>vga</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>cirrus</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>virtio</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>none</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>bochs</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>ramfb</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     </video>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     <hostdev supported='yes'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <enum name='mode'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>subsystem</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <enum name='startupPolicy'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>default</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>mandatory</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>requisite</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>optional</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <enum name='subsysType'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>usb</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>pci</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>scsi</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <enum name='capsType'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <enum name='pciBackend'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     </hostdev>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     <rng supported='yes'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <enum name='model'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>virtio</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>virtio-transitional</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>virtio-non-transitional</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <enum name='backendModel'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>random</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>egd</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>builtin</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     </rng>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     <filesystem supported='yes'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <enum name='driverType'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>path</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>handle</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>virtiofs</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     </filesystem>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     <tpm supported='yes'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <enum name='model'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>tpm-tis</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>tpm-crb</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <enum name='backendModel'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>emulator</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>external</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <enum name='backendVersion'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>2.0</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     </tpm>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     <redirdev supported='yes'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <enum name='bus'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>usb</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     </redirdev>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     <channel supported='yes'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <enum name='type'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>pty</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>unix</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     </channel>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     <crypto supported='yes'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <enum name='model'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <enum name='type'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>qemu</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <enum name='backendModel'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>builtin</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     </crypto>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     <interface supported='yes'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <enum name='backendType'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>default</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>passt</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     </interface>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     <panic supported='yes'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <enum name='model'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>isa</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>hyperv</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     </panic>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     <console supported='yes'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <enum name='type'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>null</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>vc</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>pty</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>dev</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>file</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>pipe</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>stdio</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>udp</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>tcp</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>unix</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>qemu-vdagent</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>dbus</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     </console>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:   </devices>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:   <features>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     <gic supported='no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     <vmcoreinfo supported='yes'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     <genid supported='yes'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     <backingStoreInput supported='yes'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     <backup supported='yes'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     <async-teardown supported='yes'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     <s390-pv supported='no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     <ps2 supported='yes'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     <tdx supported='no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     <sev supported='no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     <sgx supported='no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     <hyperv supported='yes'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <enum name='features'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>relaxed</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>vapic</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>spinlocks</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>vpindex</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>runtime</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>synic</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>stimer</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>reset</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>vendor_id</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>frequencies</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>reenlightenment</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>tlbflush</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>ipi</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>avic</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>emsr_bitmap</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>xmm_input</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <defaults>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <spinlocks>4095</spinlocks>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <stimer_direct>on</stimer_direct>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <tlbflush_direct>off</tlbflush_direct>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <tlbflush_extended>off</tlbflush_extended>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <vendor_id>Linux KVM Hv</vendor_id>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </defaults>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     </hyperv>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     <launchSecurity supported='no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:   </features>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: </domainCapabilities>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.935 228941 DEBUG nova.virt.libvirt.volume.mount [None req-db5ae08e-aeac-447b-8df7-01ada35025b3 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.937 228941 DEBUG nova.virt.libvirt.host [None req-db5ae08e-aeac-447b-8df7-01ada35025b3 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]: <domainCapabilities>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:   <path>/usr/libexec/qemu-kvm</path>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:   <domain>kvm</domain>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:   <machine>pc-q35-rhel9.8.0</machine>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:   <arch>i686</arch>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:   <vcpu max='1024'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:   <iothreads supported='yes'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:   <os supported='yes'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     <enum name='firmware'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     <loader supported='yes'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <enum name='type'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>rom</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>pflash</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <enum name='readonly'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>yes</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>no</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <enum name='secure'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>no</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     </loader>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:   </os>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:   <cpu>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     <mode name='host-passthrough' supported='yes'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <enum name='hostPassthroughMigratable'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>on</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>off</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     </mode>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     <mode name='maximum' supported='yes'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <enum name='maximumMigratable'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>on</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <value>off</value>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     </mode>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     <mode name='host-model' supported='yes'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model fallback='forbid'>EPYC-Rome</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <vendor>AMD</vendor>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <maxphysaddr mode='passthrough' limit='40'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='x2apic'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='tsc-deadline'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='hypervisor'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='tsc_adjust'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='spec-ctrl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='stibp'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='ssbd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='cmp_legacy'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='overflow-recov'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='succor'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='ibrs'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='amd-ssbd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='virt-ssbd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='lbrv'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='tsc-scale'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='vmcb-clean'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='pause-filter'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='pfthreshold'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='svme-addr-chk'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='lfence-always-serializing'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <feature policy='disable' name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     </mode>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:     <mode name='custom' supported='yes'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Broadwell'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Broadwell-IBRS'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Broadwell-noTSX'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Broadwell-noTSX-IBRS'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Broadwell-v1'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Broadwell-v2'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Broadwell-v3'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Broadwell-v4'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Cascadelake-Server'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Cascadelake-Server-noTSX'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Cascadelake-Server-v1'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Cascadelake-Server-v2'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Cascadelake-Server-v3'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Cascadelake-Server-v4'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Cascadelake-Server-v5'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='ClearwaterForest'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-ifma'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-ne-convert'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni-int16'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni-int8'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='bhi-ctrl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='bhi-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='cldemote'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='cmpccxadd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ddpd-u'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fbsdp-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrs'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='gds-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='intel-psfd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ipred-ctrl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='lam'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='mcdt-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='movdir64b'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='movdiri'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pbrsb-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='prefetchiti'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='psdp-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='rfds-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='rrsba-ctrl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='serialize'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='sha512'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='sm3'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='sm4'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='ClearwaterForest-v1'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-ifma'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-ne-convert'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni-int16'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni-int8'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='bhi-ctrl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='bhi-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='cldemote'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='cmpccxadd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ddpd-u'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fbsdp-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrs'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='gds-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='intel-psfd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ipred-ctrl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='lam'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='mcdt-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='movdir64b'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='movdiri'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pbrsb-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='prefetchiti'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='psdp-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='rfds-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='rrsba-ctrl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='serialize'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='sha512'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='sm3'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='sm4'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Cooperlake'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Cooperlake-v1'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Cooperlake-v2'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Denverton'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='mpx'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Denverton-v1'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='mpx'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Denverton-v2'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Denverton-v3'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='Dhyana-v2'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-Genoa'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='amd-psfd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='auto-ibrs'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='no-nested-data-bp'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='null-sel-clr-base'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='stibp-always-on'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-Genoa-v1'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='amd-psfd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='auto-ibrs'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='no-nested-data-bp'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='null-sel-clr-base'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='stibp-always-on'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-Genoa-v2'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='amd-psfd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='auto-ibrs'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fs-gs-base-ns'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='no-nested-data-bp'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='null-sel-clr-base'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='perfmon-v2'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='stibp-always-on'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-Milan'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-Milan-v1'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-Milan-v2'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='amd-psfd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='no-nested-data-bp'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='null-sel-clr-base'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='stibp-always-on'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-Milan-v3'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='amd-psfd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='no-nested-data-bp'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='null-sel-clr-base'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='stibp-always-on'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-Rome'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-Rome-v1'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-Rome-v2'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-Rome-v3'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-Turin'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='amd-psfd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='auto-ibrs'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vp2intersect'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fs-gs-base-ns'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ibpb-brtype'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='movdir64b'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='movdiri'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='no-nested-data-bp'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='null-sel-clr-base'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='perfmon-v2'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='prefetchi'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='sbpb'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='srso-user-kernel-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='stibp-always-on'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-Turin-v1'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='amd-psfd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='auto-ibrs'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vp2intersect'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fs-gs-base-ns'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='ibpb-brtype'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='movdir64b'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='movdiri'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='no-nested-data-bp'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='null-sel-clr-base'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='perfmon-v2'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='prefetchi'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='sbpb'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='srso-user-kernel-no'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='stibp-always-on'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-v3'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-v4'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-v5'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:       <blockers model='GraniteRapids'>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-bf16'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-fp16'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-int8'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-tile'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-fp16'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:13 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fbsdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrc'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrs'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fzrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='mcdt-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pbrsb-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='prefetchiti'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='psdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='serialize'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xfd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='GraniteRapids-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-fp16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-int8'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-tile'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-fp16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fbsdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrc'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrs'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fzrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='mcdt-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pbrsb-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='prefetchiti'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='psdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='serialize'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xfd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='GraniteRapids-v2'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-fp16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-int8'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-tile'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx10'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx10-128'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx10-256'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx10-512'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-fp16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='cldemote'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fbsdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrc'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrs'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fzrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='mcdt-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdir64b'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdiri'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pbrsb-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='prefetchiti'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='psdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='serialize'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xfd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='GraniteRapids-v3'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-fp16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-int8'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-tile'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx10'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx10-128'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx10-256'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx10-512'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-fp16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='cldemote'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fbsdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrc'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrs'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fzrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='mcdt-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdir64b'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdiri'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pbrsb-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='prefetchiti'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='psdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='serialize'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xfd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Haswell'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Haswell-IBRS'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Haswell-noTSX'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Haswell-noTSX-IBRS'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Haswell-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Haswell-v2'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Haswell-v3'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Haswell-v4'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Icelake-Server'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Icelake-Server-noTSX'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Icelake-Server-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Icelake-Server-v2'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Icelake-Server-v3'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Icelake-Server-v4'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Icelake-Server-v5'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Icelake-Server-v6'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Icelake-Server-v7'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='IvyBridge'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='IvyBridge-IBRS'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='IvyBridge-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='IvyBridge-v2'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='KnightsMill'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-4fmaps'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-4vnniw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512er'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512pf'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='KnightsMill-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-4fmaps'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-4vnniw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512er'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512pf'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Opteron_G4'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fma4'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xop'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Opteron_G4-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fma4'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xop'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Opteron_G5'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fma4'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='tbm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xop'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Opteron_G5-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fma4'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='tbm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xop'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='SapphireRapids'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-int8'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-tile'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-fp16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrc'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrs'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fzrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='serialize'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xfd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='SapphireRapids-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-int8'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-tile'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-fp16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrc'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrs'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fzrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='serialize'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xfd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='SapphireRapids-v2'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-int8'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-tile'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-fp16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fbsdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrc'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrs'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fzrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='psdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='serialize'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xfd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='SapphireRapids-v3'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-int8'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-tile'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-fp16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='cldemote'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fbsdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrc'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrs'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fzrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdir64b'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdiri'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='psdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='serialize'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xfd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='SapphireRapids-v4'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-int8'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-tile'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-fp16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='cldemote'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fbsdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrc'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrs'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fzrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdir64b'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdiri'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='psdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='serialize'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xfd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='SierraForest'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-ne-convert'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni-int8'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='cmpccxadd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fbsdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrs'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='mcdt-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pbrsb-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='psdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='serialize'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='SierraForest-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-ne-convert'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni-int8'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='cmpccxadd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fbsdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrs'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='mcdt-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pbrsb-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='psdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='serialize'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='SierraForest-v2'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-ne-convert'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni-int8'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='bhi-ctrl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='cldemote'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='cmpccxadd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fbsdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrs'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gds-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='intel-psfd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ipred-ctrl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='lam'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='mcdt-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdir64b'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdiri'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pbrsb-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='psdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rfds-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rrsba-ctrl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='serialize'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='SierraForest-v3'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-ne-convert'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni-int8'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='bhi-ctrl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='cldemote'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='cmpccxadd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fbsdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrs'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gds-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='intel-psfd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ipred-ctrl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='lam'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='mcdt-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdir64b'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdiri'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pbrsb-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='psdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rfds-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rrsba-ctrl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='serialize'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Skylake-Client'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Skylake-Client-IBRS'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Skylake-Client-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Skylake-Client-v2'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Skylake-Client-v3'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Skylake-Client-v4'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Skylake-Server'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Skylake-Server-IBRS'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Skylake-Server-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Skylake-Server-v2'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Skylake-Server-v3'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Skylake-Server-v4'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Skylake-Server-v5'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Snowridge'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='cldemote'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='core-capability'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdir64b'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdiri'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='mpx'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='split-lock-detect'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Snowridge-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='cldemote'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='core-capability'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdir64b'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdiri'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='mpx'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='split-lock-detect'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Snowridge-v2'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='cldemote'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='core-capability'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdir64b'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdiri'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='split-lock-detect'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Snowridge-v3'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='cldemote'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='core-capability'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdir64b'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdiri'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='split-lock-detect'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Snowridge-v4'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='cldemote'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdir64b'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdiri'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='athlon'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='3dnow'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='3dnowext'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='athlon-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='3dnow'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='3dnowext'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='core2duo'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='core2duo-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='coreduo'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='coreduo-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='n270'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='n270-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='phenom'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='3dnow'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='3dnowext'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='phenom-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='3dnow'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='3dnowext'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     </mode>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:   </cpu>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:   <memoryBacking supported='yes'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <enum name='sourceType'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <value>file</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <value>anonymous</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <value>memfd</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:   </memoryBacking>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:   <devices>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <disk supported='yes'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='diskDevice'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>disk</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>cdrom</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>floppy</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>lun</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='bus'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>fdc</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>scsi</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>virtio</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>usb</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>sata</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='model'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>virtio</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>virtio-transitional</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>virtio-non-transitional</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     </disk>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <graphics supported='yes'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='type'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>vnc</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>egl-headless</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>dbus</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     </graphics>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <video supported='yes'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='modelType'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>vga</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>cirrus</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>virtio</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>none</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>bochs</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>ramfb</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     </video>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <hostdev supported='yes'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='mode'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>subsystem</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='startupPolicy'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>default</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>mandatory</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>requisite</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>optional</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='subsysType'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>usb</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>pci</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>scsi</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='capsType'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='pciBackend'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     </hostdev>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <rng supported='yes'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='model'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>virtio</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>virtio-transitional</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>virtio-non-transitional</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='backendModel'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>random</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>egd</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>builtin</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     </rng>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <filesystem supported='yes'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='driverType'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>path</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>handle</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>virtiofs</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     </filesystem>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <tpm supported='yes'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='model'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>tpm-tis</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>tpm-crb</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='backendModel'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>emulator</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>external</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='backendVersion'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>2.0</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     </tpm>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <redirdev supported='yes'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='bus'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>usb</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     </redirdev>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <channel supported='yes'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='type'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>pty</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>unix</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     </channel>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <crypto supported='yes'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='model'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='type'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>qemu</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='backendModel'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>builtin</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     </crypto>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <interface supported='yes'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='backendType'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>default</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>passt</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     </interface>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <panic supported='yes'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='model'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>isa</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>hyperv</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     </panic>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <console supported='yes'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='type'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>null</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>vc</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>pty</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>dev</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>file</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>pipe</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>stdio</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>udp</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>tcp</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>unix</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>qemu-vdagent</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>dbus</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     </console>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:   </devices>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:   <features>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <gic supported='no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <vmcoreinfo supported='yes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <genid supported='yes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <backingStoreInput supported='yes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <backup supported='yes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <async-teardown supported='yes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <s390-pv supported='no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <ps2 supported='yes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <tdx supported='no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <sev supported='no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <sgx supported='no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <hyperv supported='yes'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='features'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>relaxed</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>vapic</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>spinlocks</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>vpindex</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>runtime</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>synic</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>stimer</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>reset</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>vendor_id</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>frequencies</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>reenlightenment</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>tlbflush</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>ipi</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>avic</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>emsr_bitmap</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>xmm_input</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <defaults>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <spinlocks>4095</spinlocks>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <stimer_direct>on</stimer_direct>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <tlbflush_direct>off</tlbflush_direct>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <tlbflush_extended>off</tlbflush_extended>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <vendor_id>Linux KVM Hv</vendor_id>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </defaults>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     </hyperv>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <launchSecurity supported='no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:   </features>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]: </domainCapabilities>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.982 228941 DEBUG nova.virt.libvirt.host [None req-db5ae08e-aeac-447b-8df7-01ada35025b3 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:13.986 228941 DEBUG nova.virt.libvirt.host [None req-db5ae08e-aeac-447b-8df7-01ada35025b3 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]: <domainCapabilities>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:   <path>/usr/libexec/qemu-kvm</path>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:   <domain>kvm</domain>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:   <machine>pc-i440fx-rhel7.6.0</machine>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:   <arch>x86_64</arch>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:   <vcpu max='240'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:   <iothreads supported='yes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:   <os supported='yes'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <enum name='firmware'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <loader supported='yes'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='type'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>rom</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>pflash</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='readonly'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>yes</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>no</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='secure'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>no</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     </loader>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:   </os>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:   <cpu>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <mode name='host-passthrough' supported='yes'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='hostPassthroughMigratable'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>on</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>off</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     </mode>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <mode name='maximum' supported='yes'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='maximumMigratable'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>on</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>off</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     </mode>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <mode name='host-model' supported='yes'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model fallback='forbid'>EPYC-Rome</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <vendor>AMD</vendor>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <maxphysaddr mode='passthrough' limit='40'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='x2apic'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='tsc-deadline'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='hypervisor'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='tsc_adjust'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='spec-ctrl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='stibp'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='ssbd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='cmp_legacy'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='overflow-recov'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='succor'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='ibrs'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='amd-ssbd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='virt-ssbd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='lbrv'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='tsc-scale'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='vmcb-clean'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='pause-filter'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='pfthreshold'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='svme-addr-chk'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='lfence-always-serializing'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <feature policy='disable' name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     </mode>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <mode name='custom' supported='yes'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Broadwell'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Broadwell-IBRS'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Broadwell-noTSX'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Broadwell-noTSX-IBRS'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Broadwell-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Broadwell-v2'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Broadwell-v3'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Broadwell-v4'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Cascadelake-Server'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Cascadelake-Server-noTSX'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Cascadelake-Server-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Cascadelake-Server-v2'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Cascadelake-Server-v3'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Cascadelake-Server-v4'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Cascadelake-Server-v5'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='ClearwaterForest'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-ne-convert'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni-int16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni-int8'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='bhi-ctrl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='bhi-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='cldemote'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='cmpccxadd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ddpd-u'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fbsdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrs'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gds-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='intel-psfd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ipred-ctrl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='lam'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='mcdt-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdir64b'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdiri'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pbrsb-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='prefetchiti'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='psdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rfds-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rrsba-ctrl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='serialize'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='sha512'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='sm3'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='sm4'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='ClearwaterForest-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-ne-convert'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni-int16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni-int8'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='bhi-ctrl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='bhi-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='cldemote'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='cmpccxadd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ddpd-u'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fbsdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrs'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gds-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='intel-psfd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ipred-ctrl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='lam'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='mcdt-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdir64b'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdiri'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pbrsb-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='prefetchiti'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='psdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rfds-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rrsba-ctrl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='serialize'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='sha512'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='sm3'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='sm4'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Cooperlake'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Cooperlake-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Cooperlake-v2'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Denverton'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='mpx'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Denverton-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='mpx'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Denverton-v2'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Denverton-v3'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Dhyana-v2'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-Genoa'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amd-psfd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='auto-ibrs'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='no-nested-data-bp'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='null-sel-clr-base'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='stibp-always-on'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-Genoa-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amd-psfd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='auto-ibrs'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='no-nested-data-bp'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='null-sel-clr-base'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='stibp-always-on'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-Genoa-v2'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amd-psfd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='auto-ibrs'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fs-gs-base-ns'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='no-nested-data-bp'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='null-sel-clr-base'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='perfmon-v2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='stibp-always-on'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-Milan'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-Milan-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-Milan-v2'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amd-psfd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='no-nested-data-bp'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='null-sel-clr-base'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='stibp-always-on'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-Milan-v3'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amd-psfd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='no-nested-data-bp'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='null-sel-clr-base'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='stibp-always-on'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-Rome'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-Rome-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-Rome-v2'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-Rome-v3'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-Turin'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amd-psfd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='auto-ibrs'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vp2intersect'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fs-gs-base-ns'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibpb-brtype'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdir64b'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdiri'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='no-nested-data-bp'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='null-sel-clr-base'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='perfmon-v2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='prefetchi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='sbpb'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='srso-user-kernel-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='stibp-always-on'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-Turin-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amd-psfd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='auto-ibrs'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vp2intersect'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fs-gs-base-ns'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibpb-brtype'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdir64b'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdiri'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='no-nested-data-bp'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='null-sel-clr-base'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='perfmon-v2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='prefetchi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='sbpb'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='srso-user-kernel-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='stibp-always-on'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-v3'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-v4'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-v5'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='GraniteRapids'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-fp16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-int8'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-tile'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-fp16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fbsdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrc'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrs'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fzrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='mcdt-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pbrsb-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='prefetchiti'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='psdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='serialize'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xfd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='GraniteRapids-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-fp16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-int8'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-tile'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-fp16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fbsdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrc'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrs'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fzrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='mcdt-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pbrsb-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='prefetchiti'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='psdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='serialize'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xfd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='GraniteRapids-v2'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-fp16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-int8'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-tile'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx10'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx10-128'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx10-256'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx10-512'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-fp16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='cldemote'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fbsdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrc'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrs'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fzrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='mcdt-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdir64b'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdiri'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pbrsb-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='prefetchiti'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='psdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='serialize'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xfd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='GraniteRapids-v3'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-fp16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-int8'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-tile'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx10'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx10-128'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx10-256'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx10-512'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-fp16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='cldemote'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fbsdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrc'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrs'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fzrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='mcdt-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdir64b'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdiri'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pbrsb-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='prefetchiti'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='psdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='serialize'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xfd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Haswell'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Haswell-IBRS'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Haswell-noTSX'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Haswell-noTSX-IBRS'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Haswell-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Haswell-v2'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Haswell-v3'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Haswell-v4'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Icelake-Server'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Icelake-Server-noTSX'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Icelake-Server-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Icelake-Server-v2'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Icelake-Server-v3'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Icelake-Server-v4'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Icelake-Server-v5'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Icelake-Server-v6'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Icelake-Server-v7'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='IvyBridge'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='IvyBridge-IBRS'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='IvyBridge-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='IvyBridge-v2'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='KnightsMill'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-4fmaps'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-4vnniw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512er'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512pf'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='KnightsMill-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-4fmaps'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-4vnniw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512er'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512pf'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Opteron_G4'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fma4'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xop'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Opteron_G4-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fma4'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xop'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Opteron_G5'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fma4'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='tbm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xop'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Opteron_G5-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fma4'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='tbm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xop'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='SapphireRapids'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-int8'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-tile'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-fp16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrc'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrs'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fzrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='serialize'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xfd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='SapphireRapids-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-int8'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-tile'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-fp16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrc'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrs'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fzrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='serialize'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xfd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='SapphireRapids-v2'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-int8'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-tile'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-fp16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fbsdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrc'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrs'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fzrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='psdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='serialize'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xfd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='SapphireRapids-v3'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-int8'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-tile'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-fp16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='cldemote'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fbsdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrc'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrs'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fzrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdir64b'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdiri'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='psdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='serialize'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xfd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='SapphireRapids-v4'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-int8'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-tile'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-fp16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='cldemote'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fbsdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrc'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrs'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fzrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdir64b'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdiri'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='psdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='serialize'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xfd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='SierraForest'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-ne-convert'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni-int8'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='cmpccxadd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fbsdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrs'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='mcdt-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pbrsb-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='psdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='serialize'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='SierraForest-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-ne-convert'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni-int8'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='cmpccxadd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fbsdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrs'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='mcdt-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pbrsb-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='psdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='serialize'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='SierraForest-v2'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-ne-convert'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni-int8'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='bhi-ctrl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='cldemote'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='cmpccxadd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fbsdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrs'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gds-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='intel-psfd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ipred-ctrl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='lam'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='mcdt-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdir64b'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdiri'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pbrsb-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='psdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rfds-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rrsba-ctrl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='serialize'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='SierraForest-v3'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-ne-convert'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni-int8'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='bhi-ctrl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='cldemote'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='cmpccxadd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fbsdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrs'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gds-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='intel-psfd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ipred-ctrl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='lam'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='mcdt-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdir64b'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdiri'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pbrsb-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='psdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rfds-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rrsba-ctrl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='serialize'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Skylake-Client'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Skylake-Client-IBRS'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Skylake-Client-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Skylake-Client-v2'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Skylake-Client-v3'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Skylake-Client-v4'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Skylake-Server'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Skylake-Server-IBRS'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Skylake-Server-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Skylake-Server-v2'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Skylake-Server-v3'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Skylake-Server-v4'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Skylake-Server-v5'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Snowridge'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='cldemote'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='core-capability'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdir64b'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdiri'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='mpx'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='split-lock-detect'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Snowridge-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='cldemote'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='core-capability'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdir64b'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdiri'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='mpx'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='split-lock-detect'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Snowridge-v2'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='cldemote'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='core-capability'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdir64b'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdiri'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='split-lock-detect'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Snowridge-v3'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='cldemote'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='core-capability'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdir64b'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdiri'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='split-lock-detect'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Snowridge-v4'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='cldemote'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdir64b'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdiri'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='athlon'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='3dnow'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='3dnowext'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='athlon-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='3dnow'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='3dnowext'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='core2duo'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='core2duo-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='coreduo'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='coreduo-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='n270'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='n270-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='phenom'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='3dnow'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='3dnowext'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='phenom-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='3dnow'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='3dnowext'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     </mode>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:   </cpu>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:   <memoryBacking supported='yes'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <enum name='sourceType'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <value>file</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <value>anonymous</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <value>memfd</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:   </memoryBacking>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:   <devices>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <disk supported='yes'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='diskDevice'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>disk</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>cdrom</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>floppy</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>lun</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='bus'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>ide</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>fdc</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>scsi</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>virtio</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>usb</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>sata</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='model'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>virtio</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>virtio-transitional</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>virtio-non-transitional</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     </disk>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <graphics supported='yes'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='type'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>vnc</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>egl-headless</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>dbus</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     </graphics>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <video supported='yes'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='modelType'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>vga</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>cirrus</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>virtio</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>none</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>bochs</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>ramfb</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     </video>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <hostdev supported='yes'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='mode'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>subsystem</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='startupPolicy'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>default</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>mandatory</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>requisite</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>optional</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='subsysType'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>usb</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>pci</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>scsi</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='capsType'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='pciBackend'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     </hostdev>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <rng supported='yes'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='model'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>virtio</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>virtio-transitional</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>virtio-non-transitional</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='backendModel'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>random</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>egd</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>builtin</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     </rng>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <filesystem supported='yes'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='driverType'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>path</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>handle</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>virtiofs</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     </filesystem>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <tpm supported='yes'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='model'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>tpm-tis</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>tpm-crb</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='backendModel'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>emulator</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>external</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='backendVersion'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>2.0</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     </tpm>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <redirdev supported='yes'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='bus'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>usb</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     </redirdev>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <channel supported='yes'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='type'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>pty</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>unix</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     </channel>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <crypto supported='yes'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='model'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='type'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>qemu</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='backendModel'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>builtin</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     </crypto>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <interface supported='yes'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='backendType'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>default</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>passt</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     </interface>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <panic supported='yes'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='model'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>isa</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>hyperv</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     </panic>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <console supported='yes'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='type'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>null</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>vc</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>pty</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>dev</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>file</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>pipe</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>stdio</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>udp</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>tcp</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>unix</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>qemu-vdagent</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>dbus</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     </console>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:   </devices>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:   <features>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <gic supported='no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <vmcoreinfo supported='yes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <genid supported='yes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <backingStoreInput supported='yes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <backup supported='yes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <async-teardown supported='yes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <s390-pv supported='no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <ps2 supported='yes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <tdx supported='no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <sev supported='no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <sgx supported='no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <hyperv supported='yes'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='features'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>relaxed</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>vapic</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>spinlocks</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>vpindex</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>runtime</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>synic</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>stimer</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>reset</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>vendor_id</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>frequencies</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>reenlightenment</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>tlbflush</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>ipi</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>avic</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>emsr_bitmap</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>xmm_input</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <defaults>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <spinlocks>4095</spinlocks>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <stimer_direct>on</stimer_direct>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <tlbflush_direct>off</tlbflush_direct>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <tlbflush_extended>off</tlbflush_extended>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <vendor_id>Linux KVM Hv</vendor_id>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </defaults>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     </hyperv>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <launchSecurity supported='no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:   </features>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]: </domainCapabilities>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:14.051 228941 DEBUG nova.virt.libvirt.host [None req-db5ae08e-aeac-447b-8df7-01ada35025b3 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]: <domainCapabilities>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:   <path>/usr/libexec/qemu-kvm</path>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:   <domain>kvm</domain>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:   <machine>pc-q35-rhel9.8.0</machine>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:   <arch>x86_64</arch>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:   <vcpu max='1024'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:   <iothreads supported='yes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:   <os supported='yes'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <enum name='firmware'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <value>efi</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <loader supported='yes'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='type'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>rom</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>pflash</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='readonly'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>yes</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>no</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='secure'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>yes</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>no</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     </loader>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:   </os>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:   <cpu>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <mode name='host-passthrough' supported='yes'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='hostPassthroughMigratable'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>on</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>off</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     </mode>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <mode name='maximum' supported='yes'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='maximumMigratable'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>on</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>off</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     </mode>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <mode name='host-model' supported='yes'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model fallback='forbid'>EPYC-Rome</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <vendor>AMD</vendor>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <maxphysaddr mode='passthrough' limit='40'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='x2apic'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='tsc-deadline'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='hypervisor'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='tsc_adjust'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='spec-ctrl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='stibp'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='ssbd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='cmp_legacy'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='overflow-recov'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='succor'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='ibrs'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='amd-ssbd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='virt-ssbd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='lbrv'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='tsc-scale'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='vmcb-clean'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='pause-filter'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='pfthreshold'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='svme-addr-chk'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <feature policy='require' name='lfence-always-serializing'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <feature policy='disable' name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     </mode>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <mode name='custom' supported='yes'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Broadwell'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Broadwell-IBRS'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Broadwell-noTSX'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Broadwell-noTSX-IBRS'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Broadwell-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Broadwell-v2'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Broadwell-v3'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Broadwell-v4'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Cascadelake-Server'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Cascadelake-Server-noTSX'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Cascadelake-Server-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Cascadelake-Server-v2'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Cascadelake-Server-v3'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Cascadelake-Server-v4'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Cascadelake-Server-v5'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='ClearwaterForest'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-ne-convert'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni-int16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni-int8'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='bhi-ctrl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='bhi-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='cldemote'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='cmpccxadd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ddpd-u'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fbsdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrs'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gds-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='intel-psfd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ipred-ctrl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='lam'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='mcdt-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdir64b'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdiri'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pbrsb-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='prefetchiti'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='psdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rfds-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rrsba-ctrl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='serialize'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='sha512'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='sm3'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='sm4'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='ClearwaterForest-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-ne-convert'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni-int16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni-int8'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='bhi-ctrl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='bhi-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='cldemote'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='cmpccxadd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ddpd-u'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fbsdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrs'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gds-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='intel-psfd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ipred-ctrl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='lam'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='mcdt-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdir64b'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdiri'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pbrsb-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='prefetchiti'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='psdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rfds-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rrsba-ctrl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='serialize'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='sha512'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='sm3'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='sm4'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Cooperlake'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Cooperlake-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Cooperlake-v2'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Denverton'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='mpx'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Denverton-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='mpx'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Denverton-v2'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Denverton-v3'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Dhyana-v2'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-Genoa'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amd-psfd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='auto-ibrs'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='no-nested-data-bp'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='null-sel-clr-base'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='stibp-always-on'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-Genoa-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amd-psfd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='auto-ibrs'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='no-nested-data-bp'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='null-sel-clr-base'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='stibp-always-on'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-Genoa-v2'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amd-psfd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='auto-ibrs'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fs-gs-base-ns'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='no-nested-data-bp'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='null-sel-clr-base'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='perfmon-v2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='stibp-always-on'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-Milan'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-Milan-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-Milan-v2'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amd-psfd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='no-nested-data-bp'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='null-sel-clr-base'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='stibp-always-on'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-Milan-v3'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amd-psfd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='no-nested-data-bp'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='null-sel-clr-base'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='stibp-always-on'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-Rome'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-Rome-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-Rome-v2'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-Rome-v3'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-Turin'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amd-psfd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='auto-ibrs'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vp2intersect'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fs-gs-base-ns'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibpb-brtype'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdir64b'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdiri'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='no-nested-data-bp'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='null-sel-clr-base'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='perfmon-v2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='prefetchi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='sbpb'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='srso-user-kernel-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='stibp-always-on'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-Turin-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amd-psfd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='auto-ibrs'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vp2intersect'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fs-gs-base-ns'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibpb-brtype'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdir64b'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdiri'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='no-nested-data-bp'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='null-sel-clr-base'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='perfmon-v2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='prefetchi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='sbpb'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='srso-user-kernel-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='stibp-always-on'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-v3'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-v4'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='EPYC-v5'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='GraniteRapids'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-fp16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-int8'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-tile'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-fp16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fbsdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrc'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrs'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fzrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='mcdt-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pbrsb-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='prefetchiti'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='psdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='serialize'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xfd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='GraniteRapids-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-fp16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-int8'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-tile'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-fp16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fbsdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrc'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrs'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fzrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='mcdt-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pbrsb-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='prefetchiti'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='psdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='serialize'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xfd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='GraniteRapids-v2'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-fp16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-int8'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-tile'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx10'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx10-128'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx10-256'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx10-512'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-fp16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='cldemote'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fbsdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrc'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrs'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fzrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='mcdt-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdir64b'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdiri'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pbrsb-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='prefetchiti'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='psdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='serialize'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xfd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='GraniteRapids-v3'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-fp16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-int8'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-tile'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx10'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx10-128'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx10-256'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx10-512'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-fp16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='cldemote'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fbsdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrc'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrs'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fzrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='mcdt-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdir64b'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdiri'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pbrsb-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='prefetchiti'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='psdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='serialize'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xfd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Haswell'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Haswell-IBRS'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Haswell-noTSX'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Haswell-noTSX-IBRS'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Haswell-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Haswell-v2'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Haswell-v3'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Haswell-v4'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Icelake-Server'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Icelake-Server-noTSX'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Icelake-Server-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Icelake-Server-v2'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Icelake-Server-v3'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Icelake-Server-v4'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Icelake-Server-v5'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Icelake-Server-v6'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Icelake-Server-v7'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='IvyBridge'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='IvyBridge-IBRS'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='IvyBridge-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='IvyBridge-v2'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='KnightsMill'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-4fmaps'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-4vnniw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512er'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512pf'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='KnightsMill-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-4fmaps'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-4vnniw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512er'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512pf'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Opteron_G4'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fma4'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xop'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Opteron_G4-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fma4'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xop'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Opteron_G5'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fma4'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='tbm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xop'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Opteron_G5-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fma4'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='tbm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xop'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='SapphireRapids'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-int8'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-tile'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-fp16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrc'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrs'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fzrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='serialize'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xfd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='SapphireRapids-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-int8'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-tile'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-fp16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrc'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrs'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fzrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='serialize'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xfd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='SapphireRapids-v2'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-int8'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-tile'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-fp16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fbsdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrc'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrs'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fzrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='psdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='serialize'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xfd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='SapphireRapids-v3'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-int8'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-tile'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-fp16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='cldemote'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fbsdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrc'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrs'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fzrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdir64b'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdiri'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='psdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='serialize'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xfd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='SapphireRapids-v4'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-int8'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='amx-tile'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-bf16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-fp16'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bitalg'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vbmi2'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='cldemote'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fbsdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrc'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrs'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fzrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='la57'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdir64b'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdiri'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='psdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='serialize'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='taa-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='tsx-ldtrk'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xfd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='SierraForest'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-ne-convert'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni-int8'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='cmpccxadd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fbsdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrs'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='mcdt-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pbrsb-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='psdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='serialize'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='SierraForest-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-ne-convert'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni-int8'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='cmpccxadd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fbsdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrs'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='mcdt-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pbrsb-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='psdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='serialize'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='SierraForest-v2'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-ne-convert'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni-int8'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='bhi-ctrl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='cldemote'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='cmpccxadd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fbsdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrs'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gds-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='intel-psfd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ipred-ctrl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='lam'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='mcdt-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdir64b'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdiri'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pbrsb-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='psdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rfds-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rrsba-ctrl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='serialize'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='SierraForest-v3'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-ifma'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-ne-convert'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx-vnni-int8'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='bhi-ctrl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='bus-lock-detect'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='cldemote'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='cmpccxadd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fbsdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='fsrs'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gds-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ibrs-all'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='intel-psfd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ipred-ctrl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='lam'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='mcdt-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdir64b'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdiri'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pbrsb-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='psdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rfds-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rrsba-ctrl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='serialize'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vaes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='vpclmulqdq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Skylake-Client'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Skylake-Client-IBRS'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Skylake-Client-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Skylake-Client-v2'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Skylake-Client-v3'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Skylake-Client-v4'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Skylake-Server'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Skylake-Server-IBRS'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Skylake-Server-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Skylake-Server-v2'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='hle'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='rtm'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Skylake-Server-v3'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Skylake-Server-v4'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Skylake-Server-v5'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512bw'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512cd'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512dq'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512f'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='avx512vl'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='invpcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pcid'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='pku'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Snowridge'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='cldemote'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='core-capability'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdir64b'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdiri'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='mpx'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='split-lock-detect'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Snowridge-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='cldemote'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='core-capability'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdir64b'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdiri'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='mpx'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='split-lock-detect'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Snowridge-v2'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='cldemote'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='core-capability'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdir64b'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdiri'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='split-lock-detect'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Snowridge-v3'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='cldemote'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='core-capability'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdir64b'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdiri'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='split-lock-detect'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='Snowridge-v4'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='cldemote'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='erms'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='gfni'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdir64b'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='movdiri'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='xsaves'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='athlon'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='3dnow'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='3dnowext'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='athlon-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='3dnow'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='3dnowext'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='core2duo'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='core2duo-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='coreduo'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='coreduo-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='n270'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='n270-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='ss'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='phenom'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='3dnow'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='3dnowext'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <blockers model='phenom-v1'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='3dnow'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <feature name='3dnowext'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </blockers>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     </mode>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:   </cpu>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:   <memoryBacking supported='yes'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <enum name='sourceType'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <value>file</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <value>anonymous</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <value>memfd</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:   </memoryBacking>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:   <devices>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <disk supported='yes'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='diskDevice'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>disk</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>cdrom</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>floppy</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>lun</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='bus'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>fdc</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>scsi</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>virtio</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>usb</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>sata</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='model'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>virtio</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>virtio-transitional</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>virtio-non-transitional</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     </disk>
Feb 20 09:24:14 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3952 DF PROTO=TCP SPT=59440 DPT=9105 SEQ=1194372364 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA002E000000000001030307) 
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <graphics supported='yes'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='type'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>vnc</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>egl-headless</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>dbus</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     </graphics>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <video supported='yes'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='modelType'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>vga</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>cirrus</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>virtio</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>none</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>bochs</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>ramfb</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     </video>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <hostdev supported='yes'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='mode'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>subsystem</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='startupPolicy'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>default</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>mandatory</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>requisite</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>optional</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='subsysType'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>usb</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>pci</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>scsi</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='capsType'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='pciBackend'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     </hostdev>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <rng supported='yes'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='model'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>virtio</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>virtio-transitional</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>virtio-non-transitional</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='backendModel'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>random</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>egd</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>builtin</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     </rng>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <filesystem supported='yes'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='driverType'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>path</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>handle</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>virtiofs</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     </filesystem>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <tpm supported='yes'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='model'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>tpm-tis</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>tpm-crb</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='backendModel'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>emulator</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>external</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='backendVersion'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>2.0</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     </tpm>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <redirdev supported='yes'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='bus'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>usb</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     </redirdev>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <channel supported='yes'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='type'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>pty</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>unix</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     </channel>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <crypto supported='yes'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='model'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='type'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>qemu</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='backendModel'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>builtin</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     </crypto>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <interface supported='yes'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='backendType'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>default</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>passt</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     </interface>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <panic supported='yes'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='model'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>isa</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>hyperv</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     </panic>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <console supported='yes'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='type'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>null</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>vc</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>pty</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>dev</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>file</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>pipe</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>stdio</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>udp</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>tcp</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>unix</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>qemu-vdagent</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>dbus</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     </console>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:   </devices>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:   <features>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <gic supported='no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <vmcoreinfo supported='yes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <genid supported='yes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <backingStoreInput supported='yes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <backup supported='yes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <async-teardown supported='yes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <s390-pv supported='no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <ps2 supported='yes'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <tdx supported='no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <sev supported='no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <sgx supported='no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <hyperv supported='yes'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <enum name='features'>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>relaxed</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>vapic</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>spinlocks</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>vpindex</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>runtime</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>synic</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>stimer</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>reset</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>vendor_id</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>frequencies</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>reenlightenment</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>tlbflush</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>ipi</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>avic</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>emsr_bitmap</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <value>xmm_input</value>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </enum>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       <defaults>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <spinlocks>4095</spinlocks>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <stimer_direct>on</stimer_direct>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <tlbflush_direct>off</tlbflush_direct>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <tlbflush_extended>off</tlbflush_extended>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:         <vendor_id>Linux KVM Hv</vendor_id>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:       </defaults>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     </hyperv>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:     <launchSecurity supported='no'/>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:   </features>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]: </domainCapabilities>
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:14.111 228941 DEBUG nova.virt.libvirt.host [None req-db5ae08e-aeac-447b-8df7-01ada35025b3 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:14.111 228941 DEBUG nova.virt.libvirt.host [None req-db5ae08e-aeac-447b-8df7-01ada35025b3 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:14.114 228941 DEBUG nova.virt.libvirt.host [None req-db5ae08e-aeac-447b-8df7-01ada35025b3 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:14.114 228941 INFO nova.virt.libvirt.host [None req-db5ae08e-aeac-447b-8df7-01ada35025b3 - - - - - -] Secure Boot support detected
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:14.116 228941 INFO nova.virt.libvirt.driver [None req-db5ae08e-aeac-447b-8df7-01ada35025b3 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:14.117 228941 INFO nova.virt.libvirt.driver [None req-db5ae08e-aeac-447b-8df7-01ada35025b3 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:14.127 228941 DEBUG nova.virt.libvirt.driver [None req-db5ae08e-aeac-447b-8df7-01ada35025b3 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:14.140 228941 INFO nova.virt.node [None req-db5ae08e-aeac-447b-8df7-01ada35025b3 - - - - - -] Determined node identity e5d5157a-2df2-4f51-b5fb-cd2da3a8584e from /var/lib/nova/compute_id
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:14.153 228941 DEBUG nova.compute.manager [None req-db5ae08e-aeac-447b-8df7-01ada35025b3 - - - - - -] Verified node e5d5157a-2df2-4f51-b5fb-cd2da3a8584e matches my host np0005625203.localdomain _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:14.173 228941 INFO nova.compute.manager [None req-db5ae08e-aeac-447b-8df7-01ada35025b3 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:14.269 228941 DEBUG oslo_concurrency.lockutils [None req-db5ae08e-aeac-447b-8df7-01ada35025b3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:14.269 228941 DEBUG oslo_concurrency.lockutils [None req-db5ae08e-aeac-447b-8df7-01ada35025b3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:14.270 228941 DEBUG oslo_concurrency.lockutils [None req-db5ae08e-aeac-447b-8df7-01ada35025b3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:14.270 228941 DEBUG nova.compute.resource_tracker [None req-db5ae08e-aeac-447b-8df7-01ada35025b3 - - - - - -] Auditing locally available compute resources for np0005625203.localdomain (node: np0005625203.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:14.270 228941 DEBUG oslo_concurrency.processutils [None req-db5ae08e-aeac-447b-8df7-01ada35025b3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:14.708 228941 DEBUG oslo_concurrency.processutils [None req-db5ae08e-aeac-447b-8df7-01ada35025b3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:14.889 228941 WARNING nova.virt.libvirt.driver [None req-db5ae08e-aeac-447b-8df7-01ada35025b3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:14.891 228941 DEBUG nova.compute.resource_tracker [None req-db5ae08e-aeac-447b-8df7-01ada35025b3 - - - - - -] Hypervisor/Node resource view: name=np0005625203.localdomain free_ram=13619MB free_disk=41.83720779418945GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": 
null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:14.891 228941 DEBUG oslo_concurrency.lockutils [None req-db5ae08e-aeac-447b-8df7-01ada35025b3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:24:14 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:14.891 228941 DEBUG oslo_concurrency.lockutils [None req-db5ae08e-aeac-447b-8df7-01ada35025b3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:24:14 np0005625203.localdomain rsyslogd[758]: imjournal from <localhost:nova_compute>: begin to drop messages due to rate-limiting
Feb 20 09:24:15 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:15.027 228941 DEBUG nova.compute.resource_tracker [None req-db5ae08e-aeac-447b-8df7-01ada35025b3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 20 09:24:15 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:15.027 228941 DEBUG nova.compute.resource_tracker [None req-db5ae08e-aeac-447b-8df7-01ada35025b3 - - - - - -] Final resource view: name=np0005625203.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 20 09:24:15 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:15.041 228941 DEBUG nova.scheduler.client.report [None req-db5ae08e-aeac-447b-8df7-01ada35025b3 - - - - - -] Refreshing inventories for resource provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 20 09:24:15 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:15.087 228941 DEBUG nova.scheduler.client.report [None req-db5ae08e-aeac-447b-8df7-01ada35025b3 - - - - - -] Updating ProviderTree inventory for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 20 09:24:15 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:15.088 228941 DEBUG nova.compute.provider_tree [None req-db5ae08e-aeac-447b-8df7-01ada35025b3 - - - - - -] Updating inventory in ProviderTree for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 20 09:24:15 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:15.102 228941 DEBUG nova.scheduler.client.report [None req-db5ae08e-aeac-447b-8df7-01ada35025b3 - - - - - -] Refreshing aggregate associations for resource provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 20 09:24:15 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:15.120 228941 DEBUG nova.scheduler.client.report [None req-db5ae08e-aeac-447b-8df7-01ada35025b3 - - - - - -] Refreshing trait associations for resource provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e, traits: HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AESNI,HW_CPU_X86_SSE4A,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSSE3,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,HW_CPU_X86_SSE2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_F16C,HW_CPU_X86_AVX,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE42,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_BMI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SHA,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 20 09:24:15 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:15.133 228941 DEBUG oslo_concurrency.processutils [None req-db5ae08e-aeac-447b-8df7-01ada35025b3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:24:15 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:15.587 228941 DEBUG oslo_concurrency.processutils [None req-db5ae08e-aeac-447b-8df7-01ada35025b3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:24:15 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:15.593 228941 DEBUG nova.virt.libvirt.host [None req-db5ae08e-aeac-447b-8df7-01ada35025b3 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Feb 20 09:24:15 np0005625203.localdomain nova_compute[228937]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Feb 20 09:24:15 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:15.593 228941 INFO nova.virt.libvirt.host [None req-db5ae08e-aeac-447b-8df7-01ada35025b3 - - - - - -] kernel doesn't support AMD SEV
Feb 20 09:24:15 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:15.595 228941 DEBUG nova.compute.provider_tree [None req-db5ae08e-aeac-447b-8df7-01ada35025b3 - - - - - -] Inventory has not changed in ProviderTree for provider: e5d5157a-2df2-4f51-b5fb-cd2da3a8584e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 20 09:24:15 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:15.595 228941 DEBUG nova.virt.libvirt.driver [None req-db5ae08e-aeac-447b-8df7-01ada35025b3 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 20 09:24:15 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:15.618 228941 DEBUG nova.scheduler.client.report [None req-db5ae08e-aeac-447b-8df7-01ada35025b3 - - - - - -] Inventory has not changed for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 20 09:24:15 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:15.644 228941 DEBUG nova.compute.resource_tracker [None req-db5ae08e-aeac-447b-8df7-01ada35025b3 - - - - - -] Compute_service record updated for np0005625203.localdomain:np0005625203.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 20 09:24:15 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:15.644 228941 DEBUG oslo_concurrency.lockutils [None req-db5ae08e-aeac-447b-8df7-01ada35025b3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:24:15 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:15.645 228941 DEBUG nova.service [None req-db5ae08e-aeac-447b-8df7-01ada35025b3 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Feb 20 09:24:15 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:15.674 228941 DEBUG nova.service [None req-db5ae08e-aeac-447b-8df7-01ada35025b3 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Feb 20 09:24:15 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:24:15.675 228941 DEBUG nova.servicegroup.drivers.db [None req-db5ae08e-aeac-447b-8df7-01ada35025b3 - - - - - -] DB_Driver: join new ServiceGroup member np0005625203.localdomain to the compute group, service = <Service: host=np0005625203.localdomain, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Feb 20 09:24:16 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54264 DF PROTO=TCP SPT=46854 DPT=9101 SEQ=306080038 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0039000000000001030307) 
Feb 20 09:24:18 np0005625203.localdomain sshd[229271]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:24:19 np0005625203.localdomain sshd[229272]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:24:19 np0005625203.localdomain sshd[229272]: Accepted publickey for zuul from 192.168.122.30 port 50768 ssh2: RSA SHA256:uaxCfdsB9dLyL4znjsLCPDTT6O+6xZzjHRjczJFOA6k
Feb 20 09:24:19 np0005625203.localdomain systemd-logind[759]: New session 55 of user zuul.
Feb 20 09:24:19 np0005625203.localdomain systemd[1]: Started Session 55 of User zuul.
Feb 20 09:24:19 np0005625203.localdomain sshd[229272]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Feb 20 09:24:19 np0005625203.localdomain sshd[229271]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:24:20 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53953 DF PROTO=TCP SPT=50806 DPT=9100 SEQ=2153464641 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0045400000000001030307) 
Feb 20 09:24:20 np0005625203.localdomain python3.9[229384]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 20 09:24:21 np0005625203.localdomain sudo[229496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rthoiulhyhujwpzlbdpuaupwgnctopii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579461.1191578-64-143298862957926/AnsiballZ_systemd_service.py
Feb 20 09:24:21 np0005625203.localdomain sudo[229496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:24:21 np0005625203.localdomain python3.9[229498]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 20 09:24:21 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:24:22 np0005625203.localdomain systemd-rc-local-generator[229524]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:24:22 np0005625203.localdomain systemd-sysv-generator[229527]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:24:22 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:24:22 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 09:24:22 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:24:22 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:24:22 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:24:22 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 09:24:22 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:24:22 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:24:22 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:24:22 np0005625203.localdomain sudo[229496]: pam_unix(sudo:session): session closed for user root
Feb 20 09:24:23 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=196 DF PROTO=TCP SPT=33912 DPT=9100 SEQ=4160601369 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0050800000000001030307) 
Feb 20 09:24:23 np0005625203.localdomain python3.9[229641]: ansible-ansible.builtin.service_facts Invoked
Feb 20 09:24:23 np0005625203.localdomain network[229658]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 20 09:24:23 np0005625203.localdomain network[229659]: 'network-scripts' will be removed from distribution in near future.
Feb 20 09:24:23 np0005625203.localdomain network[229660]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 20 09:24:26 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=53955 DF PROTO=TCP SPT=50806 DPT=9100 SEQ=2153464641 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA005D000000000001030307) 
Feb 20 09:24:26 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:24:28 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46607 DF PROTO=TCP SPT=44558 DPT=9882 SEQ=1716464507 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0067DC0000000001030307) 
Feb 20 09:24:29 np0005625203.localdomain sudo[229891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rmnzfzleirwwvbajhcvomfoceogkcbzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579469.325611-121-254827657563539/AnsiballZ_systemd_service.py
Feb 20 09:24:29 np0005625203.localdomain sudo[229891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:24:29 np0005625203.localdomain python3.9[229893]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:24:29 np0005625203.localdomain sudo[229891]: pam_unix(sudo:session): session closed for user root
Feb 20 09:24:30 np0005625203.localdomain sudo[230002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ezhcnqqefsxhqutpbnsdqciyshbqvlja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579470.281352-151-232188432174371/AnsiballZ_file.py
Feb 20 09:24:30 np0005625203.localdomain sudo[230002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:24:30 np0005625203.localdomain python3.9[230004]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:24:30 np0005625203.localdomain sudo[230002]: pam_unix(sudo:session): session closed for user root
Feb 20 09:24:30 np0005625203.localdomain systemd-journald[48285]: Field hash table of /run/log/journal/01f46965e72fd8a157841feaa66c8d52/system.journal has a fill level at 76.3 (254 of 333 items), suggesting rotation.
Feb 20 09:24:30 np0005625203.localdomain systemd-journald[48285]: /run/log/journal/01f46965e72fd8a157841feaa66c8d52/system.journal: Journal header limits reached or header out-of-date, rotating.
Feb 20 09:24:30 np0005625203.localdomain rsyslogd[758]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Feb 20 09:24:30 np0005625203.localdomain rsyslogd[758]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Feb 20 09:24:31 np0005625203.localdomain sudo[230113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lmmmpfoxkrecfuzajsslyzifaefjshai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579471.0872455-175-41540834289485/AnsiballZ_file.py
Feb 20 09:24:31 np0005625203.localdomain sudo[230113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:24:31 np0005625203.localdomain python3.9[230115]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:24:31 np0005625203.localdomain sudo[230113]: pam_unix(sudo:session): session closed for user root
Feb 20 09:24:32 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46609 DF PROTO=TCP SPT=44558 DPT=9882 SEQ=1716464507 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0074000000000001030307) 
Feb 20 09:24:32 np0005625203.localdomain sudo[230223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-grfmpmtjioloybwlskrrqrzcrogwbrzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579471.8605282-202-11760489094778/AnsiballZ_command.py
Feb 20 09:24:32 np0005625203.localdomain sudo[230223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:24:32 np0005625203.localdomain python3.9[230225]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                                              systemctl disable --now certmonger.service
                                                              test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                                            fi
                                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:24:32 np0005625203.localdomain sudo[230223]: pam_unix(sudo:session): session closed for user root
Feb 20 09:24:33 np0005625203.localdomain python3.9[230335]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 20 09:24:33 np0005625203.localdomain sudo[230443]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ydngtrjzojfltoyqlhtkufapnlnxxjvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579473.4921558-256-263296974375690/AnsiballZ_systemd_service.py
Feb 20 09:24:33 np0005625203.localdomain sudo[230443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:24:34 np0005625203.localdomain python3.9[230445]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 20 09:24:34 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:24:34 np0005625203.localdomain systemd-sysv-generator[230473]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:24:34 np0005625203.localdomain systemd-rc-local-generator[230470]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:24:34 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:24:34 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 09:24:34 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:24:34 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:24:34 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:24:34 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 09:24:34 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:24:34 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:24:34 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:24:34 np0005625203.localdomain sudo[230443]: pam_unix(sudo:session): session closed for user root
Feb 20 09:24:35 np0005625203.localdomain sudo[230589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mgigqyycnfyymsnkonfptbruztufvcda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579474.860225-281-85198752445740/AnsiballZ_command.py
Feb 20 09:24:35 np0005625203.localdomain sudo[230589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:24:35 np0005625203.localdomain python3.9[230591]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:24:35 np0005625203.localdomain sudo[230589]: pam_unix(sudo:session): session closed for user root
Feb 20 09:24:36 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46610 DF PROTO=TCP SPT=44558 DPT=9882 SEQ=1716464507 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0083C10000000001030307) 
Feb 20 09:24:36 np0005625203.localdomain sudo[230700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rsimyonwquummeyjzuqydyocjmomkzsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579476.5724607-307-179528011702995/AnsiballZ_file.py
Feb 20 09:24:36 np0005625203.localdomain sudo[230700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:24:37 np0005625203.localdomain python3.9[230702]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:24:37 np0005625203.localdomain sudo[230700]: pam_unix(sudo:session): session closed for user root
Feb 20 09:24:38 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47565 DF PROTO=TCP SPT=49410 DPT=9105 SEQ=3444906268 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA008B810000000001030307) 
Feb 20 09:24:38 np0005625203.localdomain python3.9[230810]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:24:39 np0005625203.localdomain sudo[230920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-papghplufvatwzzqojjzfuppzebwhhtv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579478.6377852-355-124753988528593/AnsiballZ_group.py
Feb 20 09:24:39 np0005625203.localdomain sudo[230920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:24:39 np0005625203.localdomain python3.9[230922]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None
Feb 20 09:24:39 np0005625203.localdomain sudo[230920]: pam_unix(sudo:session): session closed for user root
Feb 20 09:24:41 np0005625203.localdomain sudo[231030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sximbafeffuzgwgsbatjuvpowgrgxyyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579480.67701-388-58951695555914/AnsiballZ_getent.py
Feb 20 09:24:41 np0005625203.localdomain sudo[231030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:24:41 np0005625203.localdomain python3.9[231032]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Feb 20 09:24:41 np0005625203.localdomain sudo[231030]: pam_unix(sudo:session): session closed for user root
Feb 20 09:24:41 np0005625203.localdomain sudo[231141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uymmcrwbugnxjtzsxiwxhgyszgpzapfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579481.4716268-412-114922225989711/AnsiballZ_group.py
Feb 20 09:24:41 np0005625203.localdomain sudo[231141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:24:41 np0005625203.localdomain python3.9[231143]: ansible-ansible.builtin.group Invoked with gid=42405 name=ceilometer state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Feb 20 09:24:41 np0005625203.localdomain groupadd[231144]: group added to /etc/group: name=ceilometer, GID=42405
Feb 20 09:24:41 np0005625203.localdomain groupadd[231144]: group added to /etc/gshadow: name=ceilometer
Feb 20 09:24:42 np0005625203.localdomain groupadd[231144]: new group: name=ceilometer, GID=42405
Feb 20 09:24:42 np0005625203.localdomain sudo[231141]: pam_unix(sudo:session): session closed for user root
Feb 20 09:24:42 np0005625203.localdomain sudo[231257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kvwaoccviounxnosbneoeutopvybfyui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579482.2739713-436-51585568374597/AnsiballZ_user.py
Feb 20 09:24:42 np0005625203.localdomain sudo[231257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:24:42 np0005625203.localdomain python3.9[231259]: ansible-ansible.builtin.user Invoked with comment=ceilometer user group=ceilometer groups=['libvirt'] name=ceilometer shell=/sbin/nologin state=present uid=42405 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005625203.localdomain update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Feb 20 09:24:42 np0005625203.localdomain useradd[231261]: new user: name=ceilometer, UID=42405, GID=42405, home=/home/ceilometer, shell=/sbin/nologin, from=/dev/pts/0
Feb 20 09:24:42 np0005625203.localdomain useradd[231261]: add 'ceilometer' to group 'libvirt'
Feb 20 09:24:42 np0005625203.localdomain useradd[231261]: add 'ceilometer' to shadow group 'libvirt'
Feb 20 09:24:43 np0005625203.localdomain sudo[231257]: pam_unix(sudo:session): session closed for user root
Feb 20 09:24:44 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47567 DF PROTO=TCP SPT=49410 DPT=9105 SEQ=3444906268 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA00A3410000000001030307) 
Feb 20 09:24:44 np0005625203.localdomain python3.9[231375]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:24:44 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46611 DF PROTO=TCP SPT=44558 DPT=9882 SEQ=1716464507 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA00A4800000000001030307) 
Feb 20 09:24:44 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:24:44 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:24:44 np0005625203.localdomain podman[231425]: 2026-02-20 09:24:44.765273779 +0000 UTC m=+0.074418831 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent)
Feb 20 09:24:44 np0005625203.localdomain podman[231425]: 2026-02-20 09:24:44.775277215 +0000 UTC m=+0.084422267 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:24:44 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:24:44 np0005625203.localdomain systemd[1]: tmp-crun.3l3AY0.mount: Deactivated successfully.
Feb 20 09:24:44 np0005625203.localdomain podman[231426]: 2026-02-20 09:24:44.835696539 +0000 UTC m=+0.142036816 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 20 09:24:44 np0005625203.localdomain podman[231426]: 2026-02-20 09:24:44.905196448 +0000 UTC m=+0.211536675 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller)
Feb 20 09:24:44 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:24:45 np0005625203.localdomain python3.9[231490]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1771579483.898011-514-252827013682594/.source.conf _original_basename=ceilometer.conf follow=False checksum=995f60cd4d2c51f98e8243d6429f9405f206b7a7 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:24:45 np0005625203.localdomain python3.9[231612]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:24:46 np0005625203.localdomain python3.9[231698]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1771579485.1336634-514-241030488813921/.source.yaml _original_basename=polling.yaml follow=False checksum=6c8680a286285f2e0ef9fa528ca754765e5ed0e5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:24:46 np0005625203.localdomain python3.9[231806]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:24:47 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37109 DF PROTO=TCP SPT=42424 DPT=9101 SEQ=4024048267 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA00AE410000000001030307) 
Feb 20 09:24:47 np0005625203.localdomain python3.9[231892]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1771579486.129454-514-109323139970553/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:24:47 np0005625203.localdomain python3.9[232000]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:24:48 np0005625203.localdomain python3.9[232108]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:24:48 np0005625203.localdomain python3.9[232216]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:24:49 np0005625203.localdomain python3.9[232302]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771579488.5444167-691-57162738095598/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=afc16bf7956b96e4f42fc9d00ace8150463776da backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:24:50 np0005625203.localdomain python3.9[232410]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:24:50 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31126 DF PROTO=TCP SPT=43626 DPT=9100 SEQ=1600238134 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA00BA800000000001030307) 
Feb 20 09:24:50 np0005625203.localdomain python3.9[232496]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/openstack_network_exporter.yaml mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771579489.6104274-691-114855391851674/.source.yaml follow=False _original_basename=openstack_network_exporter.yaml.j2 checksum=e2858327749c09c7b8ca5fc97985d7885b95bd4b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:24:50 np0005625203.localdomain sudo[232514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:24:50 np0005625203.localdomain sudo[232514]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:24:50 np0005625203.localdomain sudo[232514]: pam_unix(sudo:session): session closed for user root
Feb 20 09:24:50 np0005625203.localdomain sudo[232536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Feb 20 09:24:50 np0005625203.localdomain sudo[232536]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:24:51 np0005625203.localdomain python3.9[232640]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:24:51 np0005625203.localdomain podman[232800]: 2026-02-20 09:24:51.65264879 +0000 UTC m=+0.091402490 container exec f6bf94ad66e54cdd610286b233c639418eb2b995978f1a145d6cfa77a016071d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., release=1770267347, GIT_BRANCH=main, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, io.openshift.tags=rhceph ceph, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, CEPH_POINT_RELEASE=, GIT_CLEAN=True, architecture=x86_64, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, vcs-type=git)
Feb 20 09:24:51 np0005625203.localdomain python3.9[232799]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/firewall.yaml mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771579490.7753868-778-243099293038309/.source.yaml _original_basename=firewall.yaml follow=False checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:24:51 np0005625203.localdomain podman[232800]: 2026-02-20 09:24:51.755807297 +0000 UTC m=+0.194561017 container exec_died f6bf94ad66e54cdd610286b233c639418eb2b995978f1a145d6cfa77a016071d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203, release=1770267347, io.buildah.version=1.42.2, vcs-type=git, architecture=x86_64, name=rhceph, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, build-date=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_BRANCH=main, distribution-scope=public, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph)
Feb 20 09:24:52 np0005625203.localdomain sudo[232536]: pam_unix(sudo:session): session closed for user root
Feb 20 09:24:52 np0005625203.localdomain sudo[232886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:24:52 np0005625203.localdomain sudo[232886]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:24:52 np0005625203.localdomain sudo[232886]: pam_unix(sudo:session): session closed for user root
Feb 20 09:24:52 np0005625203.localdomain sudo[232920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 09:24:52 np0005625203.localdomain sudo[232920]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:24:52 np0005625203.localdomain python3.9[233012]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:24:52 np0005625203.localdomain sudo[232920]: pam_unix(sudo:session): session closed for user root
Feb 20 09:24:53 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37111 DF PROTO=TCP SPT=42424 DPT=9101 SEQ=4024048267 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA00C6000000000001030307) 
Feb 20 09:24:53 np0005625203.localdomain python3.9[233153]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:24:53 np0005625203.localdomain sudo[233157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:24:53 np0005625203.localdomain sudo[233157]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:24:53 np0005625203.localdomain sudo[233157]: pam_unix(sudo:session): session closed for user root
Feb 20 09:24:53 np0005625203.localdomain python3.9[233279]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:24:54 np0005625203.localdomain sudo[233387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ffdosmdzmcwwyarisngqxdvyqkgdxnza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579494.0931656-904-72293854210828/AnsiballZ_file.py
Feb 20 09:24:54 np0005625203.localdomain sudo[233387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:24:54 np0005625203.localdomain python3.9[233389]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:24:54 np0005625203.localdomain sudo[233387]: pam_unix(sudo:session): session closed for user root
Feb 20 09:24:55 np0005625203.localdomain sudo[233497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uyputlfxevlmvimnlytbaltfjqehnsdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579494.8274145-928-277102614906464/AnsiballZ_systemd_service.py
Feb 20 09:24:55 np0005625203.localdomain sudo[233497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:24:55 np0005625203.localdomain python3.9[233499]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:24:55 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:24:55 np0005625203.localdomain systemd-rc-local-generator[233525]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:24:55 np0005625203.localdomain systemd-sysv-generator[233531]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:24:55 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:24:55 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 09:24:55 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:24:55 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:24:55 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:24:55 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 09:24:55 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:24:55 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:24:55 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:24:55 np0005625203.localdomain systemd[1]: Listening on Podman API Socket.
Feb 20 09:24:55 np0005625203.localdomain sudo[233497]: pam_unix(sudo:session): session closed for user root
Feb 20 09:24:56 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31128 DF PROTO=TCP SPT=43626 DPT=9100 SEQ=1600238134 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA00D2400000000001030307) 
Feb 20 09:24:58 np0005625203.localdomain sudo[233646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wxnfdpzvxvirjomlfwgzdkwwgeiqsivo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579497.8422384-955-196232851130957/AnsiballZ_stat.py
Feb 20 09:24:58 np0005625203.localdomain sudo[233646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:24:58 np0005625203.localdomain python3.9[233648]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:24:58 np0005625203.localdomain sudo[233646]: pam_unix(sudo:session): session closed for user root
Feb 20 09:24:58 np0005625203.localdomain sudo[233734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zfrvkekzygbpozvaurlrntvgjwdqtltm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579497.8422384-955-196232851130957/AnsiballZ_copy.py
Feb 20 09:24:58 np0005625203.localdomain sudo[233734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:24:58 np0005625203.localdomain python3.9[233736]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771579497.8422384-955-196232851130957/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:24:58 np0005625203.localdomain sudo[233734]: pam_unix(sudo:session): session closed for user root
Feb 20 09:24:58 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34105 DF PROTO=TCP SPT=58682 DPT=9882 SEQ=3313430519 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA00DD0C0000000001030307) 
Feb 20 09:24:59 np0005625203.localdomain sudo[233789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ysknymvvhzbpxwxnzfjhobnteywsnyia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579497.8422384-955-196232851130957/AnsiballZ_stat.py
Feb 20 09:24:59 np0005625203.localdomain sudo[233789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:24:59 np0005625203.localdomain python3.9[233791]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:24:59 np0005625203.localdomain sudo[233789]: pam_unix(sudo:session): session closed for user root
Feb 20 09:25:00 np0005625203.localdomain sudo[233877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kqgwcfcznnqywmtaqpevaooqnmowmpkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579497.8422384-955-196232851130957/AnsiballZ_copy.py
Feb 20 09:25:00 np0005625203.localdomain sudo[233877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:25:00 np0005625203.localdomain python3.9[233879]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771579497.8422384-955-196232851130957/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:25:00 np0005625203.localdomain sudo[233877]: pam_unix(sudo:session): session closed for user root
Feb 20 09:25:01 np0005625203.localdomain sudo[233987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qwphunfwfxtugjzbzvlafdfmxzihkwnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579501.2224977-1051-26122347831618/AnsiballZ_file.py
Feb 20 09:25:01 np0005625203.localdomain sudo[233987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:25:01 np0005625203.localdomain python3.9[233989]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:25:01 np0005625203.localdomain sudo[233987]: pam_unix(sudo:session): session closed for user root
Feb 20 09:25:02 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34107 DF PROTO=TCP SPT=58682 DPT=9882 SEQ=3313430519 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA00E9010000000001030307) 
Feb 20 09:25:02 np0005625203.localdomain sudo[234097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qxnblbzrjzmfueygksttogpwdslsjmub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579501.9018888-1075-131353077815415/AnsiballZ_file.py
Feb 20 09:25:02 np0005625203.localdomain sudo[234097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:25:02 np0005625203.localdomain python3.9[234099]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:25:02 np0005625203.localdomain sudo[234097]: pam_unix(sudo:session): session closed for user root
Feb 20 09:25:02 np0005625203.localdomain sudo[234207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mjtvpuhwlyfhmkwjpnsvnphukpkfuleg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579502.5800984-1099-88998107534751/AnsiballZ_stat.py
Feb 20 09:25:02 np0005625203.localdomain sudo[234207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:25:03 np0005625203.localdomain python3.9[234209]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:25:03 np0005625203.localdomain sudo[234207]: pam_unix(sudo:session): session closed for user root
Feb 20 09:25:03 np0005625203.localdomain sudo[234297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ugyoqtslapqosmmoaaglavtfstugpohy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579502.5800984-1099-88998107534751/AnsiballZ_copy.py
Feb 20 09:25:03 np0005625203.localdomain sudo[234297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:25:03 np0005625203.localdomain python3.9[234299]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ceilometer_agent_compute.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1771579502.5800984-1099-88998107534751/.source.json _original_basename=.t23nf36p follow=False checksum=ce2b0c83293a970bafffa087afa083dd7c93a79c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:25:03 np0005625203.localdomain sudo[234297]: pam_unix(sudo:session): session closed for user root
Feb 20 09:25:04 np0005625203.localdomain python3.9[234407]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ceilometer_agent_compute state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:25:06 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=34108 DF PROTO=TCP SPT=58682 DPT=9882 SEQ=3313430519 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA00F8C00000000001030307) 
Feb 20 09:25:06 np0005625203.localdomain sudo[234709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ewhmacurtfjtkjsbfnbppapmhvwrawdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579506.148121-1219-163688248414861/AnsiballZ_container_config_data.py
Feb 20 09:25:06 np0005625203.localdomain sudo[234709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:25:06 np0005625203.localdomain python3.9[234711]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ceilometer_agent_compute config_pattern=*.json debug=False
Feb 20 09:25:06 np0005625203.localdomain sudo[234709]: pam_unix(sudo:session): session closed for user root
Feb 20 09:25:07 np0005625203.localdomain sudo[234819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pxusmqzgcrkqclbtitxdshcjloasttni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579507.1753974-1252-1253755588160/AnsiballZ_container_config_hash.py
Feb 20 09:25:07 np0005625203.localdomain sudo[234819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:25:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:25:07.637 161112 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:25:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:25:07.638 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:25:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:25:07.638 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:25:07 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:25:07.677 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:25:07 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:25:07.701 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:25:07 np0005625203.localdomain python3.9[234821]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb 20 09:25:07 np0005625203.localdomain sudo[234819]: pam_unix(sudo:session): session closed for user root
Feb 20 09:25:08 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16506 DF PROTO=TCP SPT=41482 DPT=9105 SEQ=1702601889 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0100800000000001030307) 
Feb 20 09:25:09 np0005625203.localdomain sudo[234929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bkujukzeqlrpqbbiaxshlyodatksmgpx ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771579508.980913-1282-110508419912297/AnsiballZ_edpm_container_manage.py
Feb 20 09:25:09 np0005625203.localdomain sudo[234929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:25:09 np0005625203.localdomain python3[234931]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ceilometer_agent_compute config_id=ceilometer_agent_compute config_overrides={} config_patterns=*.json containers=['ceilometer_agent_compute'] log_base_path=/var/log/containers/stdouts debug=False
Feb 20 09:25:09 np0005625203.localdomain python3[234931]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
                                                               {
                                                                    "Id": "be811c7ef606e5fdf21f4bb60e867487043c4ca0ef316c864692549ee6c1c369",
                                                                    "Digest": "sha256:ac1f7272c172d96937d32067aeabcc7fe133ed3e13c60a2317e815e24d8d2689",
                                                                    "RepoTags": [
                                                                         "quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified"
                                                                    ],
                                                                    "RepoDigests": [
                                                                         "quay.io/podified-antelope-centos9/openstack-ceilometer-compute@sha256:ac1f7272c172d96937d32067aeabcc7fe133ed3e13c60a2317e815e24d8d2689"
                                                                    ],
                                                                    "Parent": "",
                                                                    "Comment": "",
                                                                    "Created": "2026-01-30T06:22:47.562315026Z",
                                                                    "Config": {
                                                                         "User": "root",
                                                                         "Env": [
                                                                              "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                                                                              "LANG=en_US.UTF-8",
                                                                              "TZ=UTC",
                                                                              "container=oci"
                                                                         ],
                                                                         "Entrypoint": [
                                                                              "dumb-init",
                                                                              "--single-child",
                                                                              "--"
                                                                         ],
                                                                         "Cmd": [
                                                                              "kolla_start"
                                                                         ],
                                                                         "Labels": {
                                                                              "io.buildah.version": "1.41.3",
                                                                              "maintainer": "OpenStack Kubernetes Operator team",
                                                                              "org.label-schema.build-date": "20260127",
                                                                              "org.label-schema.license": "GPLv2",
                                                                              "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                              "org.label-schema.schema-version": "1.0",
                                                                              "org.label-schema.vendor": "CentOS",
                                                                              "tcib_build_tag": "b85d0548925081ae8c6bdd697658cec4",
                                                                              "tcib_managed": "true"
                                                                         },
                                                                         "StopSignal": "SIGTERM"
                                                                    },
                                                                    "Version": "",
                                                                    "Author": "",
                                                                    "Architecture": "amd64",
                                                                    "Os": "linux",
                                                                    "Size": 506512639,
                                                                    "VirtualSize": 506512639,
                                                                    "GraphDriver": {
                                                                         "Name": "overlay",
                                                                         "Data": {
                                                                              "LowerDir": "/var/lib/containers/storage/overlay/c649efc911c887686c8351fe543502de582148a048396cbc7ad85b29ea075fe6/diff:/var/lib/containers/storage/overlay/1d7b7d3208029afb8b179e48c365354efe7c39d41194e42a7d13168820ab51ad/diff:/var/lib/containers/storage/overlay/1ad843ea4b31b05bcf49ccd6faa74bd0d6976ffabe60466fd78caf7ec41bf4ac/diff:/var/lib/containers/storage/overlay/57c9a356b8a6d9095c1e6bfd1bb5d3b87c9d1b944c2c5d8a1da6e61dd690c595/diff",
                                                                              "UpperDir": "/var/lib/containers/storage/overlay/71578ef0b1e4f969b13e723033957534bc6c3b31bff47c5fdb42a55e43d4cef9/diff",
                                                                              "WorkDir": "/var/lib/containers/storage/overlay/71578ef0b1e4f969b13e723033957534bc6c3b31bff47c5fdb42a55e43d4cef9/work"
                                                                         }
                                                                    },
                                                                    "RootFS": {
                                                                         "Type": "layers",
                                                                         "Layers": [
                                                                              "sha256:57c9a356b8a6d9095c1e6bfd1bb5d3b87c9d1b944c2c5d8a1da6e61dd690c595",
                                                                              "sha256:315008a247098d7a6218ae8aaacc68c9c19036e3778f3bb6313e5d0200cfa613",
                                                                              "sha256:d3142d7a25f00adc375557623676c786baeb2b8fec29945db7fe79212198a495",
                                                                              "sha256:439ba2a9156018a21d5d8f457e8fb5fa9d39d0de094f0cf38abf8f5215170cd7",
                                                                              "sha256:dd5ae5ce1d5c4d01e233915d61f7cac1450768a920fde6603b0c84bf26180c44"
                                                                         ]
                                                                    },
                                                                    "Labels": {
                                                                         "io.buildah.version": "1.41.3",
                                                                         "maintainer": "OpenStack Kubernetes Operator team",
                                                                         "org.label-schema.build-date": "20260127",
                                                                         "org.label-schema.license": "GPLv2",
                                                                         "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                         "org.label-schema.schema-version": "1.0",
                                                                         "org.label-schema.vendor": "CentOS",
                                                                         "tcib_build_tag": "b85d0548925081ae8c6bdd697658cec4",
                                                                         "tcib_managed": "true"
                                                                    },
                                                                    "Annotations": {},
                                                                    "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",
                                                                    "User": "root",
                                                                    "History": [
                                                                         {
                                                                              "created": "2026-01-28T05:56:51.126388624Z",
                                                                              "created_by": "/bin/sh -c #(nop) ADD file:54935d5b0598cdb1451aeae3c8627aade8d55dcef2e876b35185c8e36be64256 in / ",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-28T05:56:51.126459235Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 9 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20260127\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-28T05:56:53.726938221Z",
                                                                              "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:18.890429494Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",
                                                                              "comment": "FROM quay.io/centos/centos:stream9",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:18.890534417Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:18.890553228Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:18.890570688Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:18.890616649Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:18.890659121Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:19.232761948Z",
                                                                              "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:52.670543613Z",
                                                                              "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:55.650316471Z",
                                                                              "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which python-tcib-containers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:55.970652058Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/uid_gid_manage.sh /usr/local/bin/uid_gid_manage",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:56.274301506Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/uid_gid_manage",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:56.82928237Z",
                                                                              "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage kolla hugetlbfs libvirt qemu",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:57.134416869Z",
                                                                              "created_by": "/bin/sh -c touch /usr/local/bin/kolla_extend_start && chmod 755 /usr/local/bin/kolla_extend_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:57.444274899Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/set_configs.py /usr/local/bin/kolla_set_configs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:57.746599531Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_set_configs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:58.041383545Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/start.sh /usr/local/bin/kolla_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:58.352119949Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:58.671042058Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/httpd_setup.sh /usr/local/bin/kolla_httpd_setup",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:58.969834612Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_httpd_setup",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:59.264649297Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/copy_cacerts.sh /usr/local/bin/kolla_copy_cacerts",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:59.518696627Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_copy_cacerts",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:59.800434902Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/sudoers /etc/sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:00.115933627Z",
                                                                              "created_by": "/bin/sh -c chmod 440 /etc/sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:00.41398479Z",
                                                                              "created_by": "/bin/sh -c sed -ri '/^(passwd:|group:)/ s/systemd//g' /etc/nsswitch.conf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:03.414738437Z",
                                                                              "created_by": "/bin/sh -c dnf -y reinstall which && rpm -e --nodeps tzdata && dnf -y install tzdata",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:03.709666444Z",
                                                                              "created_by": "/bin/sh -c if [ ! -f \"/etc/localtime\" ]; then ln -s /usr/share/zoneinfo/Etc/UTC /etc/localtime; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:04.019868523Z",
                                                                              "created_by": "/bin/sh -c mkdir -p /openstack",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:05.41751141Z",
                                                                              "created_by": "/bin/sh -c if [ 'centos' == 'centos' ];then if [ -n \"$(rpm -qa redhat-release)\" ];then rpm -e --nodeps redhat-release; fi ; dnf -y install centos-stream-release; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:07.124324267Z",
                                                                              "created_by": "/bin/sh -c dnf update --excludepkgs redhat-release -y && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:07.124384329Z",
                                                                              "created_by": "/bin/sh -c #(nop) STOPSIGNAL SIGTERM",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:07.124399349Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENTRYPOINT [\"dumb-init\", \"--single-child\", \"--\"]",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:07.124410339Z",
                                                                              "created_by": "/bin/sh -c #(nop) CMD [\"kolla_start\"]",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:08.028503475Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"b85d0548925081ae8c6bdd697658cec4\""
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:12:56.089921987Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-base:b85d0548925081ae8c6bdd697658cec4",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:13:34.524252589Z",
                                                                              "created_by": "/bin/sh -c dnf install -y python3-barbicanclient python3-cinderclient python3-designateclient python3-glanceclient python3-ironicclient python3-keystoneclient python3-manilaclient python3-neutronclient python3-novaclient python3-observabilityclient python3-octaviaclient python3-openstackclient python3-swiftclient python3-pymemcache && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:13:37.262239859Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"b85d0548925081ae8c6bdd697658cec4\""
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:15:04.692187463Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-os:b85d0548925081ae8c6bdd697658cec4",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:15:07.73027664Z",
                                                                              "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage ceilometer",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:15:50.46772776Z",
                                                                              "created_by": "/bin/sh -c dnf -y install openstack-ceilometer-common && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:15:52.957817153Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"b85d0548925081ae8c6bdd697658cec4\""
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:22:08.791988588Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-ceilometer-base:b85d0548925081ae8c6bdd697658cec4",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:22:47.559747806Z",
                                                                              "created_by": "/bin/sh -c dnf -y install openstack-ceilometer-compute && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:22:51.022505453Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"b85d0548925081ae8c6bdd697658cec4\""
                                                                         }
                                                                    ],
                                                                    "NamesHistory": [
                                                                         "quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified"
                                                                    ]
                                                               }
                                                          ]
                                                          : quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified
Feb 20 09:25:10 np0005625203.localdomain podman[234984]: 2026-02-20 09:25:10.077846056 +0000 UTC m=+0.092103231 container remove 5603dc0b2e5c559c9b5cd2d514bcaa3111f8ce120870a3ce02864d5b3c8d1a29 (image=registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1, name=ceilometer_agent_compute, vcs-ref=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, version=17.1.13, config_id=tripleo_step4, com.redhat.component=openstack-ceilometer-compute-container, cpe=cpe:/a:redhat:openstack:17.1::el9, config_data={'depends_on': ['tripleo_nova_libvirt.target'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '201974126bd6c3f7e7b4f5296aea3207'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': 'registry.redhat.io/rhosp-rhel9/openstack-ceilometer-compute:17.1', 'net': 'host', 'privileged': False, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ceilometer:/var/lib/kolla/config_files/src:ro', '/run/libvirt:/run/libvirt:shared,z', '/var/log/containers/ceilometer:/var/log/ceilometer:z']}, managed_by=tripleo_ansible, vcs-type=git, summary=Red Hat OpenStack Platform 17.1 ceilometer-compute, batch=17.1_20260112.1, io.k8s.description=Red Hat OpenStack Platform 17.1 ceilometer-compute, distribution-scope=public, name=rhosp-rhel9/openstack-ceilometer-compute, container_name=ceilometer_agent_compute, architecture=x86_64, org.opencontainers.image.created=2026-01-12T23:07:47Z, 
vendor=Red Hat, Inc., konflux.additional-tags=17.1.13 17.1_20260112.1, io.openshift.expose-services=, description=Red Hat OpenStack Platform 17.1 ceilometer-compute, tcib_managed=true, org.opencontainers.image.revision=4f6dafcdcf2b41e56d843f4a43eac1db1e9fee06, maintainer=OpenStack TripleO Team, io.openshift.tags=rhosp osp openstack osp-17.1 openstack-ceilometer-compute, io.k8s.display-name=Red Hat OpenStack Platform 17.1 ceilometer-compute, release=1766032510, baseimage=registry.redhat.io/rhel9-2-els/rhel:9.2@sha256:746ff7bb175b780f02d72aa0177aa6d6ba4ebbee4456d01e05d465c1515e02b0, build-date=2026-01-12T23:07:47Z, url=https://www.redhat.com, io.buildah.version=1.41.5, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 09:25:10 np0005625203.localdomain python3[234931]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force ceilometer_agent_compute
Feb 20 09:25:10 np0005625203.localdomain podman[234997]: 
Feb 20 09:25:10 np0005625203.localdomain podman[234997]: 2026-02-20 09:25:10.184731047 +0000 UTC m=+0.088132839 container create aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, 
org.label-schema.build-date=20260127)
Feb 20 09:25:10 np0005625203.localdomain podman[234997]: 2026-02-20 09:25:10.144261753 +0000 UTC m=+0.047663605 image pull  quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified
Feb 20 09:25:10 np0005625203.localdomain python3[234931]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --env EDPM_CONFIG_HASH=6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683 --healthcheck-command /openstack/healthcheck compute --label config_id=ceilometer_agent_compute --label container_name=ceilometer_agent_compute --label managed_by=edpm_ansible --label config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']} --log-driver journald --log-level info --network host --security-opt label:type:ceilometer_polling_t --user ceilometer 
--volume /var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z --volume /var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z --volume /run/libvirt:/run/libvirt:shared,ro --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified kolla_start
Feb 20 09:25:10 np0005625203.localdomain sudo[234929]: pam_unix(sudo:session): session closed for user root
Feb 20 09:25:11 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=3955 DF PROTO=TCP SPT=59440 DPT=9105 SEQ=1194372364 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA010C800000000001030307) 
Feb 20 09:25:11 np0005625203.localdomain sudo[235142]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ckwsjtzjneliagegurhvmaouuuosfcvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579511.1919725-1306-68252403256005/AnsiballZ_stat.py
Feb 20 09:25:11 np0005625203.localdomain sudo[235142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:25:11 np0005625203.localdomain python3.9[235144]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:25:11 np0005625203.localdomain sudo[235142]: pam_unix(sudo:session): session closed for user root
Feb 20 09:25:12 np0005625203.localdomain sudo[235254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wutkkowkkuhkazbmuscapdbanczvoahe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579512.0808756-1333-98907776732151/AnsiballZ_file.py
Feb 20 09:25:12 np0005625203.localdomain sudo[235254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:25:12 np0005625203.localdomain python3.9[235256]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:25:12 np0005625203.localdomain sudo[235254]: pam_unix(sudo:session): session closed for user root
Feb 20 09:25:12 np0005625203.localdomain sudo[235309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kxpibyvsmfuvdoqvtwbazyyqxdsydunu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579512.0808756-1333-98907776732151/AnsiballZ_stat.py
Feb 20 09:25:12 np0005625203.localdomain sudo[235309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:25:12 np0005625203.localdomain python3.9[235311]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:25:12 np0005625203.localdomain sudo[235309]: pam_unix(sudo:session): session closed for user root
Feb 20 09:25:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:25:13.201 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:25:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:25:13.202 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:25:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:25:13.202 228941 DEBUG nova.compute.manager [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 20 09:25:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:25:13.202 228941 DEBUG nova.compute.manager [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 20 09:25:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:25:13.220 228941 DEBUG nova.compute.manager [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 20 09:25:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:25:13.220 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:25:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:25:13.221 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:25:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:25:13.222 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:25:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:25:13.222 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:25:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:25:13.222 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:25:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:25:13.223 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:25:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:25:13.223 228941 DEBUG nova.compute.manager [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 20 09:25:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:25:13.224 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:25:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:25:13.247 228941 DEBUG oslo_concurrency.lockutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:25:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:25:13.247 228941 DEBUG oslo_concurrency.lockutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:25:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:25:13.247 228941 DEBUG oslo_concurrency.lockutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:25:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:25:13.248 228941 DEBUG nova.compute.resource_tracker [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Auditing locally available compute resources for np0005625203.localdomain (node: np0005625203.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 20 09:25:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:25:13.248 228941 DEBUG oslo_concurrency.processutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:25:13 np0005625203.localdomain sudo[235438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vzjppxfxebqufqhpjianxtlinkperavn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579513.0223312-1333-191901897524430/AnsiballZ_copy.py
Feb 20 09:25:13 np0005625203.localdomain sudo[235438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:25:13 np0005625203.localdomain python3.9[235440]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771579513.0223312-1333-191901897524430/source dest=/etc/systemd/system/edpm_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:25:13 np0005625203.localdomain sudo[235438]: pam_unix(sudo:session): session closed for user root
Feb 20 09:25:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:25:13.737 228941 DEBUG oslo_concurrency.processutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:25:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:25:13.935 228941 WARNING nova.virt.libvirt.driver [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 20 09:25:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:25:13.936 228941 DEBUG nova.compute.resource_tracker [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Hypervisor/Node resource view: name=np0005625203.localdomain free_ram=13613MB free_disk=41.83720779418945GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": 
null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 20 09:25:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:25:13.936 228941 DEBUG oslo_concurrency.lockutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:25:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:25:13.937 228941 DEBUG oslo_concurrency.lockutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:25:14 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:25:14.010 228941 DEBUG nova.compute.resource_tracker [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 20 09:25:14 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:25:14.010 228941 DEBUG nova.compute.resource_tracker [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Final resource view: name=np0005625203.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 20 09:25:14 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:25:14.042 228941 DEBUG oslo_concurrency.processutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:25:14 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16508 DF PROTO=TCP SPT=41482 DPT=9105 SEQ=1702601889 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0118400000000001030307) 
Feb 20 09:25:14 np0005625203.localdomain sudo[235515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nhuovpdfwexopqeptblijfaoghsskliu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579513.0223312-1333-191901897524430/AnsiballZ_systemd.py
Feb 20 09:25:14 np0005625203.localdomain sudo[235515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:25:14 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:25:14.497 228941 DEBUG oslo_concurrency.processutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:25:14 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:25:14.504 228941 DEBUG nova.compute.provider_tree [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Inventory has not changed in ProviderTree for provider: e5d5157a-2df2-4f51-b5fb-cd2da3a8584e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 20 09:25:14 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:25:14.526 228941 DEBUG nova.scheduler.client.report [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Inventory has not changed for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 20 09:25:14 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:25:14.529 228941 DEBUG nova.compute.resource_tracker [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Compute_service record updated for np0005625203.localdomain:np0005625203.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 20 09:25:14 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:25:14.529 228941 DEBUG oslo_concurrency.lockutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.592s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:25:14 np0005625203.localdomain python3.9[235517]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 20 09:25:14 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:25:14 np0005625203.localdomain systemd-sysv-generator[235550]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:25:14 np0005625203.localdomain systemd-rc-local-generator[235547]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:25:14 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:25:14 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 09:25:14 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:25:14 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:25:14 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:25:14 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 09:25:14 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:25:14 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:25:14 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:25:14 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:25:14 np0005625203.localdomain sudo[235515]: pam_unix(sudo:session): session closed for user root
Feb 20 09:25:14 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:25:15 np0005625203.localdomain podman[235556]: 2026-02-20 09:25:15.003329573 +0000 UTC m=+0.106495550 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Feb 20 09:25:15 np0005625203.localdomain podman[235556]: 2026-02-20 09:25:15.038228468 +0000 UTC m=+0.141394465 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:25:15 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:25:15 np0005625203.localdomain podman[235591]: 2026-02-20 09:25:15.080788716 +0000 UTC m=+0.075819934 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller)
Feb 20 09:25:15 np0005625203.localdomain podman[235591]: 2026-02-20 09:25:15.18252429 +0000 UTC m=+0.177555518 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller)
Feb 20 09:25:15 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:25:15 np0005625203.localdomain sudo[235652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ivqknycdnmmgliwqoponlwikoylcxbcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579513.0223312-1333-191901897524430/AnsiballZ_systemd.py
Feb 20 09:25:15 np0005625203.localdomain sudo[235652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:25:15 np0005625203.localdomain python3.9[235654]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:25:15 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:25:15 np0005625203.localdomain systemd-rc-local-generator[235683]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:25:15 np0005625203.localdomain systemd-sysv-generator[235686]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:25:15 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:25:15 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 09:25:15 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:25:15 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:25:15 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:25:15 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 09:25:15 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:25:15 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:25:15 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:25:15 np0005625203.localdomain systemd[1]: Starting ceilometer_agent_compute container...
Feb 20 09:25:16 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:25:16 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3574fa9636cd93d60277cce89b182b5d3ee2348eeb2f6f18492b615df0cec532/merged/var/lib/kolla/config_files/src supports timestamps until 2038 (0x7fffffff)
Feb 20 09:25:16 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3574fa9636cd93d60277cce89b182b5d3ee2348eeb2f6f18492b615df0cec532/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Feb 20 09:25:16 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:25:16 np0005625203.localdomain podman[235694]: 2026-02-20 09:25:16.071909814 +0000 UTC m=+0.157701433 container init aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: + sudo -E kolla_set_configs
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: sudo: unable to send audit message: Operation not permitted
Feb 20 09:25:16 np0005625203.localdomain sudo[235715]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Feb 20 09:25:16 np0005625203.localdomain sudo[235715]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Feb 20 09:25:16 np0005625203.localdomain sudo[235715]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Feb 20 09:25:16 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:25:16 np0005625203.localdomain podman[235694]: 2026-02-20 09:25:16.11179685 +0000 UTC m=+0.197588479 container start aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:25:16 np0005625203.localdomain podman[235694]: ceilometer_agent_compute
Feb 20 09:25:16 np0005625203.localdomain systemd[1]: Started ceilometer_agent_compute container.
Feb 20 09:25:16 np0005625203.localdomain sshd[235724]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:25:16 np0005625203.localdomain sudo[235652]: pam_unix(sudo:session): session closed for user root
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: INFO:__main__:Validating config file
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: INFO:__main__:Copying service configuration files
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: INFO:__main__:Copying /var/lib/kolla/config_files/src/polling.yaml to /etc/ceilometer/polling.yaml
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: INFO:__main__:Copying /var/lib/kolla/config_files/src/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: INFO:__main__:Writing out command to execute
Feb 20 09:25:16 np0005625203.localdomain sudo[235715]: pam_unix(sudo:session): session closed for user root
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: ++ cat /run_command
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: + ARGS=
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: + sudo kolla_copy_cacerts
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: sudo: unable to send audit message: Operation not permitted
Feb 20 09:25:16 np0005625203.localdomain sudo[235732]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Feb 20 09:25:16 np0005625203.localdomain sudo[235732]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Feb 20 09:25:16 np0005625203.localdomain sudo[235732]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Feb 20 09:25:16 np0005625203.localdomain sudo[235732]: pam_unix(sudo:session): session closed for user root
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: + [[ ! -n '' ]]
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: + . kolla_extend_start
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: + umask 0022
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Feb 20 09:25:16 np0005625203.localdomain podman[235718]: 2026-02-20 09:25:16.228258474 +0000 UTC m=+0.108772550 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=starting, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 20 09:25:16 np0005625203.localdomain podman[235718]: 2026-02-20 09:25:16.260230189 +0000 UTC m=+0.140744275 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260127, config_id=ceilometer_agent_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 20 09:25:16 np0005625203.localdomain podman[235718]: unhealthy
Feb 20 09:25:16 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 09:25:16 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Failed with result 'exit-code'.
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.964 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.965 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.965 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.965 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.965 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.965 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.965 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.965 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.965 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.965 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.966 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.966 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.966 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.966 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.966 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.966 2 DEBUG cotyledon.oslo_config_glue [-] host                           = np0005625203.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.966 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.966 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.966 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.966 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.966 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.967 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.967 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.967 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.967 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.967 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.967 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.967 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.967 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.967 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.967 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.967 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.967 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.967 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.968 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.968 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.968 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.968 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.968 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.968 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.968 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.968 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.968 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.968 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.968 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.968 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.968 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.969 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.969 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.969 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.969 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.969 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.969 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.969 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.969 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.969 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.969 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.969 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.969 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.970 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.970 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.970 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.970 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.970 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.970 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.970 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.970 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.970 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.970 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.970 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.970 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.971 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.971 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.971 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.971 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.971 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.971 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.971 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.971 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.971 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.971 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.971 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.971 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.972 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.972 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.972 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.972 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.972 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.972 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.972 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.972 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.972 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.972 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.972 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.972 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.973 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.973 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.973 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.973 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.973 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.973 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.973 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.973 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.973 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.973 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.973 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.974 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.974 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.974 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.974 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.974 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.974 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.974 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.974 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.974 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.974 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.974 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.974 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.975 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.975 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.975 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.975 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.975 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.975 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.975 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.975 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.975 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.975 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.975 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.975 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.976 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.976 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.976 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.976 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.976 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.976 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.976 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.976 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.976 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.976 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.976 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.976 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.977 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.977 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.977 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.977 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.977 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.977 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.977 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.977 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.977 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.977 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.977 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.977 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.977 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.978 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.978 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.978 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.978 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.978 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.978 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.978 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.978 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.978 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.978 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.996 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.998 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Feb 20 09:25:16 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:16.999 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Feb 20 09:25:17 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49156 DF PROTO=TCP SPT=47466 DPT=9101 SEQ=3032420798 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0123800000000001030307) 
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.091 12 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/utils.py:93
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.149 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.149 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.149 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.149 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.149 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.149 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.149 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.149 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.149 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.150 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.150 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.150 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.150 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.150 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.150 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.150 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.150 12 DEBUG cotyledon.oslo_config_glue [-] host                           = np0005625203.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.150 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.150 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.150 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.151 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.151 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.151 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.151 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.151 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.151 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.151 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.151 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.151 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.151 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.151 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.151 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.151 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.152 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.152 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.152 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.152 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.152 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.152 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.152 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.152 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.152 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.152 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.152 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.152 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.152 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.153 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.153 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.153 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.153 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.153 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.153 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.153 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.153 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.153 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.153 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.153 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.153 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.154 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.154 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.154 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.154 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.154 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.154 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.154 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.154 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.154 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.154 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.154 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.154 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.154 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.155 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.155 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.155 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.155 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.155 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.155 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.155 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.155 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.155 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.155 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.155 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.155 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.156 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.156 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.156 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.156 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.156 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.156 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.156 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.156 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.156 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.156 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.156 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.156 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.156 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.157 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.157 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.157 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.157 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.157 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.157 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.157 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.157 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.157 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.157 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.157 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.158 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.158 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.158 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.158 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.158 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.158 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.158 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.158 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.158 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.158 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.158 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.158 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.159 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.159 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.159 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.159 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.159 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.159 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.159 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.159 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.159 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.159 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.159 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.159 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.159 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.160 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.160 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.160 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.160 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.160 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.160 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.160 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.160 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.160 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.160 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.160 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.160 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.160 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.161 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.161 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.161 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.161 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.161 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.161 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.161 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.161 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.161 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.161 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.161 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.161 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.161 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.162 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.162 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.162 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.162 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.162 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.162 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.162 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.162 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.162 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.162 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.162 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.162 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.162 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.163 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.163 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.163 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.163 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.163 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.163 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.163 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.163 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.163 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.163 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.163 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.163 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.163 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.164 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.164 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.164 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.164 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.164 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.164 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.164 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.164 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.164 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.164 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.164 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.164 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.165 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.165 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.165 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.165 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.165 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.165 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.165 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.165 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.165 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.165 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.165 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.165 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.166 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.166 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.166 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.166 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.166 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.166 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.166 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.166 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.166 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.166 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.166 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.169 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.177 12 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.9/site-packages/ceilometer/compute/virt/libvirt/utils.py:93
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.180 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.180 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.180 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.180 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.180 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.181 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.181 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.181 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.181 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.181 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.181 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.181 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.181 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.182 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.182 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.182 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.182 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.182 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.182 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.182 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.183 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.183 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.183 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.183 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:25:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:25:17.183 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:25:17 np0005625203.localdomain sshd[235724]: Invalid user sybase from 118.99.80.29 port 26625
Feb 20 09:25:17 np0005625203.localdomain sshd[235724]: Received disconnect from 118.99.80.29 port 26625:11: Bye Bye [preauth]
Feb 20 09:25:17 np0005625203.localdomain sshd[235724]: Disconnected from invalid user sybase 118.99.80.29 port 26625 [preauth]
Feb 20 09:25:18 np0005625203.localdomain python3.9[235855]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb 20 09:25:19 np0005625203.localdomain sudo[235963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bkwevqfttuvubvzlwmrtymoiqjaqoiat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579519.0091372-1468-103826739249266/AnsiballZ_stat.py
Feb 20 09:25:19 np0005625203.localdomain sudo[235963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:25:19 np0005625203.localdomain python3.9[235965]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:25:19 np0005625203.localdomain sudo[235963]: pam_unix(sudo:session): session closed for user root
Feb 20 09:25:19 np0005625203.localdomain sudo[236053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-grwjottvyghmmidhtmyoejlpfyjhlctr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579519.0091372-1468-103826739249266/AnsiballZ_copy.py
Feb 20 09:25:19 np0005625203.localdomain sudo[236053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:25:20 np0005625203.localdomain python3.9[236055]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771579519.0091372-1468-103826739249266/.source.yaml _original_basename=.7rzyzwpm follow=False checksum=759c0783fe604271cc6640bac1339e1b1de19d54 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:25:20 np0005625203.localdomain sudo[236053]: pam_unix(sudo:session): session closed for user root
Feb 20 09:25:20 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33114 DF PROTO=TCP SPT=36000 DPT=9100 SEQ=2048368840 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA012FC00000000001030307) 
Feb 20 09:25:21 np0005625203.localdomain sudo[236163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cuxmlvrfvehwbanxpunpbcgeboddwvwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579520.8956788-1513-88032372906587/AnsiballZ_stat.py
Feb 20 09:25:21 np0005625203.localdomain sudo[236163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:25:21 np0005625203.localdomain python3.9[236165]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/node_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:25:21 np0005625203.localdomain sudo[236163]: pam_unix(sudo:session): session closed for user root
Feb 20 09:25:21 np0005625203.localdomain sudo[236251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pqouwynlvsovrkpveecntqqzycxjjoxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579520.8956788-1513-88032372906587/AnsiballZ_copy.py
Feb 20 09:25:21 np0005625203.localdomain sudo[236251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:25:22 np0005625203.localdomain python3.9[236253]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/node_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771579520.8956788-1513-88032372906587/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:25:22 np0005625203.localdomain sudo[236251]: pam_unix(sudo:session): session closed for user root
Feb 20 09:25:23 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49158 DF PROTO=TCP SPT=47466 DPT=9101 SEQ=3032420798 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA013B400000000001030307) 
Feb 20 09:25:23 np0005625203.localdomain sudo[236361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tgshpntuyhfkocggpfhtqkkeeddwkqvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579522.9869604-1576-163099992479246/AnsiballZ_file.py
Feb 20 09:25:23 np0005625203.localdomain sudo[236361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:25:23 np0005625203.localdomain python3.9[236363]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:25:23 np0005625203.localdomain sudo[236361]: pam_unix(sudo:session): session closed for user root
Feb 20 09:25:24 np0005625203.localdomain sudo[236471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-opqlokgxyzhobyipunhmmmjkslutrogr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579523.7764678-1600-60903118815207/AnsiballZ_file.py
Feb 20 09:25:24 np0005625203.localdomain sudo[236471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:25:24 np0005625203.localdomain python3.9[236473]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:25:24 np0005625203.localdomain sudo[236471]: pam_unix(sudo:session): session closed for user root
Feb 20 09:25:24 np0005625203.localdomain sudo[236581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-syfxpscngrrdhoqirwaxapfhkrkvanmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579524.4956138-1624-206218689124674/AnsiballZ_stat.py
Feb 20 09:25:24 np0005625203.localdomain sudo[236581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:25:24 np0005625203.localdomain python3.9[236583]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:25:24 np0005625203.localdomain sudo[236581]: pam_unix(sudo:session): session closed for user root
Feb 20 09:25:25 np0005625203.localdomain sudo[236638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-djkbhpqndeywrntnwdosyzqcxuvcnest ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579524.4956138-1624-206218689124674/AnsiballZ_file.py
Feb 20 09:25:25 np0005625203.localdomain sudo[236638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:25:25 np0005625203.localdomain python3.9[236640]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/ceilometer_agent_compute.json _original_basename=.m_kglfwo recurse=False state=file path=/var/lib/kolla/config_files/ceilometer_agent_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:25:25 np0005625203.localdomain sudo[236638]: pam_unix(sudo:session): session closed for user root
Feb 20 09:25:26 np0005625203.localdomain python3.9[236748]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/node_exporter state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:25:26 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33116 DF PROTO=TCP SPT=36000 DPT=9100 SEQ=2048368840 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0147810000000001030307) 
Feb 20 09:25:28 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59613 DF PROTO=TCP SPT=37150 DPT=9882 SEQ=198243455 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA01523D0000000001030307) 
Feb 20 09:25:30 np0005625203.localdomain sudo[237050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-owcolbjskdfrwvejarueurjlfatgilsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579529.8322358-1735-115291844781337/AnsiballZ_container_config_data.py
Feb 20 09:25:30 np0005625203.localdomain sudo[237050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:25:30 np0005625203.localdomain python3.9[237052]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/node_exporter config_pattern=*.json debug=False
Feb 20 09:25:30 np0005625203.localdomain sudo[237050]: pam_unix(sudo:session): session closed for user root
Feb 20 09:25:31 np0005625203.localdomain sudo[237160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fnvndhwhgekkagglsnrgafmukzkyrhlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579531.5367885-1768-36774448048000/AnsiballZ_container_config_hash.py
Feb 20 09:25:31 np0005625203.localdomain sudo[237160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:25:32 np0005625203.localdomain python3.9[237162]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb 20 09:25:32 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59615 DF PROTO=TCP SPT=37150 DPT=9882 SEQ=198243455 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA015E400000000001030307) 
Feb 20 09:25:32 np0005625203.localdomain sudo[237160]: pam_unix(sudo:session): session closed for user root
Feb 20 09:25:33 np0005625203.localdomain sudo[237270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dwosqookkoeslcparnxhqsfgclwawdjh ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771579532.904837-1798-70623335415360/AnsiballZ_edpm_container_manage.py
Feb 20 09:25:33 np0005625203.localdomain sudo[237270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:25:33 np0005625203.localdomain python3[237272]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/node_exporter config_id=node_exporter config_overrides={} config_patterns=*.json containers=['node_exporter'] log_base_path=/var/log/containers/stdouts debug=False
Feb 20 09:25:33 np0005625203.localdomain podman[237309]: 
Feb 20 09:25:33 np0005625203.localdomain podman[237309]: 2026-02-20 09:25:33.795299829 +0000 UTC m=+0.080515787 container create 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=node_exporter, container_name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 20 09:25:33 np0005625203.localdomain podman[237309]: 2026-02-20 09:25:33.754088492 +0000 UTC m=+0.039304470 image pull  quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c
Feb 20 09:25:33 np0005625203.localdomain python3[237272]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name node_exporter --conmon-pidfile /run/node_exporter.pid --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck node_exporter --label config_id=node_exporter --label container_name=node_exporter --label managed_by=edpm_ansible --label config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9100:9100 --user root --volume /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw --volume /:/rootfs:ro --volume /var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c --web.disable-exporter-metrics --collector.systemd --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service --no-collector.dmi --no-collector.entropy --no-collector.thermal_zone --no-collector.time --no-collector.timex --no-collector.uname --no-collector.stat --no-collector.hwmon --no-collector.os --no-collector.selinux --no-collector.textfile --no-collector.powersupplyclass --no-collector.pressure --no-collector.rapl --path.rootfs=/rootfs
Feb 20 09:25:33 np0005625203.localdomain sudo[237270]: pam_unix(sudo:session): session closed for user root
Feb 20 09:25:35 np0005625203.localdomain sudo[237454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vcljwxcbwsrdaybwxdfuvxjzvlqpglhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579534.1881285-1822-168682569078207/AnsiballZ_stat.py
Feb 20 09:25:35 np0005625203.localdomain sudo[237454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:25:35 np0005625203.localdomain python3.9[237456]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:25:35 np0005625203.localdomain sudo[237454]: pam_unix(sudo:session): session closed for user root
Feb 20 09:25:35 np0005625203.localdomain sudo[237566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xnwyinyksigwasphmjwslyzeztjetheg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579535.672942-1849-186182308496098/AnsiballZ_file.py
Feb 20 09:25:35 np0005625203.localdomain sudo[237566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:25:36 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=59616 DF PROTO=TCP SPT=37150 DPT=9882 SEQ=198243455 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA016E010000000001030307) 
Feb 20 09:25:36 np0005625203.localdomain python3.9[237568]: ansible-file Invoked with path=/etc/systemd/system/edpm_node_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:25:36 np0005625203.localdomain sudo[237566]: pam_unix(sudo:session): session closed for user root
Feb 20 09:25:36 np0005625203.localdomain sudo[237621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-acaipeibfsygycgxoyzoaidfuooonagz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579535.672942-1849-186182308496098/AnsiballZ_stat.py
Feb 20 09:25:36 np0005625203.localdomain sudo[237621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:25:36 np0005625203.localdomain python3.9[237623]: ansible-stat Invoked with path=/etc/systemd/system/edpm_node_exporter_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:25:36 np0005625203.localdomain sudo[237621]: pam_unix(sudo:session): session closed for user root
Feb 20 09:25:36 np0005625203.localdomain sshd[237629]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:25:37 np0005625203.localdomain sudo[237732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hamfsxyncupaicuokpebrbfmmlgmlscu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579536.6352067-1849-29187848283932/AnsiballZ_copy.py
Feb 20 09:25:37 np0005625203.localdomain sudo[237732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:25:37 np0005625203.localdomain sshd[237629]: Invalid user admin from 5.253.59.68 port 32964
Feb 20 09:25:37 np0005625203.localdomain sshd[237629]: Received disconnect from 5.253.59.68 port 32964:11: Bye Bye [preauth]
Feb 20 09:25:37 np0005625203.localdomain sshd[237629]: Disconnected from invalid user admin 5.253.59.68 port 32964 [preauth]
Feb 20 09:25:37 np0005625203.localdomain python3.9[237734]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771579536.6352067-1849-29187848283932/source dest=/etc/systemd/system/edpm_node_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:25:37 np0005625203.localdomain sudo[237732]: pam_unix(sudo:session): session closed for user root
Feb 20 09:25:37 np0005625203.localdomain sudo[237787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dagmagnaaorxwshqshvxxxkpgbosvxzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579536.6352067-1849-29187848283932/AnsiballZ_systemd.py
Feb 20 09:25:37 np0005625203.localdomain sudo[237787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:25:37 np0005625203.localdomain python3.9[237789]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 20 09:25:37 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:25:37 np0005625203.localdomain systemd-sysv-generator[237815]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:25:37 np0005625203.localdomain systemd-rc-local-generator[237809]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:25:38 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:25:38 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 09:25:38 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:25:38 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:25:38 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:25:38 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 09:25:38 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:25:38 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:25:38 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:25:38 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39542 DF PROTO=TCP SPT=53238 DPT=9105 SEQ=1193739873 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0175C10000000001030307) 
Feb 20 09:25:38 np0005625203.localdomain sudo[237787]: pam_unix(sudo:session): session closed for user root
Feb 20 09:25:38 np0005625203.localdomain sudo[237877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pxftqjeatcoxgfznfxbtpmtfyrqmgffa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579536.6352067-1849-29187848283932/AnsiballZ_systemd.py
Feb 20 09:25:38 np0005625203.localdomain sudo[237877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:25:38 np0005625203.localdomain python3.9[237879]: ansible-systemd Invoked with state=restarted name=edpm_node_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:25:38 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:25:39 np0005625203.localdomain systemd-rc-local-generator[237907]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:25:39 np0005625203.localdomain systemd-sysv-generator[237913]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:25:39 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:25:39 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 09:25:39 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:25:39 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:25:39 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:25:39 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 09:25:39 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:25:39 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:25:39 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:25:39 np0005625203.localdomain systemd[1]: Starting node_exporter container...
Feb 20 09:25:39 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:25:39 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:25:39 np0005625203.localdomain podman[237920]: 2026-02-20 09:25:39.408505327 +0000 UTC m=+0.148053338 container init 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 20 09:25:39 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:25:39 np0005625203.localdomain podman[237920]: 2026-02-20 09:25:39.441010759 +0000 UTC m=+0.180558770 container start 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 20 09:25:39 np0005625203.localdomain podman[237920]: node_exporter
Feb 20 09:25:39 np0005625203.localdomain node_exporter[237935]: ts=2026-02-20T09:25:39.441Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Feb 20 09:25:39 np0005625203.localdomain node_exporter[237935]: ts=2026-02-20T09:25:39.442Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Feb 20 09:25:39 np0005625203.localdomain node_exporter[237935]: ts=2026-02-20T09:25:39.442Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Feb 20 09:25:39 np0005625203.localdomain node_exporter[237935]: ts=2026-02-20T09:25:39.443Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Feb 20 09:25:39 np0005625203.localdomain node_exporter[237935]: ts=2026-02-20T09:25:39.443Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Feb 20 09:25:39 np0005625203.localdomain node_exporter[237935]: ts=2026-02-20T09:25:39.444Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Feb 20 09:25:39 np0005625203.localdomain node_exporter[237935]: ts=2026-02-20T09:25:39.444Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Feb 20 09:25:39 np0005625203.localdomain node_exporter[237935]: ts=2026-02-20T09:25:39.444Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Feb 20 09:25:39 np0005625203.localdomain node_exporter[237935]: ts=2026-02-20T09:25:39.444Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Feb 20 09:25:39 np0005625203.localdomain node_exporter[237935]: ts=2026-02-20T09:25:39.445Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Feb 20 09:25:39 np0005625203.localdomain node_exporter[237935]: ts=2026-02-20T09:25:39.445Z caller=node_exporter.go:117 level=info collector=arp
Feb 20 09:25:39 np0005625203.localdomain node_exporter[237935]: ts=2026-02-20T09:25:39.445Z caller=node_exporter.go:117 level=info collector=bcache
Feb 20 09:25:39 np0005625203.localdomain node_exporter[237935]: ts=2026-02-20T09:25:39.445Z caller=node_exporter.go:117 level=info collector=bonding
Feb 20 09:25:39 np0005625203.localdomain node_exporter[237935]: ts=2026-02-20T09:25:39.445Z caller=node_exporter.go:117 level=info collector=btrfs
Feb 20 09:25:39 np0005625203.localdomain node_exporter[237935]: ts=2026-02-20T09:25:39.445Z caller=node_exporter.go:117 level=info collector=conntrack
Feb 20 09:25:39 np0005625203.localdomain node_exporter[237935]: ts=2026-02-20T09:25:39.445Z caller=node_exporter.go:117 level=info collector=cpu
Feb 20 09:25:39 np0005625203.localdomain node_exporter[237935]: ts=2026-02-20T09:25:39.445Z caller=node_exporter.go:117 level=info collector=cpufreq
Feb 20 09:25:39 np0005625203.localdomain node_exporter[237935]: ts=2026-02-20T09:25:39.445Z caller=node_exporter.go:117 level=info collector=diskstats
Feb 20 09:25:39 np0005625203.localdomain node_exporter[237935]: ts=2026-02-20T09:25:39.445Z caller=node_exporter.go:117 level=info collector=edac
Feb 20 09:25:39 np0005625203.localdomain node_exporter[237935]: ts=2026-02-20T09:25:39.445Z caller=node_exporter.go:117 level=info collector=fibrechannel
Feb 20 09:25:39 np0005625203.localdomain node_exporter[237935]: ts=2026-02-20T09:25:39.445Z caller=node_exporter.go:117 level=info collector=filefd
Feb 20 09:25:39 np0005625203.localdomain node_exporter[237935]: ts=2026-02-20T09:25:39.445Z caller=node_exporter.go:117 level=info collector=filesystem
Feb 20 09:25:39 np0005625203.localdomain node_exporter[237935]: ts=2026-02-20T09:25:39.445Z caller=node_exporter.go:117 level=info collector=infiniband
Feb 20 09:25:39 np0005625203.localdomain node_exporter[237935]: ts=2026-02-20T09:25:39.445Z caller=node_exporter.go:117 level=info collector=ipvs
Feb 20 09:25:39 np0005625203.localdomain node_exporter[237935]: ts=2026-02-20T09:25:39.445Z caller=node_exporter.go:117 level=info collector=loadavg
Feb 20 09:25:39 np0005625203.localdomain node_exporter[237935]: ts=2026-02-20T09:25:39.445Z caller=node_exporter.go:117 level=info collector=mdadm
Feb 20 09:25:39 np0005625203.localdomain node_exporter[237935]: ts=2026-02-20T09:25:39.445Z caller=node_exporter.go:117 level=info collector=meminfo
Feb 20 09:25:39 np0005625203.localdomain node_exporter[237935]: ts=2026-02-20T09:25:39.445Z caller=node_exporter.go:117 level=info collector=netclass
Feb 20 09:25:39 np0005625203.localdomain node_exporter[237935]: ts=2026-02-20T09:25:39.445Z caller=node_exporter.go:117 level=info collector=netdev
Feb 20 09:25:39 np0005625203.localdomain node_exporter[237935]: ts=2026-02-20T09:25:39.445Z caller=node_exporter.go:117 level=info collector=netstat
Feb 20 09:25:39 np0005625203.localdomain node_exporter[237935]: ts=2026-02-20T09:25:39.445Z caller=node_exporter.go:117 level=info collector=nfs
Feb 20 09:25:39 np0005625203.localdomain node_exporter[237935]: ts=2026-02-20T09:25:39.445Z caller=node_exporter.go:117 level=info collector=nfsd
Feb 20 09:25:39 np0005625203.localdomain node_exporter[237935]: ts=2026-02-20T09:25:39.445Z caller=node_exporter.go:117 level=info collector=nvme
Feb 20 09:25:39 np0005625203.localdomain node_exporter[237935]: ts=2026-02-20T09:25:39.445Z caller=node_exporter.go:117 level=info collector=schedstat
Feb 20 09:25:39 np0005625203.localdomain node_exporter[237935]: ts=2026-02-20T09:25:39.445Z caller=node_exporter.go:117 level=info collector=sockstat
Feb 20 09:25:39 np0005625203.localdomain node_exporter[237935]: ts=2026-02-20T09:25:39.445Z caller=node_exporter.go:117 level=info collector=softnet
Feb 20 09:25:39 np0005625203.localdomain node_exporter[237935]: ts=2026-02-20T09:25:39.445Z caller=node_exporter.go:117 level=info collector=systemd
Feb 20 09:25:39 np0005625203.localdomain node_exporter[237935]: ts=2026-02-20T09:25:39.445Z caller=node_exporter.go:117 level=info collector=tapestats
Feb 20 09:25:39 np0005625203.localdomain node_exporter[237935]: ts=2026-02-20T09:25:39.445Z caller=node_exporter.go:117 level=info collector=udp_queues
Feb 20 09:25:39 np0005625203.localdomain node_exporter[237935]: ts=2026-02-20T09:25:39.445Z caller=node_exporter.go:117 level=info collector=vmstat
Feb 20 09:25:39 np0005625203.localdomain node_exporter[237935]: ts=2026-02-20T09:25:39.445Z caller=node_exporter.go:117 level=info collector=xfs
Feb 20 09:25:39 np0005625203.localdomain node_exporter[237935]: ts=2026-02-20T09:25:39.445Z caller=node_exporter.go:117 level=info collector=zfs
Feb 20 09:25:39 np0005625203.localdomain node_exporter[237935]: ts=2026-02-20T09:25:39.446Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Feb 20 09:25:39 np0005625203.localdomain node_exporter[237935]: ts=2026-02-20T09:25:39.447Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=[::]:9100
Feb 20 09:25:39 np0005625203.localdomain systemd[1]: Started node_exporter container.
Feb 20 09:25:39 np0005625203.localdomain sudo[237877]: pam_unix(sudo:session): session closed for user root
Feb 20 09:25:39 np0005625203.localdomain podman[237944]: 2026-02-20 09:25:39.52299175 +0000 UTC m=+0.075818534 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=starting, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 20 09:25:39 np0005625203.localdomain podman[237944]: 2026-02-20 09:25:39.534438459 +0000 UTC m=+0.087265263 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 20 09:25:39 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:25:40 np0005625203.localdomain python3.9[238076]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb 20 09:25:41 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=47570 DF PROTO=TCP SPT=49410 DPT=9105 SEQ=3444906268 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0182800000000001030307) 
Feb 20 09:25:41 np0005625203.localdomain sudo[238184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yotjvncbvdiqprumsopdokcbsfksvlgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579541.126456-1984-42176912952953/AnsiballZ_stat.py
Feb 20 09:25:41 np0005625203.localdomain sudo[238184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:25:41 np0005625203.localdomain python3.9[238186]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:25:41 np0005625203.localdomain sudo[238184]: pam_unix(sudo:session): session closed for user root
Feb 20 09:25:41 np0005625203.localdomain sudo[238274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-orogexvsysmbdxyvskujizmqhmghfnpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579541.126456-1984-42176912952953/AnsiballZ_copy.py
Feb 20 09:25:41 np0005625203.localdomain sudo[238274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:25:42 np0005625203.localdomain python3.9[238276]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771579541.126456-1984-42176912952953/.source.yaml _original_basename=.4hibte3p follow=False checksum=18a0b6e78403f3ab12e5e8e6e71bc7fd62c02b34 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:25:42 np0005625203.localdomain sudo[238274]: pam_unix(sudo:session): session closed for user root
Feb 20 09:25:42 np0005625203.localdomain sudo[238384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ldxkkjqnvrrgjpsdauzvtzmexmrklsmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579542.4029448-2029-182275228767525/AnsiballZ_stat.py
Feb 20 09:25:42 np0005625203.localdomain sudo[238384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:25:42 np0005625203.localdomain python3.9[238386]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/podman_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:25:42 np0005625203.localdomain sudo[238384]: pam_unix(sudo:session): session closed for user root
Feb 20 09:25:43 np0005625203.localdomain sudo[238472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ipcqravlbilszjefmtwozlihqgfxqgbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579542.4029448-2029-182275228767525/AnsiballZ_copy.py
Feb 20 09:25:43 np0005625203.localdomain sudo[238472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:25:44 np0005625203.localdomain python3.9[238474]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/podman_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771579542.4029448-2029-182275228767525/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:25:44 np0005625203.localdomain sudo[238472]: pam_unix(sudo:session): session closed for user root
Feb 20 09:25:44 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39544 DF PROTO=TCP SPT=53238 DPT=9105 SEQ=1193739873 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA018D800000000001030307) 
Feb 20 09:25:45 np0005625203.localdomain sudo[238582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xknnfxkuatrhzdcsxguytbmtecoluvcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579544.7720854-2092-180656288380076/AnsiballZ_file.py
Feb 20 09:25:45 np0005625203.localdomain sudo[238582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:25:45 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:25:45 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:25:45 np0005625203.localdomain systemd[1]: tmp-crun.RxshHO.mount: Deactivated successfully.
Feb 20 09:25:45 np0005625203.localdomain podman[238585]: 2026-02-20 09:25:45.728500906 +0000 UTC m=+0.093238878 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:25:45 np0005625203.localdomain podman[238586]: 2026-02-20 09:25:45.770205281 +0000 UTC m=+0.132759969 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, org.label-schema.schema-version=1.0)
Feb 20 09:25:45 np0005625203.localdomain podman[238585]: 2026-02-20 09:25:45.799467003 +0000 UTC m=+0.164204975 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:25:45 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:25:45 np0005625203.localdomain python3.9[238584]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:25:45 np0005625203.localdomain podman[238586]: 2026-02-20 09:25:45.849700697 +0000 UTC m=+0.212255385 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 20 09:25:45 np0005625203.localdomain sudo[238582]: pam_unix(sudo:session): session closed for user root
Feb 20 09:25:45 np0005625203.localdomain sshd[238628]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:25:45 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:25:46 np0005625203.localdomain sudo[238737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jueqsjumyngjeaevmfffqnezyoyopyvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579546.044908-2116-258651777433361/AnsiballZ_file.py
Feb 20 09:25:46 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:25:46 np0005625203.localdomain sudo[238737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:25:46 np0005625203.localdomain podman[238739]: 2026-02-20 09:25:46.41536419 +0000 UTC m=+0.097235544 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=starting, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 20 09:25:46 np0005625203.localdomain podman[238739]: 2026-02-20 09:25:46.44830524 +0000 UTC m=+0.130176594 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Feb 20 09:25:46 np0005625203.localdomain podman[238739]: unhealthy
Feb 20 09:25:46 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 09:25:46 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Failed with result 'exit-code'.
Feb 20 09:25:46 np0005625203.localdomain python3.9[238740]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:25:46 np0005625203.localdomain sudo[238737]: pam_unix(sudo:session): session closed for user root
Feb 20 09:25:46 np0005625203.localdomain sshd[238628]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:25:47 np0005625203.localdomain sudo[238864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rvhhvxujxpectkdubnvtyjzqgqhfgmzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579546.744834-2140-35318570720682/AnsiballZ_stat.py
Feb 20 09:25:47 np0005625203.localdomain sudo[238864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:25:47 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=13658 DF PROTO=TCP SPT=57768 DPT=9101 SEQ=1313951756 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0198C00000000001030307) 
Feb 20 09:25:47 np0005625203.localdomain python3.9[238866]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:25:47 np0005625203.localdomain sudo[238864]: pam_unix(sudo:session): session closed for user root
Feb 20 09:25:47 np0005625203.localdomain sudo[238921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lprcjytnnxhojvrjbxmrkrnvjtsnxwqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579546.744834-2140-35318570720682/AnsiballZ_file.py
Feb 20 09:25:47 np0005625203.localdomain sudo[238921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:25:47 np0005625203.localdomain python3.9[238923]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/ceilometer_agent_compute.json _original_basename=.hpkh6iuj recurse=False state=file path=/var/lib/kolla/config_files/ceilometer_agent_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:25:47 np0005625203.localdomain sudo[238921]: pam_unix(sudo:session): session closed for user root
Feb 20 09:25:48 np0005625203.localdomain python3.9[239031]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/podman_exporter state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:25:50 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37114 DF PROTO=TCP SPT=42424 DPT=9101 SEQ=4024048267 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA01A4800000000001030307) 
Feb 20 09:25:50 np0005625203.localdomain sudo[239333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-exqzdrhkbreezibyadqbrgoupeqprazs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579550.0616298-2251-46823879490105/AnsiballZ_container_config_data.py
Feb 20 09:25:50 np0005625203.localdomain sudo[239333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:25:50 np0005625203.localdomain python3.9[239335]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/podman_exporter config_pattern=*.json debug=False
Feb 20 09:25:50 np0005625203.localdomain sudo[239333]: pam_unix(sudo:session): session closed for user root
Feb 20 09:25:51 np0005625203.localdomain sudo[239443]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zvdqglcpyaqdvwmhokyguecqoqrauanj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579551.0502868-2284-249511924993711/AnsiballZ_container_config_hash.py
Feb 20 09:25:51 np0005625203.localdomain sudo[239443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:25:51 np0005625203.localdomain python3.9[239445]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb 20 09:25:51 np0005625203.localdomain sudo[239443]: pam_unix(sudo:session): session closed for user root
Feb 20 09:25:52 np0005625203.localdomain sudo[239553]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-apgrmfiqsnyyecqjkkdystrqcxxpjupk ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771579551.9943576-2314-76766671386700/AnsiballZ_edpm_container_manage.py
Feb 20 09:25:52 np0005625203.localdomain sudo[239553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:25:52 np0005625203.localdomain python3[239555]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/podman_exporter config_id=podman_exporter config_overrides={} config_patterns=*.json containers=['podman_exporter'] log_base_path=/var/log/containers/stdouts debug=False
Feb 20 09:25:53 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31131 DF PROTO=TCP SPT=43626 DPT=9100 SEQ=1600238134 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA01B0810000000001030307) 
Feb 20 09:25:53 np0005625203.localdomain sudo[239582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:25:53 np0005625203.localdomain sudo[239582]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:25:53 np0005625203.localdomain sudo[239582]: pam_unix(sudo:session): session closed for user root
Feb 20 09:25:53 np0005625203.localdomain sudo[239611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 09:25:53 np0005625203.localdomain sudo[239611]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:25:54 np0005625203.localdomain podman[239569]: 2026-02-20 09:25:52.640790812 +0000 UTC m=+0.045715464 image pull  quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd
Feb 20 09:25:54 np0005625203.localdomain podman[239695]: 
Feb 20 09:25:54 np0005625203.localdomain podman[239695]: 2026-02-20 09:25:54.285031313 +0000 UTC m=+0.072980069 container create 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, config_id=podman_exporter, container_name=podman_exporter)
Feb 20 09:25:54 np0005625203.localdomain podman[239695]: 2026-02-20 09:25:54.248106606 +0000 UTC m=+0.036055382 image pull  quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd
Feb 20 09:25:54 np0005625203.localdomain python3[239555]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name podman_exporter --conmon-pidfile /run/podman_exporter.pid --env CONTAINER_HOST=unix:///run/podman/podman.sock --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck podman_exporter --label config_id=podman_exporter --label container_name=podman_exporter --label managed_by=edpm_ansible --label config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9882:9882 --user root --volume /run/podman/podman.sock:/run/podman/podman.sock:rw,z --volume /var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd
Feb 20 09:25:54 np0005625203.localdomain sudo[239611]: pam_unix(sudo:session): session closed for user root
Feb 20 09:25:54 np0005625203.localdomain sudo[239553]: pam_unix(sudo:session): session closed for user root
Feb 20 09:25:55 np0005625203.localdomain sudo[239858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-foorjvakwyqlduukinhkwebfdrcqxwon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579554.745688-2338-267909024761995/AnsiballZ_stat.py
Feb 20 09:25:55 np0005625203.localdomain sudo[239858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:25:55 np0005625203.localdomain sudo[239846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:25:55 np0005625203.localdomain sudo[239846]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:25:55 np0005625203.localdomain sudo[239846]: pam_unix(sudo:session): session closed for user root
Feb 20 09:25:55 np0005625203.localdomain python3.9[239870]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:25:55 np0005625203.localdomain sudo[239858]: pam_unix(sudo:session): session closed for user root
Feb 20 09:25:56 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49196 DF PROTO=TCP SPT=41062 DPT=9100 SEQ=84278636 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA01BC800000000001030307) 
Feb 20 09:25:56 np0005625203.localdomain sudo[239981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jinnbxjrakrcucpwyhnmctlvrqcwivcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579556.09717-2365-122270457736375/AnsiballZ_file.py
Feb 20 09:25:56 np0005625203.localdomain sudo[239981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:25:56 np0005625203.localdomain python3.9[239983]: ansible-file Invoked with path=/etc/systemd/system/edpm_podman_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:25:56 np0005625203.localdomain sudo[239981]: pam_unix(sudo:session): session closed for user root
Feb 20 09:25:56 np0005625203.localdomain sudo[240036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-akfhaabxxzeifthceikymqtrpjaieoyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579556.09717-2365-122270457736375/AnsiballZ_stat.py
Feb 20 09:25:56 np0005625203.localdomain sudo[240036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:25:56 np0005625203.localdomain python3.9[240038]: ansible-stat Invoked with path=/etc/systemd/system/edpm_podman_exporter_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:25:57 np0005625203.localdomain sudo[240036]: pam_unix(sudo:session): session closed for user root
Feb 20 09:25:57 np0005625203.localdomain sudo[240145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sxwoxjgpppavtcxszwyuetuxokcbnwmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579557.055368-2365-97151711121505/AnsiballZ_copy.py
Feb 20 09:25:57 np0005625203.localdomain sudo[240145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:25:57 np0005625203.localdomain python3.9[240147]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771579557.055368-2365-97151711121505/source dest=/etc/systemd/system/edpm_podman_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:25:57 np0005625203.localdomain sudo[240145]: pam_unix(sudo:session): session closed for user root
Feb 20 09:25:57 np0005625203.localdomain sudo[240200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ajmuxzrgklldilbwwxzcmtwvixtwttfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579557.055368-2365-97151711121505/AnsiballZ_systemd.py
Feb 20 09:25:57 np0005625203.localdomain sudo[240200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:25:58 np0005625203.localdomain python3.9[240202]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 20 09:25:58 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:25:58 np0005625203.localdomain systemd-rc-local-generator[240230]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:25:58 np0005625203.localdomain systemd-sysv-generator[240233]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:25:58 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:25:58 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 09:25:58 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:25:58 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:25:58 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:25:58 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 09:25:58 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:25:58 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:25:58 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:25:58 np0005625203.localdomain sudo[240200]: pam_unix(sudo:session): session closed for user root
Feb 20 09:25:58 np0005625203.localdomain sudo[240291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qeblcomcsmwsekeuwjtjlcsbewcvechi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579557.055368-2365-97151711121505/AnsiballZ_systemd.py
Feb 20 09:25:58 np0005625203.localdomain sudo[240291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:25:59 np0005625203.localdomain python3.9[240293]: ansible-systemd Invoked with state=restarted name=edpm_podman_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:25:59 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:25:59 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=50287 DF PROTO=TCP SPT=59268 DPT=9102 SEQ=3643554212 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA01C8A90000000001030307) 
Feb 20 09:25:59 np0005625203.localdomain systemd-sysv-generator[240323]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:25:59 np0005625203.localdomain systemd-rc-local-generator[240318]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:25:59 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:25:59 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 09:25:59 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:25:59 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:25:59 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:25:59 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 09:25:59 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:25:59 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:25:59 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:25:59 np0005625203.localdomain systemd[1]: Starting podman_exporter container...
Feb 20 09:25:59 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:25:59 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:25:59 np0005625203.localdomain podman[240334]: 2026-02-20 09:25:59.728045224 +0000 UTC m=+0.145422877 container init 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 20 09:25:59 np0005625203.localdomain podman_exporter[240348]: ts=2026-02-20T09:25:59.746Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Feb 20 09:25:59 np0005625203.localdomain podman_exporter[240348]: ts=2026-02-20T09:25:59.747Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Feb 20 09:25:59 np0005625203.localdomain podman_exporter[240348]: ts=2026-02-20T09:25:59.747Z caller=handler.go:94 level=info msg="enabled collectors"
Feb 20 09:25:59 np0005625203.localdomain podman_exporter[240348]: ts=2026-02-20T09:25:59.747Z caller=handler.go:105 level=info collector=container
Feb 20 09:25:59 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:25:59 np0005625203.localdomain podman[240334]: 2026-02-20 09:25:59.762432927 +0000 UTC m=+0.179810570 container start 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 09:25:59 np0005625203.localdomain podman[240334]: podman_exporter
Feb 20 09:25:59 np0005625203.localdomain systemd[1]: Starting Podman API Service...
Feb 20 09:25:59 np0005625203.localdomain systemd[1]: Started podman_exporter container.
Feb 20 09:25:59 np0005625203.localdomain systemd[1]: Started Podman API Service.
Feb 20 09:25:59 np0005625203.localdomain sudo[240291]: pam_unix(sudo:session): session closed for user root
Feb 20 09:25:59 np0005625203.localdomain podman[240359]: time="2026-02-20T09:25:59Z" level=info msg="/usr/bin/podman filtering at log level info"
Feb 20 09:25:59 np0005625203.localdomain podman[240359]: time="2026-02-20T09:25:59Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Feb 20 09:25:59 np0005625203.localdomain podman[240359]: time="2026-02-20T09:25:59Z" level=info msg="Setting parallel job count to 25"
Feb 20 09:25:59 np0005625203.localdomain podman[240359]: time="2026-02-20T09:25:59Z" level=info msg="Using systemd socket activation to determine API endpoint"
Feb 20 09:25:59 np0005625203.localdomain podman[240359]: time="2026-02-20T09:25:59Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"/run/podman/podman.sock\""
Feb 20 09:25:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:25:59 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Feb 20 09:25:59 np0005625203.localdomain podman[240359]: time="2026-02-20T09:25:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:25:59 np0005625203.localdomain podman[240358]: 2026-02-20 09:25:59.86242668 +0000 UTC m=+0.094344750 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=starting, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 20 09:25:59 np0005625203.localdomain podman[240358]: 2026-02-20 09:25:59.874912404 +0000 UTC m=+0.106830474 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 20 09:25:59 np0005625203.localdomain podman[240358]: unhealthy
Feb 20 09:25:59 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 09:25:59 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Failed with result 'exit-code'.
Feb 20 09:26:00 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-ba9090487930ea4ca9efbb869a950be47d0c5c3f7a5f6eb919ee0be5f322c2ce-merged.mount: Deactivated successfully.
Feb 20 09:26:01 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:26:01 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:26:01 np0005625203.localdomain python3.9[240503]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb 20 09:26:01 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:26:01 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:26:02 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56862 DF PROTO=TCP SPT=48320 DPT=9882 SEQ=100522494 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA01D3800000000001030307) 
Feb 20 09:26:02 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-ba9090487930ea4ca9efbb869a950be47d0c5c3f7a5f6eb919ee0be5f322c2ce-merged.mount: Deactivated successfully.
Feb 20 09:26:02 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-e0b96efd9497779e5b269af474ffc26b10023ac5cdee3873df81b8013dae2e66-merged.mount: Deactivated successfully.
Feb 20 09:26:02 np0005625203.localdomain sudo[240611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lpkmbcdrhtugqnklcnjcncrvpvqdfcbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579562.4351783-2500-279014397091331/AnsiballZ_stat.py
Feb 20 09:26:02 np0005625203.localdomain sudo[240611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:26:02 np0005625203.localdomain python3.9[240613]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:26:02 np0005625203.localdomain sudo[240611]: pam_unix(sudo:session): session closed for user root
Feb 20 09:26:03 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:26:03 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-ac04412f5c5a43e8c61c2b8d6c1acf66f67fc19f0d028526d9bdbd1ed0352faf-merged.mount: Deactivated successfully.
Feb 20 09:26:03 np0005625203.localdomain sudo[240701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qkfnxkbtlqvygtogpybaczovffakoqck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579562.4351783-2500-279014397091331/AnsiballZ_copy.py
Feb 20 09:26:03 np0005625203.localdomain sudo[240701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:26:03 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-ac04412f5c5a43e8c61c2b8d6c1acf66f67fc19f0d028526d9bdbd1ed0352faf-merged.mount: Deactivated successfully.
Feb 20 09:26:03 np0005625203.localdomain python3.9[240703]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771579562.4351783-2500-279014397091331/.source.yaml _original_basename=.9uxp1ues follow=False checksum=dae36056a950a4131d7691afd655cacfc03f4930 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:26:03 np0005625203.localdomain sudo[240701]: pam_unix(sudo:session): session closed for user root
Feb 20 09:26:03 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:26:03 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:26:03 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:26:04 np0005625203.localdomain sudo[240811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cnnzqcodezrowpfzlyjncxpdusouosxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579563.7453754-2545-241332139596060/AnsiballZ_stat.py
Feb 20 09:26:04 np0005625203.localdomain sudo[240811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:26:04 np0005625203.localdomain python3.9[240813]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/openstack_network_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:26:04 np0005625203.localdomain sudo[240811]: pam_unix(sudo:session): session closed for user root
Feb 20 09:26:04 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:26:05 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-ac04412f5c5a43e8c61c2b8d6c1acf66f67fc19f0d028526d9bdbd1ed0352faf-merged.mount: Deactivated successfully.
Feb 20 09:26:05 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-d62367a3a75d40fe308c9a1ba2227d9d21236494de6c00d0d8f446eab81f58cb-merged.mount: Deactivated successfully.
Feb 20 09:26:05 np0005625203.localdomain sudo[240899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qxeknicikdnsqqozgsppejbchqwdllbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579563.7453754-2545-241332139596060/AnsiballZ_copy.py
Feb 20 09:26:05 np0005625203.localdomain sudo[240899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:26:05 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-d62367a3a75d40fe308c9a1ba2227d9d21236494de6c00d0d8f446eab81f58cb-merged.mount: Deactivated successfully.
Feb 20 09:26:05 np0005625203.localdomain python3.9[240901]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/openstack_network_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771579563.7453754-2545-241332139596060/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:26:05 np0005625203.localdomain sudo[240899]: pam_unix(sudo:session): session closed for user root
Feb 20 09:26:05 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-e0c81d46f937f1f84faf68fb71f862e4bd868921a7d16384d44790308d98719f-merged.mount: Deactivated successfully.
Feb 20 09:26:05 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-8194fb55613f783d5a49e05926ed565eee5321a07b6e20f485946b5c4f31cab4-merged.mount: Deactivated successfully.
Feb 20 09:26:06 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56863 DF PROTO=TCP SPT=48320 DPT=9882 SEQ=100522494 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA01E3400000000001030307) 
Feb 20 09:26:06 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-8194fb55613f783d5a49e05926ed565eee5321a07b6e20f485946b5c4f31cab4-merged.mount: Deactivated successfully.
Feb 20 09:26:06 np0005625203.localdomain sudo[241009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fnqwrzabmczbbwqbcienbfylkbqfplkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579566.0578887-2608-172511537288019/AnsiballZ_file.py
Feb 20 09:26:06 np0005625203.localdomain sudo[241009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:26:06 np0005625203.localdomain python3.9[241011]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:26:06 np0005625203.localdomain sudo[241009]: pam_unix(sudo:session): session closed for user root
Feb 20 09:26:07 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-8194fb55613f783d5a49e05926ed565eee5321a07b6e20f485946b5c4f31cab4-merged.mount: Deactivated successfully.
Feb 20 09:26:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:26:07.638 161112 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:26:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:26:07.639 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:26:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:26:07.639 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:26:07 np0005625203.localdomain sudo[241119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rinsrditfdqkzumpkujbysemusorkwia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579567.590854-2632-199738569565351/AnsiballZ_file.py
Feb 20 09:26:07 np0005625203.localdomain sudo[241119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:26:08 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:26:08 np0005625203.localdomain python3.9[241121]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:26:08 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-530e571f2fdc2c9cc9ab61d58bf266b4766d3c3aa17392b07069c5b092adeb06-merged.mount: Deactivated successfully.
Feb 20 09:26:08 np0005625203.localdomain sudo[241119]: pam_unix(sudo:session): session closed for user root
Feb 20 09:26:08 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16862 DF PROTO=TCP SPT=58260 DPT=9105 SEQ=519156844 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA01EB000000000001030307) 
Feb 20 09:26:08 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-530e571f2fdc2c9cc9ab61d58bf266b4766d3c3aa17392b07069c5b092adeb06-merged.mount: Deactivated successfully.
Feb 20 09:26:08 np0005625203.localdomain sudo[241229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kyakcwfjnegjkpyuhyernjcvdmktqbpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579568.3083553-2656-273774812968399/AnsiballZ_stat.py
Feb 20 09:26:08 np0005625203.localdomain sudo[241229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:26:08 np0005625203.localdomain python3.9[241231]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:26:08 np0005625203.localdomain sudo[241229]: pam_unix(sudo:session): session closed for user root
Feb 20 09:26:09 np0005625203.localdomain sudo[241286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zcrcyydlhpsjluajdbjivsblstrhdrtn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579568.3083553-2656-273774812968399/AnsiballZ_file.py
Feb 20 09:26:09 np0005625203.localdomain sudo[241286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:26:09 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:26:09 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:26:09 np0005625203.localdomain python3.9[241288]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/ceilometer_agent_compute.json _original_basename=.5kko28em recurse=False state=file path=/var/lib/kolla/config_files/ceilometer_agent_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:26:09 np0005625203.localdomain sudo[241286]: pam_unix(sudo:session): session closed for user root
Feb 20 09:26:09 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:26:09 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:26:09 np0005625203.localdomain podman[241368]: 2026-02-20 09:26:09.769267983 +0000 UTC m=+0.085415280 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 20 09:26:09 np0005625203.localdomain podman[241368]: 2026-02-20 09:26:09.803175821 +0000 UTC m=+0.119323098 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 20 09:26:09 np0005625203.localdomain python3.9[241410]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/openstack_network_exporter state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:26:10 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-530e571f2fdc2c9cc9ab61d58bf266b4766d3c3aa17392b07069c5b092adeb06-merged.mount: Deactivated successfully.
Feb 20 09:26:10 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-8516f80a305bf91ea394ab1c0000d8e781a49a1e417cb98413c75dee5cfb0223-merged.mount: Deactivated successfully.
Feb 20 09:26:10 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-8516f80a305bf91ea394ab1c0000d8e781a49a1e417cb98413c75dee5cfb0223-merged.mount: Deactivated successfully.
Feb 20 09:26:10 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:26:11 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16511 DF PROTO=TCP SPT=41482 DPT=9105 SEQ=1702601889 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA01F6810000000001030307) 
Feb 20 09:26:12 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully.
Feb 20 09:26:12 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully.
Feb 20 09:26:12 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully.
Feb 20 09:26:14 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16864 DF PROTO=TCP SPT=58260 DPT=9105 SEQ=519156844 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0202C10000000001030307) 
Feb 20 09:26:14 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:26:14 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully.
Feb 20 09:26:14 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:26:14.522 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:26:14 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:26:14.543 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:26:14 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:26:14.543 228941 DEBUG nova.compute.manager [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 20 09:26:14 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:26:14.543 228941 DEBUG nova.compute.manager [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 20 09:26:14 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:26:14.563 228941 DEBUG nova.compute.manager [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 20 09:26:14 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:26:14.563 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:26:14 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:26:14.563 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:26:14 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:26:14.564 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:26:14 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:26:14.564 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:26:14 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:26:14.565 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:26:14 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully.
Feb 20 09:26:14 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:26:14.590 228941 DEBUG oslo_concurrency.lockutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:26:14 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:26:14.591 228941 DEBUG oslo_concurrency.lockutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:26:14 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:26:14.591 228941 DEBUG oslo_concurrency.lockutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:26:14 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:26:14.591 228941 DEBUG nova.compute.resource_tracker [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Auditing locally available compute resources for np0005625203.localdomain (node: np0005625203.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 20 09:26:14 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:26:14.592 228941 DEBUG oslo_concurrency.processutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:26:15 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:26:15.044 228941 DEBUG oslo_concurrency.processutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:26:15 np0005625203.localdomain sudo[241744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cgehisrqmgixzasqtaqrrfdpebdcyhyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579574.8969045-2767-191412055890299/AnsiballZ_container_config_data.py
Feb 20 09:26:15 np0005625203.localdomain sudo[241744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:26:15 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:26:15.255 228941 WARNING nova.virt.libvirt.driver [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 20 09:26:15 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:26:15.257 228941 DEBUG nova.compute.resource_tracker [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Hypervisor/Node resource view: name=np0005625203.localdomain free_ram=13350MB free_disk=41.83720779418945GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 20 09:26:15 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:26:15.257 228941 DEBUG oslo_concurrency.lockutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:26:15 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:26:15.258 228941 DEBUG oslo_concurrency.lockutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:26:15 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:26:15.330 228941 DEBUG nova.compute.resource_tracker [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 20 09:26:15 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:26:15.330 228941 DEBUG nova.compute.resource_tracker [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Final resource view: name=np0005625203.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 20 09:26:15 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:26:15.355 228941 DEBUG oslo_concurrency.processutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:26:15 np0005625203.localdomain python3.9[241746]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/openstack_network_exporter config_pattern=*.json debug=False
Feb 20 09:26:15 np0005625203.localdomain sudo[241744]: pam_unix(sudo:session): session closed for user root
Feb 20 09:26:15 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:26:15 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:26:15 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:26:15 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:26:15.810 228941 DEBUG oslo_concurrency.processutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:26:15 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:26:15.818 228941 DEBUG nova.compute.provider_tree [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Inventory has not changed in ProviderTree for provider: e5d5157a-2df2-4f51-b5fb-cd2da3a8584e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 20 09:26:15 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:26:15.831 228941 DEBUG nova.scheduler.client.report [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Inventory has not changed for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 20 09:26:15 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:26:15.834 228941 DEBUG nova.compute.resource_tracker [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Compute_service record updated for np0005625203.localdomain:np0005625203.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 20 09:26:15 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:26:15.835 228941 DEBUG oslo_concurrency.lockutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.577s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:26:16 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:26:16 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:26:16 np0005625203.localdomain sudo[241888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cqagsrhvrtyvsfocnakqdtwzfeufajjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579576.1838167-2800-74515193773764/AnsiballZ_container_config_hash.py
Feb 20 09:26:16 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:26:16.469 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:26:16 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:26:16.470 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:26:16 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:26:16.470 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:26:16 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:26:16.470 228941 DEBUG nova.compute.manager [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 20 09:26:16 np0005625203.localdomain sudo[241888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:26:16 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:26:16 np0005625203.localdomain podman[241871]: 2026-02-20 09:26:16.520840955 +0000 UTC m=+0.089974752 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 20 09:26:16 np0005625203.localdomain podman[241871]: 2026-02-20 09:26:16.525961515 +0000 UTC m=+0.095095312 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:26:16 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:26:16 np0005625203.localdomain python3.9[241900]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb 20 09:26:16 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:26:16 np0005625203.localdomain sudo[241888]: pam_unix(sudo:session): session closed for user root
Feb 20 09:26:16 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:26:16 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:26:16 np0005625203.localdomain podman[241899]: 2026-02-20 09:26:16.863375966 +0000 UTC m=+0.367388026 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=starting, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 20 09:26:16 np0005625203.localdomain podman[241899]: 2026-02-20 09:26:16.892714841 +0000 UTC m=+0.396726901 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible)
Feb 20 09:26:16 np0005625203.localdomain podman[241899]: unhealthy
Feb 20 09:26:16 np0005625203.localdomain podman[241872]: 2026-02-20 09:26:16.911736645 +0000 UTC m=+0.477529435 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 20 09:26:16 np0005625203.localdomain podman[241872]: 2026-02-20 09:26:16.97434197 +0000 UTC m=+0.540134730 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:26:16 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39387 DF PROTO=TCP SPT=57786 DPT=9101 SEQ=3472349963 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA020DC00000000001030307) 
Feb 20 09:26:17 np0005625203.localdomain sudo[242045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ewzmpxwcbhjrsnqriyrxoemnehsvatoh ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771579577.5877926-2830-101756865192648/AnsiballZ_edpm_container_manage.py
Feb 20 09:26:17 np0005625203.localdomain sudo[242045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:26:18 np0005625203.localdomain python3[242047]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/openstack_network_exporter config_id=openstack_network_exporter config_overrides={} config_patterns=*.json containers=['openstack_network_exporter'] log_base_path=/var/log/containers/stdouts debug=False
Feb 20 09:26:19 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully.
Feb 20 09:26:19 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-787c8f875f9698c41ffef92771343c3a6a6dfa6fc78d05b953e695d5e698036f-merged.mount: Deactivated successfully.
Feb 20 09:26:19 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-787c8f875f9698c41ffef92771343c3a6a6dfa6fc78d05b953e695d5e698036f-merged.mount: Deactivated successfully.
Feb 20 09:26:19 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 09:26:19 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Failed with result 'exit-code'.
Feb 20 09:26:19 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:26:20 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22232 DF PROTO=TCP SPT=52648 DPT=9100 SEQ=261089383 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA021A000000000001030307) 
Feb 20 09:26:21 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully.
Feb 20 09:26:21 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully.
Feb 20 09:26:21 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully.
Feb 20 09:26:23 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39389 DF PROTO=TCP SPT=57786 DPT=9101 SEQ=3472349963 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0225800000000001030307) 
Feb 20 09:26:23 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:26:23 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully.
Feb 20 09:26:24 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully.
Feb 20 09:26:25 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:26:25 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:26:25 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:26:26 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=22234 DF PROTO=TCP SPT=52648 DPT=9100 SEQ=261089383 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0231C00000000001030307) 
Feb 20 09:26:26 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:26:26 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:26:26 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:26:27 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:26:27 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:26:27 np0005625203.localdomain podman[242060]: 2026-02-20 09:26:21.97906375 +0000 UTC m=+0.048623507 image pull  quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c
Feb 20 09:26:29 np0005625203.localdomain sshd[242095]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:26:29 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7612 DF PROTO=TCP SPT=50044 DPT=9102 SEQ=3172588007 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA023DD50000000001030307) 
Feb 20 09:26:29 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully.
Feb 20 09:26:29 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0db72f640c06816f89c40aafbb6106c7b45c3882ced51667574b7ebe0628b583-merged.mount: Deactivated successfully.
Feb 20 09:26:29 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:26:30 np0005625203.localdomain sshd[242095]: Invalid user thomaz from 194.107.115.2 port 48246
Feb 20 09:26:30 np0005625203.localdomain sshd[242095]: Received disconnect from 194.107.115.2 port 48246:11: Bye Bye [preauth]
Feb 20 09:26:30 np0005625203.localdomain sshd[242095]: Disconnected from invalid user thomaz 194.107.115.2 port 48246 [preauth]
Feb 20 09:26:31 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-93889cfe8a1eaf916da420177b6c00eab0b6f1d6521b96229ee8963de2bbdb6f-merged.mount: Deactivated successfully.
Feb 20 09:26:31 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-2830780bc6d16943969f9158fd5036df60ccc26823e26fa259ba0accaa537c16-merged.mount: Deactivated successfully.
Feb 20 09:26:31 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-2830780bc6d16943969f9158fd5036df60ccc26823e26fa259ba0accaa537c16-merged.mount: Deactivated successfully.
Feb 20 09:26:31 np0005625203.localdomain podman[242103]: 2026-02-20 09:26:31.496025664 +0000 UTC m=+1.563488950 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=starting, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 09:26:31 np0005625203.localdomain podman[242103]: 2026-02-20 09:26:31.535330879 +0000 UTC m=+1.602794165 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 09:26:31 np0005625203.localdomain podman[242103]: unhealthy
Feb 20 09:26:32 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25247 DF PROTO=TCP SPT=42756 DPT=9882 SEQ=931794382 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0248C00000000001030307) 
Feb 20 09:26:32 np0005625203.localdomain sshd[242157]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:26:32 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:26:32 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-93889cfe8a1eaf916da420177b6c00eab0b6f1d6521b96229ee8963de2bbdb6f-merged.mount: Deactivated successfully.
Feb 20 09:26:32 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-93889cfe8a1eaf916da420177b6c00eab0b6f1d6521b96229ee8963de2bbdb6f-merged.mount: Deactivated successfully.
Feb 20 09:26:32 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 09:26:32 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Failed with result 'exit-code'.
Feb 20 09:26:33 np0005625203.localdomain sshd[242157]: Invalid user claude from 152.32.129.236 port 49974
Feb 20 09:26:33 np0005625203.localdomain sshd[242157]: Received disconnect from 152.32.129.236 port 49974:11: Bye Bye [preauth]
Feb 20 09:26:33 np0005625203.localdomain sshd[242157]: Disconnected from invalid user claude 152.32.129.236 port 49974 [preauth]
Feb 20 09:26:33 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:26:33 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:26:34 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:26:34 np0005625203.localdomain podman[242146]: 2026-02-20 09:26:32.94825482 +0000 UTC m=+1.421119441 image pull  quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c
Feb 20 09:26:34 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:26:34 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:26:34 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:26:34 np0005625203.localdomain podman[242146]: 
Feb 20 09:26:34 np0005625203.localdomain podman[242146]: 2026-02-20 09:26:34.73663957 +0000 UTC m=+3.209504101 container create dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, release=1770267347, container_name=openstack_network_exporter, distribution-scope=public, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.expose-services=, managed_by=edpm_ansible, version=9.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2026-02-05T04:57:10Z, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, org.opencontainers.image.created=2026-02-05T04:57:10Z, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Feb 20 09:26:35 np0005625203.localdomain python3[242047]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name openstack_network_exporter --conmon-pidfile /run/openstack_network_exporter.pid --env OPENSTACK_NETWORK_EXPORTER_YAML=/etc/openstack_network_exporter/openstack_network_exporter.yaml --env OS_ENDPOINT_TYPE=internal --env EDPM_CONFIG_HASH=823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683 --healthcheck-command /openstack/healthcheck openstack-netwo --label config_id=openstack_network_exporter --label container_name=openstack_network_exporter --label managed_by=edpm_ansible --label config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9105:9105 --volume /var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z --volume /var/run/openvswitch:/run/openvswitch:rw,z --volume /var/lib/openvswitch/ovn:/run/ovn:rw,z --volume /proc:/host/proc:ro --volume /var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c
Feb 20 09:26:36 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25248 DF PROTO=TCP SPT=42756 DPT=9882 SEQ=931794382 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0258800000000001030307) 
Feb 20 09:26:36 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-2830780bc6d16943969f9158fd5036df60ccc26823e26fa259ba0accaa537c16-merged.mount: Deactivated successfully.
Feb 20 09:26:36 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-18b2a38b03b01d01d2686bb7a91f0fbbb40a40764e34293cc7422e7526e11c3c-merged.mount: Deactivated successfully.
Feb 20 09:26:36 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-18b2a38b03b01d01d2686bb7a91f0fbbb40a40764e34293cc7422e7526e11c3c-merged.mount: Deactivated successfully.
Feb 20 09:26:38 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45847 DF PROTO=TCP SPT=52802 DPT=9105 SEQ=3821234106 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0260400000000001030307) 
Feb 20 09:26:38 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully.
Feb 20 09:26:39 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0e4bbbcc3a308b062cd809f5d981a575292a522b3c6697e4ac7d70789e33f207-merged.mount: Deactivated successfully.
Feb 20 09:26:39 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0e4bbbcc3a308b062cd809f5d981a575292a522b3c6697e4ac7d70789e33f207-merged.mount: Deactivated successfully.
Feb 20 09:26:40 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:26:40 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:26:40 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully.
Feb 20 09:26:41 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39547 DF PROTO=TCP SPT=53238 DPT=9105 SEQ=1193739873 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA026C800000000001030307) 
Feb 20 09:26:41 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully.
Feb 20 09:26:41 np0005625203.localdomain sudo[242045]: pam_unix(sudo:session): session closed for user root
Feb 20 09:26:41 np0005625203.localdomain podman[242187]: 2026-02-20 09:26:41.397718296 +0000 UTC m=+0.714272404 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 20 09:26:41 np0005625203.localdomain podman[242187]: 2026-02-20 09:26:41.437393742 +0000 UTC m=+0.753947830 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 20 09:26:41 np0005625203.localdomain sudo[242317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iqmmkjrimrlikgzrhbaanciqjdgcylhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579601.5599165-2854-132387623199727/AnsiballZ_stat.py
Feb 20 09:26:41 np0005625203.localdomain sudo[242317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:26:42 np0005625203.localdomain python3.9[242319]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:26:42 np0005625203.localdomain sudo[242317]: pam_unix(sudo:session): session closed for user root
Feb 20 09:26:42 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:26:42 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:26:42 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:26:42 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:26:42 np0005625203.localdomain sudo[242429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-twqzdoveunwtvnppwdvydjkyvorfszaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579602.3401535-2881-107496496614681/AnsiballZ_file.py
Feb 20 09:26:42 np0005625203.localdomain sudo[242429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:26:42 np0005625203.localdomain python3.9[242431]: ansible-file Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:26:42 np0005625203.localdomain sudo[242429]: pam_unix(sudo:session): session closed for user root
Feb 20 09:26:42 np0005625203.localdomain sudo[242484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dmndrnewqxydzxcdzxcpiztgiwitwqnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579602.3401535-2881-107496496614681/AnsiballZ_stat.py
Feb 20 09:26:43 np0005625203.localdomain sudo[242484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:26:43 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:26:43 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:26:43 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:26:43 np0005625203.localdomain python3.9[242486]: ansible-stat Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:26:43 np0005625203.localdomain sudo[242484]: pam_unix(sudo:session): session closed for user root
Feb 20 09:26:43 np0005625203.localdomain sudo[242593]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lwjkrlrcpmjywuycqgnqcpoozrgzayoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579603.2582362-2881-191516473811456/AnsiballZ_copy.py
Feb 20 09:26:43 np0005625203.localdomain sudo[242593]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:26:43 np0005625203.localdomain python3.9[242595]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771579603.2582362-2881-191516473811456/source dest=/etc/systemd/system/edpm_openstack_network_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:26:43 np0005625203.localdomain sudo[242593]: pam_unix(sudo:session): session closed for user root
Feb 20 09:26:44 np0005625203.localdomain sudo[242648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lngfbldikizhzhgegvzwpdulaflqlwjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579603.2582362-2881-191516473811456/AnsiballZ_systemd.py
Feb 20 09:26:44 np0005625203.localdomain sudo[242648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:26:44 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45849 DF PROTO=TCP SPT=52802 DPT=9105 SEQ=3821234106 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0278010000000001030307) 
Feb 20 09:26:44 np0005625203.localdomain python3.9[242650]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 20 09:26:44 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:26:44 np0005625203.localdomain systemd-sysv-generator[242675]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:26:44 np0005625203.localdomain systemd-rc-local-generator[242671]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:26:44 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:26:44 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 09:26:44 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:26:44 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:26:44 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:26:44 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 09:26:44 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:26:44 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:26:44 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:26:44 np0005625203.localdomain sudo[242648]: pam_unix(sudo:session): session closed for user root
Feb 20 09:26:45 np0005625203.localdomain sudo[242738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gtimmvqrsaesernsjkkeywxccpkdpsle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579603.2582362-2881-191516473811456/AnsiballZ_systemd.py
Feb 20 09:26:45 np0005625203.localdomain sudo[242738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:26:45 np0005625203.localdomain python3.9[242740]: ansible-systemd Invoked with state=restarted name=edpm_openstack_network_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:26:45 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0e4bbbcc3a308b062cd809f5d981a575292a522b3c6697e4ac7d70789e33f207-merged.mount: Deactivated successfully.
Feb 20 09:26:45 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-37d662a0623c7987d6654fd890eea9e2ed325a6255b9d568db582e750601fc64-merged.mount: Deactivated successfully.
Feb 20 09:26:46 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:26:46 np0005625203.localdomain systemd-rc-local-generator[242766]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:26:46 np0005625203.localdomain systemd-sysv-generator[242769]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:26:46 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:26:46 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 09:26:46 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:26:46 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:26:46 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:26:46 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 09:26:46 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:26:46 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:26:46 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:26:46 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:26:46 np0005625203.localdomain systemd[1]: Starting openstack_network_exporter container...
Feb 20 09:26:46 np0005625203.localdomain podman[242781]: 2026-02-20 09:26:46.977127822 +0000 UTC m=+0.084804203 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 20 09:26:47 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11786 DF PROTO=TCP SPT=47476 DPT=9101 SEQ=738867909 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0283000000000001030307) 
Feb 20 09:26:47 np0005625203.localdomain podman[242781]: 2026-02-20 09:26:47.020218028 +0000 UTC m=+0.127894429 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:26:48 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully.
Feb 20 09:26:48 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully.
Feb 20 09:26:48 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully.
Feb 20 09:26:48 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:26:48 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:26:48 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05b51c29942bd3b8c4fee2131632735ff28c903d9a0d3410352169879e71bf2e/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Feb 20 09:26:48 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05b51c29942bd3b8c4fee2131632735ff28c903d9a0d3410352169879e71bf2e/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Feb 20 09:26:48 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:26:48 np0005625203.localdomain podman[242783]: 2026-02-20 09:26:48.750673328 +0000 UTC m=+1.849542404 container init dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, name=ubi9/ubi-minimal, build-date=2026-02-05T04:57:10Z, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, maintainer=Red Hat, Inc., managed_by=edpm_ansible, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.7, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_id=openstack_network_exporter, vendor=Red Hat, Inc., vcs-type=git, release=1770267347)
Feb 20 09:26:48 np0005625203.localdomain openstack_network_exporter[242811]: INFO    09:26:48 main.go:48: registering *bridge.Collector
Feb 20 09:26:48 np0005625203.localdomain openstack_network_exporter[242811]: INFO    09:26:48 main.go:48: registering *coverage.Collector
Feb 20 09:26:48 np0005625203.localdomain openstack_network_exporter[242811]: INFO    09:26:48 main.go:48: registering *datapath.Collector
Feb 20 09:26:48 np0005625203.localdomain openstack_network_exporter[242811]: INFO    09:26:48 main.go:48: registering *iface.Collector
Feb 20 09:26:48 np0005625203.localdomain openstack_network_exporter[242811]: INFO    09:26:48 main.go:48: registering *memory.Collector
Feb 20 09:26:48 np0005625203.localdomain openstack_network_exporter[242811]: INFO    09:26:48 main.go:55: *ovnnorthd.Collector not registered, metric set not enabled
Feb 20 09:26:48 np0005625203.localdomain openstack_network_exporter[242811]: INFO    09:26:48 main.go:48: registering *ovn.Collector
Feb 20 09:26:48 np0005625203.localdomain openstack_network_exporter[242811]: INFO    09:26:48 main.go:55: *ovsdbserver.Collector not registered, metric set not enabled
Feb 20 09:26:48 np0005625203.localdomain openstack_network_exporter[242811]: INFO    09:26:48 main.go:48: registering *pmd_perf.Collector
Feb 20 09:26:48 np0005625203.localdomain openstack_network_exporter[242811]: INFO    09:26:48 main.go:48: registering *pmd_rxq.Collector
Feb 20 09:26:48 np0005625203.localdomain openstack_network_exporter[242811]: INFO    09:26:48 main.go:48: registering *vswitch.Collector
Feb 20 09:26:48 np0005625203.localdomain openstack_network_exporter[242811]: NOTICE  09:26:48 main.go:82: listening on http://:9105/metrics
Feb 20 09:26:48 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:26:48 np0005625203.localdomain podman[242783]: 2026-02-20 09:26:48.78322301 +0000 UTC m=+1.882092066 container start dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, build-date=2026-02-05T04:57:10Z, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.buildah.version=1.33.7, release=1770267347, architecture=x86_64, container_name=openstack_network_exporter, managed_by=edpm_ansible, org.opencontainers.image.created=2026-02-05T04:57:10Z, name=ubi9/ubi-minimal, config_id=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., version=9.7, vcs-type=git)
Feb 20 09:26:48 np0005625203.localdomain podman[242783]: openstack_network_exporter
Feb 20 09:26:49 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:26:49 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:26:50 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6971 DF PROTO=TCP SPT=53462 DPT=9100 SEQ=2392754836 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA028F400000000001030307) 
Feb 20 09:26:50 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:26:50 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully.
Feb 20 09:26:50 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully.
Feb 20 09:26:50 np0005625203.localdomain systemd[1]: Started openstack_network_exporter container.
Feb 20 09:26:50 np0005625203.localdomain podman[242824]: 2026-02-20 09:26:50.841335414 +0000 UTC m=+2.052200045 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=starting, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, maintainer=Red Hat, Inc., vcs-type=git, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.7, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, managed_by=edpm_ansible, distribution-scope=public, name=ubi9/ubi-minimal, build-date=2026-02-05T04:57:10Z)
Feb 20 09:26:50 np0005625203.localdomain sudo[242738]: pam_unix(sudo:session): session closed for user root
Feb 20 09:26:50 np0005625203.localdomain podman[242836]: 2026-02-20 09:26:50.888867608 +0000 UTC m=+1.555673316 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, 
tcib_managed=true, org.label-schema.build-date=20260127)
Feb 20 09:26:50 np0005625203.localdomain podman[242836]: 2026-02-20 09:26:50.898288881 +0000 UTC m=+1.565094629 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute)
Feb 20 09:26:50 np0005625203.localdomain podman[242836]: unhealthy
Feb 20 09:26:50 np0005625203.localdomain podman[242837]: 2026-02-20 09:26:50.941174901 +0000 UTC m=+1.604720375 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:26:50 np0005625203.localdomain podman[242824]: 2026-02-20 09:26:50.949327437 +0000 UTC m=+2.160192148 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, build-date=2026-02-05T04:57:10Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9/ubi-minimal, version=9.7, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., release=1770267347, managed_by=edpm_ansible, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, container_name=openstack_network_exporter, io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.created=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, config_id=openstack_network_exporter)
Feb 20 09:26:51 np0005625203.localdomain podman[242837]: 2026-02-20 09:26:51.025211871 +0000 UTC m=+1.688757345 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:26:51 np0005625203.localdomain python3.9[242994]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb 20 09:26:51 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:26:51 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:26:51 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:26:51 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 09:26:51 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Failed with result 'exit-code'.
Feb 20 09:26:51 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:26:51 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:26:52 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:26:52 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:26:52 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:26:52 np0005625203.localdomain sudo[243106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-egdgrzxumuzdhchhdwqeeuzpqbtfgqnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579612.5289977-3016-249927970135526/AnsiballZ_stat.py
Feb 20 09:26:52 np0005625203.localdomain sudo[243106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:26:53 np0005625203.localdomain python3.9[243108]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:26:53 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49199 DF PROTO=TCP SPT=41062 DPT=9100 SEQ=84278636 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA029A810000000001030307) 
Feb 20 09:26:53 np0005625203.localdomain sudo[243106]: pam_unix(sudo:session): session closed for user root
Feb 20 09:26:53 np0005625203.localdomain sudo[243196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uywoaiclyciwrbvqcerueaywifihobvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579612.5289977-3016-249927970135526/AnsiballZ_copy.py
Feb 20 09:26:53 np0005625203.localdomain sudo[243196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:26:53 np0005625203.localdomain python3.9[243198]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771579612.5289977-3016-249927970135526/.source.yaml _original_basename=.aaxcyc4y follow=False checksum=3d9c806251215c5317a47411279e51c792f2fd64 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:26:53 np0005625203.localdomain sudo[243196]: pam_unix(sudo:session): session closed for user root
Feb 20 09:26:54 np0005625203.localdomain sudo[243306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qeuqwxghomvmzmiwkdluysvwxifxjxyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579613.798667-3061-23883873674411/AnsiballZ_find.py
Feb 20 09:26:54 np0005625203.localdomain sudo[243306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:26:54 np0005625203.localdomain python3.9[243308]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 20 09:26:54 np0005625203.localdomain sudo[243306]: pam_unix(sudo:session): session closed for user root
Feb 20 09:26:55 np0005625203.localdomain sudo[243326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:26:55 np0005625203.localdomain sudo[243326]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:26:55 np0005625203.localdomain sudo[243326]: pam_unix(sudo:session): session closed for user root
Feb 20 09:26:55 np0005625203.localdomain sudo[243344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 09:26:55 np0005625203.localdomain sudo[243344]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:26:55 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully.
Feb 20 09:26:55 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-5839c92e4849940a9cba2db411bd09f73de0be67a96e976da2027adb67ab877e-merged.mount: Deactivated successfully.
Feb 20 09:26:56 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6973 DF PROTO=TCP SPT=53462 DPT=9100 SEQ=2392754836 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA02A7000000000001030307) 
Feb 20 09:26:57 np0005625203.localdomain sshd[243375]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:26:57 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-eae537b18cb4af6ef1d611e84802ac12d948a1ed622870af6f76704805834c9a-merged.mount: Deactivated successfully.
Feb 20 09:26:57 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-14738252526b4ecc3e5658790c785cb46cd573b0c30a58499169cca3263ae65c-merged.mount: Deactivated successfully.
Feb 20 09:26:57 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-14738252526b4ecc3e5658790c785cb46cd573b0c30a58499169cca3263ae65c-merged.mount: Deactivated successfully.
Feb 20 09:26:58 np0005625203.localdomain sudo[243344]: pam_unix(sudo:session): session closed for user root
Feb 20 09:26:58 np0005625203.localdomain sshd[243375]: Invalid user airflow from 34.131.211.42 port 40510
Feb 20 09:26:58 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52423 DF PROTO=TCP SPT=53924 DPT=9882 SEQ=1344029356 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA02B1CC0000000001030307) 
Feb 20 09:26:59 np0005625203.localdomain sshd[243375]: Received disconnect from 34.131.211.42 port 40510:11: Bye Bye [preauth]
Feb 20 09:26:59 np0005625203.localdomain sshd[243375]: Disconnected from invalid user airflow 34.131.211.42 port 40510 [preauth]
Feb 20 09:26:59 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:26:59 np0005625203.localdomain sudo[243396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:26:59 np0005625203.localdomain sudo[243396]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:26:59 np0005625203.localdomain sudo[243396]: pam_unix(sudo:session): session closed for user root
Feb 20 09:26:59 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-eae537b18cb4af6ef1d611e84802ac12d948a1ed622870af6f76704805834c9a-merged.mount: Deactivated successfully.
Feb 20 09:26:59 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-eae537b18cb4af6ef1d611e84802ac12d948a1ed622870af6f76704805834c9a-merged.mount: Deactivated successfully.
Feb 20 09:27:00 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:27:00 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:27:00 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:27:01 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:27:01 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:27:01 np0005625203.localdomain sshd[243414]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:27:01 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:27:02 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52425 DF PROTO=TCP SPT=53924 DPT=9882 SEQ=1344029356 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA02BDC00000000001030307) 
Feb 20 09:27:03 np0005625203.localdomain sshd[243414]: Received disconnect from 103.48.192.48 port 13012:11: Bye Bye [preauth]
Feb 20 09:27:03 np0005625203.localdomain sshd[243414]: Disconnected from authenticating user root 103.48.192.48 port 13012 [preauth]
Feb 20 09:27:03 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:27:03 np0005625203.localdomain podman[243416]: 2026-02-20 09:27:03.305365478 +0000 UTC m=+0.051754327 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=starting, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 09:27:03 np0005625203.localdomain podman[243416]: 2026-02-20 09:27:03.31785061 +0000 UTC m=+0.064239379 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 09:27:03 np0005625203.localdomain podman[243416]: unhealthy
Feb 20 09:27:03 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-14738252526b4ecc3e5658790c785cb46cd573b0c30a58499169cca3263ae65c-merged.mount: Deactivated successfully.
Feb 20 09:27:03 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-ffa0361de9b1daeb234135fe553d80c8cdedd4f9acc107cdd3a4ed4b713c4ea0-merged.mount: Deactivated successfully.
Feb 20 09:27:04 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-ffa0361de9b1daeb234135fe553d80c8cdedd4f9acc107cdd3a4ed4b713c4ea0-merged.mount: Deactivated successfully.
Feb 20 09:27:04 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 09:27:04 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Failed with result 'exit-code'.
Feb 20 09:27:06 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52426 DF PROTO=TCP SPT=53924 DPT=9882 SEQ=1344029356 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA02CD800000000001030307) 
Feb 20 09:27:06 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-4e4217686394af7a9122b2b81585c3ad5207fe018f230f20d139fff3e54ac3cc-merged.mount: Deactivated successfully.
Feb 20 09:27:06 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-3105551fde90ad87a79816e708b2cc4b7af2f50432ce26b439bbd7707bc89976-merged.mount: Deactivated successfully.
Feb 20 09:27:06 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-3105551fde90ad87a79816e708b2cc4b7af2f50432ce26b439bbd7707bc89976-merged.mount: Deactivated successfully.
Feb 20 09:27:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:27:07.639 161112 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:27:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:27:07.639 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:27:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:27:07.639 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:27:08 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31983 DF PROTO=TCP SPT=60914 DPT=9105 SEQ=1314052851 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA02D5400000000001030307) 
Feb 20 09:27:08 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-33f73751efe606c7233470249b676223e1b26b870cc49c3dbfbe2c7691e9f3fe-merged.mount: Deactivated successfully.
Feb 20 09:27:08 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-4e4217686394af7a9122b2b81585c3ad5207fe018f230f20d139fff3e54ac3cc-merged.mount: Deactivated successfully.
Feb 20 09:27:08 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-4e4217686394af7a9122b2b81585c3ad5207fe018f230f20d139fff3e54ac3cc-merged.mount: Deactivated successfully.
Feb 20 09:27:09 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-1d7b7d3208029afb8b179e48c365354efe7c39d41194e42a7d13168820ab51ad-merged.mount: Deactivated successfully.
Feb 20 09:27:10 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-33f73751efe606c7233470249b676223e1b26b870cc49c3dbfbe2c7691e9f3fe-merged.mount: Deactivated successfully.
Feb 20 09:27:10 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-33f73751efe606c7233470249b676223e1b26b870cc49c3dbfbe2c7691e9f3fe-merged.mount: Deactivated successfully.
Feb 20 09:27:10 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-1ad843ea4b31b05bcf49ccd6faa74bd0d6976ffabe60466fd78caf7ec41bf4ac-merged.mount: Deactivated successfully.
Feb 20 09:27:10 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-1d7b7d3208029afb8b179e48c365354efe7c39d41194e42a7d13168820ab51ad-merged.mount: Deactivated successfully.
Feb 20 09:27:11 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-1d7b7d3208029afb8b179e48c365354efe7c39d41194e42a7d13168820ab51ad-merged.mount: Deactivated successfully.
Feb 20 09:27:11 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-57c9a356b8a6d9095c1e6bfd1bb5d3b87c9d1b944c2c5d8a1da6e61dd690c595-merged.mount: Deactivated successfully.
Feb 20 09:27:11 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-1ad843ea4b31b05bcf49ccd6faa74bd0d6976ffabe60466fd78caf7ec41bf4ac-merged.mount: Deactivated successfully.
Feb 20 09:27:11 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-1ad843ea4b31b05bcf49ccd6faa74bd0d6976ffabe60466fd78caf7ec41bf4ac-merged.mount: Deactivated successfully.
Feb 20 09:27:12 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:27:12 np0005625203.localdomain podman[243438]: 2026-02-20 09:27:12.75965409 +0000 UTC m=+0.074666919 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 20 09:27:12 np0005625203.localdomain podman[243438]: 2026-02-20 09:27:12.771203514 +0000 UTC m=+0.086216413 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 20 09:27:12 np0005625203.localdomain sshd[243461]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:27:13 np0005625203.localdomain sshd[243463]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:27:13 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-3105551fde90ad87a79816e708b2cc4b7af2f50432ce26b439bbd7707bc89976-merged.mount: Deactivated successfully.
Feb 20 09:27:13 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:27:14 np0005625203.localdomain sshd[243463]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:27:14 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=31985 DF PROTO=TCP SPT=60914 DPT=9105 SEQ=1314052851 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA02ED000000000001030307) 
Feb 20 09:27:14 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:27:14.199 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:27:14 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:27:14.200 228941 DEBUG nova.compute.manager [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 20 09:27:14 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:27:14.200 228941 DEBUG nova.compute.manager [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 20 09:27:14 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:27:14.223 228941 DEBUG nova.compute.manager [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 20 09:27:14 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:27:14.223 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:27:14 np0005625203.localdomain sshd[243461]: Invalid user sybase from 103.61.123.132 port 51562
Feb 20 09:27:14 np0005625203.localdomain sshd[243461]: Received disconnect from 103.61.123.132 port 51562:11: Bye Bye [preauth]
Feb 20 09:27:14 np0005625203.localdomain sshd[243461]: Disconnected from invalid user sybase 103.61.123.132 port 51562 [preauth]
Feb 20 09:27:14 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52427 DF PROTO=TCP SPT=53924 DPT=9882 SEQ=1344029356 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA02EE800000000001030307) 
Feb 20 09:27:14 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:27:14 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-530e571f2fdc2c9cc9ab61d58bf266b4766d3c3aa17392b07069c5b092adeb06-merged.mount: Deactivated successfully.
Feb 20 09:27:15 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:27:15.198 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:27:15 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:27:15.198 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:27:15 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:27:15.198 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:27:15 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:27:15.198 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:27:15 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:27:15.198 228941 DEBUG nova.compute.manager [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 20 09:27:15 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:27:15.198 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:27:15 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:27:15.232 228941 DEBUG oslo_concurrency.lockutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:27:15 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:27:15.232 228941 DEBUG oslo_concurrency.lockutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:27:15 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:27:15.232 228941 DEBUG oslo_concurrency.lockutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:27:15 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:27:15.233 228941 DEBUG nova.compute.resource_tracker [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Auditing locally available compute resources for np0005625203.localdomain (node: np0005625203.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 20 09:27:15 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:27:15.233 228941 DEBUG oslo_concurrency.processutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:27:15 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:27:15 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:27:15.666 228941 DEBUG oslo_concurrency.processutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:27:15 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:27:15 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:27:15.867 228941 WARNING nova.virt.libvirt.driver [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 20 09:27:15 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:27:15.869 228941 DEBUG nova.compute.resource_tracker [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Hypervisor/Node resource view: name=np0005625203.localdomain free_ram=13285MB free_disk=41.83720779418945GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 20 09:27:15 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:27:15.869 228941 DEBUG oslo_concurrency.lockutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:27:15 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:27:15.869 228941 DEBUG oslo_concurrency.lockutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:27:15 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:27:15 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:27:15.926 228941 DEBUG nova.compute.resource_tracker [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 20 09:27:15 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:27:15.926 228941 DEBUG nova.compute.resource_tracker [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Final resource view: name=np0005625203.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 20 09:27:15 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:27:15.953 228941 DEBUG oslo_concurrency.processutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:27:15 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:27:16 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:27:16.461 228941 DEBUG oslo_concurrency.processutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:27:16 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:27:16.466 228941 DEBUG nova.compute.provider_tree [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Inventory has not changed in ProviderTree for provider: e5d5157a-2df2-4f51-b5fb-cd2da3a8584e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 20 09:27:16 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:27:16.483 228941 DEBUG nova.scheduler.client.report [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Inventory has not changed for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 20 09:27:16 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:27:16.485 228941 DEBUG nova.compute.resource_tracker [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Compute_service record updated for np0005625203.localdomain:np0005625203.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 20 09:27:16 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:27:16.486 228941 DEBUG oslo_concurrency.lockutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:27:16 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-530e571f2fdc2c9cc9ab61d58bf266b4766d3c3aa17392b07069c5b092adeb06-merged.mount: Deactivated successfully.
Feb 20 09:27:16 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-4853d8cbf1b20eda152efab24d0e4cb43146df568657aa0bf8852ddc75389e64-merged.mount: Deactivated successfully.
Feb 20 09:27:16 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-b4f761d90eeb5a4c1ea51e856783cf8398e02a6caf306b90498250a43e5bbae1-merged.mount: Deactivated successfully.
Feb 20 09:27:17 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49424 DF PROTO=TCP SPT=48478 DPT=9101 SEQ=1102226254 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA02F8410000000001030307) 
Feb 20 09:27:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:27:17.178 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:27:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:27:17.179 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:27:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:27:17.180 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:27:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:27:17.180 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:27:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:27:17.180 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:27:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:27:17.180 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:27:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:27:17.180 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:27:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:27:17.180 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:27:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:27:17.180 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:27:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:27:17.181 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:27:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:27:17.181 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:27:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:27:17.181 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:27:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:27:17.181 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:27:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:27:17.181 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:27:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:27:17.181 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:27:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:27:17.182 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:27:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:27:17.182 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:27:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:27:17.182 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:27:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:27:17.182 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:27:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:27:17.182 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:27:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:27:17.182 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:27:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:27:17.182 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:27:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:27:17.183 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:27:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:27:17.183 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:27:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:27:17.183 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:27:17 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:27:17.487 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:27:17 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:27:17.488 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:27:17 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-e1fac4507a16e359f79966290a44e975bb0ed717e8b6cc0e34b61e8c96e0a1a3-merged.mount: Deactivated successfully.
Feb 20 09:27:18 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:27:18 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:27:18 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:27:18 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:27:18 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:27:18 np0005625203.localdomain podman[243509]: 2026-02-20 09:27:18.888669354 +0000 UTC m=+0.086537213 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:27:18 np0005625203.localdomain podman[243509]: 2026-02-20 09:27:18.920208306 +0000 UTC m=+0.118076115 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent)
Feb 20 09:27:19 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:27:19 np0005625203.localdomain systemd[1]: tmp-crun.fVM5xU.mount: Deactivated successfully.
Feb 20 09:27:20 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=39392 DF PROTO=TCP SPT=57786 DPT=9101 SEQ=3472349963 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0304810000000001030307) 
Feb 20 09:27:21 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully.
Feb 20 09:27:22 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully.
Feb 20 09:27:22 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:27:22 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:27:22 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:27:22 np0005625203.localdomain podman[243527]: 2026-02-20 09:27:22.159242121 +0000 UTC m=+0.091051165 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, tcib_managed=true)
Feb 20 09:27:22 np0005625203.localdomain podman[243529]: 2026-02-20 09:27:22.214440426 +0000 UTC m=+0.137891598 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_controller)
Feb 20 09:27:22 np0005625203.localdomain podman[243529]: 2026-02-20 09:27:22.274686859 +0000 UTC m=+0.198138021 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 20 09:27:22 np0005625203.localdomain podman[243527]: 2026-02-20 09:27:22.293388999 +0000 UTC m=+0.225198033 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0)
Feb 20 09:27:22 np0005625203.localdomain podman[243527]: unhealthy
Feb 20 09:27:22 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:27:22 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 09:27:22 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Failed with result 'exit-code'.
Feb 20 09:27:22 np0005625203.localdomain podman[243528]: 2026-02-20 09:27:22.278744926 +0000 UTC m=+0.205751551 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.expose-services=, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-type=git, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., version=9.7, release=1770267347, name=ubi9/ubi-minimal, io.buildah.version=1.33.7, config_id=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64)
Feb 20 09:27:22 np0005625203.localdomain podman[243528]: 2026-02-20 09:27:22.364240038 +0000 UTC m=+0.291246673 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, io.openshift.expose-services=, config_id=openstack_network_exporter, distribution-scope=public, name=ubi9/ubi-minimal, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., container_name=openstack_network_exporter, build-date=2026-02-05T04:57:10Z, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.tags=minimal rhel9, release=1770267347, managed_by=edpm_ansible, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=9.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Feb 20 09:27:22 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully.
Feb 20 09:27:23 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49426 DF PROTO=TCP SPT=48478 DPT=9101 SEQ=1102226254 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0310000000000001030307) 
Feb 20 09:27:24 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:27:24 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully.
Feb 20 09:27:24 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully.
Feb 20 09:27:24 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:27:25 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:27:25 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:27:25 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:27:26 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:27:26 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:27:26 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:27:26 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=12388 DF PROTO=TCP SPT=40004 DPT=9100 SEQ=2599719944 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA031C400000000001030307) 
Feb 20 09:27:28 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully.
Feb 20 09:27:28 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-28cf91af479bb1e20b302e6f839411cf8d51c7bb716b7360ee3fc14275286500-merged.mount: Deactivated successfully.
Feb 20 09:27:28 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-28cf91af479bb1e20b302e6f839411cf8d51c7bb716b7360ee3fc14275286500-merged.mount: Deactivated successfully.
Feb 20 09:27:28 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27223 DF PROTO=TCP SPT=50104 DPT=9882 SEQ=2537149459 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0326FF0000000001030307) 
Feb 20 09:27:31 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully.
Feb 20 09:27:31 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully.
Feb 20 09:27:31 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully.
Feb 20 09:27:32 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27225 DF PROTO=TCP SPT=50104 DPT=9882 SEQ=2537149459 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0333010000000001030307) 
Feb 20 09:27:33 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:27:33 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully.
Feb 20 09:27:33 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully.
Feb 20 09:27:34 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:27:34 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:27:34 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:27:34 np0005625203.localdomain podman[243586]: 2026-02-20 09:27:34.700471137 +0000 UTC m=+0.062674413 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 09:27:34 np0005625203.localdomain podman[243586]: 2026-02-20 09:27:34.729049104 +0000 UTC m=+0.091252410 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 20 09:27:34 np0005625203.localdomain podman[243586]: unhealthy
Feb 20 09:27:34 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 09:27:34 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Failed with result 'exit-code'.
Feb 20 09:27:35 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:27:35 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:27:35 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:27:35 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:27:36 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=27226 DF PROTO=TCP SPT=50104 DPT=9882 SEQ=2537149459 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0342C00000000001030307) 
Feb 20 09:27:38 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61256 DF PROTO=TCP SPT=41372 DPT=9105 SEQ=1541390361 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA034A810000000001030307) 
Feb 20 09:27:38 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully.
Feb 20 09:27:38 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-ca430835467936ee65e9cfbaab428c832bc1d43f5f133ae27dcd34c9dca062b5-merged.mount: Deactivated successfully.
Feb 20 09:27:38 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-ca430835467936ee65e9cfbaab428c832bc1d43f5f133ae27dcd34c9dca062b5-merged.mount: Deactivated successfully.
Feb 20 09:27:39 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:27:39 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-ba9090487930ea4ca9efbb869a950be47d0c5c3f7a5f6eb919ee0be5f322c2ce-merged.mount: Deactivated successfully.
Feb 20 09:27:39 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:27:39 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:27:40 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:27:41 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-ba9090487930ea4ca9efbb869a950be47d0c5c3f7a5f6eb919ee0be5f322c2ce-merged.mount: Deactivated successfully.
Feb 20 09:27:41 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=45852 DF PROTO=TCP SPT=52802 DPT=9105 SEQ=3821234106 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0356800000000001030307) 
Feb 20 09:27:41 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0a6f20b244995edeba33e1e070a87c3f1ef57f7787ff2d18dff693e5ba4f7644-merged.mount: Deactivated successfully.
Feb 20 09:27:43 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully.
Feb 20 09:27:43 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0e4bbbcc3a308b062cd809f5d981a575292a522b3c6697e4ac7d70789e33f207-merged.mount: Deactivated successfully.
Feb 20 09:27:43 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0e4bbbcc3a308b062cd809f5d981a575292a522b3c6697e4ac7d70789e33f207-merged.mount: Deactivated successfully.
Feb 20 09:27:44 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61258 DF PROTO=TCP SPT=41372 DPT=9105 SEQ=1541390361 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0362400000000001030307) 
Feb 20 09:27:44 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:27:44 np0005625203.localdomain podman[243610]: 2026-02-20 09:27:44.276291413 +0000 UTC m=+0.093862065 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 20 09:27:44 np0005625203.localdomain podman[243610]: 2026-02-20 09:27:44.287292942 +0000 UTC m=+0.104863634 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 20 09:27:45 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:27:45 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully.
Feb 20 09:27:45 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully.
Feb 20 09:27:45 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:27:46 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:27:46 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:27:46 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:27:47 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38866 DF PROTO=TCP SPT=59526 DPT=9101 SEQ=859467295 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA036D810000000001030307) 
Feb 20 09:27:47 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:27:47 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:27:47 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:27:48 np0005625203.localdomain sudo[243723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-abxagqafninskzuvewgshmfpuhmmubqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579668.292276-3286-102020810228454/AnsiballZ_podman_container_info.py
Feb 20 09:27:48 np0005625203.localdomain sudo[243723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:27:48 np0005625203.localdomain python3.9[243725]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Feb 20 09:27:49 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:27:50 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37185 DF PROTO=TCP SPT=34672 DPT=9100 SEQ=390833714 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0379800000000001030307) 
Feb 20 09:27:50 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0e4bbbcc3a308b062cd809f5d981a575292a522b3c6697e4ac7d70789e33f207-merged.mount: Deactivated successfully.
Feb 20 09:27:50 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52efa5d66e3554b905ead4ab4d91a04cce6d946ec83d9a5167ea71afe19dd150-merged.mount: Deactivated successfully.
Feb 20 09:27:50 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52efa5d66e3554b905ead4ab4d91a04cce6d946ec83d9a5167ea71afe19dd150-merged.mount: Deactivated successfully.
Feb 20 09:27:50 np0005625203.localdomain podman[243737]: 2026-02-20 09:27:50.456585904 +0000 UTC m=+0.773301306 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 20 09:27:50 np0005625203.localdomain podman[243737]: 2026-02-20 09:27:50.462060104 +0000 UTC m=+0.778775516 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Feb 20 09:27:51 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-c3c2fee87fe7e8303aaac2829f1b7d26d779101a77d8fd6a9f6bec71602d9a66-merged.mount: Deactivated successfully.
Feb 20 09:27:51 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-4c416128fe28816a81362614e1a7f9e853b273ba662e28de61a85f5c6446ec2c-merged.mount: Deactivated successfully.
Feb 20 09:27:51 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:27:51 np0005625203.localdomain sudo[243723]: pam_unix(sudo:session): session closed for user root
Feb 20 09:27:51 np0005625203.localdomain sudo[243865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wyteffderoezljmwrymuoikhifwzpejy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579671.5725005-3294-22898300805089/AnsiballZ_podman_container_exec.py
Feb 20 09:27:51 np0005625203.localdomain sudo[243865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:27:52 np0005625203.localdomain python3.9[243867]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 20 09:27:52 np0005625203.localdomain systemd[1]: Started libpod-conmon-efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.scope.
Feb 20 09:27:52 np0005625203.localdomain podman[243868]: 2026-02-20 09:27:52.263708464 +0000 UTC m=+0.127656756 container exec efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller)
Feb 20 09:27:52 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-c3c2fee87fe7e8303aaac2829f1b7d26d779101a77d8fd6a9f6bec71602d9a66-merged.mount: Deactivated successfully.
Feb 20 09:27:52 np0005625203.localdomain podman[243868]: 2026-02-20 09:27:52.298447689 +0000 UTC m=+0.162395961 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible)
Feb 20 09:27:52 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:27:52 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:27:52 np0005625203.localdomain sudo[243865]: pam_unix(sudo:session): session closed for user root
Feb 20 09:27:52 np0005625203.localdomain podman[243897]: 2026-02-20 09:27:52.993200011 +0000 UTC m=+0.301697336 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:27:53 np0005625203.localdomain podman[243897]: 2026-02-20 09:27:53.025233954 +0000 UTC m=+0.333731259 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute)
Feb 20 09:27:53 np0005625203.localdomain podman[243897]: unhealthy
Feb 20 09:27:53 np0005625203.localdomain podman[243898]: 2026-02-20 09:27:53.03869242 +0000 UTC m=+0.348860477 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible)
Feb 20 09:27:53 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38868 DF PROTO=TCP SPT=59526 DPT=9101 SEQ=859467295 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0385400000000001030307) 
Feb 20 09:27:53 np0005625203.localdomain podman[243898]: 2026-02-20 09:27:53.136424647 +0000 UTC m=+0.446592764 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Feb 20 09:27:53 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:27:53 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 09:27:53 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Failed with result 'exit-code'.
Feb 20 09:27:53 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:27:53 np0005625203.localdomain systemd[1]: libpod-conmon-efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.scope: Deactivated successfully.
Feb 20 09:27:53 np0005625203.localdomain sudo[244044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ocuiwtmcwvnkthboubwukdjbajsjabxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579673.0746272-3302-279836494203829/AnsiballZ_podman_container_exec.py
Feb 20 09:27:53 np0005625203.localdomain sudo[244044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:27:53 np0005625203.localdomain python3.9[244047]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 20 09:27:53 np0005625203.localdomain systemd[1]: Started libpod-conmon-efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.scope.
Feb 20 09:27:53 np0005625203.localdomain podman[244048]: 2026-02-20 09:27:53.737170817 +0000 UTC m=+0.109443531 container exec efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:27:53 np0005625203.localdomain podman[244048]: 2026-02-20 09:27:53.769312893 +0000 UTC m=+0.141585597 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 20 09:27:54 np0005625203.localdomain sudo[244044]: pam_unix(sudo:session): session closed for user root
Feb 20 09:27:54 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-4c416128fe28816a81362614e1a7f9e853b273ba662e28de61a85f5c6446ec2c-merged.mount: Deactivated successfully.
Feb 20 09:27:54 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-e7b2b5ec277db0a8768ab2ccd5edd825c3b965080651b007ffb0861f2c0ca9b0-merged.mount: Deactivated successfully.
Feb 20 09:27:54 np0005625203.localdomain sudo[244184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mywwwoshkszfxpcfchjkzkeoeqkgvfuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579674.336152-3310-251658385409655/AnsiballZ_file.py
Feb 20 09:27:54 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:27:54 np0005625203.localdomain sudo[244184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:27:54 np0005625203.localdomain python3.9[244187]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:27:54 np0005625203.localdomain sudo[244184]: pam_unix(sudo:session): session closed for user root
Feb 20 09:27:55 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:27:55 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0336e79261e1f534d091cad94b9980aafc6b329c3b01bda2d50fcc505860ff11-merged.mount: Deactivated successfully.
Feb 20 09:27:55 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0336e79261e1f534d091cad94b9980aafc6b329c3b01bda2d50fcc505860ff11-merged.mount: Deactivated successfully.
Feb 20 09:27:55 np0005625203.localdomain systemd[1]: libpod-conmon-efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.scope: Deactivated successfully.
Feb 20 09:27:55 np0005625203.localdomain podman[244186]: 2026-02-20 09:27:55.320951268 +0000 UTC m=+0.688995974 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, container_name=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.expose-services=, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, version=9.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, config_id=openstack_network_exporter, io.buildah.version=1.33.7, org.opencontainers.image.created=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vendor=Red Hat, Inc.)
Feb 20 09:27:55 np0005625203.localdomain sudo[244306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gebrxpdcgfqbjxwkgztqejmpyzxqmxtj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579675.0328166-3319-35629055174634/AnsiballZ_podman_container_info.py
Feb 20 09:27:55 np0005625203.localdomain sudo[244306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:27:55 np0005625203.localdomain podman[244186]: 2026-02-20 09:27:55.357341945 +0000 UTC m=+0.725386601 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, vendor=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, maintainer=Red Hat, Inc., build-date=2026-02-05T04:57:10Z, vcs-type=git, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1770267347, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, version=9.7, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, distribution-scope=public, org.opencontainers.image.created=2026-02-05T04:57:10Z, name=ubi9/ubi-minimal, io.openshift.expose-services=)
Feb 20 09:27:55 np0005625203.localdomain python3.9[244316]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Feb 20 09:27:56 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=37187 DF PROTO=TCP SPT=34672 DPT=9100 SEQ=390833714 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0391410000000001030307) 
Feb 20 09:27:56 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:27:56 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:27:56 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:27:56 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:27:57 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:27:57 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:27:57 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:27:57 np0005625203.localdomain sudo[244306]: pam_unix(sudo:session): session closed for user root
Feb 20 09:27:58 np0005625203.localdomain sudo[244437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rucfkpbcxxhwqgxiuvsiowawlreawwrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579677.8614435-3327-245507762383118/AnsiballZ_podman_container_exec.py
Feb 20 09:27:58 np0005625203.localdomain sudo[244437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:27:58 np0005625203.localdomain python3.9[244439]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 20 09:27:58 np0005625203.localdomain systemd[1]: Started libpod-conmon-379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.scope.
Feb 20 09:27:58 np0005625203.localdomain podman[244440]: 2026-02-20 09:27:58.478320465 +0000 UTC m=+0.102149775 container exec 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 20 09:27:58 np0005625203.localdomain podman[244440]: 2026-02-20 09:27:58.486129197 +0000 UTC m=+0.109958487 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Feb 20 09:27:58 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0336e79261e1f534d091cad94b9980aafc6b329c3b01bda2d50fcc505860ff11-merged.mount: Deactivated successfully.
Feb 20 09:27:58 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-bfd7902c54f5e5aa9d28d09659c7eace41e35d954c014d763e6c73387e43d5dd-merged.mount: Deactivated successfully.
Feb 20 09:27:58 np0005625203.localdomain sudo[244437]: pam_unix(sudo:session): session closed for user root
Feb 20 09:27:58 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54422 DF PROTO=TCP SPT=43006 DPT=9882 SEQ=2026452767 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA039C2D0000000001030307) 
Feb 20 09:27:59 np0005625203.localdomain sudo[244577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ygvhnqbbhvtxpbyzcqhrsiybsbsrlruu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579678.9127502-3335-187558701702247/AnsiballZ_podman_container_exec.py
Feb 20 09:27:59 np0005625203.localdomain sudo[244577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:27:59 np0005625203.localdomain python3.9[244579]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 20 09:27:59 np0005625203.localdomain sudo[244591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:27:59 np0005625203.localdomain sudo[244591]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:27:59 np0005625203.localdomain sudo[244591]: pam_unix(sudo:session): session closed for user root
Feb 20 09:27:59 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-c3c2fee87fe7e8303aaac2829f1b7d26d779101a77d8fd6a9f6bec71602d9a66-merged.mount: Deactivated successfully.
Feb 20 09:27:59 np0005625203.localdomain sudo[244609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 09:27:59 np0005625203.localdomain sudo[244609]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:27:59 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-4c416128fe28816a81362614e1a7f9e853b273ba662e28de61a85f5c6446ec2c-merged.mount: Deactivated successfully.
Feb 20 09:27:59 np0005625203.localdomain systemd[1]: libpod-conmon-379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.scope: Deactivated successfully.
Feb 20 09:27:59 np0005625203.localdomain systemd[1]: Started libpod-conmon-379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.scope.
Feb 20 09:27:59 np0005625203.localdomain podman[244580]: 2026-02-20 09:27:59.666682187 +0000 UTC m=+0.232746641 container exec 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 20 09:27:59 np0005625203.localdomain podman[244580]: 2026-02-20 09:27:59.67226765 +0000 UTC m=+0.238332184 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Feb 20 09:28:00 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:28:00 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-c3c2fee87fe7e8303aaac2829f1b7d26d779101a77d8fd6a9f6bec71602d9a66-merged.mount: Deactivated successfully.
Feb 20 09:28:00 np0005625203.localdomain sudo[244577]: pam_unix(sudo:session): session closed for user root
Feb 20 09:28:01 np0005625203.localdomain systemd[1]: libpod-conmon-379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.scope: Deactivated successfully.
Feb 20 09:28:01 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:28:01 np0005625203.localdomain sudo[244609]: pam_unix(sudo:session): session closed for user root
Feb 20 09:28:01 np0005625203.localdomain sudo[244785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-asiwdrcazbovyqsaqqlcbydyovebznzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579681.0968313-3343-26241665096809/AnsiballZ_file.py
Feb 20 09:28:01 np0005625203.localdomain sudo[244785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:28:01 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:28:01 np0005625203.localdomain python3.9[244787]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:28:01 np0005625203.localdomain sudo[244785]: pam_unix(sudo:session): session closed for user root
Feb 20 09:28:01 np0005625203.localdomain sudo[244843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:28:01 np0005625203.localdomain sudo[244843]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:28:01 np0005625203.localdomain sudo[244843]: pam_unix(sudo:session): session closed for user root
Feb 20 09:28:02 np0005625203.localdomain sudo[244913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cmmxzhtkyfgubwzracensnubyqbpkzha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579681.7629554-3352-137708473744759/AnsiballZ_podman_container_info.py
Feb 20 09:28:02 np0005625203.localdomain sudo[244913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:28:02 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54424 DF PROTO=TCP SPT=43006 DPT=9882 SEQ=2026452767 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA03A8400000000001030307) 
Feb 20 09:28:02 np0005625203.localdomain python3.9[244915]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Feb 20 09:28:02 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-4c416128fe28816a81362614e1a7f9e853b273ba662e28de61a85f5c6446ec2c-merged.mount: Deactivated successfully.
Feb 20 09:28:02 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-4e86ba1d201a7a318aee08dfa5526c28ec564d0175a53bee4b062bdddbb48cb0-merged.mount: Deactivated successfully.
Feb 20 09:28:05 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully.
Feb 20 09:28:05 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:28:05 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully.
Feb 20 09:28:05 np0005625203.localdomain podman[244927]: 2026-02-20 09:28:05.807558995 +0000 UTC m=+0.382112078 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 20 09:28:05 np0005625203.localdomain podman[244927]: 2026-02-20 09:28:05.818320138 +0000 UTC m=+0.392873231 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 20 09:28:05 np0005625203.localdomain podman[244927]: unhealthy
Feb 20 09:28:06 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54425 DF PROTO=TCP SPT=43006 DPT=9882 SEQ=2026452767 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA03B8000000000001030307) 
Feb 20 09:28:07 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:28:07 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully.
Feb 20 09:28:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:28:07.640 161112 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:28:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:28:07.640 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:28:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:28:07.640 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:28:07 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully.
Feb 20 09:28:07 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 09:28:07 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Failed with result 'exit-code'.
Feb 20 09:28:07 np0005625203.localdomain sudo[244913]: pam_unix(sudo:session): session closed for user root
Feb 20 09:28:08 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56714 DF PROTO=TCP SPT=39162 DPT=9105 SEQ=2109458282 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA03BFC00000000001030307) 
Feb 20 09:28:08 np0005625203.localdomain sudo[245056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kqchnydakpufftadmbnnsnlkstykpieh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579688.0861213-3360-139355527741996/AnsiballZ_podman_container_exec.py
Feb 20 09:28:08 np0005625203.localdomain sudo[245056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:28:08 np0005625203.localdomain python3.9[245058]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 20 09:28:08 np0005625203.localdomain systemd[1]: Started libpod-conmon-aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.scope.
Feb 20 09:28:08 np0005625203.localdomain podman[245059]: 2026-02-20 09:28:08.776317058 +0000 UTC m=+0.138728028 container exec aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:28:08 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:28:08 np0005625203.localdomain podman[245059]: 2026-02-20 09:28:08.812269142 +0000 UTC m=+0.174680092 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:28:08 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:28:08 np0005625203.localdomain sudo[245056]: pam_unix(sudo:session): session closed for user root
Feb 20 09:28:09 np0005625203.localdomain sudo[245195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uadjhkqawfjbjkntbfpdbydugsvuzmqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579689.194533-3368-46899003355488/AnsiballZ_podman_container_exec.py
Feb 20 09:28:09 np0005625203.localdomain sudo[245195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:28:09 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:28:09 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:28:09 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:28:09 np0005625203.localdomain systemd[1]: libpod-conmon-aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.scope: Deactivated successfully.
Feb 20 09:28:09 np0005625203.localdomain python3.9[245197]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 20 09:28:09 np0005625203.localdomain systemd[1]: Started libpod-conmon-aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.scope.
Feb 20 09:28:09 np0005625203.localdomain podman[245198]: 2026-02-20 09:28:09.806084728 +0000 UTC m=+0.113000961 container exec aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:28:09 np0005625203.localdomain podman[245198]: 2026-02-20 09:28:09.838254004 +0000 UTC m=+0.145170297 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 20 09:28:10 np0005625203.localdomain sudo[245195]: pam_unix(sudo:session): session closed for user root
Feb 20 09:28:10 np0005625203.localdomain sudo[245335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ffifgcfyddgmipcpgvjzkyyuypuoedpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579690.2145317-3376-59814198383026/AnsiballZ_file.py
Feb 20 09:28:10 np0005625203.localdomain sudo[245335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:28:10 np0005625203.localdomain python3.9[245337]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:28:10 np0005625203.localdomain sudo[245335]: pam_unix(sudo:session): session closed for user root
Feb 20 09:28:11 np0005625203.localdomain sudo[245445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qukxqjopxallpudjzbwoqkrlefpxyzvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579691.0088856-3385-263148916381069/AnsiballZ_podman_container_info.py
Feb 20 09:28:11 np0005625203.localdomain sudo[245445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:28:11 np0005625203.localdomain python3.9[245447]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Feb 20 09:28:12 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully.
Feb 20 09:28:12 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-c44b9ee3b74198541ae59b815b4be56ebc6ae69bec4e8dadcdc16fb5e4a48b77-merged.mount: Deactivated successfully.
Feb 20 09:28:12 np0005625203.localdomain systemd[1]: libpod-conmon-aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.scope: Deactivated successfully.
Feb 20 09:28:14 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56716 DF PROTO=TCP SPT=39162 DPT=9105 SEQ=2109458282 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA03D7800000000001030307) 
Feb 20 09:28:14 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=54426 DF PROTO=TCP SPT=43006 DPT=9882 SEQ=2026452767 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA03D8810000000001030307) 
Feb 20 09:28:14 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully.
Feb 20 09:28:14 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0e4bbbcc3a308b062cd809f5d981a575292a522b3c6697e4ac7d70789e33f207-merged.mount: Deactivated successfully.
Feb 20 09:28:14 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0e4bbbcc3a308b062cd809f5d981a575292a522b3c6697e4ac7d70789e33f207-merged.mount: Deactivated successfully.
Feb 20 09:28:15 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:28:15.199 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:28:15 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:28:15.200 228941 DEBUG nova.compute.manager [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 20 09:28:15 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:28:15.200 228941 DEBUG nova.compute.manager [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 20 09:28:15 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:28:15.226 228941 DEBUG nova.compute.manager [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 20 09:28:15 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:28:15.227 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:28:15 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:28:16 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:28:16.198 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:28:16 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:28:16.199 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:28:16 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:28:16.199 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:28:16 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:28:16.199 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:28:16 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:28:16.200 228941 DEBUG nova.compute.manager [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 20 09:28:16 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:28:16.200 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:28:16 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:28:16.221 228941 DEBUG oslo_concurrency.lockutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:28:16 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:28:16.221 228941 DEBUG oslo_concurrency.lockutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:28:16 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:28:16.221 228941 DEBUG oslo_concurrency.lockutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:28:16 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:28:16.222 228941 DEBUG nova.compute.resource_tracker [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Auditing locally available compute resources for np0005625203.localdomain (node: np0005625203.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 20 09:28:16 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:28:16.222 228941 DEBUG oslo_concurrency.processutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:28:16 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:28:16 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully.
Feb 20 09:28:16 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:28:16.691 228941 DEBUG oslo_concurrency.processutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:28:16 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:28:16.842 228941 WARNING nova.virt.libvirt.driver [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 20 09:28:16 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:28:16.843 228941 DEBUG nova.compute.resource_tracker [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Hypervisor/Node resource view: name=np0005625203.localdomain free_ram=13215MB free_disk=41.83720779418945GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 20 09:28:16 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:28:16.844 228941 DEBUG oslo_concurrency.lockutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:28:16 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:28:16.844 228941 DEBUG oslo_concurrency.lockutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:28:16 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully.
Feb 20 09:28:16 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:28:16.916 228941 DEBUG nova.compute.resource_tracker [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 20 09:28:16 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:28:16.917 228941 DEBUG nova.compute.resource_tracker [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Final resource view: name=np0005625203.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 20 09:28:16 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:28:16.953 228941 DEBUG oslo_concurrency.processutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:28:16 np0005625203.localdomain sudo[245445]: pam_unix(sudo:session): session closed for user root
Feb 20 09:28:16 np0005625203.localdomain podman[245459]: 2026-02-20 09:28:16.972969118 +0000 UTC m=+1.290224018 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 20 09:28:16 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64149 DF PROTO=TCP SPT=44598 DPT=9101 SEQ=504097964 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA03E2800000000001030307) 
Feb 20 09:28:17 np0005625203.localdomain podman[245459]: 2026-02-20 09:28:17.008184259 +0000 UTC m=+1.325439189 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 20 09:28:17 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:28:17.405 228941 DEBUG oslo_concurrency.processutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:28:17 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:28:17.410 228941 DEBUG nova.compute.provider_tree [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Inventory has not changed in ProviderTree for provider: e5d5157a-2df2-4f51-b5fb-cd2da3a8584e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 20 09:28:17 np0005625203.localdomain sudo[245630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xcunzrntgvzdoavixqudbnjdyxkjlqis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579697.147397-3393-12091934206487/AnsiballZ_podman_container_exec.py
Feb 20 09:28:17 np0005625203.localdomain sudo[245630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:28:17 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:28:17.432 228941 DEBUG nova.scheduler.client.report [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Inventory has not changed for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 20 09:28:17 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:28:17.434 228941 DEBUG nova.compute.resource_tracker [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Compute_service record updated for np0005625203.localdomain:np0005625203.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 20 09:28:17 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:28:17.435 228941 DEBUG oslo_concurrency.lockutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:28:17 np0005625203.localdomain python3.9[245634]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 20 09:28:17 np0005625203.localdomain systemd[1]: tmp-crun.7JOgQJ.mount: Deactivated successfully.
Feb 20 09:28:17 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:28:17 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:28:17 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:28:17 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:28:18 np0005625203.localdomain systemd[1]: Started libpod-conmon-8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.scope.
Feb 20 09:28:18 np0005625203.localdomain podman[245635]: 2026-02-20 09:28:18.031172778 +0000 UTC m=+0.403212671 container exec 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 20 09:28:18 np0005625203.localdomain podman[245635]: 2026-02-20 09:28:18.062311253 +0000 UTC m=+0.434351116 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 20 09:28:18 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:28:18.431 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:28:18 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:28:18.432 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:28:18 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:28:18.448 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:28:18 np0005625203.localdomain sudo[245630]: pam_unix(sudo:session): session closed for user root
Feb 20 09:28:18 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:28:18 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:28:18 np0005625203.localdomain systemd[1]: libpod-conmon-8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.scope: Deactivated successfully.
Feb 20 09:28:19 np0005625203.localdomain sudo[245771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gdfrdyosjcljmdbmifqoimfwqwvpdrgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579698.7398622-3401-219750727216593/AnsiballZ_podman_container_exec.py
Feb 20 09:28:19 np0005625203.localdomain sudo[245771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:28:19 np0005625203.localdomain python3.9[245773]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 20 09:28:19 np0005625203.localdomain systemd[1]: Started libpod-conmon-8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.scope.
Feb 20 09:28:19 np0005625203.localdomain podman[245774]: 2026-02-20 09:28:19.399757583 +0000 UTC m=+0.110420191 container exec 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 20 09:28:19 np0005625203.localdomain podman[245774]: 2026-02-20 09:28:19.428404671 +0000 UTC m=+0.139067289 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 20 09:28:20 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49429 DF PROTO=TCP SPT=48478 DPT=9101 SEQ=1102226254 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA03EE800000000001030307) 
Feb 20 09:28:21 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0e4bbbcc3a308b062cd809f5d981a575292a522b3c6697e4ac7d70789e33f207-merged.mount: Deactivated successfully.
Feb 20 09:28:21 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-399537d55839d97f33c8f4bb32cbd0a91de930b46800cfbfe38988d47bef0997-merged.mount: Deactivated successfully.
Feb 20 09:28:21 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:28:21 np0005625203.localdomain sudo[245771]: pam_unix(sudo:session): session closed for user root
Feb 20 09:28:21 np0005625203.localdomain sudo[245922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yftasvrjiaxzzwxpevxyosgvhcjyhezc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579701.633585-3409-118094230584365/AnsiballZ_file.py
Feb 20 09:28:21 np0005625203.localdomain sudo[245922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:28:22 np0005625203.localdomain python3.9[245924]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:28:22 np0005625203.localdomain sudo[245922]: pam_unix(sudo:session): session closed for user root
Feb 20 09:28:22 np0005625203.localdomain systemd[1]: libpod-conmon-8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.scope: Deactivated successfully.
Feb 20 09:28:22 np0005625203.localdomain podman[245804]: 2026-02-20 09:28:22.188106789 +0000 UTC m=+0.708092186 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 20 09:28:22 np0005625203.localdomain podman[245804]: 2026-02-20 09:28:22.193856057 +0000 UTC m=+0.713841444 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:28:22 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-ac04412f5c5a43e8c61c2b8d6c1acf66f67fc19f0d028526d9bdbd1ed0352faf-merged.mount: Deactivated successfully.
Feb 20 09:28:22 np0005625203.localdomain sudo[246037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bnsljcgstqbjfiximtlszvkopfpsewzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579702.3020716-3418-264258594307070/AnsiballZ_podman_container_info.py
Feb 20 09:28:22 np0005625203.localdomain sudo[246037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:28:22 np0005625203.localdomain python3.9[246039]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Feb 20 09:28:22 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:28:22 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:28:22 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:28:22 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:28:23 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=64151 DF PROTO=TCP SPT=44598 DPT=9101 SEQ=504097964 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA03FA400000000001030307) 
Feb 20 09:28:23 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:28:23 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:28:23 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:28:23 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-ac04412f5c5a43e8c61c2b8d6c1acf66f67fc19f0d028526d9bdbd1ed0352faf-merged.mount: Deactivated successfully.
Feb 20 09:28:24 np0005625203.localdomain sudo[246037]: pam_unix(sudo:session): session closed for user root
Feb 20 09:28:24 np0005625203.localdomain podman[246053]: 2026-02-20 09:28:24.074723391 +0000 UTC m=+0.385080070 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=unhealthy, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, managed_by=edpm_ansible)
Feb 20 09:28:24 np0005625203.localdomain podman[246054]: 2026-02-20 09:28:24.12569959 +0000 UTC m=+0.433340534 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller)
Feb 20 09:28:24 np0005625203.localdomain podman[246053]: 2026-02-20 09:28:24.141665614 +0000 UTC m=+0.452022293 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute)
Feb 20 09:28:24 np0005625203.localdomain podman[246054]: 2026-02-20 09:28:24.192381965 +0000 UTC m=+0.500022859 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, container_name=ovn_controller)
Feb 20 09:28:24 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-cd2341bfcaa4cb509ed75b46508987f090c82b284b579c71e1d63af13468cdb7-merged.mount: Deactivated successfully.
Feb 20 09:28:25 np0005625203.localdomain sudo[246199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gdevbdstxlkaolpodmdlfdlsqvifgxyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579704.303268-3426-58601942322641/AnsiballZ_podman_container_exec.py
Feb 20 09:28:25 np0005625203.localdomain sudo[246199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:28:25 np0005625203.localdomain python3.9[246201]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 20 09:28:25 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f4838a4ef132546976a08c48bf55f89a91b54cc7f0728a84d5c77d24ba7a8992-merged.mount: Deactivated successfully.
Feb 20 09:28:26 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-426448257cd6d6837b598e532a79ac3a86475cfca86b72c882b04ab6e3f65424-merged.mount: Deactivated successfully.
Feb 20 09:28:26 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63591 DF PROTO=TCP SPT=48594 DPT=9100 SEQ=765971148 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0406800000000001030307) 
Feb 20 09:28:26 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-426448257cd6d6837b598e532a79ac3a86475cfca86b72c882b04ab6e3f65424-merged.mount: Deactivated successfully.
Feb 20 09:28:26 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:28:26 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:28:26 np0005625203.localdomain systemd[1]: Started libpod-conmon-408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.scope.
Feb 20 09:28:26 np0005625203.localdomain podman[246202]: 2026-02-20 09:28:26.362683546 +0000 UTC m=+0.561504715 container exec 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 20 09:28:26 np0005625203.localdomain podman[246202]: 2026-02-20 09:28:26.396254406 +0000 UTC m=+0.595075565 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 09:28:26 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:28:27 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-1d7b7d3208029afb8b179e48c365354efe7c39d41194e42a7d13168820ab51ad-merged.mount: Deactivated successfully.
Feb 20 09:28:27 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f4838a4ef132546976a08c48bf55f89a91b54cc7f0728a84d5c77d24ba7a8992-merged.mount: Deactivated successfully.
Feb 20 09:28:27 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f4838a4ef132546976a08c48bf55f89a91b54cc7f0728a84d5c77d24ba7a8992-merged.mount: Deactivated successfully.
Feb 20 09:28:27 np0005625203.localdomain sudo[246199]: pam_unix(sudo:session): session closed for user root
Feb 20 09:28:28 np0005625203.localdomain podman[246232]: 2026-02-20 09:28:28.024389181 +0000 UTC m=+1.344218972 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, config_id=openstack_network_exporter, name=ubi9/ubi-minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, container_name=openstack_network_exporter, release=1770267347, build-date=2026-02-05T04:57:10Z, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vcs-type=git, version=9.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, org.opencontainers.image.created=2026-02-05T04:57:10Z, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Feb 20 09:28:28 np0005625203.localdomain podman[246232]: 2026-02-20 09:28:28.038222769 +0000 UTC m=+1.358052620 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, vendor=Red Hat, Inc., config_id=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, org.opencontainers.image.created=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9/ubi-minimal, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, build-date=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, version=9.7, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7)
Feb 20 09:28:28 np0005625203.localdomain sudo[246359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dhrlynfpienxedjcpuknjiypalszdtnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579708.1334069-3434-46066353885810/AnsiballZ_podman_container_exec.py
Feb 20 09:28:28 np0005625203.localdomain sudo[246359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:28:28 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-1ad843ea4b31b05bcf49ccd6faa74bd0d6976ffabe60466fd78caf7ec41bf4ac-merged.mount: Deactivated successfully.
Feb 20 09:28:28 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-1d7b7d3208029afb8b179e48c365354efe7c39d41194e42a7d13168820ab51ad-merged.mount: Deactivated successfully.
Feb 20 09:28:28 np0005625203.localdomain python3.9[246361]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 20 09:28:28 np0005625203.localdomain systemd[1]: libpod-conmon-408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.scope: Deactivated successfully.
Feb 20 09:28:28 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:28:28 np0005625203.localdomain systemd[1]: Started libpod-conmon-408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.scope.
Feb 20 09:28:28 np0005625203.localdomain podman[246362]: 2026-02-20 09:28:28.86854554 +0000 UTC m=+0.200987416 container exec 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 20 09:28:28 np0005625203.localdomain sshd[246382]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:28:28 np0005625203.localdomain podman[246362]: 2026-02-20 09:28:28.901378597 +0000 UTC m=+0.233820463 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 20 09:28:29 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-57c9a356b8a6d9095c1e6bfd1bb5d3b87c9d1b944c2c5d8a1da6e61dd690c595-merged.mount: Deactivated successfully.
Feb 20 09:28:29 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-1ad843ea4b31b05bcf49ccd6faa74bd0d6976ffabe60466fd78caf7ec41bf4ac-merged.mount: Deactivated successfully.
Feb 20 09:28:29 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-1ad843ea4b31b05bcf49ccd6faa74bd0d6976ffabe60466fd78caf7ec41bf4ac-merged.mount: Deactivated successfully.
Feb 20 09:28:29 np0005625203.localdomain sudo[246359]: pam_unix(sudo:session): session closed for user root
Feb 20 09:28:29 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63647 DF PROTO=TCP SPT=39230 DPT=9102 SEQ=2171938978 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0412950000000001030307) 
Feb 20 09:28:29 np0005625203.localdomain sshd[246382]: Invalid user firebird from 5.253.59.68 port 41006
Feb 20 09:28:29 np0005625203.localdomain systemd[1]: libpod-conmon-408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.scope: Deactivated successfully.
Feb 20 09:28:29 np0005625203.localdomain sshd[246382]: Received disconnect from 5.253.59.68 port 41006:11: Bye Bye [preauth]
Feb 20 09:28:29 np0005625203.localdomain sshd[246382]: Disconnected from invalid user firebird 5.253.59.68 port 41006 [preauth]
Feb 20 09:28:29 np0005625203.localdomain sudo[246503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-llbaobuwgbxphgduvwqbuaavnbreoycr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579709.375897-3442-79678557376186/AnsiballZ_file.py
Feb 20 09:28:29 np0005625203.localdomain sudo[246503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:28:29 np0005625203.localdomain python3.9[246505]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:28:29 np0005625203.localdomain sudo[246503]: pam_unix(sudo:session): session closed for user root
Feb 20 09:28:30 np0005625203.localdomain sudo[246613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lkecucyqsomccvxtdgvxuswcmkmtmyrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579710.0480251-3451-30322032666002/AnsiballZ_podman_container_info.py
Feb 20 09:28:30 np0005625203.localdomain sudo[246613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:28:30 np0005625203.localdomain python3.9[246615]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Feb 20 09:28:31 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-426448257cd6d6837b598e532a79ac3a86475cfca86b72c882b04ab6e3f65424-merged.mount: Deactivated successfully.
Feb 20 09:28:32 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6267 DF PROTO=TCP SPT=32950 DPT=9882 SEQ=4251370420 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA041D800000000001030307) 
Feb 20 09:28:33 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully.
Feb 20 09:28:33 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully.
Feb 20 09:28:33 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully.
Feb 20 09:28:34 np0005625203.localdomain sudo[246613]: pam_unix(sudo:session): session closed for user root
Feb 20 09:28:34 np0005625203.localdomain sudo[246735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kalmmhmsgxabcatgpwjswdlbwssvhqbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579714.2018118-3459-24730642934202/AnsiballZ_podman_container_exec.py
Feb 20 09:28:34 np0005625203.localdomain sudo[246735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:28:34 np0005625203.localdomain python3.9[246737]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 20 09:28:34 np0005625203.localdomain systemd[1]: Started libpod-conmon-dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.scope.
Feb 20 09:28:34 np0005625203.localdomain podman[246738]: 2026-02-20 09:28:34.812728015 +0000 UTC m=+0.114195468 container exec dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, version=9.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, architecture=x86_64, release=1770267347, name=ubi9/ubi-minimal, managed_by=edpm_ansible, config_id=openstack_network_exporter, build-date=2026-02-05T04:57:10Z, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c)
Feb 20 09:28:34 np0005625203.localdomain podman[246738]: 2026-02-20 09:28:34.8167729 +0000 UTC m=+0.118240353 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, architecture=x86_64, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, managed_by=edpm_ansible, vendor=Red Hat, Inc., release=1770267347, container_name=openstack_network_exporter, io.openshift.expose-services=, name=ubi9/ubi-minimal, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest release of the minimal Red Hat Universal Base 
Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, distribution-scope=public, version=9.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, org.opencontainers.image.created=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c)
Feb 20 09:28:35 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:28:35 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully.
Feb 20 09:28:35 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully.
Feb 20 09:28:36 np0005625203.localdomain sudo[246735]: pam_unix(sudo:session): session closed for user root
Feb 20 09:28:36 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=6268 DF PROTO=TCP SPT=32950 DPT=9882 SEQ=4251370420 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA042D400000000001030307) 
Feb 20 09:28:36 np0005625203.localdomain sudo[246875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ywdkbrddyypxehxgvwtjagouvzjdekqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579716.2091684-3467-45501394432173/AnsiballZ_podman_container_exec.py
Feb 20 09:28:36 np0005625203.localdomain sudo[246875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:28:36 np0005625203.localdomain python3.9[246877]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Feb 20 09:28:36 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:28:36 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:28:37 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:28:37 np0005625203.localdomain systemd[1]: libpod-conmon-dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.scope: Deactivated successfully.
Feb 20 09:28:37 np0005625203.localdomain systemd[1]: Started libpod-conmon-dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.scope.
Feb 20 09:28:37 np0005625203.localdomain podman[246878]: 2026-02-20 09:28:37.161005698 +0000 UTC m=+0.446691478 container exec dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, maintainer=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-05T04:57:10Z, build-date=2026-02-05T04:57:10Z, release=1770267347, architecture=x86_64, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.expose-services=, vcs-type=git, version=9.7, vendor=Red Hat, Inc., name=ubi9/ubi-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, 
container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.buildah.version=1.33.7, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container)
Feb 20 09:28:37 np0005625203.localdomain podman[246878]: 2026-02-20 09:28:37.165431685 +0000 UTC m=+0.451117495 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.7, org.opencontainers.image.created=2026-02-05T04:57:10Z, distribution-scope=public, config_id=openstack_network_exporter, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2026-02-05T04:57:10Z, name=ubi9/ubi-minimal, vcs-type=git, io.buildah.version=1.33.7, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, release=1770267347)
Feb 20 09:28:37 np0005625203.localdomain sudo[246875]: pam_unix(sudo:session): session closed for user root
Feb 20 09:28:37 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:28:37 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:28:38 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:28:38 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25580 DF PROTO=TCP SPT=49716 DPT=9105 SEQ=143494321 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0435010000000001030307) 
Feb 20 09:28:38 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:28:38 np0005625203.localdomain systemd[1]: libpod-conmon-dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.scope: Deactivated successfully.
Feb 20 09:28:38 np0005625203.localdomain podman[246924]: 2026-02-20 09:28:38.206829835 +0000 UTC m=+0.268793488 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 09:28:38 np0005625203.localdomain podman[246924]: 2026-02-20 09:28:38.213955245 +0000 UTC m=+0.275918868 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 09:28:38 np0005625203.localdomain podman[246924]: unhealthy
Feb 20 09:28:38 np0005625203.localdomain sudo[247037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hbjkmuwwmrbeyqgulcwyuenjlfjqhygh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579717.9555054-3475-100530010515739/AnsiballZ_file.py
Feb 20 09:28:38 np0005625203.localdomain sudo[247037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:28:38 np0005625203.localdomain sshd[247040]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:28:38 np0005625203.localdomain python3.9[247039]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:28:38 np0005625203.localdomain sudo[247037]: pam_unix(sudo:session): session closed for user root
Feb 20 09:28:38 np0005625203.localdomain sshd[247040]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:28:40 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully.
Feb 20 09:28:40 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-126ec7093b4409981b35de203497fb4c932b8ea0a58e787934a1a228394ab4e1-merged.mount: Deactivated successfully.
Feb 20 09:28:40 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 09:28:40 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Failed with result 'exit-code'.
Feb 20 09:28:41 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=61261 DF PROTO=TCP SPT=41372 DPT=9105 SEQ=1541390361 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0440810000000001030307) 
Feb 20 09:28:42 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f4838a4ef132546976a08c48bf55f89a91b54cc7f0728a84d5c77d24ba7a8992-merged.mount: Deactivated successfully.
Feb 20 09:28:42 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-426448257cd6d6837b598e532a79ac3a86475cfca86b72c882b04ab6e3f65424-merged.mount: Deactivated successfully.
Feb 20 09:28:42 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-426448257cd6d6837b598e532a79ac3a86475cfca86b72c882b04ab6e3f65424-merged.mount: Deactivated successfully.
Feb 20 09:28:44 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-1d7b7d3208029afb8b179e48c365354efe7c39d41194e42a7d13168820ab51ad-merged.mount: Deactivated successfully.
Feb 20 09:28:44 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f4838a4ef132546976a08c48bf55f89a91b54cc7f0728a84d5c77d24ba7a8992-merged.mount: Deactivated successfully.
Feb 20 09:28:44 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25582 DF PROTO=TCP SPT=49716 DPT=9105 SEQ=143494321 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA044CC10000000001030307) 
Feb 20 09:28:44 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f4838a4ef132546976a08c48bf55f89a91b54cc7f0728a84d5c77d24ba7a8992-merged.mount: Deactivated successfully.
Feb 20 09:28:45 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-1ad843ea4b31b05bcf49ccd6faa74bd0d6976ffabe60466fd78caf7ec41bf4ac-merged.mount: Deactivated successfully.
Feb 20 09:28:45 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-1d7b7d3208029afb8b179e48c365354efe7c39d41194e42a7d13168820ab51ad-merged.mount: Deactivated successfully.
Feb 20 09:28:45 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-57c9a356b8a6d9095c1e6bfd1bb5d3b87c9d1b944c2c5d8a1da6e61dd690c595-merged.mount: Deactivated successfully.
Feb 20 09:28:45 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-1ad843ea4b31b05bcf49ccd6faa74bd0d6976ffabe60466fd78caf7ec41bf4ac-merged.mount: Deactivated successfully.
Feb 20 09:28:45 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-1ad843ea4b31b05bcf49ccd6faa74bd0d6976ffabe60466fd78caf7ec41bf4ac-merged.mount: Deactivated successfully.
Feb 20 09:28:47 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38646 DF PROTO=TCP SPT=55384 DPT=9101 SEQ=1886862595 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0457C00000000001030307) 
Feb 20 09:28:48 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-426448257cd6d6837b598e532a79ac3a86475cfca86b72c882b04ab6e3f65424-merged.mount: Deactivated successfully.
Feb 20 09:28:48 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:28:48 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-cd350f63fa5cb4de5fd35b027e0dd392fc094bab6ce558ae5afdf1311f190a48-merged.mount: Deactivated successfully.
Feb 20 09:28:48 np0005625203.localdomain systemd[1]: tmp-crun.BQcUkA.mount: Deactivated successfully.
Feb 20 09:28:48 np0005625203.localdomain podman[247059]: 2026-02-20 09:28:48.397274757 +0000 UTC m=+0.114893120 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 20 09:28:48 np0005625203.localdomain podman[247059]: 2026-02-20 09:28:48.412461127 +0000 UTC m=+0.130079500 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 20 09:28:48 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:28:49 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:28:49 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0336e79261e1f534d091cad94b9980aafc6b329c3b01bda2d50fcc505860ff11-merged.mount: Deactivated successfully.
Feb 20 09:28:49 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0336e79261e1f534d091cad94b9980aafc6b329c3b01bda2d50fcc505860ff11-merged.mount: Deactivated successfully.
Feb 20 09:28:50 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32953 DF PROTO=TCP SPT=49892 DPT=9100 SEQ=1891013852 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0464000000000001030307) 
Feb 20 09:28:50 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:28:50 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:28:51 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:28:51 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:28:51 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:28:51 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:28:53 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=38648 DF PROTO=TCP SPT=55384 DPT=9101 SEQ=1886862595 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA046F800000000001030307) 
Feb 20 09:28:53 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0336e79261e1f534d091cad94b9980aafc6b329c3b01bda2d50fcc505860ff11-merged.mount: Deactivated successfully.
Feb 20 09:28:53 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:28:53 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-3ed31fcdcaee63ea79499a12bac1a1c15d8713ced0f8f605d87edc1f9667054a-merged.mount: Deactivated successfully.
Feb 20 09:28:53 np0005625203.localdomain systemd[1]: tmp-crun.z3WsfF.mount: Deactivated successfully.
Feb 20 09:28:53 np0005625203.localdomain podman[247079]: 2026-02-20 09:28:53.265257493 +0000 UTC m=+0.074330853 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:28:53 np0005625203.localdomain podman[247079]: 2026-02-20 09:28:53.271633412 +0000 UTC m=+0.080706732 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Feb 20 09:28:53 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-3ed31fcdcaee63ea79499a12bac1a1c15d8713ced0f8f605d87edc1f9667054a-merged.mount: Deactivated successfully.
Feb 20 09:28:53 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:28:54 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:28:54 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:28:54 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:28:54 np0005625203.localdomain sshd[247097]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:28:55 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-e812fa34defdc78ec5fb2b77011468829f7a2881cc57f804b1a422dc9e19278a-merged.mount: Deactivated successfully.
Feb 20 09:28:55 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-358e1b67e7a5a713aac9c0a160bdddbfbcb9b13948cffdbbefd6a9946a4ee797-merged.mount: Deactivated successfully.
Feb 20 09:28:55 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-358e1b67e7a5a713aac9c0a160bdddbfbcb9b13948cffdbbefd6a9946a4ee797-merged.mount: Deactivated successfully.
Feb 20 09:28:56 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=32955 DF PROTO=TCP SPT=49892 DPT=9100 SEQ=1891013852 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA047BC00000000001030307) 
Feb 20 09:28:56 np0005625203.localdomain sshd[247097]: Invalid user ivan from 118.99.80.29 port 7870
Feb 20 09:28:56 np0005625203.localdomain sshd[247097]: Received disconnect from 118.99.80.29 port 7870:11: Bye Bye [preauth]
Feb 20 09:28:56 np0005625203.localdomain sshd[247097]: Disconnected from invalid user ivan 118.99.80.29 port 7870 [preauth]
Feb 20 09:28:56 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:28:56 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:28:56 np0005625203.localdomain systemd[1]: tmp-crun.eyRt1m.mount: Deactivated successfully.
Feb 20 09:28:56 np0005625203.localdomain podman[247100]: 2026-02-20 09:28:56.693140885 +0000 UTC m=+0.100468821 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:28:56 np0005625203.localdomain podman[247100]: 2026-02-20 09:28:56.727693704 +0000 UTC m=+0.135021630 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute)
Feb 20 09:28:56 np0005625203.localdomain podman[247101]: 2026-02-20 09:28:56.737743212 +0000 UTC m=+0.142259621 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 20 09:28:56 np0005625203.localdomain podman[247101]: 2026-02-20 09:28:56.777132399 +0000 UTC m=+0.181648798 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Feb 20 09:28:58 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully.
Feb 20 09:28:58 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully.
Feb 20 09:28:58 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully.
Feb 20 09:28:58 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:28:58 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:28:58 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7422 DF PROTO=TCP SPT=56142 DPT=9882 SEQ=1896416853 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA04868D0000000001030307) 
Feb 20 09:28:59 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:28:59 np0005625203.localdomain podman[247144]: 2026-02-20 09:28:59.260109554 +0000 UTC m=+0.078384274 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, name=ubi9/ubi-minimal, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, org.opencontainers.image.created=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, version=9.7, architecture=x86_64, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2026-02-05T04:57:10Z, io.openshift.expose-services=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, distribution-scope=public, io.buildah.version=1.33.7, vcs-type=git)
Feb 20 09:28:59 np0005625203.localdomain podman[247144]: 2026-02-20 09:28:59.300342288 +0000 UTC m=+0.118617038 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-type=git, io.buildah.version=1.33.7, org.opencontainers.image.created=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, name=ubi9/ubi-minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, version=9.7, build-date=2026-02-05T04:57:10Z, architecture=x86_64, maintainer=Red Hat, Inc., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c)
Feb 20 09:29:00 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:29:00 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully.
Feb 20 09:29:00 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully.
Feb 20 09:29:00 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:29:01 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:29:01 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:29:01 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:29:02 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7424 DF PROTO=TCP SPT=56142 DPT=9882 SEQ=1896416853 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0492810000000001030307) 
Feb 20 09:29:02 np0005625203.localdomain sudo[247164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:29:02 np0005625203.localdomain sudo[247164]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:29:02 np0005625203.localdomain sudo[247164]: pam_unix(sudo:session): session closed for user root
Feb 20 09:29:02 np0005625203.localdomain sudo[247182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 09:29:02 np0005625203.localdomain sudo[247182]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:29:02 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:29:02 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:29:02 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:29:03 np0005625203.localdomain sudo[247182]: pam_unix(sudo:session): session closed for user root
Feb 20 09:29:04 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully.
Feb 20 09:29:04 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-86b495d7449130304be6335c2cc66bfe5517118781c16cde6afb1fb27ddd4c49-merged.mount: Deactivated successfully.
Feb 20 09:29:05 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-86b495d7449130304be6335c2cc66bfe5517118781c16cde6afb1fb27ddd4c49-merged.mount: Deactivated successfully.
Feb 20 09:29:05 np0005625203.localdomain sudo[247233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:29:05 np0005625203.localdomain sudo[247233]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:29:05 np0005625203.localdomain sudo[247233]: pam_unix(sudo:session): session closed for user root
Feb 20 09:29:05 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-2cd9444c84550fbd551e3826a8110fcc009757858b99e84f1119041f2325189b-merged.mount: Deactivated successfully.
Feb 20 09:29:06 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7425 DF PROTO=TCP SPT=56142 DPT=9882 SEQ=1896416853 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA04A2400000000001030307) 
Feb 20 09:29:06 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-93889cfe8a1eaf916da420177b6c00eab0b6f1d6521b96229ee8963de2bbdb6f-merged.mount: Deactivated successfully.
Feb 20 09:29:06 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-67ff2fcf098d662f72898d504b273725d460bc3ee224388c566fda6c94421648-merged.mount: Deactivated successfully.
Feb 20 09:29:06 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-67ff2fcf098d662f72898d504b273725d460bc3ee224388c566fda6c94421648-merged.mount: Deactivated successfully.
Feb 20 09:29:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:29:07.641 161112 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:29:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:29:07.642 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:29:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:29:07.642 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:29:08 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48162 DF PROTO=TCP SPT=59804 DPT=9105 SEQ=1570453070 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA04AA010000000001030307) 
Feb 20 09:29:08 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:29:08 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-93889cfe8a1eaf916da420177b6c00eab0b6f1d6521b96229ee8963de2bbdb6f-merged.mount: Deactivated successfully.
Feb 20 09:29:08 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-93889cfe8a1eaf916da420177b6c00eab0b6f1d6521b96229ee8963de2bbdb6f-merged.mount: Deactivated successfully.
Feb 20 09:29:09 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:29:09 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:29:09 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:29:10 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:29:10 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:29:10 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:29:10 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:29:10 np0005625203.localdomain podman[247251]: 2026-02-20 09:29:10.774047503 +0000 UTC m=+0.089406862 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 20 09:29:10 np0005625203.localdomain podman[247251]: 2026-02-20 09:29:10.782434401 +0000 UTC m=+0.097793799 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 20 09:29:10 np0005625203.localdomain podman[247251]: unhealthy
Feb 20 09:29:10 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 09:29:10 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Failed with result 'exit-code'.
Feb 20 09:29:11 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56719 DF PROTO=TCP SPT=39162 DPT=9105 SEQ=2109458282 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA04B6810000000001030307) 
Feb 20 09:29:12 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-67ff2fcf098d662f72898d504b273725d460bc3ee224388c566fda6c94421648-merged.mount: Deactivated successfully.
Feb 20 09:29:12 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-6cebad060ab405d52d2b962638c8f57ff1b2ca4462868ac6b7d7f52e12ed3e0a-merged.mount: Deactivated successfully.
Feb 20 09:29:13 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-e0c81d46f937f1f84faf68fb71f862e4bd868921a7d16384d44790308d98719f-merged.mount: Deactivated successfully.
Feb 20 09:29:13 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-8194fb55613f783d5a49e05926ed565eee5321a07b6e20f485946b5c4f31cab4-merged.mount: Deactivated successfully.
Feb 20 09:29:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:29:13.199 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:29:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:29:13.199 228941 DEBUG nova.compute.manager [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 20 09:29:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:29:13.215 228941 DEBUG nova.compute.manager [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 20 09:29:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:29:13.216 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:29:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:29:13.216 228941 DEBUG nova.compute.manager [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 20 09:29:13 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:29:13.231 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:29:13 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-8194fb55613f783d5a49e05926ed565eee5321a07b6e20f485946b5c4f31cab4-merged.mount: Deactivated successfully.
Feb 20 09:29:13 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-e0c81d46f937f1f84faf68fb71f862e4bd868921a7d16384d44790308d98719f-merged.mount: Deactivated successfully.
Feb 20 09:29:13 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-e0c81d46f937f1f84faf68fb71f862e4bd868921a7d16384d44790308d98719f-merged.mount: Deactivated successfully.
Feb 20 09:29:14 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=48164 DF PROTO=TCP SPT=59804 DPT=9105 SEQ=1570453070 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA04C1C00000000001030307) 
Feb 20 09:29:14 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-8194fb55613f783d5a49e05926ed565eee5321a07b6e20f485946b5c4f31cab4-merged.mount: Deactivated successfully.
Feb 20 09:29:15 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:29:15.241 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:29:16 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:29:16.199 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:29:16 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:29:16.200 228941 DEBUG nova.compute.manager [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 20 09:29:16 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:29:16.200 228941 DEBUG nova.compute.manager [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 20 09:29:16 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:29:16.214 228941 DEBUG nova.compute.manager [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 20 09:29:16 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:29:16.214 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:29:16 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully.
Feb 20 09:29:16 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0e4bbbcc3a308b062cd809f5d981a575292a522b3c6697e4ac7d70789e33f207-merged.mount: Deactivated successfully.
Feb 20 09:29:17 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11202 DF PROTO=TCP SPT=47540 DPT=9101 SEQ=2335622291 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA04CD000000000001030307) 
Feb 20 09:29:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:29:17.180 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:29:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:29:17.180 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:29:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:29:17.180 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:29:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:29:17.180 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:29:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:29:17.181 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:29:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:29:17.181 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:29:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:29:17.181 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:29:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:29:17.181 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:29:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:29:17.181 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:29:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:29:17.181 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:29:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:29:17.181 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:29:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:29:17.181 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:29:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:29:17.182 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:29:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:29:17.182 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:29:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:29:17.182 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:29:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:29:17.182 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:29:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:29:17.182 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:29:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:29:17.182 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:29:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:29:17.182 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:29:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:29:17.183 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:29:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:29:17.183 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:29:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:29:17.183 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:29:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:29:17.183 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:29:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:29:17.183 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:29:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:29:17.183 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:29:17 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:29:17.198 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:29:17 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:29:17.199 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:29:18 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:29:18.200 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:29:18 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:29:18.200 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:29:18 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:29:18.201 228941 DEBUG nova.compute.manager [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 20 09:29:18 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:29:18.201 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:29:18 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:29:18.217 228941 DEBUG oslo_concurrency.lockutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:29:18 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:29:18.217 228941 DEBUG oslo_concurrency.lockutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:29:18 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:29:18.217 228941 DEBUG oslo_concurrency.lockutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:29:18 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:29:18.217 228941 DEBUG nova.compute.resource_tracker [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Auditing locally available compute resources for np0005625203.localdomain (node: np0005625203.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 20 09:29:18 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:29:18.218 228941 DEBUG oslo_concurrency.processutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:29:18 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:29:18 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully.
Feb 20 09:29:18 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:29:18 np0005625203.localdomain podman[247293]: 2026-02-20 09:29:18.587308375 +0000 UTC m=+0.076615379 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 20 09:29:18 np0005625203.localdomain podman[247293]: 2026-02-20 09:29:18.593793495 +0000 UTC m=+0.083100449 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 20 09:29:18 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:29:18.639 228941 DEBUG oslo_concurrency.processutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:29:18 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully.
Feb 20 09:29:18 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:29:18 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:29:18.838 228941 WARNING nova.virt.libvirt.driver [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 20 09:29:18 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:29:18.840 228941 DEBUG nova.compute.resource_tracker [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Hypervisor/Node resource view: name=np0005625203.localdomain free_ram=13165MB free_disk=41.83720779418945GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 20 09:29:18 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:29:18.840 228941 DEBUG oslo_concurrency.lockutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:29:18 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:29:18.841 228941 DEBUG oslo_concurrency.lockutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:29:18 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:29:18.933 228941 DEBUG nova.compute.resource_tracker [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 20 09:29:18 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:29:18.934 228941 DEBUG nova.compute.resource_tracker [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Final resource view: name=np0005625203.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 20 09:29:18 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:29:18.992 228941 DEBUG nova.scheduler.client.report [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Refreshing inventories for resource provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 20 09:29:19 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:29:19.058 228941 DEBUG nova.scheduler.client.report [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Updating ProviderTree inventory for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 20 09:29:19 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:29:19.058 228941 DEBUG nova.compute.provider_tree [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Updating inventory in ProviderTree for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 20 09:29:19 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:29:19.073 228941 DEBUG nova.scheduler.client.report [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Refreshing aggregate associations for resource provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 20 09:29:19 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:29:19.097 228941 DEBUG nova.scheduler.client.report [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Refreshing trait associations for resource provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e, traits: HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AESNI,HW_CPU_X86_SSE4A,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSSE3,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,HW_CPU_X86_SSE2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_F16C,HW_CPU_X86_AVX,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE42,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_BMI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SHA,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 20 09:29:19 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:29:19.112 228941 DEBUG oslo_concurrency.processutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:29:19 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:29:19.586 228941 DEBUG oslo_concurrency.processutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:29:19 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:29:19.594 228941 DEBUG nova.compute.provider_tree [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Inventory has not changed in ProviderTree for provider: e5d5157a-2df2-4f51-b5fb-cd2da3a8584e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 20 09:29:19 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:29:19.613 228941 DEBUG nova.scheduler.client.report [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Inventory has not changed for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 20 09:29:19 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:29:19.616 228941 DEBUG nova.compute.resource_tracker [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Compute_service record updated for np0005625203.localdomain:np0005625203.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 20 09:29:19 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:29:19.616 228941 DEBUG oslo_concurrency.lockutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:29:19 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:29:19 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:29:20 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56036 DF PROTO=TCP SPT=36710 DPT=9100 SEQ=3779743246 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA04D9400000000001030307) 
Feb 20 09:29:20 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:29:20.612 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:29:20 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:29:20 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:29:20 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:29:21 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:29:23 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=63594 DF PROTO=TCP SPT=48594 DPT=9100 SEQ=765971148 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA04E4800000000001030307) 
Feb 20 09:29:23 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0e4bbbcc3a308b062cd809f5d981a575292a522b3c6697e4ac7d70789e33f207-merged.mount: Deactivated successfully.
Feb 20 09:29:23 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-d5d417fdd363e485b6316b065395adc69531d816fbea7fef0eb296ef13ea44a2-merged.mount: Deactivated successfully.
Feb 20 09:29:23 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:29:23 np0005625203.localdomain systemd[1]: tmp-crun.tWDcPX.mount: Deactivated successfully.
Feb 20 09:29:23 np0005625203.localdomain podman[247341]: 2026-02-20 09:29:23.427962184 +0000 UTC m=+0.085189971 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:29:23 np0005625203.localdomain podman[247341]: 2026-02-20 09:29:23.432728061 +0000 UTC m=+0.089955858 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:29:23 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:29:24 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:29:24 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:29:24 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:29:25 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:29:25 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-e812fa34defdc78ec5fb2b77011468829f7a2881cc57f804b1a422dc9e19278a-merged.mount: Deactivated successfully.
Feb 20 09:29:26 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-de09a3fcf798e8d6f765a1320359ed0f97a6a1d1a2a8fd17434f89e173a7556b-merged.mount: Deactivated successfully.
Feb 20 09:29:26 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56038 DF PROTO=TCP SPT=36710 DPT=9100 SEQ=3779743246 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA04F1000000000001030307) 
Feb 20 09:29:27 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-c649efc911c887686c8351fe543502de582148a048396cbc7ad85b29ea075fe6-merged.mount: Deactivated successfully.
Feb 20 09:29:27 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-71578ef0b1e4f969b13e723033957534bc6c3b31bff47c5fdb42a55e43d4cef9-merged.mount: Deactivated successfully.
Feb 20 09:29:27 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-71578ef0b1e4f969b13e723033957534bc6c3b31bff47c5fdb42a55e43d4cef9-merged.mount: Deactivated successfully.
Feb 20 09:29:28 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-1d7b7d3208029afb8b179e48c365354efe7c39d41194e42a7d13168820ab51ad-merged.mount: Deactivated successfully.
Feb 20 09:29:28 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-c649efc911c887686c8351fe543502de582148a048396cbc7ad85b29ea075fe6-merged.mount: Deactivated successfully.
Feb 20 09:29:28 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-c649efc911c887686c8351fe543502de582148a048396cbc7ad85b29ea075fe6-merged.mount: Deactivated successfully.
Feb 20 09:29:28 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11340 DF PROTO=TCP SPT=36418 DPT=9882 SEQ=2681131460 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA04FBBD0000000001030307) 
Feb 20 09:29:29 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:29:29 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:29:29 np0005625203.localdomain systemd[1]: tmp-crun.QXrIBs.mount: Deactivated successfully.
Feb 20 09:29:29 np0005625203.localdomain podman[247361]: 2026-02-20 09:29:29.333670408 +0000 UTC m=+0.147485611 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 20 09:29:29 np0005625203.localdomain podman[247361]: 2026-02-20 09:29:29.39635914 +0000 UTC m=+0.210174273 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:29:29 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-1ad843ea4b31b05bcf49ccd6faa74bd0d6976ffabe60466fd78caf7ec41bf4ac-merged.mount: Deactivated successfully.
Feb 20 09:29:29 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:29:30 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-1d7b7d3208029afb8b179e48c365354efe7c39d41194e42a7d13168820ab51ad-merged.mount: Deactivated successfully.
Feb 20 09:29:30 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-57c9a356b8a6d9095c1e6bfd1bb5d3b87c9d1b944c2c5d8a1da6e61dd690c595-merged.mount: Deactivated successfully.
Feb 20 09:29:30 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-57c9a356b8a6d9095c1e6bfd1bb5d3b87c9d1b944c2c5d8a1da6e61dd690c595-merged.mount: Deactivated successfully.
Feb 20 09:29:30 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:29:30 np0005625203.localdomain podman[247394]: 2026-02-20 09:29:30.757730747 +0000 UTC m=+0.076696962 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, version=9.7, config_id=openstack_network_exporter, build-date=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9/ubi-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, managed_by=edpm_ansible, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, io.buildah.version=1.33.7, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.expose-services=, release=1770267347, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 09:29:30 np0005625203.localdomain podman[247394]: 2026-02-20 09:29:30.775358587 +0000 UTC m=+0.094324872 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, distribution-scope=public, architecture=x86_64, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, build-date=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, version=9.7, config_id=openstack_network_exporter, vcs-type=git, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., io.openshift.expose-services=, name=ubi9/ubi-minimal, managed_by=edpm_ansible, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Feb 20 09:29:31 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-71578ef0b1e4f969b13e723033957534bc6c3b31bff47c5fdb42a55e43d4cef9-merged.mount: Deactivated successfully.
Feb 20 09:29:31 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:29:31 np0005625203.localdomain podman[247360]: 2026-02-20 09:29:31.608444431 +0000 UTC m=+2.424979087 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, 
config_id=ceilometer_agent_compute, io.buildah.version=1.41.3)
Feb 20 09:29:31 np0005625203.localdomain podman[247360]: 2026-02-20 09:29:31.620242824 +0000 UTC m=+2.436777470 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, 
container_name=ceilometer_agent_compute)
Feb 20 09:29:32 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11342 DF PROTO=TCP SPT=36418 DPT=9882 SEQ=2681131460 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0507C10000000001030307) 
Feb 20 09:29:33 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully.
Feb 20 09:29:33 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully.
Feb 20 09:29:34 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully.
Feb 20 09:29:34 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:29:35 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:29:35 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully.
Feb 20 09:29:35 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully.
Feb 20 09:29:36 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11343 DF PROTO=TCP SPT=36418 DPT=9882 SEQ=2681131460 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0517800000000001030307) 
Feb 20 09:29:37 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:29:37 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:29:37 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:29:38 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:29:38 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25879 DF PROTO=TCP SPT=38274 DPT=9105 SEQ=65668331 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA051F400000000001030307) 
Feb 20 09:29:38 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:29:38 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:29:38 np0005625203.localdomain sudo[247511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aneojzaxckjwunbzwkhkvxcumejvysvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579778.4861002-3688-5993777798890/AnsiballZ_file.py
Feb 20 09:29:38 np0005625203.localdomain sudo[247511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:29:38 np0005625203.localdomain python3.9[247513]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:29:38 np0005625203.localdomain sudo[247511]: pam_unix(sudo:session): session closed for user root
Feb 20 09:29:39 np0005625203.localdomain sudo[247621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-myookpncgclhmmobplluzjbyaxemfhwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579779.3097713-3715-65212183699080/AnsiballZ_stat.py
Feb 20 09:29:39 np0005625203.localdomain sudo[247621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:29:39 np0005625203.localdomain sshd[247624]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:29:39 np0005625203.localdomain python3.9[247623]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/telemetry.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:29:39 np0005625203.localdomain sudo[247621]: pam_unix(sudo:session): session closed for user root
Feb 20 09:29:40 np0005625203.localdomain sudo[247711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nhgzphflzmqnyrtqpnaivwwwclusqxrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579779.3097713-3715-65212183699080/AnsiballZ_copy.py
Feb 20 09:29:40 np0005625203.localdomain sudo[247711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:29:40 np0005625203.localdomain python3.9[247713]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/telemetry.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1771579779.3097713-3715-65212183699080/.source.yaml _original_basename=firewall.yaml follow=False checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:29:40 np0005625203.localdomain sudo[247711]: pam_unix(sudo:session): session closed for user root
Feb 20 09:29:40 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully.
Feb 20 09:29:40 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-1a3dde96d66e26fbb5d914ff807f950d08e80e1daf4f655a6bd9c579da655dfe-merged.mount: Deactivated successfully.
Feb 20 09:29:40 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-1a3dde96d66e26fbb5d914ff807f950d08e80e1daf4f655a6bd9c579da655dfe-merged.mount: Deactivated successfully.
Feb 20 09:29:40 np0005625203.localdomain sshd[247624]: Invalid user adminuser from 152.32.129.236 port 58486
Feb 20 09:29:40 np0005625203.localdomain sudo[247821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ftaouzetbrjyeabxszgoduzmkkmnilpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579780.7492385-3763-182081884698287/AnsiballZ_file.py
Feb 20 09:29:41 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:29:41 np0005625203.localdomain sudo[247821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:29:41 np0005625203.localdomain sshd[247624]: Received disconnect from 152.32.129.236 port 58486:11: Bye Bye [preauth]
Feb 20 09:29:41 np0005625203.localdomain sshd[247624]: Disconnected from invalid user adminuser 152.32.129.236 port 58486 [preauth]
Feb 20 09:29:41 np0005625203.localdomain podman[247823]: 2026-02-20 09:29:41.10866022 +0000 UTC m=+0.087085690 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 20 09:29:41 np0005625203.localdomain podman[247823]: 2026-02-20 09:29:41.122098381 +0000 UTC m=+0.100523891 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 20 09:29:41 np0005625203.localdomain podman[247823]: unhealthy
Feb 20 09:29:41 np0005625203.localdomain python3.9[247824]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:29:41 np0005625203.localdomain sudo[247821]: pam_unix(sudo:session): session closed for user root
Feb 20 09:29:41 np0005625203.localdomain sudo[247953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fxfmnhkizwhxviznpjrppracobqodljv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579781.4624794-3787-77774784750603/AnsiballZ_stat.py
Feb 20 09:29:41 np0005625203.localdomain sudo[247953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:29:41 np0005625203.localdomain python3.9[247955]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:29:41 np0005625203.localdomain sudo[247953]: pam_unix(sudo:session): session closed for user root
Feb 20 09:29:42 np0005625203.localdomain sudo[248010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uonwcmgyxbdyocuwauuyutqwzysoeduv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579781.4624794-3787-77774784750603/AnsiballZ_file.py
Feb 20 09:29:42 np0005625203.localdomain sudo[248010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:29:42 np0005625203.localdomain python3.9[248012]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:29:42 np0005625203.localdomain sudo[248010]: pam_unix(sudo:session): session closed for user root
Feb 20 09:29:42 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-eae537b18cb4af6ef1d611e84802ac12d948a1ed622870af6f76704805834c9a-merged.mount: Deactivated successfully.
Feb 20 09:29:42 np0005625203.localdomain sudo[248120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-chdrcosvhgrfgmulcbyqicpghnvtgtjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579782.6041474-3823-109797046268922/AnsiballZ_stat.py
Feb 20 09:29:42 np0005625203.localdomain sudo[248120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:29:42 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-40d13af751dd0e47fc8bb889a91a6d655bc2617bd5ab127ac97d8b2c392f6c58-merged.mount: Deactivated successfully.
Feb 20 09:29:43 np0005625203.localdomain python3.9[248122]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:29:43 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-40d13af751dd0e47fc8bb889a91a6d655bc2617bd5ab127ac97d8b2c392f6c58-merged.mount: Deactivated successfully.
Feb 20 09:29:43 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 09:29:43 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Failed with result 'exit-code'.
Feb 20 09:29:43 np0005625203.localdomain sudo[248120]: pam_unix(sudo:session): session closed for user root
Feb 20 09:29:43 np0005625203.localdomain sudo[248177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-orifsssisikwjqflhucnkrhffncjxgis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579782.6041474-3823-109797046268922/AnsiballZ_file.py
Feb 20 09:29:43 np0005625203.localdomain sudo[248177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:29:43 np0005625203.localdomain python3.9[248179]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.7faq1zik recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:29:43 np0005625203.localdomain sudo[248177]: pam_unix(sudo:session): session closed for user root
Feb 20 09:29:44 np0005625203.localdomain sudo[248287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ownqtvgjwcvrbdiimahaehbtmgehstmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579783.8147247-3859-207166985700027/AnsiballZ_stat.py
Feb 20 09:29:44 np0005625203.localdomain sudo[248287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:29:44 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=25881 DF PROTO=TCP SPT=38274 DPT=9105 SEQ=65668331 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0537000000000001030307) 
Feb 20 09:29:44 np0005625203.localdomain python3.9[248289]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:29:44 np0005625203.localdomain sudo[248287]: pam_unix(sudo:session): session closed for user root
Feb 20 09:29:44 np0005625203.localdomain sudo[248344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dpqbwjyufkxtrtfycmlhtxlpylkdvdly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579783.8147247-3859-207166985700027/AnsiballZ_file.py
Feb 20 09:29:44 np0005625203.localdomain sudo[248344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:29:44 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=11344 DF PROTO=TCP SPT=36418 DPT=9882 SEQ=2681131460 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0538800000000001030307) 
Feb 20 09:29:44 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:29:44 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-eae537b18cb4af6ef1d611e84802ac12d948a1ed622870af6f76704805834c9a-merged.mount: Deactivated successfully.
Feb 20 09:29:44 np0005625203.localdomain python3.9[248346]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:29:44 np0005625203.localdomain sudo[248344]: pam_unix(sudo:session): session closed for user root
Feb 20 09:29:44 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-eae537b18cb4af6ef1d611e84802ac12d948a1ed622870af6f76704805834c9a-merged.mount: Deactivated successfully.
Feb 20 09:29:45 np0005625203.localdomain sudo[248454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hlzsckdigdqudrrugzemxixgnhwljoop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579785.0079136-3898-252047149380204/AnsiballZ_command.py
Feb 20 09:29:45 np0005625203.localdomain sudo[248454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:29:45 np0005625203.localdomain python3.9[248456]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:29:45 np0005625203.localdomain sudo[248454]: pam_unix(sudo:session): session closed for user root
Feb 20 09:29:45 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:29:45 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:29:45 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:29:46 np0005625203.localdomain sudo[248565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tmfgmostpnaimvgtyzptobkzwbreyiuv ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771579785.7141397-3922-136046451170200/AnsiballZ_edpm_nftables_from_files.py
Feb 20 09:29:46 np0005625203.localdomain sudo[248565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:29:46 np0005625203.localdomain python3[248567]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Feb 20 09:29:46 np0005625203.localdomain sudo[248565]: pam_unix(sudo:session): session closed for user root
Feb 20 09:29:46 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:29:46 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:29:47 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52562 DF PROTO=TCP SPT=40266 DPT=9101 SEQ=1470578933 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0542400000000001030307) 
Feb 20 09:29:47 np0005625203.localdomain sudo[248675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uidcwswzbkerftqwjottalmkwnnhlhls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579786.5517619-3947-5961742975727/AnsiballZ_stat.py
Feb 20 09:29:47 np0005625203.localdomain sudo[248675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:29:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 20 09:29:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 6600.1 total, 600.0 interval
                                                          Cumulative writes: 4844 writes, 21K keys, 4844 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
                                                          Cumulative WAL: 4844 writes, 660 syncs, 7.34 writes per sync, written: 0.02 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 20 09:29:47 np0005625203.localdomain python3.9[248677]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:29:47 np0005625203.localdomain sudo[248675]: pam_unix(sudo:session): session closed for user root
Feb 20 09:29:47 np0005625203.localdomain sudo[248732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ufghplrnkhsrvksyibqzwklabfcpmhlb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579786.5517619-3947-5961742975727/AnsiballZ_file.py
Feb 20 09:29:47 np0005625203.localdomain sudo[248732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:29:48 np0005625203.localdomain python3.9[248734]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:29:48 np0005625203.localdomain sudo[248732]: pam_unix(sudo:session): session closed for user root
Feb 20 09:29:48 np0005625203.localdomain sudo[248842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gyojogjrpnhkoeqrkoxeyyvctcfunget ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579788.4110334-3982-100514194176582/AnsiballZ_stat.py
Feb 20 09:29:48 np0005625203.localdomain sudo[248842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:29:48 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-40d13af751dd0e47fc8bb889a91a6d655bc2617bd5ab127ac97d8b2c392f6c58-merged.mount: Deactivated successfully.
Feb 20 09:29:48 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:29:48 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-20741c15c03b3a0cca3141fc20b59df52b1003984d593ca69a540b93db33da69-merged.mount: Deactivated successfully.
Feb 20 09:29:48 np0005625203.localdomain python3.9[248844]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:29:48 np0005625203.localdomain podman[248845]: 2026-02-20 09:29:48.927402959 +0000 UTC m=+0.064399495 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 20 09:29:48 np0005625203.localdomain podman[248845]: 2026-02-20 09:29:48.93625026 +0000 UTC m=+0.073246806 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 20 09:29:48 np0005625203.localdomain sudo[248842]: pam_unix(sudo:session): session closed for user root
Feb 20 09:29:49 np0005625203.localdomain rsyslogd[758]: imjournal: 2559 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Feb 20 09:29:49 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:29:49 np0005625203.localdomain sudo[248922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lvbcvcykwqsrhngtsxazkuibqckbyoxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579788.4110334-3982-100514194176582/AnsiballZ_file.py
Feb 20 09:29:49 np0005625203.localdomain sudo[248922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:29:49 np0005625203.localdomain python3.9[248924]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:29:49 np0005625203.localdomain sudo[248922]: pam_unix(sudo:session): session closed for user root
Feb 20 09:29:49 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-20741c15c03b3a0cca3141fc20b59df52b1003984d593ca69a540b93db33da69-merged.mount: Deactivated successfully.
Feb 20 09:29:50 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16453 DF PROTO=TCP SPT=43502 DPT=9100 SEQ=865296257 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA054E400000000001030307) 
Feb 20 09:29:50 np0005625203.localdomain sudo[249032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kupoylirssliriafdvyjzctkpcwgvsdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579789.960348-4018-185747531129207/AnsiballZ_stat.py
Feb 20 09:29:50 np0005625203.localdomain sudo[249032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:29:50 np0005625203.localdomain python3.9[249034]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:29:50 np0005625203.localdomain sudo[249032]: pam_unix(sudo:session): session closed for user root
Feb 20 09:29:50 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-93889cfe8a1eaf916da420177b6c00eab0b6f1d6521b96229ee8963de2bbdb6f-merged.mount: Deactivated successfully.
Feb 20 09:29:50 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-67ff2fcf098d662f72898d504b273725d460bc3ee224388c566fda6c94421648-merged.mount: Deactivated successfully.
Feb 20 09:29:50 np0005625203.localdomain sudo[249089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ltwmyizdftrabwcfeotozvfhvoftzgme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579789.960348-4018-185747531129207/AnsiballZ_file.py
Feb 20 09:29:50 np0005625203.localdomain sudo[249089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:29:50 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-67ff2fcf098d662f72898d504b273725d460bc3ee224388c566fda6c94421648-merged.mount: Deactivated successfully.
Feb 20 09:29:50 np0005625203.localdomain python3.9[249091]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:29:50 np0005625203.localdomain sudo[249089]: pam_unix(sudo:session): session closed for user root
Feb 20 09:29:51 np0005625203.localdomain sudo[249199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-huuygqxrmdgqvjlctkzyfxpervjjksoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579791.162335-4054-236189466862286/AnsiballZ_stat.py
Feb 20 09:29:51 np0005625203.localdomain sudo[249199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:29:51 np0005625203.localdomain python3.9[249201]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:29:51 np0005625203.localdomain sudo[249199]: pam_unix(sudo:session): session closed for user root
Feb 20 09:29:51 np0005625203.localdomain sudo[249256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nsmfjmysohjdxrhjmvmmsrekhkmqrdbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579791.162335-4054-236189466862286/AnsiballZ_file.py
Feb 20 09:29:51 np0005625203.localdomain sudo[249256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:29:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 20 09:29:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 6600.1 total, 600.0 interval
                                                          Cumulative writes: 5843 writes, 25K keys, 5843 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
                                                          Cumulative WAL: 5843 writes, 764 syncs, 7.65 writes per sync, written: 0.02 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 20 09:29:52 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:29:52 np0005625203.localdomain python3.9[249258]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:29:52 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-93889cfe8a1eaf916da420177b6c00eab0b6f1d6521b96229ee8963de2bbdb6f-merged.mount: Deactivated successfully.
Feb 20 09:29:52 np0005625203.localdomain sudo[249256]: pam_unix(sudo:session): session closed for user root
Feb 20 09:29:52 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-93889cfe8a1eaf916da420177b6c00eab0b6f1d6521b96229ee8963de2bbdb6f-merged.mount: Deactivated successfully.
Feb 20 09:29:52 np0005625203.localdomain sudo[249366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ofryrbjunvtcsefkyhcohwfdgspwmgni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579792.422614-4090-235066797463217/AnsiballZ_stat.py
Feb 20 09:29:52 np0005625203.localdomain sudo[249366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:29:52 np0005625203.localdomain python3.9[249368]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:29:52 np0005625203.localdomain sudo[249366]: pam_unix(sudo:session): session closed for user root
Feb 20 09:29:53 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52564 DF PROTO=TCP SPT=40266 DPT=9101 SEQ=1470578933 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA055A000000000001030307) 
Feb 20 09:29:53 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:29:53 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:29:53 np0005625203.localdomain sudo[249456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-myrwripklmavndxatssvofrvrljndytr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579792.422614-4090-235066797463217/AnsiballZ_copy.py
Feb 20 09:29:53 np0005625203.localdomain sudo[249456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:29:53 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:29:53 np0005625203.localdomain python3.9[249458]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1771579792.422614-4090-235066797463217/.source.nft follow=False _original_basename=ruleset.j2 checksum=953266ca5f7d82d2777a0a437bd7feceb9259ee8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:29:53 np0005625203.localdomain sudo[249456]: pam_unix(sudo:session): session closed for user root
Feb 20 09:29:53 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:29:53 np0005625203.localdomain podman[249476]: 2026-02-20 09:29:53.779377305 +0000 UTC m=+0.094992542 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:29:53 np0005625203.localdomain podman[249476]: 2026-02-20 09:29:53.786266316 +0000 UTC m=+0.101881563 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2)
Feb 20 09:29:54 np0005625203.localdomain sudo[249584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dmwobktzpqnojtbhofgyctgowejmkxer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579793.7952766-4135-66468559099841/AnsiballZ_file.py
Feb 20 09:29:54 np0005625203.localdomain sudo[249584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:29:54 np0005625203.localdomain sshd[249587]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:29:54 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:29:54 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:29:54 np0005625203.localdomain python3.9[249586]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:29:54 np0005625203.localdomain sudo[249584]: pam_unix(sudo:session): session closed for user root
Feb 20 09:29:54 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:29:54 np0005625203.localdomain sudo[249696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pbdknurluendenapydzovwduqdrnugia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579794.5056417-4159-76432702297968/AnsiballZ_command.py
Feb 20 09:29:54 np0005625203.localdomain sudo[249696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:29:55 np0005625203.localdomain python3.9[249698]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:29:55 np0005625203.localdomain sshd[249587]: Invalid user oracle from 194.107.115.2 port 55414
Feb 20 09:29:55 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:29:55 np0005625203.localdomain sudo[249696]: pam_unix(sudo:session): session closed for user root
Feb 20 09:29:55 np0005625203.localdomain sshd[249587]: Received disconnect from 194.107.115.2 port 55414:11: Bye Bye [preauth]
Feb 20 09:29:55 np0005625203.localdomain sshd[249587]: Disconnected from invalid user oracle 194.107.115.2 port 55414 [preauth]
Feb 20 09:29:55 np0005625203.localdomain sudo[249809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ayefrudmrbnzebdmbzdlpwwgzuvlabjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579795.248128-4183-102055659101507/AnsiballZ_blockinfile.py
Feb 20 09:29:55 np0005625203.localdomain sudo[249809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:29:55 np0005625203.localdomain python3.9[249811]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                                            include "/etc/nftables/edpm-chains.nft"
                                                            include "/etc/nftables/edpm-rules.nft"
                                                            include "/etc/nftables/edpm-jumps.nft"
                                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:29:55 np0005625203.localdomain sudo[249809]: pam_unix(sudo:session): session closed for user root
Feb 20 09:29:55 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-67ff2fcf098d662f72898d504b273725d460bc3ee224388c566fda6c94421648-merged.mount: Deactivated successfully.
Feb 20 09:29:56 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-882ca1ee31e7e66703911e849aa154cc24abce007ac0cf03d820cf958d55c0d1-merged.mount: Deactivated successfully.
Feb 20 09:29:56 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16455 DF PROTO=TCP SPT=43502 DPT=9100 SEQ=865296257 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0566000000000001030307) 
Feb 20 09:29:56 np0005625203.localdomain sudo[249919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gjpirygyejjtdqdaqmkcnoyzwvsvqdru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579796.2590177-4210-140093263232367/AnsiballZ_command.py
Feb 20 09:29:56 np0005625203.localdomain sudo[249919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:29:56 np0005625203.localdomain python3.9[249921]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:29:56 np0005625203.localdomain sudo[249919]: pam_unix(sudo:session): session closed for user root
Feb 20 09:29:57 np0005625203.localdomain sudo[250030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dvishpvkbhvtizmhezywpzwkwayljmja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579796.9899187-4234-184975766769516/AnsiballZ_stat.py
Feb 20 09:29:57 np0005625203.localdomain sudo[250030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:29:57 np0005625203.localdomain python3.9[250032]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:29:57 np0005625203.localdomain sudo[250030]: pam_unix(sudo:session): session closed for user root
Feb 20 09:29:58 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully.
Feb 20 09:29:58 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully.
Feb 20 09:29:58 np0005625203.localdomain sudo[250142]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qyzbecvgxcjuzcbcseauctdlqbxyuvoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579798.2370424-4259-180921208612902/AnsiballZ_command.py
Feb 20 09:29:58 np0005625203.localdomain sudo[250142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:29:58 np0005625203.localdomain python3.9[250144]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:29:58 np0005625203.localdomain sudo[250142]: pam_unix(sudo:session): session closed for user root
Feb 20 09:29:58 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully.
Feb 20 09:29:59 np0005625203.localdomain sudo[250255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ntrrhmyxbhgllzdifuhctcerwmgnqgow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579798.9845605-4282-275183182624731/AnsiballZ_file.py
Feb 20 09:29:59 np0005625203.localdomain sudo[250255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:29:59 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60597 DF PROTO=TCP SPT=36936 DPT=9102 SEQ=403042201 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0572250000000001030307) 
Feb 20 09:29:59 np0005625203.localdomain python3.9[250257]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:29:59 np0005625203.localdomain sudo[250255]: pam_unix(sudo:session): session closed for user root
Feb 20 09:29:59 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:29:59 np0005625203.localdomain systemd[1]: tmp-crun.qPgpiM.mount: Deactivated successfully.
Feb 20 09:29:59 np0005625203.localdomain podman[250275]: 2026-02-20 09:29:59.773803875 +0000 UTC m=+0.085111041 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 20 09:29:59 np0005625203.localdomain podman[250275]: 2026-02-20 09:29:59.842470946 +0000 UTC m=+0.153778092 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Feb 20 09:29:59 np0005625203.localdomain sshd[229272]: pam_unix(sshd:session): session closed for user zuul
Feb 20 09:29:59 np0005625203.localdomain systemd[1]: session-55.scope: Deactivated successfully.
Feb 20 09:29:59 np0005625203.localdomain systemd[1]: session-55.scope: Consumed 1min 33.934s CPU time.
Feb 20 09:29:59 np0005625203.localdomain systemd-logind[759]: Session 55 logged out. Waiting for processes to exit.
Feb 20 09:29:59 np0005625203.localdomain systemd-logind[759]: Removed session 55.
Feb 20 09:30:00 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:30:00 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully.
Feb 20 09:30:00 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-bbf98921711ec0c598fda2e2ca2c55c79674f35f32436d92adf3bb7290153e1a-merged.mount: Deactivated successfully.
Feb 20 09:30:00 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:30:01 np0005625203.localdomain sshd[250301]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:30:01 np0005625203.localdomain sshd[250301]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:30:01 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:30:01 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:30:01 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:30:01 np0005625203.localdomain podman[250303]: 2026-02-20 09:30:01.790132002 +0000 UTC m=+0.097671892 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, version=9.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, container_name=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, managed_by=edpm_ansible, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, build-date=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9/ubi-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1770267347, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, distribution-scope=public, io.buildah.version=1.33.7)
Feb 20 09:30:01 np0005625203.localdomain podman[250303]: 2026-02-20 09:30:01.834263071 +0000 UTC m=+0.141802981 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.7, build-date=2026-02-05T04:57:10Z, org.opencontainers.image.created=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, config_id=openstack_network_exporter, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., release=1770267347, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9/ubi-minimal)
Feb 20 09:30:02 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60599 DF PROTO=TCP SPT=36936 DPT=9102 SEQ=403042201 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA057E410000000001030307) 
Feb 20 09:30:02 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:30:02 np0005625203.localdomain systemd[1]: tmp-crun.2sxWHC.mount: Deactivated successfully.
Feb 20 09:30:02 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:30:02 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:30:02 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:30:04 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:30:04 np0005625203.localdomain podman[250323]: 2026-02-20 09:30:04.752252281 +0000 UTC m=+0.075412440 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127)
Feb 20 09:30:04 np0005625203.localdomain podman[250323]: 2026-02-20 09:30:04.762782437 +0000 UTC m=+0.085942606 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:30:05 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52bb44324f3eb9002a3bf4ee7b8544bc72e25676c81bb6c59a692125c71221e1-merged.mount: Deactivated successfully.
Feb 20 09:30:05 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-4cd727fde2209c79d9821216422c7a3f4abac9c93918abc294ef4cb9196199ef-merged.mount: Deactivated successfully.
Feb 20 09:30:05 np0005625203.localdomain sudo[250341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:30:05 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:30:05 np0005625203.localdomain sudo[250341]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:30:05 np0005625203.localdomain sudo[250341]: pam_unix(sudo:session): session closed for user root
Feb 20 09:30:05 np0005625203.localdomain sudo[250359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 09:30:05 np0005625203.localdomain sudo[250359]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:30:06 np0005625203.localdomain sshd[250377]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:30:06 np0005625203.localdomain sshd[250377]: Accepted publickey for zuul from 192.168.122.30 port 32990 ssh2: RSA SHA256:uaxCfdsB9dLyL4znjsLCPDTT6O+6xZzjHRjczJFOA6k
Feb 20 09:30:06 np0005625203.localdomain systemd-logind[759]: New session 56 of user zuul.
Feb 20 09:30:06 np0005625203.localdomain systemd[1]: Started Session 56 of User zuul.
Feb 20 09:30:06 np0005625203.localdomain sshd[250377]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Feb 20 09:30:06 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60600 DF PROTO=TCP SPT=36936 DPT=9102 SEQ=403042201 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA058E000000000001030307) 
Feb 20 09:30:06 np0005625203.localdomain sudo[250499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-objcvckyffafbopqqljuelnzzlodnyxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579806.3983011-22-157734469380433/AnsiballZ_file.py
Feb 20 09:30:06 np0005625203.localdomain sudo[250499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:30:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:30:07 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 09:30:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:30:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:30:07 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 09:30:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:30:07 np0005625203.localdomain python3.9[250501]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config/container-startup-config/neutron-sriov-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:30:07 np0005625203.localdomain sudo[250499]: pam_unix(sudo:session): session closed for user root
Feb 20 09:30:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:30:07.642 161112 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:30:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:30:07.642 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:30:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:30:07.642 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:30:07 np0005625203.localdomain sudo[250614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jvyihxauebefiqltmdxyfwhlbhmlgabu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579807.3948853-22-13138431140634/AnsiballZ_file.py
Feb 20 09:30:07 np0005625203.localdomain sudo[250614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:30:07 np0005625203.localdomain python3.9[250616]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:30:07 np0005625203.localdomain sudo[250614]: pam_unix(sudo:session): session closed for user root
Feb 20 09:30:07 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-eae537b18cb4af6ef1d611e84802ac12d948a1ed622870af6f76704805834c9a-merged.mount: Deactivated successfully.
Feb 20 09:30:08 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-40d13af751dd0e47fc8bb889a91a6d655bc2617bd5ab127ac97d8b2c392f6c58-merged.mount: Deactivated successfully.
Feb 20 09:30:08 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-40d13af751dd0e47fc8bb889a91a6d655bc2617bd5ab127ac97d8b2c392f6c58-merged.mount: Deactivated successfully.
Feb 20 09:30:08 np0005625203.localdomain sudo[250727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wumwznvahgeuwiklfcqcmoadxxqghmqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579807.9848154-22-13392560180115/AnsiballZ_file.py
Feb 20 09:30:08 np0005625203.localdomain sudo[250727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:30:08 np0005625203.localdomain sudo[250359]: pam_unix(sudo:session): session closed for user root
Feb 20 09:30:08 np0005625203.localdomain python3.9[250732]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/neutron-sriov-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:30:08 np0005625203.localdomain sudo[250727]: pam_unix(sudo:session): session closed for user root
Feb 20 09:30:09 np0005625203.localdomain sudo[250803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:30:09 np0005625203.localdomain sudo[250803]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:30:09 np0005625203.localdomain sudo[250803]: pam_unix(sudo:session): session closed for user root
Feb 20 09:30:09 np0005625203.localdomain python3.9[250872]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-sriov-agent/01-neutron.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:30:09 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:30:09 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-eae537b18cb4af6ef1d611e84802ac12d948a1ed622870af6f76704805834c9a-merged.mount: Deactivated successfully.
Feb 20 09:30:10 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-eae537b18cb4af6ef1d611e84802ac12d948a1ed622870af6f76704805834c9a-merged.mount: Deactivated successfully.
Feb 20 09:30:10 np0005625203.localdomain python3.9[250958]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-sriov-agent/01-neutron.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771579809.0890462-100-153630807198369/.source.conf follow=False _original_basename=neutron.conf.j2 checksum=24e013b64eb8be4a13596c6ffccbd94df7442bd2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:30:11 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:30:11 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:30:11 np0005625203.localdomain python3.9[251066]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-sriov-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:30:11 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-52c18398a3f1352893ce0f0dc9f4c3a3bdf5492a6bf738875b375a7d97e85441-merged.mount: Deactivated successfully.
Feb 20 09:30:11 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:30:11 np0005625203.localdomain python3.9[251152]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-sriov-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771579810.393437-100-24963530748099/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:30:12 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f998c699a79bb0ab8f605537409d8dfabf90b90001094b51abb2cd93ea9feefe-merged.mount: Deactivated successfully.
Feb 20 09:30:12 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:30:12 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0f91fc7b8e87158c92eb7740043cf5d022febeae010865e677c28eba378655ce-merged.mount: Deactivated successfully.
Feb 20 09:30:12 np0005625203.localdomain python3.9[251260]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-sriov-agent/01-neutron-sriov-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:30:13 np0005625203.localdomain python3.9[251346]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-sriov-agent/01-neutron-sriov-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771579812.10069-100-250220879417327/.source.conf follow=False _original_basename=neutron-sriov-agent.conf.j2 checksum=ac8c453a19d7d1e02b788a17020601c1af693654 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:30:13 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:30:13 np0005625203.localdomain systemd[1]: tmp-crun.AAfmgO.mount: Deactivated successfully.
Feb 20 09:30:13 np0005625203.localdomain podman[251364]: 2026-02-20 09:30:13.779919803 +0000 UTC m=+0.092662186 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 20 09:30:13 np0005625203.localdomain podman[251364]: 2026-02-20 09:30:13.790184361 +0000 UTC m=+0.102926694 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 09:30:13 np0005625203.localdomain podman[251364]: unhealthy
Feb 20 09:30:14 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-40d13af751dd0e47fc8bb889a91a6d655bc2617bd5ab127ac97d8b2c392f6c58-merged.mount: Deactivated successfully.
Feb 20 09:30:14 np0005625203.localdomain python3.9[251477]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-sriov-agent/10-neutron-sriov.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:30:14 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-350c54d5408c0d2bf89ea259b7106c227b04e0a20775f530019e2aa403fcd90a-merged.mount: Deactivated successfully.
Feb 20 09:30:14 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-350c54d5408c0d2bf89ea259b7106c227b04e0a20775f530019e2aa403fcd90a-merged.mount: Deactivated successfully.
Feb 20 09:30:14 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Main process exited, code=exited, status=1/FAILURE
Feb 20 09:30:14 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Failed with result 'exit-code'.
Feb 20 09:30:14 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60601 DF PROTO=TCP SPT=36936 DPT=9102 SEQ=403042201 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA05AE810000000001030307) 
Feb 20 09:30:14 np0005625203.localdomain python3.9[251563]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-sriov-agent/10-neutron-sriov.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771579813.9760299-274-215349576808861/.source.conf _original_basename=10-neutron-sriov.conf follow=False checksum=13d630d090b626c2aab1085bca0daa7abb0cabfd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:30:15 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:30:15.199 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:30:15 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-ba6f0be74a40197166410c33403600ee466dbd9d2ddae7d7f49f78c9646720b2-merged.mount: Deactivated successfully.
Feb 20 09:30:15 np0005625203.localdomain python3.9[251671]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:30:15 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-1ad843ea4b31b05bcf49ccd6faa74bd0d6976ffabe60466fd78caf7ec41bf4ac-merged.mount: Deactivated successfully.
Feb 20 09:30:15 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-5f30d5cd30916d88e24f21a5c8313738088a285d6d2d0efec09cc705e86eb786-merged.mount: Deactivated successfully.
Feb 20 09:30:15 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-5f30d5cd30916d88e24f21a5c8313738088a285d6d2d0efec09cc705e86eb786-merged.mount: Deactivated successfully.
Feb 20 09:30:16 np0005625203.localdomain sudo[251781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kbmgohfytwaehdexctjwgxoetcyemrdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579815.8491988-346-9589976911105/AnsiballZ_file.py
Feb 20 09:30:16 np0005625203.localdomain sudo[251781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:30:16 np0005625203.localdomain python3.9[251783]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:30:16 np0005625203.localdomain sudo[251781]: pam_unix(sudo:session): session closed for user root
Feb 20 09:30:16 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-57c9a356b8a6d9095c1e6bfd1bb5d3b87c9d1b944c2c5d8a1da6e61dd690c595-merged.mount: Deactivated successfully.
Feb 20 09:30:16 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-57c9a356b8a6d9095c1e6bfd1bb5d3b87c9d1b944c2c5d8a1da6e61dd690c595-merged.mount: Deactivated successfully.
Feb 20 09:30:16 np0005625203.localdomain sudo[251891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bicjdrghywawwrmhqpzjxgaohtrrkjnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579816.579937-370-61138411669226/AnsiballZ_stat.py
Feb 20 09:30:16 np0005625203.localdomain sudo[251891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:30:16 np0005625203.localdomain python3.9[251893]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:30:17 np0005625203.localdomain sudo[251891]: pam_unix(sudo:session): session closed for user root
Feb 20 09:30:17 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:30:17.200 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:30:17 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:30:17.200 228941 DEBUG nova.compute.manager [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 20 09:30:17 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:30:17.201 228941 DEBUG nova.compute.manager [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 20 09:30:17 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:30:17.214 228941 DEBUG nova.compute.manager [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 20 09:30:17 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:30:17.215 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:30:17 np0005625203.localdomain sudo[251948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xgdnwewplldybmhbtkxfcmgiktqjdaty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579816.579937-370-61138411669226/AnsiballZ_file.py
Feb 20 09:30:17 np0005625203.localdomain sudo[251948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:30:17 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-ba6f0be74a40197166410c33403600ee466dbd9d2ddae7d7f49f78c9646720b2-merged.mount: Deactivated successfully.
Feb 20 09:30:17 np0005625203.localdomain python3.9[251950]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:30:17 np0005625203.localdomain sudo[251948]: pam_unix(sudo:session): session closed for user root
Feb 20 09:30:17 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-e0c81d46f937f1f84faf68fb71f862e4bd868921a7d16384d44790308d98719f-merged.mount: Deactivated successfully.
Feb 20 09:30:17 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-8194fb55613f783d5a49e05926ed565eee5321a07b6e20f485946b5c4f31cab4-merged.mount: Deactivated successfully.
Feb 20 09:30:17 np0005625203.localdomain sudo[252058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dgypfwhtffsqkrhonwxyzwyjfhyxbfct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579817.5700834-370-25715582856266/AnsiballZ_stat.py
Feb 20 09:30:17 np0005625203.localdomain sudo[252058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:30:18 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-8194fb55613f783d5a49e05926ed565eee5321a07b6e20f485946b5c4f31cab4-merged.mount: Deactivated successfully.
Feb 20 09:30:18 np0005625203.localdomain python3.9[252060]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:30:18 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:30:18.199 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:30:18 np0005625203.localdomain sudo[252058]: pam_unix(sudo:session): session closed for user root
Feb 20 09:30:18 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:30:18.216 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:30:18 np0005625203.localdomain sudo[252115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-alfkyugdhjorygeufocfifbonudtgxcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579817.5700834-370-25715582856266/AnsiballZ_file.py
Feb 20 09:30:18 np0005625203.localdomain sudo[252115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:30:18 np0005625203.localdomain python3.9[252117]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:30:18 np0005625203.localdomain sudo[252115]: pam_unix(sudo:session): session closed for user root
Feb 20 09:30:19 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:30:19 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:30:19.199 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:30:19 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:30:19.200 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:30:19 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:30:19.200 228941 DEBUG nova.compute.manager [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 20 09:30:19 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:30:19.200 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:30:19 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:30:19.216 228941 DEBUG oslo_concurrency.lockutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:30:19 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:30:19.216 228941 DEBUG oslo_concurrency.lockutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:30:19 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:30:19.216 228941 DEBUG oslo_concurrency.lockutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:30:19 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:30:19.216 228941 DEBUG nova.compute.resource_tracker [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Auditing locally available compute resources for np0005625203.localdomain (node: np0005625203.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 20 09:30:19 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:30:19.217 228941 DEBUG oslo_concurrency.processutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:30:19 np0005625203.localdomain podman[252135]: 2026-02-20 09:30:19.238166544 +0000 UTC m=+0.081146038 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 20 09:30:19 np0005625203.localdomain podman[252135]: 2026-02-20 09:30:19.245841022 +0000 UTC m=+0.088820516 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 20 09:30:19 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-8194fb55613f783d5a49e05926ed565eee5321a07b6e20f485946b5c4f31cab4-merged.mount: Deactivated successfully.
Feb 20 09:30:19 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:25:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 141012 "" "Go-http-client/1.1"
Feb 20 09:30:19 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:30:19 np0005625203.localdomain podman_exporter[240348]: ts=2026-02-20T09:30:19.405Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Feb 20 09:30:19 np0005625203.localdomain podman_exporter[240348]: ts=2026-02-20T09:30:19.405Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Feb 20 09:30:19 np0005625203.localdomain podman_exporter[240348]: ts=2026-02-20T09:30:19.405Z caller=tls_config.go:316 level=info msg="TLS is disabled." http2=false address=[::]:9882
Feb 20 09:30:19 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:30:19.661 228941 DEBUG oslo_concurrency.processutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:30:19 np0005625203.localdomain sudo[252270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-umjsyekjtfwtgryjkjqwuwwvyruxkvat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579819.5138335-439-77082006600306/AnsiballZ_file.py
Feb 20 09:30:19 np0005625203.localdomain sudo[252270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:30:19 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:30:19.862 228941 WARNING nova.virt.libvirt.driver [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 20 09:30:19 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:30:19.864 228941 DEBUG nova.compute.resource_tracker [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Hypervisor/Node resource view: name=np0005625203.localdomain free_ram=13126MB free_disk=41.83720779418945GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 20 09:30:19 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:30:19.864 228941 DEBUG oslo_concurrency.lockutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:30:19 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:30:19.865 228941 DEBUG oslo_concurrency.lockutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:30:19 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:30:19.926 228941 DEBUG nova.compute.resource_tracker [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 20 09:30:19 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:30:19.926 228941 DEBUG nova.compute.resource_tracker [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Final resource view: name=np0005625203.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 20 09:30:19 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:30:19.945 228941 DEBUG oslo_concurrency.processutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:30:20 np0005625203.localdomain python3.9[252272]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:30:20 np0005625203.localdomain sudo[252270]: pam_unix(sudo:session): session closed for user root
Feb 20 09:30:20 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:30:20.464 228941 DEBUG oslo_concurrency.processutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:30:20 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:30:20.471 228941 DEBUG nova.compute.provider_tree [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Inventory has not changed in ProviderTree for provider: e5d5157a-2df2-4f51-b5fb-cd2da3a8584e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 20 09:30:20 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:30:20.488 228941 DEBUG nova.scheduler.client.report [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Inventory has not changed for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 20 09:30:20 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:30:20.491 228941 DEBUG nova.compute.resource_tracker [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Compute_service record updated for np0005625203.localdomain:np0005625203.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 20 09:30:20 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:30:20.491 228941 DEBUG oslo_concurrency.lockutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:30:21 np0005625203.localdomain sudo[252402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zibkuhbxfakjhgcpkdmgdmvstzumpbps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579820.1777804-463-196794008747786/AnsiballZ_stat.py
Feb 20 09:30:21 np0005625203.localdomain sudo[252402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:30:21 np0005625203.localdomain python3.9[252404]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:30:21 np0005625203.localdomain sudo[252402]: pam_unix(sudo:session): session closed for user root
Feb 20 09:30:21 np0005625203.localdomain sudo[252459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uqlxhilplhxfuxvvnrbiqdumqqtdkeks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579820.1777804-463-196794008747786/AnsiballZ_file.py
Feb 20 09:30:21 np0005625203.localdomain sudo[252459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:30:21 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:30:21.492 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:30:21 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:30:21.492 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:30:21 np0005625203.localdomain python3.9[252461]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:30:21 np0005625203.localdomain sudo[252459]: pam_unix(sudo:session): session closed for user root
Feb 20 09:30:22 np0005625203.localdomain sudo[252569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gsjrzgqkbvankfipqvloawxdfhmqvltx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579821.9202118-499-101102030522918/AnsiballZ_stat.py
Feb 20 09:30:22 np0005625203.localdomain sudo[252569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:30:22 np0005625203.localdomain python3.9[252571]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:30:22 np0005625203.localdomain sudo[252569]: pam_unix(sudo:session): session closed for user root
Feb 20 09:30:22 np0005625203.localdomain sudo[252626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ltobpetfscfsgbbvxkomsouzumbqabux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579821.9202118-499-101102030522918/AnsiballZ_file.py
Feb 20 09:30:22 np0005625203.localdomain sudo[252626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:30:22 np0005625203.localdomain python3.9[252628]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:30:22 np0005625203.localdomain sudo[252626]: pam_unix(sudo:session): session closed for user root
Feb 20 09:30:23 np0005625203.localdomain sudo[252736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-utkxhxruednrgrqnfehohhrcxcopcmtn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579823.0929255-535-222496404878371/AnsiballZ_systemd.py
Feb 20 09:30:23 np0005625203.localdomain sudo[252736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:30:23 np0005625203.localdomain python3.9[252738]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:30:23 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:30:24 np0005625203.localdomain systemd-rc-local-generator[252760]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:30:24 np0005625203.localdomain systemd-sysv-generator[252766]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:30:24 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:30:24 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 09:30:24 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:30:24 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:30:24 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:30:24 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 09:30:24 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:30:24 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:30:24 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:30:24 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:30:24 np0005625203.localdomain sudo[252736]: pam_unix(sudo:session): session closed for user root
Feb 20 09:30:24 np0005625203.localdomain podman[252775]: 2026-02-20 09:30:24.312439193 +0000 UTC m=+0.088676202 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 20 09:30:24 np0005625203.localdomain podman[252775]: 2026-02-20 09:30:24.324293181 +0000 UTC m=+0.100530170 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Feb 20 09:30:24 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:30:24 np0005625203.localdomain sudo[252903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xeqqecvavfpelqfanqiourozokrqtjab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579824.498092-559-233847234350332/AnsiballZ_stat.py
Feb 20 09:30:24 np0005625203.localdomain sudo[252903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:30:25 np0005625203.localdomain python3.9[252905]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:30:25 np0005625203.localdomain sudo[252903]: pam_unix(sudo:session): session closed for user root
Feb 20 09:30:25 np0005625203.localdomain sudo[252960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-omgofujwxqqmdxjowlmspfrxrveiqfge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579824.498092-559-233847234350332/AnsiballZ_file.py
Feb 20 09:30:25 np0005625203.localdomain sudo[252960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:30:25 np0005625203.localdomain python3.9[252962]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:30:25 np0005625203.localdomain sudo[252960]: pam_unix(sudo:session): session closed for user root
Feb 20 09:30:25 np0005625203.localdomain sudo[253070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jcnlgwzoduuyjutzdpxlcahkzdwthgwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579825.7190845-595-4893056566427/AnsiballZ_stat.py
Feb 20 09:30:26 np0005625203.localdomain sudo[253070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:30:26 np0005625203.localdomain python3.9[253072]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:30:26 np0005625203.localdomain sudo[253070]: pam_unix(sudo:session): session closed for user root
Feb 20 09:30:26 np0005625203.localdomain sudo[253127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ehhhxvpvjalptltgxzthsswgcgtcxdgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579825.7190845-595-4893056566427/AnsiballZ_file.py
Feb 20 09:30:26 np0005625203.localdomain sudo[253127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:30:26 np0005625203.localdomain python3.9[253129]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:30:26 np0005625203.localdomain sudo[253127]: pam_unix(sudo:session): session closed for user root
Feb 20 09:30:27 np0005625203.localdomain sudo[253237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fvdpauxurvkzjkcxordtjqooaroucpsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579827.523371-631-220011787623750/AnsiballZ_systemd.py
Feb 20 09:30:27 np0005625203.localdomain sudo[253237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:30:28 np0005625203.localdomain python3.9[253239]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:30:28 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:30:28 np0005625203.localdomain systemd-rc-local-generator[253264]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:30:28 np0005625203.localdomain systemd-sysv-generator[253270]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:30:28 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:30:28 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 09:30:28 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:30:28 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:30:28 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:30:28 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 09:30:28 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:30:28 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:30:28 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:30:28 np0005625203.localdomain systemd[1]: Starting Create netns directory...
Feb 20 09:30:28 np0005625203.localdomain systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Feb 20 09:30:28 np0005625203.localdomain systemd[1]: netns-placeholder.service: Deactivated successfully.
Feb 20 09:30:28 np0005625203.localdomain systemd[1]: Finished Create netns directory.
Feb 20 09:30:28 np0005625203.localdomain sudo[253237]: pam_unix(sudo:session): session closed for user root
Feb 20 09:30:28 np0005625203.localdomain podman[240359]: time="2026-02-20T09:30:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:30:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:30:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 142994 "" "Go-http-client/1.1"
Feb 20 09:30:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:30:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 15427 "" "Go-http-client/1.1"
Feb 20 09:30:29 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21004 DF PROTO=TCP SPT=48392 DPT=9102 SEQ=1448084088 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA05E7550000000001030307) 
Feb 20 09:30:29 np0005625203.localdomain sudo[253392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vvrgxauobkljtqabsyoohtzytqbryhia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579829.0315769-661-222174090949689/AnsiballZ_file.py
Feb 20 09:30:29 np0005625203.localdomain sudo[253392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:30:29 np0005625203.localdomain python3.9[253394]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:30:29 np0005625203.localdomain sudo[253392]: pam_unix(sudo:session): session closed for user root
Feb 20 09:30:30 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21005 DF PROTO=TCP SPT=48392 DPT=9102 SEQ=1448084088 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA05EB400000000001030307) 
Feb 20 09:30:30 np0005625203.localdomain sudo[253502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-pghpureyfgrsejzjkphzwbavkqwauxmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579830.2812169-685-157822822715613/AnsiballZ_file.py
Feb 20 09:30:30 np0005625203.localdomain sudo[253502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:30:30 np0005625203.localdomain python3.9[253504]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:30:30 np0005625203.localdomain sudo[253502]: pam_unix(sudo:session): session closed for user root
Feb 20 09:30:31 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60602 DF PROTO=TCP SPT=36936 DPT=9102 SEQ=403042201 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA05EE800000000001030307) 
Feb 20 09:30:31 np0005625203.localdomain sudo[253612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nbgumwouyljhempdufkqccnqceutlaua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579830.9784973-709-132093697727632/AnsiballZ_stat.py
Feb 20 09:30:31 np0005625203.localdomain sudo[253612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:30:31 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:30:31 np0005625203.localdomain podman[253615]: 2026-02-20 09:30:31.346944796 +0000 UTC m=+0.081236731 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Feb 20 09:30:31 np0005625203.localdomain podman[253615]: 2026-02-20 09:30:31.438621451 +0000 UTC m=+0.172913386 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb 20 09:30:31 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:30:31 np0005625203.localdomain python3.9[253614]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/neutron_sriov_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:30:31 np0005625203.localdomain sudo[253612]: pam_unix(sudo:session): session closed for user root
Feb 20 09:30:32 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21006 DF PROTO=TCP SPT=48392 DPT=9102 SEQ=1448084088 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA05F3400000000001030307) 
Feb 20 09:30:32 np0005625203.localdomain sudo[253725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vtkslnrsizfzgcepauxtdaiyvmauyqrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579830.9784973-709-132093697727632/AnsiballZ_copy.py
Feb 20 09:30:32 np0005625203.localdomain sudo[253725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:30:32 np0005625203.localdomain python3.9[253727]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/neutron_sriov_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1771579830.9784973-709-132093697727632/.source.json _original_basename=.y66g70zc follow=False checksum=a32073fdba4733b9ffe872cfb91708eff83a585a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:30:32 np0005625203.localdomain sudo[253725]: pam_unix(sudo:session): session closed for user root
Feb 20 09:30:32 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:30:32 np0005625203.localdomain systemd[1]: tmp-crun.iXzBTn.mount: Deactivated successfully.
Feb 20 09:30:32 np0005625203.localdomain podman[253728]: 2026-02-20 09:30:32.769830881 +0000 UTC m=+0.086066282 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, maintainer=Red Hat, Inc., vcs-type=git, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, version=9.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, name=ubi9/ubi-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, build-date=2026-02-05T04:57:10Z, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers)
Feb 20 09:30:32 np0005625203.localdomain podman[253728]: 2026-02-20 09:30:32.807237861 +0000 UTC m=+0.123473222 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, vcs-type=git, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1770267347, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, managed_by=edpm_ansible, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.openshift.expose-services=, version=9.7, io.buildah.version=1.33.7, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped 
down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2026-02-05T04:57:10Z, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Feb 20 09:30:32 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:30:33 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=5162 DF PROTO=TCP SPT=51576 DPT=9102 SEQ=3930013663 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA05F6800000000001030307) 
Feb 20 09:30:33 np0005625203.localdomain python3.9[253856]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/neutron_sriov_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:30:35 np0005625203.localdomain sudo[254158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bnkvstmiwrghuulzwdqmztcsaducgykb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579835.012685-829-12246578633018/AnsiballZ_container_config_data.py
Feb 20 09:30:35 np0005625203.localdomain sudo[254158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:30:35 np0005625203.localdomain python3.9[254160]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/neutron_sriov_agent config_pattern=*.json debug=False
Feb 20 09:30:35 np0005625203.localdomain sudo[254158]: pam_unix(sudo:session): session closed for user root
Feb 20 09:30:36 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21007 DF PROTO=TCP SPT=48392 DPT=9102 SEQ=1448084088 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0603000000000001030307) 
Feb 20 09:30:36 np0005625203.localdomain sudo[254268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qbyduvrtxszfghdhpzbdbbzvmeyvqlbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579836.093178-862-254048006727009/AnsiballZ_container_config_hash.py
Feb 20 09:30:36 np0005625203.localdomain sudo[254268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:30:36 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:30:36 np0005625203.localdomain podman[254271]: 2026-02-20 09:30:36.662551411 +0000 UTC m=+0.079869698 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, 
org.label-schema.build-date=20260127)
Feb 20 09:30:36 np0005625203.localdomain podman[254271]: 2026-02-20 09:30:36.675314717 +0000 UTC m=+0.092632994 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2)
Feb 20 09:30:36 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:30:36 np0005625203.localdomain python3.9[254270]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb 20 09:30:36 np0005625203.localdomain sudo[254268]: pam_unix(sudo:session): session closed for user root
Feb 20 09:30:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:30:37 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 09:30:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:30:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:30:37 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 09:30:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:30:37 np0005625203.localdomain sudo[254398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lwkpengjvudqqxzmrfivaqdjaklrvaox ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771579837.131091-892-58858583426494/AnsiballZ_edpm_container_manage.py
Feb 20 09:30:37 np0005625203.localdomain sudo[254398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:30:37 np0005625203.localdomain python3[254400]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/neutron_sriov_agent config_id=neutron_sriov_agent config_overrides={} config_patterns=*.json containers=['neutron_sriov_agent'] log_base_path=/var/log/containers/stdouts debug=False
Feb 20 09:30:38 np0005625203.localdomain podman[254436]: 
Feb 20 09:30:38 np0005625203.localdomain podman[254436]: 2026-02-20 09:30:38.096951604 +0000 UTC m=+0.078314481 container create 43353413e62e7a4f775a0461c94b0521968cdb993a7a078f72b761ee4029d1b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-051d202252335af1ea6f642901267e1995e239568f2f56fb1f47b24e86f13b4e'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/openstack/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, config_id=neutron_sriov_agent, org.label-schema.build-date=20260127, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=neutron_sriov_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 20 09:30:38 np0005625203.localdomain podman[254436]: 2026-02-20 09:30:38.055674103 +0000 UTC m=+0.037037020 image pull  quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified
Feb 20 09:30:38 np0005625203.localdomain python3[254400]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name neutron_sriov_agent --conmon-pidfile /run/neutron_sriov_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-051d202252335af1ea6f642901267e1995e239568f2f56fb1f47b24e86f13b4e --label config_id=neutron_sriov_agent --label container_name=neutron_sriov_agent --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-051d202252335af1ea6f642901267e1995e239568f2f56fb1f47b24e86f13b4e'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/openstack/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user neutron --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/openstack/neutron-sriov-agent:/etc/neutron.conf.d:z --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified
Feb 20 09:30:39 np0005625203.localdomain sudo[254398]: pam_unix(sudo:session): session closed for user root
Feb 20 09:30:41 np0005625203.localdomain sudo[254580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mxtwrhpivpbhskpaiwttzlgnbrsqatew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579841.699732-916-80828763375631/AnsiballZ_stat.py
Feb 20 09:30:41 np0005625203.localdomain sudo[254580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:30:42 np0005625203.localdomain python3.9[254582]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:30:42 np0005625203.localdomain sudo[254580]: pam_unix(sudo:session): session closed for user root
Feb 20 09:30:42 np0005625203.localdomain sudo[254692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tmpelhxdyxcqwltovqmsygifqxnjsatq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579842.5154731-943-38259445487364/AnsiballZ_file.py
Feb 20 09:30:42 np0005625203.localdomain sudo[254692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:30:43 np0005625203.localdomain python3.9[254694]: ansible-file Invoked with path=/etc/systemd/system/edpm_neutron_sriov_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:30:43 np0005625203.localdomain sudo[254692]: pam_unix(sudo:session): session closed for user root
Feb 20 09:30:43 np0005625203.localdomain sudo[254747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ghkdzwtsngqlxhrdmbsgvigdrogcjtfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579842.5154731-943-38259445487364/AnsiballZ_stat.py
Feb 20 09:30:43 np0005625203.localdomain sudo[254747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:30:44 np0005625203.localdomain python3.9[254749]: ansible-stat Invoked with path=/etc/systemd/system/edpm_neutron_sriov_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:30:44 np0005625203.localdomain sudo[254747]: pam_unix(sudo:session): session closed for user root
Feb 20 09:30:44 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21008 DF PROTO=TCP SPT=48392 DPT=9102 SEQ=1448084088 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0622800000000001030307) 
Feb 20 09:30:44 np0005625203.localdomain sudo[254856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hkgwwgzgdxbcjglrofmzxqehkufitbey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579844.0967681-943-176791138192136/AnsiballZ_copy.py
Feb 20 09:30:44 np0005625203.localdomain sudo[254856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:30:44 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:30:44 np0005625203.localdomain python3.9[254858]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771579844.0967681-943-176791138192136/source dest=/etc/systemd/system/edpm_neutron_sriov_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:30:44 np0005625203.localdomain sudo[254856]: pam_unix(sudo:session): session closed for user root
Feb 20 09:30:44 np0005625203.localdomain podman[254859]: 2026-02-20 09:30:44.768690764 +0000 UTC m=+0.082029036 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=unhealthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 09:30:44 np0005625203.localdomain podman[254859]: 2026-02-20 09:30:44.803318348 +0000 UTC m=+0.116656610 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 20 09:30:44 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 09:30:44 np0005625203.localdomain sudo[254934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dsqwyvgobtacfxekchyexdbzaklgzdjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579844.0967681-943-176791138192136/AnsiballZ_systemd.py
Feb 20 09:30:44 np0005625203.localdomain sudo[254934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:30:45 np0005625203.localdomain python3.9[254936]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 20 09:30:45 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:30:45 np0005625203.localdomain systemd-rc-local-generator[254960]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:30:45 np0005625203.localdomain systemd-sysv-generator[254964]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:30:45 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:30:45 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 09:30:45 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:30:45 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:30:45 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:30:45 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 09:30:45 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:30:45 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:30:45 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:30:45 np0005625203.localdomain sudo[254934]: pam_unix(sudo:session): session closed for user root
Feb 20 09:30:45 np0005625203.localdomain sudo[255025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xuyqllzfuggvosksbxatugspgeqgbakw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579844.0967681-943-176791138192136/AnsiballZ_systemd.py
Feb 20 09:30:45 np0005625203.localdomain sudo[255025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:30:46 np0005625203.localdomain python3.9[255027]: ansible-systemd Invoked with state=restarted name=edpm_neutron_sriov_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:30:46 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:30:46 np0005625203.localdomain systemd-rc-local-generator[255050]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:30:46 np0005625203.localdomain systemd-sysv-generator[255054]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:30:46 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:30:46 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 09:30:46 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:30:46 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:30:46 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:30:46 np0005625203.localdomain sshd[255066]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:30:46 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 09:30:46 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:30:46 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:30:46 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:30:46 np0005625203.localdomain systemd[1]: Starting neutron_sriov_agent container...
Feb 20 09:30:46 np0005625203.localdomain systemd[1]: tmp-crun.mWbqmP.mount: Deactivated successfully.
Feb 20 09:30:46 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:30:46 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7293d287ceb8e0f35b2d420705456a2993f7bd61d0f36c9b8f7df505e4e319e/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Feb 20 09:30:46 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7293d287ceb8e0f35b2d420705456a2993f7bd61d0f36c9b8f7df505e4e319e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:30:46 np0005625203.localdomain podman[255070]: 2026-02-20 09:30:46.752927024 +0000 UTC m=+0.131137339 container init 43353413e62e7a4f775a0461c94b0521968cdb993a7a078f72b761ee4029d1b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, org.label-schema.vendor=CentOS, container_name=neutron_sriov_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-051d202252335af1ea6f642901267e1995e239568f2f56fb1f47b24e86f13b4e'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/openstack/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=neutron_sriov_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 20 09:30:46 np0005625203.localdomain neutron_sriov_agent[255084]: + sudo -E kolla_set_configs
Feb 20 09:30:46 np0005625203.localdomain podman[255070]: 2026-02-20 09:30:46.774823633 +0000 UTC m=+0.153033948 container start 43353413e62e7a4f775a0461c94b0521968cdb993a7a078f72b761ee4029d1b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-051d202252335af1ea6f642901267e1995e239568f2f56fb1f47b24e86f13b4e'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/openstack/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, config_id=neutron_sriov_agent, org.label-schema.vendor=CentOS, container_name=neutron_sriov_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, managed_by=edpm_ansible)
Feb 20 09:30:46 np0005625203.localdomain podman[255070]: neutron_sriov_agent
Feb 20 09:30:46 np0005625203.localdomain systemd[1]: Started neutron_sriov_agent container.
Feb 20 09:30:46 np0005625203.localdomain sudo[255025]: pam_unix(sudo:session): session closed for user root
Feb 20 09:30:46 np0005625203.localdomain neutron_sriov_agent[255084]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb 20 09:30:46 np0005625203.localdomain neutron_sriov_agent[255084]: INFO:__main__:Validating config file
Feb 20 09:30:46 np0005625203.localdomain neutron_sriov_agent[255084]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb 20 09:30:46 np0005625203.localdomain neutron_sriov_agent[255084]: INFO:__main__:Copying service configuration files
Feb 20 09:30:46 np0005625203.localdomain neutron_sriov_agent[255084]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Feb 20 09:30:46 np0005625203.localdomain neutron_sriov_agent[255084]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Feb 20 09:30:46 np0005625203.localdomain neutron_sriov_agent[255084]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Feb 20 09:30:46 np0005625203.localdomain neutron_sriov_agent[255084]: INFO:__main__:Writing out command to execute
Feb 20 09:30:46 np0005625203.localdomain neutron_sriov_agent[255084]: INFO:__main__:Setting permission for /var/lib/neutron
Feb 20 09:30:46 np0005625203.localdomain neutron_sriov_agent[255084]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Feb 20 09:30:46 np0005625203.localdomain neutron_sriov_agent[255084]: INFO:__main__:Setting permission for /var/lib/neutron/.cache
Feb 20 09:30:46 np0005625203.localdomain neutron_sriov_agent[255084]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Feb 20 09:30:46 np0005625203.localdomain neutron_sriov_agent[255084]: INFO:__main__:Setting permission for /var/lib/neutron/external
Feb 20 09:30:46 np0005625203.localdomain neutron_sriov_agent[255084]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Feb 20 09:30:46 np0005625203.localdomain neutron_sriov_agent[255084]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy
Feb 20 09:30:46 np0005625203.localdomain neutron_sriov_agent[255084]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Feb 20 09:30:46 np0005625203.localdomain neutron_sriov_agent[255084]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints
Feb 20 09:30:46 np0005625203.localdomain neutron_sriov_agent[255084]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/b9146dd2a0dc3e0bc3fee7bb1b53fa22a55af280b3a177d7a47b63f92e7ebd29
Feb 20 09:30:46 np0005625203.localdomain neutron_sriov_agent[255084]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Feb 20 09:30:46 np0005625203.localdomain neutron_sriov_agent[255084]: ++ cat /run_command
Feb 20 09:30:46 np0005625203.localdomain neutron_sriov_agent[255084]: + CMD=/usr/bin/neutron-sriov-nic-agent
Feb 20 09:30:46 np0005625203.localdomain neutron_sriov_agent[255084]: + ARGS=
Feb 20 09:30:46 np0005625203.localdomain neutron_sriov_agent[255084]: + sudo kolla_copy_cacerts
Feb 20 09:30:46 np0005625203.localdomain neutron_sriov_agent[255084]: + [[ ! -n '' ]]
Feb 20 09:30:46 np0005625203.localdomain neutron_sriov_agent[255084]: + . kolla_extend_start
Feb 20 09:30:46 np0005625203.localdomain neutron_sriov_agent[255084]: Running command: '/usr/bin/neutron-sriov-nic-agent'
Feb 20 09:30:46 np0005625203.localdomain neutron_sriov_agent[255084]: + echo 'Running command: '\''/usr/bin/neutron-sriov-nic-agent'\'''
Feb 20 09:30:46 np0005625203.localdomain neutron_sriov_agent[255084]: + umask 0022
Feb 20 09:30:46 np0005625203.localdomain neutron_sriov_agent[255084]: + exec /usr/bin/neutron-sriov-nic-agent
Feb 20 09:30:47 np0005625203.localdomain systemd[1]: tmp-crun.QpjeX9.mount: Deactivated successfully.
Feb 20 09:30:47 np0005625203.localdomain python3.9[255206]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb 20 09:30:48 np0005625203.localdomain sshd[255066]: Received disconnect from 103.48.192.48 port 43758:11: Bye Bye [preauth]
Feb 20 09:30:48 np0005625203.localdomain sshd[255066]: Disconnected from authenticating user root 103.48.192.48 port 43758 [preauth]
Feb 20 09:30:48 np0005625203.localdomain neutron_sriov_agent[255084]: 2026-02-20 09:30:48.367 2 INFO neutron.common.config [-] Logging enabled!
Feb 20 09:30:48 np0005625203.localdomain neutron_sriov_agent[255084]: 2026-02-20 09:30:48.368 2 INFO neutron.common.config [-] /usr/bin/neutron-sriov-nic-agent version 22.2.2.dev44
Feb 20 09:30:48 np0005625203.localdomain neutron_sriov_agent[255084]: 2026-02-20 09:30:48.368 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Physical Devices mappings: {'dummy_sriov_net': ['dummy-dev']}
Feb 20 09:30:48 np0005625203.localdomain neutron_sriov_agent[255084]: 2026-02-20 09:30:48.368 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Exclude Devices: {}
Feb 20 09:30:48 np0005625203.localdomain neutron_sriov_agent[255084]: 2026-02-20 09:30:48.368 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider bandwidths: {}
Feb 20 09:30:48 np0005625203.localdomain neutron_sriov_agent[255084]: 2026-02-20 09:30:48.369 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider inventory defaults: {'allocation_ratio': 1.0, 'min_unit': 1, 'step_size': 1, 'reserved': 0}
Feb 20 09:30:48 np0005625203.localdomain neutron_sriov_agent[255084]: 2026-02-20 09:30:48.369 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider hypervisors: {'dummy-dev': 'np0005625203.localdomain'}
Feb 20 09:30:48 np0005625203.localdomain neutron_sriov_agent[255084]: 2026-02-20 09:30:48.369 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-2f952e4d-f704-428c-a1f7-a39d750bd154 - - - - - -] RPC agent_id: nic-switch-agent.np0005625203.localdomain
Feb 20 09:30:48 np0005625203.localdomain neutron_sriov_agent[255084]: 2026-02-20 09:30:48.374 2 INFO neutron.agent.agent_extensions_manager [None req-2f952e4d-f704-428c-a1f7-a39d750bd154 - - - - - -] Loaded agent extensions: ['qos']
Feb 20 09:30:48 np0005625203.localdomain neutron_sriov_agent[255084]: 2026-02-20 09:30:48.374 2 INFO neutron.agent.agent_extensions_manager [None req-2f952e4d-f704-428c-a1f7-a39d750bd154 - - - - - -] Initializing agent extension 'qos'
Feb 20 09:30:48 np0005625203.localdomain neutron_sriov_agent[255084]: 2026-02-20 09:30:48.812 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-2f952e4d-f704-428c-a1f7-a39d750bd154 - - - - - -] Agent initialized successfully, now running... 
Feb 20 09:30:48 np0005625203.localdomain neutron_sriov_agent[255084]: 2026-02-20 09:30:48.812 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-2f952e4d-f704-428c-a1f7-a39d750bd154 - - - - - -] SRIOV NIC Agent RPC Daemon Started!
Feb 20 09:30:48 np0005625203.localdomain neutron_sriov_agent[255084]: 2026-02-20 09:30:48.812 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-2f952e4d-f704-428c-a1f7-a39d750bd154 - - - - - -] Agent out of sync with plugin!
Feb 20 09:30:48 np0005625203.localdomain sudo[255315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hpxmxjayqhokowsgvwzewvgfdfpxlbzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579848.6883597-1078-139347616458336/AnsiballZ_stat.py
Feb 20 09:30:48 np0005625203.localdomain sudo[255315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:30:49 np0005625203.localdomain python3.9[255317]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:30:49 np0005625203.localdomain sudo[255315]: pam_unix(sudo:session): session closed for user root
Feb 20 09:30:49 np0005625203.localdomain sudo[255405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fctvoylwjioeknbfoztslltdhktfgtzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579848.6883597-1078-139347616458336/AnsiballZ_copy.py
Feb 20 09:30:49 np0005625203.localdomain sudo[255405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:30:49 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:30:49 np0005625203.localdomain systemd[1]: tmp-crun.39jAY7.mount: Deactivated successfully.
Feb 20 09:30:49 np0005625203.localdomain podman[255408]: 2026-02-20 09:30:49.750226074 +0000 UTC m=+0.095425021 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 20 09:30:49 np0005625203.localdomain podman[255408]: 2026-02-20 09:30:49.786237221 +0000 UTC m=+0.131436108 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 20 09:30:49 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:30:49 np0005625203.localdomain python3.9[255407]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771579848.6883597-1078-139347616458336/.source.yaml _original_basename=.cjlvaluz follow=False checksum=9a7aca9285be233ff868b04cb9ff99cde755c904 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:30:49 np0005625203.localdomain sudo[255405]: pam_unix(sudo:session): session closed for user root
Feb 20 09:30:50 np0005625203.localdomain sudo[255539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tangsttnizsqqexptfozscswsfmlmhgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579850.0359063-1123-170124136636295/AnsiballZ_systemd.py
Feb 20 09:30:50 np0005625203.localdomain sudo[255539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:30:50 np0005625203.localdomain python3.9[255541]: ansible-ansible.builtin.systemd Invoked with name=edpm_neutron_sriov_agent.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 20 09:30:50 np0005625203.localdomain systemd[1]: Stopping neutron_sriov_agent container...
Feb 20 09:30:50 np0005625203.localdomain systemd[1]: libpod-43353413e62e7a4f775a0461c94b0521968cdb993a7a078f72b761ee4029d1b5.scope: Deactivated successfully.
Feb 20 09:30:50 np0005625203.localdomain systemd[1]: libpod-43353413e62e7a4f775a0461c94b0521968cdb993a7a078f72b761ee4029d1b5.scope: Consumed 1.682s CPU time.
Feb 20 09:30:50 np0005625203.localdomain podman[255545]: 2026-02-20 09:30:50.777353111 +0000 UTC m=+0.087830996 container died 43353413e62e7a4f775a0461c94b0521968cdb993a7a078f72b761ee4029d1b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-051d202252335af1ea6f642901267e1995e239568f2f56fb1f47b24e86f13b4e'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/openstack/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, container_name=neutron_sriov_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_id=neutron_sriov_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:30:50 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-43353413e62e7a4f775a0461c94b0521968cdb993a7a078f72b761ee4029d1b5-userdata-shm.mount: Deactivated successfully.
Feb 20 09:30:50 np0005625203.localdomain podman[255545]: 2026-02-20 09:30:50.82503393 +0000 UTC m=+0.135511805 container cleanup 43353413e62e7a4f775a0461c94b0521968cdb993a7a078f72b761ee4029d1b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-051d202252335af1ea6f642901267e1995e239568f2f56fb1f47b24e86f13b4e'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/openstack/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=neutron_sriov_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=neutron_sriov_agent)
Feb 20 09:30:50 np0005625203.localdomain podman[255545]: neutron_sriov_agent
Feb 20 09:30:50 np0005625203.localdomain podman[255572]: 2026-02-20 09:30:50.899011835 +0000 UTC m=+0.050337193 container cleanup 43353413e62e7a4f775a0461c94b0521968cdb993a7a078f72b761ee4029d1b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-051d202252335af1ea6f642901267e1995e239568f2f56fb1f47b24e86f13b4e'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/openstack/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, io.buildah.version=1.41.3, config_id=neutron_sriov_agent, maintainer=OpenStack Kubernetes Operator team, container_name=neutron_sriov_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:30:50 np0005625203.localdomain podman[255572]: neutron_sriov_agent
Feb 20 09:30:50 np0005625203.localdomain systemd[1]: edpm_neutron_sriov_agent.service: Deactivated successfully.
Feb 20 09:30:50 np0005625203.localdomain systemd[1]: Stopped neutron_sriov_agent container.
Feb 20 09:30:50 np0005625203.localdomain systemd[1]: Starting neutron_sriov_agent container...
Feb 20 09:30:51 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:30:51 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7293d287ceb8e0f35b2d420705456a2993f7bd61d0f36c9b8f7df505e4e319e/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Feb 20 09:30:51 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7293d287ceb8e0f35b2d420705456a2993f7bd61d0f36c9b8f7df505e4e319e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:30:51 np0005625203.localdomain podman[255585]: 2026-02-20 09:30:51.042001221 +0000 UTC m=+0.112625045 container init 43353413e62e7a4f775a0461c94b0521968cdb993a7a078f72b761ee4029d1b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=neutron_sriov_agent, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-051d202252335af1ea6f642901267e1995e239568f2f56fb1f47b24e86f13b4e'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/openstack/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, container_name=neutron_sriov_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:30:51 np0005625203.localdomain neutron_sriov_agent[255600]: + sudo -E kolla_set_configs
Feb 20 09:30:51 np0005625203.localdomain podman[255585]: 2026-02-20 09:30:51.056420429 +0000 UTC m=+0.127044233 container start 43353413e62e7a4f775a0461c94b0521968cdb993a7a078f72b761ee4029d1b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified, name=neutron_sriov_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-051d202252335af1ea6f642901267e1995e239568f2f56fb1f47b24e86f13b4e'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'neutron', 'volumes': ['/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/openstack/neutron-sriov-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_sriov_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/neutron-sriov/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=neutron_sriov_agent, tcib_managed=true, config_id=neutron_sriov_agent, io.buildah.version=1.41.3)
Feb 20 09:30:51 np0005625203.localdomain podman[255585]: neutron_sriov_agent
Feb 20 09:30:51 np0005625203.localdomain systemd[1]: Started neutron_sriov_agent container.
Feb 20 09:30:51 np0005625203.localdomain sudo[255539]: pam_unix(sudo:session): session closed for user root
Feb 20 09:30:51 np0005625203.localdomain neutron_sriov_agent[255600]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb 20 09:30:51 np0005625203.localdomain neutron_sriov_agent[255600]: INFO:__main__:Validating config file
Feb 20 09:30:51 np0005625203.localdomain neutron_sriov_agent[255600]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb 20 09:30:51 np0005625203.localdomain neutron_sriov_agent[255600]: INFO:__main__:Copying service configuration files
Feb 20 09:30:51 np0005625203.localdomain neutron_sriov_agent[255600]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Feb 20 09:30:51 np0005625203.localdomain neutron_sriov_agent[255600]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Feb 20 09:30:51 np0005625203.localdomain neutron_sriov_agent[255600]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Feb 20 09:30:51 np0005625203.localdomain neutron_sriov_agent[255600]: INFO:__main__:Writing out command to execute
Feb 20 09:30:51 np0005625203.localdomain neutron_sriov_agent[255600]: INFO:__main__:Setting permission for /var/lib/neutron
Feb 20 09:30:51 np0005625203.localdomain neutron_sriov_agent[255600]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Feb 20 09:30:51 np0005625203.localdomain neutron_sriov_agent[255600]: INFO:__main__:Setting permission for /var/lib/neutron/.cache
Feb 20 09:30:51 np0005625203.localdomain neutron_sriov_agent[255600]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Feb 20 09:30:51 np0005625203.localdomain neutron_sriov_agent[255600]: INFO:__main__:Setting permission for /var/lib/neutron/external
Feb 20 09:30:51 np0005625203.localdomain neutron_sriov_agent[255600]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Feb 20 09:30:51 np0005625203.localdomain neutron_sriov_agent[255600]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy
Feb 20 09:30:51 np0005625203.localdomain neutron_sriov_agent[255600]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Feb 20 09:30:51 np0005625203.localdomain neutron_sriov_agent[255600]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints
Feb 20 09:30:51 np0005625203.localdomain neutron_sriov_agent[255600]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/b9146dd2a0dc3e0bc3fee7bb1b53fa22a55af280b3a177d7a47b63f92e7ebd29
Feb 20 09:30:51 np0005625203.localdomain neutron_sriov_agent[255600]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/d91d8a949a4b5272256c667b5094a15f5e397c6793efbfa4186752b765c6923b
Feb 20 09:30:51 np0005625203.localdomain neutron_sriov_agent[255600]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Feb 20 09:30:51 np0005625203.localdomain neutron_sriov_agent[255600]: ++ cat /run_command
Feb 20 09:30:51 np0005625203.localdomain neutron_sriov_agent[255600]: + CMD=/usr/bin/neutron-sriov-nic-agent
Feb 20 09:30:51 np0005625203.localdomain neutron_sriov_agent[255600]: + ARGS=
Feb 20 09:30:51 np0005625203.localdomain neutron_sriov_agent[255600]: + sudo kolla_copy_cacerts
Feb 20 09:30:51 np0005625203.localdomain neutron_sriov_agent[255600]: Running command: '/usr/bin/neutron-sriov-nic-agent'
Feb 20 09:30:51 np0005625203.localdomain neutron_sriov_agent[255600]: + [[ ! -n '' ]]
Feb 20 09:30:51 np0005625203.localdomain neutron_sriov_agent[255600]: + . kolla_extend_start
Feb 20 09:30:51 np0005625203.localdomain neutron_sriov_agent[255600]: + echo 'Running command: '\''/usr/bin/neutron-sriov-nic-agent'\'''
Feb 20 09:30:51 np0005625203.localdomain neutron_sriov_agent[255600]: + umask 0022
Feb 20 09:30:51 np0005625203.localdomain neutron_sriov_agent[255600]: + exec /usr/bin/neutron-sriov-nic-agent
Feb 20 09:30:51 np0005625203.localdomain sshd[250377]: pam_unix(sshd:session): session closed for user zuul
Feb 20 09:30:51 np0005625203.localdomain systemd[1]: session-56.scope: Deactivated successfully.
Feb 20 09:30:51 np0005625203.localdomain systemd[1]: session-56.scope: Consumed 22.760s CPU time.
Feb 20 09:30:51 np0005625203.localdomain systemd-logind[759]: Session 56 logged out. Waiting for processes to exit.
Feb 20 09:30:51 np0005625203.localdomain systemd-logind[759]: Removed session 56.
Feb 20 09:30:52 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:30:52.587 2 INFO neutron.common.config [-] Logging enabled!
Feb 20 09:30:52 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:30:52.587 2 INFO neutron.common.config [-] /usr/bin/neutron-sriov-nic-agent version 22.2.2.dev44
Feb 20 09:30:52 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:30:52.588 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Physical Devices mappings: {'dummy_sriov_net': ['dummy-dev']}
Feb 20 09:30:52 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:30:52.588 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Exclude Devices: {}
Feb 20 09:30:52 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:30:52.588 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider bandwidths: {}
Feb 20 09:30:52 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:30:52.588 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider inventory defaults: {'allocation_ratio': 1.0, 'min_unit': 1, 'step_size': 1, 'reserved': 0}
Feb 20 09:30:52 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:30:52.589 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [-] Resource provider hypervisors: {'dummy-dev': 'np0005625203.localdomain'}
Feb 20 09:30:52 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:30:52.589 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-95672e59-0763-4bca-aeb8-42372cda6a7d - - - - - -] RPC agent_id: nic-switch-agent.np0005625203.localdomain
Feb 20 09:30:52 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:30:52.594 2 INFO neutron.agent.agent_extensions_manager [None req-95672e59-0763-4bca-aeb8-42372cda6a7d - - - - - -] Loaded agent extensions: ['qos']
Feb 20 09:30:52 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:30:52.594 2 INFO neutron.agent.agent_extensions_manager [None req-95672e59-0763-4bca-aeb8-42372cda6a7d - - - - - -] Initializing agent extension 'qos'
Feb 20 09:30:52 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:30:52.710 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-95672e59-0763-4bca-aeb8-42372cda6a7d - - - - - -] Agent initialized successfully, now running... 
Feb 20 09:30:52 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:30:52.710 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-95672e59-0763-4bca-aeb8-42372cda6a7d - - - - - -] SRIOV NIC Agent RPC Daemon Started!
Feb 20 09:30:52 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:30:52.710 2 INFO neutron.plugins.ml2.drivers.mech_sriov.agent.sriov_nic_agent [None req-95672e59-0763-4bca-aeb8-42372cda6a7d - - - - - -] Agent out of sync with plugin!
Feb 20 09:30:54 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:30:54 np0005625203.localdomain podman[255634]: 2026-02-20 09:30:54.760052005 +0000 UTC m=+0.079886040 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Feb 20 09:30:54 np0005625203.localdomain podman[255634]: 2026-02-20 09:30:54.765212824 +0000 UTC m=+0.085046859 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Feb 20 09:30:54 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:30:56 np0005625203.localdomain sshd[255652]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:30:57 np0005625203.localdomain sshd[255652]: Invalid user n8n from 103.61.123.132 port 57012
Feb 20 09:30:57 np0005625203.localdomain sshd[255654]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:30:57 np0005625203.localdomain sshd[255654]: Accepted publickey for zuul from 192.168.122.30 port 58760 ssh2: RSA SHA256:uaxCfdsB9dLyL4znjsLCPDTT6O+6xZzjHRjczJFOA6k
Feb 20 09:30:57 np0005625203.localdomain systemd-logind[759]: New session 57 of user zuul.
Feb 20 09:30:57 np0005625203.localdomain systemd[1]: Started Session 57 of User zuul.
Feb 20 09:30:57 np0005625203.localdomain sshd[255654]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Feb 20 09:30:57 np0005625203.localdomain sshd[255652]: Received disconnect from 103.61.123.132 port 57012:11: Bye Bye [preauth]
Feb 20 09:30:57 np0005625203.localdomain sshd[255652]: Disconnected from invalid user n8n 103.61.123.132 port 57012 [preauth]
Feb 20 09:30:58 np0005625203.localdomain python3.9[255765]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 20 09:30:58 np0005625203.localdomain podman[240359]: time="2026-02-20T09:30:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:30:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:30:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 144995 "" "Go-http-client/1.1"
Feb 20 09:30:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:30:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 15866 "" "Go-http-client/1.1"
Feb 20 09:30:59 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18938 DF PROTO=TCP SPT=50380 DPT=9102 SEQ=268398850 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA065C850000000001030307) 
Feb 20 09:30:59 np0005625203.localdomain sudo[255877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kfflcjphvkhdpkjzrmpcppyjjslqfnnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579859.3756912-61-32139093087047/AnsiballZ_setup.py
Feb 20 09:30:59 np0005625203.localdomain sudo[255877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:30:59 np0005625203.localdomain python3.9[255879]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 20 09:31:00 np0005625203.localdomain sudo[255877]: pam_unix(sudo:session): session closed for user root
Feb 20 09:31:00 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18939 DF PROTO=TCP SPT=50380 DPT=9102 SEQ=268398850 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0660800000000001030307) 
Feb 20 09:31:00 np0005625203.localdomain sudo[255940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ustohibwxigpxhzntoqjganmmsclycui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579859.3756912-61-32139093087047/AnsiballZ_dnf.py
Feb 20 09:31:00 np0005625203.localdomain sudo[255940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:31:00 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21009 DF PROTO=TCP SPT=48392 DPT=9102 SEQ=1448084088 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0662810000000001030307) 
Feb 20 09:31:00 np0005625203.localdomain python3.9[255942]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch3.3'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 20 09:31:01 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:31:01 np0005625203.localdomain podman[255945]: 2026-02-20 09:31:01.770842462 +0000 UTC m=+0.082621534 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Feb 20 09:31:01 np0005625203.localdomain podman[255945]: 2026-02-20 09:31:01.832421453 +0000 UTC m=+0.144200495 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:31:01 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:31:01 np0005625203.localdomain sshd[255970]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:31:02 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18940 DF PROTO=TCP SPT=50380 DPT=9102 SEQ=268398850 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0668800000000001030307) 
Feb 20 09:31:02 np0005625203.localdomain sshd[255970]: Received disconnect from 185.196.11.208 port 46602:11: Bye Bye [preauth]
Feb 20 09:31:02 np0005625203.localdomain sshd[255970]: Disconnected from authenticating user root 185.196.11.208 port 46602 [preauth]
Feb 20 09:31:03 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=60603 DF PROTO=TCP SPT=36936 DPT=9102 SEQ=403042201 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA066C800000000001030307) 
Feb 20 09:31:03 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:31:04 np0005625203.localdomain systemd[1]: tmp-crun.ML0l3K.mount: Deactivated successfully.
Feb 20 09:31:04 np0005625203.localdomain podman[255972]: 2026-02-20 09:31:04.037331652 +0000 UTC m=+0.094986100 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, name=ubi9/ubi-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-05T04:57:10Z, build-date=2026-02-05T04:57:10Z, maintainer=Red Hat, Inc., architecture=x86_64, container_name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, release=1770267347, vendor=Red Hat, Inc., io.buildah.version=1.33.7, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': 
['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.7)
Feb 20 09:31:04 np0005625203.localdomain podman[255972]: 2026-02-20 09:31:04.05345855 +0000 UTC m=+0.111112998 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., name=ubi9/ubi-minimal, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., build-date=2026-02-05T04:57:10Z, io.openshift.expose-services=, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, managed_by=edpm_ansible)
Feb 20 09:31:04 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:31:04 np0005625203.localdomain sudo[255940]: pam_unix(sudo:session): session closed for user root
Feb 20 09:31:05 np0005625203.localdomain sudo[256099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rtngyebzqtkhdfrkzuvywohdnzzdzymb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579864.5995922-97-101561431872966/AnsiballZ_systemd.py
Feb 20 09:31:05 np0005625203.localdomain sudo[256099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:31:05 np0005625203.localdomain python3.9[256101]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Feb 20 09:31:05 np0005625203.localdomain sudo[256099]: pam_unix(sudo:session): session closed for user root
Feb 20 09:31:06 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18941 DF PROTO=TCP SPT=50380 DPT=9102 SEQ=268398850 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0678400000000001030307) 
Feb 20 09:31:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:31:07 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 09:31:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:31:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:31:07 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 09:31:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:31:07 np0005625203.localdomain sshd[256168]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:31:07 np0005625203.localdomain sshd[256168]: error: kex_exchange_identification: Connection closed by remote host
Feb 20 09:31:07 np0005625203.localdomain sshd[256168]: Connection closed by 167.172.180.30 port 32996
Feb 20 09:31:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:31:07.642 161112 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:31:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:31:07.643 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:31:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:31:07.643 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:31:07 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:31:07 np0005625203.localdomain sudo[256213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-twccgvffjexojhsaqfuyfdjufkshsuvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579867.2653112-124-43549097285731/AnsiballZ_file.py
Feb 20 09:31:07 np0005625203.localdomain sudo[256213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:31:07 np0005625203.localdomain systemd[1]: tmp-crun.R78VpX.mount: Deactivated successfully.
Feb 20 09:31:07 np0005625203.localdomain podman[256215]: 2026-02-20 09:31:07.789340003 +0000 UTC m=+0.099699937 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, 
maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:31:07 np0005625203.localdomain podman[256215]: 2026-02-20 09:31:07.83011923 +0000 UTC m=+0.140479124 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, config_id=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible)
Feb 20 09:31:07 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:31:07 np0005625203.localdomain python3.9[256216]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/container-startup-config setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:31:07 np0005625203.localdomain sudo[256213]: pam_unix(sudo:session): session closed for user root
Feb 20 09:31:08 np0005625203.localdomain sudo[256341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-taogixsfacezhavsjaxqjrwfqhskprsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579868.0224597-124-102836261283895/AnsiballZ_file.py
Feb 20 09:31:08 np0005625203.localdomain sudo[256341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:31:08 np0005625203.localdomain python3.9[256343]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:31:08 np0005625203.localdomain sudo[256341]: pam_unix(sudo:session): session closed for user root
Feb 20 09:31:09 np0005625203.localdomain sudo[256451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jekagxlprbluxjoeatcrnrguplqwgjhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579868.9097335-124-128497501798127/AnsiballZ_file.py
Feb 20 09:31:09 np0005625203.localdomain sudo[256451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:31:09 np0005625203.localdomain python3.9[256453]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/neutron-dhcp-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:31:09 np0005625203.localdomain sudo[256451]: pam_unix(sudo:session): session closed for user root
Feb 20 09:31:09 np0005625203.localdomain sudo[256477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:31:09 np0005625203.localdomain sudo[256477]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:31:09 np0005625203.localdomain sudo[256477]: pam_unix(sudo:session): session closed for user root
Feb 20 09:31:09 np0005625203.localdomain sudo[256543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 09:31:09 np0005625203.localdomain sudo[256543]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:31:09 np0005625203.localdomain sudo[256597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nwcntgtmfezygdpgzlnzcmnvurbeimph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579869.5444987-124-182907279343752/AnsiballZ_file.py
Feb 20 09:31:09 np0005625203.localdomain sudo[256597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:31:10 np0005625203.localdomain python3.9[256599]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:31:10 np0005625203.localdomain sudo[256597]: pam_unix(sudo:session): session closed for user root
Feb 20 09:31:10 np0005625203.localdomain sudo[256543]: pam_unix(sudo:session): session closed for user root
Feb 20 09:31:10 np0005625203.localdomain sshd[256710]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:31:10 np0005625203.localdomain sudo[256740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cvpoabmlsacgncxwkcqrxmswyrmvwogm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579870.1869938-124-173359388853125/AnsiballZ_file.py
Feb 20 09:31:10 np0005625203.localdomain sudo[256740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:31:10 np0005625203.localdomain python3.9[256742]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:31:10 np0005625203.localdomain sudo[256740]: pam_unix(sudo:session): session closed for user root
Feb 20 09:31:10 np0005625203.localdomain sudo[256798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:31:10 np0005625203.localdomain sudo[256798]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:31:10 np0005625203.localdomain sudo[256798]: pam_unix(sudo:session): session closed for user root
Feb 20 09:31:11 np0005625203.localdomain sudo[256868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qjgsplluhhskefueodbuxnhirgxrlgny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579870.855378-124-2493779173361/AnsiballZ_file.py
Feb 20 09:31:11 np0005625203.localdomain sudo[256868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:31:11 np0005625203.localdomain python3.9[256870]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ns-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:31:11 np0005625203.localdomain sudo[256868]: pam_unix(sudo:session): session closed for user root
Feb 20 09:31:11 np0005625203.localdomain sudo[256978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tvsvidljjmperrnaohhinsgvyiktnpuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579871.4558067-124-10822189353191/AnsiballZ_file.py
Feb 20 09:31:11 np0005625203.localdomain sudo[256978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:31:11 np0005625203.localdomain sshd[256710]: Invalid user admin from 34.131.211.42 port 46276
Feb 20 09:31:11 np0005625203.localdomain python3.9[256980]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:31:11 np0005625203.localdomain sudo[256978]: pam_unix(sudo:session): session closed for user root
Feb 20 09:31:12 np0005625203.localdomain sshd[256710]: Received disconnect from 34.131.211.42 port 46276:11: Bye Bye [preauth]
Feb 20 09:31:12 np0005625203.localdomain sshd[256710]: Disconnected from invalid user admin 34.131.211.42 port 46276 [preauth]
Feb 20 09:31:12 np0005625203.localdomain sudo[257088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eiesxmhksipkmwzdgenelrakytnkhhkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579872.1908674-274-23731685658253/AnsiballZ_stat.py
Feb 20 09:31:12 np0005625203.localdomain sudo[257088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:31:12 np0005625203.localdomain python3.9[257090]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/container-startup-config/neutron_dhcp_agent.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:31:12 np0005625203.localdomain sudo[257088]: pam_unix(sudo:session): session closed for user root
Feb 20 09:31:13 np0005625203.localdomain sudo[257176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mlniqecshkwvywwqhkpjuwewiopzhbqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579872.1908674-274-23731685658253/AnsiballZ_copy.py
Feb 20 09:31:13 np0005625203.localdomain sudo[257176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:31:13 np0005625203.localdomain python3.9[257178]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/container-startup-config/neutron_dhcp_agent.yaml mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771579872.1908674-274-23731685658253/.source.yaml follow=False _original_basename=neutron_dhcp_agent.yaml.j2 checksum=472c5e922ae22c8bdcaef73d1ca73ce5597b440e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:31:13 np0005625203.localdomain sudo[257176]: pam_unix(sudo:session): session closed for user root
Feb 20 09:31:14 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18942 DF PROTO=TCP SPT=50380 DPT=9102 SEQ=268398850 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0698810000000001030307) 
Feb 20 09:31:14 np0005625203.localdomain python3.9[257286]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-dhcp-agent/01-neutron.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:31:15 np0005625203.localdomain python3.9[257372]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-dhcp-agent/01-neutron.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771579873.6727724-319-73470160505980/.source.conf follow=False _original_basename=neutron.conf.j2 checksum=24e013b64eb8be4a13596c6ffccbd94df7442bd2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:31:15 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:31:15 np0005625203.localdomain python3.9[257480]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-dhcp-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:31:15 np0005625203.localdomain podman[257481]: 2026-02-20 09:31:15.765045638 +0000 UTC m=+0.079753811 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 20 09:31:15 np0005625203.localdomain podman[257481]: 2026-02-20 09:31:15.773428517 +0000 UTC m=+0.088136650 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 20 09:31:15 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 09:31:16 np0005625203.localdomain python3.9[257589]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-dhcp-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771579875.3225527-319-156088839448685/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:31:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:31:17.179 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:31:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:31:17.180 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:31:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:31:17.180 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:31:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:31:17.180 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:31:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:31:17.180 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:31:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:31:17.181 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:31:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:31:17.181 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:31:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:31:17.181 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:31:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:31:17.181 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:31:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:31:17.181 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:31:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:31:17.181 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:31:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:31:17.181 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:31:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:31:17.182 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:31:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:31:17.182 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:31:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:31:17.182 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:31:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:31:17.182 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:31:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:31:17.182 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:31:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:31:17.182 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:31:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:31:17.182 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:31:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:31:17.183 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:31:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:31:17.183 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:31:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:31:17.183 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:31:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:31:17.183 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:31:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:31:17.183 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:31:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:31:17.183 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:31:17 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:31:17.199 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:31:17 np0005625203.localdomain python3.9[257697]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-dhcp-agent/01-neutron-dhcp-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:31:17 np0005625203.localdomain python3.9[257783]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-dhcp-agent/01-neutron-dhcp-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771579876.8095698-319-176954312557086/.source.conf follow=False _original_basename=neutron-dhcp-agent.conf.j2 checksum=6ea7e5c75393b1b86129c9eea6beb812afc47291 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:31:18 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:31:18.199 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:31:18 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:31:18.200 228941 DEBUG nova.compute.manager [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 20 09:31:18 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:31:18.200 228941 DEBUG nova.compute.manager [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 20 09:31:18 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:31:18.213 228941 DEBUG nova.compute.manager [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 20 09:31:18 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:31:18.214 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:31:19 np0005625203.localdomain python3.9[257891]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-dhcp-agent/10-neutron-dhcp.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:31:19 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:31:19.199 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:31:19 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:31:19.199 228941 DEBUG nova.compute.manager [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 20 09:31:19 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:31:19.200 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:31:19 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:31:19.224 228941 DEBUG oslo_concurrency.lockutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:31:19 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:31:19.225 228941 DEBUG oslo_concurrency.lockutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:31:19 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:31:19.225 228941 DEBUG oslo_concurrency.lockutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:31:19 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:31:19.225 228941 DEBUG nova.compute.resource_tracker [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Auditing locally available compute resources for np0005625203.localdomain (node: np0005625203.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 20 09:31:19 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:31:19.226 228941 DEBUG oslo_concurrency.processutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:31:19 np0005625203.localdomain python3.9[257997]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-dhcp-agent/10-neutron-dhcp.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771579878.7223928-493-33031318545819/.source.conf _original_basename=10-neutron-dhcp.conf follow=False checksum=13d630d090b626c2aab1085bca0daa7abb0cabfd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:31:19 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:31:19.677 228941 DEBUG oslo_concurrency.processutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:31:19 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:31:19.828 228941 WARNING nova.virt.libvirt.driver [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 20 09:31:19 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:31:19.829 228941 DEBUG nova.compute.resource_tracker [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Hypervisor/Node resource view: name=np0005625203.localdomain free_ram=13054MB free_disk=41.83720779418945GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 20 09:31:19 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:31:19.829 228941 DEBUG oslo_concurrency.lockutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:31:19 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:31:19.829 228941 DEBUG oslo_concurrency.lockutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:31:19 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:31:19.878 228941 DEBUG nova.compute.resource_tracker [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 20 09:31:19 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:31:19.879 228941 DEBUG nova.compute.resource_tracker [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Final resource view: name=np0005625203.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 20 09:31:19 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:31:19.911 228941 DEBUG oslo_concurrency.processutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:31:19 np0005625203.localdomain sshd[258072]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:31:20 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:31:20.367 228941 DEBUG oslo_concurrency.processutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:31:20 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:31:20.374 228941 DEBUG nova.compute.provider_tree [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Inventory has not changed in ProviderTree for provider: e5d5157a-2df2-4f51-b5fb-cd2da3a8584e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 20 09:31:20 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:31:20.393 228941 DEBUG nova.scheduler.client.report [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Inventory has not changed for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 20 09:31:20 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:31:20.396 228941 DEBUG nova.compute.resource_tracker [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Compute_service record updated for np0005625203.localdomain:np0005625203.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 20 09:31:20 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:31:20.396 228941 DEBUG oslo_concurrency.lockutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.567s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:31:20 np0005625203.localdomain python3.9[258129]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/dhcp_agent_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:31:20 np0005625203.localdomain sshd[258072]: Invalid user claude from 5.253.59.68 port 54100
Feb 20 09:31:20 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:31:20 np0005625203.localdomain sshd[258072]: Received disconnect from 5.253.59.68 port 54100:11: Bye Bye [preauth]
Feb 20 09:31:20 np0005625203.localdomain sshd[258072]: Disconnected from invalid user claude 5.253.59.68 port 54100 [preauth]
Feb 20 09:31:20 np0005625203.localdomain podman[258165]: 2026-02-20 09:31:20.632110335 +0000 UTC m=+0.086515479 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 20 09:31:20 np0005625203.localdomain podman[258165]: 2026-02-20 09:31:20.645530449 +0000 UTC m=+0.099935563 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 20 09:31:20 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:31:20 np0005625203.localdomain python3.9[258240]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/dhcp_agent_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771579879.8269877-538-139647905878623/.source follow=False _original_basename=haproxy.j2 checksum=eddfecb822bb60e7241db0fd719c7552d2d25452 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:31:21 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:31:21.397 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:31:21 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:31:21.397 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:31:21 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:31:21.397 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:31:21 np0005625203.localdomain python3.9[258348]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/dhcp_agent_dnsmasq_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:31:21 np0005625203.localdomain python3.9[258434]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/dhcp_agent_dnsmasq_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771579881.0263505-538-254180820932607/.source follow=False _original_basename=dnsmasq.j2 checksum=a6b8b2fb47e7419d250eaee9e3565b13fff8f42e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:31:22 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:31:22.195 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:31:22 np0005625203.localdomain python3.9[258542]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:31:23 np0005625203.localdomain python3.9[258597]: ansible-ansible.legacy.file Invoked with mode=0755 setype=container_file_t dest=/var/lib/neutron/kill_scripts/haproxy-kill _original_basename=kill-script.j2 recurse=False state=file path=/var/lib/neutron/kill_scripts/haproxy-kill force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:31:23 np0005625203.localdomain python3.9[258705]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/dnsmasq-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:31:24 np0005625203.localdomain sshd[258792]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:31:24 np0005625203.localdomain python3.9[258791]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/dnsmasq-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771579883.2609901-625-64161147944142/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:31:25 np0005625203.localdomain sshd[258792]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:31:25 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:31:25 np0005625203.localdomain podman[258865]: 2026-02-20 09:31:25.375347714 +0000 UTC m=+0.082279508 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127)
Feb 20 09:31:25 np0005625203.localdomain podman[258865]: 2026-02-20 09:31:25.407795245 +0000 UTC m=+0.114727009 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 20 09:31:25 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:31:25 np0005625203.localdomain python3.9[258916]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:31:26 np0005625203.localdomain sudo[259029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bvdchnphnpjnnxhwdtvcvevxpnuivped ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579885.8209682-730-142961353056737/AnsiballZ_file.py
Feb 20 09:31:26 np0005625203.localdomain sudo[259029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:31:26 np0005625203.localdomain python3.9[259031]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:31:26 np0005625203.localdomain sudo[259029]: pam_unix(sudo:session): session closed for user root
Feb 20 09:31:27 np0005625203.localdomain sudo[259139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fmqoywammdmstmjqjqaganwjbwpuxqfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579887.1984618-754-33723663316004/AnsiballZ_stat.py
Feb 20 09:31:27 np0005625203.localdomain sudo[259139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:31:27 np0005625203.localdomain python3.9[259141]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:31:27 np0005625203.localdomain sudo[259139]: pam_unix(sudo:session): session closed for user root
Feb 20 09:31:27 np0005625203.localdomain sudo[259196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bexwnnttiyxlychevmfldluttoaivrcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579887.1984618-754-33723663316004/AnsiballZ_file.py
Feb 20 09:31:27 np0005625203.localdomain sudo[259196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:31:28 np0005625203.localdomain python3.9[259198]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:31:28 np0005625203.localdomain sudo[259196]: pam_unix(sudo:session): session closed for user root
Feb 20 09:31:28 np0005625203.localdomain sudo[259306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iroluqtgdrwgqxjattxwbpmmotqogffw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579888.2195573-754-165042999026231/AnsiballZ_stat.py
Feb 20 09:31:28 np0005625203.localdomain sudo[259306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:31:28 np0005625203.localdomain python3.9[259308]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:31:28 np0005625203.localdomain sudo[259306]: pam_unix(sudo:session): session closed for user root
Feb 20 09:31:28 np0005625203.localdomain sudo[259363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qpwlupfiqgpmpswoozecilklegvjilbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579888.2195573-754-165042999026231/AnsiballZ_file.py
Feb 20 09:31:28 np0005625203.localdomain sudo[259363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:31:28 np0005625203.localdomain podman[240359]: time="2026-02-20T09:31:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:31:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:31:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 144995 "" "Go-http-client/1.1"
Feb 20 09:31:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:31:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 15865 "" "Go-http-client/1.1"
Feb 20 09:31:29 np0005625203.localdomain python3.9[259365]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:31:29 np0005625203.localdomain sudo[259363]: pam_unix(sudo:session): session closed for user root
Feb 20 09:31:29 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56111 DF PROTO=TCP SPT=39736 DPT=9102 SEQ=2247533995 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA06D1B50000000001030307) 
Feb 20 09:31:29 np0005625203.localdomain sudo[259473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fzbrbkzlcjypctvewqnrvsemohepbbpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579889.268234-823-253610316215717/AnsiballZ_file.py
Feb 20 09:31:29 np0005625203.localdomain sudo[259473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:31:29 np0005625203.localdomain python3.9[259475]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:31:29 np0005625203.localdomain sudo[259473]: pam_unix(sudo:session): session closed for user root
Feb 20 09:31:30 np0005625203.localdomain sudo[259583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ljmpeifznetqkfnjdtealtedrfdlxsiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579889.90449-847-237306219064048/AnsiballZ_stat.py
Feb 20 09:31:30 np0005625203.localdomain sudo[259583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:31:30 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56112 DF PROTO=TCP SPT=39736 DPT=9102 SEQ=2247533995 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA06D5C10000000001030307) 
Feb 20 09:31:30 np0005625203.localdomain python3.9[259585]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:31:30 np0005625203.localdomain sudo[259583]: pam_unix(sudo:session): session closed for user root
Feb 20 09:31:30 np0005625203.localdomain sudo[259640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zvvfycbhwjwcpqljpptozlbuzsogqrlc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579889.90449-847-237306219064048/AnsiballZ_file.py
Feb 20 09:31:30 np0005625203.localdomain sudo[259640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:31:30 np0005625203.localdomain python3.9[259642]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:31:30 np0005625203.localdomain sudo[259640]: pam_unix(sudo:session): session closed for user root
Feb 20 09:31:31 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18943 DF PROTO=TCP SPT=50380 DPT=9102 SEQ=268398850 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA06D8800000000001030307) 
Feb 20 09:31:31 np0005625203.localdomain sudo[259750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fybgzspithwjyuosowrdpblsrznclyll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579891.023393-883-121496626191232/AnsiballZ_stat.py
Feb 20 09:31:31 np0005625203.localdomain sudo[259750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:31:31 np0005625203.localdomain python3.9[259752]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:31:31 np0005625203.localdomain sudo[259750]: pam_unix(sudo:session): session closed for user root
Feb 20 09:31:31 np0005625203.localdomain sudo[259807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nnyqxmnnrsnflnlfqdpelzsttuxgondn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579891.023393-883-121496626191232/AnsiballZ_file.py
Feb 20 09:31:31 np0005625203.localdomain sudo[259807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:31:31 np0005625203.localdomain python3.9[259809]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:31:31 np0005625203.localdomain sudo[259807]: pam_unix(sudo:session): session closed for user root
Feb 20 09:31:32 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:31:32 np0005625203.localdomain podman[259881]: 2026-02-20 09:31:32.278075719 +0000 UTC m=+0.086396316 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 20 09:31:32 np0005625203.localdomain sudo[259938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-snwnlkotalvwrfivzbmokctxdhbcidty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579892.058921-919-249180075041571/AnsiballZ_systemd.py
Feb 20 09:31:32 np0005625203.localdomain sudo[259938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:31:32 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56113 DF PROTO=TCP SPT=39736 DPT=9102 SEQ=2247533995 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA06DDC00000000001030307) 
Feb 20 09:31:32 np0005625203.localdomain podman[259881]: 2026-02-20 09:31:32.388258187 +0000 UTC m=+0.196578824 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Feb 20 09:31:32 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:31:32 np0005625203.localdomain python3.9[259940]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:31:32 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:31:32 np0005625203.localdomain systemd-sysv-generator[259968]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:31:32 np0005625203.localdomain systemd-rc-local-generator[259964]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:31:32 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:31:32 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 09:31:32 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:31:32 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:31:32 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:31:32 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 09:31:32 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:31:32 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:31:32 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:31:33 np0005625203.localdomain sudo[259938]: pam_unix(sudo:session): session closed for user root
Feb 20 09:31:33 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=21010 DF PROTO=TCP SPT=48392 DPT=9102 SEQ=1448084088 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA06E0800000000001030307) 
Feb 20 09:31:33 np0005625203.localdomain sudo[260088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ohqkikrqawrbqnlrrotvowcjcqbcyunh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579893.197784-943-55760864622040/AnsiballZ_stat.py
Feb 20 09:31:33 np0005625203.localdomain sudo[260088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:31:33 np0005625203.localdomain python3.9[260090]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:31:33 np0005625203.localdomain sudo[260088]: pam_unix(sudo:session): session closed for user root
Feb 20 09:31:33 np0005625203.localdomain sudo[260145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rpuhnnikezxxmeythwkolpvbmzyqvnix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579893.197784-943-55760864622040/AnsiballZ_file.py
Feb 20 09:31:33 np0005625203.localdomain sudo[260145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:31:34 np0005625203.localdomain python3.9[260147]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:31:34 np0005625203.localdomain sudo[260145]: pam_unix(sudo:session): session closed for user root
Feb 20 09:31:34 np0005625203.localdomain sudo[260255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ovadwauedawiaoltltrsufumguvjrzaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579894.296502-979-90302939868383/AnsiballZ_stat.py
Feb 20 09:31:34 np0005625203.localdomain sudo[260255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:31:34 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:31:34 np0005625203.localdomain systemd[1]: tmp-crun.x9KbaL.mount: Deactivated successfully.
Feb 20 09:31:34 np0005625203.localdomain podman[260258]: 2026-02-20 09:31:34.726074634 +0000 UTC m=+0.099529480 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.7, org.opencontainers.image.created=2026-02-05T04:57:10Z, distribution-scope=public, config_id=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2026-02-05T04:57:10Z, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, name=ubi9/ubi-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., io.openshift.expose-services=)
Feb 20 09:31:34 np0005625203.localdomain podman[260258]: 2026-02-20 09:31:34.743268864 +0000 UTC m=+0.116723690 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, io.buildah.version=1.33.7, org.opencontainers.image.created=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347, container_name=openstack_network_exporter, vcs-type=git, distribution-scope=public, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2026-02-05T04:57:10Z, io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, io.openshift.tags=minimal rhel9, version=9.7)
Feb 20 09:31:34 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:31:34 np0005625203.localdomain python3.9[260257]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:31:34 np0005625203.localdomain sudo[260255]: pam_unix(sudo:session): session closed for user root
Feb 20 09:31:35 np0005625203.localdomain sudo[260331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aphmuzxlvwbnxevffxgywpxwmvfnyjsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579894.296502-979-90302939868383/AnsiballZ_file.py
Feb 20 09:31:35 np0005625203.localdomain sudo[260331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:31:35 np0005625203.localdomain python3.9[260333]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:31:35 np0005625203.localdomain sudo[260331]: pam_unix(sudo:session): session closed for user root
Feb 20 09:31:36 np0005625203.localdomain sudo[260441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xmiaxsmfzmvlzfxlpgdljghqkzrgzulu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579895.8811193-1015-169386985888242/AnsiballZ_systemd.py
Feb 20 09:31:36 np0005625203.localdomain sudo[260441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:31:36 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56114 DF PROTO=TCP SPT=39736 DPT=9102 SEQ=2247533995 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA06ED800000000001030307) 
Feb 20 09:31:36 np0005625203.localdomain python3.9[260443]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:31:36 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:31:36 np0005625203.localdomain systemd-rc-local-generator[260463]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:31:36 np0005625203.localdomain systemd-sysv-generator[260471]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:31:36 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:31:36 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 09:31:36 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:31:36 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:31:36 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:31:36 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 09:31:36 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:31:36 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:31:36 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:31:36 np0005625203.localdomain systemd[1]: Starting Create netns directory...
Feb 20 09:31:36 np0005625203.localdomain systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Feb 20 09:31:36 np0005625203.localdomain systemd[1]: netns-placeholder.service: Deactivated successfully.
Feb 20 09:31:36 np0005625203.localdomain systemd[1]: Finished Create netns directory.
Feb 20 09:31:36 np0005625203.localdomain sudo[260441]: pam_unix(sudo:session): session closed for user root
Feb 20 09:31:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:31:37 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 09:31:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:31:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:31:37 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 09:31:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:31:38 np0005625203.localdomain sudo[260592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ccllbwhapmnjrqysbhwkhjqrmusbcfju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579897.9662275-1045-129045079834142/AnsiballZ_file.py
Feb 20 09:31:38 np0005625203.localdomain sudo[260592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:31:38 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:31:38 np0005625203.localdomain podman[260594]: 2026-02-20 09:31:38.34502661 +0000 UTC m=+0.091184604 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:31:38 np0005625203.localdomain podman[260594]: 2026-02-20 09:31:38.356759262 +0000 UTC m=+0.102917286 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:31:38 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:31:38 np0005625203.localdomain python3.9[260595]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:31:38 np0005625203.localdomain sudo[260592]: pam_unix(sudo:session): session closed for user root
Feb 20 09:31:38 np0005625203.localdomain sudo[260720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fuxzexiodrqymljbolbwouyrpukaatxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579898.6604607-1069-106002218547311/AnsiballZ_file.py
Feb 20 09:31:38 np0005625203.localdomain sudo[260720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:31:39 np0005625203.localdomain python3.9[260722]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:31:39 np0005625203.localdomain sudo[260720]: pam_unix(sudo:session): session closed for user root
Feb 20 09:31:39 np0005625203.localdomain sudo[260830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wsfgdkdyudrynkqdfabnpvqmclrwtryi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579899.337431-1093-148694174284353/AnsiballZ_stat.py
Feb 20 09:31:39 np0005625203.localdomain sudo[260830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:31:39 np0005625203.localdomain python3.9[260832]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/neutron_dhcp_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:31:39 np0005625203.localdomain sudo[260830]: pam_unix(sudo:session): session closed for user root
Feb 20 09:31:40 np0005625203.localdomain sudo[260918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jskbwebhpalxzfrauemyrijbeowlaith ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579899.337431-1093-148694174284353/AnsiballZ_copy.py
Feb 20 09:31:40 np0005625203.localdomain sudo[260918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:31:40 np0005625203.localdomain python3.9[260920]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/neutron_dhcp_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1771579899.337431-1093-148694174284353/.source.json _original_basename=.lc_8u6f4 follow=False checksum=c62829c98c0f9e788d62f52aa71fba276cd98270 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:31:40 np0005625203.localdomain sudo[260918]: pam_unix(sudo:session): session closed for user root
Feb 20 09:31:40 np0005625203.localdomain python3.9[261028]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/neutron_dhcp state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:31:42 np0005625203.localdomain sudo[261330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ynkqvpfrqftzolqktlzsplfqtzvpaltm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579902.5134776-1213-173607420209097/AnsiballZ_container_config_data.py
Feb 20 09:31:42 np0005625203.localdomain sudo[261330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:31:43 np0005625203.localdomain python3.9[261332]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/neutron_dhcp config_pattern=*.json debug=False
Feb 20 09:31:43 np0005625203.localdomain sudo[261330]: pam_unix(sudo:session): session closed for user root
Feb 20 09:31:43 np0005625203.localdomain sudo[261440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wfrtzwfwakxgsjqqdzrgortknkceqguq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579903.5931215-1246-69765518548963/AnsiballZ_container_config_hash.py
Feb 20 09:31:43 np0005625203.localdomain sudo[261440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:31:44 np0005625203.localdomain python3.9[261442]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb 20 09:31:44 np0005625203.localdomain sudo[261440]: pam_unix(sudo:session): session closed for user root
Feb 20 09:31:44 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56115 DF PROTO=TCP SPT=39736 DPT=9102 SEQ=2247533995 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA070E810000000001030307) 
Feb 20 09:31:45 np0005625203.localdomain sudo[261550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oxvchbficnxrclofukrccmzrurnyjpsq ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771579904.5976825-1276-60009219559737/AnsiballZ_edpm_container_manage.py
Feb 20 09:31:45 np0005625203.localdomain sudo[261550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:31:45 np0005625203.localdomain python3[261552]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/neutron_dhcp config_id=neutron_dhcp config_overrides={} config_patterns=*.json containers=['neutron_dhcp_agent'] log_base_path=/var/log/containers/stdouts debug=False
Feb 20 09:31:45 np0005625203.localdomain podman[261590]: 
Feb 20 09:31:45 np0005625203.localdomain podman[261590]: 2026-02-20 09:31:45.61854329 +0000 UTC m=+0.086847149 container create 7f4f9cc378bd6c360d8dad20f9fe3eb85e64ade54360ef0c24a98b70f33df00d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-4abc16337fc2fe9b0b769fe75443d5883a32020062950ac34bf35330bf625d99'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/openstack/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, config_id=neutron_dhcp, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=neutron_dhcp_agent, org.label-schema.license=GPLv2)
Feb 20 09:31:45 np0005625203.localdomain podman[261590]: 2026-02-20 09:31:45.572051767 +0000 UTC m=+0.040355686 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:31:45 np0005625203.localdomain python3[261552]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name neutron_dhcp_agent --cgroupns=host --conmon-pidfile /run/neutron_dhcp_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-4abc16337fc2fe9b0b769fe75443d5883a32020062950ac34bf35330bf625d99 --label config_id=neutron_dhcp --label container_name=neutron_dhcp_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-4abc16337fc2fe9b0b769fe75443d5883a32020062950ac34bf35330bf625d99'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/openstack/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/netns:/run/netns:shared --volume /var/lib/openstack/neutron-dhcp-agent:/etc/neutron.conf.d:z --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /run/openvswitch:/run/openvswitch:shared,z --volume /var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:31:45 np0005625203.localdomain sudo[261550]: pam_unix(sudo:session): session closed for user root
Feb 20 09:31:46 np0005625203.localdomain sudo[261735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-daqbhuslxkdwsxwscydidjixvizslpoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579905.9856665-1300-163383575939600/AnsiballZ_stat.py
Feb 20 09:31:46 np0005625203.localdomain sudo[261735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:31:46 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:31:46 np0005625203.localdomain systemd[1]: tmp-crun.ZRdVdw.mount: Deactivated successfully.
Feb 20 09:31:46 np0005625203.localdomain podman[261738]: 2026-02-20 09:31:46.371015126 +0000 UTC m=+0.097434136 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 09:31:46 np0005625203.localdomain podman[261738]: 2026-02-20 09:31:46.384245314 +0000 UTC m=+0.110664334 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 20 09:31:46 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 09:31:46 np0005625203.localdomain python3.9[261737]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:31:46 np0005625203.localdomain sudo[261735]: pam_unix(sudo:session): session closed for user root
Feb 20 09:31:46 np0005625203.localdomain sudo[261870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mcdubvqupmqnueuqackthovpoawjedow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579906.7598898-1327-249190041351992/AnsiballZ_file.py
Feb 20 09:31:46 np0005625203.localdomain sudo[261870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:31:47 np0005625203.localdomain python3.9[261872]: ansible-file Invoked with path=/etc/systemd/system/edpm_neutron_dhcp_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:31:47 np0005625203.localdomain sudo[261870]: pam_unix(sudo:session): session closed for user root
Feb 20 09:31:47 np0005625203.localdomain sudo[261925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tlrdfmhzylahobditabzwxsholofwlba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579906.7598898-1327-249190041351992/AnsiballZ_stat.py
Feb 20 09:31:47 np0005625203.localdomain sudo[261925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:31:47 np0005625203.localdomain python3.9[261927]: ansible-stat Invoked with path=/etc/systemd/system/edpm_neutron_dhcp_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:31:47 np0005625203.localdomain sudo[261925]: pam_unix(sudo:session): session closed for user root
Feb 20 09:31:48 np0005625203.localdomain sudo[262034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ibkghdqxcrojrvdnerpljzmnxgqmtlsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579907.986632-1327-83269270471584/AnsiballZ_copy.py
Feb 20 09:31:48 np0005625203.localdomain sudo[262034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:31:48 np0005625203.localdomain python3.9[262036]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771579907.986632-1327-83269270471584/source dest=/etc/systemd/system/edpm_neutron_dhcp_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:31:48 np0005625203.localdomain sudo[262034]: pam_unix(sudo:session): session closed for user root
Feb 20 09:31:48 np0005625203.localdomain sudo[262089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jomdcwejnxthuljgiiqmpjhgnwbnafxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579907.986632-1327-83269270471584/AnsiballZ_systemd.py
Feb 20 09:31:48 np0005625203.localdomain sudo[262089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:31:49 np0005625203.localdomain python3.9[262091]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 20 09:31:49 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:31:49 np0005625203.localdomain systemd-sysv-generator[262115]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:31:49 np0005625203.localdomain systemd-rc-local-generator[262111]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:31:49 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:31:49 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 09:31:49 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:31:49 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:31:49 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:31:49 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 09:31:49 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:31:49 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:31:49 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:31:49 np0005625203.localdomain sudo[262089]: pam_unix(sudo:session): session closed for user root
Feb 20 09:31:49 np0005625203.localdomain sudo[262180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ubcciemtjzwwbmvxmklaroxqgbdwrhya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579907.986632-1327-83269270471584/AnsiballZ_systemd.py
Feb 20 09:31:49 np0005625203.localdomain sudo[262180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:31:50 np0005625203.localdomain python3.9[262182]: ansible-systemd Invoked with state=restarted name=edpm_neutron_dhcp_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:31:50 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:31:50 np0005625203.localdomain systemd[1]: tmp-crun.JZDvDD.mount: Deactivated successfully.
Feb 20 09:31:50 np0005625203.localdomain podman[262184]: 2026-02-20 09:31:50.778116298 +0000 UTC m=+0.091335657 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 20 09:31:50 np0005625203.localdomain podman[262184]: 2026-02-20 09:31:50.817427981 +0000 UTC m=+0.130647350 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 20 09:31:50 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:31:51 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:31:51 np0005625203.localdomain systemd-sysv-generator[262233]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:31:51 np0005625203.localdomain systemd-rc-local-generator[262229]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:31:51 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:31:51 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 09:31:51 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:31:51 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:31:51 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:31:51 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 09:31:51 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:31:51 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:31:51 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:31:51 np0005625203.localdomain systemd[1]: Starting neutron_dhcp_agent container...
Feb 20 09:31:51 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:31:51 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/788db1da63c3be7cd5fd27f2f4e23726c925eaeee47d97ca5c5bb7eafb2f8180/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Feb 20 09:31:51 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/788db1da63c3be7cd5fd27f2f4e23726c925eaeee47d97ca5c5bb7eafb2f8180/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:31:51 np0005625203.localdomain podman[262247]: 2026-02-20 09:31:51.560929991 +0000 UTC m=+0.127428062 container init 7f4f9cc378bd6c360d8dad20f9fe3eb85e64ade54360ef0c24a98b70f33df00d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-4abc16337fc2fe9b0b769fe75443d5883a32020062950ac34bf35330bf625d99'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/openstack/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, container_name=neutron_dhcp_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=neutron_dhcp, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:31:51 np0005625203.localdomain podman[262247]: 2026-02-20 09:31:51.571450244 +0000 UTC m=+0.137948315 container start 7f4f9cc378bd6c360d8dad20f9fe3eb85e64ade54360ef0c24a98b70f33df00d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-4abc16337fc2fe9b0b769fe75443d5883a32020062950ac34bf35330bf625d99'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/openstack/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, config_id=neutron_dhcp, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=neutron_dhcp_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Feb 20 09:31:51 np0005625203.localdomain podman[262247]: neutron_dhcp_agent
Feb 20 09:31:51 np0005625203.localdomain neutron_dhcp_agent[262261]: + sudo -E kolla_set_configs
Feb 20 09:31:51 np0005625203.localdomain systemd[1]: Started neutron_dhcp_agent container.
Feb 20 09:31:51 np0005625203.localdomain sudo[262180]: pam_unix(sudo:session): session closed for user root
Feb 20 09:31:51 np0005625203.localdomain neutron_dhcp_agent[262261]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb 20 09:31:51 np0005625203.localdomain neutron_dhcp_agent[262261]: INFO:__main__:Validating config file
Feb 20 09:31:51 np0005625203.localdomain neutron_dhcp_agent[262261]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb 20 09:31:51 np0005625203.localdomain neutron_dhcp_agent[262261]: INFO:__main__:Copying service configuration files
Feb 20 09:31:51 np0005625203.localdomain neutron_dhcp_agent[262261]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Feb 20 09:31:51 np0005625203.localdomain neutron_dhcp_agent[262261]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Feb 20 09:31:51 np0005625203.localdomain neutron_dhcp_agent[262261]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Feb 20 09:31:51 np0005625203.localdomain neutron_dhcp_agent[262261]: INFO:__main__:Writing out command to execute
Feb 20 09:31:51 np0005625203.localdomain neutron_dhcp_agent[262261]: INFO:__main__:Setting permission for /var/lib/neutron
Feb 20 09:31:51 np0005625203.localdomain neutron_dhcp_agent[262261]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Feb 20 09:31:51 np0005625203.localdomain neutron_dhcp_agent[262261]: INFO:__main__:Setting permission for /var/lib/neutron/.cache
Feb 20 09:31:51 np0005625203.localdomain neutron_dhcp_agent[262261]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Feb 20 09:31:51 np0005625203.localdomain neutron_dhcp_agent[262261]: INFO:__main__:Setting permission for /var/lib/neutron/external
Feb 20 09:31:51 np0005625203.localdomain neutron_dhcp_agent[262261]: INFO:__main__:Setting permission for /var/lib/neutron/ns-metadata-proxy
Feb 20 09:31:51 np0005625203.localdomain neutron_dhcp_agent[262261]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Feb 20 09:31:51 np0005625203.localdomain neutron_dhcp_agent[262261]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy
Feb 20 09:31:51 np0005625203.localdomain neutron_dhcp_agent[262261]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_haproxy_wrapper
Feb 20 09:31:51 np0005625203.localdomain neutron_dhcp_agent[262261]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_dnsmasq_wrapper
Feb 20 09:31:51 np0005625203.localdomain neutron_dhcp_agent[262261]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Feb 20 09:31:51 np0005625203.localdomain neutron_dhcp_agent[262261]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/dnsmasq-kill
Feb 20 09:31:51 np0005625203.localdomain neutron_dhcp_agent[262261]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints
Feb 20 09:31:51 np0005625203.localdomain neutron_dhcp_agent[262261]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/b9146dd2a0dc3e0bc3fee7bb1b53fa22a55af280b3a177d7a47b63f92e7ebd29
Feb 20 09:31:51 np0005625203.localdomain neutron_dhcp_agent[262261]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/d91d8a949a4b5272256c667b5094a15f5e397c6793efbfa4186752b765c6923b
Feb 20 09:31:51 np0005625203.localdomain neutron_dhcp_agent[262261]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Feb 20 09:31:51 np0005625203.localdomain neutron_dhcp_agent[262261]: ++ cat /run_command
Feb 20 09:31:51 np0005625203.localdomain neutron_dhcp_agent[262261]: + CMD=/usr/bin/neutron-dhcp-agent
Feb 20 09:31:51 np0005625203.localdomain neutron_dhcp_agent[262261]: + ARGS=
Feb 20 09:31:51 np0005625203.localdomain neutron_dhcp_agent[262261]: + sudo kolla_copy_cacerts
Feb 20 09:31:51 np0005625203.localdomain neutron_dhcp_agent[262261]: + [[ ! -n '' ]]
Feb 20 09:31:51 np0005625203.localdomain neutron_dhcp_agent[262261]: + . kolla_extend_start
Feb 20 09:31:51 np0005625203.localdomain neutron_dhcp_agent[262261]: + echo 'Running command: '\''/usr/bin/neutron-dhcp-agent'\'''
Feb 20 09:31:51 np0005625203.localdomain neutron_dhcp_agent[262261]: Running command: '/usr/bin/neutron-dhcp-agent'
Feb 20 09:31:51 np0005625203.localdomain neutron_dhcp_agent[262261]: + umask 0022
Feb 20 09:31:51 np0005625203.localdomain neutron_dhcp_agent[262261]: + exec /usr/bin/neutron-dhcp-agent
Feb 20 09:31:52 np0005625203.localdomain python3.9[262383]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb 20 09:31:52 np0005625203.localdomain neutron_dhcp_agent[262261]: 2026-02-20 09:31:52.886 262265 INFO neutron.common.config [-] Logging enabled!
Feb 20 09:31:52 np0005625203.localdomain neutron_dhcp_agent[262261]: 2026-02-20 09:31:52.887 262265 INFO neutron.common.config [-] /usr/bin/neutron-dhcp-agent version 22.2.2.dev44
Feb 20 09:31:53 np0005625203.localdomain neutron_dhcp_agent[262261]: 2026-02-20 09:31:53.279 262265 INFO neutron.agent.dhcp.agent [-] Synchronizing state
Feb 20 09:31:53 np0005625203.localdomain neutron_dhcp_agent[262261]: 2026-02-20 09:31:53.565 262265 INFO neutron.agent.dhcp.agent [None req-79f46e12-b1a0-45a2-93c8-fd63920aaa8f - - - - - -] All active networks have been fetched through RPC.
Feb 20 09:31:53 np0005625203.localdomain neutron_dhcp_agent[262261]: 2026-02-20 09:31:53.566 262265 INFO neutron.agent.dhcp.agent [None req-79f46e12-b1a0-45a2-93c8-fd63920aaa8f - - - - - -] Synchronizing state complete
Feb 20 09:31:53 np0005625203.localdomain neutron_dhcp_agent[262261]: 2026-02-20 09:31:53.643 262265 INFO neutron.agent.dhcp.agent [None req-79f46e12-b1a0-45a2-93c8-fd63920aaa8f - - - - - -] DHCP agent started
Feb 20 09:31:53 np0005625203.localdomain sudo[262492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iyosnaufirjdvooutfzasbdptizafxzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579913.4270556-1462-20648673390798/AnsiballZ_stat.py
Feb 20 09:31:53 np0005625203.localdomain sudo[262492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:31:53 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:31:53.915 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'c2:c2:14', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '46:d0:69:88:97:47'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:31:53 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:31:53.917 161112 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 20 09:31:53 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:31:53.919 161112 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1e4d60e6-0be0-4143-b488-1b391fbc71ef, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 20 09:31:53 np0005625203.localdomain python3.9[262494]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:31:53 np0005625203.localdomain sudo[262492]: pam_unix(sudo:session): session closed for user root
Feb 20 09:31:54 np0005625203.localdomain sudo[262582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-olmvynrfpsvbkwtfgsfpeinxopfiyszh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579913.4270556-1462-20648673390798/AnsiballZ_copy.py
Feb 20 09:31:54 np0005625203.localdomain sudo[262582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:31:54 np0005625203.localdomain python3.9[262584]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771579913.4270556-1462-20648673390798/.source.yaml _original_basename=.6qcmrl7v follow=False checksum=b9ca88bcb32671aca7ddecc5a041bae0cf925d73 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:31:54 np0005625203.localdomain sudo[262582]: pam_unix(sudo:session): session closed for user root
Feb 20 09:31:54 np0005625203.localdomain sudo[262692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kkudapkgzdozogrewnypqqyttyhrhjzs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579914.7133713-1507-75106226488959/AnsiballZ_systemd.py
Feb 20 09:31:54 np0005625203.localdomain sudo[262692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:31:55 np0005625203.localdomain python3.9[262694]: ansible-ansible.builtin.systemd Invoked with name=edpm_neutron_dhcp_agent.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 20 09:31:55 np0005625203.localdomain systemd[1]: Stopping neutron_dhcp_agent container...
Feb 20 09:31:55 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:31:55 np0005625203.localdomain neutron_dhcp_agent[262261]: 2026-02-20 09:31:55.715 262265 WARNING amqp [-] Received method (60, 30) during closing channel 1. This method will be ignored
Feb 20 09:31:55 np0005625203.localdomain systemd[1]: tmp-crun.vq6IWD.mount: Deactivated successfully.
Feb 20 09:31:55 np0005625203.localdomain podman[262710]: 2026-02-20 09:31:55.774749222 +0000 UTC m=+0.090431591 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:31:55 np0005625203.localdomain podman[262710]: 2026-02-20 09:31:55.779779507 +0000 UTC m=+0.095461876 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 20 09:31:55 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:31:56 np0005625203.localdomain systemd[1]: libpod-7f4f9cc378bd6c360d8dad20f9fe3eb85e64ade54360ef0c24a98b70f33df00d.scope: Deactivated successfully.
Feb 20 09:31:56 np0005625203.localdomain systemd[1]: libpod-7f4f9cc378bd6c360d8dad20f9fe3eb85e64ade54360ef0c24a98b70f33df00d.scope: Consumed 2.071s CPU time.
Feb 20 09:31:56 np0005625203.localdomain podman[262698]: 2026-02-20 09:31:56.042309383 +0000 UTC m=+0.703514117 container died 7f4f9cc378bd6c360d8dad20f9fe3eb85e64ade54360ef0c24a98b70f33df00d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_id=neutron_dhcp, container_name=neutron_dhcp_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-4abc16337fc2fe9b0b769fe75443d5883a32020062950ac34bf35330bf625d99'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/openstack/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']})
Feb 20 09:31:56 np0005625203.localdomain podman[262698]: 2026-02-20 09:31:56.087403933 +0000 UTC m=+0.748608617 container cleanup 7f4f9cc378bd6c360d8dad20f9fe3eb85e64ade54360ef0c24a98b70f33df00d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, config_id=neutron_dhcp, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, container_name=neutron_dhcp_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-4abc16337fc2fe9b0b769fe75443d5883a32020062950ac34bf35330bf625d99'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/openstack/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Feb 20 09:31:56 np0005625203.localdomain podman[262698]: neutron_dhcp_agent
Feb 20 09:31:56 np0005625203.localdomain podman[262754]: error opening file `/run/crun/7f4f9cc378bd6c360d8dad20f9fe3eb85e64ade54360ef0c24a98b70f33df00d/status`: No such file or directory
Feb 20 09:31:56 np0005625203.localdomain podman[262742]: 2026-02-20 09:31:56.189016628 +0000 UTC m=+0.062865041 container cleanup 7f4f9cc378bd6c360d8dad20f9fe3eb85e64ade54360ef0c24a98b70f33df00d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=neutron_dhcp, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-4abc16337fc2fe9b0b769fe75443d5883a32020062950ac34bf35330bf625d99'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/openstack/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, container_name=neutron_dhcp_agent)
Feb 20 09:31:56 np0005625203.localdomain podman[262742]: neutron_dhcp_agent
Feb 20 09:31:56 np0005625203.localdomain systemd[1]: edpm_neutron_dhcp_agent.service: Deactivated successfully.
Feb 20 09:31:56 np0005625203.localdomain systemd[1]: Stopped neutron_dhcp_agent container.
Feb 20 09:31:56 np0005625203.localdomain systemd[1]: Starting neutron_dhcp_agent container...
Feb 20 09:31:56 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:31:56 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/788db1da63c3be7cd5fd27f2f4e23726c925eaeee47d97ca5c5bb7eafb2f8180/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Feb 20 09:31:56 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/788db1da63c3be7cd5fd27f2f4e23726c925eaeee47d97ca5c5bb7eafb2f8180/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:31:56 np0005625203.localdomain podman[262756]: 2026-02-20 09:31:56.33341819 +0000 UTC m=+0.114476881 container init 7f4f9cc378bd6c360d8dad20f9fe3eb85e64ade54360ef0c24a98b70f33df00d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, container_name=neutron_dhcp_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-4abc16337fc2fe9b0b769fe75443d5883a32020062950ac34bf35330bf625d99'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/openstack/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=neutron_dhcp, managed_by=edpm_ansible)
Feb 20 09:31:56 np0005625203.localdomain podman[262756]: 2026-02-20 09:31:56.342618514 +0000 UTC m=+0.123677205 container start 7f4f9cc378bd6c360d8dad20f9fe3eb85e64ade54360ef0c24a98b70f33df00d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron_dhcp_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-4abc16337fc2fe9b0b769fe75443d5883a32020062950ac34bf35330bf625d99'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/netns:/run/netns:shared', '/var/lib/openstack/neutron-dhcp-agent:/etc/neutron.conf.d:z', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/kolla/config_files/neutron_dhcp_agent.json:/var/lib/kolla/config_files/config.json:ro', '/run/openvswitch:/run/openvswitch:shared,z', '/var/lib/neutron/dhcp_agent_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/dhcp_agent_dnsmasq_wrapper:/usr/local/bin/dnsmasq:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-dhcp/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z']}, container_name=neutron_dhcp_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=neutron_dhcp, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Feb 20 09:31:56 np0005625203.localdomain podman[262756]: neutron_dhcp_agent
Feb 20 09:31:56 np0005625203.localdomain neutron_dhcp_agent[262771]: + sudo -E kolla_set_configs
Feb 20 09:31:56 np0005625203.localdomain systemd[1]: Started neutron_dhcp_agent container.
Feb 20 09:31:56 np0005625203.localdomain sudo[262692]: pam_unix(sudo:session): session closed for user root
Feb 20 09:31:56 np0005625203.localdomain neutron_dhcp_agent[262771]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb 20 09:31:56 np0005625203.localdomain neutron_dhcp_agent[262771]: INFO:__main__:Validating config file
Feb 20 09:31:56 np0005625203.localdomain neutron_dhcp_agent[262771]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb 20 09:31:56 np0005625203.localdomain neutron_dhcp_agent[262771]: INFO:__main__:Copying service configuration files
Feb 20 09:31:56 np0005625203.localdomain neutron_dhcp_agent[262771]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Feb 20 09:31:56 np0005625203.localdomain neutron_dhcp_agent[262771]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Feb 20 09:31:56 np0005625203.localdomain neutron_dhcp_agent[262771]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Feb 20 09:31:56 np0005625203.localdomain neutron_dhcp_agent[262771]: INFO:__main__:Writing out command to execute
Feb 20 09:31:56 np0005625203.localdomain neutron_dhcp_agent[262771]: INFO:__main__:Setting permission for /var/lib/neutron
Feb 20 09:31:56 np0005625203.localdomain neutron_dhcp_agent[262771]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Feb 20 09:31:56 np0005625203.localdomain neutron_dhcp_agent[262771]: INFO:__main__:Setting permission for /var/lib/neutron/.cache
Feb 20 09:31:56 np0005625203.localdomain neutron_dhcp_agent[262771]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Feb 20 09:31:56 np0005625203.localdomain neutron_dhcp_agent[262771]: INFO:__main__:Setting permission for /var/lib/neutron/external
Feb 20 09:31:56 np0005625203.localdomain neutron_dhcp_agent[262771]: INFO:__main__:Setting permission for /var/lib/neutron/ns-metadata-proxy
Feb 20 09:31:56 np0005625203.localdomain neutron_dhcp_agent[262771]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp
Feb 20 09:31:56 np0005625203.localdomain neutron_dhcp_agent[262771]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Feb 20 09:31:56 np0005625203.localdomain neutron_dhcp_agent[262771]: INFO:__main__:Setting permission for /var/lib/neutron/metadata_proxy
Feb 20 09:31:56 np0005625203.localdomain neutron_dhcp_agent[262771]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_haproxy_wrapper
Feb 20 09:31:56 np0005625203.localdomain neutron_dhcp_agent[262771]: INFO:__main__:Setting permission for /var/lib/neutron/dhcp_agent_dnsmasq_wrapper
Feb 20 09:31:56 np0005625203.localdomain neutron_dhcp_agent[262771]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Feb 20 09:31:56 np0005625203.localdomain neutron_dhcp_agent[262771]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/dnsmasq-kill
Feb 20 09:31:56 np0005625203.localdomain neutron_dhcp_agent[262771]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints
Feb 20 09:31:56 np0005625203.localdomain neutron_dhcp_agent[262771]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/b9146dd2a0dc3e0bc3fee7bb1b53fa22a55af280b3a177d7a47b63f92e7ebd29
Feb 20 09:31:56 np0005625203.localdomain neutron_dhcp_agent[262771]: INFO:__main__:Setting permission for /var/lib/neutron/.cache/python-entrypoints/d91d8a949a4b5272256c667b5094a15f5e397c6793efbfa4186752b765c6923b
Feb 20 09:31:56 np0005625203.localdomain neutron_dhcp_agent[262771]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Feb 20 09:31:56 np0005625203.localdomain neutron_dhcp_agent[262771]: ++ cat /run_command
Feb 20 09:31:56 np0005625203.localdomain neutron_dhcp_agent[262771]: + CMD=/usr/bin/neutron-dhcp-agent
Feb 20 09:31:56 np0005625203.localdomain neutron_dhcp_agent[262771]: + ARGS=
Feb 20 09:31:56 np0005625203.localdomain neutron_dhcp_agent[262771]: + sudo kolla_copy_cacerts
Feb 20 09:31:56 np0005625203.localdomain neutron_dhcp_agent[262771]: + [[ ! -n '' ]]
Feb 20 09:31:56 np0005625203.localdomain neutron_dhcp_agent[262771]: + . kolla_extend_start
Feb 20 09:31:56 np0005625203.localdomain neutron_dhcp_agent[262771]: + echo 'Running command: '\''/usr/bin/neutron-dhcp-agent'\'''
Feb 20 09:31:56 np0005625203.localdomain neutron_dhcp_agent[262771]: Running command: '/usr/bin/neutron-dhcp-agent'
Feb 20 09:31:56 np0005625203.localdomain neutron_dhcp_agent[262771]: + umask 0022
Feb 20 09:31:56 np0005625203.localdomain neutron_dhcp_agent[262771]: + exec /usr/bin/neutron-dhcp-agent
Feb 20 09:31:57 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:31:57.550 262775 INFO neutron.common.config [-] Logging enabled!
Feb 20 09:31:57 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:31:57.550 262775 INFO neutron.common.config [-] /usr/bin/neutron-dhcp-agent version 22.2.2.dev44
Feb 20 09:31:57 np0005625203.localdomain sshd[255654]: pam_unix(sshd:session): session closed for user zuul
Feb 20 09:31:57 np0005625203.localdomain systemd-logind[759]: Session 57 logged out. Waiting for processes to exit.
Feb 20 09:31:57 np0005625203.localdomain systemd[1]: session-57.scope: Deactivated successfully.
Feb 20 09:31:57 np0005625203.localdomain systemd[1]: session-57.scope: Consumed 35.534s CPU time.
Feb 20 09:31:57 np0005625203.localdomain systemd-logind[759]: Removed session 57.
Feb 20 09:31:57 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:31:57.918 262775 INFO neutron.agent.dhcp.agent [-] Synchronizing state
Feb 20 09:31:58 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:31:58.746 262775 INFO neutron.agent.dhcp.agent [None req-f02b4e5b-3800-450b-885c-26fbd9347bf1 - - - - - -] All active networks have been fetched through RPC.
Feb 20 09:31:58 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:31:58.746 262775 INFO neutron.agent.dhcp.agent [None req-f02b4e5b-3800-450b-885c-26fbd9347bf1 - - - - - -] Synchronizing state complete
Feb 20 09:31:58 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:31:58.784 262775 INFO neutron.agent.dhcp.agent [None req-f02b4e5b-3800-450b-885c-26fbd9347bf1 - - - - - -] DHCP agent started
Feb 20 09:31:58 np0005625203.localdomain podman[240359]: time="2026-02-20T09:31:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:31:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:31:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 147345 "" "Go-http-client/1.1"
Feb 20 09:31:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:31:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16306 "" "Go-http-client/1.1"
Feb 20 09:31:59 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46030 DF PROTO=TCP SPT=54062 DPT=9102 SEQ=1758479041 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0746E50000000001030307) 
Feb 20 09:32:00 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46031 DF PROTO=TCP SPT=54062 DPT=9102 SEQ=1758479041 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA074B000000000001030307) 
Feb 20 09:32:01 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56116 DF PROTO=TCP SPT=39736 DPT=9102 SEQ=2247533995 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA074E800000000001030307) 
Feb 20 09:32:02 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46032 DF PROTO=TCP SPT=54062 DPT=9102 SEQ=1758479041 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0753000000000001030307) 
Feb 20 09:32:02 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:32:02 np0005625203.localdomain podman[262805]: 2026-02-20 09:32:02.771356772 +0000 UTC m=+0.084444395 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Feb 20 09:32:02 np0005625203.localdomain podman[262805]: 2026-02-20 09:32:02.829514756 +0000 UTC m=+0.142602359 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 20 09:32:02 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:32:03 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=18944 DF PROTO=TCP SPT=50380 DPT=9102 SEQ=268398850 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0756800000000001030307) 
Feb 20 09:32:05 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:32:05 np0005625203.localdomain podman[262831]: 2026-02-20 09:32:05.755584384 +0000 UTC m=+0.076737868 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, managed_by=edpm_ansible, release=1770267347, vendor=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9/ubi-minimal, vcs-type=git, config_id=openstack_network_exporter, version=9.7, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Feb 20 09:32:05 np0005625203.localdomain podman[262831]: 2026-02-20 09:32:05.769214983 +0000 UTC m=+0.090368457 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, distribution-scope=public, build-date=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, release=1770267347, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, config_id=openstack_network_exporter, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=9.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9/ubi-minimal)
Feb 20 09:32:05 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:32:06 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46033 DF PROTO=TCP SPT=54062 DPT=9102 SEQ=1758479041 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0762C00000000001030307) 
Feb 20 09:32:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:32:07 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 09:32:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:32:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:32:07 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 09:32:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:32:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:32:07.643 161112 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:32:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:32:07.644 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:32:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:32:07.644 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:32:08 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:32:08 np0005625203.localdomain podman[262852]: 2026-02-20 09:32:08.762401892 +0000 UTC m=+0.079245325 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, 
container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0)
Feb 20 09:32:08 np0005625203.localdomain podman[262852]: 2026-02-20 09:32:08.776239499 +0000 UTC m=+0.093082932 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, 
managed_by=edpm_ansible)
Feb 20 09:32:08 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:32:11 np0005625203.localdomain sudo[262870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:32:11 np0005625203.localdomain sudo[262870]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:32:11 np0005625203.localdomain sudo[262870]: pam_unix(sudo:session): session closed for user root
Feb 20 09:32:11 np0005625203.localdomain sudo[262888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 09:32:11 np0005625203.localdomain sudo[262888]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:32:11 np0005625203.localdomain sudo[262888]: pam_unix(sudo:session): session closed for user root
Feb 20 09:32:14 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46034 DF PROTO=TCP SPT=54062 DPT=9102 SEQ=1758479041 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0782800000000001030307) 
Feb 20 09:32:14 np0005625203.localdomain sudo[262939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:32:14 np0005625203.localdomain sudo[262939]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:32:14 np0005625203.localdomain sudo[262939]: pam_unix(sudo:session): session closed for user root
Feb 20 09:32:16 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:32:16 np0005625203.localdomain podman[262957]: 2026-02-20 09:32:16.770575847 +0000 UTC m=+0.084114706 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 20 09:32:16 np0005625203.localdomain podman[262957]: 2026-02-20 09:32:16.784256618 +0000 UTC m=+0.097795507 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 09:32:16 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 09:32:17 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:32:17.199 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:32:18 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:32:18.200 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:32:18 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:32:18.200 228941 DEBUG nova.compute.manager [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 20 09:32:18 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:32:18.200 228941 DEBUG nova.compute.manager [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 20 09:32:18 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:32:18.230 228941 DEBUG nova.compute.manager [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 20 09:32:19 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:32:19.199 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:32:19 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:32:19.199 228941 DEBUG nova.compute.manager [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 20 09:32:19 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:32:19.200 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:32:20 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:32:20.336 228941 DEBUG oslo_concurrency.lockutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:32:20 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:32:20.336 228941 DEBUG oslo_concurrency.lockutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:32:20 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:32:20.336 228941 DEBUG oslo_concurrency.lockutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:32:20 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:32:20.336 228941 DEBUG nova.compute.resource_tracker [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Auditing locally available compute resources for np0005625203.localdomain (node: np0005625203.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 20 09:32:20 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:32:20.336 228941 DEBUG oslo_concurrency.processutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:32:20 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:32:20.758 228941 DEBUG oslo_concurrency.processutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:32:20 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:32:20.980 228941 WARNING nova.virt.libvirt.driver [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 20 09:32:20 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:32:20.982 228941 DEBUG nova.compute.resource_tracker [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Hypervisor/Node resource view: name=np0005625203.localdomain free_ram=12940MB free_disk=41.83720779418945GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": 
null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 20 09:32:20 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:32:20.982 228941 DEBUG oslo_concurrency.lockutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:32:20 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:32:20.983 228941 DEBUG oslo_concurrency.lockutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:32:21 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:32:21.034 228941 DEBUG nova.compute.resource_tracker [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 20 09:32:21 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:32:21.035 228941 DEBUG nova.compute.resource_tracker [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Final resource view: name=np0005625203.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 20 09:32:21 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:32:21.058 228941 DEBUG oslo_concurrency.processutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:32:21 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:32:21.530 228941 DEBUG oslo_concurrency.processutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:32:21 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:32:21.537 228941 DEBUG nova.compute.provider_tree [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Inventory has not changed in ProviderTree for provider: e5d5157a-2df2-4f51-b5fb-cd2da3a8584e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 20 09:32:21 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:32:21.560 228941 DEBUG nova.scheduler.client.report [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Inventory has not changed for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 20 09:32:21 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:32:21.562 228941 DEBUG nova.compute.resource_tracker [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Compute_service record updated for np0005625203.localdomain:np0005625203.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 20 09:32:21 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:32:21.563 228941 DEBUG oslo_concurrency.lockutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.580s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:32:21 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:32:21 np0005625203.localdomain systemd[1]: tmp-crun.W3tIS2.mount: Deactivated successfully.
Feb 20 09:32:21 np0005625203.localdomain podman[263024]: 2026-02-20 09:32:21.745408226 +0000 UTC m=+0.068342628 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 20 09:32:21 np0005625203.localdomain podman[263024]: 2026-02-20 09:32:21.757150549 +0000 UTC m=+0.080084901 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 20 09:32:21 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:32:22 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:32:22.563 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:32:22 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:32:22.580 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:32:22 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:32:22.581 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:32:22 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:32:22.581 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:32:23 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:32:23.200 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:32:23 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:32:23.200 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:32:26 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:32:26 np0005625203.localdomain podman[263048]: 2026-02-20 09:32:26.759421668 +0000 UTC m=+0.077367407 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:32:26 np0005625203.localdomain podman[263048]: 2026-02-20 09:32:26.797213274 +0000 UTC m=+0.115159013 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Feb 20 09:32:26 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:32:28 np0005625203.localdomain podman[240359]: time="2026-02-20T09:32:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:32:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:32:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 147345 "" "Go-http-client/1.1"
Feb 20 09:32:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:32:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16311 "" "Go-http-client/1.1"
Feb 20 09:32:29 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17090 DF PROTO=TCP SPT=56090 DPT=9102 SEQ=3883175571 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA07BC170000000001030307) 
Feb 20 09:32:30 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17091 DF PROTO=TCP SPT=56090 DPT=9102 SEQ=3883175571 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA07C0000000000001030307) 
Feb 20 09:32:30 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46035 DF PROTO=TCP SPT=54062 DPT=9102 SEQ=1758479041 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA07C2800000000001030307) 
Feb 20 09:32:32 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17092 DF PROTO=TCP SPT=56090 DPT=9102 SEQ=3883175571 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA07C8000000000001030307) 
Feb 20 09:32:33 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=56117 DF PROTO=TCP SPT=39736 DPT=9102 SEQ=2247533995 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA07CC800000000001030307) 
Feb 20 09:32:33 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:32:33 np0005625203.localdomain podman[263066]: 2026-02-20 09:32:33.744005156 +0000 UTC m=+0.065581083 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:32:33 np0005625203.localdomain podman[263066]: 2026-02-20 09:32:33.781623847 +0000 UTC m=+0.103199764 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Feb 20 09:32:33 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:32:36 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17093 DF PROTO=TCP SPT=56090 DPT=9102 SEQ=3883175571 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA07D7C00000000001030307) 
Feb 20 09:32:36 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:32:36 np0005625203.localdomain podman[263091]: 2026-02-20 09:32:36.771169362 +0000 UTC m=+0.087416697 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, version=9.7, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., build-date=2026-02-05T04:57:10Z, distribution-scope=public, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., org.opencontainers.image.created=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, name=ubi9/ubi-minimal, release=1770267347)
Feb 20 09:32:36 np0005625203.localdomain podman[263091]: 2026-02-20 09:32:36.783643816 +0000 UTC m=+0.099891151 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, io.openshift.expose-services=, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.7, name=ubi9/ubi-minimal, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, container_name=openstack_network_exporter, build-date=2026-02-05T04:57:10Z, io.buildah.version=1.33.7, architecture=x86_64, org.opencontainers.image.created=2026-02-05T04:57:10Z)
Feb 20 09:32:36 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:32:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:32:37 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 09:32:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:32:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:32:37 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 09:32:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:32:39 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:32:39 np0005625203.localdomain podman[263111]: 2026-02-20 09:32:39.773629088 +0000 UTC m=+0.089020767 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Feb 20 09:32:39 np0005625203.localdomain podman[263111]: 2026-02-20 09:32:39.78472074 +0000 UTC m=+0.100112359 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:32:39 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:32:41 np0005625203.localdomain sshd[263128]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:32:43 np0005625203.localdomain sshd[263128]: Received disconnect from 118.99.80.29 port 26142:11: Bye Bye [preauth]
Feb 20 09:32:43 np0005625203.localdomain sshd[263128]: Disconnected from authenticating user root 118.99.80.29 port 26142 [preauth]
Feb 20 09:32:43 np0005625203.localdomain sshd[263130]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:32:44 np0005625203.localdomain sshd[263130]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:32:44 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17094 DF PROTO=TCP SPT=56090 DPT=9102 SEQ=3883175571 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA07F8800000000001030307) 
Feb 20 09:32:47 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:32:47 np0005625203.localdomain systemd[1]: tmp-crun.ibECbU.mount: Deactivated successfully.
Feb 20 09:32:47 np0005625203.localdomain podman[263132]: 2026-02-20 09:32:47.769129083 +0000 UTC m=+0.081053401 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 09:32:47 np0005625203.localdomain podman[263132]: 2026-02-20 09:32:47.801584564 +0000 UTC m=+0.113508832 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 20 09:32:47 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 09:32:48 np0005625203.localdomain sshd[263154]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:32:49 np0005625203.localdomain sshd[263154]: Received disconnect from 152.32.129.236 port 37562:11: Bye Bye [preauth]
Feb 20 09:32:49 np0005625203.localdomain sshd[263154]: Disconnected from authenticating user root 152.32.129.236 port 37562 [preauth]
Feb 20 09:32:52 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:32:52 np0005625203.localdomain systemd[1]: tmp-crun.AhUucq.mount: Deactivated successfully.
Feb 20 09:32:52 np0005625203.localdomain podman[263156]: 2026-02-20 09:32:52.761252878 +0000 UTC m=+0.082925528 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 20 09:32:52 np0005625203.localdomain podman[263156]: 2026-02-20 09:32:52.79730853 +0000 UTC m=+0.118981150 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 20 09:32:52 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:32:56 np0005625203.localdomain sshd[263179]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:32:57 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:32:57 np0005625203.localdomain sshd[263179]: Accepted publickey for zuul from 192.168.122.30 port 58920 ssh2: RSA SHA256:uaxCfdsB9dLyL4znjsLCPDTT6O+6xZzjHRjczJFOA6k
Feb 20 09:32:57 np0005625203.localdomain systemd-logind[759]: New session 58 of user zuul.
Feb 20 09:32:57 np0005625203.localdomain systemd[1]: Started Session 58 of User zuul.
Feb 20 09:32:57 np0005625203.localdomain sshd[263179]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Feb 20 09:32:57 np0005625203.localdomain podman[263181]: 2026-02-20 09:32:57.175650656 +0000 UTC m=+0.094362431 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 20 09:32:57 np0005625203.localdomain podman[263181]: 2026-02-20 09:32:57.213318918 +0000 UTC m=+0.132030693 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent)
Feb 20 09:32:57 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:32:58 np0005625203.localdomain python3.9[263309]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 20 09:32:58 np0005625203.localdomain podman[240359]: time="2026-02-20T09:32:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:32:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:32:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 147345 "" "Go-http-client/1.1"
Feb 20 09:32:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:32:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16312 "" "Go-http-client/1.1"
Feb 20 09:32:59 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16944 DF PROTO=TCP SPT=55984 DPT=9102 SEQ=3571959263 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0831450000000001030307) 
Feb 20 09:32:59 np0005625203.localdomain python3.9[263421]: ansible-ansible.builtin.service_facts Invoked
Feb 20 09:32:59 np0005625203.localdomain network[263438]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 20 09:32:59 np0005625203.localdomain network[263439]: 'network-scripts' will be removed from distribution in near future.
Feb 20 09:32:59 np0005625203.localdomain network[263440]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 20 09:33:00 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16945 DF PROTO=TCP SPT=55984 DPT=9102 SEQ=3571959263 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0835410000000001030307) 
Feb 20 09:33:01 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:33:01 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17095 DF PROTO=TCP SPT=56090 DPT=9102 SEQ=3883175571 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0838800000000001030307) 
Feb 20 09:33:02 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16946 DF PROTO=TCP SPT=55984 DPT=9102 SEQ=3571959263 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA083D400000000001030307) 
Feb 20 09:33:03 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=46036 DF PROTO=TCP SPT=54062 DPT=9102 SEQ=1758479041 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0840800000000001030307) 
Feb 20 09:33:04 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:33:04 np0005625203.localdomain podman[263580]: 2026-02-20 09:33:04.761254801 +0000 UTC m=+0.077630615 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Feb 20 09:33:04 np0005625203.localdomain podman[263580]: 2026-02-20 09:33:04.833360374 +0000 UTC m=+0.149736248 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:33:04 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:33:05 np0005625203.localdomain sudo[263695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jesriskhkqsxnyetfiwokgmdebvsdajr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579985.4620526-97-142199324103309/AnsiballZ_setup.py
Feb 20 09:33:05 np0005625203.localdomain sudo[263695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:33:06 np0005625203.localdomain python3.9[263697]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Feb 20 09:33:06 np0005625203.localdomain sudo[263695]: pam_unix(sudo:session): session closed for user root
Feb 20 09:33:06 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16947 DF PROTO=TCP SPT=55984 DPT=9102 SEQ=3571959263 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA084D000000000001030307) 
Feb 20 09:33:06 np0005625203.localdomain sudo[263758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tkpqxaskuowqqufktnczetxvaakvuegp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579985.4620526-97-142199324103309/AnsiballZ_dnf.py
Feb 20 09:33:06 np0005625203.localdomain sudo[263758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:33:07 np0005625203.localdomain python3.9[263760]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 20 09:33:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:33:07 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 09:33:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:33:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:33:07 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 09:33:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:33:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:33:07.644 161112 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:33:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:33:07.645 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:33:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:33:07.645 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:33:07 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:33:07 np0005625203.localdomain systemd[1]: tmp-crun.EDtsCo.mount: Deactivated successfully.
Feb 20 09:33:07 np0005625203.localdomain podman[263763]: 2026-02-20 09:33:07.786660501 +0000 UTC m=+0.092483703 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, maintainer=Red Hat, Inc., build-date=2026-02-05T04:57:10Z, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, architecture=x86_64, version=9.7, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., release=1770267347, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9/ubi-minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c)
Feb 20 09:33:07 np0005625203.localdomain podman[263763]: 2026-02-20 09:33:07.829219585 +0000 UTC m=+0.135042767 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9/ubi-minimal, io.openshift.expose-services=, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, org.opencontainers.image.created=2026-02-05T04:57:10Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', 
'/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, container_name=openstack_network_exporter, architecture=x86_64, config_id=openstack_network_exporter, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., release=1770267347, maintainer=Red Hat, Inc., version=9.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vcs-type=git, build-date=2026-02-05T04:57:10Z)
Feb 20 09:33:07 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:33:09 np0005625203.localdomain sshd[263784]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:33:10 np0005625203.localdomain sudo[263758]: pam_unix(sudo:session): session closed for user root
Feb 20 09:33:10 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:33:10 np0005625203.localdomain systemd[1]: tmp-crun.yw5VAR.mount: Deactivated successfully.
Feb 20 09:33:10 np0005625203.localdomain podman[263841]: 2026-02-20 09:33:10.798577958 +0000 UTC m=+0.111731647 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
config_id=ceilometer_agent_compute, tcib_managed=true)
Feb 20 09:33:10 np0005625203.localdomain podman[263841]: 2026-02-20 09:33:10.815268832 +0000 UTC m=+0.128422511 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Feb 20 09:33:10 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:33:10 np0005625203.localdomain sshd[263784]: Invalid user n8n from 194.107.115.2 port 62546
Feb 20 09:33:11 np0005625203.localdomain sshd[263784]: Received disconnect from 194.107.115.2 port 62546:11: Bye Bye [preauth]
Feb 20 09:33:11 np0005625203.localdomain sshd[263784]: Disconnected from invalid user n8n 194.107.115.2 port 62546 [preauth]
Feb 20 09:33:11 np0005625203.localdomain sudo[263913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cjnhgjghmrqnbpynenympjdacevrxrqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579990.5415905-133-90601945487303/AnsiballZ_stat.py
Feb 20 09:33:11 np0005625203.localdomain sudo[263913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:33:11 np0005625203.localdomain python3.9[263915]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:33:11 np0005625203.localdomain sudo[263913]: pam_unix(sudo:session): session closed for user root
Feb 20 09:33:12 np0005625203.localdomain sudo[264023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rascxxmztdbncogaqzsgobcomdstgymv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579991.6934202-163-6592137891463/AnsiballZ_command.py
Feb 20 09:33:12 np0005625203.localdomain sudo[264023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:33:12 np0005625203.localdomain python3.9[264025]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:33:12 np0005625203.localdomain sudo[264023]: pam_unix(sudo:session): session closed for user root
Feb 20 09:33:13 np0005625203.localdomain sudo[264134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-umoekadshiudxppuhbpqgfkxsywwermx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579993.4256728-193-173014503990158/AnsiballZ_stat.py
Feb 20 09:33:13 np0005625203.localdomain sudo[264134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:33:13 np0005625203.localdomain python3.9[264136]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:33:13 np0005625203.localdomain sudo[264134]: pam_unix(sudo:session): session closed for user root
Feb 20 09:33:14 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16948 DF PROTO=TCP SPT=55984 DPT=9102 SEQ=3571959263 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA086C800000000001030307) 
Feb 20 09:33:15 np0005625203.localdomain sudo[264210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:33:15 np0005625203.localdomain sudo[264210]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:33:15 np0005625203.localdomain sudo[264210]: pam_unix(sudo:session): session closed for user root
Feb 20 09:33:15 np0005625203.localdomain sudo[264228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 check-host
Feb 20 09:33:15 np0005625203.localdomain sudo[264228]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:33:15 np0005625203.localdomain sudo[264282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jjfjczeojnclkplvbzgzxlahievceeqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579994.403258-226-179471348424227/AnsiballZ_lineinfile.py
Feb 20 09:33:15 np0005625203.localdomain sudo[264282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:33:15 np0005625203.localdomain python3.9[264284]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:33:15 np0005625203.localdomain sudo[264228]: pam_unix(sudo:session): session closed for user root
Feb 20 09:33:15 np0005625203.localdomain sudo[264282]: pam_unix(sudo:session): session closed for user root
Feb 20 09:33:15 np0005625203.localdomain sudo[264324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:33:15 np0005625203.localdomain sudo[264324]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:33:15 np0005625203.localdomain sudo[264324]: pam_unix(sudo:session): session closed for user root
Feb 20 09:33:15 np0005625203.localdomain sudo[264342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 09:33:15 np0005625203.localdomain sudo[264342]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:33:16 np0005625203.localdomain sudo[264342]: pam_unix(sudo:session): session closed for user root
Feb 20 09:33:16 np0005625203.localdomain sudo[264482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-semasbrmsheyxyghpluxdsxowmenvsij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579995.9966304-253-30602861839680/AnsiballZ_systemd_service.py
Feb 20 09:33:16 np0005625203.localdomain sudo[264482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:33:16 np0005625203.localdomain python3.9[264484]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:33:17 np0005625203.localdomain sudo[264482]: pam_unix(sudo:session): session closed for user root
Feb 20 09:33:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:33:17.181 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:33:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:33:17.182 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:33:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:33:17.182 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:33:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:33:17.182 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:33:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:33:17.182 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:33:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:33:17.182 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:33:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:33:17.183 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:33:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:33:17.183 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:33:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:33:17.183 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:33:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:33:17.183 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:33:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:33:17.183 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:33:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:33:17.183 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:33:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:33:17.183 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:33:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:33:17.183 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:33:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:33:17.184 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:33:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:33:17.184 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:33:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:33:17.184 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:33:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:33:17.184 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:33:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:33:17.184 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:33:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:33:17.184 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:33:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:33:17.184 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:33:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:33:17.185 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:33:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:33:17.185 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:33:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:33:17.185 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:33:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:33:17.185 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:33:17 np0005625203.localdomain sudo[264504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:33:17 np0005625203.localdomain sudo[264504]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:33:17 np0005625203.localdomain sudo[264504]: pam_unix(sudo:session): session closed for user root
Feb 20 09:33:17 np0005625203.localdomain sudo[264612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qrbypualwwdkkfijmyykdzrvgejpufjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771579997.2455335-278-127405244896331/AnsiballZ_systemd_service.py
Feb 20 09:33:17 np0005625203.localdomain sudo[264612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:33:17 np0005625203.localdomain python3.9[264614]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:33:17 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:33:17 np0005625203.localdomain sudo[264612]: pam_unix(sudo:session): session closed for user root
Feb 20 09:33:17 np0005625203.localdomain podman[264616]: 2026-02-20 09:33:17.948054008 +0000 UTC m=+0.087824261 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 20 09:33:17 np0005625203.localdomain podman[264616]: 2026-02-20 09:33:17.987689851 +0000 UTC m=+0.127460104 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 09:33:18 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 09:33:18 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:33:18.200 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:33:18 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:33:18.201 228941 DEBUG nova.compute.manager [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 20 09:33:18 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:33:18.201 228941 DEBUG nova.compute.manager [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 20 09:33:18 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:33:18.222 228941 DEBUG nova.compute.manager [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 20 09:33:18 np0005625203.localdomain python3.9[264747]: ansible-ansible.builtin.service_facts Invoked
Feb 20 09:33:19 np0005625203.localdomain network[264764]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 20 09:33:19 np0005625203.localdomain network[264765]: 'network-scripts' will be removed from distribution in near future.
Feb 20 09:33:19 np0005625203.localdomain network[264766]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 20 09:33:19 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:33:19.199 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:33:19 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:33:19.199 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:33:19 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:33:19.199 228941 DEBUG nova.compute.manager [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 20 09:33:20 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:33:21 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:33:21.198 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:33:21 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:33:21.199 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:33:21 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:33:21.220 228941 DEBUG oslo_concurrency.lockutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:33:21 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:33:21.220 228941 DEBUG oslo_concurrency.lockutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:33:21 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:33:21.221 228941 DEBUG oslo_concurrency.lockutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:33:21 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:33:21.221 228941 DEBUG nova.compute.resource_tracker [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Auditing locally available compute resources for np0005625203.localdomain (node: np0005625203.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 20 09:33:21 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:33:21.222 228941 DEBUG oslo_concurrency.processutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:33:21 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:33:21.685 228941 DEBUG oslo_concurrency.processutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:33:21 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:33:21.924 228941 WARNING nova.virt.libvirt.driver [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 20 09:33:21 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:33:21.926 228941 DEBUG nova.compute.resource_tracker [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Hypervisor/Node resource view: name=np0005625203.localdomain free_ram=12901MB free_disk=41.83720779418945GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 20 09:33:21 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:33:21.927 228941 DEBUG oslo_concurrency.lockutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:33:21 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:33:21.928 228941 DEBUG oslo_concurrency.lockutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:33:21 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:33:21.983 228941 DEBUG nova.compute.resource_tracker [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 20 09:33:21 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:33:21.984 228941 DEBUG nova.compute.resource_tracker [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Final resource view: name=np0005625203.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 20 09:33:22 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:33:22.001 228941 DEBUG oslo_concurrency.processutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:33:22 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:33:22.481 228941 DEBUG oslo_concurrency.processutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:33:22 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:33:22.488 228941 DEBUG nova.compute.provider_tree [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Inventory has not changed in ProviderTree for provider: e5d5157a-2df2-4f51-b5fb-cd2da3a8584e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 20 09:33:22 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:33:22.502 228941 DEBUG nova.scheduler.client.report [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Inventory has not changed for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 20 09:33:22 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:33:22.504 228941 DEBUG nova.compute.resource_tracker [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Compute_service record updated for np0005625203.localdomain:np0005625203.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 20 09:33:22 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:33:22.505 228941 DEBUG oslo_concurrency.lockutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.577s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:33:23 np0005625203.localdomain sudo[265040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-beuuhxjyyxrpxidrhbshywvohliywsle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580003.2408752-346-186108876474675/AnsiballZ_dnf.py
Feb 20 09:33:23 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:33:23.505 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:33:23 np0005625203.localdomain sudo[265040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:33:23 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:33:23 np0005625203.localdomain podman[265043]: 2026-02-20 09:33:23.614529485 +0000 UTC m=+0.088616874 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 20 09:33:23 np0005625203.localdomain podman[265043]: 2026-02-20 09:33:23.648131772 +0000 UTC m=+0.122219151 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 20 09:33:23 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:33:23 np0005625203.localdomain python3.9[265042]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 20 09:33:24 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:33:24.198 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:33:25 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:33:25.195 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:33:25 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:33:25.198 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:33:27 np0005625203.localdomain sudo[265040]: pam_unix(sudo:session): session closed for user root
Feb 20 09:33:27 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:33:27 np0005625203.localdomain podman[265083]: 2026-02-20 09:33:27.765822177 +0000 UTC m=+0.082546107 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Feb 20 09:33:27 np0005625203.localdomain podman[265083]: 2026-02-20 09:33:27.800338033 +0000 UTC m=+0.117061953 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:33:27 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:33:28 np0005625203.localdomain sudo[265191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wcxqpzbzqjaxueeuofezzlsetnjegngp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580007.7821763-373-125479595522294/AnsiballZ_file.py
Feb 20 09:33:28 np0005625203.localdomain sudo[265191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:33:28 np0005625203.localdomain python3.9[265193]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Feb 20 09:33:28 np0005625203.localdomain sudo[265191]: pam_unix(sudo:session): session closed for user root
Feb 20 09:33:28 np0005625203.localdomain podman[240359]: time="2026-02-20T09:33:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:33:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:33:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 147345 "" "Go-http-client/1.1"
Feb 20 09:33:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:33:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16304 "" "Go-http-client/1.1"
Feb 20 09:33:29 np0005625203.localdomain sudo[265301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jyhqpzoxlnzpyybatixoswsqgzwqvnjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580008.729335-397-56785900961930/AnsiballZ_modprobe.py
Feb 20 09:33:29 np0005625203.localdomain sudo[265301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:33:29 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44334 DF PROTO=TCP SPT=51380 DPT=9102 SEQ=3982613137 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA08A6750000000001030307) 
Feb 20 09:33:29 np0005625203.localdomain python3.9[265303]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Feb 20 09:33:29 np0005625203.localdomain sudo[265301]: pam_unix(sudo:session): session closed for user root
Feb 20 09:33:29 np0005625203.localdomain sudo[265411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dvhzmnuqovevothcdesqrlrqexvnmcqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580009.5068882-421-205369625555176/AnsiballZ_stat.py
Feb 20 09:33:29 np0005625203.localdomain sudo[265411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:33:30 np0005625203.localdomain python3.9[265413]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:33:30 np0005625203.localdomain sudo[265411]: pam_unix(sudo:session): session closed for user root
Feb 20 09:33:30 np0005625203.localdomain sudo[265468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rxncytqvsnmhkgiyjzvxazfluncmntvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580009.5068882-421-205369625555176/AnsiballZ_file.py
Feb 20 09:33:30 np0005625203.localdomain sudo[265468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:33:30 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44335 DF PROTO=TCP SPT=51380 DPT=9102 SEQ=3982613137 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA08AA800000000001030307) 
Feb 20 09:33:30 np0005625203.localdomain python3.9[265470]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/modules-load.d/dm-multipath.conf _original_basename=module-load.conf.j2 recurse=False state=file path=/etc/modules-load.d/dm-multipath.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:33:30 np0005625203.localdomain sudo[265468]: pam_unix(sudo:session): session closed for user root
Feb 20 09:33:30 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16949 DF PROTO=TCP SPT=55984 DPT=9102 SEQ=3571959263 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA08AC810000000001030307) 
Feb 20 09:33:31 np0005625203.localdomain sudo[265578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-npokcutnmxysczeqyfjkgpexcxnxaczq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580010.9074895-460-268470862443274/AnsiballZ_lineinfile.py
Feb 20 09:33:31 np0005625203.localdomain sudo[265578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:33:31 np0005625203.localdomain python3.9[265580]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:33:31 np0005625203.localdomain sudo[265578]: pam_unix(sudo:session): session closed for user root
Feb 20 09:33:31 np0005625203.localdomain sudo[265688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-efyrsqppptnibdfccptyjcnsychhkluj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580011.6412568-487-155436024121562/AnsiballZ_command.py
Feb 20 09:33:31 np0005625203.localdomain sudo[265688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:33:32 np0005625203.localdomain python3.9[265690]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:33:32 np0005625203.localdomain sudo[265688]: pam_unix(sudo:session): session closed for user root
Feb 20 09:33:32 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44336 DF PROTO=TCP SPT=51380 DPT=9102 SEQ=3982613137 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA08B2810000000001030307) 
Feb 20 09:33:32 np0005625203.localdomain sudo[265799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yxorsucostikzelkstmtdogpyzukmkvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580012.325875-511-163725536997439/AnsiballZ_command.py
Feb 20 09:33:32 np0005625203.localdomain sudo[265799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:33:32 np0005625203.localdomain python3.9[265801]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -rF /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:33:32 np0005625203.localdomain sudo[265799]: pam_unix(sudo:session): session closed for user root
Feb 20 09:33:33 np0005625203.localdomain sudo[265910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uwwrjenupsuyuuoerbytxeonegcrddva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580013.117519-538-158436827964770/AnsiballZ_stat.py
Feb 20 09:33:33 np0005625203.localdomain sudo[265910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:33:33 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17096 DF PROTO=TCP SPT=56090 DPT=9102 SEQ=3883175571 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA08B6800000000001030307) 
Feb 20 09:33:33 np0005625203.localdomain python3.9[265912]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:33:33 np0005625203.localdomain sudo[265910]: pam_unix(sudo:session): session closed for user root
Feb 20 09:33:34 np0005625203.localdomain sudo[266022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-djmkfverqltwfxeqzccrdkegpgksiasp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580013.9133613-568-151190627041883/AnsiballZ_command.py
Feb 20 09:33:34 np0005625203.localdomain sudo[266022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:33:34 np0005625203.localdomain python3.9[266024]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:33:34 np0005625203.localdomain sudo[266022]: pam_unix(sudo:session): session closed for user root
Feb 20 09:33:35 np0005625203.localdomain sudo[266133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qixksbyrqvbvmrioobslioiruqgqmeoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580014.7180648-598-59063537856839/AnsiballZ_replace.py
Feb 20 09:33:35 np0005625203.localdomain sudo[266133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:33:35 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:33:35 np0005625203.localdomain podman[266136]: 2026-02-20 09:33:35.212903469 +0000 UTC m=+0.078659928 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:33:35 np0005625203.localdomain podman[266136]: 2026-02-20 09:33:35.297321422 +0000 UTC m=+0.163077901 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 20 09:33:35 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:33:35 np0005625203.localdomain python3.9[266135]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:33:35 np0005625203.localdomain sudo[266133]: pam_unix(sudo:session): session closed for user root
Feb 20 09:33:35 np0005625203.localdomain sudo[266268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-obtbuwutixvvbfaesozrbgcdtoutvhjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580015.5605829-625-21673945548389/AnsiballZ_lineinfile.py
Feb 20 09:33:35 np0005625203.localdomain sudo[266268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:33:36 np0005625203.localdomain python3.9[266270]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:33:36 np0005625203.localdomain sudo[266268]: pam_unix(sudo:session): session closed for user root
Feb 20 09:33:36 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44337 DF PROTO=TCP SPT=51380 DPT=9102 SEQ=3982613137 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA08C2400000000001030307) 
Feb 20 09:33:36 np0005625203.localdomain sudo[266378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lxhlnyuvnapuvqbhbfectymonhvohzwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580016.1505892-625-231997533938596/AnsiballZ_lineinfile.py
Feb 20 09:33:36 np0005625203.localdomain sudo[266378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:33:36 np0005625203.localdomain python3.9[266380]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:33:36 np0005625203.localdomain sudo[266378]: pam_unix(sudo:session): session closed for user root
Feb 20 09:33:36 np0005625203.localdomain sudo[266488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jmqjgzhzizgjhzkjmsackbswrxznzrms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580016.7581847-625-66959184976224/AnsiballZ_lineinfile.py
Feb 20 09:33:37 np0005625203.localdomain sudo[266488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:33:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:33:37 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 09:33:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:33:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:33:37 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 09:33:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:33:37 np0005625203.localdomain python3.9[266490]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:33:37 np0005625203.localdomain sudo[266488]: pam_unix(sudo:session): session closed for user root
Feb 20 09:33:37 np0005625203.localdomain sudo[266598]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kpgtvunphdvisqkgpnssdjwquxapazev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580017.3844178-625-101896243447661/AnsiballZ_lineinfile.py
Feb 20 09:33:37 np0005625203.localdomain sudo[266598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:33:37 np0005625203.localdomain python3.9[266600]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:33:37 np0005625203.localdomain sudo[266598]: pam_unix(sudo:session): session closed for user root
Feb 20 09:33:38 np0005625203.localdomain sudo[266708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rzyintvikrziysjbmwqlshfjyfitebdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580018.0894992-712-24345088780889/AnsiballZ_stat.py
Feb 20 09:33:38 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:33:38 np0005625203.localdomain sudo[266708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:33:38 np0005625203.localdomain systemd[1]: tmp-crun.xYLlJs.mount: Deactivated successfully.
Feb 20 09:33:38 np0005625203.localdomain podman[266710]: 2026-02-20 09:33:38.470458159 +0000 UTC m=+0.082350652 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, vendor=Red Hat, Inc., managed_by=edpm_ansible, release=1770267347, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.buildah.version=1.33.7, name=ubi9/ubi-minimal, build-date=2026-02-05T04:57:10Z, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.expose-services=, config_id=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.7, distribution-scope=public, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Red Hat, Inc.)
Feb 20 09:33:38 np0005625203.localdomain podman[266710]: 2026-02-20 09:33:38.488256348 +0000 UTC m=+0.100148921 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, name=ubi9/ubi-minimal, vendor=Red Hat, Inc., distribution-scope=public, release=1770267347, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, container_name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c)
Feb 20 09:33:38 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:33:38 np0005625203.localdomain python3.9[266711]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:33:38 np0005625203.localdomain sudo[266708]: pam_unix(sudo:session): session closed for user root
Feb 20 09:33:39 np0005625203.localdomain sudo[266839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-iuwagafcrstcevdgeprrzexjgfuqhoui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580018.9581766-742-74449006826797/AnsiballZ_systemd_service.py
Feb 20 09:33:39 np0005625203.localdomain sudo[266839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:33:39 np0005625203.localdomain python3.9[266841]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:33:39 np0005625203.localdomain sudo[266839]: pam_unix(sudo:session): session closed for user root
Feb 20 09:33:40 np0005625203.localdomain sudo[266951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lbrgkcomojtltocnxifpxfbwlynnrszo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580019.7754555-766-210638589480111/AnsiballZ_systemd_service.py
Feb 20 09:33:40 np0005625203.localdomain sudo[266951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:33:40 np0005625203.localdomain python3.9[266953]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:33:40 np0005625203.localdomain sudo[266951]: pam_unix(sudo:session): session closed for user root
Feb 20 09:33:41 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:33:41 np0005625203.localdomain podman[266973]: 2026-02-20 09:33:41.767089673 +0000 UTC m=+0.083967042 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260127, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Feb 20 09:33:41 np0005625203.localdomain podman[266973]: 2026-02-20 09:33:41.781307471 +0000 UTC m=+0.098184830 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
container_name=ceilometer_agent_compute)
Feb 20 09:33:41 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:33:42 np0005625203.localdomain sudo[267083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xmdjrdhxauujrvnzszhjyrgseiuiyaug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580021.950406-802-265217719676034/AnsiballZ_file.py
Feb 20 09:33:42 np0005625203.localdomain sudo[267083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:33:42 np0005625203.localdomain python3.9[267085]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Feb 20 09:33:42 np0005625203.localdomain sudo[267083]: pam_unix(sudo:session): session closed for user root
Feb 20 09:33:42 np0005625203.localdomain sudo[267193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-eukqdrseseomftsfrkuubmvnwehvbaml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580022.620311-826-138979945318026/AnsiballZ_modprobe.py
Feb 20 09:33:42 np0005625203.localdomain sudo[267193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:33:43 np0005625203.localdomain python3.9[267195]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Feb 20 09:33:43 np0005625203.localdomain sudo[267193]: pam_unix(sudo:session): session closed for user root
Feb 20 09:33:43 np0005625203.localdomain sudo[267303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cbuedhvnwntvvdsxoltgsngwwqyikqik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580023.3873258-850-97389387438424/AnsiballZ_stat.py
Feb 20 09:33:43 np0005625203.localdomain sudo[267303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:33:43 np0005625203.localdomain python3.9[267305]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:33:43 np0005625203.localdomain sudo[267303]: pam_unix(sudo:session): session closed for user root
Feb 20 09:33:44 np0005625203.localdomain sudo[267360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vyezpbsolwfadvvaolhnbecvabbtjsgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580023.3873258-850-97389387438424/AnsiballZ_file.py
Feb 20 09:33:44 np0005625203.localdomain sudo[267360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:33:44 np0005625203.localdomain python3.9[267362]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/modules-load.d/nvme-fabrics.conf _original_basename=module-load.conf.j2 recurse=False state=file path=/etc/modules-load.d/nvme-fabrics.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:33:44 np0005625203.localdomain sudo[267360]: pam_unix(sudo:session): session closed for user root
Feb 20 09:33:44 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44338 DF PROTO=TCP SPT=51380 DPT=9102 SEQ=3982613137 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA08E2810000000001030307) 
Feb 20 09:33:45 np0005625203.localdomain sudo[267470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uuvsvcvqrbkligxepduswnewkmxkhncc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580024.9183953-889-7940317191445/AnsiballZ_lineinfile.py
Feb 20 09:33:45 np0005625203.localdomain sudo[267470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:33:45 np0005625203.localdomain python3.9[267472]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:33:45 np0005625203.localdomain sudo[267470]: pam_unix(sudo:session): session closed for user root
Feb 20 09:33:46 np0005625203.localdomain sudo[267580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mpqsvjwohbcluwaabdwxzdyetyuiiwvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580025.6901703-916-176364859642303/AnsiballZ_dnf.py
Feb 20 09:33:46 np0005625203.localdomain sudo[267580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:33:46 np0005625203.localdomain python3.9[267582]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Feb 20 09:33:48 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:33:48 np0005625203.localdomain systemd[1]: tmp-crun.HgpYXD.mount: Deactivated successfully.
Feb 20 09:33:48 np0005625203.localdomain podman[267585]: 2026-02-20 09:33:48.76280054 +0000 UTC m=+0.079667509 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 20 09:33:48 np0005625203.localdomain podman[267585]: 2026-02-20 09:33:48.771386235 +0000 UTC m=+0.088253184 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 09:33:48 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 09:33:49 np0005625203.localdomain sudo[267580]: pam_unix(sudo:session): session closed for user root
Feb 20 09:33:50 np0005625203.localdomain python3.9[267715]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Feb 20 09:33:51 np0005625203.localdomain sudo[267827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jaqvproucqyvnbhpqohpxuixvpzhxiud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580031.2487261-968-190694795979498/AnsiballZ_file.py
Feb 20 09:33:51 np0005625203.localdomain sudo[267827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:33:51 np0005625203.localdomain python3.9[267829]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:33:51 np0005625203.localdomain sudo[267827]: pam_unix(sudo:session): session closed for user root
Feb 20 09:33:52 np0005625203.localdomain sudo[267937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rzvekhpkhltxljzzqczpqcjujxbblkbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580032.1652477-1001-60730068130963/AnsiballZ_systemd_service.py
Feb 20 09:33:52 np0005625203.localdomain sudo[267937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:33:52 np0005625203.localdomain python3.9[267939]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 20 09:33:52 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:33:52 np0005625203.localdomain systemd-rc-local-generator[267959]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:33:52 np0005625203.localdomain systemd-sysv-generator[267964]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:33:52 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:33:52 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 09:33:52 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:33:52 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:33:52 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:33:52 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 09:33:52 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:33:52 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:33:52 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:33:53 np0005625203.localdomain sudo[267937]: pam_unix(sudo:session): session closed for user root
Feb 20 09:33:53 np0005625203.localdomain python3.9[268082]: ansible-ansible.builtin.service_facts Invoked
Feb 20 09:33:54 np0005625203.localdomain network[268099]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Feb 20 09:33:54 np0005625203.localdomain network[268100]: 'network-scripts' will be removed from distribution in near future.
Feb 20 09:33:54 np0005625203.localdomain network[268101]: It is advised to switch to 'NetworkManager' instead for network management.
Feb 20 09:33:54 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:33:54 np0005625203.localdomain podman[268107]: 2026-02-20 09:33:54.157460041 +0000 UTC m=+0.067462782 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 20 09:33:54 np0005625203.localdomain podman[268107]: 2026-02-20 09:33:54.163006372 +0000 UTC m=+0.073009163 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 20 09:33:54 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:33:55 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client.service:23: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:33:58 np0005625203.localdomain sudo[268354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-nipvcsuytoilkzoeauuhgcmudsczrijn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580038.368442-1058-127810630343318/AnsiballZ_systemd_service.py
Feb 20 09:33:58 np0005625203.localdomain sudo[268354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:33:58 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:33:58 np0005625203.localdomain podman[268356]: 2026-02-20 09:33:58.769646094 +0000 UTC m=+0.085839030 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 20 09:33:58 np0005625203.localdomain podman[268356]: 2026-02-20 09:33:58.801109264 +0000 UTC m=+0.117302210 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:33:58 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:33:58 np0005625203.localdomain python3.9[268357]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:33:58 np0005625203.localdomain podman[240359]: time="2026-02-20T09:33:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:33:59 np0005625203.localdomain sudo[268354]: pam_unix(sudo:session): session closed for user root
Feb 20 09:33:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:33:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 147345 "" "Go-http-client/1.1"
Feb 20 09:33:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:33:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16317 "" "Go-http-client/1.1"
Feb 20 09:33:59 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58971 DF PROTO=TCP SPT=38614 DPT=9102 SEQ=1246244886 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA091BA50000000001030307) 
Feb 20 09:33:59 np0005625203.localdomain sudo[268482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-csppdychfsnaappgxbdmeakrchuttjsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580039.1350539-1058-218814149218430/AnsiballZ_systemd_service.py
Feb 20 09:33:59 np0005625203.localdomain sudo[268482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:33:59 np0005625203.localdomain python3.9[268484]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:34:00 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58972 DF PROTO=TCP SPT=38614 DPT=9102 SEQ=1246244886 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA091FC10000000001030307) 
Feb 20 09:34:00 np0005625203.localdomain sshd[268486]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:34:00 np0005625203.localdomain sudo[268482]: pam_unix(sudo:session): session closed for user root
Feb 20 09:34:00 np0005625203.localdomain sshd[268540]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:34:00 np0005625203.localdomain sshd[268486]: Invalid user airflow from 5.253.59.68 port 58844
Feb 20 09:34:01 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44339 DF PROTO=TCP SPT=51380 DPT=9102 SEQ=3982613137 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0922800000000001030307) 
Feb 20 09:34:01 np0005625203.localdomain sshd[268486]: Received disconnect from 5.253.59.68 port 58844:11: Bye Bye [preauth]
Feb 20 09:34:01 np0005625203.localdomain sshd[268486]: Disconnected from invalid user airflow 5.253.59.68 port 58844 [preauth]
Feb 20 09:34:01 np0005625203.localdomain sudo[268597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lpsqhpauvhwjvvdfxdwlmytwuniaypjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580040.8738227-1058-202610633427445/AnsiballZ_systemd_service.py
Feb 20 09:34:01 np0005625203.localdomain sudo[268597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:34:01 np0005625203.localdomain python3.9[268599]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:34:01 np0005625203.localdomain sudo[268597]: pam_unix(sudo:session): session closed for user root
Feb 20 09:34:01 np0005625203.localdomain sshd[268540]: Invalid user titu from 185.196.11.208 port 50510
Feb 20 09:34:01 np0005625203.localdomain sshd[268540]: Received disconnect from 185.196.11.208 port 50510:11: Bye Bye [preauth]
Feb 20 09:34:01 np0005625203.localdomain sshd[268540]: Disconnected from invalid user titu 185.196.11.208 port 50510 [preauth]
Feb 20 09:34:01 np0005625203.localdomain sudo[268708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-syzrabxxjjarttmpatwgggoeokgnauve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580041.5711846-1058-205810352676886/AnsiballZ_systemd_service.py
Feb 20 09:34:01 np0005625203.localdomain sudo[268708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:34:01 np0005625203.localdomain sshd[268711]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:34:02 np0005625203.localdomain python3.9[268710]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:34:02 np0005625203.localdomain sudo[268708]: pam_unix(sudo:session): session closed for user root
Feb 20 09:34:02 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58973 DF PROTO=TCP SPT=38614 DPT=9102 SEQ=1246244886 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0927C00000000001030307) 
Feb 20 09:34:02 np0005625203.localdomain sudo[268821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-wyvlpsjkzuomkxqvchareqdqwgucxaed ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580042.2796485-1058-96197005389719/AnsiballZ_systemd_service.py
Feb 20 09:34:02 np0005625203.localdomain sudo[268821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:34:02 np0005625203.localdomain sshd[268711]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:34:02 np0005625203.localdomain python3.9[268823]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:34:02 np0005625203.localdomain sudo[268821]: pam_unix(sudo:session): session closed for user root
Feb 20 09:34:03 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=16950 DF PROTO=TCP SPT=55984 DPT=9102 SEQ=3571959263 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA092A800000000001030307) 
Feb 20 09:34:03 np0005625203.localdomain sudo[268932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bcvpwzbxjkievrjmqpovsxpacjfcxziq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580042.9511008-1058-154629817702813/AnsiballZ_systemd_service.py
Feb 20 09:34:03 np0005625203.localdomain sudo[268932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:34:03 np0005625203.localdomain python3.9[268934]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:34:03 np0005625203.localdomain sudo[268932]: pam_unix(sudo:session): session closed for user root
Feb 20 09:34:04 np0005625203.localdomain sudo[269043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jlecaifqrlepeviuacwueojyrddatdvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580043.8240566-1058-246549439763083/AnsiballZ_systemd_service.py
Feb 20 09:34:04 np0005625203.localdomain sudo[269043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:34:04 np0005625203.localdomain python3.9[269045]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:34:04 np0005625203.localdomain sudo[269043]: pam_unix(sudo:session): session closed for user root
Feb 20 09:34:05 np0005625203.localdomain sudo[269154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rmuymmpdcfcgfkcwbkgucjnopiztvlyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580044.5177584-1058-249008074925112/AnsiballZ_systemd_service.py
Feb 20 09:34:05 np0005625203.localdomain sudo[269154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:34:05 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:34:05 np0005625203.localdomain systemd[1]: tmp-crun.KeRExR.mount: Deactivated successfully.
Feb 20 09:34:05 np0005625203.localdomain podman[269157]: 2026-02-20 09:34:05.496974369 +0000 UTC m=+0.096884380 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 20 09:34:05 np0005625203.localdomain podman[269157]: 2026-02-20 09:34:05.540359488 +0000 UTC m=+0.140269549 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:34:05 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:34:05 np0005625203.localdomain python3.9[269156]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:34:05 np0005625203.localdomain sudo[269154]: pam_unix(sudo:session): session closed for user root
Feb 20 09:34:06 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58974 DF PROTO=TCP SPT=38614 DPT=9102 SEQ=1246244886 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0937800000000001030307) 
Feb 20 09:34:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:34:07 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 09:34:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:34:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:34:07 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 09:34:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:34:07 np0005625203.localdomain sudo[269289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-lndsswbyvfsvvunadyicvtghmelrvbcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580046.5551999-1235-17517823270941/AnsiballZ_file.py
Feb 20 09:34:07 np0005625203.localdomain sudo[269289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:34:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:34:07.645 161112 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:34:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:34:07.646 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:34:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:34:07.646 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:34:07 np0005625203.localdomain python3.9[269291]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:34:07 np0005625203.localdomain sudo[269289]: pam_unix(sudo:session): session closed for user root
Feb 20 09:34:08 np0005625203.localdomain sudo[269399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ynafoeylmsczaztnsnrhqcbzmzbzqvpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580047.8624825-1235-183417957667300/AnsiballZ_file.py
Feb 20 09:34:08 np0005625203.localdomain sudo[269399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:34:08 np0005625203.localdomain python3.9[269401]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:34:08 np0005625203.localdomain sudo[269399]: pam_unix(sudo:session): session closed for user root
Feb 20 09:34:08 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:34:08 np0005625203.localdomain sudo[269509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-qnbyvadesuebbfgopkhyclhqkxjuymga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580048.4408455-1235-281385598636695/AnsiballZ_file.py
Feb 20 09:34:08 np0005625203.localdomain sudo[269509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:34:08 np0005625203.localdomain podman[269510]: 2026-02-20 09:34:08.773741372 +0000 UTC m=+0.088676237 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-05T04:57:10Z, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, version=9.7, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.expose-services=, distribution-scope=public, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, architecture=x86_64, config_id=openstack_network_exporter, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347)
Feb 20 09:34:08 np0005625203.localdomain podman[269510]: 2026-02-20 09:34:08.787216998 +0000 UTC m=+0.102151893 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, release=1770267347, build-date=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, distribution-scope=public, config_id=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9/ubi-minimal, vendor=Red Hat, Inc., version=9.7, managed_by=edpm_ansible, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 09:34:08 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:34:08 np0005625203.localdomain python3.9[269517]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:34:08 np0005625203.localdomain sudo[269509]: pam_unix(sudo:session): session closed for user root
Feb 20 09:34:09 np0005625203.localdomain sudo[269639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gvkmqkavzddxsqfuxoefcbxwvbeufvja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580049.1132786-1235-134857204572917/AnsiballZ_file.py
Feb 20 09:34:09 np0005625203.localdomain sudo[269639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:34:09 np0005625203.localdomain python3.9[269641]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:34:09 np0005625203.localdomain sudo[269639]: pam_unix(sudo:session): session closed for user root
Feb 20 09:34:09 np0005625203.localdomain sudo[269749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-poauzyoikqromzmedvfiuyqnaiglqhgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580049.7209022-1235-163979815249385/AnsiballZ_file.py
Feb 20 09:34:09 np0005625203.localdomain sudo[269749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:34:10 np0005625203.localdomain python3.9[269751]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:34:10 np0005625203.localdomain sudo[269749]: pam_unix(sudo:session): session closed for user root
Feb 20 09:34:10 np0005625203.localdomain sshd[269823]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:34:10 np0005625203.localdomain sudo[269860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gtlcwrmzfgkkfktfnobuhghoomclopyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580050.310261-1235-72289713492824/AnsiballZ_file.py
Feb 20 09:34:10 np0005625203.localdomain sudo[269860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:34:10 np0005625203.localdomain sshd[269823]: error: kex_exchange_identification: Connection closed by remote host
Feb 20 09:34:10 np0005625203.localdomain sshd[269823]: Connection closed by 209.38.85.213 port 34746
Feb 20 09:34:10 np0005625203.localdomain python3.9[269862]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:34:10 np0005625203.localdomain sudo[269860]: pam_unix(sudo:session): session closed for user root
Feb 20 09:34:11 np0005625203.localdomain sudo[269970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gvmxlrtylqgeapjbpeuhkwscjyotdhju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580050.9539368-1235-216409381099773/AnsiballZ_file.py
Feb 20 09:34:11 np0005625203.localdomain sudo[269970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:34:11 np0005625203.localdomain python3.9[269972]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:34:11 np0005625203.localdomain sudo[269970]: pam_unix(sudo:session): session closed for user root
Feb 20 09:34:11 np0005625203.localdomain sudo[270080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-smqislzwxlvhxyburpdxnztldktszyea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580051.4950864-1235-46389351373522/AnsiballZ_file.py
Feb 20 09:34:11 np0005625203.localdomain sudo[270080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:34:11 np0005625203.localdomain python3.9[270082]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:34:11 np0005625203.localdomain sudo[270080]: pam_unix(sudo:session): session closed for user root
Feb 20 09:34:12 np0005625203.localdomain sudo[270190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-onozkxpbxwswrwiebfkvtxcgsmrurcim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580052.2635255-1406-221297681151115/AnsiballZ_file.py
Feb 20 09:34:12 np0005625203.localdomain sudo[270190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:34:12 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:34:12 np0005625203.localdomain podman[270193]: 2026-02-20 09:34:12.626289948 +0000 UTC m=+0.088735939 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Feb 20 09:34:12 np0005625203.localdomain podman[270193]: 2026-02-20 09:34:12.641316061 +0000 UTC m=+0.103762022 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:34:12 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:34:12 np0005625203.localdomain python3.9[270192]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:34:12 np0005625203.localdomain sudo[270190]: pam_unix(sudo:session): session closed for user root
Feb 20 09:34:13 np0005625203.localdomain sudo[270318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fsckfoavgknapocdfsjefamhdrwavvtp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580052.8426223-1406-221647204757503/AnsiballZ_file.py
Feb 20 09:34:13 np0005625203.localdomain sudo[270318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:34:13 np0005625203.localdomain python3.9[270320]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:34:13 np0005625203.localdomain sudo[270318]: pam_unix(sudo:session): session closed for user root
Feb 20 09:34:13 np0005625203.localdomain sudo[270428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rdsxxhtufmlhtizjtabfnfkhdidydbdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580053.482742-1406-69440739609782/AnsiballZ_file.py
Feb 20 09:34:13 np0005625203.localdomain sudo[270428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:34:14 np0005625203.localdomain python3.9[270430]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:34:14 np0005625203.localdomain sudo[270428]: pam_unix(sudo:session): session closed for user root
Feb 20 09:34:14 np0005625203.localdomain sudo[270538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kftkzentzigmlkcsapwvqonimrqmnxgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580054.3711185-1406-91008771488965/AnsiballZ_file.py
Feb 20 09:34:14 np0005625203.localdomain sudo[270538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:34:14 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58975 DF PROTO=TCP SPT=38614 DPT=9102 SEQ=1246244886 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0958800000000001030307) 
Feb 20 09:34:14 np0005625203.localdomain python3.9[270540]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:34:14 np0005625203.localdomain sudo[270538]: pam_unix(sudo:session): session closed for user root
Feb 20 09:34:15 np0005625203.localdomain sudo[270648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-svmoseybtivrbkfljqqbkbhhedqxxvtq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580055.038469-1406-190835536597678/AnsiballZ_file.py
Feb 20 09:34:15 np0005625203.localdomain sudo[270648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:34:15 np0005625203.localdomain python3.9[270650]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:34:15 np0005625203.localdomain sudo[270648]: pam_unix(sudo:session): session closed for user root
Feb 20 09:34:16 np0005625203.localdomain sudo[270758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-poozucmyomjxoylwdqxchkkrykoydpem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580056.2579727-1406-184765912376912/AnsiballZ_file.py
Feb 20 09:34:16 np0005625203.localdomain sudo[270758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:34:16 np0005625203.localdomain python3.9[270760]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:34:16 np0005625203.localdomain sudo[270758]: pam_unix(sudo:session): session closed for user root
Feb 20 09:34:17 np0005625203.localdomain sudo[270868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-grbaapocwuakszshutbqennwezftxdfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580056.8636599-1406-124461779062659/AnsiballZ_file.py
Feb 20 09:34:17 np0005625203.localdomain sudo[270868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:34:17 np0005625203.localdomain python3.9[270870]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:34:17 np0005625203.localdomain sudo[270868]: pam_unix(sudo:session): session closed for user root
Feb 20 09:34:17 np0005625203.localdomain sudo[270871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:34:17 np0005625203.localdomain sudo[270871]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:34:17 np0005625203.localdomain sudo[270871]: pam_unix(sudo:session): session closed for user root
Feb 20 09:34:17 np0005625203.localdomain sudo[270910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 09:34:17 np0005625203.localdomain sudo[270910]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:34:18 np0005625203.localdomain sudo[270910]: pam_unix(sudo:session): session closed for user root
Feb 20 09:34:18 np0005625203.localdomain sudo[271047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gylvmaufohhenuxgullxrmseswzilccy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580057.4595401-1406-195373642355579/AnsiballZ_file.py
Feb 20 09:34:18 np0005625203.localdomain sudo[271047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:34:18 np0005625203.localdomain python3.9[271049]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:34:18 np0005625203.localdomain sudo[271047]: pam_unix(sudo:session): session closed for user root
Feb 20 09:34:18 np0005625203.localdomain sudo[271067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:34:18 np0005625203.localdomain sudo[271067]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:34:18 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:34:18 np0005625203.localdomain sudo[271067]: pam_unix(sudo:session): session closed for user root
Feb 20 09:34:18 np0005625203.localdomain podman[271101]: 2026-02-20 09:34:18.985053373 +0000 UTC m=+0.083065014 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 20 09:34:18 np0005625203.localdomain podman[271101]: 2026-02-20 09:34:18.999398476 +0000 UTC m=+0.097410187 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 20 09:34:19 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 09:34:19 np0005625203.localdomain sudo[271198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-dbtkxmvurutkbnfiataoqbmkcexqucnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580058.8909073-1580-268337510556829/AnsiballZ_command.py
Feb 20 09:34:19 np0005625203.localdomain sudo[271198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:34:19 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:34:19.199 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:34:19 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:34:19.199 228941 DEBUG nova.compute.manager [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 20 09:34:19 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:34:19.199 228941 DEBUG nova.compute.manager [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 20 09:34:19 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:34:19.215 228941 DEBUG nova.compute.manager [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 20 09:34:19 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:34:19.215 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:34:19 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:34:19.216 228941 DEBUG nova.compute.manager [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 20 09:34:19 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:34:19.235 228941 DEBUG nova.compute.manager [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 20 09:34:19 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:34:19.235 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:34:19 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:34:19.236 228941 DEBUG nova.compute.manager [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 20 09:34:19 np0005625203.localdomain python3.9[271200]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                                              systemctl disable --now certmonger.service
                                                              test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                                            fi
                                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:34:19 np0005625203.localdomain sudo[271198]: pam_unix(sudo:session): session closed for user root
Feb 20 09:34:20 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:34:20.242 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:34:20 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:34:20.242 228941 DEBUG nova.compute.manager [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 20 09:34:20 np0005625203.localdomain python3.9[271310]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Feb 20 09:34:20 np0005625203.localdomain sshd[271311]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:34:20 np0005625203.localdomain sudo[271420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-chyoqsrlhfgmautemqlkfkstvclyrrfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580060.6992917-1634-123440115513057/AnsiballZ_systemd_service.py
Feb 20 09:34:20 np0005625203.localdomain sudo[271420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:34:21 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:34:21.199 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:34:21 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:34:21.200 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:34:21 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:34:21.216 228941 DEBUG oslo_concurrency.lockutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:34:21 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:34:21.216 228941 DEBUG oslo_concurrency.lockutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:34:21 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:34:21.217 228941 DEBUG oslo_concurrency.lockutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:34:21 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:34:21.217 228941 DEBUG nova.compute.resource_tracker [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Auditing locally available compute resources for np0005625203.localdomain (node: np0005625203.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 20 09:34:21 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:34:21.218 228941 DEBUG oslo_concurrency.processutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:34:21 np0005625203.localdomain python3.9[271422]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Feb 20 09:34:21 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:34:21 np0005625203.localdomain systemd-rc-local-generator[271466]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:34:21 np0005625203.localdomain systemd-sysv-generator[271472]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:34:21 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:34:21 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 09:34:21 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:34:21 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:34:21 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:34:21 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 09:34:21 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:34:21 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:34:21 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:34:21 np0005625203.localdomain sudo[271420]: pam_unix(sudo:session): session closed for user root
Feb 20 09:34:21 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:34:21.691 228941 DEBUG oslo_concurrency.processutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:34:21 np0005625203.localdomain sshd[271311]: Invalid user cod4server from 103.48.192.48 port 18002
Feb 20 09:34:21 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:34:21.916 228941 WARNING nova.virt.libvirt.driver [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 20 09:34:21 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:34:21.919 228941 DEBUG nova.compute.resource_tracker [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Hypervisor/Node resource view: name=np0005625203.localdomain free_ram=12916MB free_disk=41.83720779418945GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 20 09:34:21 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:34:21.919 228941 DEBUG oslo_concurrency.lockutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:34:21 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:34:21.920 228941 DEBUG oslo_concurrency.lockutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:34:22 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:34:22.025 228941 DEBUG nova.compute.resource_tracker [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 20 09:34:22 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:34:22.025 228941 DEBUG nova.compute.resource_tracker [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Final resource view: name=np0005625203.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 20 09:34:22 np0005625203.localdomain sshd[271311]: Received disconnect from 103.48.192.48 port 18002:11: Bye Bye [preauth]
Feb 20 09:34:22 np0005625203.localdomain sshd[271311]: Disconnected from invalid user cod4server 103.48.192.48 port 18002 [preauth]
Feb 20 09:34:22 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:34:22.074 228941 DEBUG nova.scheduler.client.report [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Refreshing inventories for resource provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 20 09:34:22 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:34:22.123 228941 DEBUG nova.scheduler.client.report [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Updating ProviderTree inventory for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 20 09:34:22 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:34:22.123 228941 DEBUG nova.compute.provider_tree [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Updating inventory in ProviderTree for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 20 09:34:22 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:34:22.136 228941 DEBUG nova.scheduler.client.report [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Refreshing aggregate associations for resource provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 20 09:34:22 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:34:22.160 228941 DEBUG nova.scheduler.client.report [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Refreshing trait associations for resource provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e, traits: HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AESNI,HW_CPU_X86_SSE4A,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_TRUSTED_CERTS,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSSE3,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_ABM,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NODE,HW_CPU_X86_SSE2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_RESCUE_BFV,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_F16C,HW_CPU_X86_AVX,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_SSE42,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_BMI,HW_CPU_X86_AMD_SVM,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SHA,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 20 09:34:22 np0005625203.localdomain sudo[271588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vmkbruqslrogwcebsoaqtjjgrvuviogc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580061.887149-1658-62010505159263/AnsiballZ_command.py
Feb 20 09:34:22 np0005625203.localdomain sudo[271588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:34:22 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:34:22.180 228941 DEBUG oslo_concurrency.processutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:34:22 np0005625203.localdomain python3.9[271590]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:34:22 np0005625203.localdomain sudo[271588]: pam_unix(sudo:session): session closed for user root
Feb 20 09:34:22 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:34:22.636 228941 DEBUG oslo_concurrency.processutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:34:22 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:34:22.644 228941 DEBUG nova.compute.provider_tree [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Inventory has not changed in ProviderTree for provider: e5d5157a-2df2-4f51-b5fb-cd2da3a8584e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 20 09:34:22 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:34:22.661 228941 DEBUG nova.scheduler.client.report [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Inventory has not changed for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 20 09:34:22 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:34:22.663 228941 DEBUG nova.compute.resource_tracker [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Compute_service record updated for np0005625203.localdomain:np0005625203.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 20 09:34:22 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:34:22.664 228941 DEBUG oslo_concurrency.lockutils [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.744s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:34:22 np0005625203.localdomain sudo[271721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tiqyhfgvvpyqpncycjkrjrknttdzesyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580062.5332255-1658-254063939936785/AnsiballZ_command.py
Feb 20 09:34:22 np0005625203.localdomain sudo[271721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:34:22 np0005625203.localdomain python3.9[271723]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:34:23 np0005625203.localdomain sudo[271721]: pam_unix(sudo:session): session closed for user root
Feb 20 09:34:23 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:34:23.198 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:34:23 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:34:23.199 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:34:23 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:34:23.199 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:34:23 np0005625203.localdomain sudo[271832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-seibojdqkpvangprsopzynfvtddsrrxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580063.1339288-1658-195586717144914/AnsiballZ_command.py
Feb 20 09:34:23 np0005625203.localdomain sudo[271832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:34:23 np0005625203.localdomain python3.9[271834]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:34:24 np0005625203.localdomain sudo[271832]: pam_unix(sudo:session): session closed for user root
Feb 20 09:34:24 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:34:24 np0005625203.localdomain podman[271853]: 2026-02-20 09:34:24.784982098 +0000 UTC m=+0.091048710 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 20 09:34:24 np0005625203.localdomain podman[271853]: 2026-02-20 09:34:24.801535798 +0000 UTC m=+0.107602380 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 20 09:34:24 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:34:25 np0005625203.localdomain sudo[271967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gvwkpmgfqnffsmasbjjnbucurpfzgnfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580064.7442122-1658-172601818840077/AnsiballZ_command.py
Feb 20 09:34:25 np0005625203.localdomain sudo[271967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:34:25 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:34:25.208 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:34:25 np0005625203.localdomain python3.9[271969]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:34:25 np0005625203.localdomain sudo[271967]: pam_unix(sudo:session): session closed for user root
Feb 20 09:34:26 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:34:26.199 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:34:26 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:34:26.199 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:34:26 np0005625203.localdomain sudo[272078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cywzzouwmzzpzqyiosmcjrawormtfgpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580065.3844094-1658-86993965856026/AnsiballZ_command.py
Feb 20 09:34:26 np0005625203.localdomain sudo[272078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:34:26 np0005625203.localdomain python3.9[272080]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:34:26 np0005625203.localdomain sudo[272078]: pam_unix(sudo:session): session closed for user root
Feb 20 09:34:26 np0005625203.localdomain sudo[272189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-phfqnmclbwfqnpbyyjbkytdjvfifhpay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580066.602414-1658-203895981864962/AnsiballZ_command.py
Feb 20 09:34:26 np0005625203.localdomain sudo[272189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:34:27 np0005625203.localdomain python3.9[272191]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:34:27 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:34:27.195 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:34:28 np0005625203.localdomain sudo[272189]: pam_unix(sudo:session): session closed for user root
Feb 20 09:34:28 np0005625203.localdomain sudo[272300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jznsueuezjumaskelfaerlqqebwumkll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580068.2364545-1658-247499080415175/AnsiballZ_command.py
Feb 20 09:34:28 np0005625203.localdomain sudo[272300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:34:28 np0005625203.localdomain python3.9[272302]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:34:28 np0005625203.localdomain sudo[272300]: pam_unix(sudo:session): session closed for user root
Feb 20 09:34:28 np0005625203.localdomain podman[240359]: time="2026-02-20T09:34:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:34:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:34:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 147345 "" "Go-http-client/1.1"
Feb 20 09:34:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:34:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16319 "" "Go-http-client/1.1"
Feb 20 09:34:29 np0005625203.localdomain sudo[272411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sonfbogdujxbjjqqblvhiousdljzaadp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580068.804515-1658-104961514382317/AnsiballZ_command.py
Feb 20 09:34:29 np0005625203.localdomain sudo[272411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:34:29 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:34:29 np0005625203.localdomain podman[272414]: 2026-02-20 09:34:29.168484545 +0000 UTC m=+0.077414540 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent)
Feb 20 09:34:29 np0005625203.localdomain podman[272414]: 2026-02-20 09:34:29.174293944 +0000 UTC m=+0.083223949 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:34:29 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:34:29 np0005625203.localdomain python3.9[272413]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:34:29 np0005625203.localdomain sudo[272411]: pam_unix(sudo:session): session closed for user root
Feb 20 09:34:29 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17002 DF PROTO=TCP SPT=53564 DPT=9102 SEQ=2179133752 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0990D50000000001030307) 
Feb 20 09:34:30 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17003 DF PROTO=TCP SPT=53564 DPT=9102 SEQ=2179133752 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0994C00000000001030307) 
Feb 20 09:34:31 np0005625203.localdomain sudo[272540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uxzqdvptcjiegddrausjerobreoippun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580070.74612-1865-102115548662967/AnsiballZ_file.py
Feb 20 09:34:31 np0005625203.localdomain sudo[272540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:34:31 np0005625203.localdomain python3.9[272542]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:34:31 np0005625203.localdomain sudo[272540]: pam_unix(sudo:session): session closed for user root
Feb 20 09:34:31 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58976 DF PROTO=TCP SPT=38614 DPT=9102 SEQ=1246244886 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0998800000000001030307) 
Feb 20 09:34:31 np0005625203.localdomain sudo[272650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gxevbmvvysfmcokhmylztpegvpoelfmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580071.3878305-1865-259596332837183/AnsiballZ_file.py
Feb 20 09:34:31 np0005625203.localdomain sudo[272650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:34:31 np0005625203.localdomain python3.9[272652]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:34:31 np0005625203.localdomain sudo[272650]: pam_unix(sudo:session): session closed for user root
Feb 20 09:34:32 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17004 DF PROTO=TCP SPT=53564 DPT=9102 SEQ=2179133752 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA099CC00000000001030307) 
Feb 20 09:34:32 np0005625203.localdomain sudo[272760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-orillubuygbyegcilgzccovxsiltpmzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580072.15203-1911-75390065186740/AnsiballZ_file.py
Feb 20 09:34:32 np0005625203.localdomain sudo[272760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:34:32 np0005625203.localdomain python3.9[272762]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:34:32 np0005625203.localdomain sudo[272760]: pam_unix(sudo:session): session closed for user root
Feb 20 09:34:33 np0005625203.localdomain sudo[272870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-hdetyeskhwdpdttffcuvknqcunftugcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580072.9721532-1911-147894913056944/AnsiballZ_file.py
Feb 20 09:34:33 np0005625203.localdomain sudo[272870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:34:33 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=44340 DF PROTO=TCP SPT=51380 DPT=9102 SEQ=3982613137 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA09A0800000000001030307) 
Feb 20 09:34:33 np0005625203.localdomain python3.9[272872]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:34:33 np0005625203.localdomain sudo[272870]: pam_unix(sudo:session): session closed for user root
Feb 20 09:34:33 np0005625203.localdomain sudo[272980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rjxeeufwayigheqzhzmfncuycrpgqqkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580073.5912592-1911-40914877692149/AnsiballZ_file.py
Feb 20 09:34:33 np0005625203.localdomain sudo[272980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:34:34 np0005625203.localdomain python3.9[272982]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:34:34 np0005625203.localdomain sudo[272980]: pam_unix(sudo:session): session closed for user root
Feb 20 09:34:34 np0005625203.localdomain sudo[273090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-izvykbhhvzdqliarvvqhmwwbmpineeiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580074.1812756-1911-236098644101206/AnsiballZ_file.py
Feb 20 09:34:34 np0005625203.localdomain sudo[273090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:34:34 np0005625203.localdomain python3.9[273092]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:34:34 np0005625203.localdomain sudo[273090]: pam_unix(sudo:session): session closed for user root
Feb 20 09:34:35 np0005625203.localdomain sudo[273200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ccazuzedtdxmikkvdvoyjtsubvcmtzhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580074.8126457-1911-257671056985490/AnsiballZ_file.py
Feb 20 09:34:35 np0005625203.localdomain sudo[273200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:34:35 np0005625203.localdomain python3.9[273202]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:34:35 np0005625203.localdomain sudo[273200]: pam_unix(sudo:session): session closed for user root
Feb 20 09:34:35 np0005625203.localdomain sudo[273310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jktvkneowbiwcjrrslpaumcatjnjzmhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580075.4186313-1911-236397588679560/AnsiballZ_file.py
Feb 20 09:34:35 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:34:35 np0005625203.localdomain sudo[273310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:34:35 np0005625203.localdomain podman[273312]: 2026-02-20 09:34:35.769887375 +0000 UTC m=+0.082886837 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Feb 20 09:34:35 np0005625203.localdomain podman[273312]: 2026-02-20 09:34:35.835702916 +0000 UTC m=+0.148702328 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20260127, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Feb 20 09:34:35 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:34:35 np0005625203.localdomain python3.9[273313]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:34:35 np0005625203.localdomain sudo[273310]: pam_unix(sudo:session): session closed for user root
Feb 20 09:34:36 np0005625203.localdomain sshd[273357]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:34:36 np0005625203.localdomain sudo[273447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-htcuasmtkbikcxohgnmyrihmordirzio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580076.023256-1911-49658504900809/AnsiballZ_file.py
Feb 20 09:34:36 np0005625203.localdomain sudo[273447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:34:36 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17005 DF PROTO=TCP SPT=53564 DPT=9102 SEQ=2179133752 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA09AC800000000001030307) 
Feb 20 09:34:36 np0005625203.localdomain python3.9[273449]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:34:36 np0005625203.localdomain sudo[273447]: pam_unix(sudo:session): session closed for user root
Feb 20 09:34:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:34:37 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 09:34:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:34:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:34:37 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 09:34:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:34:37 np0005625203.localdomain sshd[273357]: Received disconnect from 103.61.123.132 port 45222:11: Bye Bye [preauth]
Feb 20 09:34:37 np0005625203.localdomain sshd[273357]: Disconnected from authenticating user root 103.61.123.132 port 45222 [preauth]
Feb 20 09:34:39 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:34:39 np0005625203.localdomain podman[273467]: 2026-02-20 09:34:39.760419609 +0000 UTC m=+0.077432680 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, version=9.7, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, config_id=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, distribution-scope=public, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9/ubi-minimal, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, org.opencontainers.image.created=2026-02-05T04:57:10Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1770267347, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2026-02-05T04:57:10Z)
Feb 20 09:34:39 np0005625203.localdomain podman[273467]: 2026-02-20 09:34:39.777253998 +0000 UTC m=+0.094267089 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, managed_by=edpm_ansible, build-date=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, container_name=openstack_network_exporter, vcs-type=git, name=ubi9/ubi-minimal, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1770267347, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.openshift.tags=minimal rhel9, architecture=x86_64)
Feb 20 09:34:39 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:34:42 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:34:42 np0005625203.localdomain podman[273487]: 2026-02-20 09:34:42.775100875 +0000 UTC m=+0.089546354 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3)
Feb 20 09:34:42 np0005625203.localdomain podman[273487]: 2026-02-20 09:34:42.787470796 +0000 UTC m=+0.101916275 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, 
container_name=ceilometer_agent_compute)
Feb 20 09:34:42 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:34:43 np0005625203.localdomain sudo[273596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-medutbuaqkmgjprifxocpgabpeswrcgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580083.1019397-2275-27483431634141/AnsiballZ_getent.py
Feb 20 09:34:43 np0005625203.localdomain sudo[273596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:34:43 np0005625203.localdomain python3.9[273598]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Feb 20 09:34:43 np0005625203.localdomain sudo[273596]: pam_unix(sudo:session): session closed for user root
Feb 20 09:34:44 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17006 DF PROTO=TCP SPT=53564 DPT=9102 SEQ=2179133752 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA09CC800000000001030307) 
Feb 20 09:34:44 np0005625203.localdomain sshd[273617]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:34:44 np0005625203.localdomain sshd[273617]: Accepted publickey for zuul from 192.168.122.30 port 43586 ssh2: RSA SHA256:uaxCfdsB9dLyL4znjsLCPDTT6O+6xZzjHRjczJFOA6k
Feb 20 09:34:44 np0005625203.localdomain systemd-logind[759]: New session 59 of user zuul.
Feb 20 09:34:45 np0005625203.localdomain systemd[1]: Started Session 59 of User zuul.
Feb 20 09:34:45 np0005625203.localdomain sshd[273617]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Feb 20 09:34:45 np0005625203.localdomain sshd[273620]: Received disconnect from 192.168.122.30 port 43586:11: disconnected by user
Feb 20 09:34:45 np0005625203.localdomain sshd[273620]: Disconnected from user zuul 192.168.122.30 port 43586
Feb 20 09:34:45 np0005625203.localdomain sshd[273617]: pam_unix(sshd:session): session closed for user zuul
Feb 20 09:34:45 np0005625203.localdomain systemd[1]: session-59.scope: Deactivated successfully.
Feb 20 09:34:45 np0005625203.localdomain systemd-logind[759]: Session 59 logged out. Waiting for processes to exit.
Feb 20 09:34:45 np0005625203.localdomain systemd-logind[759]: Removed session 59.
Feb 20 09:34:45 np0005625203.localdomain python3.9[273728]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:34:46 np0005625203.localdomain python3.9[273783]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:34:46 np0005625203.localdomain python3.9[273891]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:34:47 np0005625203.localdomain python3.9[273977]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771580086.383989-2356-78437409910261/.source _original_basename=ssh-config follow=False checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:34:47 np0005625203.localdomain python3.9[274085]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:34:48 np0005625203.localdomain python3.9[274171]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771580087.5343814-2356-156856460557394/.source.py _original_basename=nova_statedir_ownership.py follow=False checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:34:49 np0005625203.localdomain python3.9[274279]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:34:49 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:34:49 np0005625203.localdomain podman[274280]: 2026-02-20 09:34:49.259097735 +0000 UTC m=+0.090657899 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 09:34:49 np0005625203.localdomain podman[274280]: 2026-02-20 09:34:49.271347703 +0000 UTC m=+0.102907857 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 20 09:34:49 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 09:34:49 np0005625203.localdomain python3.9[274388]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771580088.6126633-2356-64137263699961/.source _original_basename=run-on-host follow=False checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:34:50 np0005625203.localdomain python3.9[274496]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:34:51 np0005625203.localdomain python3.9[274582]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1771580090.1413128-2518-234479077951271/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=5c86faa791c2b2de3923873eeab6b1f262f557b2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:34:52 np0005625203.localdomain sudo[274690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-szclablfkfqhkzlhyyztbvygsojnyjlg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580091.8180509-2563-48366692528577/AnsiballZ_file.py
Feb 20 09:34:52 np0005625203.localdomain sudo[274690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:34:52 np0005625203.localdomain python3.9[274692]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:34:52 np0005625203.localdomain sudo[274690]: pam_unix(sudo:session): session closed for user root
Feb 20 09:34:52 np0005625203.localdomain sudo[274800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-cduhhntdzqbpcudzajhdtxclxajwfpkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580092.480614-2587-179712814784102/AnsiballZ_copy.py
Feb 20 09:34:52 np0005625203.localdomain sudo[274800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:34:52 np0005625203.localdomain python3.9[274802]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:34:52 np0005625203.localdomain sudo[274800]: pam_unix(sudo:session): session closed for user root
Feb 20 09:34:53 np0005625203.localdomain sudo[274910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-roegtufodwtoiusfxxhsitjdkgchtjet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580093.1482036-2611-176373131152691/AnsiballZ_stat.py
Feb 20 09:34:53 np0005625203.localdomain sudo[274910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:34:53 np0005625203.localdomain python3.9[274912]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:34:53 np0005625203.localdomain sudo[274910]: pam_unix(sudo:session): session closed for user root
Feb 20 09:34:54 np0005625203.localdomain sudo[275022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sekxevwqqrbxafpjdgfgpkhgepgmzxam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580093.9193301-2639-222478672530952/AnsiballZ_file.py
Feb 20 09:34:54 np0005625203.localdomain sudo[275022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:34:54 np0005625203.localdomain python3.9[275024]: ansible-ansible.builtin.file Invoked with group=nova mode=0400 owner=nova path=/var/lib/nova/compute_id state=file recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:34:54 np0005625203.localdomain sudo[275022]: pam_unix(sudo:session): session closed for user root
Feb 20 09:34:55 np0005625203.localdomain python3.9[275132]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:34:55 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:34:55 np0005625203.localdomain podman[275168]: 2026-02-20 09:34:55.806316165 +0000 UTC m=+0.103586757 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 20 09:34:55 np0005625203.localdomain podman[275168]: 2026-02-20 09:34:55.821337748 +0000 UTC m=+0.118608400 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 20 09:34:55 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:34:55 np0005625203.localdomain sudo[275265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tbhtizzdmbnvbuajberngpatkgqqxpfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580095.6772025-2695-40456730273875/AnsiballZ_file.py
Feb 20 09:34:55 np0005625203.localdomain sudo[275265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:34:56 np0005625203.localdomain python3.9[275267]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:34:56 np0005625203.localdomain sudo[275265]: pam_unix(sudo:session): session closed for user root
Feb 20 09:34:56 np0005625203.localdomain sudo[275375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-aqxulvrtpqwrngsbptkfuapxhmhgzcww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580096.45472-2719-150244100095147/AnsiballZ_file.py
Feb 20 09:34:56 np0005625203.localdomain sudo[275375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:34:56 np0005625203.localdomain python3.9[275377]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:34:56 np0005625203.localdomain sudo[275375]: pam_unix(sudo:session): session closed for user root
Feb 20 09:34:57 np0005625203.localdomain python3.9[275485]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/nova_compute_init state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:34:58 np0005625203.localdomain podman[240359]: time="2026-02-20T09:34:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:34:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:34:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 147345 "" "Go-http-client/1.1"
Feb 20 09:34:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:34:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16320 "" "Go-http-client/1.1"
Feb 20 09:34:59 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15837 DF PROTO=TCP SPT=50034 DPT=9102 SEQ=3765063134 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0A06050000000001030307) 
Feb 20 09:34:59 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:34:59 np0005625203.localdomain podman[275737]: 2026-02-20 09:34:59.775062045 +0000 UTC m=+0.090608066 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127)
Feb 20 09:34:59 np0005625203.localdomain podman[275737]: 2026-02-20 09:34:59.811382296 +0000 UTC m=+0.126928287 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Feb 20 09:34:59 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:34:59 np0005625203.localdomain sudo[275808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zsjkmzvrxvjvpgnlhlweguxocwziijom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580099.4880745-2821-112502605019963/AnsiballZ_container_config_data.py
Feb 20 09:34:59 np0005625203.localdomain sudo[275808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:35:00 np0005625203.localdomain python3.9[275810]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/nova_compute_init config_pattern=*.json debug=False
Feb 20 09:35:00 np0005625203.localdomain sudo[275808]: pam_unix(sudo:session): session closed for user root
Feb 20 09:35:00 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15838 DF PROTO=TCP SPT=50034 DPT=9102 SEQ=3765063134 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0A0A010000000001030307) 
Feb 20 09:35:00 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17007 DF PROTO=TCP SPT=53564 DPT=9102 SEQ=2179133752 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0A0C800000000001030307) 
Feb 20 09:35:01 np0005625203.localdomain sudo[275918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-zqjlwwkjkdjnjjymacstqdwpfnkgrtkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580100.5901337-2854-255150096422084/AnsiballZ_container_config_hash.py
Feb 20 09:35:01 np0005625203.localdomain sudo[275918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:35:01 np0005625203.localdomain python3.9[275920]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb 20 09:35:01 np0005625203.localdomain sudo[275918]: pam_unix(sudo:session): session closed for user root
Feb 20 09:35:02 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15839 DF PROTO=TCP SPT=50034 DPT=9102 SEQ=3765063134 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0A12010000000001030307) 
Feb 20 09:35:02 np0005625203.localdomain sudo[276028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-uxgfwcowxgaejezltdogylnkzzpzhsvf ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771580102.2896647-2884-50219282367691/AnsiballZ_edpm_container_manage.py
Feb 20 09:35:02 np0005625203.localdomain sudo[276028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:35:03 np0005625203.localdomain python3[276030]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/nova_compute_init config_id=nova_compute_init config_overrides={} config_patterns=*.json containers=['nova_compute_init'] log_base_path=/var/log/containers/stdouts debug=False
Feb 20 09:35:03 np0005625203.localdomain python3[276030]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
                                                               {
                                                                    "Id": "f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd",
                                                                    "Digest": "sha256:f96bd21c79ae0d7e8e17010c5e2573637d6c0f47f03e63134c477edd8ad73d83",
                                                                    "RepoTags": [
                                                                         "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"
                                                                    ],
                                                                    "RepoDigests": [
                                                                         "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:f96bd21c79ae0d7e8e17010c5e2573637d6c0f47f03e63134c477edd8ad73d83"
                                                                    ],
                                                                    "Parent": "",
                                                                    "Comment": "",
                                                                    "Created": "2026-01-30T06:31:38.534497001Z",
                                                                    "Config": {
                                                                         "User": "nova",
                                                                         "Env": [
                                                                              "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                                                                              "LANG=en_US.UTF-8",
                                                                              "TZ=UTC",
                                                                              "container=oci"
                                                                         ],
                                                                         "Entrypoint": [
                                                                              "dumb-init",
                                                                              "--single-child",
                                                                              "--"
                                                                         ],
                                                                         "Cmd": [
                                                                              "kolla_start"
                                                                         ],
                                                                         "Labels": {
                                                                              "io.buildah.version": "1.41.3",
                                                                              "maintainer": "OpenStack Kubernetes Operator team",
                                                                              "org.label-schema.build-date": "20260127",
                                                                              "org.label-schema.license": "GPLv2",
                                                                              "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                              "org.label-schema.schema-version": "1.0",
                                                                              "org.label-schema.vendor": "CentOS",
                                                                              "tcib_build_tag": "b85d0548925081ae8c6bdd697658cec4",
                                                                              "tcib_managed": "true"
                                                                         },
                                                                         "StopSignal": "SIGTERM"
                                                                    },
                                                                    "Version": "",
                                                                    "Author": "",
                                                                    "Architecture": "amd64",
                                                                    "Os": "linux",
                                                                    "Size": 1214548351,
                                                                    "VirtualSize": 1214548351,
                                                                    "GraphDriver": {
                                                                         "Name": "overlay",
                                                                         "Data": {
                                                                              "LowerDir": "/var/lib/containers/storage/overlay/f4838a4ef132546976a08c48bf55f89a91b54cc7f0728a84d5c77d24ba7a8992/diff:/var/lib/containers/storage/overlay/1d7b7d3208029afb8b179e48c365354efe7c39d41194e42a7d13168820ab51ad/diff:/var/lib/containers/storage/overlay/1ad843ea4b31b05bcf49ccd6faa74bd0d6976ffabe60466fd78caf7ec41bf4ac/diff:/var/lib/containers/storage/overlay/57c9a356b8a6d9095c1e6bfd1bb5d3b87c9d1b944c2c5d8a1da6e61dd690c595/diff",
                                                                              "UpperDir": "/var/lib/containers/storage/overlay/426448257cd6d6837b598e532a79ac3a86475cfca86b72c882b04ab6e3f65424/diff",
                                                                              "WorkDir": "/var/lib/containers/storage/overlay/426448257cd6d6837b598e532a79ac3a86475cfca86b72c882b04ab6e3f65424/work"
                                                                         }
                                                                    },
                                                                    "RootFS": {
                                                                         "Type": "layers",
                                                                         "Layers": [
                                                                              "sha256:57c9a356b8a6d9095c1e6bfd1bb5d3b87c9d1b944c2c5d8a1da6e61dd690c595",
                                                                              "sha256:315008a247098d7a6218ae8aaacc68c9c19036e3778f3bb6313e5d0200cfa613",
                                                                              "sha256:d3142d7a25f00adc375557623676c786baeb2b8fec29945db7fe79212198a495",
                                                                              "sha256:6cac2e473d63cf2a9b8ef2ea3f4fbc7fb780c57021c3588efd56da3aa8cf8843",
                                                                              "sha256:927dd86a09392106af537557be80232b7e8ca154daa00857c24fe20f9e550a50"
                                                                         ]
                                                                    },
                                                                    "Labels": {
                                                                         "io.buildah.version": "1.41.3",
                                                                         "maintainer": "OpenStack Kubernetes Operator team",
                                                                         "org.label-schema.build-date": "20260127",
                                                                         "org.label-schema.license": "GPLv2",
                                                                         "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                         "org.label-schema.schema-version": "1.0",
                                                                         "org.label-schema.vendor": "CentOS",
                                                                         "tcib_build_tag": "b85d0548925081ae8c6bdd697658cec4",
                                                                         "tcib_managed": "true"
                                                                    },
                                                                    "Annotations": {},
                                                                    "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",
                                                                    "User": "nova",
                                                                    "History": [
                                                                         {
                                                                              "created": "2026-01-28T05:56:51.126388624Z",
                                                                              "created_by": "/bin/sh -c #(nop) ADD file:54935d5b0598cdb1451aeae3c8627aade8d55dcef2e876b35185c8e36be64256 in / ",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-28T05:56:51.126459235Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 9 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20260127\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-28T05:56:53.726938221Z",
                                                                              "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:18.890429494Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",
                                                                              "comment": "FROM quay.io/centos/centos:stream9",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:18.890534417Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:18.890553228Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:18.890570688Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:18.890616649Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:18.890659121Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:19.232761948Z",
                                                                              "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:52.670543613Z",
                                                                              "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:55.650316471Z",
                                                                              "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which python-tcib-containers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:55.970652058Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/uid_gid_manage.sh /usr/local/bin/uid_gid_manage",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:56.274301506Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/uid_gid_manage",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:56.82928237Z",
                                                                              "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage kolla hugetlbfs libvirt qemu",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:57.134416869Z",
                                                                              "created_by": "/bin/sh -c touch /usr/local/bin/kolla_extend_start && chmod 755 /usr/local/bin/kolla_extend_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:57.444274899Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/set_configs.py /usr/local/bin/kolla_set_configs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:57.746599531Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_set_configs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:58.041383545Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/start.sh /usr/local/bin/kolla_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:58.352119949Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:58.671042058Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/httpd_setup.sh /usr/local/bin/kolla_httpd_setup",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:58.969834612Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_httpd_setup",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:59.264649297Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/copy_cacerts.sh /usr/local/bin/kolla_copy_cacerts",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:59.518696627Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_copy_cacerts",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:59.800434902Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/sudoers /etc/sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:00.115933627Z",
                                                                              "created_by": "/bin/sh -c chmod 440 /etc/sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:00.41398479Z",
                                                                              "created_by": "/bin/sh -c sed -ri '/^(passwd:|group:)/ s/systemd//g' /etc/nsswitch.conf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:03.414738437Z",
                                                                              "created_by": "/bin/sh -c dnf -y reinstall which && rpm -e --nodeps tzdata && dnf -y install tzdata",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:03.709666444Z",
                                                                              "created_by": "/bin/sh -c if [ ! -f \"/etc/localtime\" ]; then ln -s /usr/share/zoneinfo/Etc/UTC /etc/localtime; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:04.019868523Z",
                                                                              "created_by": "/bin/sh -c mkdir -p /openstack",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:05.41751141Z",
                                                                              "created_by": "/bin/sh -c if [ 'centos' == 'centos' ];then if [ -n \"$(rpm -qa redhat-release)\" ];then rpm -e --nodeps redhat-release; fi ; dnf -y install centos-stream-release; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:07.124324267Z",
                                                                              "created_by": "/bin/sh -c dnf update --excludepkgs redhat-release -y && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:07.124384329Z",
                                                                              "created_by": "/bin/sh -c #(nop) STOPSIGNAL SIGTERM",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:07.124399349Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENTRYPOINT [\"dumb-init\", \"--single-child\", \"--\"]",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:07.124410339Z",
                                                                              "created_by": "/bin/sh -c #(nop) CMD [\"kolla_start\"]",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:08.028503475Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"b85d0548925081ae8c6bdd697658cec4\""
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:12:56.089921987Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-base:b85d0548925081ae8c6bdd697658cec4",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:13:34.524252589Z",
                                                                              "created_by": "/bin/sh -c dnf install -y python3-barbicanclient python3-cinderclient python3-designateclient python3-glanceclient python3-ironicclient python3-keystoneclient python3-manilaclient python3-neutronclient python3-novaclient python3-observabilityclient python3-octaviaclient python3-openstackclient python3-swiftclient python3-pymemcache && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:13:37.262239859Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"b85d0548925081ae8c6bdd697658cec4\""
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:18:39.234075496Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-os:b85d0548925081ae8c6bdd697658cec4",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:18:42.686286019Z",
                                                                              "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage nova",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:18:54.133364958Z",
                                                                              "created_by": "/bin/sh -c mkdir -p /etc/ssh && touch /etc/ssh/ssh_known_host",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:20:10.283411186Z",
                                                                              "created_by": "/bin/sh -c dnf install -y openstack-nova-common && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:20:19.407054412Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"b85d0548925081ae8c6bdd697658cec4\""
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:29:42.656365894Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-nova-base:b85d0548925081ae8c6bdd697658cec4",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:31:37.451289936Z",
                                                                              "created_by": "/bin/sh -c dnf -y install e2fsprogs xfsprogs xorriso iscsi-initiator-utils nfs-utils targetcli nvme-cli device-mapper-multipath ceph-common openssh-clients openstack-nova-compute openvswitch swtpm swtpm-tools && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:31:38.151652427Z",
                                                                              "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage nova",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:31:38.532191009Z",
                                                                              "created_by": "/bin/sh -c rm -f /etc/machine-id",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:31:38.532298572Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER nova",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:31:44.609081717Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"b85d0548925081ae8c6bdd697658cec4\""
                                                                         }
                                                                    ],
                                                                    "NamesHistory": [
                                                                         "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"
                                                                    ]
                                                               }
                                                          ]
                                                          : quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Feb 20 09:35:03 np0005625203.localdomain sshd[276097]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:35:03 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=58977 DF PROTO=TCP SPT=38614 DPT=9102 SEQ=1246244886 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0A16800000000001030307) 
Feb 20 09:35:03 np0005625203.localdomain podman[276084]: 2026-02-20 09:35:03.668577405 +0000 UTC m=+0.331830488 container remove 77edbe1d945e2b38c97d8a18f93ee06ae61ca7889352e35af1b678b276f7cbeb (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': 'ceca6e0a7e6c49fe059956dd2c05951172f310c281831cfd96e02226e564c84f'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_id=nova_compute_init, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 20 09:35:03 np0005625203.localdomain python3[276030]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force nova_compute_init
Feb 20 09:35:03 np0005625203.localdomain podman[276099]: 
Feb 20 09:35:03 np0005625203.localdomain podman[276099]: 2026-02-20 09:35:03.778836447 +0000 UTC m=+0.091834264 container create 0eed8cca6dcf940995f6fb4bcae2adcd2e143ba5c2695aeb8d307e61e3ebb962 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, container_name=nova_compute_init, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': '1e1a77c58d37d586d6767e4afe88f7206bd2d76451c054002ad3541a88fa2ece'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=nova_compute_init, managed_by=edpm_ansible)
Feb 20 09:35:03 np0005625203.localdomain podman[276099]: 2026-02-20 09:35:03.735849841 +0000 UTC m=+0.048847728 image pull  quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Feb 20 09:35:03 np0005625203.localdomain python3[276030]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --env EDPM_CONFIG_HASH=1e1a77c58d37d586d6767e4afe88f7206bd2d76451c054002ad3541a88fa2ece --label config_id=nova_compute_init --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': '1e1a77c58d37d586d6767e4afe88f7206bd2d76451c054002ad3541a88fa2ece'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Feb 20 09:35:03 np0005625203.localdomain sudo[276028]: pam_unix(sudo:session): session closed for user root
Feb 20 09:35:04 np0005625203.localdomain sudo[276243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-xeizugxqxkknqihehgdcsoyhcowdqjnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580104.3059897-2908-55527593647461/AnsiballZ_stat.py
Feb 20 09:35:04 np0005625203.localdomain sudo[276243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:35:04 np0005625203.localdomain python3.9[276245]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:35:04 np0005625203.localdomain sudo[276243]: pam_unix(sudo:session): session closed for user root
Feb 20 09:35:05 np0005625203.localdomain sshd[276097]: Received disconnect from 34.131.211.42 port 56062:11: Bye Bye [preauth]
Feb 20 09:35:05 np0005625203.localdomain sshd[276097]: Disconnected from authenticating user root 34.131.211.42 port 56062 [preauth]
Feb 20 09:35:05 np0005625203.localdomain python3.9[276355]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb 20 09:35:06 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15840 DF PROTO=TCP SPT=50034 DPT=9102 SEQ=3765063134 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0A21C10000000001030307) 
Feb 20 09:35:06 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:35:06 np0005625203.localdomain podman[276389]: 2026-02-20 09:35:06.769638987 +0000 UTC m=+0.082016772 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:35:06 np0005625203.localdomain podman[276389]: 2026-02-20 09:35:06.83426595 +0000 UTC m=+0.146643735 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, container_name=ovn_controller)
Feb 20 09:35:06 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:35:06 np0005625203.localdomain sudo[276488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kqqbnxfuytfwkvixzpgovwsomkgrzsup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580106.668217-2989-99076650973807/AnsiballZ_stat.py
Feb 20 09:35:06 np0005625203.localdomain sudo[276488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:35:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:35:07 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 09:35:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:35:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:35:07 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 09:35:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:35:07 np0005625203.localdomain python3.9[276490]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:35:07 np0005625203.localdomain sudo[276488]: pam_unix(sudo:session): session closed for user root
Feb 20 09:35:07 np0005625203.localdomain sudo[276578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-vzwaexbckwgdkznbwvepbnlpjxtjtcbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580106.668217-2989-99076650973807/AnsiballZ_copy.py
Feb 20 09:35:07 np0005625203.localdomain sudo[276578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:35:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:35:07.646 161112 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:35:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:35:07.647 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:35:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:35:07.648 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:35:07 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:35:07.678 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:35:07 np0005625203.localdomain python3.9[276580]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771580106.668217-2989-99076650973807/.source.yaml _original_basename=.6h7ax65i follow=False checksum=4d557a266f0e30e386f17a3d7c6078d564f9be8b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:35:07 np0005625203.localdomain sudo[276578]: pam_unix(sudo:session): session closed for user root
Feb 20 09:35:08 np0005625203.localdomain sudo[276688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rbzfuapzhogzxjruemysalshrorswjbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580108.4172666-3041-248944406255960/AnsiballZ_file.py
Feb 20 09:35:08 np0005625203.localdomain sudo[276688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:35:08 np0005625203.localdomain python3.9[276690]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:35:08 np0005625203.localdomain sudo[276688]: pam_unix(sudo:session): session closed for user root
Feb 20 09:35:09 np0005625203.localdomain sudo[276798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-yydesatkktidrvekamsegkjzcmysmqpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580109.4053273-3064-237910333665076/AnsiballZ_file.py
Feb 20 09:35:09 np0005625203.localdomain sudo[276798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:35:09 np0005625203.localdomain python3.9[276800]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Feb 20 09:35:09 np0005625203.localdomain sudo[276798]: pam_unix(sudo:session): session closed for user root
Feb 20 09:35:10 np0005625203.localdomain sudo[276908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-znkvplnsmtzggoihmmkjogrqfzdlpzqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580110.2385411-3088-23873870557238/AnsiballZ_stat.py
Feb 20 09:35:10 np0005625203.localdomain sudo[276908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:35:10 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:35:10 np0005625203.localdomain podman[276911]: 2026-02-20 09:35:10.635628018 +0000 UTC m=+0.083371934 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, io.buildah.version=1.33.7, architecture=x86_64, vcs-type=git, io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, release=1770267347, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., version=9.7, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, build-date=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible)
Feb 20 09:35:10 np0005625203.localdomain podman[276911]: 2026-02-20 09:35:10.677336635 +0000 UTC m=+0.125080551 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, architecture=x86_64, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.7, config_id=openstack_network_exporter, container_name=openstack_network_exporter, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9/ubi-minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': 
True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, build-date=2026-02-05T04:57:10Z, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9)
Feb 20 09:35:10 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:35:10 np0005625203.localdomain python3.9[276910]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:35:10 np0005625203.localdomain sudo[276908]: pam_unix(sudo:session): session closed for user root
Feb 20 09:35:11 np0005625203.localdomain sudo[276985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-bdqnpluzfxtchfnamvpowhmnixnrhkaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580110.2385411-3088-23873870557238/AnsiballZ_file.py
Feb 20 09:35:11 np0005625203.localdomain sudo[276985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:35:11 np0005625203.localdomain python3.9[276987]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/nova_compute.json _original_basename=.kli81suy recurse=False state=file path=/var/lib/kolla/config_files/nova_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:35:11 np0005625203.localdomain sudo[276985]: pam_unix(sudo:session): session closed for user root
Feb 20 09:35:11 np0005625203.localdomain python3.9[277095]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/nova_compute state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:35:12 np0005625203.localdomain sshd[277292]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:35:13 np0005625203.localdomain sshd[277292]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:35:13 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:35:13 np0005625203.localdomain systemd[1]: tmp-crun.nr6MGt.mount: Deactivated successfully.
Feb 20 09:35:13 np0005625203.localdomain podman[277311]: 2026-02-20 09:35:13.381940025 +0000 UTC m=+0.090965808 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, container_name=ceilometer_agent_compute, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Feb 20 09:35:13 np0005625203.localdomain podman[277311]: 2026-02-20 09:35:13.394345418 +0000 UTC m=+0.103371221 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Feb 20 09:35:13 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:35:13 np0005625203.localdomain sudo[277420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sqfgbgghtvrxrwlyavfuugasfpgqcnmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580113.5871782-3199-128497142592800/AnsiballZ_container_config_data.py
Feb 20 09:35:13 np0005625203.localdomain sudo[277420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:35:14 np0005625203.localdomain python3.9[277422]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/nova_compute config_pattern=*.json debug=False
Feb 20 09:35:14 np0005625203.localdomain sudo[277420]: pam_unix(sudo:session): session closed for user root
Feb 20 09:35:14 np0005625203.localdomain sudo[277530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tvcbkifjorkouaqwgagrjrpqrwnepmht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580114.5314927-3232-118830053242723/AnsiballZ_container_config_hash.py
Feb 20 09:35:14 np0005625203.localdomain sudo[277530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:35:14 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15841 DF PROTO=TCP SPT=50034 DPT=9102 SEQ=3765063134 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0A42800000000001030307) 
Feb 20 09:35:14 np0005625203.localdomain python3.9[277532]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Feb 20 09:35:15 np0005625203.localdomain sudo[277530]: pam_unix(sudo:session): session closed for user root
Feb 20 09:35:15 np0005625203.localdomain sudo[277640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tbbhjlfcowsgdafdyykxgqgjrflxtfdf ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1771580115.4772046-3262-237961980701843/AnsiballZ_edpm_container_manage.py
Feb 20 09:35:15 np0005625203.localdomain sudo[277640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:35:15 np0005625203.localdomain python3[277642]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/nova_compute config_id=nova_compute config_overrides={} config_patterns=*.json containers=['nova_compute'] log_base_path=/var/log/containers/stdouts debug=False
Feb 20 09:35:16 np0005625203.localdomain python3[277642]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: [
                                                               {
                                                                    "Id": "f4e0688689eb3c524117ae65df199eeb4e620e591d26898b5cb25b819a2d79fd",
                                                                    "Digest": "sha256:f96bd21c79ae0d7e8e17010c5e2573637d6c0f47f03e63134c477edd8ad73d83",
                                                                    "RepoTags": [
                                                                         "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"
                                                                    ],
                                                                    "RepoDigests": [
                                                                         "quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:f96bd21c79ae0d7e8e17010c5e2573637d6c0f47f03e63134c477edd8ad73d83"
                                                                    ],
                                                                    "Parent": "",
                                                                    "Comment": "",
                                                                    "Created": "2026-01-30T06:31:38.534497001Z",
                                                                    "Config": {
                                                                         "User": "nova",
                                                                         "Env": [
                                                                              "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                                                                              "LANG=en_US.UTF-8",
                                                                              "TZ=UTC",
                                                                              "container=oci"
                                                                         ],
                                                                         "Entrypoint": [
                                                                              "dumb-init",
                                                                              "--single-child",
                                                                              "--"
                                                                         ],
                                                                         "Cmd": [
                                                                              "kolla_start"
                                                                         ],
                                                                         "Labels": {
                                                                              "io.buildah.version": "1.41.3",
                                                                              "maintainer": "OpenStack Kubernetes Operator team",
                                                                              "org.label-schema.build-date": "20260127",
                                                                              "org.label-schema.license": "GPLv2",
                                                                              "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                              "org.label-schema.schema-version": "1.0",
                                                                              "org.label-schema.vendor": "CentOS",
                                                                              "tcib_build_tag": "b85d0548925081ae8c6bdd697658cec4",
                                                                              "tcib_managed": "true"
                                                                         },
                                                                         "StopSignal": "SIGTERM"
                                                                    },
                                                                    "Version": "",
                                                                    "Author": "",
                                                                    "Architecture": "amd64",
                                                                    "Os": "linux",
                                                                    "Size": 1214548351,
                                                                    "VirtualSize": 1214548351,
                                                                    "GraphDriver": {
                                                                         "Name": "overlay",
                                                                         "Data": {
                                                                              "LowerDir": "/var/lib/containers/storage/overlay/f4838a4ef132546976a08c48bf55f89a91b54cc7f0728a84d5c77d24ba7a8992/diff:/var/lib/containers/storage/overlay/1d7b7d3208029afb8b179e48c365354efe7c39d41194e42a7d13168820ab51ad/diff:/var/lib/containers/storage/overlay/1ad843ea4b31b05bcf49ccd6faa74bd0d6976ffabe60466fd78caf7ec41bf4ac/diff:/var/lib/containers/storage/overlay/57c9a356b8a6d9095c1e6bfd1bb5d3b87c9d1b944c2c5d8a1da6e61dd690c595/diff",
                                                                              "UpperDir": "/var/lib/containers/storage/overlay/426448257cd6d6837b598e532a79ac3a86475cfca86b72c882b04ab6e3f65424/diff",
                                                                              "WorkDir": "/var/lib/containers/storage/overlay/426448257cd6d6837b598e532a79ac3a86475cfca86b72c882b04ab6e3f65424/work"
                                                                         }
                                                                    },
                                                                    "RootFS": {
                                                                         "Type": "layers",
                                                                         "Layers": [
                                                                              "sha256:57c9a356b8a6d9095c1e6bfd1bb5d3b87c9d1b944c2c5d8a1da6e61dd690c595",
                                                                              "sha256:315008a247098d7a6218ae8aaacc68c9c19036e3778f3bb6313e5d0200cfa613",
                                                                              "sha256:d3142d7a25f00adc375557623676c786baeb2b8fec29945db7fe79212198a495",
                                                                              "sha256:6cac2e473d63cf2a9b8ef2ea3f4fbc7fb780c57021c3588efd56da3aa8cf8843",
                                                                              "sha256:927dd86a09392106af537557be80232b7e8ca154daa00857c24fe20f9e550a50"
                                                                         ]
                                                                    },
                                                                    "Labels": {
                                                                         "io.buildah.version": "1.41.3",
                                                                         "maintainer": "OpenStack Kubernetes Operator team",
                                                                         "org.label-schema.build-date": "20260127",
                                                                         "org.label-schema.license": "GPLv2",
                                                                         "org.label-schema.name": "CentOS Stream 9 Base Image",
                                                                         "org.label-schema.schema-version": "1.0",
                                                                         "org.label-schema.vendor": "CentOS",
                                                                         "tcib_build_tag": "b85d0548925081ae8c6bdd697658cec4",
                                                                         "tcib_managed": "true"
                                                                    },
                                                                    "Annotations": {},
                                                                    "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",
                                                                    "User": "nova",
                                                                    "History": [
                                                                         {
                                                                              "created": "2026-01-28T05:56:51.126388624Z",
                                                                              "created_by": "/bin/sh -c #(nop) ADD file:54935d5b0598cdb1451aeae3c8627aade8d55dcef2e876b35185c8e36be64256 in / ",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-28T05:56:51.126459235Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL org.label-schema.schema-version=\"1.0\"     org.label-schema.name=\"CentOS Stream 9 Base Image\"     org.label-schema.vendor=\"CentOS\"     org.label-schema.license=\"GPLv2\"     org.label-schema.build-date=\"20260127\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-28T05:56:53.726938221Z",
                                                                              "created_by": "/bin/sh -c #(nop) CMD [\"/bin/bash\"]"
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:18.890429494Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL maintainer=\"OpenStack Kubernetes Operator team\"",
                                                                              "comment": "FROM quay.io/centos/centos:stream9",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:18.890534417Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL tcib_managed=true",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:18.890553228Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV LANG=\"en_US.UTF-8\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:18.890570688Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV TZ=\"UTC\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:18.890616649Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENV container=\"oci\"",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:18.890659121Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:19.232761948Z",
                                                                              "created_by": "/bin/sh -c if [ -f \"/etc/yum.repos.d/ubi.repo\" ]; then rm -f /etc/yum.repos.d/ubi.repo && dnf clean all && rm -rf /var/cache/dnf; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:52.670543613Z",
                                                                              "created_by": "/bin/sh -c dnf install -y crudini && crudini --del /etc/dnf/dnf.conf main override_install_langs && crudini --set /etc/dnf/dnf.conf main clean_requirements_on_remove True && crudini --set /etc/dnf/dnf.conf main exactarch 1 && crudini --set /etc/dnf/dnf.conf main gpgcheck 1 && crudini --set /etc/dnf/dnf.conf main install_weak_deps False && if [ 'centos' == 'centos' ];then crudini --set /etc/dnf/dnf.conf main best False; fi && crudini --set /etc/dnf/dnf.conf main installonly_limit 0 && crudini --set /etc/dnf/dnf.conf main keepcache 0 && crudini --set /etc/dnf/dnf.conf main obsoletes 1 && crudini --set /etc/dnf/dnf.conf main plugins 1 && crudini --set /etc/dnf/dnf.conf main skip_missing_names_on_install False && crudini --set /etc/dnf/dnf.conf main tsflags nodocs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:55.650316471Z",
                                                                              "created_by": "/bin/sh -c dnf install -y ca-certificates dumb-init glibc-langpack-en procps-ng python3 sudo util-linux-user which python-tcib-containers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:55.970652058Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/uid_gid_manage.sh /usr/local/bin/uid_gid_manage",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:56.274301506Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/uid_gid_manage",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:56.82928237Z",
                                                                              "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage kolla hugetlbfs libvirt qemu",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:57.134416869Z",
                                                                              "created_by": "/bin/sh -c touch /usr/local/bin/kolla_extend_start && chmod 755 /usr/local/bin/kolla_extend_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:57.444274899Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/set_configs.py /usr/local/bin/kolla_set_configs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:57.746599531Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_set_configs",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:58.041383545Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/start.sh /usr/local/bin/kolla_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:58.352119949Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_start",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:58.671042058Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/httpd_setup.sh /usr/local/bin/kolla_httpd_setup",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:58.969834612Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_httpd_setup",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:59.264649297Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/copy_cacerts.sh /usr/local/bin/kolla_copy_cacerts",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:59.518696627Z",
                                                                              "created_by": "/bin/sh -c chmod 755 /usr/local/bin/kolla_copy_cacerts",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:10:59.800434902Z",
                                                                              "created_by": "/bin/sh -c cp /usr/share/tcib/container-images/kolla/base/sudoers /etc/sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:00.115933627Z",
                                                                              "created_by": "/bin/sh -c chmod 440 /etc/sudoers",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:00.41398479Z",
                                                                              "created_by": "/bin/sh -c sed -ri '/^(passwd:|group:)/ s/systemd//g' /etc/nsswitch.conf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:03.414738437Z",
                                                                              "created_by": "/bin/sh -c dnf -y reinstall which && rpm -e --nodeps tzdata && dnf -y install tzdata",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:03.709666444Z",
                                                                              "created_by": "/bin/sh -c if [ ! -f \"/etc/localtime\" ]; then ln -s /usr/share/zoneinfo/Etc/UTC /etc/localtime; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:04.019868523Z",
                                                                              "created_by": "/bin/sh -c mkdir -p /openstack",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:05.41751141Z",
                                                                              "created_by": "/bin/sh -c if [ 'centos' == 'centos' ];then if [ -n \"$(rpm -qa redhat-release)\" ];then rpm -e --nodeps redhat-release; fi ; dnf -y install centos-stream-release; fi",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:07.124324267Z",
                                                                              "created_by": "/bin/sh -c dnf update --excludepkgs redhat-release -y && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:07.124384329Z",
                                                                              "created_by": "/bin/sh -c #(nop) STOPSIGNAL SIGTERM",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:07.124399349Z",
                                                                              "created_by": "/bin/sh -c #(nop) ENTRYPOINT [\"dumb-init\", \"--single-child\", \"--\"]",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:07.124410339Z",
                                                                              "created_by": "/bin/sh -c #(nop) CMD [\"kolla_start\"]",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:11:08.028503475Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"b85d0548925081ae8c6bdd697658cec4\""
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:12:56.089921987Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-base:b85d0548925081ae8c6bdd697658cec4",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:13:34.524252589Z",
                                                                              "created_by": "/bin/sh -c dnf install -y python3-barbicanclient python3-cinderclient python3-designateclient python3-glanceclient python3-ironicclient python3-keystoneclient python3-manilaclient python3-neutronclient python3-novaclient python3-observabilityclient python3-octaviaclient python3-openstackclient python3-swiftclient python3-pymemcache && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:13:37.262239859Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"b85d0548925081ae8c6bdd697658cec4\""
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:18:39.234075496Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-os:b85d0548925081ae8c6bdd697658cec4",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:18:42.686286019Z",
                                                                              "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage nova",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:18:54.133364958Z",
                                                                              "created_by": "/bin/sh -c mkdir -p /etc/ssh && touch /etc/ssh/ssh_known_host",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:20:10.283411186Z",
                                                                              "created_by": "/bin/sh -c dnf install -y openstack-nova-common && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:20:19.407054412Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"b85d0548925081ae8c6bdd697658cec4\""
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:29:42.656365894Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER root",
                                                                              "comment": "FROM quay.rdoproject.org/podified-antelope-centos9/openstack-nova-base:b85d0548925081ae8c6bdd697658cec4",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:31:37.451289936Z",
                                                                              "created_by": "/bin/sh -c dnf -y install e2fsprogs xfsprogs xorriso iscsi-initiator-utils nfs-utils targetcli nvme-cli device-mapper-multipath ceph-common openssh-clients openstack-nova-compute openvswitch swtpm swtpm-tools && dnf clean all && rm -rf /var/cache/dnf",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:31:38.151652427Z",
                                                                              "created_by": "/bin/sh -c bash /usr/local/bin/uid_gid_manage nova",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:31:38.532191009Z",
                                                                              "created_by": "/bin/sh -c rm -f /etc/machine-id",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:31:38.532298572Z",
                                                                              "created_by": "/bin/sh -c #(nop) USER nova",
                                                                              "empty_layer": true
                                                                         },
                                                                         {
                                                                              "created": "2026-01-30T06:31:44.609081717Z",
                                                                              "created_by": "/bin/sh -c #(nop) LABEL \"tcib_build_tag\"=\"b85d0548925081ae8c6bdd697658cec4\""
                                                                         }
                                                                    ],
                                                                    "NamesHistory": [
                                                                         "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified"
                                                                    ]
                                                               }
                                                          ]
                                                          : quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Feb 20 09:35:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:35:17.181 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:35:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:35:17.181 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:35:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:35:17.182 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:35:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:35:17.182 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:35:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:35:17.182 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:35:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:35:17.182 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:35:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:35:17.182 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:35:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:35:17.182 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:35:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:35:17.182 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:35:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:35:17.183 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:35:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:35:17.183 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:35:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:35:17.183 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:35:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:35:17.183 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:35:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:35:17.183 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:35:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:35:17.183 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:35:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:35:17.183 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:35:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:35:17.184 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:35:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:35:17.184 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:35:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:35:17.184 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:35:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:35:17.184 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:35:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:35:17.184 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:35:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:35:17.184 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:35:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:35:17.184 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:35:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:35:17.184 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:35:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:35:17.185 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:35:19 np0005625203.localdomain sudo[277703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:35:19 np0005625203.localdomain sudo[277703]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:35:19 np0005625203.localdomain sudo[277703]: pam_unix(sudo:session): session closed for user root
Feb 20 09:35:19 np0005625203.localdomain sudo[277721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Feb 20 09:35:19 np0005625203.localdomain sudo[277721]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:35:19 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:35:19 np0005625203.localdomain systemd[1]: tmp-crun.Z8U6cQ.mount: Deactivated successfully.
Feb 20 09:35:19 np0005625203.localdomain podman[277779]: 2026-02-20 09:35:19.782808579 +0000 UTC m=+0.093416312 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 20 09:35:19 np0005625203.localdomain podman[277779]: 2026-02-20 09:35:19.79127371 +0000 UTC m=+0.101881483 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 20 09:35:19 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 09:35:20 np0005625203.localdomain podman[277828]: 2026-02-20 09:35:20.074456388 +0000 UTC m=+0.109139578 container exec f6bf94ad66e54cdd610286b233c639418eb2b995978f1a145d6cfa77a016071d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1770267347, version=7, name=rhceph, ceph=True, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, build-date=2026-02-09T10:25:24Z, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, GIT_BRANCH=main, RELEASE=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers)
Feb 20 09:35:20 np0005625203.localdomain podman[277828]: 2026-02-20 09:35:20.182356966 +0000 UTC m=+0.217040176 container exec_died f6bf94ad66e54cdd610286b233c639418eb2b995978f1a145d6cfa77a016071d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, CEPH_POINT_RELEASE=, GIT_BRANCH=main, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-09T10:25:24Z, name=rhceph, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.42.2, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, architecture=x86_64, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.openshift.tags=rhceph ceph)
Feb 20 09:35:20 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:35:20.216 228941 DEBUG oslo_service.periodic_task [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:35:20 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:35:20.217 228941 DEBUG nova.compute.manager [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 20 09:35:20 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:35:20.217 228941 DEBUG nova.compute.manager [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 20 09:35:20 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:35:20.236 228941 DEBUG nova.compute.manager [None req-2380d78f-0d74-4b46-9853-7b14755bd8fa - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 20 09:35:20 np0005625203.localdomain sudo[277721]: pam_unix(sudo:session): session closed for user root
Feb 20 09:35:20 np0005625203.localdomain sudo[277896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:35:20 np0005625203.localdomain sudo[277896]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:35:20 np0005625203.localdomain sudo[277896]: pam_unix(sudo:session): session closed for user root
Feb 20 09:35:20 np0005625203.localdomain sudo[277914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 09:35:20 np0005625203.localdomain sudo[277914]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:35:20 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:35:20.691 228941 WARNING amqp [-] Received method (60, 30) during closing channel 1. This method will be ignored
Feb 20 09:35:20 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:35:20.695 228941 DEBUG oslo_concurrency.lockutils [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 20 09:35:20 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:35:20.695 228941 DEBUG oslo_concurrency.lockutils [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 20 09:35:20 np0005625203.localdomain nova_compute[228937]: 2026-02-20 09:35:20.696 228941 DEBUG oslo_concurrency.lockutils [None req-490d33ac-53d5-4204-af2a-f7a8cd243c25 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 20 09:35:20 np0005625203.localdomain systemd[1]: tmp-crun.ph1KlQ.mount: Deactivated successfully.
Feb 20 09:35:21 np0005625203.localdomain virtqemud[228198]: End of file while reading data: Input/output error
Feb 20 09:35:21 np0005625203.localdomain systemd[1]: libpod-6e8885cc5a65c95462bf5deb083556a1dda2bba1c8e7634587477c8f801eac9f.scope: Deactivated successfully.
Feb 20 09:35:21 np0005625203.localdomain systemd[1]: libpod-6e8885cc5a65c95462bf5deb083556a1dda2bba1c8e7634587477c8f801eac9f.scope: Consumed 15.654s CPU time.
Feb 20 09:35:21 np0005625203.localdomain podman[277691]: 2026-02-20 09:35:21.22722574 +0000 UTC m=+4.938845509 container died 6e8885cc5a65c95462bf5deb083556a1dda2bba1c8e7634587477c8f801eac9f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=nova_compute, container_name=nova_compute, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-30a66d126d7b377a2bbf9bc0d57e51bdf55bb85eee6d379d83e3d3aa2e3d7293-ceca6e0a7e6c49fe059956dd2c05951172f310c281831cfd96e02226e564c84f'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127)
Feb 20 09:35:21 np0005625203.localdomain sudo[277914]: pam_unix(sudo:session): session closed for user root
Feb 20 09:35:21 np0005625203.localdomain podman[277691]: 2026-02-20 09:35:21.373574191 +0000 UTC m=+5.085193940 container cleanup 6e8885cc5a65c95462bf5deb083556a1dda2bba1c8e7634587477c8f801eac9f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-30a66d126d7b377a2bbf9bc0d57e51bdf55bb85eee6d379d83e3d3aa2e3d7293-ceca6e0a7e6c49fe059956dd2c05951172f310c281831cfd96e02226e564c84f'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=nova_compute, container_name=nova_compute, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 20 09:35:21 np0005625203.localdomain python3[277642]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman stop nova_compute
Feb 20 09:35:21 np0005625203.localdomain podman[277956]: 2026-02-20 09:35:21.387240118 +0000 UTC m=+0.148801387 container cleanup 6e8885cc5a65c95462bf5deb083556a1dda2bba1c8e7634587477c8f801eac9f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=nova_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-30a66d126d7b377a2bbf9bc0d57e51bdf55bb85eee6d379d83e3d3aa2e3d7293-ceca6e0a7e6c49fe059956dd2c05951172f310c281831cfd96e02226e564c84f'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20260127)
Feb 20 09:35:21 np0005625203.localdomain podman[277984]: 2026-02-20 09:35:21.482765597 +0000 UTC m=+0.086428242 container remove 6e8885cc5a65c95462bf5deb083556a1dda2bba1c8e7634587477c8f801eac9f (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, config_id=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-30a66d126d7b377a2bbf9bc0d57e51bdf55bb85eee6d379d83e3d3aa2e3d7293-ceca6e0a7e6c49fe059956dd2c05951172f310c281831cfd96e02226e564c84f'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute)
Feb 20 09:35:21 np0005625203.localdomain python3[277642]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman rm --force nova_compute
Feb 20 09:35:21 np0005625203.localdomain podman[277991]: Error: no container with name or ID "nova_compute" found: no such container
Feb 20 09:35:21 np0005625203.localdomain systemd[1]: edpm_nova_compute.service: Control process exited, code=exited, status=125/n/a
Feb 20 09:35:21 np0005625203.localdomain podman[278010]: 
Feb 20 09:35:21 np0005625203.localdomain podman[278020]: Error: no container with name or ID "nova_compute" found: no such container
Feb 20 09:35:21 np0005625203.localdomain systemd[1]: edpm_nova_compute.service: Control process exited, code=exited, status=125/n/a
Feb 20 09:35:21 np0005625203.localdomain systemd[1]: edpm_nova_compute.service: Failed with result 'exit-code'.
Feb 20 09:35:21 np0005625203.localdomain podman[278010]: 2026-02-20 09:35:21.605041493 +0000 UTC m=+0.096966934 container create dec368cb98dfcd826a2717a8561372ae292aea08a549971ca164a51f21bcda22 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=nova_compute, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-30a66d126d7b377a2bbf9bc0d57e51bdf55bb85eee6d379d83e3d3aa2e3d7293-1e1a77c58d37d586d6767e4afe88f7206bd2d76451c054002ad3541a88fa2ece'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:35:21 np0005625203.localdomain podman[278010]: 2026-02-20 09:35:21.555986544 +0000 UTC m=+0.047911985 image pull  quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Feb 20 09:35:21 np0005625203.localdomain python3[277642]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-30a66d126d7b377a2bbf9bc0d57e51bdf55bb85eee6d379d83e3d3aa2e3d7293-1e1a77c58d37d586d6767e4afe88f7206bd2d76451c054002ad3541a88fa2ece --label config_id=nova_compute --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-30a66d126d7b377a2bbf9bc0d57e51bdf55bb85eee6d379d83e3d3aa2e3d7293-1e1a77c58d37d586d6767e4afe88f7206bd2d76451c054002ad3541a88fa2ece'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro 
--volume /var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Feb 20 09:35:21 np0005625203.localdomain systemd[1]: edpm_nova_compute.service: Scheduled restart job, restart counter is at 1.
Feb 20 09:35:21 np0005625203.localdomain systemd[1]: Started libpod-conmon-dec368cb98dfcd826a2717a8561372ae292aea08a549971ca164a51f21bcda22.scope.
Feb 20 09:35:21 np0005625203.localdomain systemd[1]: Stopped nova_compute container.
Feb 20 09:35:21 np0005625203.localdomain systemd[1]: Starting nova_compute container...
Feb 20 09:35:21 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:35:21 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d10a7eaed4abd37a8eee11c3b201ea0d80651156a5c5627084b0efdfa5748781/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Feb 20 09:35:21 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d10a7eaed4abd37a8eee11c3b201ea0d80651156a5c5627084b0efdfa5748781/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Feb 20 09:35:21 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d10a7eaed4abd37a8eee11c3b201ea0d80651156a5c5627084b0efdfa5748781/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Feb 20 09:35:21 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d10a7eaed4abd37a8eee11c3b201ea0d80651156a5c5627084b0efdfa5748781/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Feb 20 09:35:21 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d10a7eaed4abd37a8eee11c3b201ea0d80651156a5c5627084b0efdfa5748781/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Feb 20 09:35:21 np0005625203.localdomain podman[278035]: 2026-02-20 09:35:21.754270192 +0000 UTC m=+0.129591090 container init dec368cb98dfcd826a2717a8561372ae292aea08a549971ca164a51f21bcda22 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-30a66d126d7b377a2bbf9bc0d57e51bdf55bb85eee6d379d83e3d3aa2e3d7293-1e1a77c58d37d586d6767e4afe88f7206bd2d76451c054002ad3541a88fa2ece'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=nova_compute, managed_by=edpm_ansible)
Feb 20 09:35:21 np0005625203.localdomain podman[278035]: 2026-02-20 09:35:21.764414322 +0000 UTC m=+0.139735220 container start dec368cb98dfcd826a2717a8561372ae292aea08a549971ca164a51f21bcda22 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=nova_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-30a66d126d7b377a2bbf9bc0d57e51bdf55bb85eee6d379d83e3d3aa2e3d7293-1e1a77c58d37d586d6767e4afe88f7206bd2d76451c054002ad3541a88fa2ece'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Feb 20 09:35:21 np0005625203.localdomain nova_compute[278050]: + sudo -E kolla_set_configs
Feb 20 09:35:21 np0005625203.localdomain python3[277642]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman start nova_compute
Feb 20 09:35:21 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-5f3e0db24b6063b24bb4df15bb988811b5bcf9e6789b213c4536d36feae343ea-merged.mount: Deactivated successfully.
Feb 20 09:35:21 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6e8885cc5a65c95462bf5deb083556a1dda2bba1c8e7634587477c8f801eac9f-userdata-shm.mount: Deactivated successfully.
Feb 20 09:35:21 np0005625203.localdomain systemd[1]: Started nova_compute container.
Feb 20 09:35:21 np0005625203.localdomain nova_compute[278050]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb 20 09:35:21 np0005625203.localdomain nova_compute[278050]: INFO:__main__:Validating config file
Feb 20 09:35:21 np0005625203.localdomain nova_compute[278050]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb 20 09:35:21 np0005625203.localdomain nova_compute[278050]: INFO:__main__:Copying service configuration files
Feb 20 09:35:21 np0005625203.localdomain nova_compute[278050]: INFO:__main__:Deleting /etc/nova/nova.conf
Feb 20 09:35:21 np0005625203.localdomain nova_compute[278050]: INFO:__main__:Copying /var/lib/kolla/config_files/src/nova-blank.conf to /etc/nova/nova.conf
Feb 20 09:35:21 np0005625203.localdomain nova_compute[278050]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Feb 20 09:35:21 np0005625203.localdomain nova_compute[278050]: INFO:__main__:Copying /var/lib/kolla/config_files/src/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Feb 20 09:35:21 np0005625203.localdomain nova_compute[278050]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Feb 20 09:35:21 np0005625203.localdomain nova_compute[278050]: INFO:__main__:Copying /var/lib/kolla/config_files/src/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb 20 09:35:21 np0005625203.localdomain nova_compute[278050]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb 20 09:35:21 np0005625203.localdomain nova_compute[278050]: INFO:__main__:Copying /var/lib/kolla/config_files/src/99-nova-compute-cells-workarounds.conf to /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Feb 20 09:35:21 np0005625203.localdomain nova_compute[278050]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Feb 20 09:35:21 np0005625203.localdomain nova_compute[278050]: INFO:__main__:Copying /var/lib/kolla/config_files/src/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Feb 20 09:35:21 np0005625203.localdomain nova_compute[278050]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Feb 20 09:35:21 np0005625203.localdomain nova_compute[278050]: INFO:__main__:Copying /var/lib/kolla/config_files/src/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb 20 09:35:21 np0005625203.localdomain nova_compute[278050]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb 20 09:35:21 np0005625203.localdomain nova_compute[278050]: INFO:__main__:Deleting /etc/ceph
Feb 20 09:35:21 np0005625203.localdomain nova_compute[278050]: INFO:__main__:Creating directory /etc/ceph
Feb 20 09:35:21 np0005625203.localdomain nova_compute[278050]: INFO:__main__:Setting permission for /etc/ceph
Feb 20 09:35:21 np0005625203.localdomain nova_compute[278050]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Feb 20 09:35:21 np0005625203.localdomain nova_compute[278050]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Feb 20 09:35:21 np0005625203.localdomain nova_compute[278050]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceph/ceph.conf to /etc/ceph/ceph.conf
Feb 20 09:35:21 np0005625203.localdomain nova_compute[278050]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Feb 20 09:35:21 np0005625203.localdomain nova_compute[278050]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Feb 20 09:35:21 np0005625203.localdomain nova_compute[278050]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Feb 20 09:35:21 np0005625203.localdomain nova_compute[278050]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb 20 09:35:21 np0005625203.localdomain nova_compute[278050]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Feb 20 09:35:21 np0005625203.localdomain nova_compute[278050]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ssh-config to /var/lib/nova/.ssh/config
Feb 20 09:35:21 np0005625203.localdomain nova_compute[278050]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb 20 09:35:21 np0005625203.localdomain nova_compute[278050]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Feb 20 09:35:21 np0005625203.localdomain nova_compute[278050]: INFO:__main__:Copying /var/lib/kolla/config_files/src/run-on-host to /usr/sbin/iscsiadm
Feb 20 09:35:21 np0005625203.localdomain nova_compute[278050]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Feb 20 09:35:21 np0005625203.localdomain nova_compute[278050]: INFO:__main__:Writing out command to execute
Feb 20 09:35:21 np0005625203.localdomain nova_compute[278050]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Feb 20 09:35:21 np0005625203.localdomain nova_compute[278050]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Feb 20 09:35:21 np0005625203.localdomain nova_compute[278050]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Feb 20 09:35:21 np0005625203.localdomain nova_compute[278050]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb 20 09:35:21 np0005625203.localdomain nova_compute[278050]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb 20 09:35:21 np0005625203.localdomain nova_compute[278050]: ++ cat /run_command
Feb 20 09:35:21 np0005625203.localdomain nova_compute[278050]: + CMD=nova-compute
Feb 20 09:35:21 np0005625203.localdomain nova_compute[278050]: + ARGS=
Feb 20 09:35:21 np0005625203.localdomain nova_compute[278050]: + sudo kolla_copy_cacerts
Feb 20 09:35:21 np0005625203.localdomain nova_compute[278050]: + [[ ! -n '' ]]
Feb 20 09:35:21 np0005625203.localdomain nova_compute[278050]: + . kolla_extend_start
Feb 20 09:35:21 np0005625203.localdomain nova_compute[278050]: + echo 'Running command: '\''nova-compute'\'''
Feb 20 09:35:21 np0005625203.localdomain nova_compute[278050]: Running command: 'nova-compute'
Feb 20 09:35:21 np0005625203.localdomain nova_compute[278050]: + umask 0022
Feb 20 09:35:21 np0005625203.localdomain nova_compute[278050]: + exec nova-compute
Feb 20 09:35:21 np0005625203.localdomain sudo[278079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:35:21 np0005625203.localdomain sudo[278079]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:35:21 np0005625203.localdomain sudo[278079]: pam_unix(sudo:session): session closed for user root
Feb 20 09:35:21 np0005625203.localdomain sudo[277640]: pam_unix(sudo:session): session closed for user root
Feb 20 09:35:22 np0005625203.localdomain sudo[278222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-ungloazxugngvifijnxydorzafjfllfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580122.6058927-3286-233306803354364/AnsiballZ_stat.py
Feb 20 09:35:22 np0005625203.localdomain sudo[278222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:35:23 np0005625203.localdomain python3.9[278224]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:35:23 np0005625203.localdomain sudo[278222]: pam_unix(sudo:session): session closed for user root
Feb 20 09:35:23 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:23.500 278063 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Feb 20 09:35:23 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:23.501 278063 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Feb 20 09:35:23 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:23.501 278063 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Feb 20 09:35:23 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:23.502 278063 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Feb 20 09:35:23 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:23.627 278063 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:35:23 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:23.652 278063 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.025s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:35:23 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:23.652 278063 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Feb 20 09:35:23 np0005625203.localdomain sudo[278338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-oelpgixtwulrbqmttlixqfdhmlqscyun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580123.4526577-3313-129546620987093/AnsiballZ_file.py
Feb 20 09:35:23 np0005625203.localdomain sudo[278338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:35:23 np0005625203.localdomain python3.9[278340]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:35:23 np0005625203.localdomain sudo[278338]: pam_unix(sudo:session): session closed for user root
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.056 278063 INFO nova.virt.driver [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Feb 20 09:35:24 np0005625203.localdomain sudo[278393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-kgtkpoervoqtkwtjmahsinartvhsibkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580123.4526577-3313-129546620987093/AnsiballZ_stat.py
Feb 20 09:35:24 np0005625203.localdomain sudo[278393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.176 278063 INFO nova.compute.provider_config [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.183 278063 DEBUG oslo_concurrency.lockutils [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.183 278063 DEBUG oslo_concurrency.lockutils [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.183 278063 DEBUG oslo_concurrency.lockutils [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.184 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.184 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.184 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.185 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.185 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.185 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.185 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.185 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.185 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.185 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.186 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.186 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.186 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.186 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.186 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.186 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.186 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.186 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.187 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.187 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] console_host                   = np0005625203.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.187 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.187 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.187 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.187 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.187 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.188 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.188 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.188 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.188 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.188 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.188 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.188 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.189 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.189 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.189 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.189 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.189 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.189 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.189 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] host                           = np0005625203.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.189 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.190 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.190 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.190 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.190 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.190 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.190 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.190 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.191 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.191 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.191 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.191 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.191 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.191 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.192 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.192 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.192 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.193 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.193 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.193 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.193 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.194 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.194 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.194 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.194 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.195 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.195 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.195 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.195 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.196 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.196 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.196 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.196 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.197 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.197 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.197 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.197 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.198 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.198 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.198 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.198 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.199 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] my_block_storage_ip            = 192.168.122.107 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.199 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] my_ip                          = 192.168.122.107 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.199 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.199 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.199 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.200 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.200 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.200 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.200 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.201 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.201 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.201 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.201 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.202 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.202 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.202 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.202 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.202 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.203 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.203 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.203 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.203 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.204 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.204 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.204 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.204 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.205 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.205 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.205 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.205 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.206 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.206 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.206 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.206 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.206 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.207 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.207 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.207 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.207 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.208 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.208 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.208 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.208 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.209 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.209 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.209 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.209 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.209 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.210 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.210 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.210 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.210 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.211 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.211 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.211 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.211 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.212 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.212 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.212 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.212 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.213 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.213 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.213 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.213 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.213 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.214 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.214 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.214 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.214 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.215 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.215 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.215 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.215 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.216 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.216 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.216 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.216 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.217 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.217 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.217 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.217 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.218 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.218 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.218 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.218 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.219 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.219 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.219 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.219 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.220 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.220 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.220 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.220 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.220 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.221 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.221 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.221 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.221 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.222 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.222 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.222 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.222 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.223 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.223 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.223 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.223 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.224 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.224 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.224 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.224 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.225 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.225 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.225 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.225 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.226 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.226 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.226 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.226 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.227 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.227 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.227 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.227 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.227 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.228 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.228 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.228 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.228 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.229 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.229 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.229 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.229 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.230 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.230 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.230 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.230 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.231 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.231 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.231 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.231 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.232 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.232 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.232 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.232 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cinder.os_region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.232 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.233 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.233 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.233 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.233 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.234 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.234 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.234 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.234 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.235 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.235 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.235 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.235 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.236 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.236 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.236 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.236 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.236 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.237 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.237 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.237 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.237 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.238 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.238 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.238 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.238 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.239 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.239 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.239 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.239 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.240 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.240 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.240 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.240 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.240 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.241 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.241 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.241 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.242 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.242 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.242 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.242 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.242 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.243 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.243 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.243 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.243 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.243 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.244 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.244 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.244 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.244 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.244 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.244 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.245 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.245 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.245 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.245 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.245 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.246 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.246 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.246 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.246 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.246 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.246 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.247 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.247 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.247 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.247 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.247 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.247 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.248 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.248 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.248 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.248 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.248 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.248 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.249 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.249 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.249 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.249 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.249 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.250 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.250 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.250 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.250 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.250 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.250 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.251 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.251 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.251 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.251 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.252 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.252 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.252 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.252 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.252 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.253 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.253 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.253 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.253 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.253 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.253 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.254 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.254 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.254 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.254 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.254 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.254 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.255 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.255 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.255 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.255 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.255 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.255 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.256 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.256 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.256 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.256 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.256 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.256 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.257 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.257 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.257 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.257 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.257 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.258 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.258 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.258 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.258 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.258 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.258 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.259 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.259 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.259 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.260 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.260 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.260 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.260 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.260 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.261 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.261 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.261 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.261 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.261 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.261 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.262 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.262 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.262 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.262 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.262 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.263 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.263 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.263 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.263 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.263 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.263 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.264 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.264 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.264 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.264 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.264 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.265 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.265 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.265 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.265 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.265 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.265 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.266 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.266 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.266 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.266 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] barbican.barbican_region_name  = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.266 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.267 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.267 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.267 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.267 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.267 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.268 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.268 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.268 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.268 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.268 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.268 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.269 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.269 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.269 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.269 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.269 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.270 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.270 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.270 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.270 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.270 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.270 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.271 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.271 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.271 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.271 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.271 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.271 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.272 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.272 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.272 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.272 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.272 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.273 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.273 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.273 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.273 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.273 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.273 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.274 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.274 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.274 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.274 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.274 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.275 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.275 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.275 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.275 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.275 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.275 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.276 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.276 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.276 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.276 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.276 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.277 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.277 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.277 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.277 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.277 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.277 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.278 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.278 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.278 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.278 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.278 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.279 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.279 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.279 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.279 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.279 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.279 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.280 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.280 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.280 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.280 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.280 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.281 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.281 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.281 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.281 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.281 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.281 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.282 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.282 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.282 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.282 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.282 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.283 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.283 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.283 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.283 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.283 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.283 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.284 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.284 278063 WARNING oslo_config.cfg [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: live_migration_uri is deprecated for removal in favor of two other options that
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: and ``live_migration_inbound_addr`` respectively.
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: ).  Its value may be silently ignored in the future.
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.284 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.live_migration_uri     = qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.284 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.live_migration_with_native_tls = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.285 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.285 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.285 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.285 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.285 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.286 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.286 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.286 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.286 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.286 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.286 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.287 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.287 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.287 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.287 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.287 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.288 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.rbd_secret_uuid        = a8557ee9-b55d-5519-942c-cf8f6172f1d8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.288 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.288 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.288 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.288 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.288 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.289 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.289 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.289 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.289 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.289 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.290 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.290 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.290 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.290 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.290 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.291 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.291 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.291 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.291 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.291 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.291 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.292 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.292 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.292 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.292 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.292 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.293 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.293 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.293 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.293 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.293 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.294 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.294 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.294 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.294 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.294 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.295 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.295 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.295 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.295 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.295 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.295 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.296 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.296 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.296 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.296 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.296 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.297 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.297 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.297 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.297 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.297 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.297 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.298 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.298 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.298 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.298 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.298 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.298 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.299 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.299 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.299 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.299 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.299 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.300 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.300 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.300 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.300 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.301 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.301 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.301 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.301 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] placement.auth_url             = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.301 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.302 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.302 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.302 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.302 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.302 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.303 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.303 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.303 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.303 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.303 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.303 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.304 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.304 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.304 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.304 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.304 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.304 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.305 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.305 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.305 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.305 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.305 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.306 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.306 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.306 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.306 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.306 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.306 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.307 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.307 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.307 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.307 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.307 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.307 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.308 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.308 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.308 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.308 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.308 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.309 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.309 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.309 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.309 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.309 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.309 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.310 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.310 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.310 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.310 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.310 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.311 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.311 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.311 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.311 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.311 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.312 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.312 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.312 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.312 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.312 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.312 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.313 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.313 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.313 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.313 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.313 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.314 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.314 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.314 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.314 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.314 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.315 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.315 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.315 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.315 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.315 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.315 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.316 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.316 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.316 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.316 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.316 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.316 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.317 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.317 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.317 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.317 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.318 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.318 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.318 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.318 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.319 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.319 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.319 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.319 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.319 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.320 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.320 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.320 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.320 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.320 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.320 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.321 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.321 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.321 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.321 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.322 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.322 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.322 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.322 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.322 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.322 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.323 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.323 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.323 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.323 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.323 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.324 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.324 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.324 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.324 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.324 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.324 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.325 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.325 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.326 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.327 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.328 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.328 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.328 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.328 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.328 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.329 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.329 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.329 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.329 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.329 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.329 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.330 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.330 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.330 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.330 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.330 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.330 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.331 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.331 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.331 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.331 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.331 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.332 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.332 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.332 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.332 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.332 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.332 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.333 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.333 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vnc.novncproxy_base_url        = http://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.333 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.333 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.334 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.334 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vnc.server_proxyclient_address = 192.168.122.107 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.334 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.334 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.334 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.335 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.335 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.335 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.335 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.335 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.336 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain python3.9[278395]: ansible-stat Invoked with path=/etc/systemd/system/edpm_nova_compute_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.336 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.336 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.336 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.336 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.336 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.337 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.337 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.337 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.337 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.337 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.338 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.338 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.338 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.338 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.338 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.338 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.339 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.339 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.339 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.339 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.339 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.339 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.340 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.340 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.340 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.340 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.340 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.341 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.341 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.341 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.341 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.341 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.342 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.342 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.342 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.342 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.342 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.343 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.343 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.343 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.343 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.343 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.343 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.344 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.344 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.344 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.344 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.344 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.345 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.345 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.345 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.345 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.345 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.345 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.346 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.346 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.346 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.346 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.346 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.347 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain sudo[278393]: pam_unix(sudo:session): session closed for user root
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.347 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.347 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.347 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.347 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.347 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.348 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.348 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.348 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.348 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.348 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.349 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.349 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.349 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.349 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.349 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.349 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.350 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.350 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.350 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.350 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.350 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.351 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_limit.auth_url            = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.351 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.351 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.351 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.351 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.351 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.352 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.352 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.352 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.352 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.352 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.353 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.353 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.353 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.353 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.353 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.353 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.354 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.354 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.354 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.354 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.354 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.355 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.355 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.355 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.355 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.355 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.355 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.356 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.356 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.356 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.356 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.356 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.357 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.357 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.357 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.357 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.357 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.357 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.358 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.358 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.358 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.358 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.358 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.359 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.359 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.359 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.359 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.359 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.359 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.360 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.360 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.360 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.360 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.360 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.361 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.361 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.361 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.361 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.361 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.362 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.362 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.362 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.362 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.362 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.362 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.363 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.363 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.363 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.363 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.363 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.364 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.364 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.364 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.364 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.364 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.364 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.365 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.365 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.365 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.365 278063 DEBUG oslo_service.service [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.366 278063 INFO nova.service [-] Starting compute node (version 27.5.2-0.20260127144738.eaa65f0.el9)
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.387 278063 INFO nova.virt.node [None req-3606f123-3f54-47ed-806d-277b81ccc136 - - - - - -] Determined node identity e5d5157a-2df2-4f51-b5fb-cd2da3a8584e from /var/lib/nova/compute_id
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.387 278063 DEBUG nova.virt.libvirt.host [None req-3606f123-3f54-47ed-806d-277b81ccc136 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.388 278063 DEBUG nova.virt.libvirt.host [None req-3606f123-3f54-47ed-806d-277b81ccc136 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.388 278063 DEBUG nova.virt.libvirt.host [None req-3606f123-3f54-47ed-806d-277b81ccc136 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.388 278063 DEBUG nova.virt.libvirt.host [None req-3606f123-3f54-47ed-806d-277b81ccc136 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.401 278063 DEBUG nova.virt.libvirt.host [None req-3606f123-3f54-47ed-806d-277b81ccc136 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fc9e71cb070> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.404 278063 DEBUG nova.virt.libvirt.host [None req-3606f123-3f54-47ed-806d-277b81ccc136 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fc9e71cb070> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.405 278063 INFO nova.virt.libvirt.driver [None req-3606f123-3f54-47ed-806d-277b81ccc136 - - - - - -] Connection event '1' reason 'None'
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.412 278063 INFO nova.virt.libvirt.host [None req-3606f123-3f54-47ed-806d-277b81ccc136 - - - - - -] Libvirt host capabilities <capabilities>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   <host>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <uuid>a53ba227-4db8-45ed-bb70-5a295cbaca1c</uuid>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <cpu>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <arch>x86_64</arch>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model>EPYC-Rome-v4</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <vendor>AMD</vendor>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <microcode version='16777317'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <signature family='23' model='49' stepping='0'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <maxphysaddr mode='emulate' bits='40'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature name='x2apic'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature name='tsc-deadline'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature name='osxsave'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature name='hypervisor'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature name='tsc_adjust'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature name='spec-ctrl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature name='stibp'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature name='arch-capabilities'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature name='ssbd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature name='cmp_legacy'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature name='topoext'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature name='virt-ssbd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature name='lbrv'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature name='tsc-scale'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature name='vmcb-clean'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature name='pause-filter'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature name='pfthreshold'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature name='svme-addr-chk'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature name='rdctl-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature name='skip-l1dfl-vmentry'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature name='mds-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature name='pschange-mc-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <pages unit='KiB' size='4'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <pages unit='KiB' size='2048'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <pages unit='KiB' size='1048576'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </cpu>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <power_management>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <suspend_mem/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <suspend_disk/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <suspend_hybrid/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </power_management>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <iommu support='no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <migration_features>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <live/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <uri_transports>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <uri_transport>tcp</uri_transport>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <uri_transport>rdma</uri_transport>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </uri_transports>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </migration_features>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <topology>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <cells num='1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <cell id='0'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:           <memory unit='KiB'>16116612</memory>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:           <pages unit='KiB' size='4'>4029153</pages>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:           <pages unit='KiB' size='2048'>0</pages>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:           <pages unit='KiB' size='1048576'>0</pages>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:           <distances>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:             <sibling id='0' value='10'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:           </distances>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:           <cpus num='8'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:           </cpus>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         </cell>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </cells>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </topology>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <cache>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </cache>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <secmodel>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model>selinux</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <doi>0</doi>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </secmodel>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <secmodel>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model>dac</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <doi>0</doi>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <baselabel type='kvm'>+107:+107</baselabel>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <baselabel type='qemu'>+107:+107</baselabel>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </secmodel>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   </host>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   <guest>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <os_type>hvm</os_type>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <arch name='i686'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <wordsize>32</wordsize>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <domain type='qemu'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <domain type='kvm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </arch>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <features>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <pae/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <nonpae/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <acpi default='on' toggle='yes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <apic default='on' toggle='no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <cpuselection/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <deviceboot/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <disksnapshot default='on' toggle='no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <externalSnapshot/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </features>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   </guest>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   <guest>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <os_type>hvm</os_type>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <arch name='x86_64'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <wordsize>64</wordsize>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <domain type='qemu'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <domain type='kvm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </arch>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <features>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <acpi default='on' toggle='yes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <apic default='on' toggle='no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <cpuselection/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <deviceboot/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <disksnapshot default='on' toggle='no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <externalSnapshot/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </features>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   </guest>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: </capabilities>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.415 278063 DEBUG nova.virt.libvirt.volume.mount [None req-3606f123-3f54-47ed-806d-277b81ccc136 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.419 278063 DEBUG nova.virt.libvirt.host [None req-3606f123-3f54-47ed-806d-277b81ccc136 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.426 278063 DEBUG nova.virt.libvirt.host [None req-3606f123-3f54-47ed-806d-277b81ccc136 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: <domainCapabilities>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   <path>/usr/libexec/qemu-kvm</path>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   <domain>kvm</domain>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   <machine>pc-q35-rhel9.8.0</machine>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   <arch>i686</arch>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   <vcpu max='1024'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   <iothreads supported='yes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   <os supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <enum name='firmware'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <loader supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='type'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>rom</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>pflash</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='readonly'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>yes</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>no</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='secure'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>no</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </loader>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   </os>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   <cpu>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <mode name='host-passthrough' supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='hostPassthroughMigratable'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>on</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>off</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </mode>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <mode name='maximum' supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='maximumMigratable'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>on</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>off</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </mode>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <mode name='host-model' supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model fallback='forbid'>EPYC-Rome</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <vendor>AMD</vendor>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <maxphysaddr mode='passthrough' limit='40'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='x2apic'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='tsc-deadline'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='hypervisor'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='tsc_adjust'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='spec-ctrl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='stibp'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='ssbd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='cmp_legacy'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='overflow-recov'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='succor'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='ibrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='amd-ssbd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='virt-ssbd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='lbrv'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='tsc-scale'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='vmcb-clean'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='pause-filter'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='pfthreshold'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='svme-addr-chk'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='lfence-always-serializing'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='disable' name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </mode>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <mode name='custom' supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Broadwell'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Broadwell-IBRS'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Broadwell-noTSX'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Broadwell-noTSX-IBRS'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Broadwell-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Broadwell-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Broadwell-v3'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Broadwell-v4'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Cascadelake-Server'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Cascadelake-Server-noTSX'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Cascadelake-Server-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Cascadelake-Server-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Cascadelake-Server-v3'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Cascadelake-Server-v4'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Cascadelake-Server-v5'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='ClearwaterForest'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-ne-convert'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni-int16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni-int8'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bhi-ctrl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bhi-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cldemote'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cmpccxadd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ddpd-u'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fbsdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gds-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='intel-psfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ipred-ctrl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='lam'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='mcdt-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdir64b'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdiri'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pbrsb-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='prefetchiti'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='psdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rfds-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rrsba-ctrl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='serialize'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sha512'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sm3'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sm4'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='ClearwaterForest-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-ne-convert'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni-int16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni-int8'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bhi-ctrl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bhi-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cldemote'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cmpccxadd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ddpd-u'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fbsdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gds-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='intel-psfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ipred-ctrl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='lam'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='mcdt-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdir64b'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdiri'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pbrsb-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='prefetchiti'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='psdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rfds-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rrsba-ctrl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='serialize'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sha512'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sm3'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sm4'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Cooperlake'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Cooperlake-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Cooperlake-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Denverton'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='mpx'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Denverton-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='mpx'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Denverton-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Denverton-v3'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Dhyana-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-Genoa'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amd-psfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='auto-ibrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='no-nested-data-bp'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='null-sel-clr-base'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='stibp-always-on'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-Genoa-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amd-psfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='auto-ibrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='no-nested-data-bp'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='null-sel-clr-base'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='stibp-always-on'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-Genoa-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amd-psfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='auto-ibrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fs-gs-base-ns'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='no-nested-data-bp'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='null-sel-clr-base'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='perfmon-v2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='stibp-always-on'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-Milan'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-Milan-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-Milan-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amd-psfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='no-nested-data-bp'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='null-sel-clr-base'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='stibp-always-on'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-Milan-v3'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amd-psfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='no-nested-data-bp'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='null-sel-clr-base'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='stibp-always-on'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-Rome'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-Rome-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-Rome-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-Rome-v3'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-Turin'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amd-psfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='auto-ibrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vp2intersect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fs-gs-base-ns'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibpb-brtype'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdir64b'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdiri'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='no-nested-data-bp'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='null-sel-clr-base'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='perfmon-v2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='prefetchi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sbpb'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='srso-user-kernel-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='stibp-always-on'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-Turin-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amd-psfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='auto-ibrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vp2intersect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fs-gs-base-ns'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibpb-brtype'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdir64b'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdiri'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='no-nested-data-bp'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='null-sel-clr-base'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='perfmon-v2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='prefetchi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sbpb'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='srso-user-kernel-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='stibp-always-on'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-v3'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-v4'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-v5'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='GraniteRapids'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-fp16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-int8'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-tile'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-fp16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fbsdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrc'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fzrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='mcdt-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pbrsb-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='prefetchiti'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='psdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='serialize'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='GraniteRapids-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-fp16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-int8'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-tile'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-fp16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fbsdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrc'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fzrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='mcdt-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pbrsb-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='prefetchiti'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='psdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='serialize'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='GraniteRapids-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-fp16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-int8'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-tile'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx10'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx10-128'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx10-256'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx10-512'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-fp16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cldemote'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fbsdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrc'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fzrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='mcdt-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdir64b'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdiri'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pbrsb-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='prefetchiti'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='psdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='serialize'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='GraniteRapids-v3'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-fp16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-int8'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-tile'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx10'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx10-128'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx10-256'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx10-512'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-fp16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cldemote'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fbsdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrc'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fzrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='mcdt-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdir64b'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdiri'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pbrsb-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='prefetchiti'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='psdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='serialize'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Haswell'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Haswell-IBRS'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Haswell-noTSX'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Haswell-noTSX-IBRS'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Haswell-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Haswell-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Haswell-v3'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Haswell-v4'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Icelake-Server'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Icelake-Server-noTSX'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Icelake-Server-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Icelake-Server-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Icelake-Server-v3'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Icelake-Server-v4'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Icelake-Server-v5'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Icelake-Server-v6'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Icelake-Server-v7'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='IvyBridge'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='IvyBridge-IBRS'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='IvyBridge-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='IvyBridge-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='KnightsMill'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-4fmaps'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-4vnniw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512er'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512pf'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='KnightsMill-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-4fmaps'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-4vnniw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512er'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512pf'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Opteron_G4'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fma4'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xop'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Opteron_G4-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fma4'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xop'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Opteron_G5'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fma4'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='tbm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xop'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Opteron_G5-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fma4'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='tbm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xop'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='SapphireRapids'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-int8'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-tile'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-fp16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrc'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fzrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='serialize'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='SapphireRapids-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-int8'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-tile'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-fp16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrc'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fzrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='serialize'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='SapphireRapids-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-int8'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-tile'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-fp16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fbsdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrc'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fzrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='psdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='serialize'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='SapphireRapids-v3'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-int8'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-tile'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-fp16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cldemote'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fbsdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrc'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fzrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdir64b'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdiri'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='psdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='serialize'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='SapphireRapids-v4'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-int8'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-tile'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-fp16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cldemote'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fbsdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrc'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fzrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdir64b'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdiri'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='psdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='serialize'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='SierraForest'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-ne-convert'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni-int8'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cmpccxadd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fbsdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='mcdt-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pbrsb-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='psdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='serialize'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='SierraForest-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-ne-convert'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni-int8'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cmpccxadd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fbsdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='mcdt-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pbrsb-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='psdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='serialize'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='SierraForest-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-ne-convert'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni-int8'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bhi-ctrl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cldemote'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cmpccxadd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fbsdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gds-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='intel-psfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ipred-ctrl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='lam'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='mcdt-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdir64b'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdiri'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pbrsb-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='psdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rfds-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rrsba-ctrl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='serialize'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='SierraForest-v3'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-ne-convert'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni-int8'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bhi-ctrl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cldemote'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cmpccxadd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fbsdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gds-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='intel-psfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ipred-ctrl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='lam'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='mcdt-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdir64b'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdiri'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pbrsb-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='psdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rfds-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rrsba-ctrl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='serialize'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Skylake-Client'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Skylake-Client-IBRS'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Skylake-Client-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Skylake-Client-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Skylake-Client-v3'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Skylake-Client-v4'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Skylake-Server'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Skylake-Server-IBRS'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Skylake-Server-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Skylake-Server-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Skylake-Server-v3'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Skylake-Server-v4'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Skylake-Server-v5'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Snowridge'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cldemote'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='core-capability'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdir64b'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdiri'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='mpx'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='split-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Snowridge-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cldemote'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='core-capability'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdir64b'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdiri'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='mpx'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='split-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Snowridge-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cldemote'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='core-capability'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdir64b'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdiri'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='split-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Snowridge-v3'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cldemote'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='core-capability'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdir64b'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdiri'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='split-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Snowridge-v4'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cldemote'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdir64b'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdiri'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='athlon'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='3dnow'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='3dnowext'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='athlon-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='3dnow'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='3dnowext'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='core2duo'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='core2duo-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='coreduo'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='coreduo-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='n270'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='n270-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='phenom'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='3dnow'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='3dnowext'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='phenom-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='3dnow'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='3dnowext'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </mode>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   </cpu>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   <memoryBacking supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <enum name='sourceType'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <value>file</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <value>anonymous</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <value>memfd</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   </memoryBacking>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   <devices>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <disk supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='diskDevice'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>disk</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>cdrom</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>floppy</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>lun</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='bus'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>fdc</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>scsi</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>virtio</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>usb</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>sata</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='model'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>virtio</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>virtio-transitional</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>virtio-non-transitional</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </disk>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <graphics supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='type'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>vnc</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>egl-headless</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>dbus</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </graphics>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <video supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='modelType'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>vga</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>cirrus</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>virtio</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>none</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>bochs</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>ramfb</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </video>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <hostdev supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='mode'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>subsystem</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='startupPolicy'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>default</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>mandatory</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>requisite</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>optional</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='subsysType'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>usb</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>pci</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>scsi</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='capsType'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='pciBackend'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </hostdev>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <rng supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='model'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>virtio</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>virtio-transitional</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>virtio-non-transitional</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='backendModel'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>random</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>egd</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>builtin</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </rng>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <filesystem supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='driverType'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>path</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>handle</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>virtiofs</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </filesystem>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <tpm supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='model'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>tpm-tis</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>tpm-crb</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='backendModel'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>emulator</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>external</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='backendVersion'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>2.0</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </tpm>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <redirdev supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='bus'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>usb</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </redirdev>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <channel supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='type'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>pty</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>unix</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </channel>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <crypto supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='model'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='type'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>qemu</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='backendModel'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>builtin</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </crypto>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <interface supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='backendType'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>default</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>passt</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </interface>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <panic supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='model'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>isa</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>hyperv</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </panic>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <console supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='type'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>null</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>vc</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>pty</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>dev</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>file</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>pipe</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>stdio</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>udp</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>tcp</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>unix</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>qemu-vdagent</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>dbus</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </console>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   </devices>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   <features>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <gic supported='no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <vmcoreinfo supported='yes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <genid supported='yes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <backingStoreInput supported='yes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <backup supported='yes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <async-teardown supported='yes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <s390-pv supported='no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <ps2 supported='yes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <tdx supported='no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <sev supported='no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <sgx supported='no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <hyperv supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='features'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>relaxed</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>vapic</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>spinlocks</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>vpindex</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>runtime</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>synic</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>stimer</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>reset</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>vendor_id</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>frequencies</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>reenlightenment</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>tlbflush</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>ipi</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>avic</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>emsr_bitmap</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>xmm_input</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <defaults>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <spinlocks>4095</spinlocks>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <stimer_direct>on</stimer_direct>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <tlbflush_direct>off</tlbflush_direct>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <tlbflush_extended>off</tlbflush_extended>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <vendor_id>Linux KVM Hv</vendor_id>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </defaults>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </hyperv>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <launchSecurity supported='no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   </features>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: </domainCapabilities>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.437 278063 DEBUG nova.virt.libvirt.host [None req-3606f123-3f54-47ed-806d-277b81ccc136 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: <domainCapabilities>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   <path>/usr/libexec/qemu-kvm</path>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   <domain>kvm</domain>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   <machine>pc-i440fx-rhel7.6.0</machine>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   <arch>i686</arch>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   <vcpu max='240'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   <iothreads supported='yes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   <os supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <enum name='firmware'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <loader supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='type'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>rom</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>pflash</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='readonly'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>yes</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>no</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='secure'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>no</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </loader>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   </os>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   <cpu>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <mode name='host-passthrough' supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='hostPassthroughMigratable'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>on</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>off</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </mode>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <mode name='maximum' supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='maximumMigratable'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>on</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>off</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </mode>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <mode name='host-model' supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model fallback='forbid'>EPYC-Rome</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <vendor>AMD</vendor>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <maxphysaddr mode='passthrough' limit='40'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='x2apic'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='tsc-deadline'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='hypervisor'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='tsc_adjust'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='spec-ctrl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='stibp'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='ssbd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='cmp_legacy'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='overflow-recov'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='succor'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='ibrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='amd-ssbd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='virt-ssbd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='lbrv'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='tsc-scale'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='vmcb-clean'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='pause-filter'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='pfthreshold'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='svme-addr-chk'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='lfence-always-serializing'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='disable' name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </mode>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <mode name='custom' supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Broadwell'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Broadwell-IBRS'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Broadwell-noTSX'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Broadwell-noTSX-IBRS'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Broadwell-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Broadwell-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Broadwell-v3'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Broadwell-v4'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Cascadelake-Server'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Cascadelake-Server-noTSX'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Cascadelake-Server-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Cascadelake-Server-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Cascadelake-Server-v3'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Cascadelake-Server-v4'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Cascadelake-Server-v5'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='ClearwaterForest'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-ne-convert'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni-int16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni-int8'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bhi-ctrl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bhi-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cldemote'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cmpccxadd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ddpd-u'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fbsdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gds-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='intel-psfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ipred-ctrl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='lam'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='mcdt-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdir64b'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdiri'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pbrsb-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='prefetchiti'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='psdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rfds-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rrsba-ctrl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='serialize'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sha512'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sm3'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sm4'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='ClearwaterForest-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-ne-convert'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni-int16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni-int8'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bhi-ctrl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bhi-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cldemote'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cmpccxadd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ddpd-u'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fbsdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gds-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='intel-psfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ipred-ctrl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='lam'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='mcdt-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdir64b'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdiri'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pbrsb-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='prefetchiti'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='psdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rfds-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rrsba-ctrl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='serialize'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sha512'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sm3'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sm4'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Cooperlake'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Cooperlake-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Cooperlake-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Denverton'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='mpx'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Denverton-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='mpx'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Denverton-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Denverton-v3'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Dhyana-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-Genoa'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amd-psfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='auto-ibrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='no-nested-data-bp'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='null-sel-clr-base'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='stibp-always-on'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-Genoa-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amd-psfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='auto-ibrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='no-nested-data-bp'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='null-sel-clr-base'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='stibp-always-on'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-Genoa-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amd-psfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='auto-ibrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fs-gs-base-ns'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='no-nested-data-bp'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='null-sel-clr-base'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='perfmon-v2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='stibp-always-on'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-Milan'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-Milan-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-Milan-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amd-psfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='no-nested-data-bp'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='null-sel-clr-base'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='stibp-always-on'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-Milan-v3'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amd-psfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='no-nested-data-bp'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='null-sel-clr-base'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='stibp-always-on'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-Rome'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-Rome-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-Rome-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-Rome-v3'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-Turin'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amd-psfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='auto-ibrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vp2intersect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fs-gs-base-ns'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibpb-brtype'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdir64b'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdiri'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='no-nested-data-bp'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='null-sel-clr-base'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='perfmon-v2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='prefetchi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sbpb'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='srso-user-kernel-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='stibp-always-on'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-Turin-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amd-psfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='auto-ibrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vp2intersect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fs-gs-base-ns'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibpb-brtype'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdir64b'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdiri'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='no-nested-data-bp'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='null-sel-clr-base'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='perfmon-v2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='prefetchi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sbpb'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='srso-user-kernel-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='stibp-always-on'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-v3'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-v4'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-v5'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='GraniteRapids'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-fp16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-int8'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-tile'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-fp16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fbsdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrc'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fzrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='mcdt-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pbrsb-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='prefetchiti'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='psdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='serialize'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='GraniteRapids-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-fp16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-int8'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-tile'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-fp16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fbsdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrc'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fzrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='mcdt-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pbrsb-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='prefetchiti'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='psdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='serialize'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='GraniteRapids-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-fp16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-int8'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-tile'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx10'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx10-128'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx10-256'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx10-512'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-fp16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cldemote'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fbsdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrc'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fzrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='mcdt-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdir64b'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdiri'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pbrsb-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='prefetchiti'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='psdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='serialize'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='GraniteRapids-v3'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-fp16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-int8'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-tile'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx10'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx10-128'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx10-256'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx10-512'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-fp16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cldemote'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fbsdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrc'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fzrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='mcdt-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdir64b'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdiri'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pbrsb-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='prefetchiti'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='psdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='serialize'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Haswell'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Haswell-IBRS'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Haswell-noTSX'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Haswell-noTSX-IBRS'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Haswell-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Haswell-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Haswell-v3'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Haswell-v4'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Icelake-Server'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Icelake-Server-noTSX'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Icelake-Server-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Icelake-Server-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Icelake-Server-v3'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Icelake-Server-v4'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Icelake-Server-v5'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Icelake-Server-v6'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Icelake-Server-v7'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='IvyBridge'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='IvyBridge-IBRS'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='IvyBridge-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='IvyBridge-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='KnightsMill'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-4fmaps'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-4vnniw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512er'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512pf'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='KnightsMill-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-4fmaps'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-4vnniw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512er'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512pf'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Opteron_G4'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fma4'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xop'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Opteron_G4-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fma4'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xop'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Opteron_G5'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fma4'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='tbm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xop'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Opteron_G5-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fma4'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='tbm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xop'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='SapphireRapids'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-int8'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-tile'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-fp16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrc'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fzrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='serialize'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='SapphireRapids-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-int8'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-tile'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-fp16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrc'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fzrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='serialize'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='SapphireRapids-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-int8'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-tile'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-fp16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fbsdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrc'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fzrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='psdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='serialize'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='SapphireRapids-v3'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-int8'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-tile'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-fp16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cldemote'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fbsdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrc'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fzrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdir64b'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdiri'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='psdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='serialize'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='SapphireRapids-v4'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-int8'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-tile'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-fp16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cldemote'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fbsdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrc'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fzrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdir64b'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdiri'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='psdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='serialize'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='SierraForest'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-ne-convert'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni-int8'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cmpccxadd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fbsdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='mcdt-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pbrsb-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='psdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='serialize'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='SierraForest-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-ne-convert'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni-int8'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cmpccxadd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fbsdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='mcdt-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pbrsb-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='psdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='serialize'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='SierraForest-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-ne-convert'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni-int8'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bhi-ctrl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cldemote'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cmpccxadd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fbsdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gds-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='intel-psfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ipred-ctrl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='lam'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='mcdt-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdir64b'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdiri'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pbrsb-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='psdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rfds-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rrsba-ctrl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='serialize'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='SierraForest-v3'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-ne-convert'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni-int8'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bhi-ctrl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cldemote'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cmpccxadd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fbsdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gds-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='intel-psfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ipred-ctrl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='lam'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='mcdt-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdir64b'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdiri'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pbrsb-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='psdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rfds-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rrsba-ctrl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='serialize'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Skylake-Client'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Skylake-Client-IBRS'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Skylake-Client-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Skylake-Client-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Skylake-Client-v3'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Skylake-Client-v4'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Skylake-Server'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Skylake-Server-IBRS'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Skylake-Server-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Skylake-Server-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Skylake-Server-v3'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Skylake-Server-v4'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Skylake-Server-v5'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Snowridge'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cldemote'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='core-capability'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdir64b'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdiri'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='mpx'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='split-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Snowridge-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cldemote'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='core-capability'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdir64b'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdiri'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='mpx'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='split-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Snowridge-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cldemote'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='core-capability'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdir64b'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdiri'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='split-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Snowridge-v3'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cldemote'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='core-capability'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdir64b'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdiri'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='split-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Snowridge-v4'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cldemote'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdir64b'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdiri'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='athlon'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='3dnow'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='3dnowext'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='athlon-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='3dnow'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='3dnowext'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='core2duo'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='core2duo-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='coreduo'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='coreduo-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='n270'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='n270-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='phenom'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='3dnow'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='3dnowext'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='phenom-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='3dnow'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='3dnowext'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </mode>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   </cpu>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   <memoryBacking supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <enum name='sourceType'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <value>file</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <value>anonymous</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <value>memfd</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   </memoryBacking>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   <devices>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <disk supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='diskDevice'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>disk</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>cdrom</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>floppy</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>lun</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='bus'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>ide</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>fdc</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>scsi</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>virtio</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>usb</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>sata</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='model'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>virtio</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>virtio-transitional</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>virtio-non-transitional</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </disk>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <graphics supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='type'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>vnc</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>egl-headless</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>dbus</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </graphics>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <video supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='modelType'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>vga</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>cirrus</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>virtio</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>none</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>bochs</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>ramfb</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </video>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <hostdev supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='mode'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>subsystem</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='startupPolicy'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>default</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>mandatory</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>requisite</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>optional</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='subsysType'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>usb</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>pci</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>scsi</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='capsType'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='pciBackend'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </hostdev>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <rng supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='model'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>virtio</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>virtio-transitional</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>virtio-non-transitional</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='backendModel'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>random</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>egd</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>builtin</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </rng>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <filesystem supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='driverType'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>path</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>handle</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>virtiofs</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </filesystem>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <tpm supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='model'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>tpm-tis</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>tpm-crb</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='backendModel'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>emulator</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>external</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='backendVersion'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>2.0</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </tpm>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <redirdev supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='bus'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>usb</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </redirdev>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <channel supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='type'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>pty</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>unix</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </channel>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <crypto supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='model'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='type'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>qemu</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='backendModel'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>builtin</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </crypto>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <interface supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='backendType'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>default</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>passt</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </interface>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <panic supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='model'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>isa</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>hyperv</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </panic>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <console supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='type'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>null</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>vc</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>pty</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>dev</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>file</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>pipe</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>stdio</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>udp</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>tcp</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>unix</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>qemu-vdagent</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>dbus</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </console>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   </devices>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   <features>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <gic supported='no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <vmcoreinfo supported='yes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <genid supported='yes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <backingStoreInput supported='yes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <backup supported='yes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <async-teardown supported='yes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <s390-pv supported='no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <ps2 supported='yes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <tdx supported='no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <sev supported='no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <sgx supported='no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <hyperv supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='features'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>relaxed</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>vapic</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>spinlocks</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>vpindex</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>runtime</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>synic</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>stimer</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>reset</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>vendor_id</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>frequencies</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>reenlightenment</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>tlbflush</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>ipi</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>avic</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>emsr_bitmap</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>xmm_input</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <defaults>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <spinlocks>4095</spinlocks>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <stimer_direct>on</stimer_direct>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <tlbflush_direct>off</tlbflush_direct>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <tlbflush_extended>off</tlbflush_extended>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <vendor_id>Linux KVM Hv</vendor_id>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </defaults>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </hyperv>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <launchSecurity supported='no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   </features>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: </domainCapabilities>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.480 278063 DEBUG nova.virt.libvirt.host [None req-3606f123-3f54-47ed-806d-277b81ccc136 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.486 278063 DEBUG nova.virt.libvirt.host [None req-3606f123-3f54-47ed-806d-277b81ccc136 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: <domainCapabilities>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   <path>/usr/libexec/qemu-kvm</path>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   <domain>kvm</domain>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   <machine>pc-q35-rhel9.8.0</machine>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   <arch>x86_64</arch>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   <vcpu max='1024'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   <iothreads supported='yes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   <os supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <enum name='firmware'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <value>efi</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <loader supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='type'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>rom</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>pflash</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='readonly'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>yes</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>no</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='secure'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>yes</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>no</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </loader>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   </os>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   <cpu>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <mode name='host-passthrough' supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='hostPassthroughMigratable'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>on</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>off</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </mode>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <mode name='maximum' supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='maximumMigratable'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>on</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>off</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </mode>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <mode name='host-model' supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model fallback='forbid'>EPYC-Rome</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <vendor>AMD</vendor>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <maxphysaddr mode='passthrough' limit='40'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='x2apic'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='tsc-deadline'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='hypervisor'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='tsc_adjust'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='spec-ctrl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='stibp'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='ssbd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='cmp_legacy'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='overflow-recov'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='succor'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='ibrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='amd-ssbd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='virt-ssbd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='lbrv'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='tsc-scale'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='vmcb-clean'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='pause-filter'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='pfthreshold'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='svme-addr-chk'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='lfence-always-serializing'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='disable' name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </mode>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <mode name='custom' supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Broadwell'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Broadwell-IBRS'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Broadwell-noTSX'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Broadwell-noTSX-IBRS'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Broadwell-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Broadwell-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Broadwell-v3'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Broadwell-v4'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Cascadelake-Server'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Cascadelake-Server-noTSX'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Cascadelake-Server-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Cascadelake-Server-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Cascadelake-Server-v3'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Cascadelake-Server-v4'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Cascadelake-Server-v5'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='ClearwaterForest'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-ne-convert'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni-int16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni-int8'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bhi-ctrl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bhi-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cldemote'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cmpccxadd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ddpd-u'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fbsdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gds-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='intel-psfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ipred-ctrl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='lam'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='mcdt-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdir64b'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdiri'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pbrsb-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='prefetchiti'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='psdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rfds-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rrsba-ctrl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='serialize'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sha512'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sm3'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sm4'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='ClearwaterForest-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-ne-convert'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni-int16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni-int8'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bhi-ctrl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bhi-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cldemote'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cmpccxadd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ddpd-u'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fbsdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gds-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='intel-psfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ipred-ctrl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='lam'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='mcdt-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdir64b'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdiri'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pbrsb-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='prefetchiti'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='psdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rfds-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rrsba-ctrl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='serialize'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sha512'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sm3'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sm4'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Cooperlake'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Cooperlake-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Cooperlake-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Denverton'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='mpx'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Denverton-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='mpx'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Denverton-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Denverton-v3'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Dhyana-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-Genoa'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amd-psfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='auto-ibrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='no-nested-data-bp'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='null-sel-clr-base'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='stibp-always-on'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-Genoa-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amd-psfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='auto-ibrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='no-nested-data-bp'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='null-sel-clr-base'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='stibp-always-on'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-Genoa-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amd-psfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='auto-ibrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fs-gs-base-ns'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='no-nested-data-bp'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='null-sel-clr-base'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='perfmon-v2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='stibp-always-on'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-Milan'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-Milan-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-Milan-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amd-psfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='no-nested-data-bp'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='null-sel-clr-base'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='stibp-always-on'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-Milan-v3'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amd-psfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='no-nested-data-bp'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='null-sel-clr-base'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='stibp-always-on'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-Rome'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-Rome-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-Rome-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-Rome-v3'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-Turin'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amd-psfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='auto-ibrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vp2intersect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fs-gs-base-ns'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibpb-brtype'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdir64b'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdiri'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='no-nested-data-bp'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='null-sel-clr-base'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='perfmon-v2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='prefetchi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sbpb'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='srso-user-kernel-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='stibp-always-on'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-Turin-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amd-psfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='auto-ibrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vp2intersect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fs-gs-base-ns'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibpb-brtype'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdir64b'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdiri'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='no-nested-data-bp'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='null-sel-clr-base'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='perfmon-v2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='prefetchi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sbpb'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='srso-user-kernel-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='stibp-always-on'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-v3'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-v4'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-v5'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='GraniteRapids'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-fp16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-int8'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-tile'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-fp16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fbsdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrc'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fzrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='mcdt-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pbrsb-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='prefetchiti'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='psdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='serialize'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='GraniteRapids-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-fp16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-int8'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-tile'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-fp16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fbsdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrc'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fzrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='mcdt-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pbrsb-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='prefetchiti'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='psdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='serialize'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='GraniteRapids-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-fp16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-int8'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-tile'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx10'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx10-128'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx10-256'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx10-512'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-fp16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cldemote'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fbsdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrc'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fzrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='mcdt-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdir64b'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdiri'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pbrsb-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='prefetchiti'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='psdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='serialize'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='GraniteRapids-v3'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-fp16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-int8'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-tile'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx10'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx10-128'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx10-256'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx10-512'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-fp16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cldemote'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fbsdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrc'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fzrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='mcdt-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdir64b'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdiri'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pbrsb-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='prefetchiti'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='psdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='serialize'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Haswell'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Haswell-IBRS'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Haswell-noTSX'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Haswell-noTSX-IBRS'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Haswell-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Haswell-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Haswell-v3'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Haswell-v4'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Icelake-Server'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Icelake-Server-noTSX'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Icelake-Server-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Icelake-Server-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Icelake-Server-v3'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Icelake-Server-v4'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Icelake-Server-v5'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Icelake-Server-v6'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Icelake-Server-v7'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='IvyBridge'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='IvyBridge-IBRS'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='IvyBridge-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='IvyBridge-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='KnightsMill'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-4fmaps'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-4vnniw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512er'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512pf'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='KnightsMill-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-4fmaps'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-4vnniw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512er'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512pf'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Opteron_G4'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fma4'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xop'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Opteron_G4-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fma4'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xop'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Opteron_G5'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fma4'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='tbm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xop'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Opteron_G5-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fma4'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='tbm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xop'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='SapphireRapids'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-int8'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-tile'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-fp16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrc'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fzrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='serialize'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='SapphireRapids-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-int8'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-tile'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-fp16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrc'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fzrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='serialize'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='SapphireRapids-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-int8'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-tile'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-fp16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fbsdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrc'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fzrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='psdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='serialize'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='SapphireRapids-v3'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-int8'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-tile'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-fp16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cldemote'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fbsdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrc'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fzrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdir64b'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdiri'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='psdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='serialize'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='SapphireRapids-v4'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-int8'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-tile'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-fp16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cldemote'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fbsdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrc'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fzrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdir64b'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdiri'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='psdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='serialize'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='SierraForest'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-ne-convert'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni-int8'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cmpccxadd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fbsdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='mcdt-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pbrsb-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='psdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='serialize'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='SierraForest-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-ne-convert'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni-int8'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cmpccxadd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fbsdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='mcdt-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pbrsb-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='psdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='serialize'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='SierraForest-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-ne-convert'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni-int8'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bhi-ctrl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cldemote'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cmpccxadd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fbsdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gds-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='intel-psfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ipred-ctrl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='lam'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='mcdt-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdir64b'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdiri'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pbrsb-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='psdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rfds-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rrsba-ctrl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='serialize'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='SierraForest-v3'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-ne-convert'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni-int8'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bhi-ctrl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cldemote'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cmpccxadd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fbsdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gds-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='intel-psfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ipred-ctrl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='lam'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='mcdt-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdir64b'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdiri'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pbrsb-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='psdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rfds-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rrsba-ctrl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='serialize'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Skylake-Client'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Skylake-Client-IBRS'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Skylake-Client-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Skylake-Client-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Skylake-Client-v3'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Skylake-Client-v4'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Skylake-Server'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Skylake-Server-IBRS'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Skylake-Server-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Skylake-Server-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Skylake-Server-v3'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Skylake-Server-v4'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Skylake-Server-v5'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Snowridge'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cldemote'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='core-capability'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdir64b'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdiri'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='mpx'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='split-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Snowridge-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cldemote'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='core-capability'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdir64b'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdiri'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='mpx'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='split-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Snowridge-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cldemote'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='core-capability'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdir64b'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdiri'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='split-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Snowridge-v3'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cldemote'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='core-capability'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdir64b'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdiri'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='split-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Snowridge-v4'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cldemote'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdir64b'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdiri'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='athlon'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='3dnow'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='3dnowext'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='athlon-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='3dnow'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='3dnowext'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='core2duo'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='core2duo-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='coreduo'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='coreduo-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='n270'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='n270-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='phenom'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='3dnow'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='3dnowext'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='phenom-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='3dnow'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='3dnowext'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </mode>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   </cpu>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   <memoryBacking supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <enum name='sourceType'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <value>file</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <value>anonymous</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <value>memfd</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   </memoryBacking>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   <devices>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <disk supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='diskDevice'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>disk</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>cdrom</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>floppy</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>lun</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='bus'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>fdc</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>scsi</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>virtio</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>usb</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>sata</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='model'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>virtio</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>virtio-transitional</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>virtio-non-transitional</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </disk>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <graphics supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='type'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>vnc</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>egl-headless</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>dbus</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </graphics>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <video supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='modelType'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>vga</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>cirrus</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>virtio</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>none</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>bochs</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>ramfb</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </video>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <hostdev supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='mode'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>subsystem</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='startupPolicy'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>default</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>mandatory</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>requisite</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>optional</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='subsysType'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>usb</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>pci</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>scsi</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='capsType'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='pciBackend'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </hostdev>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <rng supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='model'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>virtio</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>virtio-transitional</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>virtio-non-transitional</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='backendModel'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>random</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>egd</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>builtin</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </rng>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <filesystem supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='driverType'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>path</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>handle</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>virtiofs</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </filesystem>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <tpm supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='model'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>tpm-tis</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>tpm-crb</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='backendModel'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>emulator</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>external</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='backendVersion'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>2.0</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </tpm>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <redirdev supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='bus'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>usb</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </redirdev>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <channel supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='type'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>pty</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>unix</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </channel>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <crypto supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='model'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='type'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>qemu</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='backendModel'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>builtin</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </crypto>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <interface supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='backendType'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>default</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>passt</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </interface>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <panic supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='model'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>isa</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>hyperv</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </panic>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <console supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='type'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>null</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>vc</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>pty</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>dev</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>file</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>pipe</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>stdio</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>udp</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>tcp</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>unix</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>qemu-vdagent</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>dbus</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </console>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   </devices>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   <features>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <gic supported='no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <vmcoreinfo supported='yes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <genid supported='yes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <backingStoreInput supported='yes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <backup supported='yes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <async-teardown supported='yes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <s390-pv supported='no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <ps2 supported='yes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <tdx supported='no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <sev supported='no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <sgx supported='no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <hyperv supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='features'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>relaxed</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>vapic</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>spinlocks</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>vpindex</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>runtime</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>synic</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>stimer</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>reset</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>vendor_id</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>frequencies</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>reenlightenment</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>tlbflush</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>ipi</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>avic</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>emsr_bitmap</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>xmm_input</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <defaults>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <spinlocks>4095</spinlocks>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <stimer_direct>on</stimer_direct>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <tlbflush_direct>off</tlbflush_direct>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <tlbflush_extended>off</tlbflush_extended>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <vendor_id>Linux KVM Hv</vendor_id>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </defaults>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </hyperv>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <launchSecurity supported='no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   </features>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: </domainCapabilities>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.553 278063 DEBUG nova.virt.libvirt.host [None req-3606f123-3f54-47ed-806d-277b81ccc136 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: <domainCapabilities>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   <path>/usr/libexec/qemu-kvm</path>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   <domain>kvm</domain>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   <machine>pc-i440fx-rhel7.6.0</machine>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   <arch>x86_64</arch>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   <vcpu max='240'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   <iothreads supported='yes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   <os supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <enum name='firmware'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <loader supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='type'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>rom</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>pflash</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='readonly'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>yes</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>no</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='secure'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>no</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </loader>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   </os>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   <cpu>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <mode name='host-passthrough' supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='hostPassthroughMigratable'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>on</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>off</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </mode>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <mode name='maximum' supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='maximumMigratable'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>on</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>off</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </mode>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <mode name='host-model' supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model fallback='forbid'>EPYC-Rome</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <vendor>AMD</vendor>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <maxphysaddr mode='passthrough' limit='40'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='x2apic'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='tsc-deadline'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='hypervisor'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='tsc_adjust'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='spec-ctrl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='stibp'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='ssbd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='cmp_legacy'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='overflow-recov'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='succor'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='ibrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='amd-ssbd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='virt-ssbd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='lbrv'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='tsc-scale'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='vmcb-clean'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='pause-filter'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='pfthreshold'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='svme-addr-chk'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='require' name='lfence-always-serializing'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <feature policy='disable' name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </mode>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <mode name='custom' supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Broadwell'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Broadwell-IBRS'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Broadwell-noTSX'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Broadwell-noTSX-IBRS'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Broadwell-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Broadwell-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Broadwell-v3'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Broadwell-v4'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Cascadelake-Server'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Cascadelake-Server-noTSX'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Cascadelake-Server-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Cascadelake-Server-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Cascadelake-Server-v3'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Cascadelake-Server-v4'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Cascadelake-Server-v5'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='ClearwaterForest'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-ne-convert'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni-int16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni-int8'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bhi-ctrl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bhi-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cldemote'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cmpccxadd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ddpd-u'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fbsdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gds-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='intel-psfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ipred-ctrl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='lam'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='mcdt-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdir64b'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdiri'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pbrsb-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='prefetchiti'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='psdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rfds-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rrsba-ctrl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='serialize'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sha512'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sm3'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sm4'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='ClearwaterForest-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-ne-convert'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni-int16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni-int8'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bhi-ctrl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bhi-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cldemote'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cmpccxadd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ddpd-u'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fbsdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gds-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='intel-psfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ipred-ctrl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='lam'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='mcdt-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdir64b'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdiri'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pbrsb-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='prefetchiti'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='psdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rfds-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rrsba-ctrl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='serialize'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sha512'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sm3'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sm4'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Cooperlake'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Cooperlake-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Cooperlake-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Denverton'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='mpx'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Denverton-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='mpx'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Denverton-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Denverton-v3'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Dhyana-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-Genoa'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amd-psfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='auto-ibrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='no-nested-data-bp'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='null-sel-clr-base'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='stibp-always-on'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-Genoa-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amd-psfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='auto-ibrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='no-nested-data-bp'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='null-sel-clr-base'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='stibp-always-on'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-Genoa-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amd-psfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='auto-ibrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fs-gs-base-ns'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='no-nested-data-bp'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='null-sel-clr-base'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='perfmon-v2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='stibp-always-on'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-Milan'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-Milan-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-Milan-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amd-psfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='no-nested-data-bp'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='null-sel-clr-base'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='stibp-always-on'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-Milan-v3'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amd-psfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='no-nested-data-bp'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='null-sel-clr-base'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='stibp-always-on'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-Rome'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-Rome-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-Rome-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-Rome-v3'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-Turin'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amd-psfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='auto-ibrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vp2intersect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fs-gs-base-ns'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibpb-brtype'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdir64b'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdiri'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='no-nested-data-bp'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='null-sel-clr-base'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='perfmon-v2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='prefetchi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sbpb'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='srso-user-kernel-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='stibp-always-on'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-Turin-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amd-psfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='auto-ibrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vp2intersect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fs-gs-base-ns'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibpb-brtype'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdir64b'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdiri'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='no-nested-data-bp'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='null-sel-clr-base'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='perfmon-v2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='prefetchi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sbpb'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='srso-user-kernel-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='stibp-always-on'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-v3'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-v4'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='EPYC-v5'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='GraniteRapids'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-fp16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-int8'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-tile'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-fp16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fbsdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrc'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fzrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='mcdt-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pbrsb-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='prefetchiti'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='psdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='serialize'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='GraniteRapids-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-fp16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-int8'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-tile'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-fp16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fbsdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrc'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fzrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='mcdt-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pbrsb-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='prefetchiti'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='psdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='serialize'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='GraniteRapids-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-fp16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-int8'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-tile'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx10'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx10-128'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx10-256'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx10-512'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-fp16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cldemote'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fbsdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrc'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fzrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='mcdt-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdir64b'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdiri'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pbrsb-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='prefetchiti'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='psdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='serialize'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='GraniteRapids-v3'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-fp16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-int8'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-tile'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx10'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx10-128'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx10-256'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx10-512'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-fp16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cldemote'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fbsdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrc'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fzrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='mcdt-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdir64b'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdiri'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pbrsb-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='prefetchiti'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='psdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='serialize'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Haswell'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Haswell-IBRS'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Haswell-noTSX'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Haswell-noTSX-IBRS'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Haswell-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Haswell-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Haswell-v3'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Haswell-v4'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Icelake-Server'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Icelake-Server-noTSX'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Icelake-Server-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Icelake-Server-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Icelake-Server-v3'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Icelake-Server-v4'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Icelake-Server-v5'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Icelake-Server-v6'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Icelake-Server-v7'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='IvyBridge'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='IvyBridge-IBRS'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='IvyBridge-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='IvyBridge-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='KnightsMill'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-4fmaps'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-4vnniw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512er'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512pf'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='KnightsMill-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-4fmaps'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-4vnniw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512er'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512pf'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Opteron_G4'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fma4'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xop'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Opteron_G4-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fma4'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xop'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Opteron_G5'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fma4'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='tbm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xop'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Opteron_G5-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fma4'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='tbm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xop'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='SapphireRapids'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-int8'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-tile'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-fp16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrc'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fzrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='serialize'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='SapphireRapids-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-int8'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-tile'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-fp16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrc'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fzrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='serialize'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='SapphireRapids-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-int8'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-tile'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-fp16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fbsdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrc'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fzrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='psdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='serialize'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='SapphireRapids-v3'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-int8'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-tile'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-fp16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cldemote'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fbsdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrc'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fzrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdir64b'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdiri'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='psdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='serialize'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='SapphireRapids-v4'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-int8'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='amx-tile'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-bf16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-fp16'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bitalg'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cldemote'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fbsdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrc'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fzrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='la57'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdir64b'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdiri'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='psdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='serialize'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='taa-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='SierraForest'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-ne-convert'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni-int8'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cmpccxadd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fbsdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='mcdt-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pbrsb-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='psdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='serialize'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='SierraForest-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-ne-convert'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni-int8'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cmpccxadd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fbsdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='mcdt-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pbrsb-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='psdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='serialize'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='SierraForest-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-ne-convert'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni-int8'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bhi-ctrl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cldemote'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cmpccxadd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fbsdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gds-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='intel-psfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ipred-ctrl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='lam'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='mcdt-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdir64b'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdiri'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pbrsb-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='psdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rfds-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rrsba-ctrl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='serialize'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='SierraForest-v3'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-ifma'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-ne-convert'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx-vnni-int8'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bhi-ctrl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cldemote'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cmpccxadd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fbsdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='fsrs'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gds-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ibrs-all'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='intel-psfd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ipred-ctrl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='lam'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='mcdt-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdir64b'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdiri'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pbrsb-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='psdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rfds-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rrsba-ctrl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='serialize'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vaes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Skylake-Client'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Skylake-Client-IBRS'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Skylake-Client-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Skylake-Client-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Skylake-Client-v3'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Skylake-Client-v4'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Skylake-Server'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Skylake-Server-IBRS'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Skylake-Server-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Skylake-Server-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='hle'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='rtm'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Skylake-Server-v3'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Skylake-Server-v4'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Skylake-Server-v5'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512bw'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512cd'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512dq'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512f'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='avx512vl'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='invpcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pcid'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='pku'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Snowridge'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cldemote'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='core-capability'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdir64b'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdiri'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='mpx'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='split-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Snowridge-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cldemote'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='core-capability'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdir64b'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdiri'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='mpx'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='split-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Snowridge-v2'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cldemote'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='core-capability'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdir64b'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdiri'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='split-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Snowridge-v3'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cldemote'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='core-capability'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdir64b'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdiri'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='split-lock-detect'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='Snowridge-v4'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='cldemote'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='erms'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='gfni'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdir64b'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='movdiri'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='xsaves'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='athlon'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='3dnow'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='3dnowext'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='athlon-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='3dnow'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='3dnowext'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='core2duo'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='core2duo-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='coreduo'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='coreduo-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='n270'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='n270-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='ss'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='phenom'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='3dnow'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='3dnowext'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <blockers model='phenom-v1'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='3dnow'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <feature name='3dnowext'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </blockers>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </mode>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   </cpu>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   <memoryBacking supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <enum name='sourceType'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <value>file</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <value>anonymous</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <value>memfd</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   </memoryBacking>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   <devices>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <disk supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='diskDevice'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>disk</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>cdrom</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>floppy</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>lun</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='bus'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>ide</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>fdc</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>scsi</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>virtio</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>usb</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>sata</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='model'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>virtio</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>virtio-transitional</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>virtio-non-transitional</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </disk>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <graphics supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='type'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>vnc</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>egl-headless</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>dbus</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </graphics>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <video supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='modelType'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>vga</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>cirrus</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>virtio</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>none</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>bochs</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>ramfb</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </video>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <hostdev supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='mode'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>subsystem</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='startupPolicy'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>default</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>mandatory</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>requisite</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>optional</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='subsysType'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>usb</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>pci</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>scsi</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='capsType'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='pciBackend'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </hostdev>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <rng supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='model'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>virtio</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>virtio-transitional</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>virtio-non-transitional</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='backendModel'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>random</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>egd</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>builtin</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </rng>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <filesystem supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='driverType'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>path</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>handle</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>virtiofs</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </filesystem>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <tpm supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='model'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>tpm-tis</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>tpm-crb</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='backendModel'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>emulator</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>external</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='backendVersion'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>2.0</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </tpm>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <redirdev supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='bus'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>usb</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </redirdev>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <channel supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='type'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>pty</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>unix</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </channel>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <crypto supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='model'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='type'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>qemu</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='backendModel'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>builtin</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </crypto>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <interface supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='backendType'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>default</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>passt</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </interface>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <panic supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='model'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>isa</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>hyperv</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </panic>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <console supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='type'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>null</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>vc</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>pty</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>dev</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>file</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>pipe</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>stdio</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>udp</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>tcp</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>unix</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>qemu-vdagent</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>dbus</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </console>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   </devices>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   <features>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <gic supported='no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <vmcoreinfo supported='yes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <genid supported='yes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <backingStoreInput supported='yes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <backup supported='yes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <async-teardown supported='yes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <s390-pv supported='no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <ps2 supported='yes'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <tdx supported='no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <sev supported='no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <sgx supported='no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <hyperv supported='yes'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <enum name='features'>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>relaxed</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>vapic</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>spinlocks</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>vpindex</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>runtime</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>synic</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>stimer</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>reset</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>vendor_id</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>frequencies</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>reenlightenment</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>tlbflush</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>ipi</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>avic</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>emsr_bitmap</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <value>xmm_input</value>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </enum>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       <defaults>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <spinlocks>4095</spinlocks>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <stimer_direct>on</stimer_direct>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <tlbflush_direct>off</tlbflush_direct>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <tlbflush_extended>off</tlbflush_extended>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:         <vendor_id>Linux KVM Hv</vendor_id>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:       </defaults>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     </hyperv>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:     <launchSecurity supported='no'/>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:   </features>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: </domainCapabilities>
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.615 278063 DEBUG nova.virt.libvirt.host [None req-3606f123-3f54-47ed-806d-277b81ccc136 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.615 278063 INFO nova.virt.libvirt.host [None req-3606f123-3f54-47ed-806d-277b81ccc136 - - - - - -] Secure Boot support detected
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.617 278063 INFO nova.virt.libvirt.driver [None req-3606f123-3f54-47ed-806d-277b81ccc136 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.618 278063 INFO nova.virt.libvirt.driver [None req-3606f123-3f54-47ed-806d-277b81ccc136 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.629 278063 DEBUG nova.virt.libvirt.driver [None req-3606f123-3f54-47ed-806d-277b81ccc136 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.644 278063 INFO nova.virt.node [None req-3606f123-3f54-47ed-806d-277b81ccc136 - - - - - -] Determined node identity e5d5157a-2df2-4f51-b5fb-cd2da3a8584e from /var/lib/nova/compute_id
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.661 278063 DEBUG nova.compute.manager [None req-3606f123-3f54-47ed-806d-277b81ccc136 - - - - - -] Verified node e5d5157a-2df2-4f51-b5fb-cd2da3a8584e matches my host np0005625203.localdomain _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.686 278063 INFO nova.compute.manager [None req-3606f123-3f54-47ed-806d-277b81ccc136 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.768 278063 DEBUG oslo_concurrency.lockutils [None req-3606f123-3f54-47ed-806d-277b81ccc136 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.768 278063 DEBUG oslo_concurrency.lockutils [None req-3606f123-3f54-47ed-806d-277b81ccc136 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.769 278063 DEBUG oslo_concurrency.lockutils [None req-3606f123-3f54-47ed-806d-277b81ccc136 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.769 278063 DEBUG nova.compute.resource_tracker [None req-3606f123-3f54-47ed-806d-277b81ccc136 - - - - - -] Auditing locally available compute resources for np0005625203.localdomain (node: np0005625203.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 20 09:35:24 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:24.770 278063 DEBUG oslo_concurrency.processutils [None req-3606f123-3f54-47ed-806d-277b81ccc136 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:35:24 np0005625203.localdomain sudo[278523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-mgmhuteyienunredhwrjvxmhcqylwkvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580124.3881946-3313-255853799747638/AnsiballZ_copy.py
Feb 20 09:35:24 np0005625203.localdomain sudo[278523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:35:24 np0005625203.localdomain python3.9[278526]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1771580124.3881946-3313-255853799747638/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:35:24 np0005625203.localdomain sudo[278523]: pam_unix(sudo:session): session closed for user root
Feb 20 09:35:25 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:25.248 278063 DEBUG oslo_concurrency.processutils [None req-3606f123-3f54-47ed-806d-277b81ccc136 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:35:25 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:25.496 278063 WARNING nova.virt.libvirt.driver [None req-3606f123-3f54-47ed-806d-277b81ccc136 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 20 09:35:25 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:25.498 278063 DEBUG nova.compute.resource_tracker [None req-3606f123-3f54-47ed-806d-277b81ccc136 - - - - - -] Hypervisor/Node resource view: name=np0005625203.localdomain free_ram=12938MB free_disk=41.83720779418945GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 20 09:35:25 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:25.498 278063 DEBUG oslo_concurrency.lockutils [None req-3606f123-3f54-47ed-806d-277b81ccc136 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:35:25 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:25.499 278063 DEBUG oslo_concurrency.lockutils [None req-3606f123-3f54-47ed-806d-277b81ccc136 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:35:25 np0005625203.localdomain sudo[278600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jcehzesvqhwbtsebaopmhkrdhbrengmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580124.3881946-3313-255853799747638/AnsiballZ_systemd.py
Feb 20 09:35:25 np0005625203.localdomain sudo[278600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:35:25 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:25.658 278063 DEBUG nova.compute.resource_tracker [None req-3606f123-3f54-47ed-806d-277b81ccc136 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 20 09:35:25 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:25.659 278063 DEBUG nova.compute.resource_tracker [None req-3606f123-3f54-47ed-806d-277b81ccc136 - - - - - -] Final resource view: name=np0005625203.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 20 09:35:25 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:25.719 278063 DEBUG nova.scheduler.client.report [None req-3606f123-3f54-47ed-806d-277b81ccc136 - - - - - -] Refreshing inventories for resource provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 20 09:35:25 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:25.740 278063 DEBUG nova.scheduler.client.report [None req-3606f123-3f54-47ed-806d-277b81ccc136 - - - - - -] Updating ProviderTree inventory for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 20 09:35:25 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:25.741 278063 DEBUG nova.compute.provider_tree [None req-3606f123-3f54-47ed-806d-277b81ccc136 - - - - - -] Updating inventory in ProviderTree for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 20 09:35:25 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:25.766 278063 DEBUG nova.scheduler.client.report [None req-3606f123-3f54-47ed-806d-277b81ccc136 - - - - - -] Refreshing aggregate associations for resource provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 20 09:35:25 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:25.805 278063 DEBUG nova.scheduler.client.report [None req-3606f123-3f54-47ed-806d-277b81ccc136 - - - - - -] Refreshing trait associations for resource provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e, traits: COMPUTE_NODE,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_ABM,HW_CPU_X86_FMA3,HW_CPU_X86_BMI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE42,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SVM,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_MMX,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_AMD_SVM,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE41,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE4A,HW_CPU_X86_CLMUL,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_BMI2,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE2,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_F16C,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 20 09:35:25 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:25.835 278063 DEBUG oslo_concurrency.processutils [None req-3606f123-3f54-47ed-806d-277b81ccc136 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:35:25 np0005625203.localdomain python3.9[278602]: ansible-systemd Invoked with state=started name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:35:25 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:35:25 np0005625203.localdomain podman[278605]: 2026-02-20 09:35:25.990612153 +0000 UTC m=+0.095065534 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 20 09:35:26 np0005625203.localdomain podman[278605]: 2026-02-20 09:35:26.00554271 +0000 UTC m=+0.109996121 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 20 09:35:26 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:35:26 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:26.332 278063 DEBUG oslo_concurrency.processutils [None req-3606f123-3f54-47ed-806d-277b81ccc136 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:35:26 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:26.339 278063 DEBUG nova.virt.libvirt.host [None req-3606f123-3f54-47ed-806d-277b81ccc136 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Feb 20 09:35:26 np0005625203.localdomain nova_compute[278050]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Feb 20 09:35:26 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:26.340 278063 INFO nova.virt.libvirt.host [None req-3606f123-3f54-47ed-806d-277b81ccc136 - - - - - -] kernel doesn't support AMD SEV
Feb 20 09:35:26 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:26.342 278063 DEBUG nova.compute.provider_tree [None req-3606f123-3f54-47ed-806d-277b81ccc136 - - - - - -] Inventory has not changed in ProviderTree for provider: e5d5157a-2df2-4f51-b5fb-cd2da3a8584e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 20 09:35:26 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:26.342 278063 DEBUG nova.virt.libvirt.driver [None req-3606f123-3f54-47ed-806d-277b81ccc136 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 20 09:35:26 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:26.367 278063 DEBUG nova.scheduler.client.report [None req-3606f123-3f54-47ed-806d-277b81ccc136 - - - - - -] Inventory has not changed for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 20 09:35:26 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:26.396 278063 DEBUG nova.compute.resource_tracker [None req-3606f123-3f54-47ed-806d-277b81ccc136 - - - - - -] Compute_service record updated for np0005625203.localdomain:np0005625203.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 20 09:35:26 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:26.397 278063 DEBUG oslo_concurrency.lockutils [None req-3606f123-3f54-47ed-806d-277b81ccc136 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.898s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:35:26 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:26.397 278063 DEBUG nova.service [None req-3606f123-3f54-47ed-806d-277b81ccc136 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Feb 20 09:35:26 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:26.425 278063 DEBUG nova.service [None req-3606f123-3f54-47ed-806d-277b81ccc136 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Feb 20 09:35:26 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:26.425 278063 DEBUG nova.servicegroup.drivers.db [None req-3606f123-3f54-47ed-806d-277b81ccc136 - - - - - -] DB_Driver: join new ServiceGroup member np0005625203.localdomain to the compute group, service = <Service: host=np0005625203.localdomain, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Feb 20 09:35:26 np0005625203.localdomain sudo[278600]: pam_unix(sudo:session): session closed for user root
Feb 20 09:35:27 np0005625203.localdomain python3.9[278758]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Feb 20 09:35:28 np0005625203.localdomain sudo[278866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-grejscxcslnwqjbfbekjfeoqnobndszf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580128.5031154-3436-204267961473098/AnsiballZ_stat.py
Feb 20 09:35:28 np0005625203.localdomain sudo[278866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:35:28 np0005625203.localdomain python3.9[278868]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Feb 20 09:35:28 np0005625203.localdomain podman[240359]: time="2026-02-20T09:35:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:35:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:35:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 147353 "" "Go-http-client/1.1"
Feb 20 09:35:29 np0005625203.localdomain sudo[278866]: pam_unix(sudo:session): session closed for user root
Feb 20 09:35:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:35:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16317 "" "Go-http-client/1.1"
Feb 20 09:35:29 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52190 DF PROTO=TCP SPT=58272 DPT=9102 SEQ=1194113709 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0A7B350000000001030307) 
Feb 20 09:35:29 np0005625203.localdomain sudo[278956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tkjdsuacrlzocnkdavxuteetwqraqvvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580128.5031154-3436-204267961473098/AnsiballZ_copy.py
Feb 20 09:35:29 np0005625203.localdomain sudo[278956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:35:29 np0005625203.localdomain python3.9[278958]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1771580128.5031154-3436-204267961473098/.source.yaml _original_basename=.dis_oia0 follow=False checksum=1398ce19331de48b62372cc81e1a3aaab78c97b5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:35:29 np0005625203.localdomain sudo[278956]: pam_unix(sudo:session): session closed for user root
Feb 20 09:35:30 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52191 DF PROTO=TCP SPT=58272 DPT=9102 SEQ=1194113709 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0A7F400000000001030307) 
Feb 20 09:35:30 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:35:30 np0005625203.localdomain python3.9[279066]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:35:30 np0005625203.localdomain podman[279067]: 2026-02-20 09:35:30.776352531 +0000 UTC m=+0.092746205 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Feb 20 09:35:30 np0005625203.localdomain podman[279067]: 2026-02-20 09:35:30.805769189 +0000 UTC m=+0.122162883 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:35:30 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:35:31 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15842 DF PROTO=TCP SPT=50034 DPT=9102 SEQ=3765063134 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0A82800000000001030307) 
Feb 20 09:35:31 np0005625203.localdomain python3.9[279192]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:35:32 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52192 DF PROTO=TCP SPT=58272 DPT=9102 SEQ=1194113709 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0A87400000000001030307) 
Feb 20 09:35:33 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=17008 DF PROTO=TCP SPT=53564 DPT=9102 SEQ=2179133752 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0A8A800000000001030307) 
Feb 20 09:35:33 np0005625203.localdomain python3.9[279300]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Feb 20 09:35:34 np0005625203.localdomain sudo[279408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-fntlzoskogpfqrtfolerpmepyvkykhgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580133.6829515-3586-276808999096196/AnsiballZ_podman_container.py
Feb 20 09:35:34 np0005625203.localdomain sudo[279408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:35:34 np0005625203.localdomain python3.9[279410]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Feb 20 09:35:34 np0005625203.localdomain sudo[279408]: pam_unix(sudo:session): session closed for user root
Feb 20 09:35:34 np0005625203.localdomain systemd-journald[48285]: Field hash table of /run/log/journal/01f46965e72fd8a157841feaa66c8d52/system.journal has a fill level at 120.4 (401 of 333 items), suggesting rotation.
Feb 20 09:35:34 np0005625203.localdomain systemd-journald[48285]: /run/log/journal/01f46965e72fd8a157841feaa66c8d52/system.journal: Journal header limits reached or header out-of-date, rotating.
Feb 20 09:35:34 np0005625203.localdomain rsyslogd[758]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Feb 20 09:35:34 np0005625203.localdomain rsyslogd[758]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Feb 20 09:35:35 np0005625203.localdomain sudo[279542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-rjtsstrslbndwgxpewldaqxihpzedoef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580135.2640462-3610-89982449319528/AnsiballZ_systemd.py
Feb 20 09:35:35 np0005625203.localdomain sudo[279542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:35:35 np0005625203.localdomain python3.9[279544]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Feb 20 09:35:35 np0005625203.localdomain systemd[1]: Stopping nova_compute container...
Feb 20 09:35:35 np0005625203.localdomain systemd[1]: tmp-crun.hO6JmZ.mount: Deactivated successfully.
Feb 20 09:35:36 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52193 DF PROTO=TCP SPT=58272 DPT=9102 SEQ=1194113709 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0A97000000000001030307) 
Feb 20 09:35:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:35:37 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 09:35:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:35:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:35:37 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 09:35:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:35:37 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:35:37 np0005625203.localdomain systemd[1]: tmp-crun.BSa3JE.mount: Deactivated successfully.
Feb 20 09:35:37 np0005625203.localdomain podman[279563]: 2026-02-20 09:35:37.764191307 +0000 UTC m=+0.081057968 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb 20 09:35:37 np0005625203.localdomain podman[279563]: 2026-02-20 09:35:37.845245323 +0000 UTC m=+0.162111974 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 20 09:35:37 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:35:39 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:39.638 278063 WARNING amqp [-] Received method (60, 30) during closing channel 1. This method will be ignored
Feb 20 09:35:39 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:39.641 278063 DEBUG oslo_concurrency.lockutils [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 20 09:35:39 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:39.641 278063 DEBUG oslo_concurrency.lockutils [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 20 09:35:39 np0005625203.localdomain nova_compute[278050]: 2026-02-20 09:35:39.642 278063 DEBUG oslo_concurrency.lockutils [None req-fa59a7ef-e380-4605-ba97-a4a4379a48d7 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 20 09:35:40 np0005625203.localdomain virtqemud[228198]: End of file while reading data: Input/output error
Feb 20 09:35:40 np0005625203.localdomain systemd[1]: libpod-dec368cb98dfcd826a2717a8561372ae292aea08a549971ca164a51f21bcda22.scope: Deactivated successfully.
Feb 20 09:35:40 np0005625203.localdomain systemd[1]: libpod-dec368cb98dfcd826a2717a8561372ae292aea08a549971ca164a51f21bcda22.scope: Consumed 3.827s CPU time.
Feb 20 09:35:40 np0005625203.localdomain podman[279548]: 2026-02-20 09:35:40.019482302 +0000 UTC m=+4.127515578 container died dec368cb98dfcd826a2717a8561372ae292aea08a549971ca164a51f21bcda22 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=nova_compute, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-30a66d126d7b377a2bbf9bc0d57e51bdf55bb85eee6d379d83e3d3aa2e3d7293-1e1a77c58d37d586d6767e4afe88f7206bd2d76451c054002ad3541a88fa2ece'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.license=GPLv2)
Feb 20 09:35:40 np0005625203.localdomain systemd[1]: tmp-crun.byRu6n.mount: Deactivated successfully.
Feb 20 09:35:40 np0005625203.localdomain podman[279548]: 2026-02-20 09:35:40.09241657 +0000 UTC m=+4.200449806 container cleanup dec368cb98dfcd826a2717a8561372ae292aea08a549971ca164a51f21bcda22 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-30a66d126d7b377a2bbf9bc0d57e51bdf55bb85eee6d379d83e3d3aa2e3d7293-1e1a77c58d37d586d6767e4afe88f7206bd2d76451c054002ad3541a88fa2ece'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=nova_compute, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 20 09:35:40 np0005625203.localdomain podman[279548]: nova_compute
Feb 20 09:35:40 np0005625203.localdomain podman[279589]: 2026-02-20 09:35:40.106012286 +0000 UTC m=+0.077966664 container cleanup dec368cb98dfcd826a2717a8561372ae292aea08a549971ca164a51f21bcda22 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-30a66d126d7b377a2bbf9bc0d57e51bdf55bb85eee6d379d83e3d3aa2e3d7293-1e1a77c58d37d586d6767e4afe88f7206bd2d76451c054002ad3541a88fa2ece'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 20 09:35:40 np0005625203.localdomain systemd[1]: libpod-conmon-dec368cb98dfcd826a2717a8561372ae292aea08a549971ca164a51f21bcda22.scope: Deactivated successfully.
Feb 20 09:35:40 np0005625203.localdomain podman[279619]: error opening file `/run/crun/dec368cb98dfcd826a2717a8561372ae292aea08a549971ca164a51f21bcda22/status`: No such file or directory
Feb 20 09:35:40 np0005625203.localdomain podman[279606]: 2026-02-20 09:35:40.217265325 +0000 UTC m=+0.071797595 container cleanup dec368cb98dfcd826a2717a8561372ae292aea08a549971ca164a51f21bcda22 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-30a66d126d7b377a2bbf9bc0d57e51bdf55bb85eee6d379d83e3d3aa2e3d7293-1e1a77c58d37d586d6767e4afe88f7206bd2d76451c054002ad3541a88fa2ece'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, managed_by=edpm_ansible, config_id=nova_compute, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:35:40 np0005625203.localdomain podman[279606]: nova_compute
Feb 20 09:35:40 np0005625203.localdomain systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Feb 20 09:35:40 np0005625203.localdomain systemd[1]: Stopped nova_compute container.
Feb 20 09:35:40 np0005625203.localdomain systemd[1]: Starting nova_compute container...
Feb 20 09:35:40 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:35:40 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d10a7eaed4abd37a8eee11c3b201ea0d80651156a5c5627084b0efdfa5748781/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Feb 20 09:35:40 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d10a7eaed4abd37a8eee11c3b201ea0d80651156a5c5627084b0efdfa5748781/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Feb 20 09:35:40 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d10a7eaed4abd37a8eee11c3b201ea0d80651156a5c5627084b0efdfa5748781/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Feb 20 09:35:40 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d10a7eaed4abd37a8eee11c3b201ea0d80651156a5c5627084b0efdfa5748781/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Feb 20 09:35:40 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d10a7eaed4abd37a8eee11c3b201ea0d80651156a5c5627084b0efdfa5748781/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Feb 20 09:35:40 np0005625203.localdomain podman[279621]: 2026-02-20 09:35:40.376334475 +0000 UTC m=+0.122241316 container init dec368cb98dfcd826a2717a8561372ae292aea08a549971ca164a51f21bcda22 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-30a66d126d7b377a2bbf9bc0d57e51bdf55bb85eee6d379d83e3d3aa2e3d7293-1e1a77c58d37d586d6767e4afe88f7206bd2d76451c054002ad3541a88fa2ece'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 20 09:35:40 np0005625203.localdomain podman[279621]: 2026-02-20 09:35:40.385833525 +0000 UTC m=+0.131740366 container start dec368cb98dfcd826a2717a8561372ae292aea08a549971ca164a51f21bcda22 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-30a66d126d7b377a2bbf9bc0d57e51bdf55bb85eee6d379d83e3d3aa2e3d7293-1e1a77c58d37d586d6767e4afe88f7206bd2d76451c054002ad3541a88fa2ece'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2)
Feb 20 09:35:40 np0005625203.localdomain podman[279621]: nova_compute
Feb 20 09:35:40 np0005625203.localdomain nova_compute[279636]: + sudo -E kolla_set_configs
Feb 20 09:35:40 np0005625203.localdomain systemd[1]: Started nova_compute container.
Feb 20 09:35:40 np0005625203.localdomain sudo[279542]: pam_unix(sudo:session): session closed for user root
Feb 20 09:35:40 np0005625203.localdomain nova_compute[279636]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Feb 20 09:35:40 np0005625203.localdomain nova_compute[279636]: INFO:__main__:Validating config file
Feb 20 09:35:40 np0005625203.localdomain nova_compute[279636]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Feb 20 09:35:40 np0005625203.localdomain nova_compute[279636]: INFO:__main__:Copying service configuration files
Feb 20 09:35:40 np0005625203.localdomain nova_compute[279636]: INFO:__main__:Deleting /etc/nova/nova.conf
Feb 20 09:35:40 np0005625203.localdomain nova_compute[279636]: INFO:__main__:Copying /var/lib/kolla/config_files/src/nova-blank.conf to /etc/nova/nova.conf
Feb 20 09:35:40 np0005625203.localdomain nova_compute[279636]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Feb 20 09:35:40 np0005625203.localdomain nova_compute[279636]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Feb 20 09:35:40 np0005625203.localdomain nova_compute[279636]: INFO:__main__:Copying /var/lib/kolla/config_files/src/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Feb 20 09:35:40 np0005625203.localdomain nova_compute[279636]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Feb 20 09:35:40 np0005625203.localdomain nova_compute[279636]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb 20 09:35:40 np0005625203.localdomain nova_compute[279636]: INFO:__main__:Copying /var/lib/kolla/config_files/src/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb 20 09:35:40 np0005625203.localdomain nova_compute[279636]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Feb 20 09:35:40 np0005625203.localdomain nova_compute[279636]: INFO:__main__:Deleting /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Feb 20 09:35:40 np0005625203.localdomain nova_compute[279636]: INFO:__main__:Copying /var/lib/kolla/config_files/src/99-nova-compute-cells-workarounds.conf to /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Feb 20 09:35:40 np0005625203.localdomain nova_compute[279636]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/99-nova-compute-cells-workarounds.conf
Feb 20 09:35:40 np0005625203.localdomain nova_compute[279636]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Feb 20 09:35:40 np0005625203.localdomain nova_compute[279636]: INFO:__main__:Copying /var/lib/kolla/config_files/src/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Feb 20 09:35:40 np0005625203.localdomain nova_compute[279636]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Feb 20 09:35:40 np0005625203.localdomain nova_compute[279636]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb 20 09:35:40 np0005625203.localdomain nova_compute[279636]: INFO:__main__:Copying /var/lib/kolla/config_files/src/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb 20 09:35:40 np0005625203.localdomain nova_compute[279636]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Feb 20 09:35:40 np0005625203.localdomain nova_compute[279636]: INFO:__main__:Deleting /etc/ceph
Feb 20 09:35:40 np0005625203.localdomain nova_compute[279636]: INFO:__main__:Creating directory /etc/ceph
Feb 20 09:35:40 np0005625203.localdomain nova_compute[279636]: INFO:__main__:Setting permission for /etc/ceph
Feb 20 09:35:40 np0005625203.localdomain nova_compute[279636]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Feb 20 09:35:40 np0005625203.localdomain nova_compute[279636]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Feb 20 09:35:40 np0005625203.localdomain nova_compute[279636]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceph/ceph.conf to /etc/ceph/ceph.conf
Feb 20 09:35:40 np0005625203.localdomain nova_compute[279636]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Feb 20 09:35:40 np0005625203.localdomain nova_compute[279636]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Feb 20 09:35:40 np0005625203.localdomain nova_compute[279636]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Feb 20 09:35:40 np0005625203.localdomain nova_compute[279636]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb 20 09:35:40 np0005625203.localdomain nova_compute[279636]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Feb 20 09:35:40 np0005625203.localdomain nova_compute[279636]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ssh-config to /var/lib/nova/.ssh/config
Feb 20 09:35:40 np0005625203.localdomain nova_compute[279636]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb 20 09:35:40 np0005625203.localdomain nova_compute[279636]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Feb 20 09:35:40 np0005625203.localdomain nova_compute[279636]: INFO:__main__:Copying /var/lib/kolla/config_files/src/run-on-host to /usr/sbin/iscsiadm
Feb 20 09:35:40 np0005625203.localdomain nova_compute[279636]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Feb 20 09:35:40 np0005625203.localdomain nova_compute[279636]: INFO:__main__:Writing out command to execute
Feb 20 09:35:40 np0005625203.localdomain nova_compute[279636]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Feb 20 09:35:40 np0005625203.localdomain nova_compute[279636]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Feb 20 09:35:40 np0005625203.localdomain nova_compute[279636]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Feb 20 09:35:40 np0005625203.localdomain nova_compute[279636]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Feb 20 09:35:40 np0005625203.localdomain nova_compute[279636]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Feb 20 09:35:40 np0005625203.localdomain nova_compute[279636]: ++ cat /run_command
Feb 20 09:35:40 np0005625203.localdomain nova_compute[279636]: + CMD=nova-compute
Feb 20 09:35:40 np0005625203.localdomain nova_compute[279636]: + ARGS=
Feb 20 09:35:40 np0005625203.localdomain nova_compute[279636]: + sudo kolla_copy_cacerts
Feb 20 09:35:40 np0005625203.localdomain nova_compute[279636]: + [[ ! -n '' ]]
Feb 20 09:35:40 np0005625203.localdomain nova_compute[279636]: + . kolla_extend_start
Feb 20 09:35:40 np0005625203.localdomain nova_compute[279636]: + echo 'Running command: '\''nova-compute'\'''
Feb 20 09:35:40 np0005625203.localdomain nova_compute[279636]: Running command: 'nova-compute'
Feb 20 09:35:40 np0005625203.localdomain nova_compute[279636]: + umask 0022
Feb 20 09:35:40 np0005625203.localdomain nova_compute[279636]: + exec nova-compute
Feb 20 09:35:40 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:35:41 np0005625203.localdomain podman[279665]: 2026-02-20 09:35:41.027734687 +0000 UTC m=+0.094801408 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9/ubi-minimal, release=1770267347, config_id=openstack_network_exporter, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, architecture=x86_64, vcs-type=git, com.redhat.component=ubi9-minimal-container, build-date=2026-02-05T04:57:10Z, distribution-scope=public, version=9.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, managed_by=edpm_ansible, io.buildah.version=1.33.7, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.created=2026-02-05T04:57:10Z, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Feb 20 09:35:41 np0005625203.localdomain podman[279665]: 2026-02-20 09:35:41.03929782 +0000 UTC m=+0.106364551 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, distribution-scope=public, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=minimal rhel9, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, version=9.7, managed_by=edpm_ansible, org.opencontainers.image.created=2026-02-05T04:57:10Z, build-date=2026-02-05T04:57:10Z, release=1770267347, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', 
'/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, name=ubi9/ubi-minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.33.7)
Feb 20 09:35:41 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:35:41 np0005625203.localdomain sudo[279775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-meesengzqdmzfjcjvncjoikgkkznxbwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1771580141.2387793-3637-177884641011657/AnsiballZ_podman_container.py
Feb 20 09:35:41 np0005625203.localdomain sudo[279775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Feb 20 09:35:41 np0005625203.localdomain python3.9[279777]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None 
preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Feb 20 09:35:42 np0005625203.localdomain systemd[1]: Started libpod-conmon-0eed8cca6dcf940995f6fb4bcae2adcd2e143ba5c2695aeb8d307e61e3ebb962.scope.
Feb 20 09:35:42 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:35:42 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2d730ecca856718a3d4e00c1ab30444b07eb2a53dd78143f1ec6fb07f9a60b9/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Feb 20 09:35:42 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2d730ecca856718a3d4e00c1ab30444b07eb2a53dd78143f1ec6fb07f9a60b9/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Feb 20 09:35:42 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2d730ecca856718a3d4e00c1ab30444b07eb2a53dd78143f1ec6fb07f9a60b9/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Feb 20 09:35:42 np0005625203.localdomain podman[279803]: 2026-02-20 09:35:42.079211473 +0000 UTC m=+0.167203380 container init 0eed8cca6dcf940995f6fb4bcae2adcd2e143ba5c2695aeb8d307e61e3ebb962 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': '1e1a77c58d37d586d6767e4afe88f7206bd2d76451c054002ad3541a88fa2ece'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=nova_compute_init, container_name=nova_compute_init, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 20 09:35:42 np0005625203.localdomain systemd[1]: tmp-crun.JV4pZS.mount: Deactivated successfully.
Feb 20 09:35:42 np0005625203.localdomain podman[279803]: 2026-02-20 09:35:42.096419568 +0000 UTC m=+0.184411465 container start 0eed8cca6dcf940995f6fb4bcae2adcd2e143ba5c2695aeb8d307e61e3ebb962 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=nova_compute_init, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': '1e1a77c58d37d586d6767e4afe88f7206bd2d76451c054002ad3541a88fa2ece'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:35:42 np0005625203.localdomain python3.9[279777]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Feb 20 09:35:42 np0005625203.localdomain nova_compute_init[279825]: INFO:nova_statedir:Applying nova statedir ownership
Feb 20 09:35:42 np0005625203.localdomain nova_compute_init[279825]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Feb 20 09:35:42 np0005625203.localdomain nova_compute_init[279825]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Feb 20 09:35:42 np0005625203.localdomain nova_compute_init[279825]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Feb 20 09:35:42 np0005625203.localdomain nova_compute_init[279825]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Feb 20 09:35:42 np0005625203.localdomain nova_compute_init[279825]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Feb 20 09:35:42 np0005625203.localdomain nova_compute_init[279825]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Feb 20 09:35:42 np0005625203.localdomain nova_compute_init[279825]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Feb 20 09:35:42 np0005625203.localdomain nova_compute_init[279825]: INFO:nova_statedir:Checking uid: 0 gid: 0 path: /var/lib/nova/delay-nova-compute
Feb 20 09:35:42 np0005625203.localdomain nova_compute_init[279825]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Feb 20 09:35:42 np0005625203.localdomain nova_compute_init[279825]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Feb 20 09:35:42 np0005625203.localdomain nova_compute_init[279825]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Feb 20 09:35:42 np0005625203.localdomain nova_compute_init[279825]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Feb 20 09:35:42 np0005625203.localdomain nova_compute_init[279825]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Feb 20 09:35:42 np0005625203.localdomain nova_compute_init[279825]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/
Feb 20 09:35:42 np0005625203.localdomain nova_compute_init[279825]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache already 42436:42436
Feb 20 09:35:42 np0005625203.localdomain nova_compute_init[279825]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache to system_u:object_r:container_file_t:s0
Feb 20 09:35:42 np0005625203.localdomain nova_compute_init[279825]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/
Feb 20 09:35:42 np0005625203.localdomain nova_compute_init[279825]: INFO:nova_statedir:Ownership of /var/lib/nova/.cache/python-entrypoints already 42436:42436
Feb 20 09:35:42 np0005625203.localdomain nova_compute_init[279825]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.cache/python-entrypoints to system_u:object_r:container_file_t:s0
Feb 20 09:35:42 np0005625203.localdomain nova_compute_init[279825]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/fc52238ffcbdcb325c6bf3fe6412477fc4bdb6cd9151f39289b74f25e08e0db9
Feb 20 09:35:42 np0005625203.localdomain nova_compute_init[279825]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.cache/python-entrypoints/d301d14069645d8c23fee2987984776b3e88a570e1aa96d6cf3e31fa880385fd
Feb 20 09:35:42 np0005625203.localdomain nova_compute_init[279825]: INFO:nova_statedir:Nova statedir ownership complete
Feb 20 09:35:42 np0005625203.localdomain systemd[1]: libpod-0eed8cca6dcf940995f6fb4bcae2adcd2e143ba5c2695aeb8d307e61e3ebb962.scope: Deactivated successfully.
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.175 279640 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.175 279640 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.176 279640 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.176 279640 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Feb 20 09:35:42 np0005625203.localdomain podman[279838]: 2026-02-20 09:35:42.247477013 +0000 UTC m=+0.065943356 container died 0eed8cca6dcf940995f6fb4bcae2adcd2e143ba5c2695aeb8d307e61e3ebb962 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': '1e1a77c58d37d586d6767e4afe88f7206bd2d76451c054002ad3541a88fa2ece'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=nova_compute_init, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:35:42 np0005625203.localdomain sudo[279775]: pam_unix(sudo:session): session closed for user root
Feb 20 09:35:42 np0005625203.localdomain podman[279838]: 2026-02-20 09:35:42.272775956 +0000 UTC m=+0.091242269 container cleanup 0eed8cca6dcf940995f6fb4bcae2adcd2e143ba5c2695aeb8d307e61e3ebb962 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': '1e1a77c58d37d586d6767e4afe88f7206bd2d76451c054002ad3541a88fa2ece'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=nova_compute_init, container_name=nova_compute_init, org.label-schema.build-date=20260127)
Feb 20 09:35:42 np0005625203.localdomain systemd[1]: libpod-conmon-0eed8cca6dcf940995f6fb4bcae2adcd2e143ba5c2695aeb8d307e61e3ebb962.scope: Deactivated successfully.
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.309 279640 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.332 279640 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.333 279640 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.704 279640 INFO nova.virt.driver [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.814 279640 INFO nova.compute.provider_config [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.822 279640 DEBUG oslo_concurrency.lockutils [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.823 279640 DEBUG oslo_concurrency.lockutils [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.823 279640 DEBUG oslo_concurrency.lockutils [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.823 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.823 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.824 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.824 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.824 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.824 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.824 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.824 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.824 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.824 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.825 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.825 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.825 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.825 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.825 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.825 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.825 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.826 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.826 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.826 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] console_host                   = np0005625203.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.826 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.826 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.826 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.826 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.827 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.827 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.827 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.827 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.827 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.828 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.828 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.828 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.828 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.828 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.828 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.829 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.829 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.829 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.829 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] host                           = np0005625203.localdomain log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.829 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.829 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain sshd[263179]: pam_unix(sshd:session): session closed for user zuul
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.830 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.830 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.830 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.830 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.830 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.830 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.830 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.831 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.831 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.831 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.831 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.831 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.831 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.831 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.831 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.832 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.832 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.832 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.832 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.832 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.832 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.832 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.833 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.833 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain systemd[1]: session-58.scope: Deactivated successfully.
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.833 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.833 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain systemd[1]: session-58.scope: Consumed 1min 25.994s CPU time.
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.833 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.833 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.833 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.834 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.834 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.834 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.834 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.834 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.834 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.834 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain systemd-logind[759]: Session 58 logged out. Waiting for processes to exit.
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.835 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.835 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.835 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.835 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] my_block_storage_ip            = 192.168.122.107 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.835 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] my_ip                          = 192.168.122.107 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.835 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.835 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.836 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.836 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.836 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.836 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.836 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.836 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.837 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.837 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.837 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.837 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.837 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.837 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain systemd-logind[759]: Removed session 58.
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.837 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.838 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.838 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.838 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.838 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.838 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.838 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.838 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.838 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.839 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.839 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.839 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.839 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.839 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.839 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.839 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.840 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.840 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.840 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.840 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.840 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.840 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.840 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.841 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.841 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.841 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.841 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.841 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.841 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.841 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.841 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.842 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.842 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.842 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.842 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.842 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.842 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.842 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.842 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.843 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.843 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.843 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.843 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.843 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.843 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.843 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.844 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.844 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.844 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.844 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.844 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.844 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.844 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.844 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.845 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.845 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.845 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.845 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.845 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.845 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.845 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.846 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.846 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.846 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.846 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.846 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.846 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.846 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.847 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.847 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.847 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.847 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.847 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.847 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.847 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.847 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.848 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.848 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.848 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.848 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.848 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.848 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.848 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.849 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.849 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.849 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.849 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.849 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.849 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.849 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.849 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.850 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.850 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.850 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.850 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.850 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.850 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.850 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.850 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.851 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.851 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.851 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.851 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.851 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.851 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.851 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.852 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.852 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.852 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.852 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.852 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.852 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.852 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.852 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.853 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.853 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.853 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.853 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.853 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.853 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.853 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.854 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.854 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.854 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.854 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.854 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.854 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cinder.os_region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.854 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.854 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.855 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.855 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.855 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.855 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.855 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.855 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.855 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.855 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.856 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.856 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.856 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.856 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.856 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.856 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.856 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.857 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.857 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.857 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.857 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.857 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.857 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.857 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.857 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.858 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.858 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.858 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.858 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.858 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.858 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.858 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.859 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.859 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.859 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.859 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.859 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.859 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.859 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.859 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.860 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.860 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.860 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.860 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.860 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.860 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.860 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.860 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.861 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.861 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.861 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.861 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.861 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.861 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.861 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.862 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.862 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.862 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.862 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.862 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.862 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.862 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.863 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.863 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.863 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.863 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.863 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.863 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.863 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.863 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.864 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.864 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.864 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.864 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.864 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.864 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.864 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.864 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.865 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.865 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.865 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.865 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.865 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.865 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.865 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.866 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.866 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.866 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.866 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.866 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.866 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.866 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.866 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.867 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.867 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.867 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.867 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.867 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.867 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.867 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.868 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.868 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.868 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.868 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.868 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.868 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.868 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.868 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.869 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.869 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.869 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.869 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.869 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.869 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.869 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.870 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.870 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.870 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.870 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.870 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.870 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.870 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.870 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.871 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.871 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.871 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.871 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.871 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.871 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.871 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.871 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.872 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.872 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.872 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.872 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.872 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.872 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.873 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.873 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.873 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.873 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.873 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.873 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.873 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.874 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.874 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.874 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.874 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.874 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.874 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.874 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.874 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.875 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.875 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.875 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.875 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.875 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.875 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.875 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.875 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.876 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.876 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.876 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.876 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.876 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.876 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.876 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.877 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.877 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.877 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.877 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.877 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] barbican.barbican_region_name  = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.877 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.877 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.878 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.878 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.878 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.878 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.878 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.878 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.878 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.878 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.879 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.879 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.879 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.879 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.879 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.879 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.879 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.879 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.880 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.880 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.880 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.880 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.880 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.880 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.880 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.880 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.881 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.881 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.881 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.881 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.881 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.881 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.881 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.882 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.882 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.882 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.882 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.882 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.882 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.882 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.882 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.883 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.883 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.883 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.883 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.883 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.883 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.883 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.883 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.884 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.884 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.884 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.884 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.884 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.884 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.884 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.885 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.885 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.885 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.885 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.885 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.885 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.885 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.885 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.886 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.886 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.886 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.886 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.886 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.886 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.886 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.887 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.887 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.887 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.887 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.887 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.887 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.887 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.887 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.888 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.888 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.888 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.888 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.888 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.888 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.888 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.889 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.889 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.889 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.889 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.889 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.889 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.890 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.890 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.890 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.890 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.890 279640 WARNING oslo_config.cfg [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: live_migration_uri is deprecated for removal in favor of two other options that
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: and ``live_migration_inbound_addr`` respectively.
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: ).  Its value may be silently ignored in the future.
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.890 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.live_migration_uri     = qemu+ssh://nova@%s/system?keyfile=/var/lib/nova/.ssh/ssh-privatekey log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
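[editor's note] The warning above flags `live_migration_uri` as deprecated in favor of `live_migration_scheme` and `live_migration_inbound_addr`. A minimal replacement sketch for nova.conf, assuming the SSH transport from the logged URI is kept (note: the `keyfile=` query parameter of the old URI has no direct equivalent in these two options and would need to be handled separately, e.g. via the nova user's SSH client configuration):

```ini
[libvirt]
# Sketch only -- scheme taken from the qemu+ssh URI logged above;
# the inbound address value is an assumption, set per target host.
live_migration_scheme = ssh
live_migration_inbound_addr = <target-host-address>
```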
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.891 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.live_migration_with_native_tls = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.891 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.891 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.891 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.891 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.891 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.891 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.892 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.892 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.892 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.892 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.892 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.892 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.893 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.893 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.893 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.893 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.893 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.rbd_secret_uuid        = a8557ee9-b55d-5519-942c-cf8f6172f1d8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.893 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.893 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.894 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.894 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.894 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.894 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.894 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.894 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.894 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.895 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.895 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.895 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.895 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.895 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.895 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.895 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.896 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.896 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.896 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.896 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.896 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.896 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.896 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.897 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.897 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.897 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.897 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.897 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.897 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.897 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.898 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.898 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.898 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.898 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.898 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.898 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.898 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.898 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.899 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.899 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.899 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.899 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.899 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.899 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.899 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.900 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.900 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.900 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.900 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.900 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.900 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.900 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.900 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.901 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.901 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.901 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.901 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.901 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.901 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.901 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.902 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.902 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.902 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.902 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.902 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.902 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.902 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.903 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.903 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.903 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.903 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.903 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] placement.auth_url             = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.903 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.903 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.904 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.904 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.904 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.904 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.904 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.904 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.904 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.904 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.905 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.905 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.905 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.905 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.905 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.905 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.905 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.906 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.906 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.906 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.906 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.906 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.906 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.906 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.906 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.907 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.907 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.907 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.907 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.907 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.907 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.907 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.908 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.908 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.908 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.908 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.908 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.908 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.908 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.908 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.909 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.909 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.909 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.909 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.909 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.909 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.909 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.910 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.910 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.910 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.910 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.910 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.910 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.911 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.911 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.911 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.911 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.911 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.911 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.911 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.912 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.912 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.912 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.912 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.912 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.912 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.912 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.913 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.913 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.913 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.913 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.913 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.913 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.913 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.913 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.914 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.914 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.914 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.914 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.914 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.914 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.914 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.915 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.915 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.915 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.915 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.915 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.915 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.915 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.916 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.916 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.916 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.916 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.916 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.916 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.916 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.917 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.917 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.917 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.917 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.917 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.917 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.917 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.917 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.918 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.918 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.918 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.918 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.918 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.918 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.919 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.919 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.919 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.919 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.919 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.919 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.919 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.919 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.920 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.920 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.920 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.920 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.920 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.920 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.920 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.921 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.921 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.921 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.921 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.921 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.921 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.921 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.922 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.922 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.922 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.922 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.922 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.922 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.922 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.923 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.923 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.923 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.923 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.923 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.923 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.924 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.924 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.924 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.924 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.924 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.924 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.924 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.925 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.925 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.925 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.925 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.925 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.925 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vnc.novncproxy_base_url        = http://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.925 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.926 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.926 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.926 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vnc.server_proxyclient_address = 192.168.122.107 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.926 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.926 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.926 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.926 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.927 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.927 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.927 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.927 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.927 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.927 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.927 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.927 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.928 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.928 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.928 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.928 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.928 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.928 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.928 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.929 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.929 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.929 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.929 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.929 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.929 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.929 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.929 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.930 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.930 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.930 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.930 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.930 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.930 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.930 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.931 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.931 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.931 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.931 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.931 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.931 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.931 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.932 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.932 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.932 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.932 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.932 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.932 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.932 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.932 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.933 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.933 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.933 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.933 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.933 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.933 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.933 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.934 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.934 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.934 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.934 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.934 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.934 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.934 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.935 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.935 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.935 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.935 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.935 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.935 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.935 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.935 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.936 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.936 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.936 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.936 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.936 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.936 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.936 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.937 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.937 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.937 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.937 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.937 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.937 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.937 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.938 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.938 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.938 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.938 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.938 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.938 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_limit.auth_url            = http://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.938 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.939 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.939 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.939 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.939 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.939 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.939 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.939 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.939 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.940 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.940 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.940 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.940 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.940 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.940 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.940 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.941 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.941 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.941 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.941 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.941 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.941 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.941 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.942 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.942 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.942 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.942 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.942 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.942 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.942 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.942 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.943 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.943 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.943 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.943 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.943 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.943 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.943 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.944 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.944 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.944 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.944 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.944 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.944 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.944 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.945 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.945 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.945 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.945 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.945 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.945 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.945 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.945 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.946 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.946 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.946 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.946 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.946 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.946 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.946 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.947 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.947 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.947 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.947 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.947 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.947 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.947 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.948 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.948 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.948 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.948 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.948 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.948 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.948 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.948 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.949 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.949 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.949 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.949 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.949 279640 DEBUG oslo_service.service [None req-c16eaee7-dfa2-46fe-863c-3d5fda9b6d77 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.950 279640 INFO nova.service [-] Starting compute node (version 27.5.2-0.20260127144738.eaa65f0.el9)
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.968 279640 INFO nova.virt.node [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] Determined node identity e5d5157a-2df2-4f51-b5fb-cd2da3a8584e from /var/lib/nova/compute_id
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.969 279640 DEBUG nova.virt.libvirt.host [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.969 279640 DEBUG nova.virt.libvirt.host [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.970 279640 DEBUG nova.virt.libvirt.host [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.970 279640 DEBUG nova.virt.libvirt.host [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.980 279640 DEBUG nova.virt.libvirt.host [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fbe6fae4580> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.982 279640 DEBUG nova.virt.libvirt.host [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fbe6fae4580> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.983 279640 INFO nova.virt.libvirt.driver [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] Connection event '1' reason 'None'
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.988 279640 INFO nova.virt.libvirt.host [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] Libvirt host capabilities <capabilities>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:   <host>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:     <uuid>a53ba227-4db8-45ed-bb70-5a295cbaca1c</uuid>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:     <cpu>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <arch>x86_64</arch>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <model>EPYC-Rome-v4</model>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <vendor>AMD</vendor>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <microcode version='16777317'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <signature family='23' model='49' stepping='0'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <maxphysaddr mode='emulate' bits='40'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <feature name='x2apic'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <feature name='tsc-deadline'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <feature name='osxsave'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <feature name='hypervisor'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <feature name='tsc_adjust'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <feature name='spec-ctrl'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <feature name='stibp'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <feature name='arch-capabilities'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <feature name='ssbd'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <feature name='cmp_legacy'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <feature name='topoext'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <feature name='virt-ssbd'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <feature name='lbrv'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <feature name='tsc-scale'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <feature name='vmcb-clean'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <feature name='pause-filter'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <feature name='pfthreshold'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <feature name='svme-addr-chk'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <feature name='rdctl-no'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <feature name='skip-l1dfl-vmentry'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <feature name='mds-no'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <feature name='pschange-mc-no'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <pages unit='KiB' size='4'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <pages unit='KiB' size='2048'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <pages unit='KiB' size='1048576'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:     </cpu>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:     <power_management>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <suspend_mem/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <suspend_disk/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <suspend_hybrid/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:     </power_management>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:     <iommu support='no'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:     <migration_features>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <live/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <uri_transports>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:         <uri_transport>tcp</uri_transport>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:         <uri_transport>rdma</uri_transport>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       </uri_transports>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:     </migration_features>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:     <topology>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <cells num='1'>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:         <cell id='0'>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:           <memory unit='KiB'>16116612</memory>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:           <pages unit='KiB' size='4'>4029153</pages>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:           <pages unit='KiB' size='2048'>0</pages>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:           <pages unit='KiB' size='1048576'>0</pages>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:           <distances>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:             <sibling id='0' value='10'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:           </distances>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:           <cpus num='8'>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:           </cpus>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:         </cell>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       </cells>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:     </topology>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:     <cache>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:     </cache>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:     <secmodel>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <model>selinux</model>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <doi>0</doi>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:     </secmodel>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:     <secmodel>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <model>dac</model>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <doi>0</doi>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <baselabel type='kvm'>+107:+107</baselabel>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <baselabel type='qemu'>+107:+107</baselabel>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:     </secmodel>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:   </host>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:   <guest>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:     <os_type>hvm</os_type>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:     <arch name='i686'>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <wordsize>32</wordsize>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <domain type='qemu'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <domain type='kvm'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:     </arch>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:     <features>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <pae/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <nonpae/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <acpi default='on' toggle='yes'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <apic default='on' toggle='no'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <cpuselection/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <deviceboot/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <disksnapshot default='on' toggle='no'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <externalSnapshot/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:     </features>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:   </guest>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:   <guest>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:     <os_type>hvm</os_type>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:     <arch name='x86_64'>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <wordsize>64</wordsize>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <domain type='qemu'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <domain type='kvm'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:     </arch>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:     <features>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <acpi default='on' toggle='yes'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <apic default='on' toggle='no'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <cpuselection/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <deviceboot/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <disksnapshot default='on' toggle='no'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <externalSnapshot/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:     </features>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:   </guest>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: </capabilities>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.993 279640 DEBUG nova.virt.libvirt.host [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:42.999 279640 DEBUG nova.virt.libvirt.host [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]: <domainCapabilities>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:   <path>/usr/libexec/qemu-kvm</path>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:   <domain>kvm</domain>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:   <machine>pc-q35-rhel9.8.0</machine>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:   <arch>i686</arch>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:   <vcpu max='1024'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:   <iothreads supported='yes'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:   <os supported='yes'>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:     <enum name='firmware'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:     <loader supported='yes'>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <enum name='type'>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:         <value>rom</value>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:         <value>pflash</value>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <enum name='readonly'>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:         <value>yes</value>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:         <value>no</value>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <enum name='secure'>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:         <value>no</value>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:     </loader>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:   </os>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:   <cpu>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:     <mode name='host-passthrough' supported='yes'>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <enum name='hostPassthroughMigratable'>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:         <value>on</value>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:         <value>off</value>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:     </mode>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:     <mode name='maximum' supported='yes'>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <enum name='maximumMigratable'>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:         <value>on</value>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:         <value>off</value>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:     </mode>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:     <mode name='host-model' supported='yes'>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <model fallback='forbid'>EPYC-Rome</model>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <vendor>AMD</vendor>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <maxphysaddr mode='passthrough' limit='40'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='x2apic'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='tsc-deadline'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='hypervisor'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='tsc_adjust'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='spec-ctrl'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='stibp'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='ssbd'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='cmp_legacy'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='overflow-recov'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='succor'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='ibrs'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='amd-ssbd'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='virt-ssbd'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='lbrv'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='tsc-scale'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='vmcb-clean'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='pause-filter'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='pfthreshold'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='svme-addr-chk'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='lfence-always-serializing'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <feature policy='disable' name='xsaves'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:     </mode>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:     <mode name='custom' supported='yes'>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <blockers model='Broadwell'>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <blockers model='Broadwell-IBRS'>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <blockers model='Broadwell-noTSX'>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:       <blockers model='Broadwell-noTSX-IBRS'>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:42 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Broadwell-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Broadwell-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Broadwell-v3'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Broadwell-v4'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Cascadelake-Server'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Cascadelake-Server-noTSX'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Cascadelake-Server-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Cascadelake-Server-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Cascadelake-Server-v3'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Cascadelake-Server-v4'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Cascadelake-Server-v5'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='ClearwaterForest'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-ne-convert'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni-int16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni-int8'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bhi-ctrl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bhi-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cldemote'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cmpccxadd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ddpd-u'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fbsdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gds-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='intel-psfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ipred-ctrl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='lam'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='mcdt-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdir64b'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdiri'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pbrsb-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='prefetchiti'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='psdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rfds-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rrsba-ctrl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='serialize'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sha512'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sm3'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sm4'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='ClearwaterForest-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-ne-convert'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni-int16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni-int8'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bhi-ctrl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bhi-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cldemote'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cmpccxadd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ddpd-u'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fbsdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gds-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='intel-psfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ipred-ctrl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='lam'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='mcdt-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdir64b'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdiri'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pbrsb-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='prefetchiti'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='psdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rfds-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rrsba-ctrl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='serialize'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sha512'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sm3'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sm4'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Cooperlake'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Cooperlake-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Cooperlake-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Denverton'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='mpx'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Denverton-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='mpx'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Denverton-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Denverton-v3'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Dhyana-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-Genoa'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amd-psfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='auto-ibrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='no-nested-data-bp'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='null-sel-clr-base'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='stibp-always-on'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-Genoa-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amd-psfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='auto-ibrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='no-nested-data-bp'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='null-sel-clr-base'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='stibp-always-on'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-Genoa-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amd-psfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='auto-ibrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fs-gs-base-ns'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='no-nested-data-bp'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='null-sel-clr-base'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='perfmon-v2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='stibp-always-on'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-Milan'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-Milan-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-Milan-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amd-psfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='no-nested-data-bp'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='null-sel-clr-base'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='stibp-always-on'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-Milan-v3'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amd-psfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='no-nested-data-bp'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='null-sel-clr-base'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='stibp-always-on'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-Rome'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-Rome-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-Rome-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-Rome-v3'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-Turin'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amd-psfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='auto-ibrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vp2intersect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fs-gs-base-ns'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibpb-brtype'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdir64b'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdiri'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='no-nested-data-bp'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='null-sel-clr-base'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='perfmon-v2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='prefetchi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sbpb'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='srso-user-kernel-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='stibp-always-on'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-Turin-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amd-psfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='auto-ibrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vp2intersect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fs-gs-base-ns'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibpb-brtype'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdir64b'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdiri'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='no-nested-data-bp'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='null-sel-clr-base'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='perfmon-v2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='prefetchi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sbpb'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='srso-user-kernel-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='stibp-always-on'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-v3'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-v4'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-v5'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='GraniteRapids'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-fp16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-int8'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-tile'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-fp16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fbsdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrc'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fzrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='mcdt-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pbrsb-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='prefetchiti'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='psdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='serialize'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='GraniteRapids-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-fp16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-int8'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-tile'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-fp16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fbsdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrc'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fzrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='mcdt-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pbrsb-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='prefetchiti'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='psdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='serialize'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='GraniteRapids-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-fp16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-int8'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-tile'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx10'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx10-128'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx10-256'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx10-512'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-fp16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cldemote'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fbsdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrc'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fzrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='mcdt-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdir64b'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdiri'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pbrsb-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='prefetchiti'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='psdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='serialize'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='GraniteRapids-v3'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-fp16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-int8'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-tile'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx10'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx10-128'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx10-256'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx10-512'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-fp16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cldemote'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fbsdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrc'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fzrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='mcdt-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdir64b'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdiri'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pbrsb-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='prefetchiti'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='psdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='serialize'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Haswell'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Haswell-IBRS'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Haswell-noTSX'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Haswell-noTSX-IBRS'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Haswell-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Haswell-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Haswell-v3'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Haswell-v4'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Icelake-Server'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Icelake-Server-noTSX'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Icelake-Server-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Icelake-Server-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Icelake-Server-v3'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Icelake-Server-v4'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Icelake-Server-v5'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Icelake-Server-v6'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Icelake-Server-v7'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='IvyBridge'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='IvyBridge-IBRS'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='IvyBridge-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='IvyBridge-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='KnightsMill'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-4fmaps'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-4vnniw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512er'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512pf'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='KnightsMill-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-4fmaps'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-4vnniw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512er'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512pf'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Opteron_G4'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fma4'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xop'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Opteron_G4-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fma4'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xop'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Opteron_G5'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fma4'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='tbm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xop'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Opteron_G5-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fma4'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='tbm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xop'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='SapphireRapids'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-c2d730ecca856718a3d4e00c1ab30444b07eb2a53dd78143f1ec6fb07f9a60b9-merged.mount: Deactivated successfully.
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-int8'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-tile'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-fp16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0eed8cca6dcf940995f6fb4bcae2adcd2e143ba5c2695aeb8d307e61e3ebb962-userdata-shm.mount: Deactivated successfully.
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrc'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fzrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='serialize'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='SapphireRapids-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-int8'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-tile'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-fp16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrc'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fzrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='serialize'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='SapphireRapids-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-int8'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-tile'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-fp16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fbsdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrc'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fzrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='psdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='serialize'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='SapphireRapids-v3'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-int8'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-tile'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-fp16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cldemote'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fbsdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrc'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fzrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdir64b'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdiri'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='psdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='serialize'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='SapphireRapids-v4'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-int8'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-tile'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-fp16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cldemote'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fbsdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrc'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fzrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdir64b'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdiri'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='psdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='serialize'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='SierraForest'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-ne-convert'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni-int8'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cmpccxadd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fbsdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='mcdt-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pbrsb-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='psdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='serialize'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='SierraForest-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-ne-convert'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni-int8'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cmpccxadd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fbsdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='mcdt-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pbrsb-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='psdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='serialize'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='SierraForest-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-ne-convert'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni-int8'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bhi-ctrl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cldemote'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cmpccxadd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fbsdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gds-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='intel-psfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ipred-ctrl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='lam'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='mcdt-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdir64b'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdiri'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pbrsb-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='psdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rfds-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rrsba-ctrl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='serialize'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='SierraForest-v3'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-ne-convert'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni-int8'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bhi-ctrl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cldemote'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cmpccxadd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fbsdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gds-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='intel-psfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ipred-ctrl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='lam'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='mcdt-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdir64b'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdiri'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pbrsb-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='psdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rfds-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rrsba-ctrl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='serialize'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Skylake-Client'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Skylake-Client-IBRS'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Skylake-Client-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Skylake-Client-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Skylake-Client-v3'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Skylake-Client-v4'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Skylake-Server'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Skylake-Server-IBRS'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Skylake-Server-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Skylake-Server-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Skylake-Server-v3'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Skylake-Server-v4'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Skylake-Server-v5'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Snowridge'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cldemote'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='core-capability'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdir64b'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdiri'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='mpx'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='split-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Snowridge-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cldemote'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='core-capability'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdir64b'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdiri'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='mpx'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='split-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Snowridge-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cldemote'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='core-capability'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdir64b'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdiri'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='split-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Snowridge-v3'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cldemote'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='core-capability'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdir64b'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdiri'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='split-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Snowridge-v4'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cldemote'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdir64b'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdiri'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='athlon'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='3dnow'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='3dnowext'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='athlon-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='3dnow'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='3dnowext'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='core2duo'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='core2duo-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='coreduo'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='coreduo-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='n270'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='n270-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='phenom'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='3dnow'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='3dnowext'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='phenom-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='3dnow'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='3dnowext'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </mode>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:   </cpu>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:   <memoryBacking supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <enum name='sourceType'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <value>file</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <value>anonymous</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <value>memfd</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:   </memoryBacking>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:   <devices>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <disk supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='diskDevice'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>disk</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>cdrom</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>floppy</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>lun</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='bus'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>fdc</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>scsi</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>virtio</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>usb</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>sata</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='model'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>virtio</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>virtio-transitional</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>virtio-non-transitional</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </disk>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <graphics supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='type'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>vnc</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>egl-headless</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>dbus</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </graphics>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <video supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='modelType'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>vga</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>cirrus</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>virtio</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>none</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>bochs</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>ramfb</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </video>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <hostdev supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='mode'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>subsystem</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='startupPolicy'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>default</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>mandatory</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>requisite</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>optional</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='subsysType'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>usb</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>pci</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>scsi</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='capsType'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='pciBackend'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </hostdev>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <rng supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='model'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>virtio</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>virtio-transitional</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>virtio-non-transitional</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='backendModel'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>random</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>egd</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>builtin</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </rng>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <filesystem supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='driverType'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>path</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>handle</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>virtiofs</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </filesystem>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <tpm supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='model'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>tpm-tis</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>tpm-crb</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='backendModel'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>emulator</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>external</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='backendVersion'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>2.0</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </tpm>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <redirdev supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='bus'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>usb</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </redirdev>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <channel supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='type'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>pty</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>unix</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </channel>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <crypto supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='model'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='type'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>qemu</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='backendModel'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>builtin</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </crypto>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <interface supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='backendType'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>default</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>passt</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </interface>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <panic supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='model'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>isa</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>hyperv</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </panic>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <console supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='type'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>null</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>vc</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>pty</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>dev</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>file</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>pipe</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>stdio</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>udp</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>tcp</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>unix</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>qemu-vdagent</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>dbus</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </console>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:   </devices>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:   <features>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <gic supported='no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <vmcoreinfo supported='yes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <genid supported='yes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <backingStoreInput supported='yes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <backup supported='yes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <async-teardown supported='yes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <s390-pv supported='no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <ps2 supported='yes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <tdx supported='no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <sev supported='no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <sgx supported='no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <hyperv supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='features'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>relaxed</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>vapic</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>spinlocks</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>vpindex</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>runtime</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>synic</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>stimer</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>reset</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>vendor_id</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>frequencies</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>reenlightenment</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>tlbflush</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>ipi</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>avic</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>emsr_bitmap</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>xmm_input</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <defaults>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <spinlocks>4095</spinlocks>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <stimer_direct>on</stimer_direct>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <tlbflush_direct>off</tlbflush_direct>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <tlbflush_extended>off</tlbflush_extended>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <vendor_id>Linux KVM Hv</vendor_id>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </defaults>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </hyperv>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <launchSecurity supported='no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:   </features>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]: </domainCapabilities>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:43.005 279640 DEBUG nova.virt.libvirt.host [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]: <domainCapabilities>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:   <path>/usr/libexec/qemu-kvm</path>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:   <domain>kvm</domain>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:   <machine>pc-i440fx-rhel7.6.0</machine>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:   <arch>i686</arch>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:   <vcpu max='240'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:   <iothreads supported='yes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:   <os supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <enum name='firmware'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <loader supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='type'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>rom</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>pflash</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='readonly'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>yes</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>no</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='secure'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>no</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </loader>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:   </os>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:   <cpu>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <mode name='host-passthrough' supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='hostPassthroughMigratable'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>on</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>off</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </mode>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <mode name='maximum' supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='maximumMigratable'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>on</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>off</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </mode>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <mode name='host-model' supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model fallback='forbid'>EPYC-Rome</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <vendor>AMD</vendor>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <maxphysaddr mode='passthrough' limit='40'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='x2apic'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='tsc-deadline'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='hypervisor'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='tsc_adjust'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='spec-ctrl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='stibp'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='ssbd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='cmp_legacy'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='overflow-recov'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='succor'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='ibrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='amd-ssbd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='virt-ssbd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='lbrv'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='tsc-scale'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='vmcb-clean'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='pause-filter'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='pfthreshold'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='svme-addr-chk'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='lfence-always-serializing'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='disable' name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </mode>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <mode name='custom' supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Broadwell'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Broadwell-IBRS'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Broadwell-noTSX'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Broadwell-noTSX-IBRS'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Broadwell-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Broadwell-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Broadwell-v3'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Broadwell-v4'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Cascadelake-Server'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Cascadelake-Server-noTSX'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Cascadelake-Server-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Cascadelake-Server-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Cascadelake-Server-v3'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Cascadelake-Server-v4'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Cascadelake-Server-v5'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='ClearwaterForest'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-ne-convert'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni-int16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni-int8'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bhi-ctrl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bhi-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cldemote'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cmpccxadd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ddpd-u'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fbsdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gds-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='intel-psfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ipred-ctrl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='lam'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='mcdt-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdir64b'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdiri'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pbrsb-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='prefetchiti'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='psdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rfds-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rrsba-ctrl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='serialize'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sha512'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sm3'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sm4'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='ClearwaterForest-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-ne-convert'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni-int16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni-int8'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bhi-ctrl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bhi-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cldemote'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cmpccxadd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ddpd-u'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fbsdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gds-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='intel-psfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ipred-ctrl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='lam'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='mcdt-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdir64b'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdiri'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pbrsb-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='prefetchiti'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='psdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rfds-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rrsba-ctrl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='serialize'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sha512'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sm3'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sm4'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Cooperlake'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Cooperlake-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Cooperlake-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Denverton'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='mpx'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Denverton-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='mpx'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Denverton-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Denverton-v3'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Dhyana-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-Genoa'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amd-psfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='auto-ibrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='no-nested-data-bp'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='null-sel-clr-base'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='stibp-always-on'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-Genoa-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amd-psfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='auto-ibrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='no-nested-data-bp'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='null-sel-clr-base'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='stibp-always-on'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-Genoa-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amd-psfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='auto-ibrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fs-gs-base-ns'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='no-nested-data-bp'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='null-sel-clr-base'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='perfmon-v2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='stibp-always-on'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-Milan'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-Milan-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-Milan-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amd-psfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='no-nested-data-bp'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='null-sel-clr-base'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='stibp-always-on'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-Milan-v3'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amd-psfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='no-nested-data-bp'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='null-sel-clr-base'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='stibp-always-on'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-Rome'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-Rome-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-Rome-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-Rome-v3'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-Turin'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amd-psfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='auto-ibrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vp2intersect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fs-gs-base-ns'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibpb-brtype'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdir64b'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdiri'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='no-nested-data-bp'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='null-sel-clr-base'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='perfmon-v2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='prefetchi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sbpb'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='srso-user-kernel-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='stibp-always-on'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-Turin-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amd-psfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='auto-ibrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vp2intersect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fs-gs-base-ns'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibpb-brtype'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdir64b'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdiri'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='no-nested-data-bp'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='null-sel-clr-base'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='perfmon-v2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='prefetchi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sbpb'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='srso-user-kernel-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='stibp-always-on'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-v3'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-v4'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-v5'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='GraniteRapids'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-fp16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-int8'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-tile'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-fp16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fbsdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrc'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fzrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='mcdt-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pbrsb-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='prefetchiti'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='psdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='serialize'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='GraniteRapids-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-fp16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-int8'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-tile'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-fp16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fbsdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrc'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fzrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='mcdt-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pbrsb-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='prefetchiti'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='psdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='serialize'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='GraniteRapids-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-fp16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-int8'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-tile'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx10'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx10-128'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx10-256'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx10-512'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-fp16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cldemote'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fbsdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrc'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fzrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='mcdt-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdir64b'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdiri'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pbrsb-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='prefetchiti'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='psdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='serialize'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='GraniteRapids-v3'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-fp16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-int8'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-tile'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx10'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx10-128'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx10-256'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx10-512'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-fp16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cldemote'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fbsdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrc'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fzrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='mcdt-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdir64b'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdiri'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pbrsb-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='prefetchiti'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='psdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='serialize'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Haswell'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Haswell-IBRS'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Haswell-noTSX'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Haswell-noTSX-IBRS'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Haswell-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Haswell-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Haswell-v3'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Haswell-v4'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Icelake-Server'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Icelake-Server-noTSX'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Icelake-Server-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Icelake-Server-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Icelake-Server-v3'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Icelake-Server-v4'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Icelake-Server-v5'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Icelake-Server-v6'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Icelake-Server-v7'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='IvyBridge'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='IvyBridge-IBRS'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='IvyBridge-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='IvyBridge-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='KnightsMill'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-4fmaps'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-4vnniw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512er'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512pf'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='KnightsMill-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-4fmaps'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-4vnniw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512er'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512pf'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Opteron_G4'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fma4'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xop'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Opteron_G4-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fma4'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xop'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Opteron_G5'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fma4'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='tbm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xop'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Opteron_G5-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fma4'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='tbm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xop'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='SapphireRapids'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-int8'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-tile'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-fp16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrc'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fzrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='serialize'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='SapphireRapids-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-int8'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-tile'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-fp16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrc'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fzrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='serialize'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='SapphireRapids-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-int8'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-tile'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-fp16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fbsdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrc'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fzrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='psdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='serialize'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='SapphireRapids-v3'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-int8'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-tile'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-fp16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cldemote'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fbsdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrc'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fzrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdir64b'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdiri'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='psdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='serialize'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='SapphireRapids-v4'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-int8'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-tile'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-fp16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cldemote'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fbsdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrc'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fzrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdir64b'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdiri'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='psdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='serialize'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='SierraForest'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-ne-convert'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni-int8'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cmpccxadd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fbsdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='mcdt-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pbrsb-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='psdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='serialize'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='SierraForest-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-ne-convert'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni-int8'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cmpccxadd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fbsdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='mcdt-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pbrsb-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='psdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='serialize'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='SierraForest-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-ne-convert'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni-int8'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bhi-ctrl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cldemote'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cmpccxadd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fbsdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gds-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='intel-psfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ipred-ctrl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='lam'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='mcdt-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdir64b'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdiri'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pbrsb-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='psdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rfds-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rrsba-ctrl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='serialize'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='SierraForest-v3'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-ne-convert'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni-int8'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bhi-ctrl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cldemote'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cmpccxadd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fbsdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gds-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='intel-psfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ipred-ctrl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='lam'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='mcdt-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdir64b'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdiri'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pbrsb-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='psdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rfds-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rrsba-ctrl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='serialize'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Skylake-Client'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Skylake-Client-IBRS'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Skylake-Client-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Skylake-Client-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Skylake-Client-v3'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Skylake-Client-v4'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Skylake-Server'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Skylake-Server-IBRS'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Skylake-Server-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Skylake-Server-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Skylake-Server-v3'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Skylake-Server-v4'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Skylake-Server-v5'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Snowridge'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cldemote'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='core-capability'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdir64b'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdiri'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='mpx'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='split-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Snowridge-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cldemote'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='core-capability'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdir64b'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdiri'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='mpx'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='split-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Snowridge-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cldemote'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='core-capability'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdir64b'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdiri'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='split-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Snowridge-v3'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cldemote'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='core-capability'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdir64b'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdiri'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='split-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Snowridge-v4'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cldemote'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdir64b'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdiri'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='athlon'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='3dnow'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='3dnowext'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='athlon-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='3dnow'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='3dnowext'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='core2duo'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='core2duo-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='coreduo'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='coreduo-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='n270'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='n270-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='phenom'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='3dnow'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='3dnowext'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='phenom-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='3dnow'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='3dnowext'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </mode>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:   </cpu>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:   <memoryBacking supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <enum name='sourceType'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <value>file</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <value>anonymous</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <value>memfd</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:   </memoryBacking>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:   <devices>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <disk supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='diskDevice'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>disk</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>cdrom</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>floppy</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>lun</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='bus'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>ide</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>fdc</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>scsi</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>virtio</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>usb</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>sata</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='model'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>virtio</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>virtio-transitional</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>virtio-non-transitional</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </disk>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <graphics supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='type'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>vnc</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>egl-headless</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>dbus</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </graphics>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <video supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='modelType'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>vga</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>cirrus</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>virtio</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>none</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>bochs</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>ramfb</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </video>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <hostdev supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='mode'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>subsystem</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='startupPolicy'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>default</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>mandatory</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>requisite</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>optional</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='subsysType'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>usb</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>pci</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>scsi</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='capsType'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='pciBackend'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </hostdev>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <rng supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='model'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>virtio</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>virtio-transitional</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>virtio-non-transitional</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='backendModel'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>random</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>egd</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>builtin</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </rng>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <filesystem supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='driverType'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>path</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>handle</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>virtiofs</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </filesystem>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <tpm supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='model'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>tpm-tis</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>tpm-crb</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='backendModel'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>emulator</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>external</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='backendVersion'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>2.0</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </tpm>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <redirdev supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='bus'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>usb</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </redirdev>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <channel supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='type'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>pty</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>unix</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </channel>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <crypto supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='model'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='type'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>qemu</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='backendModel'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>builtin</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </crypto>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <interface supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='backendType'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>default</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>passt</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </interface>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <panic supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='model'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>isa</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>hyperv</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </panic>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <console supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='type'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>null</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>vc</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>pty</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>dev</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>file</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>pipe</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>stdio</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>udp</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>tcp</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>unix</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>qemu-vdagent</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>dbus</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </console>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:   </devices>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:   <features>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <gic supported='no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <vmcoreinfo supported='yes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <genid supported='yes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <backingStoreInput supported='yes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <backup supported='yes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <async-teardown supported='yes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <s390-pv supported='no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <ps2 supported='yes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <tdx supported='no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <sev supported='no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <sgx supported='no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <hyperv supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='features'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>relaxed</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>vapic</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>spinlocks</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>vpindex</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>runtime</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>synic</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>stimer</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>reset</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>vendor_id</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>frequencies</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>reenlightenment</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>tlbflush</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>ipi</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>avic</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>emsr_bitmap</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>xmm_input</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <defaults>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <spinlocks>4095</spinlocks>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <stimer_direct>on</stimer_direct>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <tlbflush_direct>off</tlbflush_direct>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <tlbflush_extended>off</tlbflush_extended>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <vendor_id>Linux KVM Hv</vendor_id>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </defaults>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </hyperv>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <launchSecurity supported='no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:   </features>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]: </domainCapabilities>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:43.069 279640 DEBUG nova.virt.libvirt.host [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:43.072 279640 DEBUG nova.virt.libvirt.volume.mount [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:43.076 279640 DEBUG nova.virt.libvirt.host [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]: <domainCapabilities>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:   <path>/usr/libexec/qemu-kvm</path>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:   <domain>kvm</domain>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:   <machine>pc-q35-rhel9.8.0</machine>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:   <arch>x86_64</arch>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:   <vcpu max='1024'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:   <iothreads supported='yes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:   <os supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <enum name='firmware'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <value>efi</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <loader supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='type'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>rom</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>pflash</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='readonly'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>yes</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>no</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='secure'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>yes</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>no</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </loader>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:   </os>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:   <cpu>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <mode name='host-passthrough' supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='hostPassthroughMigratable'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>on</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>off</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </mode>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <mode name='maximum' supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='maximumMigratable'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>on</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>off</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </mode>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <mode name='host-model' supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model fallback='forbid'>EPYC-Rome</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <vendor>AMD</vendor>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <maxphysaddr mode='passthrough' limit='40'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='x2apic'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='tsc-deadline'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='hypervisor'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='tsc_adjust'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='spec-ctrl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='stibp'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='ssbd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='cmp_legacy'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='overflow-recov'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='succor'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='ibrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='amd-ssbd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='virt-ssbd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='lbrv'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='tsc-scale'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='vmcb-clean'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='pause-filter'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='pfthreshold'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='svme-addr-chk'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='lfence-always-serializing'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='disable' name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </mode>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <mode name='custom' supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Broadwell'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Broadwell-IBRS'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Broadwell-noTSX'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Broadwell-noTSX-IBRS'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Broadwell-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Broadwell-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Broadwell-v3'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Broadwell-v4'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Cascadelake-Server'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Cascadelake-Server-noTSX'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Cascadelake-Server-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Cascadelake-Server-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Cascadelake-Server-v3'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Cascadelake-Server-v4'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Cascadelake-Server-v5'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='ClearwaterForest'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-ne-convert'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni-int16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni-int8'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bhi-ctrl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bhi-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cldemote'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cmpccxadd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ddpd-u'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fbsdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gds-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='intel-psfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ipred-ctrl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='lam'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='mcdt-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdir64b'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdiri'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pbrsb-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='prefetchiti'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='psdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rfds-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rrsba-ctrl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='serialize'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sha512'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sm3'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sm4'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='ClearwaterForest-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-ne-convert'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni-int16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni-int8'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bhi-ctrl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bhi-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cldemote'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cmpccxadd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ddpd-u'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fbsdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gds-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='intel-psfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ipred-ctrl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='lam'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='mcdt-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdir64b'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdiri'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pbrsb-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='prefetchiti'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='psdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rfds-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rrsba-ctrl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='serialize'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sha512'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sm3'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sm4'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Cooperlake'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Cooperlake-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Cooperlake-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Denverton'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='mpx'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Denverton-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='mpx'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Denverton-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Denverton-v3'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Dhyana-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-Genoa'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amd-psfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='auto-ibrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='no-nested-data-bp'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='null-sel-clr-base'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='stibp-always-on'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-Genoa-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amd-psfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='auto-ibrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='no-nested-data-bp'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='null-sel-clr-base'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='stibp-always-on'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-Genoa-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amd-psfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='auto-ibrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fs-gs-base-ns'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='no-nested-data-bp'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='null-sel-clr-base'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='perfmon-v2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='stibp-always-on'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-Milan'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-Milan-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-Milan-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amd-psfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='no-nested-data-bp'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='null-sel-clr-base'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='stibp-always-on'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-Milan-v3'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amd-psfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='no-nested-data-bp'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='null-sel-clr-base'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='stibp-always-on'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-Rome'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-Rome-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-Rome-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-Rome-v3'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-Turin'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amd-psfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='auto-ibrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vp2intersect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fs-gs-base-ns'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibpb-brtype'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdir64b'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdiri'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='no-nested-data-bp'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='null-sel-clr-base'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='perfmon-v2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='prefetchi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sbpb'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='srso-user-kernel-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='stibp-always-on'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-Turin-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amd-psfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='auto-ibrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vp2intersect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fs-gs-base-ns'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibpb-brtype'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdir64b'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdiri'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='no-nested-data-bp'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='null-sel-clr-base'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='perfmon-v2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='prefetchi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sbpb'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='srso-user-kernel-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='stibp-always-on'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-v3'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-v4'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-v5'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='GraniteRapids'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-fp16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-int8'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-tile'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-fp16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fbsdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrc'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fzrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='mcdt-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pbrsb-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='prefetchiti'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='psdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='serialize'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='GraniteRapids-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-fp16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-int8'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-tile'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-fp16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fbsdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrc'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fzrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='mcdt-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pbrsb-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='prefetchiti'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='psdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='serialize'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='GraniteRapids-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-fp16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-int8'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-tile'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx10'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx10-128'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx10-256'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx10-512'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-fp16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cldemote'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fbsdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrc'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fzrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='mcdt-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdir64b'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdiri'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pbrsb-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='prefetchiti'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='psdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='serialize'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='GraniteRapids-v3'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-fp16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-int8'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-tile'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx10'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx10-128'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx10-256'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx10-512'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-fp16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cldemote'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fbsdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrc'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fzrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='mcdt-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdir64b'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdiri'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pbrsb-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='prefetchiti'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='psdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='serialize'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Haswell'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Haswell-IBRS'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Haswell-noTSX'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Haswell-noTSX-IBRS'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Haswell-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Haswell-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Haswell-v3'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Haswell-v4'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Icelake-Server'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Icelake-Server-noTSX'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Icelake-Server-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Icelake-Server-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Icelake-Server-v3'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Icelake-Server-v4'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Icelake-Server-v5'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Icelake-Server-v6'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Icelake-Server-v7'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='IvyBridge'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='IvyBridge-IBRS'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='IvyBridge-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='IvyBridge-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='KnightsMill'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-4fmaps'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-4vnniw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512er'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512pf'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='KnightsMill-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-4fmaps'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-4vnniw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512er'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512pf'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Opteron_G4'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fma4'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xop'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Opteron_G4-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fma4'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xop'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Opteron_G5'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fma4'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='tbm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xop'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Opteron_G5-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fma4'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='tbm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xop'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='SapphireRapids'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-int8'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-tile'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-fp16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrc'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fzrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='serialize'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='SapphireRapids-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-int8'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-tile'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-fp16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrc'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fzrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='serialize'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='SapphireRapids-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-int8'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-tile'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-fp16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fbsdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrc'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fzrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='psdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='serialize'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='SapphireRapids-v3'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-int8'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-tile'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-fp16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cldemote'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fbsdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrc'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fzrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdir64b'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdiri'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='psdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='serialize'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='SapphireRapids-v4'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-int8'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-tile'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-fp16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cldemote'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fbsdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrc'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fzrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdir64b'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdiri'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='psdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='serialize'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='SierraForest'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-ne-convert'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni-int8'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cmpccxadd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fbsdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='mcdt-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pbrsb-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='psdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='serialize'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='SierraForest-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-ne-convert'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni-int8'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cmpccxadd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fbsdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='mcdt-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pbrsb-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='psdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='serialize'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='SierraForest-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-ne-convert'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni-int8'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bhi-ctrl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cldemote'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cmpccxadd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fbsdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gds-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='intel-psfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ipred-ctrl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='lam'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='mcdt-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdir64b'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdiri'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pbrsb-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='psdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rfds-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rrsba-ctrl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='serialize'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='SierraForest-v3'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-ne-convert'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni-int8'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bhi-ctrl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cldemote'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cmpccxadd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fbsdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gds-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='intel-psfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ipred-ctrl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='lam'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='mcdt-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdir64b'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdiri'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pbrsb-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='psdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rfds-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rrsba-ctrl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='serialize'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Skylake-Client'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Skylake-Client-IBRS'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Skylake-Client-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Skylake-Client-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Skylake-Client-v3'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Skylake-Client-v4'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Skylake-Server'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Skylake-Server-IBRS'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Skylake-Server-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Skylake-Server-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Skylake-Server-v3'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Skylake-Server-v4'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Skylake-Server-v5'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Snowridge'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cldemote'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='core-capability'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdir64b'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdiri'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='mpx'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='split-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Snowridge-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cldemote'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='core-capability'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdir64b'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdiri'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='mpx'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='split-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Snowridge-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cldemote'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='core-capability'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdir64b'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdiri'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='split-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Snowridge-v3'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cldemote'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='core-capability'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdir64b'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdiri'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='split-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Snowridge-v4'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cldemote'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdir64b'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdiri'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='athlon'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='3dnow'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='3dnowext'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='athlon-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='3dnow'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='3dnowext'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='core2duo'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='core2duo-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='coreduo'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='coreduo-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='n270'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='n270-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='phenom'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='3dnow'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='3dnowext'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='phenom-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='3dnow'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='3dnowext'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </mode>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:   </cpu>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:   <memoryBacking supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <enum name='sourceType'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <value>file</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <value>anonymous</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <value>memfd</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:   </memoryBacking>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:   <devices>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <disk supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='diskDevice'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>disk</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>cdrom</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>floppy</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>lun</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='bus'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>fdc</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>scsi</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>virtio</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>usb</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>sata</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='model'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>virtio</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>virtio-transitional</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>virtio-non-transitional</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </disk>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <graphics supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='type'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>vnc</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>egl-headless</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>dbus</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </graphics>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <video supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='modelType'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>vga</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>cirrus</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>virtio</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>none</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>bochs</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>ramfb</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </video>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <hostdev supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='mode'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>subsystem</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='startupPolicy'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>default</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>mandatory</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>requisite</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>optional</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='subsysType'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>usb</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>pci</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>scsi</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='capsType'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='pciBackend'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </hostdev>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <rng supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='model'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>virtio</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>virtio-transitional</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>virtio-non-transitional</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='backendModel'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>random</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>egd</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>builtin</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </rng>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <filesystem supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='driverType'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>path</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>handle</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>virtiofs</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </filesystem>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <tpm supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='model'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>tpm-tis</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>tpm-crb</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='backendModel'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>emulator</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>external</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='backendVersion'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>2.0</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </tpm>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <redirdev supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='bus'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>usb</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </redirdev>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <channel supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='type'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>pty</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>unix</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </channel>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <crypto supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='model'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='type'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>qemu</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='backendModel'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>builtin</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </crypto>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <interface supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='backendType'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>default</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>passt</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </interface>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <panic supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='model'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>isa</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>hyperv</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </panic>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <console supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='type'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>null</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>vc</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>pty</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>dev</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>file</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>pipe</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>stdio</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>udp</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>tcp</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>unix</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>qemu-vdagent</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>dbus</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </console>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:   </devices>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:   <features>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <gic supported='no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <vmcoreinfo supported='yes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <genid supported='yes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <backingStoreInput supported='yes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <backup supported='yes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <async-teardown supported='yes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <s390-pv supported='no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <ps2 supported='yes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <tdx supported='no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <sev supported='no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <sgx supported='no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <hyperv supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='features'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>relaxed</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>vapic</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>spinlocks</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>vpindex</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>runtime</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>synic</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>stimer</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>reset</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>vendor_id</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>frequencies</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>reenlightenment</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>tlbflush</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>ipi</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>avic</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>emsr_bitmap</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>xmm_input</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <defaults>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <spinlocks>4095</spinlocks>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <stimer_direct>on</stimer_direct>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <tlbflush_direct>off</tlbflush_direct>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <tlbflush_extended>off</tlbflush_extended>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <vendor_id>Linux KVM Hv</vendor_id>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </defaults>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </hyperv>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <launchSecurity supported='no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:   </features>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]: </domainCapabilities>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:43.141 279640 DEBUG nova.virt.libvirt.host [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]: <domainCapabilities>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:   <path>/usr/libexec/qemu-kvm</path>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:   <domain>kvm</domain>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:   <machine>pc-i440fx-rhel7.6.0</machine>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:   <arch>x86_64</arch>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:   <vcpu max='240'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:   <iothreads supported='yes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:   <os supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <enum name='firmware'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <loader supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='type'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>rom</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>pflash</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='readonly'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>yes</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>no</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='secure'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>no</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </loader>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:   </os>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:   <cpu>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <mode name='host-passthrough' supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='hostPassthroughMigratable'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>on</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>off</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </mode>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <mode name='maximum' supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='maximumMigratable'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>on</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>off</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </mode>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <mode name='host-model' supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model fallback='forbid'>EPYC-Rome</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <vendor>AMD</vendor>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <maxphysaddr mode='passthrough' limit='40'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='x2apic'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='tsc-deadline'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='hypervisor'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='tsc_adjust'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='spec-ctrl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='stibp'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='ssbd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='cmp_legacy'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='overflow-recov'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='succor'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='ibrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='amd-ssbd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='virt-ssbd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='lbrv'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='tsc-scale'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='vmcb-clean'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='pause-filter'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='pfthreshold'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='svme-addr-chk'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='require' name='lfence-always-serializing'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <feature policy='disable' name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </mode>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <mode name='custom' supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Broadwell'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Broadwell-IBRS'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Broadwell-noTSX'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Broadwell-noTSX-IBRS'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Broadwell-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Broadwell-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Broadwell-v3'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Broadwell-v4'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Cascadelake-Server'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Cascadelake-Server-noTSX'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Cascadelake-Server-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Cascadelake-Server-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Cascadelake-Server-v3'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Cascadelake-Server-v4'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Cascadelake-Server-v5'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='ClearwaterForest'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-ne-convert'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni-int16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni-int8'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bhi-ctrl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bhi-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cldemote'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cmpccxadd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ddpd-u'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fbsdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gds-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='intel-psfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ipred-ctrl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='lam'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='mcdt-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdir64b'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdiri'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pbrsb-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='prefetchiti'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='psdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rfds-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rrsba-ctrl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='serialize'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sha512'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sm3'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sm4'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='ClearwaterForest-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-ne-convert'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni-int16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni-int8'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bhi-ctrl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bhi-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cldemote'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cmpccxadd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ddpd-u'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fbsdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gds-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='intel-psfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ipred-ctrl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='lam'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='mcdt-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdir64b'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdiri'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pbrsb-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='prefetchiti'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='psdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rfds-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rrsba-ctrl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='serialize'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sha512'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sm3'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sm4'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Cooperlake'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Cooperlake-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Cooperlake-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Denverton'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='mpx'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Denverton-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='mpx'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Denverton-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Denverton-v3'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Dhyana-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-Genoa'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amd-psfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='auto-ibrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='no-nested-data-bp'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='null-sel-clr-base'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='stibp-always-on'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-Genoa-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amd-psfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='auto-ibrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='no-nested-data-bp'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='null-sel-clr-base'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='stibp-always-on'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-Genoa-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amd-psfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='auto-ibrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fs-gs-base-ns'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='no-nested-data-bp'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='null-sel-clr-base'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='perfmon-v2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='stibp-always-on'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-Milan'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-Milan-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-Milan-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amd-psfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='no-nested-data-bp'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='null-sel-clr-base'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='stibp-always-on'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-Milan-v3'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amd-psfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='no-nested-data-bp'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='null-sel-clr-base'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='stibp-always-on'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-Rome'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-Rome-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-Rome-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-Rome-v3'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-Turin'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amd-psfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='auto-ibrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vp2intersect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fs-gs-base-ns'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibpb-brtype'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdir64b'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdiri'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='no-nested-data-bp'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='null-sel-clr-base'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='perfmon-v2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='prefetchi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sbpb'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='srso-user-kernel-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='stibp-always-on'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-Turin-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amd-psfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='auto-ibrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vp2intersect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fs-gs-base-ns'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibpb-brtype'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdir64b'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdiri'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='no-nested-data-bp'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='null-sel-clr-base'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='perfmon-v2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='prefetchi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sbpb'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='srso-user-kernel-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='stibp-always-on'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-v3'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-v4'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='EPYC-v5'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='GraniteRapids'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-fp16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-int8'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-tile'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-fp16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fbsdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrc'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fzrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='mcdt-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pbrsb-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='prefetchiti'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='psdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='serialize'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='GraniteRapids-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-fp16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-int8'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-tile'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-fp16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fbsdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrc'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fzrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='mcdt-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pbrsb-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='prefetchiti'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='psdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='serialize'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='GraniteRapids-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-fp16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-int8'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-tile'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx10'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx10-128'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx10-256'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx10-512'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-fp16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cldemote'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fbsdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrc'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fzrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='mcdt-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdir64b'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdiri'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pbrsb-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='prefetchiti'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='psdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='serialize'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='GraniteRapids-v3'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-fp16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-int8'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-tile'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx10'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx10-128'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx10-256'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx10-512'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-fp16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cldemote'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fbsdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrc'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fzrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='mcdt-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdir64b'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdiri'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pbrsb-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='prefetchiti'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='psdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='serialize'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Haswell'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Haswell-IBRS'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Haswell-noTSX'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Haswell-noTSX-IBRS'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Haswell-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Haswell-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Haswell-v3'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Haswell-v4'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Icelake-Server'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Icelake-Server-noTSX'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Icelake-Server-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Icelake-Server-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Icelake-Server-v3'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Icelake-Server-v4'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Icelake-Server-v5'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Icelake-Server-v6'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Icelake-Server-v7'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='IvyBridge'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='IvyBridge-IBRS'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='IvyBridge-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='IvyBridge-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='KnightsMill'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-4fmaps'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-4vnniw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512er'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512pf'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='KnightsMill-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-4fmaps'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-4vnniw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512er'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512pf'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Opteron_G4'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fma4'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xop'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Opteron_G4-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fma4'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xop'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Opteron_G5'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fma4'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='tbm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xop'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Opteron_G5-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fma4'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='tbm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xop'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='SapphireRapids'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-int8'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-tile'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-fp16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrc'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fzrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='serialize'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='SapphireRapids-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-int8'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-tile'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-fp16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrc'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fzrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='serialize'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='SapphireRapids-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-int8'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-tile'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-fp16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fbsdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrc'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fzrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='psdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='serialize'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='SapphireRapids-v3'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-int8'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-tile'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-fp16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cldemote'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fbsdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrc'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fzrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdir64b'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdiri'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='psdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='serialize'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='SapphireRapids-v4'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-int8'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='amx-tile'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-bf16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-fp16'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512-vpopcntdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bitalg'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vbmi2'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cldemote'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fbsdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrc'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fzrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='la57'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdir64b'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdiri'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='psdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='serialize'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='taa-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='tsx-ldtrk'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='SierraForest'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-ne-convert'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni-int8'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cmpccxadd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fbsdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='mcdt-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pbrsb-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='psdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='serialize'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='SierraForest-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-ne-convert'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni-int8'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cmpccxadd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fbsdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='mcdt-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pbrsb-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='psdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='serialize'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='SierraForest-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-ne-convert'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni-int8'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bhi-ctrl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cldemote'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cmpccxadd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fbsdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gds-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='intel-psfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ipred-ctrl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='lam'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='mcdt-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdir64b'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdiri'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pbrsb-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='psdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rfds-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rrsba-ctrl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='serialize'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='SierraForest-v3'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-ifma'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-ne-convert'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx-vnni-int8'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bhi-ctrl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='bus-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cldemote'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cmpccxadd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fbsdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='fsrs'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gds-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ibrs-all'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='intel-psfd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ipred-ctrl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='lam'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='mcdt-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdir64b'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdiri'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pbrsb-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='psdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rfds-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rrsba-ctrl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='sbdr-ssdp-no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='serialize'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vaes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='vpclmulqdq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Skylake-Client'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Skylake-Client-IBRS'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Skylake-Client-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Skylake-Client-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Skylake-Client-v3'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Skylake-Client-v4'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Skylake-Server'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Skylake-Server-IBRS'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Skylake-Server-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Skylake-Server-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='hle'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='rtm'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Skylake-Server-v3'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Skylake-Server-v4'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Skylake-Server-v5'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512bw'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512cd'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512dq'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512f'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='avx512vl'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='invpcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pcid'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='pku'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Snowridge'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cldemote'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='core-capability'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdir64b'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdiri'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='mpx'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='split-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Snowridge-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cldemote'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='core-capability'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdir64b'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdiri'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='mpx'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='split-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Snowridge-v2'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cldemote'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='core-capability'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdir64b'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdiri'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='split-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Snowridge-v3'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cldemote'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='core-capability'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdir64b'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdiri'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='split-lock-detect'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='Snowridge-v4'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='cldemote'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='erms'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='gfni'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdir64b'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='movdiri'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='xsaves'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='athlon'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='3dnow'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='3dnowext'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='athlon-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='3dnow'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='3dnowext'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='core2duo'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='core2duo-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='coreduo'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='coreduo-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='n270'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='n270-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='ss'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='phenom'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='3dnow'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='3dnowext'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <blockers model='phenom-v1'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='3dnow'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <feature name='3dnowext'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </blockers>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </mode>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:   </cpu>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:   <memoryBacking supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <enum name='sourceType'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <value>file</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <value>anonymous</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <value>memfd</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:   </memoryBacking>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:   <devices>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <disk supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='diskDevice'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>disk</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>cdrom</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>floppy</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>lun</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='bus'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>ide</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>fdc</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>scsi</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>virtio</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>usb</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>sata</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='model'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>virtio</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>virtio-transitional</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>virtio-non-transitional</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </disk>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <graphics supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='type'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>vnc</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>egl-headless</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>dbus</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </graphics>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <video supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='modelType'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>vga</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>cirrus</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>virtio</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>none</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>bochs</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>ramfb</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </video>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <hostdev supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='mode'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>subsystem</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='startupPolicy'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>default</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>mandatory</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>requisite</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>optional</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='subsysType'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>usb</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>pci</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>scsi</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='capsType'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='pciBackend'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </hostdev>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <rng supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='model'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>virtio</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>virtio-transitional</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>virtio-non-transitional</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='backendModel'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>random</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>egd</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>builtin</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </rng>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <filesystem supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='driverType'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>path</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>handle</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>virtiofs</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </filesystem>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <tpm supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='model'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>tpm-tis</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>tpm-crb</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='backendModel'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>emulator</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>external</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='backendVersion'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>2.0</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </tpm>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <redirdev supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='bus'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>usb</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </redirdev>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <channel supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='type'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>pty</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>unix</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </channel>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <crypto supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='model'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='type'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>qemu</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='backendModel'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>builtin</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </crypto>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <interface supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='backendType'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>default</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>passt</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </interface>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <panic supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='model'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>isa</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>hyperv</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </panic>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <console supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='type'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>null</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>vc</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>pty</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>dev</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>file</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>pipe</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>stdio</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>udp</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>tcp</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>unix</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>qemu-vdagent</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>dbus</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </console>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:   </devices>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:   <features>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <gic supported='no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <vmcoreinfo supported='yes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <genid supported='yes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <backingStoreInput supported='yes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <backup supported='yes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <async-teardown supported='yes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <s390-pv supported='no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <ps2 supported='yes'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <tdx supported='no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <sev supported='no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <sgx supported='no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <hyperv supported='yes'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <enum name='features'>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>relaxed</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>vapic</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>spinlocks</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>vpindex</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>runtime</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>synic</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>stimer</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>reset</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>vendor_id</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>frequencies</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>reenlightenment</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>tlbflush</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>ipi</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>avic</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>emsr_bitmap</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <value>xmm_input</value>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </enum>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       <defaults>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <spinlocks>4095</spinlocks>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <stimer_direct>on</stimer_direct>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <tlbflush_direct>off</tlbflush_direct>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <tlbflush_extended>off</tlbflush_extended>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:         <vendor_id>Linux KVM Hv</vendor_id>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:       </defaults>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     </hyperv>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:     <launchSecurity supported='no'/>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:   </features>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]: </domainCapabilities>
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
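The domainCapabilities XML dumped above marks several CPU models `usable='no'` and lists the blocking features in a sibling `<blockers>` element. A minimal standalone sketch (not nova code; element and attribute names taken from the libvirt output above) of extracting that mapping with Python's `xml.etree`:

```python
import xml.etree.ElementTree as ET

# Small excerpt in the same shape as the libvirt domainCapabilities dump above.
SNIPPET = """
<mode name='custom' supported='yes'>
  <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
  <blockers model='phenom'>
    <feature name='3dnow'/>
    <feature name='3dnowext'/>
  </blockers>
  <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
</mode>
"""

def unusable_models(xml_text):
    """Map each usable='no' CPU model to the feature names blocking it."""
    root = ET.fromstring(xml_text)
    # <blockers model='X'> holds one <feature name='...'/> per missing feature.
    blockers = {
        b.get('model'): [f.get('name') for f in b.findall('feature')]
        for b in root.iter('blockers')
    }
    return {
        m.text: blockers.get(m.text, [])
        for m in root.iter('model')
        if m.get('usable') == 'no'
    }

print(unusable_models(SNIPPET))
```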
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:43.200 279640 DEBUG nova.virt.libvirt.host [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:43.201 279640 INFO nova.virt.libvirt.host [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] Secure Boot support detected
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:43.203 279640 INFO nova.virt.libvirt.driver [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:43.214 279640 DEBUG nova.virt.libvirt.driver [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:43.247 279640 INFO nova.virt.node [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] Determined node identity e5d5157a-2df2-4f51-b5fb-cd2da3a8584e from /var/lib/nova/compute_id
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:43.272 279640 DEBUG nova.compute.manager [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] Verified node e5d5157a-2df2-4f51-b5fb-cd2da3a8584e matches my host np0005625203.localdomain _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:43.328 279640 INFO nova.compute.manager [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:43.401 279640 DEBUG oslo_concurrency.lockutils [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:43.402 279640 DEBUG oslo_concurrency.lockutils [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:43.402 279640 DEBUG oslo_concurrency.lockutils [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:43.402 279640 DEBUG nova.compute.resource_tracker [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] Auditing locally available compute resources for np0005625203.localdomain (node: np0005625203.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:43.403 279640 DEBUG oslo_concurrency.processutils [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:35:43 np0005625203.localdomain rsyslogd[758]: imjournal from <localhost:nova_compute>: begin to drop messages due to rate-limiting
Feb 20 09:35:43 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:35:43 np0005625203.localdomain podman[279920]: 2026-02-20 09:35:43.779601653 +0000 UTC m=+0.091254688 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute)
Feb 20 09:35:43 np0005625203.localdomain podman[279920]: 2026-02-20 09:35:43.797203181 +0000 UTC m=+0.108856226 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true)
Feb 20 09:35:43 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:35:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:43.865 279640 DEBUG oslo_concurrency.processutils [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
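The `ceph df --format=json` invocation logged above is how nova's RBD image backend measures storage capacity. A rough standalone sketch of deriving free space from that JSON (the `stats.total_avail_bytes` field is assumed from the ceph df JSON format; this is illustrative, not nova's actual parser, and real deployments typically size against a specific pool rather than the cluster total):

```python
import json

def free_gib_from_ceph_df(ceph_df_json):
    """Return cluster-wide available space in GiB from `ceph df --format=json`
    output, reading the top-level stats.total_avail_bytes field (assumed)."""
    stats = json.loads(ceph_df_json)["stats"]
    return stats["total_avail_bytes"] / (1 << 30)

# Example payload in the same shape; the byte counts are invented for illustration.
sample = json.dumps({"stats": {"total_bytes": 64 << 30,
                               "total_used_bytes": 19 << 30,
                               "total_avail_bytes": 45 << 30}})
print(free_gib_from_ceph_df(sample))
```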
Feb 20 09:35:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:44.072 279640 WARNING nova.virt.libvirt.driver [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 20 09:35:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:44.074 279640 DEBUG nova.compute.resource_tracker [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] Hypervisor/Node resource view: name=np0005625203.localdomain free_ram=12934MB free_disk=41.83720779418945GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": 
null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 20 09:35:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:44.074 279640 DEBUG oslo_concurrency.lockutils [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:35:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:44.075 279640 DEBUG oslo_concurrency.lockutils [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:35:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:44.187 279640 DEBUG nova.compute.resource_tracker [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 20 09:35:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:44.188 279640 DEBUG nova.compute.resource_tracker [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] Final resource view: name=np0005625203.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 20 09:35:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:44.241 279640 DEBUG nova.scheduler.client.report [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] Refreshing inventories for resource provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 20 09:35:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:44.266 279640 DEBUG nova.scheduler.client.report [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] Updating ProviderTree inventory for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 20 09:35:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:44.267 279640 DEBUG nova.compute.provider_tree [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] Updating inventory in ProviderTree for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 20 09:35:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:44.284 279640 DEBUG nova.scheduler.client.report [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] Refreshing aggregate associations for resource provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 20 09:35:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:44.311 279640 DEBUG nova.scheduler.client.report [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] Refreshing trait associations for resource provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e, traits: COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_ABM,HW_CPU_X86_SSE42,HW_CPU_X86_MMX,HW_CPU_X86_AMD_SVM,HW_CPU_X86_BMI,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VOLUME_EXTEND,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSSE3,HW_CPU_X86_CLMUL,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE4A,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_F16C,HW_CPU_X86_SSE2,COMPUTE_NODE,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 20 09:35:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:44.335 279640 DEBUG oslo_concurrency.processutils [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:35:44 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52194 DF PROTO=TCP SPT=58272 DPT=9102 SEQ=1194113709 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0AB6800000000001030307) 
Feb 20 09:35:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:44.806 279640 DEBUG oslo_concurrency.processutils [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:35:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:44.815 279640 DEBUG nova.virt.libvirt.host [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Feb 20 09:35:44 np0005625203.localdomain nova_compute[279636]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Feb 20 09:35:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:44.816 279640 INFO nova.virt.libvirt.host [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] kernel doesn't support AMD SEV
Feb 20 09:35:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:44.817 279640 DEBUG nova.compute.provider_tree [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] Inventory has not changed in ProviderTree for provider: e5d5157a-2df2-4f51-b5fb-cd2da3a8584e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 20 09:35:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:44.818 279640 DEBUG nova.virt.libvirt.driver [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 20 09:35:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:44.842 279640 DEBUG nova.scheduler.client.report [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] Inventory has not changed for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 20 09:35:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:44.872 279640 DEBUG nova.compute.resource_tracker [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] Compute_service record updated for np0005625203.localdomain:np0005625203.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 20 09:35:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:44.872 279640 DEBUG oslo_concurrency.lockutils [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.798s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:35:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:44.873 279640 DEBUG nova.service [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Feb 20 09:35:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:44.900 279640 DEBUG nova.service [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Feb 20 09:35:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:35:44.901 279640 DEBUG nova.servicegroup.drivers.db [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] DB_Driver: join new ServiceGroup member np0005625203.localdomain to the compute group, service = <Service: host=np0005625203.localdomain, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Feb 20 09:35:50 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:35:50 np0005625203.localdomain podman[279963]: 2026-02-20 09:35:50.768194283 +0000 UTC m=+0.083369058 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 20 09:35:50 np0005625203.localdomain podman[279963]: 2026-02-20 09:35:50.783309096 +0000 UTC m=+0.098483861 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 09:35:50 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 09:35:56 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:35:56 np0005625203.localdomain systemd[1]: tmp-crun.y1zg44.mount: Deactivated successfully.
Feb 20 09:35:56 np0005625203.localdomain podman[279986]: 2026-02-20 09:35:56.781518597 +0000 UTC m=+0.090923800 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 20 09:35:56 np0005625203.localdomain podman[279986]: 2026-02-20 09:35:56.78852294 +0000 UTC m=+0.097928183 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 20 09:35:56 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:35:58 np0005625203.localdomain podman[240359]: time="2026-02-20T09:35:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:35:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:35:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 147345 "" "Go-http-client/1.1"
Feb 20 09:35:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:35:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16318 "" "Go-http-client/1.1"
Feb 20 09:35:59 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=83 DF PROTO=TCP SPT=52806 DPT=9102 SEQ=1420063932 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0AF0650000000001030307) 
Feb 20 09:36:00 np0005625203.localdomain sshd[280009]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:36:00 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=84 DF PROTO=TCP SPT=52806 DPT=9102 SEQ=1420063932 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0AF4810000000001030307) 
Feb 20 09:36:00 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52195 DF PROTO=TCP SPT=58272 DPT=9102 SEQ=1194113709 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0AF6810000000001030307) 
Feb 20 09:36:01 np0005625203.localdomain sshd[280009]: Invalid user httpd from 152.32.129.236 port 54074
Feb 20 09:36:01 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:36:01 np0005625203.localdomain podman[280011]: 2026-02-20 09:36:01.629087713 +0000 UTC m=+0.087934568 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Feb 20 09:36:01 np0005625203.localdomain podman[280011]: 2026-02-20 09:36:01.665295339 +0000 UTC m=+0.124142214 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true)
Feb 20 09:36:01 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:36:01 np0005625203.localdomain sshd[280009]: Received disconnect from 152.32.129.236 port 54074:11: Bye Bye [preauth]
Feb 20 09:36:01 np0005625203.localdomain sshd[280009]: Disconnected from invalid user httpd 152.32.129.236 port 54074 [preauth]
Feb 20 09:36:02 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=85 DF PROTO=TCP SPT=52806 DPT=9102 SEQ=1420063932 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0AFC800000000001030307) 
Feb 20 09:36:02 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:36:02.576 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'c2:c2:14', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '46:d0:69:88:97:47'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:36:02 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:36:02.577 161112 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 20 09:36:03 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=15843 DF PROTO=TCP SPT=50034 DPT=9102 SEQ=3765063134 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0B00810000000001030307) 
Feb 20 09:36:04 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:36:04.579 161112 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1e4d60e6-0be0-4143-b488-1b391fbc71ef, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 20 09:36:04 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:36:04.903 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:36:04 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:36:04.921 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:36:06 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=86 DF PROTO=TCP SPT=52806 DPT=9102 SEQ=1420063932 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0B0C400000000001030307) 
Feb 20 09:36:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:36:07 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 09:36:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:36:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:36:07 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 09:36:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:36:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:36:07.648 161112 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:36:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:36:07.648 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:36:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:36:07.648 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:36:08 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:36:08 np0005625203.localdomain podman[280029]: 2026-02-20 09:36:08.753343638 +0000 UTC m=+0.071877427 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3)
Feb 20 09:36:08 np0005625203.localdomain podman[280029]: 2026-02-20 09:36:08.79827214 +0000 UTC m=+0.116805949 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260127, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 20 09:36:08 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:36:11 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:36:11 np0005625203.localdomain podman[280056]: 2026-02-20 09:36:11.751999124 +0000 UTC m=+0.073041322 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, name=ubi9/ubi-minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_id=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=9.7, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, org.opencontainers.image.created=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, release=1770267347, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64)
Feb 20 09:36:11 np0005625203.localdomain podman[280056]: 2026-02-20 09:36:11.76725627 +0000 UTC m=+0.088298448 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.7, org.opencontainers.image.created=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_id=openstack_network_exporter, managed_by=edpm_ansible, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, container_name=openstack_network_exporter, name=ubi9/ubi-minimal, build-date=2026-02-05T04:57:10Z, distribution-scope=public, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64)
Feb 20 09:36:11 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:36:14 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:36:14 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=87 DF PROTO=TCP SPT=52806 DPT=9102 SEQ=1420063932 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0B2C800000000001030307) 
Feb 20 09:36:14 np0005625203.localdomain podman[280076]: 2026-02-20 09:36:14.758359258 +0000 UTC m=+0.079963715 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
container_name=ceilometer_agent_compute)
Feb 20 09:36:14 np0005625203.localdomain podman[280076]: 2026-02-20 09:36:14.774292035 +0000 UTC m=+0.095896502 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Feb 20 09:36:14 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:36:21 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:36:21 np0005625203.localdomain podman[280094]: 2026-02-20 09:36:21.765236225 +0000 UTC m=+0.083645276 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 09:36:21 np0005625203.localdomain podman[280094]: 2026-02-20 09:36:21.800245205 +0000 UTC m=+0.118654246 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 20 09:36:21 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 09:36:22 np0005625203.localdomain sudo[280117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:36:22 np0005625203.localdomain sudo[280117]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:36:22 np0005625203.localdomain sudo[280117]: pam_unix(sudo:session): session closed for user root
Feb 20 09:36:22 np0005625203.localdomain sudo[280135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 09:36:22 np0005625203.localdomain sudo[280135]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:36:22 np0005625203.localdomain sshd[280153]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:36:22 np0005625203.localdomain sshd[280169]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:36:22 np0005625203.localdomain sudo[280135]: pam_unix(sudo:session): session closed for user root
Feb 20 09:36:22 np0005625203.localdomain sshd[280153]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:36:23 np0005625203.localdomain sudo[280188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:36:23 np0005625203.localdomain sudo[280188]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:36:23 np0005625203.localdomain sudo[280188]: pam_unix(sudo:session): session closed for user root
Feb 20 09:36:24 np0005625203.localdomain sshd[280169]: Invalid user sshuser from 118.99.80.29 port 27953
Feb 20 09:36:24 np0005625203.localdomain sshd[280169]: Received disconnect from 118.99.80.29 port 27953:11: Bye Bye [preauth]
Feb 20 09:36:24 np0005625203.localdomain sshd[280169]: Disconnected from invalid user sshuser 118.99.80.29 port 27953 [preauth]
Feb 20 09:36:25 np0005625203.localdomain sshd[280206]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:36:26 np0005625203.localdomain sshd[280206]: Invalid user admin from 194.107.115.2 port 13178
Feb 20 09:36:26 np0005625203.localdomain sshd[280206]: Received disconnect from 194.107.115.2 port 13178:11: Bye Bye [preauth]
Feb 20 09:36:26 np0005625203.localdomain sshd[280206]: Disconnected from invalid user admin 194.107.115.2 port 13178 [preauth]
Feb 20 09:36:27 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:36:27 np0005625203.localdomain systemd[1]: tmp-crun.oBzweq.mount: Deactivated successfully.
Feb 20 09:36:27 np0005625203.localdomain podman[280208]: 2026-02-20 09:36:27.766056816 +0000 UTC m=+0.087214225 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 20 09:36:27 np0005625203.localdomain podman[280208]: 2026-02-20 09:36:27.801218841 +0000 UTC m=+0.122376250 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 20 09:36:27 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:36:28 np0005625203.localdomain podman[240359]: time="2026-02-20T09:36:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:36:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:36:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 147345 "" "Go-http-client/1.1"
Feb 20 09:36:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:36:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16315 "" "Go-http-client/1.1"
Feb 20 09:36:29 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7003 DF PROTO=TCP SPT=49780 DPT=9102 SEQ=1852943474 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0B65950000000001030307) 
Feb 20 09:36:30 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7004 DF PROTO=TCP SPT=49780 DPT=9102 SEQ=1852943474 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0B69800000000001030307) 
Feb 20 09:36:31 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=88 DF PROTO=TCP SPT=52806 DPT=9102 SEQ=1420063932 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0B6C800000000001030307) 
Feb 20 09:36:32 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7005 DF PROTO=TCP SPT=49780 DPT=9102 SEQ=1852943474 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0B71800000000001030307) 
Feb 20 09:36:32 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:36:32 np0005625203.localdomain podman[280232]: 2026-02-20 09:36:32.765905855 +0000 UTC m=+0.083722759 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:36:32 np0005625203.localdomain podman[280232]: 2026-02-20 09:36:32.798436478 +0000 UTC m=+0.116253382 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Feb 20 09:36:32 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:36:33 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=52196 DF PROTO=TCP SPT=58272 DPT=9102 SEQ=1194113709 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0B74810000000001030307) 
Feb 20 09:36:36 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7006 DF PROTO=TCP SPT=49780 DPT=9102 SEQ=1852943474 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0B81400000000001030307) 
Feb 20 09:36:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:36:37 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 09:36:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:36:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:36:37 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 09:36:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:36:39 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:36:39 np0005625203.localdomain podman[280251]: 2026-02-20 09:36:39.765304255 +0000 UTC m=+0.083645546 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller)
Feb 20 09:36:39 np0005625203.localdomain podman[280251]: 2026-02-20 09:36:39.828247489 +0000 UTC m=+0.146588750 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:36:39 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:36:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:36:42.344 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:36:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:36:42.346 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:36:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:36:42.347 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 20 09:36:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:36:42.347 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 20 09:36:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:36:42.359 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 20 09:36:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:36:42.359 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:36:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:36:42.361 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:36:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:36:42.361 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:36:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:36:42.362 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:36:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:36:42.362 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:36:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:36:42.363 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:36:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:36:42.363 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 20 09:36:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:36:42.363 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:36:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:36:42.380 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:36:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:36:42.381 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:36:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:36:42.381 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:36:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:36:42.381 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Auditing locally available compute resources for np0005625203.localdomain (node: np0005625203.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 20 09:36:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:36:42.382 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:36:42 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:36:42 np0005625203.localdomain systemd[1]: tmp-crun.MW8urs.mount: Deactivated successfully.
Feb 20 09:36:42 np0005625203.localdomain podman[280296]: 2026-02-20 09:36:42.773570245 +0000 UTC m=+0.088976238 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.7, name=ubi9/ubi-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, org.opencontainers.image.created=2026-02-05T04:57:10Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, config_id=openstack_network_exporter, distribution-scope=public, build-date=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1770267347, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., architecture=x86_64)
Feb 20 09:36:42 np0005625203.localdomain podman[280296]: 2026-02-20 09:36:42.788089789 +0000 UTC m=+0.103495792 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, config_id=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, io.buildah.version=1.33.7, release=1770267347, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., managed_by=edpm_ansible, build-date=2026-02-05T04:57:10Z, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, version=9.7)
Feb 20 09:36:42 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:36:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:36:42.896 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:36:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:36:43.098 279640 WARNING nova.virt.libvirt.driver [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 20 09:36:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:36:43.100 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Hypervisor/Node resource view: name=np0005625203.localdomain free_ram=12932MB free_disk=41.8370475769043GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 20 09:36:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:36:43.101 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:36:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:36:43.101 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:36:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:36:43.161 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 20 09:36:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:36:43.161 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Final resource view: name=np0005625203.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 20 09:36:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:36:43.177 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:36:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:36:43.642 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:36:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:36:43.650 279640 DEBUG nova.compute.provider_tree [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Inventory has not changed in ProviderTree for provider: e5d5157a-2df2-4f51-b5fb-cd2da3a8584e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 20 09:36:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:36:43.665 279640 DEBUG nova.scheduler.client.report [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Inventory has not changed for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 20 09:36:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:36:43.668 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Compute_service record updated for np0005625203.localdomain:np0005625203.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 20 09:36:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:36:43.668 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.567s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:36:44 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7007 DF PROTO=TCP SPT=49780 DPT=9102 SEQ=1852943474 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0BA2810000000001030307) 
Feb 20 09:36:45 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:36:45 np0005625203.localdomain systemd[1]: tmp-crun.suApeA.mount: Deactivated successfully.
Feb 20 09:36:45 np0005625203.localdomain podman[280340]: 2026-02-20 09:36:45.779445923 +0000 UTC m=+0.092211178 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute)
Feb 20 09:36:45 np0005625203.localdomain podman[280340]: 2026-02-20 09:36:45.815735092 +0000 UTC m=+0.128500307 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, 
tcib_managed=true)
Feb 20 09:36:45 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:36:45 np0005625203.localdomain sshd[280360]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:36:46 np0005625203.localdomain sshd[280360]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:36:48 np0005625203.localdomain sshd[280362]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:36:49 np0005625203.localdomain sshd[280362]: Invalid user n8n from 5.253.59.68 port 55742
Feb 20 09:36:49 np0005625203.localdomain sshd[280362]: Received disconnect from 5.253.59.68 port 55742:11: Bye Bye [preauth]
Feb 20 09:36:49 np0005625203.localdomain sshd[280362]: Disconnected from invalid user n8n 5.253.59.68 port 55742 [preauth]
Feb 20 09:36:52 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:36:52 np0005625203.localdomain systemd[1]: tmp-crun.MPb1gZ.mount: Deactivated successfully.
Feb 20 09:36:52 np0005625203.localdomain podman[280364]: 2026-02-20 09:36:52.770261011 +0000 UTC m=+0.087518366 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 09:36:52 np0005625203.localdomain podman[280364]: 2026-02-20 09:36:52.779438681 +0000 UTC m=+0.096696016 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 20 09:36:52 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 09:36:58 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:36:58 np0005625203.localdomain podman[280386]: 2026-02-20 09:36:58.773043672 +0000 UTC m=+0.085590037 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 20 09:36:58 np0005625203.localdomain podman[280386]: 2026-02-20 09:36:58.783172571 +0000 UTC m=+0.095718936 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 20 09:36:58 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:36:58 np0005625203.localdomain podman[240359]: time="2026-02-20T09:36:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:36:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:36:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 147345 "" "Go-http-client/1.1"
Feb 20 09:36:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:36:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16317 "" "Go-http-client/1.1"
Feb 20 09:36:59 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57633 DF PROTO=TCP SPT=42548 DPT=9102 SEQ=930238988 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0BDAC50000000001030307) 
Feb 20 09:37:00 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57634 DF PROTO=TCP SPT=42548 DPT=9102 SEQ=930238988 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0BDEC00000000001030307) 
Feb 20 09:37:01 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7008 DF PROTO=TCP SPT=49780 DPT=9102 SEQ=1852943474 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0BE2800000000001030307) 
Feb 20 09:37:02 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57635 DF PROTO=TCP SPT=42548 DPT=9102 SEQ=930238988 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0BE6C00000000001030307) 
Feb 20 09:37:02 np0005625203.localdomain sshd[280409]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:37:03 np0005625203.localdomain sshd[280409]: Invalid user titu from 185.196.11.208 port 34140
Feb 20 09:37:03 np0005625203.localdomain sshd[280409]: Received disconnect from 185.196.11.208 port 34140:11: Bye Bye [preauth]
Feb 20 09:37:03 np0005625203.localdomain sshd[280409]: Disconnected from invalid user titu 185.196.11.208 port 34140 [preauth]
Feb 20 09:37:03 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=89 DF PROTO=TCP SPT=52806 DPT=9102 SEQ=1420063932 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0BEA800000000001030307) 
Feb 20 09:37:03 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:37:03 np0005625203.localdomain podman[280411]: 2026-02-20 09:37:03.508001867 +0000 UTC m=+0.081025096 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:37:03 np0005625203.localdomain podman[280411]: 2026-02-20 09:37:03.543356667 +0000 UTC m=+0.116379956 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Feb 20 09:37:03 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:37:06 np0005625203.localdomain sshd[280429]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:37:06 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57636 DF PROTO=TCP SPT=42548 DPT=9102 SEQ=930238988 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0BF6800000000001030307) 
Feb 20 09:37:06 np0005625203.localdomain sshd[280429]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:37:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:37:07 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 09:37:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:37:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:37:07 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 09:37:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:37:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:37:07.650 161112 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:37:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:37:07.650 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:37:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:37:07.651 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:37:10 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:37:10 np0005625203.localdomain podman[280431]: 2026-02-20 09:37:10.743088747 +0000 UTC m=+0.063967954 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:37:10 np0005625203.localdomain podman[280431]: 2026-02-20 09:37:10.83743047 +0000 UTC m=+0.158309697 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:37:10 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:37:13 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:37:13 np0005625203.localdomain podman[280455]: 2026-02-20 09:37:13.764049206 +0000 UTC m=+0.081012256 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, distribution-scope=public, config_id=openstack_network_exporter, version=9.7, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., vcs-type=git, managed_by=edpm_ansible, name=ubi9/ubi-minimal, architecture=x86_64, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2026-02-05T04:57:10Z, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1770267347, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, org.opencontainers.image.created=2026-02-05T04:57:10Z)
Feb 20 09:37:13 np0005625203.localdomain podman[280455]: 2026-02-20 09:37:13.780456327 +0000 UTC m=+0.097419437 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, io.openshift.expose-services=, version=9.7, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1770267347, org.opencontainers.image.created=2026-02-05T04:57:10Z, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, config_id=openstack_network_exporter, name=ubi9/ubi-minimal, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, build-date=2026-02-05T04:57:10Z, architecture=x86_64, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, maintainer=Red Hat, Inc., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c)
Feb 20 09:37:13 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:37:14 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57637 DF PROTO=TCP SPT=42548 DPT=9102 SEQ=930238988 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0C16800000000001030307) 
Feb 20 09:37:16 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:37:16 np0005625203.localdomain podman[280474]: 2026-02-20 09:37:16.738322379 +0000 UTC m=+0.061561852 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3)
Feb 20 09:37:16 np0005625203.localdomain podman[280474]: 2026-02-20 09:37:16.774086581 +0000 UTC m=+0.097326074 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:37:16 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:37:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:37:17.181 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:37:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:37:17.182 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:37:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:37:17.182 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:37:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:37:17.182 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:37:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:37:17.182 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:37:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:37:17.183 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:37:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:37:17.183 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:37:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:37:17.183 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:37:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:37:17.183 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:37:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:37:17.183 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:37:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:37:17.184 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:37:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:37:17.184 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:37:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:37:17.184 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:37:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:37:17.184 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:37:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:37:17.184 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:37:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:37:17.185 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:37:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:37:17.185 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:37:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:37:17.185 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:37:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:37:17.185 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:37:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:37:17.185 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:37:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:37:17.186 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:37:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:37:17.186 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:37:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:37:17.186 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:37:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:37:17.186 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:37:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:37:17.186 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:37:23 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:37:23 np0005625203.localdomain podman[280493]: 2026-02-20 09:37:23.773392628 +0000 UTC m=+0.085693350 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 09:37:23 np0005625203.localdomain podman[280493]: 2026-02-20 09:37:23.785357173 +0000 UTC m=+0.097657895 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 20 09:37:23 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 09:37:24 np0005625203.localdomain sudo[280515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:37:24 np0005625203.localdomain sudo[280515]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:37:24 np0005625203.localdomain sudo[280515]: pam_unix(sudo:session): session closed for user root
Feb 20 09:37:24 np0005625203.localdomain sudo[280533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 09:37:24 np0005625203.localdomain sudo[280533]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:37:24 np0005625203.localdomain sudo[280533]: pam_unix(sudo:session): session closed for user root
Feb 20 09:37:27 np0005625203.localdomain sudo[280583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:37:27 np0005625203.localdomain sudo[280583]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:37:27 np0005625203.localdomain sudo[280583]: pam_unix(sudo:session): session closed for user root
Feb 20 09:37:28 np0005625203.localdomain podman[240359]: time="2026-02-20T09:37:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:37:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:37:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 147345 "" "Go-http-client/1.1"
Feb 20 09:37:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:37:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16315 "" "Go-http-client/1.1"
Feb 20 09:37:29 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49215 DF PROTO=TCP SPT=60342 DPT=9102 SEQ=1882577805 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0C4FF40000000001030307) 
Feb 20 09:37:29 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:37:29 np0005625203.localdomain systemd[1]: tmp-crun.q1hkZP.mount: Deactivated successfully.
Feb 20 09:37:29 np0005625203.localdomain podman[280601]: 2026-02-20 09:37:29.769114552 +0000 UTC m=+0.084918126 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 20 09:37:29 np0005625203.localdomain podman[280601]: 2026-02-20 09:37:29.781226481 +0000 UTC m=+0.097030075 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 20 09:37:29 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:37:30 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49216 DF PROTO=TCP SPT=60342 DPT=9102 SEQ=1882577805 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0C54000000000001030307) 
Feb 20 09:37:30 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57638 DF PROTO=TCP SPT=42548 DPT=9102 SEQ=930238988 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0C56810000000001030307) 
Feb 20 09:37:31 np0005625203.localdomain sshd[280624]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:37:31 np0005625203.localdomain sshd[280624]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:37:32 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49217 DF PROTO=TCP SPT=60342 DPT=9102 SEQ=1882577805 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0C5C000000000001030307) 
Feb 20 09:37:33 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=7009 DF PROTO=TCP SPT=49780 DPT=9102 SEQ=1852943474 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0C60800000000001030307) 
Feb 20 09:37:33 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:37:33 np0005625203.localdomain podman[280626]: 2026-02-20 09:37:33.757976591 +0000 UTC m=+0.075824766 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Feb 20 09:37:33 np0005625203.localdomain podman[280626]: 2026-02-20 09:37:33.793285274 +0000 UTC m=+0.111133519 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 20 09:37:33 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:37:36 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49218 DF PROTO=TCP SPT=60342 DPT=9102 SEQ=1882577805 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0C6BC00000000001030307) 
Feb 20 09:37:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:37:37 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 09:37:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:37:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:37:37 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 09:37:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:37:41 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:37:41 np0005625203.localdomain systemd[1]: tmp-crun.YGfbDR.mount: Deactivated successfully.
Feb 20 09:37:41 np0005625203.localdomain podman[280644]: 2026-02-20 09:37:41.781609153 +0000 UTC m=+0.090346035 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:37:41 np0005625203.localdomain podman[280644]: 2026-02-20 09:37:41.840213885 +0000 UTC m=+0.148950747 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Feb 20 09:37:41 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:37:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:37:43.659 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:37:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:37:43.660 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:37:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:37:43.686 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:37:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:37:43.686 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:37:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:37:43.686 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:37:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:37:43.687 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:37:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:37:43.687 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:37:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:37:43.711 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:37:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:37:43.711 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:37:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:37:43.712 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:37:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:37:43.712 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Auditing locally available compute resources for np0005625203.localdomain (node: np0005625203.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 20 09:37:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:37:43.712 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:37:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:37:44.189 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:37:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:37:44.413 279640 WARNING nova.virt.libvirt.driver [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 20 09:37:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:37:44.416 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Hypervisor/Node resource view: name=np0005625203.localdomain free_ram=12905MB free_disk=41.83699035644531GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 20 09:37:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:37:44.417 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:37:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:37:44.418 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:37:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:37:44.488 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 20 09:37:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:37:44.488 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Final resource view: name=np0005625203.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 20 09:37:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:37:44.510 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:37:44 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:37:44 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49219 DF PROTO=TCP SPT=60342 DPT=9102 SEQ=1882577805 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0C8C800000000001030307) 
Feb 20 09:37:44 np0005625203.localdomain podman[280711]: 2026-02-20 09:37:44.807919673 +0000 UTC m=+0.072272485 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1770267347, managed_by=edpm_ansible, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., build-date=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, name=ubi9/ubi-minimal, com.redhat.component=ubi9-minimal-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=9.7, org.opencontainers.image.created=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, vcs-type=git, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c)
Feb 20 09:37:44 np0005625203.localdomain podman[280711]: 2026-02-20 09:37:44.822361491 +0000 UTC m=+0.086714323 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, distribution-scope=public, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, config_id=openstack_network_exporter, build-date=2026-02-05T04:57:10Z, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9/ubi-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.7, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., org.opencontainers.image.created=2026-02-05T04:57:10Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347, container_name=openstack_network_exporter)
Feb 20 09:37:44 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:37:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:37:45.027 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:37:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:37:45.033 279640 DEBUG nova.compute.provider_tree [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Inventory has not changed in ProviderTree for provider: e5d5157a-2df2-4f51-b5fb-cd2da3a8584e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 20 09:37:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:37:45.158 279640 DEBUG nova.scheduler.client.report [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Inventory has not changed for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 20 09:37:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:37:45.162 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Compute_service record updated for np0005625203.localdomain:np0005625203.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 20 09:37:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:37:45.162 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.744s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:37:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:37:45.817 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:37:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:37:45.818 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 20 09:37:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:37:45.818 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 20 09:37:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:37:45.842 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 20 09:37:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:37:45.842 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:37:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:37:45.843 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:37:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:37:45.843 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 20 09:37:47 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:37:47 np0005625203.localdomain podman[280731]: 2026-02-20 09:37:47.763956381 +0000 UTC m=+0.078783849 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Feb 20 09:37:47 np0005625203.localdomain podman[280731]: 2026-02-20 09:37:47.778293824 +0000 UTC m=+0.093121322 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Feb 20 09:37:47 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:37:51 np0005625203.localdomain sshd[280749]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:37:51 np0005625203.localdomain sshd[280749]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:37:52 np0005625203.localdomain sshd[280751]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:37:53 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:37:53 np0005625203.localdomain sshd[280751]: Invalid user bot from 103.48.192.48 port 48749
Feb 20 09:37:54 np0005625203.localdomain podman[280753]: 2026-02-20 09:37:54.043408578 +0000 UTC m=+0.084186855 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 20 09:37:54 np0005625203.localdomain podman[280753]: 2026-02-20 09:37:54.056216734 +0000 UTC m=+0.096995031 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 09:37:54 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 09:37:54 np0005625203.localdomain sshd[280751]: Received disconnect from 103.48.192.48 port 48749:11: Bye Bye [preauth]
Feb 20 09:37:54 np0005625203.localdomain sshd[280751]: Disconnected from invalid user bot 103.48.192.48 port 48749 [preauth]
Feb 20 09:37:58 np0005625203.localdomain podman[240359]: time="2026-02-20T09:37:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:37:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:37:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 147345 "" "Go-http-client/1.1"
Feb 20 09:37:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:37:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16321 "" "Go-http-client/1.1"
Feb 20 09:37:59 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33845 DF PROTO=TCP SPT=60116 DPT=9102 SEQ=247509142 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0CC5250000000001030307) 
Feb 20 09:38:00 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33846 DF PROTO=TCP SPT=60116 DPT=9102 SEQ=247509142 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0CC9400000000001030307) 
Feb 20 09:38:00 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:38:00 np0005625203.localdomain podman[280776]: 2026-02-20 09:38:00.767513788 +0000 UTC m=+0.084772953 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 20 09:38:00 np0005625203.localdomain podman[280776]: 2026-02-20 09:38:00.779300364 +0000 UTC m=+0.096559539 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 20 09:38:00 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:38:01 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49220 DF PROTO=TCP SPT=60342 DPT=9102 SEQ=1882577805 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0CCC800000000001030307) 
Feb 20 09:38:02 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33847 DF PROTO=TCP SPT=60116 DPT=9102 SEQ=247509142 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0CD1400000000001030307) 
Feb 20 09:38:03 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=57639 DF PROTO=TCP SPT=42548 DPT=9102 SEQ=930238988 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0CD4800000000001030307) 
Feb 20 09:38:04 np0005625203.localdomain sshd[280801]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:38:04 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:38:04 np0005625203.localdomain podman[280803]: 2026-02-20 09:38:04.76608397 +0000 UTC m=+0.082247946 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 20 09:38:04 np0005625203.localdomain podman[280803]: 2026-02-20 09:38:04.772087015 +0000 UTC m=+0.088250941 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 20 09:38:04 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:38:04 np0005625203.localdomain sshd[280801]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:38:06 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33848 DF PROTO=TCP SPT=60116 DPT=9102 SEQ=247509142 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0CE1010000000001030307) 
Feb 20 09:38:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:38:07 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 09:38:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:38:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:38:07 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 09:38:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:38:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:38:07.651 161112 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:38:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:38:07.652 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:38:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:38:07.652 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:38:12 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:38:12 np0005625203.localdomain systemd[1]: tmp-crun.D5WKRg.mount: Deactivated successfully.
Feb 20 09:38:12 np0005625203.localdomain podman[280821]: 2026-02-20 09:38:12.771040755 +0000 UTC m=+0.084869986 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:38:12 np0005625203.localdomain podman[280821]: 2026-02-20 09:38:12.85331212 +0000 UTC m=+0.167141341 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Feb 20 09:38:12 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:38:14 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33849 DF PROTO=TCP SPT=60116 DPT=9102 SEQ=247509142 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0D00810000000001030307) 
Feb 20 09:38:15 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:38:15 np0005625203.localdomain podman[280844]: 2026-02-20 09:38:15.759814956 +0000 UTC m=+0.076981702 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1770267347, build-date=2026-02-05T04:57:10Z, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, name=ubi9/ubi-minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, distribution-scope=public, version=9.7, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.expose-services=, container_name=openstack_network_exporter, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., org.opencontainers.image.created=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container)
Feb 20 09:38:15 np0005625203.localdomain podman[280844]: 2026-02-20 09:38:15.775470921 +0000 UTC m=+0.092637677 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9, config_id=openstack_network_exporter, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-05T04:57:10Z, io.buildah.version=1.33.7, org.opencontainers.image.created=2026-02-05T04:57:10Z, name=ubi9/ubi-minimal, vendor=Red Hat, Inc., version=9.7, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container)
Feb 20 09:38:15 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:38:18 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:38:18 np0005625203.localdomain podman[280864]: 2026-02-20 09:38:18.768575574 +0000 UTC m=+0.085805135 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260127, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Feb 20 09:38:18 np0005625203.localdomain podman[280864]: 2026-02-20 09:38:18.780307947 +0000 UTC m=+0.097537498 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20260127, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:38:18 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:38:24 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:38:24 np0005625203.localdomain podman[280883]: 2026-02-20 09:38:24.769894648 +0000 UTC m=+0.086414292 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 09:38:24 np0005625203.localdomain podman[280883]: 2026-02-20 09:38:24.806252403 +0000 UTC m=+0.122772087 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 20 09:38:24 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 09:38:25 np0005625203.localdomain sshd[280905]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:38:26 np0005625203.localdomain sshd[280905]: Received disconnect from 103.61.123.132 port 36872:11: Bye Bye [preauth]
Feb 20 09:38:26 np0005625203.localdomain sshd[280905]: Disconnected from authenticating user root 103.61.123.132 port 36872 [preauth]
Feb 20 09:38:27 np0005625203.localdomain sudo[280907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:38:27 np0005625203.localdomain sudo[280907]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:38:27 np0005625203.localdomain sudo[280907]: pam_unix(sudo:session): session closed for user root
Feb 20 09:38:27 np0005625203.localdomain sudo[280925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 09:38:27 np0005625203.localdomain sudo[280925]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:38:28 np0005625203.localdomain sudo[280925]: pam_unix(sudo:session): session closed for user root
Feb 20 09:38:28 np0005625203.localdomain podman[240359]: time="2026-02-20T09:38:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:38:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:38:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 147345 "" "Go-http-client/1.1"
Feb 20 09:38:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:38:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16322 "" "Go-http-client/1.1"
Feb 20 09:38:29 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23680 DF PROTO=TCP SPT=33856 DPT=9102 SEQ=1673503311 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0D3A550000000001030307) 
Feb 20 09:38:30 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23681 DF PROTO=TCP SPT=33856 DPT=9102 SEQ=1673503311 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0D3E400000000001030307) 
Feb 20 09:38:30 np0005625203.localdomain sshd[280972]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:38:30 np0005625203.localdomain sshd[280972]: Accepted publickey for zuul from 38.102.83.114 port 44434 ssh2: RSA SHA256:uaxCfdsB9dLyL4znjsLCPDTT6O+6xZzjHRjczJFOA6k
Feb 20 09:38:30 np0005625203.localdomain systemd-logind[759]: New session 60 of user zuul.
Feb 20 09:38:30 np0005625203.localdomain systemd[1]: Started Session 60 of User zuul.
Feb 20 09:38:30 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:38:30 np0005625203.localdomain sshd[280972]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Feb 20 09:38:30 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33850 DF PROTO=TCP SPT=60116 DPT=9102 SEQ=247509142 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0D40800000000001030307) 
Feb 20 09:38:30 np0005625203.localdomain podman[280975]: 2026-02-20 09:38:30.909367286 +0000 UTC m=+0.088828068 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 20 09:38:30 np0005625203.localdomain podman[280975]: 2026-02-20 09:38:30.919162939 +0000 UTC m=+0.098623731 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 20 09:38:30 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:38:30 np0005625203.localdomain sudo[281009]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-izttsjgudobzsoaegamykhluuwmlbfbd ; /usr/bin/python3
Feb 20 09:38:30 np0005625203.localdomain sudo[281009]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 09:38:31 np0005625203.localdomain python3[281017]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager unregister _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:38:31 np0005625203.localdomain subscription-manager[281018]: Unregistered machine with identity: 7c76e446-72cd-40ed-9df9-5912b758c267
Feb 20 09:38:31 np0005625203.localdomain sudo[281009]: pam_unix(sudo:session): session closed for user root
Feb 20 09:38:31 np0005625203.localdomain sudo[281020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:38:31 np0005625203.localdomain sudo[281020]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:38:31 np0005625203.localdomain sudo[281020]: pam_unix(sudo:session): session closed for user root
Feb 20 09:38:32 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23682 DF PROTO=TCP SPT=33856 DPT=9102 SEQ=1673503311 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0D46410000000001030307) 
Feb 20 09:38:33 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=49221 DF PROTO=TCP SPT=60342 DPT=9102 SEQ=1882577805 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0D4A800000000001030307) 
Feb 20 09:38:35 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:38:35 np0005625203.localdomain podman[281038]: 2026-02-20 09:38:35.768794054 +0000 UTC m=+0.082265486 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Feb 20 09:38:35 np0005625203.localdomain podman[281038]: 2026-02-20 09:38:35.774661545 +0000 UTC m=+0.088133007 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:38:35 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:38:36 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23683 DF PROTO=TCP SPT=33856 DPT=9102 SEQ=1673503311 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0D56000000000001030307) 
Feb 20 09:38:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:38:37 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 09:38:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:38:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:38:37 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 09:38:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:38:37 np0005625203.localdomain sshd[281056]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:38:38 np0005625203.localdomain sshd[281056]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:38:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:38:42.343 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:38:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:38:42.343 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:38:43 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:38:43 np0005625203.localdomain podman[281058]: 2026-02-20 09:38:43.772645465 +0000 UTC m=+0.088115037 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Feb 20 09:38:43 np0005625203.localdomain podman[281058]: 2026-02-20 09:38:43.857469448 +0000 UTC m=+0.172939060 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 20 09:38:43 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:38:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:38:44.338 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:38:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:38:44.340 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:38:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:38:44.340 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:38:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:38:44.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:38:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:38:44.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:38:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:38:44.361 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:38:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:38:44.361 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:38:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:38:44.362 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:38:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:38:44.362 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Auditing locally available compute resources for np0005625203.localdomain (node: np0005625203.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 20 09:38:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:38:44.363 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:38:44 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23684 DF PROTO=TCP SPT=33856 DPT=9102 SEQ=1673503311 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0D76810000000001030307) 
Feb 20 09:38:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:38:44.831 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:38:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:38:45.018 279640 WARNING nova.virt.libvirt.driver [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 20 09:38:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:38:45.020 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Hypervisor/Node resource view: name=np0005625203.localdomain free_ram=12912MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 20 09:38:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:38:45.021 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:38:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:38:45.021 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:38:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:38:45.165 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 20 09:38:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:38:45.166 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Final resource view: name=np0005625203.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 20 09:38:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:38:45.182 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:38:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:38:45.621 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:38:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:38:45.628 279640 DEBUG nova.compute.provider_tree [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Inventory has not changed in ProviderTree for provider: e5d5157a-2df2-4f51-b5fb-cd2da3a8584e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 20 09:38:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:38:45.650 279640 DEBUG nova.scheduler.client.report [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Inventory has not changed for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 20 09:38:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:38:45.652 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Compute_service record updated for np0005625203.localdomain:np0005625203.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 20 09:38:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:38:45.653 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.631s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:38:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:38:46.654 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:38:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:38:46.655 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 20 09:38:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:38:46.656 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 20 09:38:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:38:46.670 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 20 09:38:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:38:46.670 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:38:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:38:46.671 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 20 09:38:46 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:38:46 np0005625203.localdomain systemd[1]: tmp-crun.oXfWgv.mount: Deactivated successfully.
Feb 20 09:38:46 np0005625203.localdomain podman[281127]: 2026-02-20 09:38:46.768389841 +0000 UTC m=+0.077680374 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, config_id=openstack_network_exporter, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, release=1770267347, version=9.7, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9/ubi-minimal, com.redhat.component=ubi9-minimal-container, architecture=x86_64, build-date=2026-02-05T04:57:10Z, vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-05T04:57:10Z)
Feb 20 09:38:46 np0005625203.localdomain podman[281127]: 2026-02-20 09:38:46.781736034 +0000 UTC m=+0.091026587 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1770267347, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, name=ubi9/ubi-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, distribution-scope=public, config_id=openstack_network_exporter, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.7, vcs-type=git, vendor=Red Hat, Inc.)
Feb 20 09:38:46 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:38:49 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:38:49 np0005625203.localdomain podman[281147]: 2026-02-20 09:38:49.242406119 +0000 UTC m=+0.088871049 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, 
org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 20 09:38:49 np0005625203.localdomain podman[281147]: 2026-02-20 09:38:49.258275061 +0000 UTC m=+0.104739991 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, 
config_id=ceilometer_agent_compute)
Feb 20 09:38:49 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:38:55 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:38:55 np0005625203.localdomain sshd[281180]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:38:55 np0005625203.localdomain systemd[1]: tmp-crun.c699uZ.mount: Deactivated successfully.
Feb 20 09:38:55 np0005625203.localdomain podman[281168]: 2026-02-20 09:38:55.768227707 +0000 UTC m=+0.083071560 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 20 09:38:55 np0005625203.localdomain podman[281168]: 2026-02-20 09:38:55.775703618 +0000 UTC m=+0.090547451 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 09:38:55 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 09:38:55 np0005625203.localdomain sshd[281180]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:38:58 np0005625203.localdomain podman[240359]: time="2026-02-20T09:38:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:38:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:38:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 147345 "" "Go-http-client/1.1"
Feb 20 09:38:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:38:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16324 "" "Go-http-client/1.1"
Feb 20 09:38:59 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23687 DF PROTO=TCP SPT=60732 DPT=9102 SEQ=235363546 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0DAF850000000001030307) 
Feb 20 09:38:59 np0005625203.localdomain sshd[281192]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:39:00 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23688 DF PROTO=TCP SPT=60732 DPT=9102 SEQ=235363546 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0DB3810000000001030307) 
Feb 20 09:39:00 np0005625203.localdomain sshd[281192]: Invalid user firebird from 34.131.211.42 port 53762
Feb 20 09:39:01 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23685 DF PROTO=TCP SPT=33856 DPT=9102 SEQ=1673503311 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0DB6800000000001030307) 
Feb 20 09:39:01 np0005625203.localdomain sshd[281192]: Received disconnect from 34.131.211.42 port 53762:11: Bye Bye [preauth]
Feb 20 09:39:01 np0005625203.localdomain sshd[281192]: Disconnected from invalid user firebird 34.131.211.42 port 53762 [preauth]
Feb 20 09:39:01 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:39:01 np0005625203.localdomain podman[281194]: 2026-02-20 09:39:01.265347548 +0000 UTC m=+0.089298533 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 20 09:39:01 np0005625203.localdomain podman[281194]: 2026-02-20 09:39:01.276067469 +0000 UTC m=+0.100018444 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 20 09:39:01 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:39:02 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23689 DF PROTO=TCP SPT=60732 DPT=9102 SEQ=235363546 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0DBB810000000001030307) 
Feb 20 09:39:02 np0005625203.localdomain sshd[281217]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:39:02 np0005625203.localdomain sudo[281219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:39:02 np0005625203.localdomain sudo[281219]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:39:02 np0005625203.localdomain sudo[281219]: pam_unix(sudo:session): session closed for user root
Feb 20 09:39:02 np0005625203.localdomain sshd[281217]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:39:03 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=33851 DF PROTO=TCP SPT=60116 DPT=9102 SEQ=247509142 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0DBE800000000001030307) 
Feb 20 09:39:04 np0005625203.localdomain sudo[281237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:39:04 np0005625203.localdomain sudo[281237]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:39:04 np0005625203.localdomain sudo[281237]: pam_unix(sudo:session): session closed for user root
Feb 20 09:39:05 np0005625203.localdomain sudo[281255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:39:05 np0005625203.localdomain sudo[281255]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:39:05 np0005625203.localdomain sudo[281255]: pam_unix(sudo:session): session closed for user root
Feb 20 09:39:06 np0005625203.localdomain kernel: DROPPING: IN=br-ex OUT= MACSRC=fa:16:3e:b0:bd:ac MACDST=fa:16:3e:cd:74:b1 MACPROTO=0800 SRC=192.168.122.10 DST=192.168.122.107 LEN=60 TOS=0x00 PREC=0x00 TTL=62 ID=23690 DF PROTO=TCP SPT=60732 DPT=9102 SEQ=235363546 ACK=0 WINDOW=32640 RES=0x00 SYN URGP=0 OPT (020405500402080AEA0DCB400000000001030307) 
Feb 20 09:39:06 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:39:06 np0005625203.localdomain systemd[1]: tmp-crun.Ueb6Tw.mount: Deactivated successfully.
Feb 20 09:39:06 np0005625203.localdomain podman[281273]: 2026-02-20 09:39:06.772663164 +0000 UTC m=+0.087290541 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 20 09:39:06 np0005625203.localdomain podman[281273]: 2026-02-20 09:39:06.809242725 +0000 UTC m=+0.123870092 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 20 09:39:06 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:39:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:39:07 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 09:39:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:39:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:39:07 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 09:39:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:39:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:39:07.652 161112 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:39:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:39:07.653 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:39:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:39:07.653 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:39:09 np0005625203.localdomain sshd[281292]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:39:09 np0005625203.localdomain sshd[281292]: Accepted publickey for tripleo-admin from 192.168.122.11 port 44082 ssh2: RSA SHA256:uaxCfdsB9dLyL4znjsLCPDTT6O+6xZzjHRjczJFOA6k
Feb 20 09:39:09 np0005625203.localdomain systemd-logind[759]: New session 61 of user tripleo-admin.
Feb 20 09:39:09 np0005625203.localdomain systemd[1]: Created slice User Slice of UID 1003.
Feb 20 09:39:09 np0005625203.localdomain systemd[1]: Starting User Runtime Directory /run/user/1003...
Feb 20 09:39:09 np0005625203.localdomain systemd[1]: Finished User Runtime Directory /run/user/1003.
Feb 20 09:39:09 np0005625203.localdomain systemd[1]: Starting User Manager for UID 1003...
Feb 20 09:39:09 np0005625203.localdomain systemd[281296]: pam_unix(systemd-user:session): session opened for user tripleo-admin(uid=1003) by (uid=0)
Feb 20 09:39:09 np0005625203.localdomain systemd[281296]: Queued start job for default target Main User Target.
Feb 20 09:39:09 np0005625203.localdomain systemd[281296]: Created slice User Application Slice.
Feb 20 09:39:09 np0005625203.localdomain systemd[281296]: Started Mark boot as successful after the user session has run 2 minutes.
Feb 20 09:39:09 np0005625203.localdomain systemd[281296]: Started Daily Cleanup of User's Temporary Directories.
Feb 20 09:39:09 np0005625203.localdomain systemd[281296]: Reached target Paths.
Feb 20 09:39:09 np0005625203.localdomain systemd[281296]: Reached target Timers.
Feb 20 09:39:09 np0005625203.localdomain systemd[281296]: Starting D-Bus User Message Bus Socket...
Feb 20 09:39:09 np0005625203.localdomain systemd[281296]: Starting Create User's Volatile Files and Directories...
Feb 20 09:39:10 np0005625203.localdomain systemd[281296]: Finished Create User's Volatile Files and Directories.
Feb 20 09:39:10 np0005625203.localdomain systemd[281296]: Listening on D-Bus User Message Bus Socket.
Feb 20 09:39:10 np0005625203.localdomain systemd[281296]: Reached target Sockets.
Feb 20 09:39:10 np0005625203.localdomain systemd[281296]: Reached target Basic System.
Feb 20 09:39:10 np0005625203.localdomain systemd[281296]: Reached target Main User Target.
Feb 20 09:39:10 np0005625203.localdomain systemd[281296]: Startup finished in 129ms.
Feb 20 09:39:10 np0005625203.localdomain systemd[1]: Started User Manager for UID 1003.
Feb 20 09:39:10 np0005625203.localdomain systemd[1]: Started Session 61 of User tripleo-admin.
Feb 20 09:39:10 np0005625203.localdomain sshd[281292]: pam_unix(sshd:session): session opened for user tripleo-admin(uid=1003) by (uid=0)
Feb 20 09:39:10 np0005625203.localdomain sudo[281437]: tripleo-admin : TTY=pts/0 ; PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-sxoewglifcmcjqgwvuropcwmazunefvi ; /usr/bin/python3 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1771580350.118748-62794-71296699163379/AnsiballZ_blockinfile.py
Feb 20 09:39:10 np0005625203.localdomain sudo[281437]: pam_unix(sudo:session): session opened for user root(uid=0) by tripleo-admin(uid=1003)
Feb 20 09:39:10 np0005625203.localdomain python3[281439]: ansible-ansible.builtin.blockinfile Invoked with marker_begin=BEGIN ceph firewall rules marker_end=END ceph firewall rules path=/etc/nftables/edpm-rules.nft mode=0644 block=# 100 ceph_alertmanager (9093)
                                                          add rule inet filter EDPM_INPUT tcp dport { 9093 } ct state new counter accept comment "100 ceph_alertmanager"
                                                          # 100 ceph_dashboard (8443)
                                                          add rule inet filter EDPM_INPUT tcp dport { 8443 } ct state new counter accept comment "100 ceph_dashboard"
                                                          # 100 ceph_grafana (3100)
                                                          add rule inet filter EDPM_INPUT tcp dport { 3100 } ct state new counter accept comment "100 ceph_grafana"
                                                          # 100 ceph_prometheus (9092)
                                                          add rule inet filter EDPM_INPUT tcp dport { 9092 } ct state new counter accept comment "100 ceph_prometheus"
                                                          # 100 ceph_rgw (8080)
                                                          add rule inet filter EDPM_INPUT tcp dport { 8080 } ct state new counter accept comment "100 ceph_rgw"
                                                          # 110 ceph_mon (6789, 3300, 9100)
                                                          add rule inet filter EDPM_INPUT tcp dport { 6789,3300,9100 } ct state new counter accept comment "110 ceph_mon"
                                                          # 112 ceph_mds (6800-7300, 9100)
                                                          add rule inet filter EDPM_INPUT tcp dport { 6800-7300,9100 } ct state new counter accept comment "112 ceph_mds"
                                                          # 113 ceph_mgr (6800-7300, 8444)
                                                          add rule inet filter EDPM_INPUT tcp dport { 6800-7300,8444 } ct state new counter accept comment "113 ceph_mgr"
                                                          # 120 ceph_nfs (2049, 12049)
                                                          add rule inet filter EDPM_INPUT tcp dport { 2049,12049 } ct state new counter accept comment "120 ceph_nfs"
                                                          # 123 ceph_dashboard (9090, 9094, 9283)
                                                          add rule inet filter EDPM_INPUT tcp dport { 9090,9094,9283 } ct state new counter accept comment "123 ceph_dashboard"
                                                           insertbefore=^# Lock down INPUT chains state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False unsafe_writes=False insertafter=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:39:10 np0005625203.localdomain sudo[281437]: pam_unix(sudo:session): session closed for user root
Feb 20 09:39:10 np0005625203.localdomain systemd-journald[48285]: Field hash table of /run/log/journal/01f46965e72fd8a157841feaa66c8d52/system.journal has a fill level at 79.3 (264 of 333 items), suggesting rotation.
Feb 20 09:39:10 np0005625203.localdomain systemd-journald[48285]: /run/log/journal/01f46965e72fd8a157841feaa66c8d52/system.journal: Journal header limits reached or header out-of-date, rotating.
Feb 20 09:39:10 np0005625203.localdomain rsyslogd[758]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Feb 20 09:39:10 np0005625203.localdomain rsyslogd[758]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Feb 20 09:39:10 np0005625203.localdomain rsyslogd[758]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Feb 20 09:39:11 np0005625203.localdomain sudo[281582]: tripleo-admin : TTY=pts/0 ; PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-jctztaugywyemwktibkltzlhtmdevvhj ; /usr/bin/python3 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1771580350.9747455-62808-31212542196279/AnsiballZ_systemd.py
Feb 20 09:39:11 np0005625203.localdomain sudo[281582]: pam_unix(sudo:session): session opened for user root(uid=0) by tripleo-admin(uid=1003)
Feb 20 09:39:11 np0005625203.localdomain python3[281584]: ansible-ansible.builtin.systemd Invoked with name=nftables state=restarted enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Feb 20 09:39:12 np0005625203.localdomain systemd[1]: Stopping Netfilter Tables...
Feb 20 09:39:12 np0005625203.localdomain systemd[1]: nftables.service: Deactivated successfully.
Feb 20 09:39:12 np0005625203.localdomain systemd[1]: Stopped Netfilter Tables.
Feb 20 09:39:12 np0005625203.localdomain systemd[1]: Starting Netfilter Tables...
Feb 20 09:39:12 np0005625203.localdomain systemd[1]: Finished Netfilter Tables.
Feb 20 09:39:12 np0005625203.localdomain sudo[281582]: pam_unix(sudo:session): session closed for user root
Feb 20 09:39:14 np0005625203.localdomain sshd[281609]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:39:14 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:39:14 np0005625203.localdomain podman[281611]: 2026-02-20 09:39:14.795693158 +0000 UTC m=+0.102059748 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0)
Feb 20 09:39:14 np0005625203.localdomain podman[281611]: 2026-02-20 09:39:14.876192318 +0000 UTC m=+0.182558908 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0)
Feb 20 09:39:14 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:39:15 np0005625203.localdomain sshd[281609]: Invalid user n8n from 152.32.129.236 port 47606
Feb 20 09:39:15 np0005625203.localdomain sshd[281609]: Received disconnect from 152.32.129.236 port 47606:11: Bye Bye [preauth]
Feb 20 09:39:15 np0005625203.localdomain sshd[281609]: Disconnected from invalid user n8n 152.32.129.236 port 47606 [preauth]
Feb 20 09:39:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:39:17.182 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:39:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:39:17.183 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:39:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:39:17.184 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:39:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:39:17.184 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:39:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:39:17.184 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:39:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:39:17.184 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:39:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:39:17.184 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:39:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:39:17.184 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:39:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:39:17.185 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:39:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:39:17.185 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:39:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:39:17.185 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:39:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:39:17.185 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:39:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:39:17.185 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:39:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:39:17.185 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:39:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:39:17.185 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:39:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:39:17.186 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:39:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:39:17.186 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:39:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:39:17.186 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:39:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:39:17.186 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:39:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:39:17.186 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:39:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:39:17.186 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:39:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:39:17.186 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:39:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:39:17.187 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:39:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:39:17.187 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:39:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:39:17.187 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:39:17 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:39:17 np0005625203.localdomain podman[281635]: 2026-02-20 09:39:17.76066003 +0000 UTC m=+0.075163266 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, io.buildah.version=1.33.7, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-05T04:57:10Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, name=ubi9/ubi-minimal, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.7, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-type=git, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Feb 20 09:39:17 np0005625203.localdomain podman[281635]: 2026-02-20 09:39:17.802323789 +0000 UTC m=+0.116827065 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1770267347, build-date=2026-02-05T04:57:10Z, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., container_name=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-type=git, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, io.buildah.version=1.33.7, managed_by=edpm_ansible, maintainer=Red Hat, Inc., io.openshift.expose-services=, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Feb 20 09:39:17 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:39:18 np0005625203.localdomain sudo[281655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:39:18 np0005625203.localdomain sudo[281655]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:39:18 np0005625203.localdomain sudo[281655]: pam_unix(sudo:session): session closed for user root
Feb 20 09:39:19 np0005625203.localdomain sudo[281673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 list-networks
Feb 20 09:39:19 np0005625203.localdomain sudo[281673]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:39:19 np0005625203.localdomain sudo[281673]: pam_unix(sudo:session): session closed for user root
Feb 20 09:39:19 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:39:19 np0005625203.localdomain systemd[1]: tmp-crun.d5dyPW.mount: Deactivated successfully.
Feb 20 09:39:19 np0005625203.localdomain podman[281709]: 2026-02-20 09:39:19.785909849 +0000 UTC m=+0.098607880 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
container_name=ceilometer_agent_compute, io.buildah.version=1.41.3)
Feb 20 09:39:19 np0005625203.localdomain podman[281709]: 2026-02-20 09:39:19.800199301 +0000 UTC m=+0.112897312 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_compute, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
org.label-schema.schema-version=1.0)
Feb 20 09:39:19 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:39:22 np0005625203.localdomain sudo[281726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:39:22 np0005625203.localdomain sudo[281726]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:39:22 np0005625203.localdomain sudo[281726]: pam_unix(sudo:session): session closed for user root
Feb 20 09:39:22 np0005625203.localdomain sudo[281744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:39:22 np0005625203.localdomain sudo[281744]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:39:22 np0005625203.localdomain sudo[281744]: pam_unix(sudo:session): session closed for user root
Feb 20 09:39:23 np0005625203.localdomain sudo[281762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:39:23 np0005625203.localdomain sudo[281762]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:39:23 np0005625203.localdomain sudo[281762]: pam_unix(sudo:session): session closed for user root
Feb 20 09:39:24 np0005625203.localdomain sudo[281780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:39:24 np0005625203.localdomain sudo[281780]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:39:24 np0005625203.localdomain sudo[281780]: pam_unix(sudo:session): session closed for user root
Feb 20 09:39:26 np0005625203.localdomain sudo[281798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:39:26 np0005625203.localdomain sudo[281798]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:39:26 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:39:26 np0005625203.localdomain sudo[281798]: pam_unix(sudo:session): session closed for user root
Feb 20 09:39:26 np0005625203.localdomain systemd[1]: tmp-crun.gbaTjA.mount: Deactivated successfully.
Feb 20 09:39:26 np0005625203.localdomain podman[281816]: 2026-02-20 09:39:26.139433147 +0000 UTC m=+0.082403109 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 20 09:39:26 np0005625203.localdomain podman[281816]: 2026-02-20 09:39:26.153166402 +0000 UTC m=+0.096136354 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 09:39:26 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 09:39:28 np0005625203.localdomain podman[240359]: time="2026-02-20T09:39:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:39:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:39:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 147345 "" "Go-http-client/1.1"
Feb 20 09:39:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:39:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16329 "" "Go-http-client/1.1"
Feb 20 09:39:29 np0005625203.localdomain sudo[281839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:39:29 np0005625203.localdomain sudo[281839]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:39:29 np0005625203.localdomain sudo[281839]: pam_unix(sudo:session): session closed for user root
Feb 20 09:39:29 np0005625203.localdomain sudo[281857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:39:29 np0005625203.localdomain sudo[281857]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:39:30 np0005625203.localdomain podman[281918]: 
Feb 20 09:39:30 np0005625203.localdomain podman[281918]: 2026-02-20 09:39:30.404589263 +0000 UTC m=+0.079856111 container create 76940c29a3ba438dfbcf6a9c6c4281af7c7c11aca9e5db9d236fa7131ab255c0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=priceless_lamarr, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, io.openshift.expose-services=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, release=1770267347, GIT_BRANCH=main, build-date=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, version=7, distribution-scope=public, RELEASE=main, name=rhceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Feb 20 09:39:30 np0005625203.localdomain systemd[1]: Started libpod-conmon-76940c29a3ba438dfbcf6a9c6c4281af7c7c11aca9e5db9d236fa7131ab255c0.scope.
Feb 20 09:39:30 np0005625203.localdomain podman[281918]: 2026-02-20 09:39:30.374822542 +0000 UTC m=+0.050089430 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 09:39:30 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:39:30 np0005625203.localdomain podman[281918]: 2026-02-20 09:39:30.490805319 +0000 UTC m=+0.166072157 container init 76940c29a3ba438dfbcf6a9c6c4281af7c7c11aca9e5db9d236fa7131ab255c0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=priceless_lamarr, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, vcs-type=git, ceph=True, release=1770267347, GIT_BRANCH=main, io.buildah.version=1.42.2, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, name=rhceph, io.openshift.expose-services=, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 09:39:30 np0005625203.localdomain podman[281918]: 2026-02-20 09:39:30.501772838 +0000 UTC m=+0.177039686 container start 76940c29a3ba438dfbcf6a9c6c4281af7c7c11aca9e5db9d236fa7131ab255c0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=priceless_lamarr, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, build-date=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., RELEASE=main, name=rhceph, vcs-type=git, GIT_CLEAN=True, io.buildah.version=1.42.2, org.opencontainers.image.created=2026-02-09T10:25:24Z, ceph=True, io.openshift.tags=rhceph ceph, release=1770267347, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7)
Feb 20 09:39:30 np0005625203.localdomain podman[281918]: 2026-02-20 09:39:30.502074167 +0000 UTC m=+0.177341015 container attach 76940c29a3ba438dfbcf6a9c6c4281af7c7c11aca9e5db9d236fa7131ab255c0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=priceless_lamarr, release=1770267347, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, RELEASE=main, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.openshift.expose-services=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2026-02-09T10:25:24Z, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, distribution-scope=public, vcs-type=git, description=Red Hat Ceph Storage 7)
Feb 20 09:39:30 np0005625203.localdomain priceless_lamarr[281932]: 167 167
Feb 20 09:39:30 np0005625203.localdomain systemd[1]: libpod-76940c29a3ba438dfbcf6a9c6c4281af7c7c11aca9e5db9d236fa7131ab255c0.scope: Deactivated successfully.
Feb 20 09:39:30 np0005625203.localdomain podman[281918]: 2026-02-20 09:39:30.50635785 +0000 UTC m=+0.181624718 container died 76940c29a3ba438dfbcf6a9c6c4281af7c7c11aca9e5db9d236fa7131ab255c0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=priceless_lamarr, architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, io.openshift.tags=rhceph ceph, RELEASE=main, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, io.openshift.expose-services=, io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, release=1770267347, name=rhceph, vcs-type=git, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers)
Feb 20 09:39:30 np0005625203.localdomain podman[281937]: 2026-02-20 09:39:30.615323271 +0000 UTC m=+0.096158885 container remove 76940c29a3ba438dfbcf6a9c6c4281af7c7c11aca9e5db9d236fa7131ab255c0 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=priceless_lamarr, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=rhceph-container, ceph=True, io.openshift.expose-services=, name=rhceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, release=1770267347, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, version=7, GIT_CLEAN=True, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Feb 20 09:39:30 np0005625203.localdomain systemd[1]: libpod-conmon-76940c29a3ba438dfbcf6a9c6c4281af7c7c11aca9e5db9d236fa7131ab255c0.scope: Deactivated successfully.
Feb 20 09:39:30 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:39:30 np0005625203.localdomain systemd-rc-local-generator[281975]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:39:30 np0005625203.localdomain systemd-sysv-generator[281981]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:39:30 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:39:30 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 09:39:30 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:39:30 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:39:30 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:39:30 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 09:39:30 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:39:30 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:39:30 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:39:30 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-c36b52cbd00bccb788bbae4cd01540838a3a098c3b0ce30ca763735f62627c6a-merged.mount: Deactivated successfully.
Feb 20 09:39:31 np0005625203.localdomain systemd[1]: Starting dnf makecache...
Feb 20 09:39:31 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:39:31 np0005625203.localdomain systemd-sysv-generator[282021]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:39:31 np0005625203.localdomain systemd-rc-local-generator[282018]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:39:31 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:39:31 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 09:39:31 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:39:31 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:39:31 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:39:31 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 09:39:31 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:39:31 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:39:31 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:39:31 np0005625203.localdomain dnf[281990]: Updating Subscription Management repositories.
Feb 20 09:39:31 np0005625203.localdomain dnf[281990]: Unable to read consumer identity
Feb 20 09:39:31 np0005625203.localdomain dnf[281990]: This system is not registered with an entitlement server. You can use subscription-manager to register.
Feb 20 09:39:31 np0005625203.localdomain dnf[281990]: Metadata cache refreshed recently.
Feb 20 09:39:31 np0005625203.localdomain systemd[1]: dnf-makecache.service: Deactivated successfully.
Feb 20 09:39:31 np0005625203.localdomain systemd[1]: Finished dnf makecache.
Feb 20 09:39:31 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:39:31 np0005625203.localdomain systemd[1]: Starting Ceph mds.mds.np0005625203.zsrwgk for a8557ee9-b55d-5519-942c-cf8f6172f1d8...
Feb 20 09:39:31 np0005625203.localdomain podman[282034]: 2026-02-20 09:39:31.523164629 +0000 UTC m=+0.077402925 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 20 09:39:31 np0005625203.localdomain podman[282034]: 2026-02-20 09:39:31.539193795 +0000 UTC m=+0.093432111 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 20 09:39:31 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:39:31 np0005625203.localdomain podman[282109]: 
Feb 20 09:39:31 np0005625203.localdomain sshd[280979]: Received disconnect from 38.102.83.114 port 44434:11: disconnected by user
Feb 20 09:39:31 np0005625203.localdomain sshd[280979]: Disconnected from user zuul 38.102.83.114 port 44434
Feb 20 09:39:31 np0005625203.localdomain sshd[280972]: pam_unix(sshd:session): session closed for user zuul
Feb 20 09:39:31 np0005625203.localdomain systemd[1]: session-60.scope: Deactivated successfully.
Feb 20 09:39:31 np0005625203.localdomain systemd-logind[759]: Session 60 logged out. Waiting for processes to exit.
Feb 20 09:39:31 np0005625203.localdomain podman[282109]: 2026-02-20 09:39:31.814659475 +0000 UTC m=+0.087683493 container create 637cfe2711167c14ad633c0cf03972f1e8198c40888f249bead0a49236bfd9de (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mds-mds-np0005625203-zsrwgk, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, GIT_BRANCH=main, ceph=True, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, com.redhat.component=rhceph-container, vcs-type=git)
Feb 20 09:39:31 np0005625203.localdomain systemd-logind[759]: Removed session 60.
Feb 20 09:39:31 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f5a262b0f7f58d3e8285307b8fe8cb8ba6aa1b1a2c1af7899e3f6e2167bd0c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 20 09:39:31 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f5a262b0f7f58d3e8285307b8fe8cb8ba6aa1b1a2c1af7899e3f6e2167bd0c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 20 09:39:31 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f5a262b0f7f58d3e8285307b8fe8cb8ba6aa1b1a2c1af7899e3f6e2167bd0c8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 20 09:39:31 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f5a262b0f7f58d3e8285307b8fe8cb8ba6aa1b1a2c1af7899e3f6e2167bd0c8/merged/var/lib/ceph/mds/ceph-mds.np0005625203.zsrwgk supports timestamps until 2038 (0x7fffffff)
Feb 20 09:39:31 np0005625203.localdomain podman[282109]: 2026-02-20 09:39:31.778974891 +0000 UTC m=+0.051998909 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 09:39:31 np0005625203.localdomain podman[282109]: 2026-02-20 09:39:31.877978764 +0000 UTC m=+0.151002752 container init 637cfe2711167c14ad633c0cf03972f1e8198c40888f249bead0a49236bfd9de (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mds-mds-np0005625203-zsrwgk, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.created=2026-02-09T10:25:24Z, build-date=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, release=1770267347, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, version=7, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vendor=Red Hat, Inc., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 09:39:31 np0005625203.localdomain podman[282109]: 2026-02-20 09:39:31.887137687 +0000 UTC m=+0.160161675 container start 637cfe2711167c14ad633c0cf03972f1e8198c40888f249bead0a49236bfd9de (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mds-mds-np0005625203-zsrwgk, CEPH_POINT_RELEASE=, vcs-type=git, com.redhat.component=rhceph-container, io.buildah.version=1.42.2, version=7, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_CLEAN=True, build-date=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., release=1770267347, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public)
Feb 20 09:39:31 np0005625203.localdomain bash[282109]: 637cfe2711167c14ad633c0cf03972f1e8198c40888f249bead0a49236bfd9de
Feb 20 09:39:31 np0005625203.localdomain systemd[1]: Started Ceph mds.mds.np0005625203.zsrwgk for a8557ee9-b55d-5519-942c-cf8f6172f1d8.
Feb 20 09:39:31 np0005625203.localdomain sudo[281857]: pam_unix(sudo:session): session closed for user root
Feb 20 09:39:31 np0005625203.localdomain ceph-mds[282126]: set uid:gid to 167:167 (ceph:ceph)
Feb 20 09:39:31 np0005625203.localdomain ceph-mds[282126]: ceph version 18.2.1-381.el9cp (984f410e2a30899deb131725765b62212b1621db) reef (stable), process ceph-mds, pid 2
Feb 20 09:39:31 np0005625203.localdomain ceph-mds[282126]: main not setting numa affinity
Feb 20 09:39:31 np0005625203.localdomain ceph-mds[282126]: pidfile_write: ignore empty --pid-file
Feb 20 09:39:31 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mds-mds-np0005625203-zsrwgk[282122]: starting mds.mds.np0005625203.zsrwgk at 
Feb 20 09:39:31 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk Updating MDS map to version 8 from mon.1
Feb 20 09:39:32 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk Updating MDS map to version 9 from mon.1
Feb 20 09:39:32 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk Monitors have assigned me to become a standby.
Feb 20 09:39:34 np0005625203.localdomain sudo[282146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:39:34 np0005625203.localdomain sudo[282146]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:39:34 np0005625203.localdomain sudo[282146]: pam_unix(sudo:session): session closed for user root
Feb 20 09:39:34 np0005625203.localdomain sudo[282164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:39:34 np0005625203.localdomain sudo[282164]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:39:34 np0005625203.localdomain sudo[282164]: pam_unix(sudo:session): session closed for user root
Feb 20 09:39:34 np0005625203.localdomain sudo[282182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Feb 20 09:39:34 np0005625203.localdomain sudo[282182]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:39:35 np0005625203.localdomain podman[282274]: 2026-02-20 09:39:35.911379982 +0000 UTC m=+0.104609486 container exec f6bf94ad66e54cdd610286b233c639418eb2b995978f1a145d6cfa77a016071d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203, vendor=Red Hat, Inc., name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, ceph=True, architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, GIT_BRANCH=main, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2026-02-09T10:25:24Z, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git)
Feb 20 09:39:36 np0005625203.localdomain podman[282274]: 2026-02-20 09:39:36.047314887 +0000 UTC m=+0.240544401 container exec_died f6bf94ad66e54cdd610286b233c639418eb2b995978f1a145d6cfa77a016071d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, build-date=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, io.buildah.version=1.42.2, architecture=x86_64, version=7)
Feb 20 09:39:36 np0005625203.localdomain sudo[282182]: pam_unix(sudo:session): session closed for user root
Feb 20 09:39:36 np0005625203.localdomain sudo[282359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:39:36 np0005625203.localdomain sudo[282359]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:39:36 np0005625203.localdomain sudo[282359]: pam_unix(sudo:session): session closed for user root
Feb 20 09:39:36 np0005625203.localdomain sudo[282377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 09:39:36 np0005625203.localdomain sudo[282377]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:39:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:39:37 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 09:39:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:39:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:39:37 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 09:39:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:39:37 np0005625203.localdomain sudo[282377]: pam_unix(sudo:session): session closed for user root
Feb 20 09:39:37 np0005625203.localdomain sshd[282427]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:39:37 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:39:37 np0005625203.localdomain podman[282429]: 2026-02-20 09:39:37.795116115 +0000 UTC m=+0.091833022 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:39:37 np0005625203.localdomain podman[282429]: 2026-02-20 09:39:37.800539723 +0000 UTC m=+0.097256660 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true)
Feb 20 09:39:37 np0005625203.localdomain sudo[282439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:39:37 np0005625203.localdomain sudo[282439]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:39:37 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:39:37 np0005625203.localdomain sudo[282439]: pam_unix(sudo:session): session closed for user root
Feb 20 09:39:37 np0005625203.localdomain sshd[282427]: Invalid user admin from 5.253.59.68 port 55352
Feb 20 09:39:37 np0005625203.localdomain sshd[282427]: Received disconnect from 5.253.59.68 port 55352:11: Bye Bye [preauth]
Feb 20 09:39:37 np0005625203.localdomain sshd[282427]: Disconnected from invalid user admin 5.253.59.68 port 55352 [preauth]
Feb 20 09:39:38 np0005625203.localdomain sudo[282464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:39:38 np0005625203.localdomain sudo[282464]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:39:38 np0005625203.localdomain sudo[282464]: pam_unix(sudo:session): session closed for user root
Feb 20 09:39:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:39:42.342 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:39:42 np0005625203.localdomain sshd[282482]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:39:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:39:43.343 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:39:43 np0005625203.localdomain sshd[282482]: Invalid user ali from 194.107.115.2 port 20324
Feb 20 09:39:43 np0005625203.localdomain sshd[282482]: Received disconnect from 194.107.115.2 port 20324:11: Bye Bye [preauth]
Feb 20 09:39:43 np0005625203.localdomain sshd[282482]: Disconnected from invalid user ali 194.107.115.2 port 20324 [preauth]
Feb 20 09:39:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:39:44.337 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:39:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:39:44.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:39:44 np0005625203.localdomain sshd[282484]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:39:45 np0005625203.localdomain sshd[282484]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:39:45 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:39:45 np0005625203.localdomain podman[282486]: 2026-02-20 09:39:45.186015231 +0000 UTC m=+0.085461227 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Feb 20 09:39:45 np0005625203.localdomain podman[282486]: 2026-02-20 09:39:45.252655594 +0000 UTC m=+0.152101570 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:39:45 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:39:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:39:45.340 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:39:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:39:45.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:39:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:39:45.359 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:39:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:39:45.360 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:39:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:39:45.360 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:39:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:39:45.360 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Auditing locally available compute resources for np0005625203.localdomain (node: np0005625203.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 20 09:39:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:39:45.361 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:39:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:39:45.784 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:39:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:39:45.998 279640 WARNING nova.virt.libvirt.driver [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 20 09:39:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:39:46.000 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Hypervisor/Node resource view: name=np0005625203.localdomain free_ram=12876MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": 
null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 20 09:39:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:39:46.000 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:39:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:39:46.001 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:39:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:39:46.070 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 20 09:39:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:39:46.070 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Final resource view: name=np0005625203.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 20 09:39:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:39:46.094 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:39:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:39:46.567 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:39:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:39:46.574 279640 DEBUG nova.compute.provider_tree [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Inventory has not changed in ProviderTree for provider: e5d5157a-2df2-4f51-b5fb-cd2da3a8584e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 20 09:39:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:39:46.627 279640 DEBUG nova.scheduler.client.report [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Inventory has not changed for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 20 09:39:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:39:46.630 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Compute_service record updated for np0005625203.localdomain:np0005625203.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 20 09:39:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:39:46.630 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 20 09:39:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 7200.1 total, 600.0 interval
                                                          Cumulative writes: 4904 writes, 22K keys, 4904 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
                                                          Cumulative WAL: 4904 writes, 689 syncs, 7.12 writes per sync, written: 0.02 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 60 writes, 188 keys, 60 commit groups, 1.0 writes per commit group, ingest: 0.33 MB, 0.00 MB/s
                                                          Interval WAL: 60 writes, 29 syncs, 2.07 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 20 09:39:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:39:47.627 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:39:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:39:47.646 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:39:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:39:47.646 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 20 09:39:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:39:47.647 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 20 09:39:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:39:47.659 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 20 09:39:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:39:47.659 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:39:48 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:39:48.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:39:48 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:39:48.342 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 20 09:39:48 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:39:48 np0005625203.localdomain systemd[1]: tmp-crun.rXaCzy.mount: Deactivated successfully.
Feb 20 09:39:48 np0005625203.localdomain podman[282557]: 2026-02-20 09:39:48.773733209 +0000 UTC m=+0.089834873 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, maintainer=Red Hat, Inc., config_id=openstack_network_exporter, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, build-date=2026-02-05T04:57:10Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., org.opencontainers.image.created=2026-02-05T04:57:10Z, vendor=Red Hat, Inc., 
container_name=openstack_network_exporter, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, name=ubi9/ubi-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, version=9.7, io.buildah.version=1.33.7, managed_by=edpm_ansible)
Feb 20 09:39:48 np0005625203.localdomain podman[282557]: 2026-02-20 09:39:48.810334772 +0000 UTC m=+0.126436426 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, build-date=2026-02-05T04:57:10Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, version=9.7, name=ubi9/ubi-minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, config_id=openstack_network_exporter, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, container_name=openstack_network_exporter)
Feb 20 09:39:48 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:39:50 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:39:50 np0005625203.localdomain podman[282577]: 2026-02-20 09:39:50.764344144 +0000 UTC m=+0.085077195 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute)
Feb 20 09:39:50 np0005625203.localdomain podman[282577]: 2026-02-20 09:39:50.775556551 +0000 UTC m=+0.096289572 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260127, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute)
Feb 20 09:39:50 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:39:50 np0005625203.localdomain rsyslogd[758]: imjournal: 1677 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Feb 20 09:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 20 09:39:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 7200.1 total, 600.0 interval
                                                          Cumulative writes: 5927 writes, 25K keys, 5927 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
                                                          Cumulative WAL: 5927 writes, 797 syncs, 7.44 writes per sync, written: 0.02 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 84 writes, 243 keys, 84 commit groups, 1.0 writes per commit group, ingest: 0.26 MB, 0.00 MB/s
                                                          Interval WAL: 84 writes, 33 syncs, 2.55 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 20 09:39:54 np0005625203.localdomain sshd[282596]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:39:54 np0005625203.localdomain sshd[282596]: Invalid user default from 185.196.11.208 port 48886
Feb 20 09:39:55 np0005625203.localdomain sshd[282596]: Received disconnect from 185.196.11.208 port 48886:11: Bye Bye [preauth]
Feb 20 09:39:55 np0005625203.localdomain sshd[282596]: Disconnected from invalid user default 185.196.11.208 port 48886 [preauth]
Feb 20 09:39:56 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:39:56 np0005625203.localdomain podman[282598]: 2026-02-20 09:39:56.766239815 +0000 UTC m=+0.081228486 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 20 09:39:56 np0005625203.localdomain podman[282598]: 2026-02-20 09:39:56.775769009 +0000 UTC m=+0.090757660 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 20 09:39:56 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 09:39:57 np0005625203.localdomain sshd[282621]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:39:57 np0005625203.localdomain sshd[282621]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:39:58 np0005625203.localdomain podman[240359]: time="2026-02-20T09:39:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:39:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:39:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 149551 "" "Go-http-client/1.1"
Feb 20 09:39:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:39:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16810 "" "Go-http-client/1.1"
Feb 20 09:40:00 np0005625203.localdomain sshd[282623]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:40:00 np0005625203.localdomain sshd[282623]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:40:01 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:40:01 np0005625203.localdomain podman[282625]: 2026-02-20 09:40:01.758450331 +0000 UTC m=+0.071757074 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 20 09:40:01 np0005625203.localdomain podman[282625]: 2026-02-20 09:40:01.766282383 +0000 UTC m=+0.079589096 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 20 09:40:01 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:40:03 np0005625203.localdomain sshd[282648]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:40:05 np0005625203.localdomain sshd[282648]: Received disconnect from 118.99.80.29 port 21030:11: Bye Bye [preauth]
Feb 20 09:40:05 np0005625203.localdomain sshd[282648]: Disconnected from authenticating user root 118.99.80.29 port 21030 [preauth]
Feb 20 09:40:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:40:07 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 09:40:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:40:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:40:07 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 09:40:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:40:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:40:07.654 161112 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:40:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:40:07.655 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:40:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:40:07.656 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:40:08 np0005625203.localdomain sudo[282650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:40:08 np0005625203.localdomain sudo[282650]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:40:08 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:40:08 np0005625203.localdomain sudo[282650]: pam_unix(sudo:session): session closed for user root
Feb 20 09:40:08 np0005625203.localdomain systemd[1]: tmp-crun.Rxw4F1.mount: Deactivated successfully.
Feb 20 09:40:08 np0005625203.localdomain podman[282667]: 2026-02-20 09:40:08.764096881 +0000 UTC m=+0.088238264 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 20 09:40:08 np0005625203.localdomain podman[282667]: 2026-02-20 09:40:08.797483364 +0000 UTC m=+0.121624747 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 20 09:40:08 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:40:11 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk Updating MDS map to version 13 from mon.1
Feb 20 09:40:11 np0005625203.localdomain ceph-mds[282126]: mds.0.13 handle_mds_map i am now mds.0.13
Feb 20 09:40:11 np0005625203.localdomain ceph-mds[282126]: mds.0.13 handle_mds_map state change up:standby --> up:replay
Feb 20 09:40:11 np0005625203.localdomain ceph-mds[282126]: mds.0.13 replay_start
Feb 20 09:40:11 np0005625203.localdomain ceph-mds[282126]: mds.0.13  waiting for osdmap 83 (which blocklists prior instance)
Feb 20 09:40:11 np0005625203.localdomain ceph-mds[282126]: mds.0.cache creating system inode with ino:0x100
Feb 20 09:40:11 np0005625203.localdomain ceph-mds[282126]: mds.0.cache creating system inode with ino:0x1
Feb 20 09:40:11 np0005625203.localdomain ceph-mds[282126]: mds.0.13 Finished replaying journal
Feb 20 09:40:11 np0005625203.localdomain ceph-mds[282126]: mds.0.13 making mds journal writeable
Feb 20 09:40:12 np0005625203.localdomain sshd[281312]: Received disconnect from 192.168.122.11 port 44082:11: disconnected by user
Feb 20 09:40:12 np0005625203.localdomain sshd[281312]: Disconnected from user tripleo-admin 192.168.122.11 port 44082
Feb 20 09:40:12 np0005625203.localdomain sshd[281292]: pam_unix(sshd:session): session closed for user tripleo-admin
Feb 20 09:40:12 np0005625203.localdomain systemd[1]: session-61.scope: Deactivated successfully.
Feb 20 09:40:12 np0005625203.localdomain systemd[1]: session-61.scope: Consumed 1.356s CPU time.
Feb 20 09:40:12 np0005625203.localdomain systemd-logind[759]: Session 61 logged out. Waiting for processes to exit.
Feb 20 09:40:12 np0005625203.localdomain systemd-logind[759]: Removed session 61.
Feb 20 09:40:12 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk Updating MDS map to version 14 from mon.1
Feb 20 09:40:12 np0005625203.localdomain ceph-mds[282126]: mds.0.13 handle_mds_map i am now mds.0.13
Feb 20 09:40:12 np0005625203.localdomain ceph-mds[282126]: mds.0.13 handle_mds_map state change up:replay --> up:reconnect
Feb 20 09:40:12 np0005625203.localdomain ceph-mds[282126]: mds.0.13 reconnect_start
Feb 20 09:40:12 np0005625203.localdomain ceph-mds[282126]: mds.0.13 reopen_log
Feb 20 09:40:12 np0005625203.localdomain ceph-mds[282126]: mds.0.13 reconnect_done
Feb 20 09:40:13 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk Updating MDS map to version 15 from mon.1
Feb 20 09:40:13 np0005625203.localdomain ceph-mds[282126]: mds.0.13 handle_mds_map i am now mds.0.13
Feb 20 09:40:13 np0005625203.localdomain ceph-mds[282126]: mds.0.13 handle_mds_map state change up:reconnect --> up:rejoin
Feb 20 09:40:13 np0005625203.localdomain ceph-mds[282126]: mds.0.13 rejoin_start
Feb 20 09:40:13 np0005625203.localdomain ceph-mds[282126]: mds.0.13 rejoin_joint_start
Feb 20 09:40:13 np0005625203.localdomain ceph-mds[282126]: mds.0.13 rejoin_done
Feb 20 09:40:14 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk Updating MDS map to version 16 from mon.1
Feb 20 09:40:14 np0005625203.localdomain ceph-mds[282126]: mds.0.13 handle_mds_map i am now mds.0.13
Feb 20 09:40:14 np0005625203.localdomain ceph-mds[282126]: mds.0.13 handle_mds_map state change up:rejoin --> up:active
Feb 20 09:40:14 np0005625203.localdomain ceph-mds[282126]: mds.0.13 recovery_done -- successful recovery!
Feb 20 09:40:14 np0005625203.localdomain ceph-mds[282126]: mds.0.13 active_start
Feb 20 09:40:14 np0005625203.localdomain ceph-mds[282126]: mds.0.13 cluster recovered.
Feb 20 09:40:15 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:40:15 np0005625203.localdomain podman[282699]: 2026-02-20 09:40:15.769095959 +0000 UTC m=+0.084543649 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 20 09:40:15 np0005625203.localdomain podman[282699]: 2026-02-20 09:40:15.832776101 +0000 UTC m=+0.148223751 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller)
Feb 20 09:40:15 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:40:17 np0005625203.localdomain ceph-mds[282126]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Feb 20 09:40:17 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mds-mds-np0005625203-zsrwgk[282122]: 2026-02-20T09:40:17.995+0000 7fbc25595640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Feb 20 09:40:19 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:40:19 np0005625203.localdomain podman[282725]: 2026-02-20 09:40:19.235188079 +0000 UTC m=+0.073527847 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, org.opencontainers.image.created=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.expose-services=, container_name=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2026-02-05T04:57:10Z, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, architecture=x86_64, name=ubi9/ubi-minimal)
Feb 20 09:40:19 np0005625203.localdomain podman[282725]: 2026-02-20 09:40:19.271206995 +0000 UTC m=+0.109546703 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, vcs-type=git, build-date=2026-02-05T04:57:10Z, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, architecture=x86_64, version=9.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9/ubi-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.openshift.tags=minimal rhel9, config_id=openstack_network_exporter)
Feb 20 09:40:19 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:40:21 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:40:21 np0005625203.localdomain podman[282746]: 2026-02-20 09:40:21.761171092 +0000 UTC m=+0.082889487 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute)
Feb 20 09:40:21 np0005625203.localdomain podman[282746]: 2026-02-20 09:40:21.802770701 +0000 UTC m=+0.124489096 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20260127, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:40:21 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:40:22 np0005625203.localdomain sudo[282765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:40:22 np0005625203.localdomain sudo[282765]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:40:22 np0005625203.localdomain systemd[1]: Stopping User Manager for UID 1003...
Feb 20 09:40:22 np0005625203.localdomain systemd[281296]: Activating special unit Exit the Session...
Feb 20 09:40:22 np0005625203.localdomain systemd[281296]: Stopped target Main User Target.
Feb 20 09:40:22 np0005625203.localdomain systemd[281296]: Stopped target Basic System.
Feb 20 09:40:22 np0005625203.localdomain systemd[281296]: Stopped target Paths.
Feb 20 09:40:22 np0005625203.localdomain systemd[281296]: Stopped target Sockets.
Feb 20 09:40:22 np0005625203.localdomain systemd[281296]: Stopped target Timers.
Feb 20 09:40:22 np0005625203.localdomain systemd[281296]: Stopped Mark boot as successful after the user session has run 2 minutes.
Feb 20 09:40:22 np0005625203.localdomain systemd[281296]: Stopped Daily Cleanup of User's Temporary Directories.
Feb 20 09:40:22 np0005625203.localdomain systemd[281296]: Closed D-Bus User Message Bus Socket.
Feb 20 09:40:22 np0005625203.localdomain systemd[281296]: Stopped Create User's Volatile Files and Directories.
Feb 20 09:40:22 np0005625203.localdomain systemd[281296]: Removed slice User Application Slice.
Feb 20 09:40:22 np0005625203.localdomain systemd[281296]: Reached target Shutdown.
Feb 20 09:40:22 np0005625203.localdomain systemd[281296]: Finished Exit the Session.
Feb 20 09:40:22 np0005625203.localdomain systemd[281296]: Reached target Exit the Session.
Feb 20 09:40:22 np0005625203.localdomain sudo[282765]: pam_unix(sudo:session): session closed for user root
Feb 20 09:40:22 np0005625203.localdomain systemd[1]: user@1003.service: Deactivated successfully.
Feb 20 09:40:22 np0005625203.localdomain systemd[1]: Stopped User Manager for UID 1003.
Feb 20 09:40:22 np0005625203.localdomain systemd[1]: Stopping User Runtime Directory /run/user/1003...
Feb 20 09:40:22 np0005625203.localdomain systemd[1]: run-user-1003.mount: Deactivated successfully.
Feb 20 09:40:22 np0005625203.localdomain systemd[1]: user-runtime-dir@1003.service: Deactivated successfully.
Feb 20 09:40:22 np0005625203.localdomain systemd[1]: Stopped User Runtime Directory /run/user/1003.
Feb 20 09:40:22 np0005625203.localdomain systemd[1]: Removed slice User Slice of UID 1003.
Feb 20 09:40:22 np0005625203.localdomain systemd[1]: user-1003.slice: Consumed 1.708s CPU time.
Feb 20 09:40:22 np0005625203.localdomain sudo[282784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:40:22 np0005625203.localdomain sudo[282784]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:40:22 np0005625203.localdomain sudo[282784]: pam_unix(sudo:session): session closed for user root
Feb 20 09:40:22 np0005625203.localdomain sudo[282802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ceph-volume --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8 -- inventory --format=json-pretty --filter-for-batch
Feb 20 09:40:22 np0005625203.localdomain sudo[282802]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:40:23 np0005625203.localdomain podman[282860]: 
Feb 20 09:40:23 np0005625203.localdomain podman[282860]: 2026-02-20 09:40:23.543061766 +0000 UTC m=+0.080063820 container create a13e1fd3243fa865007d2ee25dfbb70174b889367514c7849be9830e3276843d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_ishizaka, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2, ceph=True, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, GIT_CLEAN=True, vcs-type=git, vendor=Red Hat, Inc., build-date=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, version=7, release=1770267347)
Feb 20 09:40:23 np0005625203.localdomain systemd[1]: Started libpod-conmon-a13e1fd3243fa865007d2ee25dfbb70174b889367514c7849be9830e3276843d.scope.
Feb 20 09:40:23 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:40:23 np0005625203.localdomain podman[282860]: 2026-02-20 09:40:23.509497317 +0000 UTC m=+0.046499421 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 09:40:23 np0005625203.localdomain podman[282860]: 2026-02-20 09:40:23.618282175 +0000 UTC m=+0.155284239 container init a13e1fd3243fa865007d2ee25dfbb70174b889367514c7849be9830e3276843d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_ishizaka, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., GIT_CLEAN=True, com.redhat.component=rhceph-container, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, io.buildah.version=1.42.2, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, build-date=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, vcs-type=git, org.opencontainers.image.created=2026-02-09T10:25:24Z, version=7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, architecture=x86_64, description=Red Hat Ceph Storage 7)
Feb 20 09:40:23 np0005625203.localdomain podman[282860]: 2026-02-20 09:40:23.62845155 +0000 UTC m=+0.165453604 container start a13e1fd3243fa865007d2ee25dfbb70174b889367514c7849be9830e3276843d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_ishizaka, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, ceph=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, io.k8s.description=Red Hat Ceph Storage 7, release=1770267347)
Feb 20 09:40:23 np0005625203.localdomain podman[282860]: 2026-02-20 09:40:23.62877796 +0000 UTC m=+0.165780014 container attach a13e1fd3243fa865007d2ee25dfbb70174b889367514c7849be9830e3276843d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_ishizaka, name=rhceph, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, version=7, RELEASE=main, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, vcs-type=git, com.redhat.component=rhceph-container, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Feb 20 09:40:23 np0005625203.localdomain objective_ishizaka[282875]: 167 167
Feb 20 09:40:23 np0005625203.localdomain systemd[1]: libpod-a13e1fd3243fa865007d2ee25dfbb70174b889367514c7849be9830e3276843d.scope: Deactivated successfully.
Feb 20 09:40:23 np0005625203.localdomain podman[282860]: 2026-02-20 09:40:23.632303999 +0000 UTC m=+0.169306073 container died a13e1fd3243fa865007d2ee25dfbb70174b889367514c7849be9830e3276843d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_ishizaka, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, build-date=2026-02-09T10:25:24Z, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, name=rhceph, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_BRANCH=main, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2)
Feb 20 09:40:23 np0005625203.localdomain podman[282880]: 2026-02-20 09:40:23.735516595 +0000 UTC m=+0.089844273 container remove a13e1fd3243fa865007d2ee25dfbb70174b889367514c7849be9830e3276843d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_ishizaka, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, vcs-type=git, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, io.buildah.version=1.42.2, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, GIT_BRANCH=main, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Feb 20 09:40:23 np0005625203.localdomain systemd[1]: libpod-conmon-a13e1fd3243fa865007d2ee25dfbb70174b889367514c7849be9830e3276843d.scope: Deactivated successfully.
Feb 20 09:40:23 np0005625203.localdomain podman[282901]: 
Feb 20 09:40:23 np0005625203.localdomain podman[282901]: 2026-02-20 09:40:23.946271651 +0000 UTC m=+0.075173979 container create 2150580e5e3199e19f6668b2c3d60831ab53df2e9a4221abd67fdd4cb57845c6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=fervent_mccarthy, build-date=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, release=1770267347, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, io.buildah.version=1.42.2, distribution-scope=public, vcs-type=git, version=7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers)
Feb 20 09:40:23 np0005625203.localdomain systemd[1]: Started libpod-conmon-2150580e5e3199e19f6668b2c3d60831ab53df2e9a4221abd67fdd4cb57845c6.scope.
Feb 20 09:40:24 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:40:24 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa5efb0fc0779c58d07fee419a73556336974ebcaa3c8f9e49bbb9414ae75293/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Feb 20 09:40:24 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa5efb0fc0779c58d07fee419a73556336974ebcaa3c8f9e49bbb9414ae75293/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 20 09:40:24 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa5efb0fc0779c58d07fee419a73556336974ebcaa3c8f9e49bbb9414ae75293/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 20 09:40:24 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa5efb0fc0779c58d07fee419a73556336974ebcaa3c8f9e49bbb9414ae75293/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 20 09:40:24 np0005625203.localdomain podman[282901]: 2026-02-20 09:40:24.0159977 +0000 UTC m=+0.144900018 container init 2150580e5e3199e19f6668b2c3d60831ab53df2e9a4221abd67fdd4cb57845c6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=fervent_mccarthy, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.42.2, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, architecture=x86_64, release=1770267347, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, vendor=Red Hat, Inc., version=7, CEPH_POINT_RELEASE=, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 09:40:24 np0005625203.localdomain podman[282901]: 2026-02-20 09:40:23.918754188 +0000 UTC m=+0.047656586 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 09:40:24 np0005625203.localdomain podman[282901]: 2026-02-20 09:40:24.025590166 +0000 UTC m=+0.154492524 container start 2150580e5e3199e19f6668b2c3d60831ab53df2e9a4221abd67fdd4cb57845c6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=fervent_mccarthy, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, io.openshift.expose-services=, name=rhceph, GIT_BRANCH=main, GIT_CLEAN=True, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1770267347, io.buildah.version=1.42.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 09:40:24 np0005625203.localdomain podman[282901]: 2026-02-20 09:40:24.025824675 +0000 UTC m=+0.154727003 container attach 2150580e5e3199e19f6668b2c3d60831ab53df2e9a4221abd67fdd4cb57845c6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=fervent_mccarthy, io.buildah.version=1.42.2, architecture=x86_64, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, distribution-scope=public, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, name=rhceph)
Feb 20 09:40:24 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-d1d593fe43a07b81f36524297260f9e5fe890ce5a73be53a29f6e1750fff68a0-merged.mount: Deactivated successfully.
Feb 20 09:40:24 np0005625203.localdomain fervent_mccarthy[282916]: [
Feb 20 09:40:24 np0005625203.localdomain fervent_mccarthy[282916]:     {
Feb 20 09:40:24 np0005625203.localdomain fervent_mccarthy[282916]:         "available": false,
Feb 20 09:40:24 np0005625203.localdomain fervent_mccarthy[282916]:         "ceph_device": false,
Feb 20 09:40:24 np0005625203.localdomain fervent_mccarthy[282916]:         "device_id": "QEMU_DVD-ROM_QM00001",
Feb 20 09:40:24 np0005625203.localdomain fervent_mccarthy[282916]:         "lsm_data": {},
Feb 20 09:40:24 np0005625203.localdomain fervent_mccarthy[282916]:         "lvs": [],
Feb 20 09:40:24 np0005625203.localdomain fervent_mccarthy[282916]:         "path": "/dev/sr0",
Feb 20 09:40:24 np0005625203.localdomain fervent_mccarthy[282916]:         "rejected_reasons": [
Feb 20 09:40:24 np0005625203.localdomain fervent_mccarthy[282916]:             "Has a FileSystem",
Feb 20 09:40:24 np0005625203.localdomain fervent_mccarthy[282916]:             "Insufficient space (<5GB)"
Feb 20 09:40:24 np0005625203.localdomain fervent_mccarthy[282916]:         ],
Feb 20 09:40:24 np0005625203.localdomain fervent_mccarthy[282916]:         "sys_api": {
Feb 20 09:40:24 np0005625203.localdomain fervent_mccarthy[282916]:             "actuators": null,
Feb 20 09:40:24 np0005625203.localdomain fervent_mccarthy[282916]:             "device_nodes": "sr0",
Feb 20 09:40:24 np0005625203.localdomain fervent_mccarthy[282916]:             "human_readable_size": "482.00 KB",
Feb 20 09:40:24 np0005625203.localdomain fervent_mccarthy[282916]:             "id_bus": "ata",
Feb 20 09:40:24 np0005625203.localdomain fervent_mccarthy[282916]:             "model": "QEMU DVD-ROM",
Feb 20 09:40:24 np0005625203.localdomain fervent_mccarthy[282916]:             "nr_requests": "2",
Feb 20 09:40:24 np0005625203.localdomain fervent_mccarthy[282916]:             "partitions": {},
Feb 20 09:40:24 np0005625203.localdomain fervent_mccarthy[282916]:             "path": "/dev/sr0",
Feb 20 09:40:24 np0005625203.localdomain fervent_mccarthy[282916]:             "removable": "1",
Feb 20 09:40:24 np0005625203.localdomain fervent_mccarthy[282916]:             "rev": "2.5+",
Feb 20 09:40:24 np0005625203.localdomain fervent_mccarthy[282916]:             "ro": "0",
Feb 20 09:40:24 np0005625203.localdomain fervent_mccarthy[282916]:             "rotational": "1",
Feb 20 09:40:24 np0005625203.localdomain fervent_mccarthy[282916]:             "sas_address": "",
Feb 20 09:40:24 np0005625203.localdomain fervent_mccarthy[282916]:             "sas_device_handle": "",
Feb 20 09:40:24 np0005625203.localdomain fervent_mccarthy[282916]:             "scheduler_mode": "mq-deadline",
Feb 20 09:40:24 np0005625203.localdomain fervent_mccarthy[282916]:             "sectors": 0,
Feb 20 09:40:24 np0005625203.localdomain fervent_mccarthy[282916]:             "sectorsize": "2048",
Feb 20 09:40:24 np0005625203.localdomain fervent_mccarthy[282916]:             "size": 493568.0,
Feb 20 09:40:24 np0005625203.localdomain fervent_mccarthy[282916]:             "support_discard": "0",
Feb 20 09:40:24 np0005625203.localdomain fervent_mccarthy[282916]:             "type": "disk",
Feb 20 09:40:24 np0005625203.localdomain fervent_mccarthy[282916]:             "vendor": "QEMU"
Feb 20 09:40:24 np0005625203.localdomain fervent_mccarthy[282916]:         }
Feb 20 09:40:24 np0005625203.localdomain fervent_mccarthy[282916]:     }
Feb 20 09:40:24 np0005625203.localdomain fervent_mccarthy[282916]: ]
Feb 20 09:40:24 np0005625203.localdomain systemd[1]: libpod-2150580e5e3199e19f6668b2c3d60831ab53df2e9a4221abd67fdd4cb57845c6.scope: Deactivated successfully.
Feb 20 09:40:24 np0005625203.localdomain podman[282901]: 2026-02-20 09:40:24.981464004 +0000 UTC m=+1.110366352 container died 2150580e5e3199e19f6668b2c3d60831ab53df2e9a4221abd67fdd4cb57845c6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=fervent_mccarthy, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, distribution-scope=public, name=rhceph, com.redhat.component=rhceph-container, release=1770267347, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, GIT_CLEAN=True, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, io.buildah.version=1.42.2, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7)
Feb 20 09:40:25 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-aa5efb0fc0779c58d07fee419a73556336974ebcaa3c8f9e49bbb9414ae75293-merged.mount: Deactivated successfully.
Feb 20 09:40:25 np0005625203.localdomain podman[284830]: 2026-02-20 09:40:25.096077103 +0000 UTC m=+0.100387459 container remove 2150580e5e3199e19f6668b2c3d60831ab53df2e9a4221abd67fdd4cb57845c6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=fervent_mccarthy, io.openshift.tags=rhceph ceph, ceph=True, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, io.openshift.expose-services=, release=1770267347, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, RELEASE=main, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=)
Feb 20 09:40:25 np0005625203.localdomain systemd[1]: libpod-conmon-2150580e5e3199e19f6668b2c3d60831ab53df2e9a4221abd67fdd4cb57845c6.scope: Deactivated successfully.
Feb 20 09:40:25 np0005625203.localdomain sudo[282802]: pam_unix(sudo:session): session closed for user root
Feb 20 09:40:25 np0005625203.localdomain sudo[284844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:40:25 np0005625203.localdomain sudo[284844]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:40:25 np0005625203.localdomain sudo[284844]: pam_unix(sudo:session): session closed for user root
Feb 20 09:40:26 np0005625203.localdomain sudo[284862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:40:26 np0005625203.localdomain sudo[284862]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:40:26 np0005625203.localdomain sudo[284862]: pam_unix(sudo:session): session closed for user root
Feb 20 09:40:27 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:40:27 np0005625203.localdomain podman[284880]: 2026-02-20 09:40:27.761151033 +0000 UTC m=+0.074820908 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 20 09:40:27 np0005625203.localdomain podman[284880]: 2026-02-20 09:40:27.773212166 +0000 UTC m=+0.086882031 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 20 09:40:27 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 09:40:28 np0005625203.localdomain podman[240359]: time="2026-02-20T09:40:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:40:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:40:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 149551 "" "Go-http-client/1.1"
Feb 20 09:40:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:40:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16812 "" "Go-http-client/1.1"
Feb 20 09:40:32 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:40:32 np0005625203.localdomain podman[284905]: 2026-02-20 09:40:32.754210226 +0000 UTC m=+0.074855868 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 20 09:40:32 np0005625203.localdomain podman[284905]: 2026-02-20 09:40:32.760998927 +0000 UTC m=+0.081644579 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 20 09:40:32 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:40:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:40:37 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 09:40:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:40:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:40:37 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 09:40:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:40:39 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:40:39 np0005625203.localdomain podman[284929]: 2026-02-20 09:40:39.757840473 +0000 UTC m=+0.079415030 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:40:39 np0005625203.localdomain podman[284929]: 2026-02-20 09:40:39.793462146 +0000 UTC m=+0.115036703 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 20 09:40:39 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:40:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:40:42.342 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:40:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:40:42.342 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 20 09:40:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:40:42.364 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 20 09:40:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:40:42.366 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:40:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:40:42.366 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 20 09:40:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:40:42.382 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:40:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:40:44.394 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:40:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:40:45.338 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:40:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:40:45.340 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:40:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:40:45.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:40:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:40:45.363 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:40:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:40:45.364 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:40:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:40:45.364 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:40:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:40:45.364 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Auditing locally available compute resources for np0005625203.localdomain (node: np0005625203.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 20 09:40:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:40:45.365 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:40:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:40:45.798 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:40:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:40:46.002 279640 WARNING nova.virt.libvirt.driver [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 20 09:40:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:40:46.003 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Hypervisor/Node resource view: name=np0005625203.localdomain free_ram=12885MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 20 09:40:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:40:46.004 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:40:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:40:46.004 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:40:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:40:46.109 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 20 09:40:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:40:46.109 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Final resource view: name=np0005625203.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 20 09:40:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:40:46.162 279640 DEBUG nova.scheduler.client.report [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Refreshing inventories for resource provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 20 09:40:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:40:46.205 279640 DEBUG nova.scheduler.client.report [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Updating ProviderTree inventory for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 20 09:40:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:40:46.206 279640 DEBUG nova.compute.provider_tree [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Updating inventory in ProviderTree for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 20 09:40:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:40:46.222 279640 DEBUG nova.scheduler.client.report [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Refreshing aggregate associations for resource provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 20 09:40:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:40:46.251 279640 DEBUG nova.scheduler.client.report [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Refreshing trait associations for resource provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e, traits: COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_ABM,HW_CPU_X86_SSE42,HW_CPU_X86_MMX,HW_CPU_X86_AMD_SVM,HW_CPU_X86_BMI,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VOLUME_EXTEND,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSSE3,HW_CPU_X86_CLMUL,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE4A,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_F16C,HW_CPU_X86_SSE2,COMPUTE_NODE,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 20 09:40:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:40:46.277 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:40:46 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:40:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:40:46.754 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:40:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:40:46.760 279640 DEBUG nova.compute.provider_tree [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Inventory has not changed in ProviderTree for provider: e5d5157a-2df2-4f51-b5fb-cd2da3a8584e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 20 09:40:46 np0005625203.localdomain systemd[1]: tmp-crun.eQ77fj.mount: Deactivated successfully.
Feb 20 09:40:46 np0005625203.localdomain podman[284990]: 2026-02-20 09:40:46.769226119 +0000 UTC m=+0.088452129 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 20 09:40:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:40:46.776 279640 DEBUG nova.scheduler.client.report [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Inventory has not changed for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 20 09:40:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:40:46.778 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Compute_service record updated for np0005625203.localdomain:np0005625203.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 20 09:40:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:40:46.778 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:40:46 np0005625203.localdomain podman[284990]: 2026-02-20 09:40:46.862503647 +0000 UTC m=+0.181729617 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:40:46 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:40:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:40:47.779 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:40:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:40:47.780 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 20 09:40:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:40:47.780 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 20 09:40:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:40:47.804 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 20 09:40:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:40:47.805 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:40:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:40:47.806 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:40:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:40:47.806 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:40:49 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:40:49 np0005625203.localdomain podman[285016]: 2026-02-20 09:40:49.777473045 +0000 UTC m=+0.090232615 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., distribution-scope=public, version=9.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1770267347, org.opencontainers.image.created=2026-02-05T04:57:10Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, container_name=openstack_network_exporter, build-date=2026-02-05T04:57:10Z, managed_by=edpm_ansible, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9/ubi-minimal)
Feb 20 09:40:49 np0005625203.localdomain podman[285016]: 2026-02-20 09:40:49.789345902 +0000 UTC m=+0.102105472 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, version=9.7, distribution-scope=public, name=ubi9/ubi-minimal, vendor=Red Hat, Inc., container_name=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., build-date=2026-02-05T04:57:10Z, managed_by=edpm_ansible, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c)
Feb 20 09:40:49 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:40:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:40:50.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:40:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:40:50.342 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 20 09:40:51 np0005625203.localdomain sshd[285035]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:40:52 np0005625203.localdomain sshd[285035]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:40:52 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:40:52 np0005625203.localdomain podman[285037]: 2026-02-20 09:40:52.244514253 +0000 UTC m=+0.080435622 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Feb 20 09:40:52 np0005625203.localdomain podman[285037]: 2026-02-20 09:40:52.258401343 +0000 UTC m=+0.094322722 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, io.buildah.version=1.41.3, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260127, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_compute)
Feb 20 09:40:52 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:40:52 np0005625203.localdomain sudo[285056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:40:52 np0005625203.localdomain sudo[285056]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:40:52 np0005625203.localdomain sudo[285056]: pam_unix(sudo:session): session closed for user root
Feb 20 09:40:52 np0005625203.localdomain sudo[285074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 09:40:52 np0005625203.localdomain sudo[285074]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:40:53 np0005625203.localdomain sudo[285074]: pam_unix(sudo:session): session closed for user root
Feb 20 09:40:53 np0005625203.localdomain sudo[285125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:40:53 np0005625203.localdomain sudo[285125]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:40:53 np0005625203.localdomain sudo[285125]: pam_unix(sudo:session): session closed for user root
Feb 20 09:40:54 np0005625203.localdomain sudo[285143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:40:54 np0005625203.localdomain sudo[285143]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:40:54 np0005625203.localdomain sudo[285143]: pam_unix(sudo:session): session closed for user root
Feb 20 09:40:55 np0005625203.localdomain sudo[285161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:40:55 np0005625203.localdomain sudo[285161]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:40:55 np0005625203.localdomain sudo[285161]: pam_unix(sudo:session): session closed for user root
Feb 20 09:40:55 np0005625203.localdomain sshd[285179]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:40:55 np0005625203.localdomain sshd[285179]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:40:58 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:40:58 np0005625203.localdomain sudo[285181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:40:58 np0005625203.localdomain sudo[285181]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:40:58 np0005625203.localdomain sudo[285181]: pam_unix(sudo:session): session closed for user root
Feb 20 09:40:58 np0005625203.localdomain podman[285197]: 2026-02-20 09:40:58.769972653 +0000 UTC m=+0.077551202 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 20 09:40:58 np0005625203.localdomain podman[285197]: 2026-02-20 09:40:58.782400138 +0000 UTC m=+0.089978687 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 20 09:40:58 np0005625203.localdomain sudo[285211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:40:58 np0005625203.localdomain sudo[285211]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:40:58 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 09:40:59 np0005625203.localdomain podman[240359]: time="2026-02-20T09:40:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:40:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:40:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 149551 "" "Go-http-client/1.1"
Feb 20 09:40:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:40:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 16813 "" "Go-http-client/1.1"
Feb 20 09:40:59 np0005625203.localdomain podman[285284]: 
Feb 20 09:40:59 np0005625203.localdomain podman[285284]: 2026-02-20 09:40:59.298989554 +0000 UTC m=+0.074451547 container create f09667faa87b24a84b9cf1ce207fcfb41d2f292703a41c2447a3ff86c6138e14 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_swirles, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, distribution-scope=public, ceph=True, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, io.openshift.expose-services=, version=7, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, io.openshift.tags=rhceph ceph, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, build-date=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 09:40:59 np0005625203.localdomain systemd[1]: Started libpod-conmon-f09667faa87b24a84b9cf1ce207fcfb41d2f292703a41c2447a3ff86c6138e14.scope.
Feb 20 09:40:59 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:40:59 np0005625203.localdomain podman[285284]: 2026-02-20 09:40:59.266821507 +0000 UTC m=+0.042283540 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 09:40:59 np0005625203.localdomain podman[285284]: 2026-02-20 09:40:59.373635864 +0000 UTC m=+0.149097857 container init f09667faa87b24a84b9cf1ce207fcfb41d2f292703a41c2447a3ff86c6138e14 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_swirles, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, build-date=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1770267347, distribution-scope=public, vcs-type=git, io.openshift.tags=rhceph ceph, ceph=True, architecture=x86_64, io.openshift.expose-services=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True)
Feb 20 09:40:59 np0005625203.localdomain podman[285284]: 2026-02-20 09:40:59.384949936 +0000 UTC m=+0.160411929 container start f09667faa87b24a84b9cf1ce207fcfb41d2f292703a41c2447a3ff86c6138e14 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_swirles, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, architecture=x86_64, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.buildah.version=1.42.2, vcs-type=git, ceph=True, build-date=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, CEPH_POINT_RELEASE=, name=rhceph, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=)
Feb 20 09:40:59 np0005625203.localdomain podman[285284]: 2026-02-20 09:40:59.385240715 +0000 UTC m=+0.160702718 container attach f09667faa87b24a84b9cf1ce207fcfb41d2f292703a41c2447a3ff86c6138e14 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_swirles, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, RELEASE=main, vcs-type=git, name=rhceph, architecture=x86_64, com.redhat.component=rhceph-container, release=1770267347, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, build-date=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7)
Feb 20 09:40:59 np0005625203.localdomain relaxed_swirles[285300]: 167 167
Feb 20 09:40:59 np0005625203.localdomain systemd[1]: libpod-f09667faa87b24a84b9cf1ce207fcfb41d2f292703a41c2447a3ff86c6138e14.scope: Deactivated successfully.
Feb 20 09:40:59 np0005625203.localdomain podman[285284]: 2026-02-20 09:40:59.389465495 +0000 UTC m=+0.164927498 container died f09667faa87b24a84b9cf1ce207fcfb41d2f292703a41c2447a3ff86c6138e14 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_swirles, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.created=2026-02-09T10:25:24Z, architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, GIT_CLEAN=True, vcs-type=git, vendor=Red Hat, Inc., release=1770267347, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, build-date=2026-02-09T10:25:24Z)
Feb 20 09:40:59 np0005625203.localdomain podman[285305]: 2026-02-20 09:40:59.47453677 +0000 UTC m=+0.077223023 container remove f09667faa87b24a84b9cf1ce207fcfb41d2f292703a41c2447a3ff86c6138e14 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=relaxed_swirles, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, io.buildah.version=1.42.2, ceph=True, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, vcs-type=git, architecture=x86_64, build-date=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, io.openshift.tags=rhceph ceph, RELEASE=main, description=Red Hat Ceph Storage 7)
Feb 20 09:40:59 np0005625203.localdomain systemd[1]: libpod-conmon-f09667faa87b24a84b9cf1ce207fcfb41d2f292703a41c2447a3ff86c6138e14.scope: Deactivated successfully.
Feb 20 09:40:59 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:40:59 np0005625203.localdomain systemd-rc-local-generator[285348]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:40:59 np0005625203.localdomain systemd-sysv-generator[285353]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:40:59 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:40:59 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 09:40:59 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:40:59 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:40:59 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:40:59 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 09:40:59 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:40:59 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:40:59 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:40:59 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-cabe458963cbb7e8ccc357cef2fe8ab20b9812ee415d4c72103faf35dc8d42c7-merged.mount: Deactivated successfully.
Feb 20 09:40:59 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:41:00 np0005625203.localdomain systemd-sysv-generator[285387]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:41:00 np0005625203.localdomain systemd-rc-local-generator[285380]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:41:00 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:41:00 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 09:41:00 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:41:00 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:41:00 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:41:00 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 09:41:00 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:41:00 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:41:00 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:41:00 np0005625203.localdomain systemd[1]: Starting Ceph mgr.np0005625203.lonygy for a8557ee9-b55d-5519-942c-cf8f6172f1d8...
Feb 20 09:41:00 np0005625203.localdomain podman[285453]: 
Feb 20 09:41:00 np0005625203.localdomain podman[285453]: 2026-02-20 09:41:00.661992236 +0000 UTC m=+0.082995680 container create 29d47ad2f9162a35e0f5206f0da11ab2d31dfd6292a72e8f40994dca3dd033a6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., name=rhceph, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vcs-type=git, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, release=1770267347, version=7, architecture=x86_64, io.buildah.version=1.42.2, GIT_BRANCH=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2026-02-09T10:25:24Z)
Feb 20 09:41:00 np0005625203.localdomain podman[285453]: 2026-02-20 09:41:00.627187169 +0000 UTC m=+0.048190643 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 09:41:00 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbe0fb641a7d2ed731aeb77e30a33503b6e780e833e312d3c8327b6546ec7b90/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 20 09:41:00 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbe0fb641a7d2ed731aeb77e30a33503b6e780e833e312d3c8327b6546ec7b90/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 20 09:41:00 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbe0fb641a7d2ed731aeb77e30a33503b6e780e833e312d3c8327b6546ec7b90/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 20 09:41:00 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbe0fb641a7d2ed731aeb77e30a33503b6e780e833e312d3c8327b6546ec7b90/merged/var/lib/ceph/mgr/ceph-np0005625203.lonygy supports timestamps until 2038 (0x7fffffff)
Feb 20 09:41:00 np0005625203.localdomain podman[285453]: 2026-02-20 09:41:00.778155074 +0000 UTC m=+0.199158518 container init 29d47ad2f9162a35e0f5206f0da11ab2d31dfd6292a72e8f40994dca3dd033a6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy, ceph=True, name=rhceph, CEPH_POINT_RELEASE=, vcs-type=git, release=1770267347, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.created=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, build-date=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, version=7, io.openshift.tags=rhceph ceph, architecture=x86_64, GIT_BRANCH=main)
Feb 20 09:41:00 np0005625203.localdomain podman[285453]: 2026-02-20 09:41:00.78611786 +0000 UTC m=+0.207121294 container start 29d47ad2f9162a35e0f5206f0da11ab2d31dfd6292a72e8f40994dca3dd033a6 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., RELEASE=main, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, release=1770267347, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, build-date=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 09:41:00 np0005625203.localdomain bash[285453]: 29d47ad2f9162a35e0f5206f0da11ab2d31dfd6292a72e8f40994dca3dd033a6
Feb 20 09:41:00 np0005625203.localdomain systemd[1]: Started Ceph mgr.np0005625203.lonygy for a8557ee9-b55d-5519-942c-cf8f6172f1d8.
Feb 20 09:41:00 np0005625203.localdomain sudo[285211]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:00 np0005625203.localdomain ceph-mgr[285471]: set uid:gid to 167:167 (ceph:ceph)
Feb 20 09:41:00 np0005625203.localdomain ceph-mgr[285471]: ceph version 18.2.1-381.el9cp (984f410e2a30899deb131725765b62212b1621db) reef (stable), process ceph-mgr, pid 2
Feb 20 09:41:00 np0005625203.localdomain ceph-mgr[285471]: pidfile_write: ignore empty --pid-file
Feb 20 09:41:00 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'alerts'
Feb 20 09:41:00 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Feb 20 09:41:00 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:41:00.983+0000 7f5853ba2140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Feb 20 09:41:00 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'balancer'
Feb 20 09:41:01 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Feb 20 09:41:01 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:41:01.054+0000 7f5853ba2140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Feb 20 09:41:01 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'cephadm'
Feb 20 09:41:01 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'crash'
Feb 20 09:41:01 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Module crash has missing NOTIFY_TYPES member
Feb 20 09:41:01 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:41:01.790+0000 7f5853ba2140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Feb 20 09:41:01 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'dashboard'
Feb 20 09:41:02 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'devicehealth'
Feb 20 09:41:02 np0005625203.localdomain sshd[285501]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:41:02 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Feb 20 09:41:02 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'diskprediction_local'
Feb 20 09:41:02 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:41:02.406+0000 7f5853ba2140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Feb 20 09:41:02 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Feb 20 09:41:02 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Feb 20 09:41:02 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]:   from numpy import show_config as show_numpy_config
Feb 20 09:41:02 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Feb 20 09:41:02 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:41:02.546+0000 7f5853ba2140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Feb 20 09:41:02 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'influx'
Feb 20 09:41:02 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:41:02.605+0000 7f5853ba2140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Feb 20 09:41:02 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Module influx has missing NOTIFY_TYPES member
Feb 20 09:41:02 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'insights'
Feb 20 09:41:02 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'iostat'
Feb 20 09:41:02 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Feb 20 09:41:02 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:41:02.717+0000 7f5853ba2140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Feb 20 09:41:02 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'k8sevents'
Feb 20 09:41:03 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'localpool'
Feb 20 09:41:03 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'mds_autoscaler'
Feb 20 09:41:03 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'mirroring'
Feb 20 09:41:03 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'nfs'
Feb 20 09:41:03 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Feb 20 09:41:03 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'orchestrator'
Feb 20 09:41:03 np0005625203.localdomain sudo[285503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:41:03 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Feb 20 09:41:03 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'osd_perf_query'
Feb 20 09:41:03 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Feb 20 09:41:03 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'osd_support'
Feb 20 09:41:03 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Feb 20 09:41:03 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'pg_autoscaler'
Feb 20 09:41:03 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:41:03.540+0000 7f5853ba2140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Feb 20 09:41:03 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:41:03.691+0000 7f5853ba2140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Feb 20 09:41:03 np0005625203.localdomain sshd[285501]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:41:03 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:41:03.756+0000 7f5853ba2140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Feb 20 09:41:03 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:41:03.811+0000 7f5853ba2140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Feb 20 09:41:03 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:41:03 np0005625203.localdomain sudo[285503]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:03 np0005625203.localdomain sudo[285503]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:03 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Feb 20 09:41:03 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'progress'
Feb 20 09:41:03 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:41:03.888+0000 7f5853ba2140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Feb 20 09:41:03 np0005625203.localdomain systemd[1]: tmp-crun.iN2uZm.mount: Deactivated successfully.
Feb 20 09:41:03 np0005625203.localdomain podman[285520]: 2026-02-20 09:41:03.945800055 +0000 UTC m=+0.100048649 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 20 09:41:03 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Module progress has missing NOTIFY_TYPES member
Feb 20 09:41:03 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'prometheus'
Feb 20 09:41:03 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:41:03.956+0000 7f5853ba2140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Feb 20 09:41:03 np0005625203.localdomain podman[285520]: 2026-02-20 09:41:03.961231762 +0000 UTC m=+0.115480356 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 20 09:41:03 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:41:04 np0005625203.localdomain sudo[285542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:41:04 np0005625203.localdomain sudo[285542]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:04 np0005625203.localdomain sudo[285542]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:04 np0005625203.localdomain sudo[285563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Feb 20 09:41:04 np0005625203.localdomain sudo[285563]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:04 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Feb 20 09:41:04 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'rbd_support'
Feb 20 09:41:04 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:41:04.263+0000 7f5853ba2140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Feb 20 09:41:04 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Feb 20 09:41:04 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:41:04.349+0000 7f5853ba2140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Feb 20 09:41:04 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'restful'
Feb 20 09:41:04 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'rgw'
Feb 20 09:41:04 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Feb 20 09:41:04 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:41:04.692+0000 7f5853ba2140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Feb 20 09:41:04 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'rook'
Feb 20 09:41:04 np0005625203.localdomain systemd[1]: tmp-crun.HXw8wf.mount: Deactivated successfully.
Feb 20 09:41:04 np0005625203.localdomain podman[285651]: 2026-02-20 09:41:04.943195238 +0000 UTC m=+0.112351860 container exec f6bf94ad66e54cdd610286b233c639418eb2b995978f1a145d6cfa77a016071d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, ceph=True, io.openshift.expose-services=, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, build-date=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, vcs-type=git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.42.2, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Feb 20 09:41:05 np0005625203.localdomain podman[285651]: 2026-02-20 09:41:05.038327124 +0000 UTC m=+0.207483796 container exec_died f6bf94ad66e54cdd610286b233c639418eb2b995978f1a145d6cfa77a016071d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, release=1770267347, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, io.buildah.version=1.42.2, version=7, RELEASE=main, ceph=True, distribution-scope=public, architecture=x86_64, build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, com.redhat.component=rhceph-container)
Feb 20 09:41:05 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Module rook has missing NOTIFY_TYPES member
Feb 20 09:41:05 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'selftest'
Feb 20 09:41:05 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:41:05.125+0000 7f5853ba2140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Feb 20 09:41:05 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:41:05.189+0000 7f5853ba2140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Feb 20 09:41:05 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Feb 20 09:41:05 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'snap_schedule'
Feb 20 09:41:05 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'stats'
Feb 20 09:41:05 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'status'
Feb 20 09:41:05 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Module status has missing NOTIFY_TYPES member
Feb 20 09:41:05 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:41:05.391+0000 7f5853ba2140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Feb 20 09:41:05 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'telegraf'
Feb 20 09:41:05 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Feb 20 09:41:05 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'telemetry'
Feb 20 09:41:05 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:41:05.453+0000 7f5853ba2140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Feb 20 09:41:05 np0005625203.localdomain sudo[285563]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:05 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Feb 20 09:41:05 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:41:05.597+0000 7f5853ba2140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Feb 20 09:41:05 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'test_orchestrator'
Feb 20 09:41:05 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Feb 20 09:41:05 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:41:05.748+0000 7f5853ba2140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Feb 20 09:41:05 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'volumes'
Feb 20 09:41:05 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Feb 20 09:41:05 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:41:05.942+0000 7f5853ba2140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Feb 20 09:41:05 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'zabbix'
Feb 20 09:41:06 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:41:06.004+0000 7f5853ba2140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Feb 20 09:41:06 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Feb 20 09:41:06 np0005625203.localdomain ceph-mgr[285471]: ms_deliver_dispatch: unhandled message 0x557a2fb01600 mon_map magic: 0 from mon.1 v2:172.18.0.105:3300/0
Feb 20 09:41:06 np0005625203.localdomain ceph-mgr[285471]: client.0 ms_handle_reset on v2:172.18.0.103:6800/1308191220
Feb 20 09:41:06 np0005625203.localdomain sudo[285751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:41:06 np0005625203.localdomain sudo[285751]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:06 np0005625203.localdomain sudo[285751]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:06 np0005625203.localdomain sudo[285769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:41:06 np0005625203.localdomain sudo[285769]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:06 np0005625203.localdomain sudo[285769]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:41:07 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 09:41:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:41:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:41:07 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 09:41:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:41:07 np0005625203.localdomain sudo[285787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:41:07 np0005625203.localdomain sudo[285787]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:07 np0005625203.localdomain sudo[285787]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:41:07.655 161112 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:41:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:41:07.655 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:41:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:41:07.656 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:41:10 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:41:10 np0005625203.localdomain systemd[1]: tmp-crun.sFc3ud.mount: Deactivated successfully.
Feb 20 09:41:10 np0005625203.localdomain podman[285805]: 2026-02-20 09:41:10.804550636 +0000 UTC m=+0.119623665 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 20 09:41:10 np0005625203.localdomain podman[285805]: 2026-02-20 09:41:10.843276875 +0000 UTC m=+0.158349904 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:41:10 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:41:11 np0005625203.localdomain sudo[285822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:41:11 np0005625203.localdomain sudo[285822]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:11 np0005625203.localdomain sudo[285822]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:11 np0005625203.localdomain sudo[285840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Feb 20 09:41:11 np0005625203.localdomain sudo[285840]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:11 np0005625203.localdomain sudo[285840]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:11 np0005625203.localdomain sudo[285858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph
Feb 20 09:41:11 np0005625203.localdomain sudo[285858]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:11 np0005625203.localdomain sudo[285858]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:11 np0005625203.localdomain sudo[285876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:41:11 np0005625203.localdomain sudo[285876]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:11 np0005625203.localdomain sudo[285876]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:11 np0005625203.localdomain sudo[285894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:41:11 np0005625203.localdomain sudo[285894]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:11 np0005625203.localdomain sudo[285894]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:11 np0005625203.localdomain sudo[285912]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:41:11 np0005625203.localdomain sudo[285912]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:11 np0005625203.localdomain sudo[285912]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:12 np0005625203.localdomain sudo[285946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:41:12 np0005625203.localdomain sudo[285946]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:12 np0005625203.localdomain sudo[285946]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:12 np0005625203.localdomain sudo[285964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:41:12 np0005625203.localdomain sudo[285964]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:12 np0005625203.localdomain sudo[285964]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:12 np0005625203.localdomain sudo[285982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Feb 20 09:41:12 np0005625203.localdomain sudo[285982]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:12 np0005625203.localdomain sudo[285982]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:12 np0005625203.localdomain sudo[286000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config
Feb 20 09:41:12 np0005625203.localdomain sudo[286000]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:12 np0005625203.localdomain sudo[286000]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:12 np0005625203.localdomain sudo[286018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config
Feb 20 09:41:12 np0005625203.localdomain sudo[286018]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:12 np0005625203.localdomain sudo[286018]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:12 np0005625203.localdomain sudo[286036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:41:12 np0005625203.localdomain sudo[286036]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:12 np0005625203.localdomain sudo[286036]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:12 np0005625203.localdomain sudo[286054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:41:12 np0005625203.localdomain sudo[286054]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:12 np0005625203.localdomain sudo[286054]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:12 np0005625203.localdomain sudo[286072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:41:12 np0005625203.localdomain sudo[286072]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:12 np0005625203.localdomain sudo[286072]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:12 np0005625203.localdomain sudo[286106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:41:12 np0005625203.localdomain sudo[286106]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:12 np0005625203.localdomain sudo[286106]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:12 np0005625203.localdomain sudo[286124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:41:12 np0005625203.localdomain sudo[286124]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:12 np0005625203.localdomain sudo[286124]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:12 np0005625203.localdomain sudo[286142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:41:12 np0005625203.localdomain sudo[286142]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:12 np0005625203.localdomain sudo[286142]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:12 np0005625203.localdomain sudo[286160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Feb 20 09:41:12 np0005625203.localdomain sudo[286160]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:13 np0005625203.localdomain sudo[286160]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:13 np0005625203.localdomain sudo[286178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph
Feb 20 09:41:13 np0005625203.localdomain sudo[286178]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:13 np0005625203.localdomain sudo[286178]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:13 np0005625203.localdomain sudo[286196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.client.admin.keyring.new
Feb 20 09:41:13 np0005625203.localdomain sudo[286196]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:13 np0005625203.localdomain sudo[286196]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:13 np0005625203.localdomain sudo[286214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:41:13 np0005625203.localdomain sudo[286214]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:13 np0005625203.localdomain sudo[286214]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:13 np0005625203.localdomain sudo[286232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.client.admin.keyring.new
Feb 20 09:41:13 np0005625203.localdomain sudo[286232]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:13 np0005625203.localdomain sudo[286232]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:13 np0005625203.localdomain sudo[286266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.client.admin.keyring.new
Feb 20 09:41:13 np0005625203.localdomain sudo[286266]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:13 np0005625203.localdomain sudo[286266]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:13 np0005625203.localdomain sudo[286284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.client.admin.keyring.new
Feb 20 09:41:13 np0005625203.localdomain sudo[286284]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:13 np0005625203.localdomain sudo[286284]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:13 np0005625203.localdomain sudo[286302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Feb 20 09:41:13 np0005625203.localdomain sudo[286302]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:13 np0005625203.localdomain sudo[286302]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:13 np0005625203.localdomain sudo[286320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config
Feb 20 09:41:13 np0005625203.localdomain sudo[286320]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:13 np0005625203.localdomain sudo[286320]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:13 np0005625203.localdomain sshd[286339]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:41:13 np0005625203.localdomain sudo[286338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config
Feb 20 09:41:13 np0005625203.localdomain sudo[286338]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:13 np0005625203.localdomain sudo[286338]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:13 np0005625203.localdomain sudo[286358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring.new
Feb 20 09:41:13 np0005625203.localdomain sudo[286358]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:13 np0005625203.localdomain sudo[286358]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:13 np0005625203.localdomain sudo[286376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:41:13 np0005625203.localdomain sudo[286376]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:13 np0005625203.localdomain sudo[286376]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:14 np0005625203.localdomain sudo[286394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring.new
Feb 20 09:41:14 np0005625203.localdomain sudo[286394]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:14 np0005625203.localdomain sudo[286394]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:14 np0005625203.localdomain sudo[286428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring.new
Feb 20 09:41:14 np0005625203.localdomain sudo[286428]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:14 np0005625203.localdomain sudo[286428]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:14 np0005625203.localdomain sudo[286446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring.new
Feb 20 09:41:14 np0005625203.localdomain sudo[286446]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:14 np0005625203.localdomain sudo[286446]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:14 np0005625203.localdomain sudo[286464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring.new /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 09:41:14 np0005625203.localdomain sudo[286464]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:14 np0005625203.localdomain sudo[286464]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:14 np0005625203.localdomain sudo[286482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:41:14 np0005625203.localdomain sudo[286482]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:14 np0005625203.localdomain sudo[286482]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:15 np0005625203.localdomain sshd[286339]: Invalid user dhkim from 103.48.192.48 port 22992
Feb 20 09:41:15 np0005625203.localdomain sshd[286339]: Received disconnect from 103.48.192.48 port 22992:11: Bye Bye [preauth]
Feb 20 09:41:15 np0005625203.localdomain sshd[286339]: Disconnected from invalid user dhkim 103.48.192.48 port 22992 [preauth]
Feb 20 09:41:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:41:17.186 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:41:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:41:17.187 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:41:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:41:17.187 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:41:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:41:17.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:41:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:41:17.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:41:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:41:17.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:41:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:41:17.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:41:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:41:17.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:41:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:41:17.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:41:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:41:17.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:41:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:41:17.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:41:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:41:17.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:41:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:41:17.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:41:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:41:17.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:41:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:41:17.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:41:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:41:17.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:41:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:41:17.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:41:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:41:17.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:41:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:41:17.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:41:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:41:17.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:41:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:41:17.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:41:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:41:17.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:41:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:41:17.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:41:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:41:17.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:41:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:41:17.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:41:17 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:41:17 np0005625203.localdomain systemd[1]: tmp-crun.8azLZ4.mount: Deactivated successfully.
Feb 20 09:41:17 np0005625203.localdomain podman[286500]: 2026-02-20 09:41:17.775043057 +0000 UTC m=+0.091929387 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 20 09:41:17 np0005625203.localdomain podman[286500]: 2026-02-20 09:41:17.843627151 +0000 UTC m=+0.160513481 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Feb 20 09:41:17 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:41:20 np0005625203.localdomain sudo[286525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:41:20 np0005625203.localdomain sudo[286525]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:20 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:41:20 np0005625203.localdomain sudo[286525]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:20 np0005625203.localdomain sudo[286549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:41:20 np0005625203.localdomain sudo[286549]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:20 np0005625203.localdomain podman[286543]: 2026-02-20 09:41:20.362723541 +0000 UTC m=+0.082263999 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, container_name=openstack_network_exporter, config_id=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-type=git, architecture=x86_64, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, distribution-scope=public, vendor=Red Hat, Inc., name=ubi9/ubi-minimal, build-date=2026-02-05T04:57:10Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., version=9.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, url=https://catalog.redhat.com/en/search?searchType=containers)
Feb 20 09:41:20 np0005625203.localdomain podman[286543]: 2026-02-20 09:41:20.380389608 +0000 UTC m=+0.099930006 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, build-date=2026-02-05T04:57:10Z, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., architecture=x86_64, config_id=openstack_network_exporter, managed_by=edpm_ansible, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, org.opencontainers.image.created=2026-02-05T04:57:10Z, name=ubi9/ubi-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7)
Feb 20 09:41:20 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:41:20 np0005625203.localdomain ceph-mgr[285471]: ms_deliver_dispatch: unhandled message 0x557a2fb00f20 mon_map magic: 0 from mon.1 v2:172.18.0.105:3300/0
Feb 20 09:41:20 np0005625203.localdomain podman[286623]: 
Feb 20 09:41:20 np0005625203.localdomain podman[286623]: 2026-02-20 09:41:20.95983675 +0000 UTC m=+0.081518915 container create b9ffa99e9cdd37b61936703512393f899056a75bba767fdd4fdea5ab3883a795 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_herschel, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=rhceph-container, io.buildah.version=1.42.2, GIT_CLEAN=True, ceph=True, name=rhceph, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, version=7, GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 09:41:21 np0005625203.localdomain systemd[1]: Started libpod-conmon-b9ffa99e9cdd37b61936703512393f899056a75bba767fdd4fdea5ab3883a795.scope.
Feb 20 09:41:21 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:41:21 np0005625203.localdomain podman[286623]: 2026-02-20 09:41:20.92786676 +0000 UTC m=+0.049548975 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 09:41:21 np0005625203.localdomain podman[286623]: 2026-02-20 09:41:21.030282291 +0000 UTC m=+0.151964486 container init b9ffa99e9cdd37b61936703512393f899056a75bba767fdd4fdea5ab3883a795 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_herschel, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.42.2, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, io.openshift.expose-services=, GIT_CLEAN=True, vcs-type=git, CEPH_POINT_RELEASE=, RELEASE=main, version=7, name=rhceph, release=1770267347, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, architecture=x86_64, build-date=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, io.openshift.tags=rhceph ceph)
Feb 20 09:41:21 np0005625203.localdomain podman[286623]: 2026-02-20 09:41:21.04023878 +0000 UTC m=+0.161920975 container start b9ffa99e9cdd37b61936703512393f899056a75bba767fdd4fdea5ab3883a795 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_herschel, release=1770267347, io.buildah.version=1.42.2, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, vendor=Red Hat, Inc., distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=rhceph-container, architecture=x86_64, build-date=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, version=7, io.openshift.expose-services=)
Feb 20 09:41:21 np0005625203.localdomain podman[286623]: 2026-02-20 09:41:21.040666853 +0000 UTC m=+0.162349118 container attach b9ffa99e9cdd37b61936703512393f899056a75bba767fdd4fdea5ab3883a795 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_herschel, build-date=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1770267347, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, ceph=True, name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, RELEASE=main, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2026-02-09T10:25:24Z, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Feb 20 09:41:21 np0005625203.localdomain wonderful_herschel[286639]: 167 167
Feb 20 09:41:21 np0005625203.localdomain systemd[1]: libpod-b9ffa99e9cdd37b61936703512393f899056a75bba767fdd4fdea5ab3883a795.scope: Deactivated successfully.
Feb 20 09:41:21 np0005625203.localdomain podman[286623]: 2026-02-20 09:41:21.045154901 +0000 UTC m=+0.166837096 container died b9ffa99e9cdd37b61936703512393f899056a75bba767fdd4fdea5ab3883a795 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_herschel, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., RELEASE=main, build-date=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, io.buildah.version=1.42.2, io.openshift.expose-services=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, architecture=x86_64, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, ceph=True, CEPH_POINT_RELEASE=, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347)
Feb 20 09:41:21 np0005625203.localdomain podman[286644]: 2026-02-20 09:41:21.145006493 +0000 UTC m=+0.085760107 container remove b9ffa99e9cdd37b61936703512393f899056a75bba767fdd4fdea5ab3883a795 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=wonderful_herschel, release=1770267347, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, io.openshift.tags=rhceph ceph, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, RELEASE=main, description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, name=rhceph, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2026-02-09T10:25:24Z)
Feb 20 09:41:21 np0005625203.localdomain systemd[1]: libpod-conmon-b9ffa99e9cdd37b61936703512393f899056a75bba767fdd4fdea5ab3883a795.scope: Deactivated successfully.
Feb 20 09:41:21 np0005625203.localdomain podman[286661]: 
Feb 20 09:41:21 np0005625203.localdomain podman[286661]: 2026-02-20 09:41:21.248843648 +0000 UTC m=+0.068699638 container create b281389bc1e545adc6ceaa24d57e9ffe80bd01cda3fc12bfed0717213b94d85c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cool_varahamihira, vendor=Red Hat, Inc., RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, GIT_CLEAN=True, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, release=1770267347, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z, name=rhceph, architecture=x86_64, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, vcs-type=git, description=Red Hat Ceph Storage 7, distribution-scope=public, version=7, io.openshift.tags=rhceph ceph)
Feb 20 09:41:21 np0005625203.localdomain systemd[1]: Started libpod-conmon-b281389bc1e545adc6ceaa24d57e9ffe80bd01cda3fc12bfed0717213b94d85c.scope.
Feb 20 09:41:21 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:41:21 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47d1e680298188532edda2edd3d235c55aff7fa55b622fc644e19e45108d4242/merged/tmp/config supports timestamps until 2038 (0x7fffffff)
Feb 20 09:41:21 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47d1e680298188532edda2edd3d235c55aff7fa55b622fc644e19e45108d4242/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Feb 20 09:41:21 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47d1e680298188532edda2edd3d235c55aff7fa55b622fc644e19e45108d4242/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 20 09:41:21 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47d1e680298188532edda2edd3d235c55aff7fa55b622fc644e19e45108d4242/merged/var/lib/ceph/mon/ceph-np0005625203 supports timestamps until 2038 (0x7fffffff)
Feb 20 09:41:21 np0005625203.localdomain podman[286661]: 2026-02-20 09:41:21.300097655 +0000 UTC m=+0.119953585 container init b281389bc1e545adc6ceaa24d57e9ffe80bd01cda3fc12bfed0717213b94d85c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cool_varahamihira, release=1770267347, io.openshift.expose-services=, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, build-date=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.buildah.version=1.42.2, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Feb 20 09:41:21 np0005625203.localdomain podman[286661]: 2026-02-20 09:41:21.306350069 +0000 UTC m=+0.126206019 container start b281389bc1e545adc6ceaa24d57e9ffe80bd01cda3fc12bfed0717213b94d85c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cool_varahamihira, architecture=x86_64, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, org.opencontainers.image.created=2026-02-09T10:25:24Z, version=7, GIT_CLEAN=True, io.buildah.version=1.42.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, name=rhceph, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., build-date=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, release=1770267347, ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers)
Feb 20 09:41:21 np0005625203.localdomain podman[286661]: 2026-02-20 09:41:21.306563646 +0000 UTC m=+0.126419576 container attach b281389bc1e545adc6ceaa24d57e9ffe80bd01cda3fc12bfed0717213b94d85c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cool_varahamihira, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, vcs-type=git, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, io.openshift.expose-services=, build-date=2026-02-09T10:25:24Z, version=7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph)
Feb 20 09:41:21 np0005625203.localdomain podman[286661]: 2026-02-20 09:41:21.225243068 +0000 UTC m=+0.045099008 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 09:41:21 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-2c27b82907796026d3adda2a13dbb241aa51fccf240ff982e25ff01cdd4f4b57-merged.mount: Deactivated successfully.
Feb 20 09:41:21 np0005625203.localdomain systemd[1]: libpod-b281389bc1e545adc6ceaa24d57e9ffe80bd01cda3fc12bfed0717213b94d85c.scope: Deactivated successfully.
Feb 20 09:41:21 np0005625203.localdomain podman[286661]: 2026-02-20 09:41:21.413961791 +0000 UTC m=+0.233817771 container died b281389bc1e545adc6ceaa24d57e9ffe80bd01cda3fc12bfed0717213b94d85c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cool_varahamihira, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, vcs-type=git, version=7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_BRANCH=main, io.buildah.version=1.42.2, architecture=x86_64, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1770267347, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, distribution-scope=public, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 09:41:21 np0005625203.localdomain systemd[1]: tmp-crun.y5Hx7m.mount: Deactivated successfully.
Feb 20 09:41:21 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-47d1e680298188532edda2edd3d235c55aff7fa55b622fc644e19e45108d4242-merged.mount: Deactivated successfully.
Feb 20 09:41:21 np0005625203.localdomain podman[286702]: 2026-02-20 09:41:21.491947806 +0000 UTC m=+0.070580287 container remove b281389bc1e545adc6ceaa24d57e9ffe80bd01cda3fc12bfed0717213b94d85c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=cool_varahamihira, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., build-date=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, release=1770267347, CEPH_POINT_RELEASE=, io.openshift.expose-services=, io.buildah.version=1.42.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, vcs-type=git, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, distribution-scope=public)
Feb 20 09:41:21 np0005625203.localdomain systemd[1]: libpod-conmon-b281389bc1e545adc6ceaa24d57e9ffe80bd01cda3fc12bfed0717213b94d85c.scope: Deactivated successfully.
Feb 20 09:41:21 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:41:21 np0005625203.localdomain systemd-rc-local-generator[286742]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:41:21 np0005625203.localdomain systemd-sysv-generator[286748]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:41:21 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:41:21 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 09:41:21 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:41:21 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:41:21 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:41:21 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 09:41:21 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:41:21 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:41:21 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:41:21 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:41:22 np0005625203.localdomain systemd-sysv-generator[286790]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:41:22 np0005625203.localdomain systemd-rc-local-generator[286786]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:41:22 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:41:22 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 09:41:22 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:41:22 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:41:22 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:41:22 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 09:41:22 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:41:22 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:41:22 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:41:22 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:41:22 np0005625203.localdomain systemd[1]: Starting Ceph mon.np0005625203 for a8557ee9-b55d-5519-942c-cf8f6172f1d8...
Feb 20 09:41:22 np0005625203.localdomain podman[286798]: 2026-02-20 09:41:22.38830646 +0000 UTC m=+0.090312507 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb 20 09:41:22 np0005625203.localdomain podman[286798]: 2026-02-20 09:41:22.401310193 +0000 UTC m=+0.103316230 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Feb 20 09:41:22 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:41:22 np0005625203.localdomain podman[286870]: 
Feb 20 09:41:22 np0005625203.localdomain podman[286870]: 2026-02-20 09:41:22.665911305 +0000 UTC m=+0.087356475 container create 2e1f82df8ee48e4b2aedd8e09eb89a2296747266e4afbe8e1e657e76b746335f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mon-np0005625203, name=rhceph, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, build-date=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, vcs-type=git, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, release=1770267347, RELEASE=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., GIT_BRANCH=main, GIT_CLEAN=True, io.openshift.expose-services=, architecture=x86_64)
Feb 20 09:41:22 np0005625203.localdomain podman[286870]: 2026-02-20 09:41:22.630121997 +0000 UTC m=+0.051567187 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 09:41:22 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c484d01ca7293d66e415b4d69c3f9078767a0d1b377dd3c014b20011e57a376/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 20 09:41:22 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c484d01ca7293d66e415b4d69c3f9078767a0d1b377dd3c014b20011e57a376/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 20 09:41:22 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c484d01ca7293d66e415b4d69c3f9078767a0d1b377dd3c014b20011e57a376/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 20 09:41:22 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c484d01ca7293d66e415b4d69c3f9078767a0d1b377dd3c014b20011e57a376/merged/var/lib/ceph/mon/ceph-np0005625203 supports timestamps until 2038 (0x7fffffff)
Feb 20 09:41:22 np0005625203.localdomain podman[286870]: 2026-02-20 09:41:22.738559385 +0000 UTC m=+0.160004555 container init 2e1f82df8ee48e4b2aedd8e09eb89a2296747266e4afbe8e1e657e76b746335f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mon-np0005625203, RELEASE=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, build-date=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vendor=Red Hat, Inc., GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, ceph=True, GIT_CLEAN=True, io.openshift.expose-services=, distribution-scope=public, com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, vcs-type=git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7)
Feb 20 09:41:22 np0005625203.localdomain podman[286870]: 2026-02-20 09:41:22.748638018 +0000 UTC m=+0.170083178 container start 2e1f82df8ee48e4b2aedd8e09eb89a2296747266e4afbe8e1e657e76b746335f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mon-np0005625203, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.42.2, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, ceph=True, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, release=1770267347, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, build-date=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, io.openshift.expose-services=, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7)
Feb 20 09:41:22 np0005625203.localdomain bash[286870]: 2e1f82df8ee48e4b2aedd8e09eb89a2296747266e4afbe8e1e657e76b746335f
Feb 20 09:41:22 np0005625203.localdomain systemd[1]: Started Ceph mon.np0005625203 for a8557ee9-b55d-5519-942c-cf8f6172f1d8.
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: set uid:gid to 167:167 (ceph:ceph)
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: ceph version 18.2.1-381.el9cp (984f410e2a30899deb131725765b62212b1621db) reef (stable), process ceph-mon, pid 2
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: pidfile_write: ignore empty --pid-file
Feb 20 09:41:22 np0005625203.localdomain sudo[286549]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: load: jerasure load: lrc 
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb: RocksDB version: 7.9.2
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb: Git sha 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb: Compile date 2026-02-06 00:00:00
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb: DB SUMMARY
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb: DB Session ID:  IVJC5Q80ONGS9Z85L01D
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb: CURRENT file:  CURRENT
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb: IDENTITY file:  IDENTITY
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb: SST files in /var/lib/ceph/mon/ceph-np0005625203/store.db dir, Total Num: 0, files: 
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-np0005625203/store.db: 000004.log size: 761 ; 
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                         Options.error_if_exists: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                       Options.create_if_missing: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                         Options.paranoid_checks: 1
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                                     Options.env: 0x5627d96d3a20
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                                      Options.fs: PosixFileSystem
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                                Options.info_log: 0x5627db258d20
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                Options.max_file_opening_threads: 16
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                              Options.statistics: (nil)
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                               Options.use_fsync: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                       Options.max_log_file_size: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                       Options.keep_log_file_num: 1000
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                    Options.recycle_log_file_num: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                         Options.allow_fallocate: 1
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                        Options.allow_mmap_reads: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                       Options.allow_mmap_writes: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                        Options.use_direct_reads: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:          Options.create_missing_column_families: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                              Options.db_log_dir: 
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                                 Options.wal_dir: 
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                Options.table_cache_numshardbits: 6
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                   Options.advise_random_on_open: 1
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                    Options.db_write_buffer_size: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                    Options.write_buffer_manager: 0x5627db269540
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                            Options.rate_limiter: (nil)
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                       Options.wal_recovery_mode: 2
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                  Options.enable_thread_tracking: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                  Options.enable_pipelined_write: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                  Options.unordered_write: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                               Options.row_cache: None
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                              Options.wal_filter: None
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:             Options.allow_ingest_behind: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:             Options.two_write_queues: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:             Options.manual_wal_flush: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:             Options.wal_compression: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:             Options.atomic_flush: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                 Options.log_readahead_size: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                 Options.best_efforts_recovery: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:             Options.allow_data_in_errors: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:             Options.db_host_id: __hostname__
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:             Options.enforce_single_del_contracts: true
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:             Options.max_background_jobs: 2
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:             Options.max_background_compactions: -1
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:             Options.max_subcompactions: 1
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:             Options.delayed_write_rate : 16777216
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:             Options.max_total_wal_size: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                          Options.max_open_files: -1
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                          Options.bytes_per_sync: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:       Options.compaction_readahead_size: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                  Options.max_background_flushes: -1
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb: Compression algorithms supported:
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:         kZSTD supported: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:         kXpressCompression supported: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:         kBZip2Compression supported: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:         kZSTDNotFinalCompression supported: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:         kLZ4Compression supported: 1
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:         kZlibCompression supported: 1
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:         kLZ4HCCompression supported: 1
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:         kSnappyCompression supported: 1
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb: Fast CRC32 supported: Supported on x86
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb: DMutex implementation: pthread_mutex_t
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-np0005625203/store.db/MANIFEST-000005
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:           Options.merge_operator: 
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:        Options.compaction_filter: None
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:        Options.compaction_filter_factory: None
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:  Options.sst_partitioner_factory: None
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5627db258980)
                                                             cache_index_and_filter_blocks: 1
                                                             cache_index_and_filter_blocks_with_high_priority: 0
                                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                                             pin_top_level_index_and_filter: 1
                                                             index_type: 0
                                                             data_block_index_type: 0
                                                             index_shortening: 1
                                                             data_block_hash_table_util_ratio: 0.750000
                                                             checksum: 4
                                                             no_block_cache: 0
                                                             block_cache: 0x5627db255350
                                                             block_cache_name: BinnedLRUCache
                                                             block_cache_options:
                                                               capacity : 536870912
                                                               num_shard_bits : 4
                                                               strict_capacity_limit : 0
                                                               high_pri_pool_ratio: 0.000
                                                             block_cache_compressed: (nil)
                                                             persistent_cache: (nil)
                                                             block_size: 4096
                                                             block_size_deviation: 10
                                                             block_restart_interval: 16
                                                             index_block_restart_interval: 1
                                                             metadata_block_size: 4096
                                                             partition_filters: 0
                                                             use_delta_encoding: 1
                                                             filter_policy: bloomfilter
                                                             whole_key_filtering: 1
                                                             verify_compression: 0
                                                             read_amp_bytes_per_bit: 0
                                                             format_version: 5
                                                             enable_index_compression: 1
                                                             block_align: 0
                                                             max_auto_readahead_size: 262144
                                                             prepopulate_block_cache: 0
                                                             initial_auto_readahead_size: 8192
                                                             num_file_reads_for_auto_readahead: 2
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:        Options.write_buffer_size: 33554432
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:  Options.max_write_buffer_number: 2
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:          Options.compression: NoCompression
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:       Options.prefix_extractor: nullptr
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:             Options.num_levels: 7
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                  Options.compression_opts.level: 32767
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:               Options.compression_opts.strategy: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                  Options.compression_opts.enabled: false
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                        Options.arena_block_size: 1048576
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                Options.disable_auto_compactions: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                   Options.table_properties_collectors: 
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                   Options.inplace_update_support: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                           Options.bloom_locality: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                    Options.max_successive_merges: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                Options.paranoid_file_checks: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                Options.force_consistency_checks: 1
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                Options.report_bg_io_stats: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                               Options.ttl: 2592000
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                       Options.enable_blob_files: false
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                           Options.min_blob_size: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                          Options.blob_file_size: 268435456
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb:                Options.blob_file_starting_level: 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-np0005625203/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: a18f0433-5302-412e-a730-8e4f9cc01661
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580482806310, "job": 1, "event": "recovery_started", "wal_files": [4]}
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580482809064, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1887, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 773, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 651, "raw_average_value_size": 130, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580482, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a18f0433-5302-412e-a730-8e4f9cc01661", "db_session_id": "IVJC5Q80ONGS9Z85L01D", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580482809190, "job": 1, "event": "recovery_finished"}
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5627db27ce00
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb: DB pointer 0x5627db372000
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203 does not exist in monmap, will attempt to join an existing cluster
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                           ** DB Stats **
                                                           Uptime(secs): 0.0 total, 0.0 interval
                                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                           
                                                           ** Compaction Stats [default] **
                                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                             L0      1/0    1.84 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                            Sum      1/0    1.84 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                           
                                                           ** Compaction Stats [default] **
                                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                           
                                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                           
                                                           Uptime(secs): 0.0 total, 0.0 interval
                                                           Flush(GB): cumulative 0.000, interval 0.000
                                                           AddFile(GB): cumulative 0.000, interval 0.000
                                                           AddFile(Total Files): cumulative 0, interval 0
                                                           AddFile(L0 Files): cumulative 0, interval 0
                                                           AddFile(Keys): cumulative 0, interval 0
                                                           Cumulative compaction: 0.00 GB write, 0.10 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                           Interval compaction: 0.00 GB write, 0.10 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                           Block cache BinnedLRUCache@0x5627db255350#2 capacity: 512.00 MB usage: 1.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 4.1e-05 secs_since: 0
                                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(2,0.95 KB,0.000181794%)
                                                           
                                                           ** File Read Latency Histogram By Level [default] **
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: using public_addr v2:172.18.0.107:0/0 -> [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0]
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: starting mon.np0005625203 rank -1 at public addrs [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] at bind addrs [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] mon_data /var/lib/ceph/mon/ceph-np0005625203 fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@-1(???) e0 preinit fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@-1(synchronizing) e4 sync_obtain_latest_monmap
Feb 20 09:41:22 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@-1(synchronizing) e4 sync_obtain_latest_monmap obtained monmap e4
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@-1(synchronizing).mds e17 new map
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@-1(synchronizing).mds e17 print_map
                                                           e17
                                                           enable_multiple, ever_enabled_multiple: 1,1
                                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}
                                                           legacy client fscid: 1
                                                            
                                                           Filesystem 'cephfs' (1)
                                                           fs_name        cephfs
                                                           epoch        16
                                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                                           created        2026-02-20T07:58:28.398421+0000
                                                           modified        2026-02-20T09:40:14.722031+0000
                                                           tableserver        0
                                                           root        0
                                                           session_timeout        60
                                                           session_autoclose        300
                                                           max_file_size        1099511627776
                                                           required_client_features        {}
                                                           last_failure        0
                                                           last_failure_osd_epoch        83
                                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}
                                                           max_mds        1
                                                           in        0
                                                           up        {0=26854}
                                                           failed        
                                                           damaged        
                                                           stopped        
                                                           data_pools        [6]
                                                           metadata_pool        7
                                                           inline_data        disabled
                                                           balancer        
                                                           bal_rank_mask        -1
                                                           standby_count_wanted        1
                                                           qdb_cluster        leader: 26854 members: 26854
                                                           [mds.mds.np0005625203.zsrwgk{0:26854} state up:active seq 13 addr [v2:172.18.0.107:6808/3334119751,v1:172.18.0.107:6809/3334119751] compat {c=[1],r=[1],i=[17ff]}]
                                                            
                                                            
                                                           Standby daemons:
                                                            
                                                           [mds.mds.np0005625202.akhmop{-1:17124} state up:standby seq 1 addr [v2:172.18.0.106:6808/3865978972,v1:172.18.0.106:6809/3865978972] compat {c=[1],r=[1],i=[17ff]}]
                                                           [mds.mds.np0005625204.wnsphl{-1:26848} state up:standby seq 1 addr [v2:172.18.0.108:6808/2508223371,v1:172.18.0.108:6809/2508223371] compat {c=[1],r=[1],i=[17ff]}]
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@-1(synchronizing).osd e84 crush map has features 3314933000852226048, adjusting msgr requires
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@-1(synchronizing).osd e84 crush map has features 288514051259236352, adjusting msgr requires
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@-1(synchronizing).osd e84 crush map has features 288514051259236352, adjusting msgr requires
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@-1(synchronizing).osd e84 crush map has features 288514051259236352, adjusting msgr requires
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: pgmap v3781: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: pgmap v3782: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='client.17250 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "np0005625202.localdomain", "label": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: Added label mgr to host np0005625202.localdomain
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='client.17256 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "np0005625203.localdomain", "label": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: Added label mgr to host np0005625203.localdomain
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: pgmap v3783: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='client.17262 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "np0005625204.localdomain", "label": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: Added label mgr to host np0005625204.localdomain
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='client.17268 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: Saving service mgr spec with placement label:mgr
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: pgmap v3784: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625202.arwxwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.np0005625202.arwxwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "mgr services"} : dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: Deploying daemon mgr.np0005625202.arwxwo on np0005625202.localdomain
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='client.17274 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mgr", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: pgmap v3785: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625203.lonygy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.np0005625203.lonygy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "mgr services"} : dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: Deploying daemon mgr.np0005625203.lonygy on np0005625203.localdomain
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='client.17286 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "np0005625199.localdomain", "label": "mon", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: Added label mon to host np0005625199.localdomain
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: pgmap v3786: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='client.17292 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "np0005625199.localdomain", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: Added label _admin to host np0005625199.localdomain
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625204.exgrzx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.np0005625204.exgrzx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "mgr services"} : dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: Deploying daemon mgr.np0005625204.exgrzx on np0005625204.localdomain
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='client.17304 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "np0005625200.localdomain", "label": "mon", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: Added label mon to host np0005625200.localdomain
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='client.? 172.18.0.32:0/1275459803' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='client.? 172.18.0.32:0/1275459803' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: pgmap v3787: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='client.26855 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "np0005625200.localdomain", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: Added label _admin to host np0005625200.localdomain
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: Standby manager daemon np0005625202.arwxwo started
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: mgrmap e12: np0005625199.ileebh(active, since 2h), standbys: np0005625201.mtnyvu, np0005625200.ypbkax, np0005625202.arwxwo
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "mgr metadata", "who": "np0005625202.arwxwo", "id": "np0005625202.arwxwo"} : dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: pgmap v3788: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='client.17331 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "np0005625201.localdomain", "label": "mon", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: Added label mon to host np0005625201.localdomain
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: Standby manager daemon np0005625203.lonygy started
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='client.17337 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "np0005625201.localdomain", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: Added label _admin to host np0005625201.localdomain
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: pgmap v3789: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: mgrmap e13: np0005625199.ileebh(active, since 2h), standbys: np0005625201.mtnyvu, np0005625200.ypbkax, np0005625202.arwxwo, np0005625203.lonygy
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "mgr metadata", "who": "np0005625203.lonygy", "id": "np0005625203.lonygy"} : dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='client.17343 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "np0005625202.localdomain", "label": "mon", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: Added label mon to host np0005625202.localdomain
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: Standby manager daemon np0005625204.exgrzx started
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='client.17349 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "np0005625202.localdomain", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: Added label _admin to host np0005625202.localdomain
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: Updating np0005625202.localdomain:/etc/ceph/ceph.conf
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: pgmap v3790: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: mgrmap e14: np0005625199.ileebh(active, since 2h), standbys: np0005625201.mtnyvu, np0005625200.ypbkax, np0005625202.arwxwo, np0005625203.lonygy, np0005625204.exgrzx
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "mgr metadata", "who": "np0005625204.exgrzx", "id": "np0005625204.exgrzx"} : dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='client.17355 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "np0005625203.localdomain", "label": "mon", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: Added label mon to host np0005625203.localdomain
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: Updating np0005625202.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: pgmap v3791: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='client.17361 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "np0005625203.localdomain", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: Added label _admin to host np0005625203.localdomain
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: Updating np0005625203.localdomain:/etc/ceph/ceph.conf
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='client.17367 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "np0005625204.localdomain", "label": "mon", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: Added label mon to host np0005625204.localdomain
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: pgmap v3792: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: Updating np0005625203.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='client.17373 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "np0005625204.localdomain", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: Added label _admin to host np0005625204.localdomain
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: pgmap v3793: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: Updating np0005625204.localdomain:/etc/ceph/ceph.conf
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='client.17379 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: Saving service mon spec with placement label:mon
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: Updating np0005625204.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: pgmap v3794: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='client.17385 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005625202", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: Deploying daemon mon.np0005625204 on np0005625204.localdomain
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: pgmap v3795: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:41:23 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@-1(synchronizing).paxosservice(auth 1..34) refresh upgraded, format 0 -> 3
Feb 20 09:41:23 np0005625203.localdomain systemd[1]: tmp-crun.ejynXk.mount: Deactivated successfully.
Feb 20 09:41:27 np0005625203.localdomain ceph-mgr[285471]: ms_deliver_dispatch: unhandled message 0x557a2fb01080 mon_map magic: 0 from mon.1 v2:172.18.0.105:3300/0
Feb 20 09:41:28 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@-1(probing) e4  adding peer [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] to list of hints
Feb 20 09:41:28 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@-1(probing) e4  adding peer [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] to list of hints
Feb 20 09:41:28 np0005625203.localdomain podman[240359]: time="2026-02-20T09:41:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:41:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:41:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154068 "" "Go-http-client/1.1"
Feb 20 09:41:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:41:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17772 "" "Go-http-client/1.1"
Feb 20 09:41:29 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@-1(probing) e5  my rank is now 4 (was -1)
Feb 20 09:41:29 np0005625203.localdomain ceph-mon[286888]: log_channel(cluster) log [INF] : mon.np0005625203 calling monitor election
Feb 20 09:41:29 np0005625203.localdomain ceph-mon[286888]: paxos.4).electionLogic(0) init, first boot, initializing epoch at 1 
Feb 20 09:41:29 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@4(electing) e5 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 20 09:41:29 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:41:29 np0005625203.localdomain podman[286928]: 2026-02-20 09:41:29.779770476 +0000 UTC m=+0.094518537 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 20 09:41:29 np0005625203.localdomain podman[286928]: 2026-02-20 09:41:29.793264684 +0000 UTC m=+0.108012735 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 20 09:41:29 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 09:41:30 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@4(electing) e5  adding peer [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] to list of hints
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@4(electing) e5  adding peer [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] to list of hints
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@4(electing).elector(1) mon.0 v2:172.18.0.103:3300/0 has newer monmap epoch 6 > my epoch 5, taking it
Feb 20 09:41:32 np0005625203.localdomain ceph-mgr[285471]: ms_deliver_dispatch: unhandled message 0x557a2fb011e0 mon_map magic: 0 from mon.1 v2:172.18.0.105:3300/0
Feb 20 09:41:32 np0005625203.localdomain sudo[286953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:41:32 np0005625203.localdomain sudo[286953]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:32 np0005625203.localdomain sudo[286953]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: pgmap v3796: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "mon metadata", "id": "np0005625199"} : dispatch
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "mon metadata", "id": "np0005625200"} : dispatch
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "mon metadata", "id": "np0005625201"} : dispatch
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "mon metadata", "id": "np0005625204"} : dispatch
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: mon.np0005625201 calling monitor election
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: mon.np0005625199 calling monitor election
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: mon.np0005625200 calling monitor election
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "mon metadata", "id": "np0005625204"} : dispatch
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: pgmap v3797: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "mon metadata", "id": "np0005625204"} : dispatch
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: mon.np0005625204 calling monitor election
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "mon metadata", "id": "np0005625204"} : dispatch
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: pgmap v3798: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "mon metadata", "id": "np0005625204"} : dispatch
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "mon metadata", "id": "np0005625204"} : dispatch
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: mon.np0005625199 is new leader, mons np0005625199,np0005625201,np0005625200,np0005625204 in quorum (ranks 0,1,2,3)
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: monmap epoch 4
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: last_changed 2026-02-20T09:41:20.444808+0000
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: created 2026-02-20T07:36:51.191305+0000
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: min_mon_release 18 (reef)
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: election_strategy: 1
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: 0: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005625199
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: 1: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005625201
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: 2: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005625200
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: 3: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005625204
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: fsmap cephfs:1 {0=mds.np0005625203.zsrwgk=up:active} 2 up:standby
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: osdmap e84: 6 total, 6 up, 6 in
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: mgrmap e14: np0005625199.ileebh(active, since 2h), standbys: np0005625201.mtnyvu, np0005625200.ypbkax, np0005625202.arwxwo, np0005625203.lonygy, np0005625204.exgrzx
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: overall HEALTH_OK
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: Deploying daemon mon.np0005625202 on np0005625202.localdomain
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: from='client.27061 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005625202", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "mon metadata", "id": "np0005625204"} : dispatch
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: pgmap v3799: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "mon metadata", "id": "np0005625199"} : dispatch
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: mon.np0005625200 calling monitor election
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "mon metadata", "id": "np0005625200"} : dispatch
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: mon.np0005625201 calling monitor election
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: mon.np0005625199 calling monitor election
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: mon.np0005625204 calling monitor election
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "mon metadata", "id": "np0005625201"} : dispatch
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "mon metadata", "id": "np0005625204"} : dispatch
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: pgmap v3800: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "mon metadata", "id": "np0005625202"} : dispatch
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "mon metadata", "id": "np0005625202"} : dispatch
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: pgmap v3801: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "mon metadata", "id": "np0005625202"} : dispatch
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "mon metadata", "id": "np0005625202"} : dispatch
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: mon.np0005625199 is new leader, mons np0005625199,np0005625201,np0005625200,np0005625204 in quorum (ranks 0,1,2,3)
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: monmap epoch 5
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: last_changed 2026-02-20T09:41:27.125231+0000
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: created 2026-02-20T07:36:51.191305+0000
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: min_mon_release 18 (reef)
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: election_strategy: 1
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: 0: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005625199
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: 1: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005625201
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: 2: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005625200
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: 3: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005625204
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: 4: [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] mon.np0005625203
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: fsmap cephfs:1 {0=mds.np0005625203.zsrwgk=up:active} 2 up:standby
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: osdmap e84: 6 total, 6 up, 6 in
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: mgrmap e14: np0005625199.ileebh(active, since 2h), standbys: np0005625201.mtnyvu, np0005625200.ypbkax, np0005625202.arwxwo, np0005625203.lonygy, np0005625204.exgrzx
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: Health check failed: 1/5 mons down, quorum np0005625199,np0005625201,np0005625200,np0005625204 (MON_DOWN)
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: Health detail: HEALTH_WARN 1/5 mons down, quorum np0005625199,np0005625201,np0005625200,np0005625204
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: [WRN] MON_DOWN: 1/5 mons down, quorum np0005625199,np0005625201,np0005625200,np0005625204
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]:     mon.np0005625203 (rank 4) addr [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] is down (out of quorum)
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:32 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:33 np0005625203.localdomain ceph-mon[286888]: log_channel(cluster) log [INF] : mon.np0005625203 calling monitor election
Feb 20 09:41:33 np0005625203.localdomain ceph-mon[286888]: paxos.4).electionLogic(0) init, first boot, initializing epoch at 1 
Feb 20 09:41:33 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@4(electing) e6 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 20 09:41:33 np0005625203.localdomain sudo[286971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:41:33 np0005625203.localdomain sudo[286971]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:33 np0005625203.localdomain sudo[286971]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:33 np0005625203.localdomain sudo[286989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Feb 20 09:41:33 np0005625203.localdomain sudo[286989]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:34 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:41:34 np0005625203.localdomain podman[287051]: 2026-02-20 09:41:34.293439474 +0000 UTC m=+0.077691497 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 20 09:41:34 np0005625203.localdomain podman[287051]: 2026-02-20 09:41:34.377322052 +0000 UTC m=+0.161574115 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 20 09:41:34 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:41:34 np0005625203.localdomain podman[287105]: 2026-02-20 09:41:34.456628507 +0000 UTC m=+0.077431309 container exec f6bf94ad66e54cdd610286b233c639418eb2b995978f1a145d6cfa77a016071d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, distribution-scope=public, vcs-type=git, release=1770267347, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.created=2026-02-09T10:25:24Z, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, name=rhceph, build-date=2026-02-09T10:25:24Z, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph)
Feb 20 09:41:34 np0005625203.localdomain podman[287105]: 2026-02-20 09:41:34.58430762 +0000 UTC m=+0.205110422 container exec_died f6bf94ad66e54cdd610286b233c639418eb2b995978f1a145d6cfa77a016071d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, name=rhceph, CEPH_POINT_RELEASE=, io.openshift.expose-services=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, version=7, RELEASE=main, vcs-type=git, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., GIT_BRANCH=main, io.buildah.version=1.42.2)
Feb 20 09:41:35 np0005625203.localdomain sudo[286989]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:41:37 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 09:41:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:41:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:41:37 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 09:41:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:41:37 np0005625203.localdomain sshd[287228]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:41:37 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@4(electing) e6 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 20 09:41:37 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@4(peon) e6 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Feb 20 09:41:37 np0005625203.localdomain ceph-mds[282126]: mds.beacon.mds.np0005625203.zsrwgk missed beacon ack from the monitors
Feb 20 09:41:37 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@4(peon) e6 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Feb 20 09:41:37 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@4(peon) e6 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 20 09:41:37 np0005625203.localdomain ceph-mon[286888]: mgrc update_daemon_metadata mon.np0005625203 metadata {addrs=[v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.1-381.el9cp (984f410e2a30899deb131725765b62212b1621db) reef (stable),ceph_version_short=18.2.1-381.el9cp,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=np0005625203.localdomain,container_image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=rhel,distro_description=Red Hat Enterprise Linux 9.7 (Plow),distro_version=9.7,hostname=np0005625203.localdomain,kernel_description=#1 SMP PREEMPT_DYNAMIC Wed Apr 12 10:45:03 EDT 2023,kernel_version=5.14.0-284.11.1.el9_2.x86_64,mem_swap_kb=1048572,mem_total_kb=16116612,os=Linux}
Feb 20 09:41:37 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203 calling monitor election
Feb 20 09:41:37 np0005625203.localdomain ceph-mon[286888]: pgmap v3802: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:41:37 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "mon metadata", "id": "np0005625199"} : dispatch
Feb 20 09:41:37 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "mon metadata", "id": "np0005625200"} : dispatch
Feb 20 09:41:37 np0005625203.localdomain ceph-mon[286888]: mon.np0005625201 calling monitor election
Feb 20 09:41:37 np0005625203.localdomain ceph-mon[286888]: mon.np0005625199 calling monitor election
Feb 20 09:41:37 np0005625203.localdomain ceph-mon[286888]: mon.np0005625204 calling monitor election
Feb 20 09:41:37 np0005625203.localdomain ceph-mon[286888]: mon.np0005625200 calling monitor election
Feb 20 09:41:37 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "mon metadata", "id": "np0005625201"} : dispatch
Feb 20 09:41:37 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "mon metadata", "id": "np0005625202"} : dispatch
Feb 20 09:41:37 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:41:37 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "mon metadata", "id": "np0005625204"} : dispatch
Feb 20 09:41:37 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:41:37 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203 calling monitor election
Feb 20 09:41:37 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:41:37 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "mon metadata", "id": "np0005625202"} : dispatch
Feb 20 09:41:37 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:41:37 np0005625203.localdomain ceph-mon[286888]: pgmap v3803: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:41:37 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "mon metadata", "id": "np0005625202"} : dispatch
Feb 20 09:41:37 np0005625203.localdomain ceph-mon[286888]: mon.np0005625202 calling monitor election
Feb 20 09:41:37 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:41:37 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "mon metadata", "id": "np0005625202"} : dispatch
Feb 20 09:41:37 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:41:37 np0005625203.localdomain ceph-mon[286888]: pgmap v3804: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:41:37 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "mon metadata", "id": "np0005625202"} : dispatch
Feb 20 09:41:37 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:41:37 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "mon metadata", "id": "np0005625202"} : dispatch
Feb 20 09:41:37 np0005625203.localdomain ceph-mon[286888]: mon.np0005625199 is new leader, mons np0005625199,np0005625201,np0005625200,np0005625204,np0005625203,np0005625202 in quorum (ranks 0,1,2,3,4,5)
Feb 20 09:41:37 np0005625203.localdomain ceph-mon[286888]: monmap epoch 6
Feb 20 09:41:37 np0005625203.localdomain ceph-mon[286888]: fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:41:37 np0005625203.localdomain ceph-mon[286888]: last_changed 2026-02-20T09:41:32.466876+0000
Feb 20 09:41:37 np0005625203.localdomain ceph-mon[286888]: created 2026-02-20T07:36:51.191305+0000
Feb 20 09:41:37 np0005625203.localdomain ceph-mon[286888]: min_mon_release 18 (reef)
Feb 20 09:41:37 np0005625203.localdomain ceph-mon[286888]: election_strategy: 1
Feb 20 09:41:37 np0005625203.localdomain ceph-mon[286888]: 0: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005625199
Feb 20 09:41:37 np0005625203.localdomain ceph-mon[286888]: 1: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005625201
Feb 20 09:41:37 np0005625203.localdomain ceph-mon[286888]: 2: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005625200
Feb 20 09:41:37 np0005625203.localdomain ceph-mon[286888]: 3: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005625204
Feb 20 09:41:37 np0005625203.localdomain ceph-mon[286888]: 4: [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] mon.np0005625203
Feb 20 09:41:37 np0005625203.localdomain ceph-mon[286888]: 5: [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] mon.np0005625202
Feb 20 09:41:37 np0005625203.localdomain ceph-mon[286888]: fsmap cephfs:1 {0=mds.np0005625203.zsrwgk=up:active} 2 up:standby
Feb 20 09:41:37 np0005625203.localdomain ceph-mon[286888]: osdmap e84: 6 total, 6 up, 6 in
Feb 20 09:41:37 np0005625203.localdomain ceph-mon[286888]: mgrmap e14: np0005625199.ileebh(active, since 2h), standbys: np0005625201.mtnyvu, np0005625200.ypbkax, np0005625202.arwxwo, np0005625203.lonygy, np0005625204.exgrzx
Feb 20 09:41:37 np0005625203.localdomain ceph-mon[286888]: Health check cleared: MON_DOWN (was: 1/5 mons down, quorum np0005625199,np0005625201,np0005625200,np0005625204)
Feb 20 09:41:37 np0005625203.localdomain ceph-mon[286888]: Cluster is now healthy
Feb 20 09:41:37 np0005625203.localdomain ceph-mon[286888]: overall HEALTH_OK
Feb 20 09:41:37 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:37 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:37 np0005625203.localdomain sudo[287230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Feb 20 09:41:38 np0005625203.localdomain sudo[287230]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:38 np0005625203.localdomain sudo[287230]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:38 np0005625203.localdomain sudo[287248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph
Feb 20 09:41:38 np0005625203.localdomain sudo[287248]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:38 np0005625203.localdomain sudo[287248]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:39 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:39 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:39 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:39 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:39 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:39 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:41:39 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 09:41:39 np0005625203.localdomain ceph-mon[286888]: Updating np0005625199.localdomain:/etc/ceph/ceph.conf
Feb 20 09:41:39 np0005625203.localdomain ceph-mon[286888]: Updating np0005625200.localdomain:/etc/ceph/ceph.conf
Feb 20 09:41:39 np0005625203.localdomain ceph-mon[286888]: Updating np0005625201.localdomain:/etc/ceph/ceph.conf
Feb 20 09:41:39 np0005625203.localdomain ceph-mon[286888]: Updating np0005625202.localdomain:/etc/ceph/ceph.conf
Feb 20 09:41:39 np0005625203.localdomain ceph-mon[286888]: Updating np0005625203.localdomain:/etc/ceph/ceph.conf
Feb 20 09:41:39 np0005625203.localdomain ceph-mon[286888]: Updating np0005625204.localdomain:/etc/ceph/ceph.conf
Feb 20 09:41:39 np0005625203.localdomain ceph-mon[286888]: from='client.17400 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005625202", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 20 09:41:39 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:41:39 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "mon metadata", "id": "np0005625202"} : dispatch
Feb 20 09:41:39 np0005625203.localdomain sudo[287266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:41:39 np0005625203.localdomain sudo[287266]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:39 np0005625203.localdomain sudo[287266]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:39 np0005625203.localdomain sudo[287284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:41:39 np0005625203.localdomain sudo[287284]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:39 np0005625203.localdomain sudo[287284]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:39 np0005625203.localdomain sudo[287302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:41:39 np0005625203.localdomain sudo[287302]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:39 np0005625203.localdomain sudo[287302]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:39 np0005625203.localdomain sudo[287336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:41:39 np0005625203.localdomain sudo[287336]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:39 np0005625203.localdomain sudo[287336]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:39 np0005625203.localdomain sudo[287354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:41:39 np0005625203.localdomain sudo[287354]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:39 np0005625203.localdomain sudo[287354]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:39 np0005625203.localdomain sudo[287372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Feb 20 09:41:39 np0005625203.localdomain sudo[287372]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:39 np0005625203.localdomain sudo[287372]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:39 np0005625203.localdomain sudo[287390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config
Feb 20 09:41:39 np0005625203.localdomain sudo[287390]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:39 np0005625203.localdomain sudo[287390]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:39 np0005625203.localdomain sudo[287408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config
Feb 20 09:41:39 np0005625203.localdomain sudo[287408]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:39 np0005625203.localdomain sudo[287408]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:39 np0005625203.localdomain sudo[287426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:41:39 np0005625203.localdomain sudo[287426]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:39 np0005625203.localdomain sudo[287426]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:39 np0005625203.localdomain sudo[287444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:41:39 np0005625203.localdomain sudo[287444]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:39 np0005625203.localdomain sudo[287444]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:39 np0005625203.localdomain sudo[287462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:41:39 np0005625203.localdomain sudo[287462]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:39 np0005625203.localdomain sudo[287462]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:39 np0005625203.localdomain sudo[287496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:41:39 np0005625203.localdomain sudo[287496]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:39 np0005625203.localdomain sudo[287496]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:40 np0005625203.localdomain sudo[287514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:41:40 np0005625203.localdomain sudo[287514]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:40 np0005625203.localdomain sudo[287514]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:40 np0005625203.localdomain sudo[287532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:41:40 np0005625203.localdomain sudo[287532]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:40 np0005625203.localdomain sudo[287532]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:40 np0005625203.localdomain ceph-mon[286888]: Updating np0005625199.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:41:40 np0005625203.localdomain ceph-mon[286888]: pgmap v3805: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:41:40 np0005625203.localdomain ceph-mon[286888]: Updating np0005625200.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:41:40 np0005625203.localdomain ceph-mon[286888]: Updating np0005625201.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:41:40 np0005625203.localdomain ceph-mon[286888]: Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:41:40 np0005625203.localdomain ceph-mon[286888]: Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:41:40 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:40 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:40 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:40 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:40 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:40 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:40 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:40 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:40 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:40 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:40 np0005625203.localdomain sudo[287550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:41:40 np0005625203.localdomain sudo[287550]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:40 np0005625203.localdomain sudo[287550]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:41 np0005625203.localdomain ceph-mon[286888]: Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:41:41 np0005625203.localdomain ceph-mon[286888]: from='client.34103 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005625203", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 20 09:41:41 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:41 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:41 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:41 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 09:41:41 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 20 09:41:41 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Feb 20 09:41:41 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:41:42 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:41:42 np0005625203.localdomain podman[287568]: 2026-02-20 09:41:42.665923813 +0000 UTC m=+0.075761037 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 20 09:41:42 np0005625203.localdomain podman[287568]: 2026-02-20 09:41:42.67711636 +0000 UTC m=+0.086953584 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:41:42 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:41:42 np0005625203.localdomain ceph-mon[286888]: pgmap v3806: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:41:42 np0005625203.localdomain ceph-mon[286888]: Reconfiguring mon.np0005625199 (monmap changed)...
Feb 20 09:41:42 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon mon.np0005625199 on np0005625199.localdomain
Feb 20 09:41:42 np0005625203.localdomain ceph-mon[286888]: from='client.34109 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005625204", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 20 09:41:42 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:43 np0005625203.localdomain ceph-mon[286888]: pgmap v3807: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:41:43 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:43 np0005625203.localdomain ceph-mon[286888]: Reconfiguring mgr.np0005625199.ileebh (monmap changed)...
Feb 20 09:41:43 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625199.ileebh", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:41:43 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "mgr services"} : dispatch
Feb 20 09:41:43 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:41:43 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon mgr.np0005625199.ileebh on np0005625199.localdomain
Feb 20 09:41:43 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:43 np0005625203.localdomain ceph-mon[286888]: from='client.? 172.18.0.103:0/2264224357' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Feb 20 09:41:43 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:44 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:44 np0005625203.localdomain ceph-mon[286888]: Reconfiguring crash.np0005625199 (monmap changed)...
Feb 20 09:41:44 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625199", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:41:44 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:41:44 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon crash.np0005625199 on np0005625199.localdomain
Feb 20 09:41:44 np0005625203.localdomain ceph-mon[286888]: from='client.? 172.18.0.103:0/1826137495' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json"} : dispatch
Feb 20 09:41:44 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:44 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:44 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625200", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:41:44 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:41:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:41:45.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:41:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:41:45.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:41:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:41:45.362 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:41:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:41:45.362 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:41:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:41:45.363 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:41:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:41:45.363 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Auditing locally available compute resources for np0005625203.localdomain (node: np0005625203.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 20 09:41:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:41:45.363 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:41:45 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@4(peon) e6 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 20 09:41:45 np0005625203.localdomain ceph-mon[286888]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2554763741' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:41:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:41:45.815 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:41:45 np0005625203.localdomain ceph-mon[286888]: pgmap v3808: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:41:45 np0005625203.localdomain ceph-mon[286888]: Reconfiguring crash.np0005625200 (monmap changed)...
Feb 20 09:41:45 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon crash.np0005625200 on np0005625200.localdomain
Feb 20 09:41:45 np0005625203.localdomain ceph-mon[286888]: from='client.? 172.18.0.106:0/3313861519' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:41:45 np0005625203.localdomain ceph-mon[286888]: from='client.? 172.18.0.107:0/2554763741' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:41:45 np0005625203.localdomain ceph-mon[286888]: from='client.? 172.18.0.106:0/3613321036' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:41:45 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:45 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' 
Feb 20 09:41:45 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 20 09:41:45 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Feb 20 09:41:45 np0005625203.localdomain ceph-mon[286888]: from='mgr.14120 172.18.0.103:0/1462291611' entity='mgr.np0005625199.ileebh' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:41:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:41:45.971 279640 WARNING nova.virt.libvirt.driver [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 20 09:41:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:41:45.972 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Hypervisor/Node resource view: name=np0005625203.localdomain free_ram=12420MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 20 09:41:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:41:45.973 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:41:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:41:45.973 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:41:46 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@4(peon).osd e84 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Feb 20 09:41:46 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@4(peon).osd e84 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Feb 20 09:41:46 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@4(peon).osd e85 e85: 6 total, 6 up, 6 in
Feb 20 09:41:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:41:46.036 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 20 09:41:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:41:46.037 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Final resource view: name=np0005625203.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 20 09:41:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:41:46.059 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:41:46 np0005625203.localdomain sshd[26776]: pam_unix(sshd:session): session closed for user ceph-admin
Feb 20 09:41:46 np0005625203.localdomain sshd[26640]: pam_unix(sshd:session): session closed for user ceph-admin
Feb 20 09:41:46 np0005625203.localdomain sshd[26738]: pam_unix(sshd:session): session closed for user ceph-admin
Feb 20 09:41:46 np0005625203.localdomain sshd[26719]: pam_unix(sshd:session): session closed for user ceph-admin
Feb 20 09:41:46 np0005625203.localdomain sshd[26831]: pam_unix(sshd:session): session closed for user ceph-admin
Feb 20 09:41:46 np0005625203.localdomain sshd[26700]: pam_unix(sshd:session): session closed for user ceph-admin
Feb 20 09:41:46 np0005625203.localdomain sshd[26757]: pam_unix(sshd:session): session closed for user ceph-admin
Feb 20 09:41:46 np0005625203.localdomain systemd[1]: session-25.scope: Deactivated successfully.
Feb 20 09:41:46 np0005625203.localdomain sshd[26657]: pam_unix(sshd:session): session closed for user ceph-admin
Feb 20 09:41:46 np0005625203.localdomain sshd[26795]: pam_unix(sshd:session): session closed for user ceph-admin
Feb 20 09:41:46 np0005625203.localdomain sshd[26679]: pam_unix(sshd:session): session closed for user ceph-admin
Feb 20 09:41:46 np0005625203.localdomain systemd[1]: session-21.scope: Deactivated successfully.
Feb 20 09:41:46 np0005625203.localdomain systemd[1]: session-18.scope: Deactivated successfully.
Feb 20 09:41:46 np0005625203.localdomain sshd[26850]: pam_unix(sshd:session): session closed for user ceph-admin
Feb 20 09:41:46 np0005625203.localdomain sshd[26814]: pam_unix(sshd:session): session closed for user ceph-admin
Feb 20 09:41:46 np0005625203.localdomain systemd[1]: session-17.scope: Deactivated successfully.
Feb 20 09:41:46 np0005625203.localdomain systemd[1]: session-16.scope: Deactivated successfully.
Feb 20 09:41:46 np0005625203.localdomain systemd[1]: session-23.scope: Deactivated successfully.
Feb 20 09:41:46 np0005625203.localdomain systemd[1]: session-14.scope: Deactivated successfully.
Feb 20 09:41:46 np0005625203.localdomain systemd[1]: session-24.scope: Deactivated successfully.
Feb 20 09:41:46 np0005625203.localdomain systemd[1]: session-20.scope: Deactivated successfully.
Feb 20 09:41:46 np0005625203.localdomain systemd[1]: session-19.scope: Deactivated successfully.
Feb 20 09:41:46 np0005625203.localdomain systemd[1]: session-26.scope: Deactivated successfully.
Feb 20 09:41:46 np0005625203.localdomain systemd[1]: session-26.scope: Consumed 3min 27.555s CPU time.
Feb 20 09:41:46 np0005625203.localdomain systemd[1]: session-22.scope: Deactivated successfully.
Feb 20 09:41:46 np0005625203.localdomain systemd-logind[759]: Session 22 logged out. Waiting for processes to exit.
Feb 20 09:41:46 np0005625203.localdomain systemd-logind[759]: Session 20 logged out. Waiting for processes to exit.
Feb 20 09:41:46 np0005625203.localdomain systemd-logind[759]: Session 14 logged out. Waiting for processes to exit.
Feb 20 09:41:46 np0005625203.localdomain systemd-logind[759]: Session 18 logged out. Waiting for processes to exit.
Feb 20 09:41:46 np0005625203.localdomain systemd-logind[759]: Session 23 logged out. Waiting for processes to exit.
Feb 20 09:41:46 np0005625203.localdomain systemd-logind[759]: Session 19 logged out. Waiting for processes to exit.
Feb 20 09:41:46 np0005625203.localdomain systemd-logind[759]: Session 26 logged out. Waiting for processes to exit.
Feb 20 09:41:46 np0005625203.localdomain systemd-logind[759]: Session 24 logged out. Waiting for processes to exit.
Feb 20 09:41:46 np0005625203.localdomain systemd-logind[759]: Session 17 logged out. Waiting for processes to exit.
Feb 20 09:41:46 np0005625203.localdomain systemd-logind[759]: Session 16 logged out. Waiting for processes to exit.
Feb 20 09:41:46 np0005625203.localdomain systemd-logind[759]: Session 21 logged out. Waiting for processes to exit.
Feb 20 09:41:46 np0005625203.localdomain systemd-logind[759]: Session 25 logged out. Waiting for processes to exit.
Feb 20 09:41:46 np0005625203.localdomain systemd-logind[759]: Removed session 25.
Feb 20 09:41:46 np0005625203.localdomain systemd-logind[759]: Removed session 21.
Feb 20 09:41:46 np0005625203.localdomain systemd-logind[759]: Removed session 18.
Feb 20 09:41:46 np0005625203.localdomain systemd-logind[759]: Removed session 17.
Feb 20 09:41:46 np0005625203.localdomain systemd-logind[759]: Removed session 16.
Feb 20 09:41:46 np0005625203.localdomain systemd-logind[759]: Removed session 23.
Feb 20 09:41:46 np0005625203.localdomain systemd-logind[759]: Removed session 14.
Feb 20 09:41:46 np0005625203.localdomain systemd-logind[759]: Removed session 24.
Feb 20 09:41:46 np0005625203.localdomain systemd-logind[759]: Removed session 20.
Feb 20 09:41:46 np0005625203.localdomain systemd-logind[759]: Removed session 19.
Feb 20 09:41:46 np0005625203.localdomain systemd-logind[759]: Removed session 26.
Feb 20 09:41:46 np0005625203.localdomain systemd-logind[759]: Removed session 22.
Feb 20 09:41:46 np0005625203.localdomain sshd[287628]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:41:46 np0005625203.localdomain sshd[287628]: Accepted publickey for ceph-admin from 192.168.122.105 port 59576 ssh2: RSA SHA256:ReOVVWzTOjcz49zJan4sHrxgFDyL1hzGQqhA1t8e47Y
Feb 20 09:41:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:41:46.478 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:41:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:41:46.482 279640 DEBUG nova.compute.provider_tree [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Inventory has not changed in ProviderTree for provider: e5d5157a-2df2-4f51-b5fb-cd2da3a8584e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 20 09:41:46 np0005625203.localdomain systemd-logind[759]: New session 63 of user ceph-admin.
Feb 20 09:41:46 np0005625203.localdomain systemd[1]: Started Session 63 of User ceph-admin.
Feb 20 09:41:46 np0005625203.localdomain sshd[287628]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Feb 20 09:41:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:41:46.515 279640 DEBUG nova.scheduler.client.report [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Inventory has not changed for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 20 09:41:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:41:46.517 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Compute_service record updated for np0005625203.localdomain:np0005625203.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 20 09:41:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:41:46.517 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.544s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:41:46 np0005625203.localdomain sudo[287634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:41:46 np0005625203.localdomain sudo[287634]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:46 np0005625203.localdomain sudo[287634]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:46 np0005625203.localdomain sudo[287652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Feb 20 09:41:46 np0005625203.localdomain sudo[287652]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:46 np0005625203.localdomain ceph-mon[286888]: from='client.? 172.18.0.103:0/2662030267' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Feb 20 09:41:46 np0005625203.localdomain ceph-mon[286888]: Activating manager daemon np0005625201.mtnyvu
Feb 20 09:41:46 np0005625203.localdomain ceph-mon[286888]: osdmap e85: 6 total, 6 up, 6 in
Feb 20 09:41:46 np0005625203.localdomain ceph-mon[286888]: from='client.? 172.18.0.103:0/2662030267' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished
Feb 20 09:41:46 np0005625203.localdomain ceph-mon[286888]: mgrmap e15: np0005625201.mtnyvu(active, starting, since 0.0872194s), standbys: np0005625200.ypbkax, np0005625202.arwxwo, np0005625203.lonygy, np0005625204.exgrzx
Feb 20 09:41:46 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "mon metadata", "id": "np0005625199"} : dispatch
Feb 20 09:41:46 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "mon metadata", "id": "np0005625200"} : dispatch
Feb 20 09:41:46 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "mon metadata", "id": "np0005625201"} : dispatch
Feb 20 09:41:46 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "mon metadata", "id": "np0005625202"} : dispatch
Feb 20 09:41:46 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:41:46 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "mon metadata", "id": "np0005625204"} : dispatch
Feb 20 09:41:46 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "mds metadata", "who": "mds.np0005625202.akhmop"} : dispatch
Feb 20 09:41:46 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "mds metadata", "who": "mds.np0005625204.wnsphl"} : dispatch
Feb 20 09:41:46 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "mds metadata", "who": "mds.np0005625203.zsrwgk"} : dispatch
Feb 20 09:41:46 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "mgr metadata", "who": "np0005625201.mtnyvu", "id": "np0005625201.mtnyvu"} : dispatch
Feb 20 09:41:46 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "mgr metadata", "who": "np0005625200.ypbkax", "id": "np0005625200.ypbkax"} : dispatch
Feb 20 09:41:46 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "mgr metadata", "who": "np0005625202.arwxwo", "id": "np0005625202.arwxwo"} : dispatch
Feb 20 09:41:46 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "mgr metadata", "who": "np0005625203.lonygy", "id": "np0005625203.lonygy"} : dispatch
Feb 20 09:41:46 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "mgr metadata", "who": "np0005625204.exgrzx", "id": "np0005625204.exgrzx"} : dispatch
Feb 20 09:41:46 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 20 09:41:46 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 20 09:41:46 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 20 09:41:46 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "osd metadata", "id": 3} : dispatch
Feb 20 09:41:46 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "osd metadata", "id": 4} : dispatch
Feb 20 09:41:46 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "osd metadata", "id": 5} : dispatch
Feb 20 09:41:46 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "osd metadata", "id": 4} : dispatch
Feb 20 09:41:46 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "osd metadata", "id": 5} : dispatch
Feb 20 09:41:46 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "mds metadata"} : dispatch
Feb 20 09:41:46 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "osd metadata"} : dispatch
Feb 20 09:41:46 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "mon metadata"} : dispatch
Feb 20 09:41:46 np0005625203.localdomain ceph-mon[286888]: Manager daemon np0005625201.mtnyvu is now available
Feb 20 09:41:46 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005625201.mtnyvu/mirror_snapshot_schedule"} : dispatch
Feb 20 09:41:46 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005625201.mtnyvu/mirror_snapshot_schedule"} : dispatch
Feb 20 09:41:46 np0005625203.localdomain ceph-mon[286888]: from='client.? 172.18.0.108:0/191250644' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:41:46 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005625201.mtnyvu/trash_purge_schedule"} : dispatch
Feb 20 09:41:46 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005625201.mtnyvu/trash_purge_schedule"} : dispatch
Feb 20 09:41:46 np0005625203.localdomain ceph-mon[286888]: from='client.? 172.18.0.107:0/262602945' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:41:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:41:47.513 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:41:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:41:47.514 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:41:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:41:47.515 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:41:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:41:47.515 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:41:47 np0005625203.localdomain podman[287743]: 2026-02-20 09:41:47.532452089 +0000 UTC m=+0.092462225 container exec f6bf94ad66e54cdd610286b233c639418eb2b995978f1a145d6cfa77a016071d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203, com.redhat.component=rhceph-container, name=rhceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, distribution-scope=public, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, build-date=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.42.2, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, ceph=True, release=1770267347, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_BRANCH=main, architecture=x86_64, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 09:41:47 np0005625203.localdomain podman[287743]: 2026-02-20 09:41:47.668355026 +0000 UTC m=+0.228365182 container exec_died f6bf94ad66e54cdd610286b233c639418eb2b995978f1a145d6cfa77a016071d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, io.buildah.version=1.42.2, org.opencontainers.image.created=2026-02-09T10:25:24Z, name=rhceph, GIT_BRANCH=main, architecture=x86_64, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, GIT_CLEAN=True, build-date=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main)
Feb 20 09:41:47 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@4(peon).osd e85 _set_new_cache_sizes cache_size:1019612353 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:41:47 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:41:48 np0005625203.localdomain podman[287812]: 2026-02-20 09:41:48.033393439 +0000 UTC m=+0.086161848 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:41:48 np0005625203.localdomain ceph-mon[286888]: mgrmap e16: np0005625201.mtnyvu(active, since 1.08829s), standbys: np0005625200.ypbkax, np0005625202.arwxwo, np0005625203.lonygy, np0005625204.exgrzx
Feb 20 09:41:48 np0005625203.localdomain ceph-mon[286888]: pgmap v5: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:41:48 np0005625203.localdomain ceph-mon[286888]: from='client.? 172.18.0.108:0/1988365290' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:41:48 np0005625203.localdomain podman[287812]: 2026-02-20 09:41:48.086093191 +0000 UTC m=+0.138861560 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, io.buildah.version=1.41.3)
Feb 20 09:41:48 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:41:48 np0005625203.localdomain sudo[287652]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:48 np0005625203.localdomain sudo[287890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:41:48 np0005625203.localdomain sudo[287890]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:48 np0005625203.localdomain sudo[287890]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:48 np0005625203.localdomain sudo[287908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 09:41:48 np0005625203.localdomain sudo[287908]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:49 np0005625203.localdomain ceph-mon[286888]: [20/Feb/2026:09:41:47] ENGINE Bus STARTING
Feb 20 09:41:49 np0005625203.localdomain ceph-mon[286888]: [20/Feb/2026:09:41:47] ENGINE Serving on http://172.18.0.105:8765
Feb 20 09:41:49 np0005625203.localdomain ceph-mon[286888]: pgmap v6: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:41:49 np0005625203.localdomain ceph-mon[286888]: mgrmap e17: np0005625201.mtnyvu(active, since 2s), standbys: np0005625200.ypbkax, np0005625202.arwxwo, np0005625203.lonygy, np0005625204.exgrzx
Feb 20 09:41:49 np0005625203.localdomain ceph-mon[286888]: [20/Feb/2026:09:41:48] ENGINE Serving on https://172.18.0.105:7150
Feb 20 09:41:49 np0005625203.localdomain ceph-mon[286888]: [20/Feb/2026:09:41:48] ENGINE Bus STARTED
Feb 20 09:41:49 np0005625203.localdomain ceph-mon[286888]: [20/Feb/2026:09:41:48] ENGINE Client ('172.18.0.105', 35862) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Feb 20 09:41:49 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:41:49 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:41:49 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:41:49 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:41:49 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:41:49 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:41:49 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:41:49 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:41:49 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:41:49 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:41:49 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:41:49 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:41:49 np0005625203.localdomain sudo[287908]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:41:49.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:41:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:41:49.341 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 20 09:41:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:41:49.342 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 20 09:41:49 np0005625203.localdomain sudo[287959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:41:49 np0005625203.localdomain sshd[287976]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:41:49 np0005625203.localdomain sudo[287959]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:41:49.370 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 20 09:41:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:41:49.371 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:41:49 np0005625203.localdomain sudo[287959]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:49 np0005625203.localdomain sudo[287979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 list-networks
Feb 20 09:41:49 np0005625203.localdomain sudo[287979]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:49 np0005625203.localdomain sshd[287976]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:41:49 np0005625203.localdomain sudo[287979]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:50 np0005625203.localdomain sudo[288016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Feb 20 09:41:50 np0005625203.localdomain sudo[288016]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:50 np0005625203.localdomain sudo[288016]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:50 np0005625203.localdomain sudo[288034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph
Feb 20 09:41:50 np0005625203.localdomain sudo[288034]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:50 np0005625203.localdomain sudo[288034]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:50 np0005625203.localdomain sudo[288052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:41:50 np0005625203.localdomain sudo[288052]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:50 np0005625203.localdomain sudo[288052]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:50 np0005625203.localdomain sudo[288070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:41:50 np0005625203.localdomain sudo[288070]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:50 np0005625203.localdomain sudo[288070]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:50 np0005625203.localdomain sudo[288088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:41:50 np0005625203.localdomain sudo[288088]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:50 np0005625203.localdomain sudo[288088]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:50 np0005625203.localdomain sudo[288122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:41:50 np0005625203.localdomain sudo[288122]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:50 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:41:50 np0005625203.localdomain sudo[288122]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:50 np0005625203.localdomain sudo[288143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:41:50 np0005625203.localdomain sudo[288143]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:50 np0005625203.localdomain sudo[288143]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:50 np0005625203.localdomain podman[288140]: 2026-02-20 09:41:50.649616506 +0000 UTC m=+0.093785065 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, org.opencontainers.image.created=2026-02-05T04:57:10Z, version=9.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., name=ubi9/ubi-minimal, container_name=openstack_network_exporter, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, build-date=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, distribution-scope=public, release=1770267347, vcs-type=git, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.buildah.version=1.33.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Feb 20 09:41:50 np0005625203.localdomain podman[288140]: 2026-02-20 09:41:50.693332265 +0000 UTC m=+0.137500824 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, release=1770267347, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, name=ubi9/ubi-minimal, maintainer=Red Hat, Inc., architecture=x86_64, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, build-date=2026-02-05T04:57:10Z, version=9.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-type=git, io.openshift.tags=minimal rhel9)
Feb 20 09:41:50 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:41:50 np0005625203.localdomain sudo[288175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Feb 20 09:41:50 np0005625203.localdomain sudo[288175]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:50 np0005625203.localdomain sudo[288175]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:50 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:41:50 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:41:50 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config rm", "who": "osd/host:np0005625201", "name": "osd_memory_target"} : dispatch
Feb 20 09:41:50 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config rm", "who": "osd/host:np0005625201", "name": "osd_memory_target"} : dispatch
Feb 20 09:41:50 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:41:50 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:41:50 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config rm", "who": "osd/host:np0005625200", "name": "osd_memory_target"} : dispatch
Feb 20 09:41:50 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config rm", "who": "osd/host:np0005625200", "name": "osd_memory_target"} : dispatch
Feb 20 09:41:50 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:41:50 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:41:50 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Feb 20 09:41:50 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Feb 20 09:41:50 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:41:50 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:41:50 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:41:50 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:41:50 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Feb 20 09:41:50 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Feb 20 09:41:50 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Feb 20 09:41:50 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Feb 20 09:41:50 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:41:50 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Feb 20 09:41:50 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Feb 20 09:41:50 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:41:50 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config rm", "who": "osd/host:np0005625199", "name": "osd_memory_target"} : dispatch
Feb 20 09:41:50 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config rm", "who": "osd/host:np0005625199", "name": "osd_memory_target"} : dispatch
Feb 20 09:41:50 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Feb 20 09:41:50 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Feb 20 09:41:50 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Feb 20 09:41:50 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Feb 20 09:41:50 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:41:50 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 09:41:50 np0005625203.localdomain ceph-mon[286888]: mgrmap e18: np0005625201.mtnyvu(active, since 4s), standbys: np0005625200.ypbkax, np0005625202.arwxwo, np0005625203.lonygy, np0005625204.exgrzx
Feb 20 09:41:50 np0005625203.localdomain sudo[288197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config
Feb 20 09:41:50 np0005625203.localdomain sudo[288197]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:50 np0005625203.localdomain sudo[288197]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:50 np0005625203.localdomain sudo[288215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config
Feb 20 09:41:50 np0005625203.localdomain sudo[288215]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:50 np0005625203.localdomain sudo[288215]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:50 np0005625203.localdomain sudo[288233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:41:50 np0005625203.localdomain sudo[288233]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:50 np0005625203.localdomain sudo[288233]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:51 np0005625203.localdomain sudo[288251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:41:51 np0005625203.localdomain sudo[288251]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:51 np0005625203.localdomain sudo[288251]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:51 np0005625203.localdomain sudo[288269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:41:51 np0005625203.localdomain sudo[288269]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:51 np0005625203.localdomain sudo[288269]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:51 np0005625203.localdomain sudo[288303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:41:51 np0005625203.localdomain sudo[288303]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:51 np0005625203.localdomain sudo[288303]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:51 np0005625203.localdomain sudo[288321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:41:51 np0005625203.localdomain sudo[288321]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:51 np0005625203.localdomain sudo[288321]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:51 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:41:51.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:41:51 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:41:51.360 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:41:51 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:41:51.361 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 20 09:41:51 np0005625203.localdomain sudo[288339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:41:51 np0005625203.localdomain sudo[288339]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:51 np0005625203.localdomain sudo[288339]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:51 np0005625203.localdomain sudo[288357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Feb 20 09:41:51 np0005625203.localdomain sudo[288357]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:51 np0005625203.localdomain sudo[288357]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:51 np0005625203.localdomain sudo[288375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph
Feb 20 09:41:51 np0005625203.localdomain sudo[288375]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:51 np0005625203.localdomain sudo[288375]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:51 np0005625203.localdomain sudo[288393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.client.admin.keyring.new
Feb 20 09:41:51 np0005625203.localdomain sudo[288393]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:51 np0005625203.localdomain sudo[288393]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:51 np0005625203.localdomain sudo[288411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:41:51 np0005625203.localdomain sudo[288411]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:51 np0005625203.localdomain sudo[288411]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:51 np0005625203.localdomain ceph-mon[286888]: Adjusting osd_memory_target on np0005625202.localdomain to 836.6M
Feb 20 09:41:51 np0005625203.localdomain ceph-mon[286888]: Unable to set osd_memory_target on np0005625202.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Feb 20 09:41:51 np0005625203.localdomain ceph-mon[286888]: Adjusting osd_memory_target on np0005625203.localdomain to 836.6M
Feb 20 09:41:51 np0005625203.localdomain ceph-mon[286888]: Adjusting osd_memory_target on np0005625204.localdomain to 836.6M
Feb 20 09:41:51 np0005625203.localdomain ceph-mon[286888]: Unable to set osd_memory_target on np0005625203.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Feb 20 09:41:51 np0005625203.localdomain ceph-mon[286888]: Unable to set osd_memory_target on np0005625204.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Feb 20 09:41:51 np0005625203.localdomain ceph-mon[286888]: Updating np0005625199.localdomain:/etc/ceph/ceph.conf
Feb 20 09:41:51 np0005625203.localdomain ceph-mon[286888]: Updating np0005625200.localdomain:/etc/ceph/ceph.conf
Feb 20 09:41:51 np0005625203.localdomain ceph-mon[286888]: Updating np0005625201.localdomain:/etc/ceph/ceph.conf
Feb 20 09:41:51 np0005625203.localdomain ceph-mon[286888]: Updating np0005625202.localdomain:/etc/ceph/ceph.conf
Feb 20 09:41:51 np0005625203.localdomain ceph-mon[286888]: Updating np0005625203.localdomain:/etc/ceph/ceph.conf
Feb 20 09:41:51 np0005625203.localdomain ceph-mon[286888]: Updating np0005625204.localdomain:/etc/ceph/ceph.conf
Feb 20 09:41:51 np0005625203.localdomain ceph-mon[286888]: pgmap v7: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:41:51 np0005625203.localdomain ceph-mon[286888]: Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:41:51 np0005625203.localdomain ceph-mon[286888]: Updating np0005625201.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:41:51 np0005625203.localdomain ceph-mon[286888]: Updating np0005625199.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:41:51 np0005625203.localdomain ceph-mon[286888]: Updating np0005625200.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:41:51 np0005625203.localdomain ceph-mon[286888]: Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:41:51 np0005625203.localdomain ceph-mon[286888]: Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:41:51 np0005625203.localdomain ceph-mon[286888]: Standby manager daemon np0005625199.ileebh started
Feb 20 09:41:51 np0005625203.localdomain sudo[288429]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.client.admin.keyring.new
Feb 20 09:41:51 np0005625203.localdomain sudo[288429]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:51 np0005625203.localdomain sudo[288429]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:51 np0005625203.localdomain sudo[288463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.client.admin.keyring.new
Feb 20 09:41:51 np0005625203.localdomain sudo[288463]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:51 np0005625203.localdomain sudo[288463]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:52 np0005625203.localdomain sudo[288481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.client.admin.keyring.new
Feb 20 09:41:52 np0005625203.localdomain sudo[288481]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:52 np0005625203.localdomain sudo[288481]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:52 np0005625203.localdomain sudo[288499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Feb 20 09:41:52 np0005625203.localdomain sudo[288499]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:52 np0005625203.localdomain sudo[288499]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:52 np0005625203.localdomain sudo[288517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config
Feb 20 09:41:52 np0005625203.localdomain sudo[288517]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:52 np0005625203.localdomain sudo[288517]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:52 np0005625203.localdomain sudo[288535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config
Feb 20 09:41:52 np0005625203.localdomain sudo[288535]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:52 np0005625203.localdomain sudo[288535]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:52 np0005625203.localdomain sudo[288553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring.new
Feb 20 09:41:52 np0005625203.localdomain sudo[288553]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:52 np0005625203.localdomain sudo[288553]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:52 np0005625203.localdomain sudo[288571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:41:52 np0005625203.localdomain sudo[288571]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:52 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:41:52 np0005625203.localdomain sudo[288571]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:52 np0005625203.localdomain sudo[288595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring.new
Feb 20 09:41:52 np0005625203.localdomain sudo[288595]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:52 np0005625203.localdomain sudo[288595]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:52 np0005625203.localdomain podman[288589]: 2026-02-20 09:41:52.540144617 +0000 UTC m=+0.079201130 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 20 09:41:52 np0005625203.localdomain podman[288589]: 2026-02-20 09:41:52.550290375 +0000 UTC m=+0.089346858 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3)
Feb 20 09:41:52 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:41:52 np0005625203.localdomain sudo[288640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring.new
Feb 20 09:41:52 np0005625203.localdomain sudo[288640]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:52 np0005625203.localdomain sudo[288640]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:52 np0005625203.localdomain sudo[288658]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring.new
Feb 20 09:41:52 np0005625203.localdomain sudo[288658]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:52 np0005625203.localdomain sudo[288658]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:52 np0005625203.localdomain ceph-mon[286888]: Updating np0005625202.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 09:41:52 np0005625203.localdomain ceph-mon[286888]: Updating np0005625199.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 09:41:52 np0005625203.localdomain ceph-mon[286888]: Updating np0005625204.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 09:41:52 np0005625203.localdomain ceph-mon[286888]: Updating np0005625200.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 09:41:52 np0005625203.localdomain ceph-mon[286888]: Updating np0005625201.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 09:41:52 np0005625203.localdomain ceph-mon[286888]: Updating np0005625203.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 09:41:52 np0005625203.localdomain ceph-mon[286888]: mgrmap e19: np0005625201.mtnyvu(active, since 6s), standbys: np0005625199.ileebh, np0005625200.ypbkax, np0005625202.arwxwo, np0005625203.lonygy, np0005625204.exgrzx
Feb 20 09:41:52 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "mgr metadata", "who": "np0005625199.ileebh", "id": "np0005625199.ileebh"} : dispatch
Feb 20 09:41:52 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:41:52 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:41:52 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:41:52 np0005625203.localdomain sudo[288676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring.new /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 09:41:52 np0005625203.localdomain sudo[288676]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:52 np0005625203.localdomain sudo[288676]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:52 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@4(peon).osd e85 _set_new_cache_sizes cache_size:1020044309 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:41:53 np0005625203.localdomain sudo[288694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:41:53 np0005625203.localdomain sudo[288694]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:41:53 np0005625203.localdomain sudo[288694]: pam_unix(sudo:session): session closed for user root
Feb 20 09:41:53 np0005625203.localdomain ceph-mon[286888]: Updating np0005625199.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 09:41:53 np0005625203.localdomain ceph-mon[286888]: Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 09:41:53 np0005625203.localdomain ceph-mon[286888]: pgmap v8: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:41:53 np0005625203.localdomain ceph-mon[286888]: Updating np0005625201.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 09:41:53 np0005625203.localdomain ceph-mon[286888]: Updating np0005625200.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 09:41:53 np0005625203.localdomain ceph-mon[286888]: Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 09:41:53 np0005625203.localdomain ceph-mon[286888]: Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 09:41:53 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:41:53 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:41:53 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:41:53 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:41:53 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:41:53 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:41:53 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:41:53 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:41:53 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:41:53 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:41:53 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 09:41:53 np0005625203.localdomain ceph-mon[286888]: Reconfiguring mon.np0005625200 (monmap changed)...
Feb 20 09:41:53 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 20 09:41:53 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Feb 20 09:41:53 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:41:53 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon mon.np0005625200 on np0005625200.localdomain
Feb 20 09:41:55 np0005625203.localdomain ceph-mon[286888]: pgmap v9: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail; 29 KiB/s rd, 0 B/s wr, 16 op/s
Feb 20 09:41:55 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:41:55 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:41:55 np0005625203.localdomain ceph-mon[286888]: Reconfiguring mgr.np0005625200.ypbkax (monmap changed)...
Feb 20 09:41:55 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625200.ypbkax", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:41:55 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625200.ypbkax", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:41:55 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "mgr services"} : dispatch
Feb 20 09:41:55 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:41:55 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon mgr.np0005625200.ypbkax on np0005625200.localdomain
Feb 20 09:41:55 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:41:55 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:41:55 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 20 09:41:55 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Feb 20 09:41:55 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:41:56 np0005625203.localdomain ceph-mon[286888]: Reconfiguring mon.np0005625201 (monmap changed)...
Feb 20 09:41:56 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon mon.np0005625201 on np0005625201.localdomain
Feb 20 09:41:56 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:41:56 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:41:56 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625201.mtnyvu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:41:56 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625201.mtnyvu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:41:56 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "mgr services"} : dispatch
Feb 20 09:41:56 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:41:56 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:41:57 np0005625203.localdomain ceph-mon[286888]: pgmap v10: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail; 23 KiB/s rd, 0 B/s wr, 12 op/s
Feb 20 09:41:57 np0005625203.localdomain ceph-mon[286888]: Reconfiguring mgr.np0005625201.mtnyvu (monmap changed)...
Feb 20 09:41:57 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon mgr.np0005625201.mtnyvu on np0005625201.localdomain
Feb 20 09:41:57 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:41:57 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:41:57 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625201", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:41:57 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625201", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:41:57 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:41:57 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@4(peon).osd e85 _set_new_cache_sizes cache_size:1020054480 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:41:58 np0005625203.localdomain ceph-mon[286888]: Reconfiguring crash.np0005625201 (monmap changed)...
Feb 20 09:41:58 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon crash.np0005625201 on np0005625201.localdomain
Feb 20 09:41:58 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:41:58 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:41:58 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625202", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:41:58 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625202", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:41:58 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:41:58 np0005625203.localdomain podman[240359]: time="2026-02-20T09:41:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:41:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:41:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154068 "" "Go-http-client/1.1"
Feb 20 09:41:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:41:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17779 "" "Go-http-client/1.1"
Feb 20 09:41:59 np0005625203.localdomain sshd[288713]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:41:59 np0005625203.localdomain ceph-mon[286888]: from='client.27032 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 20 09:41:59 np0005625203.localdomain ceph-mon[286888]: pgmap v11: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 0 B/s wr, 10 op/s
Feb 20 09:41:59 np0005625203.localdomain ceph-mon[286888]: Reconfiguring crash.np0005625202 (monmap changed)...
Feb 20 09:41:59 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon crash.np0005625202 on np0005625202.localdomain
Feb 20 09:41:59 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:41:59 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:41:59 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Feb 20 09:41:59 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:41:59 np0005625203.localdomain sshd[288713]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:42:00 np0005625203.localdomain ceph-mon[286888]: Reconfiguring osd.2 (monmap changed)...
Feb 20 09:42:00 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon osd.2 on np0005625202.localdomain
Feb 20 09:42:00 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:00 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:00 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Feb 20 09:42:00 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:42:00 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:42:00 np0005625203.localdomain systemd[1]: tmp-crun.rEjrX9.mount: Deactivated successfully.
Feb 20 09:42:00 np0005625203.localdomain podman[288715]: 2026-02-20 09:42:00.76882317 +0000 UTC m=+0.089881834 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 20 09:42:00 np0005625203.localdomain podman[288715]: 2026-02-20 09:42:00.805316822 +0000 UTC m=+0.126375496 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 09:42:00 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 09:42:01 np0005625203.localdomain ceph-mon[286888]: pgmap v12: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s
Feb 20 09:42:01 np0005625203.localdomain ceph-mon[286888]: Reconfiguring osd.5 (monmap changed)...
Feb 20 09:42:01 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon osd.5 on np0005625202.localdomain
Feb 20 09:42:01 np0005625203.localdomain ceph-mon[286888]: from='client.17490 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005625199", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 20 09:42:01 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:01 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:01 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625202.akhmop", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 09:42:01 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 ' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625202.akhmop", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 09:42:01 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:42:02 np0005625203.localdomain sshd[288739]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:42:02 np0005625203.localdomain ceph-mgr[285471]: ms_deliver_dispatch: unhandled message 0x557a2fb011e0 mon_map magic: 0 from mon.1 v2:172.18.0.105:3300/0
Feb 20 09:42:02 np0005625203.localdomain ceph-mgr[285471]: client.0 ms_handle_reset on v2:172.18.0.105:3300/0
Feb 20 09:42:02 np0005625203.localdomain ceph-mgr[285471]: client.0 ms_handle_reset on v2:172.18.0.105:3300/0
Feb 20 09:42:02 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@4(peon) e7  my rank is now 3 (was 4)
Feb 20 09:42:02 np0005625203.localdomain ceph-mgr[285471]: client.0 ms_handle_reset on v2:172.18.0.107:3300/0
Feb 20 09:42:02 np0005625203.localdomain ceph-mgr[285471]: client.0 ms_handle_reset on v2:172.18.0.107:3300/0
Feb 20 09:42:02 np0005625203.localdomain ceph-mgr[285471]: ms_deliver_dispatch: unhandled message 0x557a2fb01600 mon_map magic: 0 from mon.0 v2:172.18.0.105:3300/0
Feb 20 09:42:02 np0005625203.localdomain ceph-mon[286888]: log_channel(cluster) log [INF] : mon.np0005625203 calling monitor election
Feb 20 09:42:02 np0005625203.localdomain ceph-mon[286888]: paxos.3).electionLogic(24) init, last seen epoch 24
Feb 20 09:42:02 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@3(electing) e7 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 20 09:42:02 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@3(electing) e7 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 20 09:42:02 np0005625203.localdomain ceph-osd[32924]: --2- [v2:172.18.0.107:6804/1308148804,v1:172.18.0.107:6805/1308148804] >> [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] conn(0x558a63f2ec00 0x558a620ba000 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request get_initial_auth_request returned -2
Feb 20 09:42:02 np0005625203.localdomain ceph-osd[31970]: --2- [v2:172.18.0.107:6800/1356076611,v1:172.18.0.107:6801/1356076611] >> [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] conn(0x55f8e8f26400 0x55f8e8a4db80 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request get_initial_auth_request returned -2
Feb 20 09:42:02 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@3(electing) e7 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 20 09:42:02 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@3(peon) e7 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 20 09:42:02 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon mds.mds.np0005625202.akhmop on np0005625202.localdomain
Feb 20 09:42:02 np0005625203.localdomain ceph-mon[286888]: pgmap v13: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s
Feb 20 09:42:02 np0005625203.localdomain ceph-mon[286888]: from='client.17496 -' entity='client.admin' cmd=[{"prefix": "orch daemon rm", "names": ["mon.np0005625199"], "force": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 20 09:42:02 np0005625203.localdomain ceph-mon[286888]: Remove daemons mon.np0005625199
Feb 20 09:42:02 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "quorum_status"} : dispatch
Feb 20 09:42:02 np0005625203.localdomain ceph-mon[286888]: Safe to remove mon.np0005625199: new quorum should be ['np0005625201', 'np0005625200', 'np0005625204', 'np0005625203', 'np0005625202'] (from ['np0005625201', 'np0005625200', 'np0005625204', 'np0005625203', 'np0005625202'])
Feb 20 09:42:02 np0005625203.localdomain ceph-mon[286888]: Removing monitor np0005625199 from monmap...
Feb 20 09:42:02 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "mon rm", "name": "np0005625199"} : dispatch
Feb 20 09:42:02 np0005625203.localdomain ceph-mon[286888]: Removing daemon mon.np0005625199 from np0005625199.localdomain -- ports []
Feb 20 09:42:02 np0005625203.localdomain ceph-mon[286888]: Reconfiguring mgr.np0005625202.arwxwo (monmap changed)...
Feb 20 09:42:02 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625202.arwxwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:42:02 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625202.arwxwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:42:02 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "mon metadata", "id": "np0005625200"} : dispatch
Feb 20 09:42:02 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "mon metadata", "id": "np0005625201"} : dispatch
Feb 20 09:42:02 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "mon metadata", "id": "np0005625202"} : dispatch
Feb 20 09:42:02 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:42:02 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "mon metadata", "id": "np0005625204"} : dispatch
Feb 20 09:42:02 np0005625203.localdomain ceph-mon[286888]: mon.np0005625200 calling monitor election
Feb 20 09:42:02 np0005625203.localdomain ceph-mon[286888]: mon.np0005625202 calling monitor election
Feb 20 09:42:02 np0005625203.localdomain ceph-mon[286888]: mon.np0005625204 calling monitor election
Feb 20 09:42:02 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203 calling monitor election
Feb 20 09:42:02 np0005625203.localdomain ceph-mon[286888]: mon.np0005625201 calling monitor election
Feb 20 09:42:02 np0005625203.localdomain ceph-mon[286888]: mon.np0005625201 is new leader, mons np0005625201,np0005625200,np0005625204,np0005625203,np0005625202 in quorum (ranks 0,1,2,3,4)
Feb 20 09:42:02 np0005625203.localdomain ceph-mon[286888]: monmap epoch 7
Feb 20 09:42:02 np0005625203.localdomain ceph-mon[286888]: fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:42:02 np0005625203.localdomain ceph-mon[286888]: last_changed 2026-02-20T09:42:02.105420+0000
Feb 20 09:42:02 np0005625203.localdomain ceph-mon[286888]: created 2026-02-20T07:36:51.191305+0000
Feb 20 09:42:02 np0005625203.localdomain ceph-mon[286888]: min_mon_release 18 (reef)
Feb 20 09:42:02 np0005625203.localdomain ceph-mon[286888]: election_strategy: 1
Feb 20 09:42:02 np0005625203.localdomain ceph-mon[286888]: 0: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005625201
Feb 20 09:42:02 np0005625203.localdomain ceph-mon[286888]: 1: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005625200
Feb 20 09:42:02 np0005625203.localdomain ceph-mon[286888]: 2: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005625204
Feb 20 09:42:02 np0005625203.localdomain ceph-mon[286888]: 3: [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] mon.np0005625203
Feb 20 09:42:02 np0005625203.localdomain ceph-mon[286888]: 4: [v2:172.18.0.106:3300/0,v1:172.18.0.106:6789/0] mon.np0005625202
Feb 20 09:42:02 np0005625203.localdomain ceph-mon[286888]: fsmap cephfs:1 {0=mds.np0005625203.zsrwgk=up:active} 2 up:standby
Feb 20 09:42:02 np0005625203.localdomain ceph-mon[286888]: osdmap e85: 6 total, 6 up, 6 in
Feb 20 09:42:02 np0005625203.localdomain ceph-mon[286888]: mgrmap e19: np0005625201.mtnyvu(active, since 16s), standbys: np0005625199.ileebh, np0005625200.ypbkax, np0005625202.arwxwo, np0005625203.lonygy, np0005625204.exgrzx
Feb 20 09:42:02 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "mgr services"} : dispatch
Feb 20 09:42:02 np0005625203.localdomain ceph-mon[286888]: overall HEALTH_OK
Feb 20 09:42:02 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:42:02 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@3(peon).osd e85 _set_new_cache_sizes cache_size:1020054726 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:42:03 np0005625203.localdomain sshd[288739]: Invalid user ts3 from 103.61.123.132 port 46546
Feb 20 09:42:03 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon mgr.np0005625202.arwxwo on np0005625202.localdomain
Feb 20 09:42:03 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:03 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:03 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 20 09:42:03 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Feb 20 09:42:03 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:42:03 np0005625203.localdomain sshd[288739]: Received disconnect from 103.61.123.132 port 46546:11: Bye Bye [preauth]
Feb 20 09:42:03 np0005625203.localdomain sshd[288739]: Disconnected from invalid user ts3 103.61.123.132 port 46546 [preauth]
Feb 20 09:42:04 np0005625203.localdomain sudo[288741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:42:04 np0005625203.localdomain sudo[288741]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:42:04 np0005625203.localdomain sudo[288741]: pam_unix(sudo:session): session closed for user root
Feb 20 09:42:04 np0005625203.localdomain sudo[288759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:42:04 np0005625203.localdomain sudo[288759]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:42:04 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:42:04 np0005625203.localdomain ceph-mon[286888]: Reconfiguring mon.np0005625202 (monmap changed)...
Feb 20 09:42:04 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon mon.np0005625202 on np0005625202.localdomain
Feb 20 09:42:04 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:04 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:04 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625203", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:42:04 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:42:04 np0005625203.localdomain podman[288777]: 2026-02-20 09:42:04.550475388 +0000 UTC m=+0.085886289 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 20 09:42:04 np0005625203.localdomain podman[288777]: 2026-02-20 09:42:04.559486371 +0000 UTC m=+0.094897272 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 20 09:42:04 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:42:04 np0005625203.localdomain podman[288816]: 
Feb 20 09:42:04 np0005625203.localdomain podman[288816]: 2026-02-20 09:42:04.937513331 +0000 UTC m=+0.076452583 container create e5fe6f3cd80ed7f6438b40a4dc5ba9b30caad83a0f7d1c7f26e1e80f6c51c61e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=modest_torvalds, ceph=True, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, version=7, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, RELEASE=main, name=rhceph, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, io.buildah.version=1.42.2, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2026-02-09T10:25:24Z)
Feb 20 09:42:04 np0005625203.localdomain systemd[1]: Started libpod-conmon-e5fe6f3cd80ed7f6438b40a4dc5ba9b30caad83a0f7d1c7f26e1e80f6c51c61e.scope.
Feb 20 09:42:05 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:42:05 np0005625203.localdomain podman[288816]: 2026-02-20 09:42:04.906257813 +0000 UTC m=+0.045197105 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 09:42:05 np0005625203.localdomain podman[288816]: 2026-02-20 09:42:05.022819342 +0000 UTC m=+0.161758584 container init e5fe6f3cd80ed7f6438b40a4dc5ba9b30caad83a0f7d1c7f26e1e80f6c51c61e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=modest_torvalds, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, ceph=True, version=7, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, com.redhat.component=rhceph-container)
Feb 20 09:42:05 np0005625203.localdomain podman[288816]: 2026-02-20 09:42:05.036844321 +0000 UTC m=+0.175783573 container start e5fe6f3cd80ed7f6438b40a4dc5ba9b30caad83a0f7d1c7f26e1e80f6c51c61e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=modest_torvalds, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, build-date=2026-02-09T10:25:24Z, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, io.buildah.version=1.42.2, architecture=x86_64, RELEASE=main, GIT_BRANCH=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, version=7, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Feb 20 09:42:05 np0005625203.localdomain podman[288816]: 2026-02-20 09:42:05.037221702 +0000 UTC m=+0.176160984 container attach e5fe6f3cd80ed7f6438b40a4dc5ba9b30caad83a0f7d1c7f26e1e80f6c51c61e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=modest_torvalds, vendor=Red Hat, Inc., architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, ceph=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, GIT_CLEAN=True, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, build-date=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, release=1770267347, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 09:42:05 np0005625203.localdomain systemd[1]: libpod-e5fe6f3cd80ed7f6438b40a4dc5ba9b30caad83a0f7d1c7f26e1e80f6c51c61e.scope: Deactivated successfully.
Feb 20 09:42:05 np0005625203.localdomain modest_torvalds[288831]: 167 167
Feb 20 09:42:05 np0005625203.localdomain podman[288816]: 2026-02-20 09:42:05.043592002 +0000 UTC m=+0.182531274 container died e5fe6f3cd80ed7f6438b40a4dc5ba9b30caad83a0f7d1c7f26e1e80f6c51c61e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=modest_torvalds, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, version=7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, GIT_BRANCH=main, CEPH_POINT_RELEASE=, ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.created=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, io.buildah.version=1.42.2, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, RELEASE=main)
Feb 20 09:42:05 np0005625203.localdomain podman[288836]: 2026-02-20 09:42:05.131443872 +0000 UTC m=+0.076036791 container remove e5fe6f3cd80ed7f6438b40a4dc5ba9b30caad83a0f7d1c7f26e1e80f6c51c61e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=modest_torvalds, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, name=rhceph, io.buildah.version=1.42.2, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, distribution-scope=public)
Feb 20 09:42:05 np0005625203.localdomain systemd[1]: libpod-conmon-e5fe6f3cd80ed7f6438b40a4dc5ba9b30caad83a0f7d1c7f26e1e80f6c51c61e.scope: Deactivated successfully.
Feb 20 09:42:05 np0005625203.localdomain sudo[288759]: pam_unix(sudo:session): session closed for user root
Feb 20 09:42:05 np0005625203.localdomain sudo[288853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:42:05 np0005625203.localdomain sudo[288853]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:42:05 np0005625203.localdomain sudo[288853]: pam_unix(sudo:session): session closed for user root
Feb 20 09:42:05 np0005625203.localdomain sudo[288871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:42:05 np0005625203.localdomain sudo[288871]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:42:05 np0005625203.localdomain ceph-mon[286888]: pgmap v14: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s
Feb 20 09:42:05 np0005625203.localdomain ceph-mon[286888]: Reconfiguring crash.np0005625203 (monmap changed)...
Feb 20 09:42:05 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon crash.np0005625203 on np0005625203.localdomain
Feb 20 09:42:05 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:05 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:05 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Feb 20 09:42:05 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:42:05 np0005625203.localdomain systemd[1]: tmp-crun.lPHNSm.mount: Deactivated successfully.
Feb 20 09:42:05 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-343773bf3cb244a138895d0c8569b89a8d2a3113f3c136b75b0da7603410049b-merged.mount: Deactivated successfully.
Feb 20 09:42:05 np0005625203.localdomain podman[288905]: 
Feb 20 09:42:05 np0005625203.localdomain podman[288905]: 2026-02-20 09:42:05.903902228 +0000 UTC m=+0.078295171 container create 256bc656f23fa713059ee3e72766c3a5874e7df80ee145772eea802bbc97339c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_chatterjee, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, CEPH_POINT_RELEASE=, architecture=x86_64, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, RELEASE=main, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, build-date=2026-02-09T10:25:24Z, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, com.redhat.component=rhceph-container, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, io.buildah.version=1.42.2, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 09:42:05 np0005625203.localdomain systemd[1]: Started libpod-conmon-256bc656f23fa713059ee3e72766c3a5874e7df80ee145772eea802bbc97339c.scope.
Feb 20 09:42:05 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:42:05 np0005625203.localdomain podman[288905]: 2026-02-20 09:42:05.964854156 +0000 UTC m=+0.139247089 container init 256bc656f23fa713059ee3e72766c3a5874e7df80ee145772eea802bbc97339c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_chatterjee, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, io.openshift.expose-services=, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., GIT_CLEAN=True, io.buildah.version=1.42.2, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, RELEASE=main, version=7, architecture=x86_64, distribution-scope=public, name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 09:42:05 np0005625203.localdomain podman[288905]: 2026-02-20 09:42:05.871972529 +0000 UTC m=+0.046365502 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 09:42:05 np0005625203.localdomain podman[288905]: 2026-02-20 09:42:05.974993123 +0000 UTC m=+0.149386026 container start 256bc656f23fa713059ee3e72766c3a5874e7df80ee145772eea802bbc97339c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_chatterjee, name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, io.openshift.tags=rhceph ceph, ceph=True, io.buildah.version=1.42.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, distribution-scope=public, CEPH_POINT_RELEASE=, io.openshift.expose-services=, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-09T10:25:24Z, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, RELEASE=main)
Feb 20 09:42:05 np0005625203.localdomain podman[288905]: 2026-02-20 09:42:05.975350474 +0000 UTC m=+0.149743408 container attach 256bc656f23fa713059ee3e72766c3a5874e7df80ee145772eea802bbc97339c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_chatterjee, com.redhat.component=rhceph-container, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1770267347, RELEASE=main, version=7, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=rhceph ceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, build-date=2026-02-09T10:25:24Z, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Feb 20 09:42:05 np0005625203.localdomain stupefied_chatterjee[288920]: 167 167
Feb 20 09:42:05 np0005625203.localdomain systemd[1]: libpod-256bc656f23fa713059ee3e72766c3a5874e7df80ee145772eea802bbc97339c.scope: Deactivated successfully.
Feb 20 09:42:05 np0005625203.localdomain podman[288905]: 2026-02-20 09:42:05.988674902 +0000 UTC m=+0.163067805 container died 256bc656f23fa713059ee3e72766c3a5874e7df80ee145772eea802bbc97339c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_chatterjee, com.redhat.component=rhceph-container, build-date=2026-02-09T10:25:24Z, io.openshift.expose-services=, version=7, GIT_CLEAN=True, name=rhceph, GIT_BRANCH=main, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, architecture=x86_64, distribution-scope=public, RELEASE=main, ceph=True, description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 09:42:06 np0005625203.localdomain podman[288925]: 2026-02-20 09:42:06.08224191 +0000 UTC m=+0.082061829 container remove 256bc656f23fa713059ee3e72766c3a5874e7df80ee145772eea802bbc97339c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_chatterjee, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2026-02-09T10:25:24Z, build-date=2026-02-09T10:25:24Z, ceph=True, distribution-scope=public, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, release=1770267347, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, io.buildah.version=1.42.2, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Feb 20 09:42:06 np0005625203.localdomain systemd[1]: libpod-conmon-256bc656f23fa713059ee3e72766c3a5874e7df80ee145772eea802bbc97339c.scope: Deactivated successfully.
Feb 20 09:42:06 np0005625203.localdomain sudo[288871]: pam_unix(sudo:session): session closed for user root
Feb 20 09:42:06 np0005625203.localdomain sudo[288948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:42:06 np0005625203.localdomain sudo[288948]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:42:06 np0005625203.localdomain sudo[288948]: pam_unix(sudo:session): session closed for user root
Feb 20 09:42:06 np0005625203.localdomain sudo[288966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:42:06 np0005625203.localdomain sudo[288966]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:42:06 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-9389d94743773840798ec8d682fd89d8e3aa724a0bb29b30f9ee17a0fbb9a41b-merged.mount: Deactivated successfully.
Feb 20 09:42:06 np0005625203.localdomain ceph-mon[286888]: Reconfiguring osd.1 (monmap changed)...
Feb 20 09:42:06 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon osd.1 on np0005625203.localdomain
Feb 20 09:42:06 np0005625203.localdomain ceph-mon[286888]: from='client.27205 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005625199.localdomain", "label": "mon", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 09:42:06 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:06 np0005625203.localdomain ceph-mon[286888]: Removed label mon from host np0005625199.localdomain
Feb 20 09:42:06 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:06 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:06 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Feb 20 09:42:06 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:42:06 np0005625203.localdomain podman[289000]: 
Feb 20 09:42:06 np0005625203.localdomain podman[289000]: 2026-02-20 09:42:06.921984223 +0000 UTC m=+0.076423813 container create a44891f8a5256d49afe5184acd435a69ccde6637b09ce4dbabfa982222f800c1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_elion, distribution-scope=public, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, release=1770267347, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_BRANCH=main, ceph=True, GIT_CLEAN=True, version=7, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, CEPH_POINT_RELEASE=, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.42.2, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git)
Feb 20 09:42:06 np0005625203.localdomain systemd[1]: Started libpod-conmon-a44891f8a5256d49afe5184acd435a69ccde6637b09ce4dbabfa982222f800c1.scope.
Feb 20 09:42:06 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:42:06 np0005625203.localdomain podman[289000]: 2026-02-20 09:42:06.978732528 +0000 UTC m=+0.133172118 container init a44891f8a5256d49afe5184acd435a69ccde6637b09ce4dbabfa982222f800c1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_elion, org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, vcs-type=git, GIT_CLEAN=True, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, io.buildah.version=1.42.2, architecture=x86_64, RELEASE=main, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Feb 20 09:42:06 np0005625203.localdomain podman[289000]: 2026-02-20 09:42:06.890646562 +0000 UTC m=+0.045086172 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 09:42:06 np0005625203.localdomain compassionate_elion[289015]: 167 167
Feb 20 09:42:06 np0005625203.localdomain podman[289000]: 2026-02-20 09:42:06.994318377 +0000 UTC m=+0.148757967 container start a44891f8a5256d49afe5184acd435a69ccde6637b09ce4dbabfa982222f800c1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_elion, RELEASE=main, CEPH_POINT_RELEASE=, distribution-scope=public, GIT_CLEAN=True, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, com.redhat.component=rhceph-container, vcs-type=git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1770267347, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-09T10:25:24Z, name=rhceph, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, GIT_BRANCH=main)
Feb 20 09:42:06 np0005625203.localdomain podman[289000]: 2026-02-20 09:42:06.994950906 +0000 UTC m=+0.149390546 container attach a44891f8a5256d49afe5184acd435a69ccde6637b09ce4dbabfa982222f800c1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_elion, io.buildah.version=1.42.2, GIT_BRANCH=main, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, name=rhceph, io.openshift.tags=rhceph ceph, release=1770267347, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.created=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, RELEASE=main, ceph=True, version=7, com.redhat.component=rhceph-container, GIT_CLEAN=True, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git)
Feb 20 09:42:06 np0005625203.localdomain systemd[1]: libpod-a44891f8a5256d49afe5184acd435a69ccde6637b09ce4dbabfa982222f800c1.scope: Deactivated successfully.
Feb 20 09:42:06 np0005625203.localdomain podman[289000]: 2026-02-20 09:42:06.996688751 +0000 UTC m=+0.151128341 container died a44891f8a5256d49afe5184acd435a69ccde6637b09ce4dbabfa982222f800c1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_elion, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.openshift.expose-services=, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.42.2, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, GIT_BRANCH=main, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z, ceph=True, distribution-scope=public, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, GIT_CLEAN=True, CEPH_POINT_RELEASE=)
Feb 20 09:42:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:42:07 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 09:42:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:42:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:42:07 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 09:42:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:42:07 np0005625203.localdomain podman[289020]: 2026-02-20 09:42:07.113591109 +0000 UTC m=+0.104417579 container remove a44891f8a5256d49afe5184acd435a69ccde6637b09ce4dbabfa982222f800c1 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=compassionate_elion, version=7, vendor=Red Hat, Inc., distribution-scope=public, release=1770267347, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.42.2, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2026-02-09T10:25:24Z, ceph=True, io.openshift.tags=rhceph ceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, name=rhceph, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_CLEAN=True, vcs-type=git)
Feb 20 09:42:07 np0005625203.localdomain systemd[1]: libpod-conmon-a44891f8a5256d49afe5184acd435a69ccde6637b09ce4dbabfa982222f800c1.scope: Deactivated successfully.
Feb 20 09:42:07 np0005625203.localdomain sudo[288966]: pam_unix(sudo:session): session closed for user root
Feb 20 09:42:07 np0005625203.localdomain sudo[289042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:42:07 np0005625203.localdomain sudo[289042]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:42:07 np0005625203.localdomain sudo[289042]: pam_unix(sudo:session): session closed for user root
Feb 20 09:42:07 np0005625203.localdomain sudo[289060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:42:07 np0005625203.localdomain sudo[289060]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:42:07 np0005625203.localdomain systemd[1]: tmp-crun.4WGB92.mount: Deactivated successfully.
Feb 20 09:42:07 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-fdb71f6d39314f4133f0757a1671cb0bf3235ece47cdeb36ffa67bd6eb44c1eb-merged.mount: Deactivated successfully.
Feb 20 09:42:07 np0005625203.localdomain ceph-mon[286888]: pgmap v15: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:42:07 np0005625203.localdomain ceph-mon[286888]: Reconfiguring osd.4 (monmap changed)...
Feb 20 09:42:07 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon osd.4 on np0005625203.localdomain
Feb 20 09:42:07 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:07 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:07 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:07 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625203.zsrwgk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 09:42:07 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:42:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:42:07.655 161112 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:42:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:42:07.656 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:42:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:42:07.657 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:42:07 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@3(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:42:07 np0005625203.localdomain podman[289096]: 
Feb 20 09:42:07 np0005625203.localdomain podman[289096]: 2026-02-20 09:42:07.943410581 +0000 UTC m=+0.089106949 container create b5d3d3a90846f6851a34584a06e1b96e33fe38396b281227870d70721ad7932f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=unruffled_dirac, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., ceph=True, version=7, io.openshift.expose-services=, build-date=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, vcs-type=git, CEPH_POINT_RELEASE=, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, release=1770267347, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Feb 20 09:42:07 np0005625203.localdomain systemd[1]: Started libpod-conmon-b5d3d3a90846f6851a34584a06e1b96e33fe38396b281227870d70721ad7932f.scope.
Feb 20 09:42:08 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:42:08 np0005625203.localdomain podman[289096]: 2026-02-20 09:42:07.902726478 +0000 UTC m=+0.048422876 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 09:42:08 np0005625203.localdomain podman[289096]: 2026-02-20 09:42:08.025252653 +0000 UTC m=+0.170949021 container init b5d3d3a90846f6851a34584a06e1b96e33fe38396b281227870d70721ad7932f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=unruffled_dirac, org.opencontainers.image.created=2026-02-09T10:25:24Z, architecture=x86_64, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, io.buildah.version=1.42.2, release=1770267347, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7)
Feb 20 09:42:08 np0005625203.localdomain podman[289096]: 2026-02-20 09:42:08.033697817 +0000 UTC m=+0.179394195 container start b5d3d3a90846f6851a34584a06e1b96e33fe38396b281227870d70721ad7932f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=unruffled_dirac, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., RELEASE=main, io.openshift.expose-services=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.42.2, description=Red Hat Ceph Storage 7, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, build-date=2026-02-09T10:25:24Z, GIT_BRANCH=main)
Feb 20 09:42:08 np0005625203.localdomain podman[289096]: 2026-02-20 09:42:08.033934665 +0000 UTC m=+0.179631043 container attach b5d3d3a90846f6851a34584a06e1b96e33fe38396b281227870d70721ad7932f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=unruffled_dirac, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, distribution-scope=public, build-date=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., ceph=True, io.buildah.version=1.42.2, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, RELEASE=main, io.openshift.expose-services=, release=1770267347, architecture=x86_64, GIT_CLEAN=True, GIT_BRANCH=main)
Feb 20 09:42:08 np0005625203.localdomain unruffled_dirac[289111]: 167 167
Feb 20 09:42:08 np0005625203.localdomain systemd[1]: libpod-b5d3d3a90846f6851a34584a06e1b96e33fe38396b281227870d70721ad7932f.scope: Deactivated successfully.
Feb 20 09:42:08 np0005625203.localdomain podman[289096]: 2026-02-20 09:42:08.037830966 +0000 UTC m=+0.183527334 container died b5d3d3a90846f6851a34584a06e1b96e33fe38396b281227870d70721ad7932f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=unruffled_dirac, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, distribution-scope=public, vcs-type=git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, io.buildah.version=1.42.2, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z, ceph=True, release=1770267347, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container)
Feb 20 09:42:08 np0005625203.localdomain podman[289116]: 2026-02-20 09:42:08.130239408 +0000 UTC m=+0.082968127 container remove b5d3d3a90846f6851a34584a06e1b96e33fe38396b281227870d70721ad7932f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=unruffled_dirac, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, GIT_BRANCH=main, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, vcs-type=git, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, version=7)
Feb 20 09:42:08 np0005625203.localdomain systemd[1]: libpod-conmon-b5d3d3a90846f6851a34584a06e1b96e33fe38396b281227870d70721ad7932f.scope: Deactivated successfully.
Feb 20 09:42:08 np0005625203.localdomain sudo[289060]: pam_unix(sudo:session): session closed for user root
Feb 20 09:42:08 np0005625203.localdomain sudo[289132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:42:08 np0005625203.localdomain sudo[289132]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:42:08 np0005625203.localdomain sudo[289132]: pam_unix(sudo:session): session closed for user root
Feb 20 09:42:08 np0005625203.localdomain sudo[289150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:42:08 np0005625203.localdomain sudo[289150]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:42:08 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-1468bc2ecf6048516e9016990905163799f7751518f911c7893c83f72f7d2dad-merged.mount: Deactivated successfully.
Feb 20 09:42:08 np0005625203.localdomain ceph-mon[286888]: from='client.27215 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005625199.localdomain", "label": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 09:42:08 np0005625203.localdomain ceph-mon[286888]: Removed label mgr from host np0005625199.localdomain
Feb 20 09:42:08 np0005625203.localdomain ceph-mon[286888]: Reconfiguring mds.mds.np0005625203.zsrwgk (monmap changed)...
Feb 20 09:42:08 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon mds.mds.np0005625203.zsrwgk on np0005625203.localdomain
Feb 20 09:42:08 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:08 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:08 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625203.lonygy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:42:08 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "mgr services"} : dispatch
Feb 20 09:42:08 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:42:08 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:08 np0005625203.localdomain podman[289184]: 
Feb 20 09:42:08 np0005625203.localdomain podman[289184]: 2026-02-20 09:42:08.824085515 +0000 UTC m=+0.072901363 container create c6f91860e799f746982cd4de3b91010a1f7f18e896698c1e727d3f53d88bc903 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_meninsky, CEPH_POINT_RELEASE=, build-date=2026-02-09T10:25:24Z, ceph=True, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, name=rhceph, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.42.2, RELEASE=main, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, version=7, architecture=x86_64, org.opencontainers.image.created=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 09:42:08 np0005625203.localdomain systemd[1]: Started libpod-conmon-c6f91860e799f746982cd4de3b91010a1f7f18e896698c1e727d3f53d88bc903.scope.
Feb 20 09:42:08 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:42:08 np0005625203.localdomain podman[289184]: 2026-02-20 09:42:08.889949136 +0000 UTC m=+0.138764964 container init c6f91860e799f746982cd4de3b91010a1f7f18e896698c1e727d3f53d88bc903 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_meninsky, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2026-02-09T10:25:24Z, release=1770267347, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.42.2, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vcs-type=git, CEPH_POINT_RELEASE=, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, distribution-scope=public, version=7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph)
Feb 20 09:42:08 np0005625203.localdomain podman[289184]: 2026-02-20 09:42:08.794025044 +0000 UTC m=+0.042840912 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 09:42:08 np0005625203.localdomain podman[289184]: 2026-02-20 09:42:08.909762296 +0000 UTC m=+0.158578144 container start c6f91860e799f746982cd4de3b91010a1f7f18e896698c1e727d3f53d88bc903 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_meninsky, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, ceph=True, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-09T10:25:24Z, build-date=2026-02-09T10:25:24Z, name=rhceph, CEPH_POINT_RELEASE=, version=7, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, io.buildah.version=1.42.2, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1770267347, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, distribution-scope=public)
Feb 20 09:42:08 np0005625203.localdomain podman[289184]: 2026-02-20 09:42:08.910093117 +0000 UTC m=+0.158908975 container attach c6f91860e799f746982cd4de3b91010a1f7f18e896698c1e727d3f53d88bc903 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_meninsky, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, release=1770267347, distribution-scope=public, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, io.openshift.expose-services=, GIT_BRANCH=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, vcs-type=git, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, io.buildah.version=1.42.2, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True)
Feb 20 09:42:08 np0005625203.localdomain vigilant_meninsky[289199]: 167 167
Feb 20 09:42:08 np0005625203.localdomain systemd[1]: libpod-c6f91860e799f746982cd4de3b91010a1f7f18e896698c1e727d3f53d88bc903.scope: Deactivated successfully.
Feb 20 09:42:08 np0005625203.localdomain podman[289184]: 2026-02-20 09:42:08.91372616 +0000 UTC m=+0.162542088 container died c6f91860e799f746982cd4de3b91010a1f7f18e896698c1e727d3f53d88bc903 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_meninsky, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, io.buildah.version=1.42.2, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, GIT_CLEAN=True, release=1770267347, distribution-scope=public, version=7, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 09:42:09 np0005625203.localdomain podman[289204]: 2026-02-20 09:42:09.009226819 +0000 UTC m=+0.083515805 container remove c6f91860e799f746982cd4de3b91010a1f7f18e896698c1e727d3f53d88bc903 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigilant_meninsky, RELEASE=main, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, name=rhceph, GIT_CLEAN=True, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, build-date=2026-02-09T10:25:24Z, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, io.buildah.version=1.42.2, release=1770267347, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, architecture=x86_64, org.opencontainers.image.created=2026-02-09T10:25:24Z)
Feb 20 09:42:09 np0005625203.localdomain systemd[1]: libpod-conmon-c6f91860e799f746982cd4de3b91010a1f7f18e896698c1e727d3f53d88bc903.scope: Deactivated successfully.
Feb 20 09:42:09 np0005625203.localdomain sshd[289221]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:42:09 np0005625203.localdomain sudo[289150]: pam_unix(sudo:session): session closed for user root
Feb 20 09:42:09 np0005625203.localdomain sudo[289222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:42:09 np0005625203.localdomain sudo[289222]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:42:09 np0005625203.localdomain sudo[289222]: pam_unix(sudo:session): session closed for user root
Feb 20 09:42:09 np0005625203.localdomain sudo[289241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:42:09 np0005625203.localdomain sudo[289241]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:42:09 np0005625203.localdomain sshd[289221]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:42:09 np0005625203.localdomain systemd[1]: tmp-crun.coqQeG.mount: Deactivated successfully.
Feb 20 09:42:09 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-cbfcfdeb145f44b35c30c4c09fcf6a48d4470c5e4cf8b6d19eb879bed1c0417d-merged.mount: Deactivated successfully.
Feb 20 09:42:09 np0005625203.localdomain ceph-mon[286888]: pgmap v16: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:42:09 np0005625203.localdomain ceph-mon[286888]: Reconfiguring mgr.np0005625203.lonygy (monmap changed)...
Feb 20 09:42:09 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon mgr.np0005625203.lonygy on np0005625203.localdomain
Feb 20 09:42:09 np0005625203.localdomain ceph-mon[286888]: from='client.34174 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005625199.localdomain", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 09:42:09 np0005625203.localdomain ceph-mon[286888]: Removed label _admin from host np0005625199.localdomain
Feb 20 09:42:09 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:09 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:09 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 20 09:42:09 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Feb 20 09:42:09 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:42:09 np0005625203.localdomain podman[289276]: 
Feb 20 09:42:09 np0005625203.localdomain podman[289276]: 2026-02-20 09:42:09.749647573 +0000 UTC m=+0.074046098 container create 05f668e811bf8b3133dcfdf46d922ed4e245d331238265776c4b1939d2e8114b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_faraday, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, release=1770267347, architecture=x86_64, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, name=rhceph, io.openshift.expose-services=, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.42.2, version=7, io.openshift.tags=rhceph ceph, build-date=2026-02-09T10:25:24Z)
Feb 20 09:42:09 np0005625203.localdomain systemd[1]: Started libpod-conmon-05f668e811bf8b3133dcfdf46d922ed4e245d331238265776c4b1939d2e8114b.scope.
Feb 20 09:42:09 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:42:09 np0005625203.localdomain podman[289276]: 2026-02-20 09:42:09.718207979 +0000 UTC m=+0.042606534 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 09:42:09 np0005625203.localdomain podman[289276]: 2026-02-20 09:42:09.818619252 +0000 UTC m=+0.143017787 container init 05f668e811bf8b3133dcfdf46d922ed4e245d331238265776c4b1939d2e8114b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_faraday, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, vendor=Red Hat, Inc., ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, distribution-scope=public, vcs-type=git, architecture=x86_64, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, RELEASE=main, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers)
Feb 20 09:42:09 np0005625203.localdomain podman[289276]: 2026-02-20 09:42:09.827372466 +0000 UTC m=+0.151770991 container start 05f668e811bf8b3133dcfdf46d922ed4e245d331238265776c4b1939d2e8114b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_faraday, name=rhceph, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, release=1770267347, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-09T10:25:24Z, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.42.2, GIT_CLEAN=True, GIT_BRANCH=main, version=7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 09:42:09 np0005625203.localdomain podman[289276]: 2026-02-20 09:42:09.827650134 +0000 UTC m=+0.152048679 container attach 05f668e811bf8b3133dcfdf46d922ed4e245d331238265776c4b1939d2e8114b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_faraday, io.openshift.tags=rhceph ceph, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, distribution-scope=public, RELEASE=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, name=rhceph, release=1770267347, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.42.2, GIT_BRANCH=main, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Feb 20 09:42:09 np0005625203.localdomain systemd[1]: libpod-05f668e811bf8b3133dcfdf46d922ed4e245d331238265776c4b1939d2e8114b.scope: Deactivated successfully.
Feb 20 09:42:09 np0005625203.localdomain vigorous_faraday[289291]: 167 167
Feb 20 09:42:09 np0005625203.localdomain podman[289276]: 2026-02-20 09:42:09.830297437 +0000 UTC m=+0.154695972 container died 05f668e811bf8b3133dcfdf46d922ed4e245d331238265776c4b1939d2e8114b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_faraday, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, version=7, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, architecture=x86_64, build-date=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.buildah.version=1.42.2, ceph=True, vendor=Red Hat, Inc., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=)
Feb 20 09:42:09 np0005625203.localdomain podman[289296]: 2026-02-20 09:42:09.927989905 +0000 UTC m=+0.084210586 container remove 05f668e811bf8b3133dcfdf46d922ed4e245d331238265776c4b1939d2e8114b (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=vigorous_faraday, release=1770267347, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, name=rhceph, io.buildah.version=1.42.2, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, build-date=2026-02-09T10:25:24Z, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, vcs-type=git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.openshift.expose-services=, version=7)
Feb 20 09:42:09 np0005625203.localdomain systemd[1]: libpod-conmon-05f668e811bf8b3133dcfdf46d922ed4e245d331238265776c4b1939d2e8114b.scope: Deactivated successfully.
Feb 20 09:42:09 np0005625203.localdomain sudo[289241]: pam_unix(sudo:session): session closed for user root
Feb 20 09:42:10 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-8dedc6ad65aa4e75ae365a95b950dbb77230da50810f19ae5968f21900950869-merged.mount: Deactivated successfully.
Feb 20 09:42:10 np0005625203.localdomain ceph-mon[286888]: Reconfiguring mon.np0005625203 (monmap changed)...
Feb 20 09:42:10 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon mon.np0005625203 on np0005625203.localdomain
Feb 20 09:42:10 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:10 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:10 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625204", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:42:10 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:42:11 np0005625203.localdomain ceph-mon[286888]: pgmap v17: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:42:11 np0005625203.localdomain ceph-mon[286888]: Reconfiguring crash.np0005625204 (monmap changed)...
Feb 20 09:42:11 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon crash.np0005625204 on np0005625204.localdomain
Feb 20 09:42:11 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:11 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:11 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Feb 20 09:42:11 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:42:12 np0005625203.localdomain ceph-mon[286888]: Reconfiguring osd.0 (monmap changed)...
Feb 20 09:42:12 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon osd.0 on np0005625204.localdomain
Feb 20 09:42:12 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:12 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:12 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Feb 20 09:42:12 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:42:12 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@3(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:42:13 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:42:13 np0005625203.localdomain ceph-mon[286888]: Reconfiguring osd.3 (monmap changed)...
Feb 20 09:42:13 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon osd.3 on np0005625204.localdomain
Feb 20 09:42:13 np0005625203.localdomain ceph-mon[286888]: pgmap v18: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:42:13 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:13 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:13 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625204.wnsphl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 09:42:13 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:42:13 np0005625203.localdomain podman[289314]: 2026-02-20 09:42:13.76701258 +0000 UTC m=+0.081339987 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb 20 09:42:13 np0005625203.localdomain podman[289314]: 2026-02-20 09:42:13.802319675 +0000 UTC m=+0.116647082 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 20 09:42:13 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:42:14 np0005625203.localdomain ceph-mon[286888]: Reconfiguring mds.mds.np0005625204.wnsphl (monmap changed)...
Feb 20 09:42:14 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon mds.mds.np0005625204.wnsphl on np0005625204.localdomain
Feb 20 09:42:14 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:14 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:14 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625204.exgrzx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:42:14 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "mgr services"} : dispatch
Feb 20 09:42:14 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:42:15 np0005625203.localdomain ceph-mon[286888]: Reconfiguring mgr.np0005625204.exgrzx (monmap changed)...
Feb 20 09:42:15 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon mgr.np0005625204.exgrzx on np0005625204.localdomain
Feb 20 09:42:15 np0005625203.localdomain ceph-mon[286888]: pgmap v19: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:42:15 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:15 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:15 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 20 09:42:15 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Feb 20 09:42:15 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:42:16 np0005625203.localdomain ceph-mon[286888]: Reconfiguring mon.np0005625204 (monmap changed)...
Feb 20 09:42:16 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon mon.np0005625204 on np0005625204.localdomain
Feb 20 09:42:16 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:16 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:16 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:16 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:16 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:16 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:17 np0005625203.localdomain ceph-mon[286888]: pgmap v20: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:42:17 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@3(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:42:18 np0005625203.localdomain sudo[289332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Feb 20 09:42:18 np0005625203.localdomain sudo[289332]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:42:18 np0005625203.localdomain sudo[289332]: pam_unix(sudo:session): session closed for user root
Feb 20 09:42:18 np0005625203.localdomain sudo[289350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph
Feb 20 09:42:18 np0005625203.localdomain sudo[289350]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:42:18 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:42:18 np0005625203.localdomain sudo[289350]: pam_unix(sudo:session): session closed for user root
Feb 20 09:42:18 np0005625203.localdomain sudo[289373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:42:18 np0005625203.localdomain sudo[289373]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:42:18 np0005625203.localdomain sudo[289373]: pam_unix(sudo:session): session closed for user root
Feb 20 09:42:18 np0005625203.localdomain podman[289367]: 2026-02-20 09:42:18.251548969 +0000 UTC m=+0.091799945 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 20 09:42:18 np0005625203.localdomain podman[289367]: 2026-02-20 09:42:18.281563218 +0000 UTC m=+0.121814175 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Feb 20 09:42:18 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:42:18 np0005625203.localdomain sudo[289402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:42:18 np0005625203.localdomain sudo[289402]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:42:18 np0005625203.localdomain sudo[289402]: pam_unix(sudo:session): session closed for user root
Feb 20 09:42:18 np0005625203.localdomain sudo[289426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:42:18 np0005625203.localdomain sudo[289426]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:42:18 np0005625203.localdomain sudo[289426]: pam_unix(sudo:session): session closed for user root
Feb 20 09:42:18 np0005625203.localdomain sudo[289460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:42:18 np0005625203.localdomain sudo[289460]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:42:18 np0005625203.localdomain sudo[289460]: pam_unix(sudo:session): session closed for user root
Feb 20 09:42:18 np0005625203.localdomain sudo[289478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:42:18 np0005625203.localdomain sudo[289478]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:42:18 np0005625203.localdomain sudo[289478]: pam_unix(sudo:session): session closed for user root
Feb 20 09:42:18 np0005625203.localdomain sudo[289496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Feb 20 09:42:18 np0005625203.localdomain sudo[289496]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:42:18 np0005625203.localdomain sudo[289496]: pam_unix(sudo:session): session closed for user root
Feb 20 09:42:18 np0005625203.localdomain sudo[289514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config
Feb 20 09:42:18 np0005625203.localdomain sudo[289514]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:42:18 np0005625203.localdomain sudo[289514]: pam_unix(sudo:session): session closed for user root
Feb 20 09:42:18 np0005625203.localdomain sudo[289532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config
Feb 20 09:42:18 np0005625203.localdomain sudo[289532]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:42:18 np0005625203.localdomain sudo[289532]: pam_unix(sudo:session): session closed for user root
Feb 20 09:42:18 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:18 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:18 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:42:18 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 09:42:18 np0005625203.localdomain ceph-mon[286888]: Removing np0005625199.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:42:18 np0005625203.localdomain ceph-mon[286888]: Updating np0005625200.localdomain:/etc/ceph/ceph.conf
Feb 20 09:42:18 np0005625203.localdomain ceph-mon[286888]: Updating np0005625201.localdomain:/etc/ceph/ceph.conf
Feb 20 09:42:18 np0005625203.localdomain ceph-mon[286888]: Updating np0005625202.localdomain:/etc/ceph/ceph.conf
Feb 20 09:42:18 np0005625203.localdomain ceph-mon[286888]: Updating np0005625203.localdomain:/etc/ceph/ceph.conf
Feb 20 09:42:18 np0005625203.localdomain ceph-mon[286888]: Updating np0005625204.localdomain:/etc/ceph/ceph.conf
Feb 20 09:42:18 np0005625203.localdomain ceph-mon[286888]: Removing np0005625199.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 09:42:18 np0005625203.localdomain ceph-mon[286888]: pgmap v21: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:42:18 np0005625203.localdomain ceph-mon[286888]: Removing np0005625199.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 09:42:18 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:18 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:18 np0005625203.localdomain ceph-mon[286888]: Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:42:18 np0005625203.localdomain ceph-mon[286888]: Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:42:18 np0005625203.localdomain ceph-mon[286888]: Updating np0005625201.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:42:18 np0005625203.localdomain ceph-mon[286888]: Updating np0005625200.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:42:18 np0005625203.localdomain ceph-mon[286888]: Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:42:18 np0005625203.localdomain sudo[289550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:42:18 np0005625203.localdomain sudo[289550]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:42:18 np0005625203.localdomain sudo[289550]: pam_unix(sudo:session): session closed for user root
Feb 20 09:42:19 np0005625203.localdomain sudo[289568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:42:19 np0005625203.localdomain sudo[289568]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:42:19 np0005625203.localdomain sudo[289568]: pam_unix(sudo:session): session closed for user root
Feb 20 09:42:19 np0005625203.localdomain sudo[289586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:42:19 np0005625203.localdomain sudo[289586]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:42:19 np0005625203.localdomain sudo[289586]: pam_unix(sudo:session): session closed for user root
Feb 20 09:42:19 np0005625203.localdomain sudo[289620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:42:19 np0005625203.localdomain sudo[289620]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:42:19 np0005625203.localdomain sudo[289620]: pam_unix(sudo:session): session closed for user root
Feb 20 09:42:19 np0005625203.localdomain sudo[289638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:42:19 np0005625203.localdomain sudo[289638]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:42:19 np0005625203.localdomain sudo[289638]: pam_unix(sudo:session): session closed for user root
Feb 20 09:42:19 np0005625203.localdomain sudo[289656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:42:19 np0005625203.localdomain sudo[289656]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:42:19 np0005625203.localdomain sudo[289656]: pam_unix(sudo:session): session closed for user root
Feb 20 09:42:20 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:20 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:20 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:20 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:20 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:20 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:20 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:20 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:20 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:20 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:20 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:20 np0005625203.localdomain ceph-mon[286888]: Removing daemon mgr.np0005625199.ileebh from np0005625199.localdomain -- ports [9283, 8765]
Feb 20 09:42:20 np0005625203.localdomain ceph-mon[286888]: from='client.34184 -' entity='client.admin' cmd=[{"prefix": "orch host drain", "hostname": "np0005625199.localdomain", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 09:42:20 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:20 np0005625203.localdomain ceph-mon[286888]: Added label _no_schedule to host np0005625199.localdomain
Feb 20 09:42:20 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:20 np0005625203.localdomain ceph-mon[286888]: Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005625199.localdomain
Feb 20 09:42:21 np0005625203.localdomain ceph-mon[286888]: pgmap v22: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:42:21 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:42:21 np0005625203.localdomain podman[289674]: 2026-02-20 09:42:21.76811855 +0000 UTC m=+0.081928495 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, build-date=2026-02-05T04:57:10Z, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-05T04:57:10Z, name=ubi9/ubi-minimal, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, managed_by=edpm_ansible, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, config_id=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.33.7, architecture=x86_64)
Feb 20 09:42:21 np0005625203.localdomain podman[289674]: 2026-02-20 09:42:21.810463116 +0000 UTC m=+0.124273081 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, distribution-scope=public, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, vendor=Red Hat, Inc., config_id=openstack_network_exporter, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.7, name=ubi9/ubi-minimal, build-date=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1770267347, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, managed_by=edpm_ansible, maintainer=Red Hat, Inc.)
Feb 20 09:42:21 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:42:22 np0005625203.localdomain sudo[289694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:42:22 np0005625203.localdomain sudo[289694]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:42:22 np0005625203.localdomain sudo[289694]: pam_unix(sudo:session): session closed for user root
Feb 20 09:42:22 np0005625203.localdomain ceph-mon[286888]: from='client.34223 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "host_pattern": "np0005625199.localdomain", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 20 09:42:22 np0005625203.localdomain ceph-mon[286888]: Removing key for mgr.np0005625199.ileebh
Feb 20 09:42:22 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth rm", "entity": "mgr.np0005625199.ileebh"} : dispatch
Feb 20 09:42:22 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd='[{"prefix": "auth rm", "entity": "mgr.np0005625199.ileebh"}]': finished
Feb 20 09:42:22 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:22 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:22 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 09:42:22 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:42:22 np0005625203.localdomain podman[289712]: 2026-02-20 09:42:22.776672536 +0000 UTC m=+0.086160507 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:42:22 np0005625203.localdomain podman[289712]: 2026-02-20 09:42:22.820262631 +0000 UTC m=+0.129750582 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Feb 20 09:42:22 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:42:22 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@3(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:42:23 np0005625203.localdomain ceph-mon[286888]: pgmap v23: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:42:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005625199.localdomain"} : dispatch
Feb 20 09:42:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005625199.localdomain"}]': finished
Feb 20 09:42:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:42:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 09:42:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 09:42:23 np0005625203.localdomain sudo[289729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:42:23 np0005625203.localdomain sudo[289729]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:42:23 np0005625203.localdomain sudo[289729]: pam_unix(sudo:session): session closed for user root
Feb 20 09:42:24 np0005625203.localdomain ceph-mon[286888]: from='client.34233 -' entity='client.admin' cmd=[{"prefix": "orch host rm", "hostname": "np0005625199.localdomain", "force": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 20 09:42:24 np0005625203.localdomain ceph-mon[286888]: Removed host np0005625199.localdomain
Feb 20 09:42:24 np0005625203.localdomain ceph-mon[286888]: host np0005625199.localdomain `cephadm ls` failed: Cannot decode JSON: 
                                                           Traceback (most recent call last):
                                                             File "/usr/share/ceph/mgr/cephadm/serve.py", line 1540, in _run_cephadm_json
                                                               return json.loads(''.join(out))
                                                             File "/lib64/python3.9/json/__init__.py", line 346, in loads
                                                               return _default_decoder.decode(s)
                                                             File "/lib64/python3.9/json/decoder.py", line 337, in decode
                                                               obj, end = self.raw_decode(s, idx=_w(s, 0).end())
                                                             File "/lib64/python3.9/json/decoder.py", line 355, in raw_decode
                                                               raise JSONDecodeError("Expecting value", s, err.value) from None
                                                           json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
Feb 20 09:42:24 np0005625203.localdomain ceph-mon[286888]: executing refresh((['np0005625199.localdomain', 'np0005625200.localdomain', 'np0005625201.localdomain', 'np0005625202.localdomain', 'np0005625203.localdomain', 'np0005625204.localdomain'],)) failed.
                                                           Traceback (most recent call last):
                                                             File "/usr/share/ceph/mgr/cephadm/utils.py", line 94, in do_work
                                                               return f(*arg)
                                                             File "/usr/share/ceph/mgr/cephadm/serve.py", line 317, in refresh
                                                               and not self.mgr.inventory.has_label(host, SpecialHostLabels.NO_MEMORY_AUTOTUNE)
                                                             File "/usr/share/ceph/mgr/cephadm/inventory.py", line 253, in has_label
                                                               host = self._get_stored_name(host)
                                                             File "/usr/share/ceph/mgr/cephadm/inventory.py", line 181, in _get_stored_name
                                                               self.assert_host(host)
                                                             File "/usr/share/ceph/mgr/cephadm/inventory.py", line 209, in assert_host
                                                               raise OrchestratorError('host %s does not exist' % host)
                                                           orchestrator._interface.OrchestratorError: host np0005625199.localdomain does not exist
Feb 20 09:42:24 np0005625203.localdomain ceph-mon[286888]: Reconfiguring crash.np0005625200 (monmap changed)...
Feb 20 09:42:24 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625200", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:42:24 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:42:24 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon crash.np0005625200 on np0005625200.localdomain
Feb 20 09:42:25 np0005625203.localdomain ceph-mon[286888]: pgmap v24: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:42:25 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:25 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:25 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 20 09:42:25 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Feb 20 09:42:25 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:42:26 np0005625203.localdomain ceph-mon[286888]: Reconfiguring mon.np0005625200 (monmap changed)...
Feb 20 09:42:26 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon mon.np0005625200 on np0005625200.localdomain
Feb 20 09:42:26 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:26 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:26 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625200.ypbkax", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:42:26 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "mgr services"} : dispatch
Feb 20 09:42:26 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:42:26 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:27 np0005625203.localdomain ceph-mon[286888]: Reconfiguring mgr.np0005625200.ypbkax (monmap changed)...
Feb 20 09:42:27 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon mgr.np0005625200.ypbkax on np0005625200.localdomain
Feb 20 09:42:27 np0005625203.localdomain ceph-mon[286888]: pgmap v25: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:42:27 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:27 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:27 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 20 09:42:27 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Feb 20 09:42:27 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:42:27 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@3(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:42:28 np0005625203.localdomain ceph-mon[286888]: Reconfiguring mon.np0005625201 (monmap changed)...
Feb 20 09:42:28 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon mon.np0005625201 on np0005625201.localdomain
Feb 20 09:42:28 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:28 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:28 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625201.mtnyvu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:42:28 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "mgr services"} : dispatch
Feb 20 09:42:28 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:42:28 np0005625203.localdomain podman[240359]: time="2026-02-20T09:42:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:42:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:42:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154068 "" "Go-http-client/1.1"
Feb 20 09:42:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:42:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17785 "" "Go-http-client/1.1"
Feb 20 09:42:29 np0005625203.localdomain ceph-mon[286888]: Reconfiguring mgr.np0005625201.mtnyvu (monmap changed)...
Feb 20 09:42:29 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon mgr.np0005625201.mtnyvu on np0005625201.localdomain
Feb 20 09:42:29 np0005625203.localdomain ceph-mon[286888]: pgmap v26: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:42:29 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:29 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:29 np0005625203.localdomain ceph-mon[286888]: Reconfiguring crash.np0005625201 (monmap changed)...
Feb 20 09:42:29 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625201", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:42:29 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:42:29 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon crash.np0005625201 on np0005625201.localdomain
Feb 20 09:42:29 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:29 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:29 np0005625203.localdomain ceph-mon[286888]: Reconfiguring crash.np0005625202 (monmap changed)...
Feb 20 09:42:29 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625202", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:42:29 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:42:29 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon crash.np0005625202 on np0005625202.localdomain
Feb 20 09:42:31 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:42:31 np0005625203.localdomain ceph-mon[286888]: pgmap v27: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:42:31 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:31 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:31 np0005625203.localdomain ceph-mon[286888]: Reconfiguring osd.2 (monmap changed)...
Feb 20 09:42:31 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Feb 20 09:42:31 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:42:31 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon osd.2 on np0005625202.localdomain
Feb 20 09:42:31 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:31 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:31 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:31 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Feb 20 09:42:31 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:42:31 np0005625203.localdomain podman[289747]: 2026-02-20 09:42:31.773170201 +0000 UTC m=+0.082963098 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 09:42:31 np0005625203.localdomain podman[289747]: 2026-02-20 09:42:31.789633846 +0000 UTC m=+0.099426743 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 20 09:42:31 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 09:42:32 np0005625203.localdomain ceph-mon[286888]: from='client.27071 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 09:42:32 np0005625203.localdomain ceph-mon[286888]: Saving service mon spec with placement label:mon
Feb 20 09:42:32 np0005625203.localdomain ceph-mon[286888]: Reconfiguring osd.5 (monmap changed)...
Feb 20 09:42:32 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon osd.5 on np0005625202.localdomain
Feb 20 09:42:32 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:32 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:32 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625202.akhmop", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 09:42:32 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:42:32 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@3(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:42:33 np0005625203.localdomain sshd[289769]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:42:33 np0005625203.localdomain ceph-mgr[285471]: ms_deliver_dispatch: unhandled message 0x557a2fb00f20 mon_map magic: 0 from mon.0 v2:172.18.0.105:3300/0
Feb 20 09:42:33 np0005625203.localdomain ceph-mon[286888]: log_channel(cluster) log [INF] : mon.np0005625203 calling monitor election
Feb 20 09:42:33 np0005625203.localdomain ceph-mon[286888]: paxos.3).electionLogic(26) init, last seen epoch 26
Feb 20 09:42:33 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@3(electing) e8 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 20 09:42:33 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@3(electing) e8 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 20 09:42:33 np0005625203.localdomain sudo[289771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:42:33 np0005625203.localdomain sudo[289771]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:42:33 np0005625203.localdomain sudo[289771]: pam_unix(sudo:session): session closed for user root
Feb 20 09:42:34 np0005625203.localdomain sshd[289769]: Received disconnect from 152.32.129.236 port 36580:11: Bye Bye [preauth]
Feb 20 09:42:34 np0005625203.localdomain sshd[289769]: Disconnected from authenticating user root 152.32.129.236 port 36580 [preauth]
Feb 20 09:42:34 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:42:34 np0005625203.localdomain podman[289789]: 2026-02-20 09:42:34.774252989 +0000 UTC m=+0.085989492 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 20 09:42:34 np0005625203.localdomain podman[289789]: 2026-02-20 09:42:34.789555109 +0000 UTC m=+0.101291602 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 20 09:42:34 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:42:35 np0005625203.localdomain sshd[289811]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:42:35 np0005625203.localdomain sshd[289811]: Invalid user oracle from 5.253.59.68 port 54320
Feb 20 09:42:35 np0005625203.localdomain sshd[289811]: Received disconnect from 5.253.59.68 port 54320:11: Bye Bye [preauth]
Feb 20 09:42:35 np0005625203.localdomain sshd[289811]: Disconnected from invalid user oracle 5.253.59.68 port 54320 [preauth]
Feb 20 09:42:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:42:37 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 09:42:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:42:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:42:37 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 09:42:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:42:38 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@3(electing) e8 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 20 09:42:38 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@3(electing) e8 handle_timecheck drop unexpected msg
Feb 20 09:42:38 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@3(electing) e8 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 20 09:42:38 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@3(electing) e8 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 20 09:42:38 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@3(peon) e8 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 20 09:42:38 np0005625203.localdomain ceph-mon[286888]: from='client.27076 -' entity='client.admin' cmd=[{"prefix": "orch daemon rm", "names": ["mon.np0005625202"], "force": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 20 09:42:38 np0005625203.localdomain ceph-mon[286888]: Remove daemons mon.np0005625202
Feb 20 09:42:38 np0005625203.localdomain ceph-mon[286888]: Safe to remove mon.np0005625202: new quorum should be ['np0005625201', 'np0005625200', 'np0005625204', 'np0005625203'] (from ['np0005625201', 'np0005625200', 'np0005625204', 'np0005625203'])
Feb 20 09:42:38 np0005625203.localdomain ceph-mon[286888]: Removing monitor np0005625202 from monmap...
Feb 20 09:42:38 np0005625203.localdomain ceph-mon[286888]: Removing daemon mon.np0005625202 from np0005625202.localdomain -- ports []
Feb 20 09:42:38 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "mon metadata", "id": "np0005625200"} : dispatch
Feb 20 09:42:38 np0005625203.localdomain ceph-mon[286888]: mon.np0005625200 calling monitor election
Feb 20 09:42:38 np0005625203.localdomain ceph-mon[286888]: mon.np0005625201 calling monitor election
Feb 20 09:42:38 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203 calling monitor election
Feb 20 09:42:38 np0005625203.localdomain ceph-mon[286888]: mon.np0005625204 calling monitor election
Feb 20 09:42:38 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "mon metadata", "id": "np0005625201"} : dispatch
Feb 20 09:42:38 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:42:38 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "mon metadata", "id": "np0005625204"} : dispatch
Feb 20 09:42:38 np0005625203.localdomain ceph-mon[286888]: Reconfiguring crash.np0005625200 (monmap changed)...
Feb 20 09:42:38 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625200", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:42:38 np0005625203.localdomain ceph-mon[286888]: pgmap v29: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:42:38 np0005625203.localdomain ceph-mon[286888]: pgmap v30: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:42:38 np0005625203.localdomain ceph-mon[286888]: mon.np0005625201 is new leader, mons np0005625201,np0005625200,np0005625203 in quorum (ranks 0,1,3)
Feb 20 09:42:38 np0005625203.localdomain ceph-mon[286888]: overall HEALTH_OK
Feb 20 09:42:38 np0005625203.localdomain ceph-mon[286888]: mon.np0005625201 calling monitor election
Feb 20 09:42:38 np0005625203.localdomain ceph-mon[286888]: mon.np0005625200 calling monitor election
Feb 20 09:42:38 np0005625203.localdomain ceph-mon[286888]: mon.np0005625201 is new leader, mons np0005625201,np0005625200,np0005625204,np0005625203 in quorum (ranks 0,1,2,3)
Feb 20 09:42:38 np0005625203.localdomain ceph-mon[286888]: monmap epoch 8
Feb 20 09:42:38 np0005625203.localdomain ceph-mon[286888]: fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:42:38 np0005625203.localdomain ceph-mon[286888]: last_changed 2026-02-20T09:42:33.617921+0000
Feb 20 09:42:38 np0005625203.localdomain ceph-mon[286888]: created 2026-02-20T07:36:51.191305+0000
Feb 20 09:42:38 np0005625203.localdomain ceph-mon[286888]: min_mon_release 18 (reef)
Feb 20 09:42:38 np0005625203.localdomain ceph-mon[286888]: election_strategy: 1
Feb 20 09:42:38 np0005625203.localdomain ceph-mon[286888]: 0: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005625201
Feb 20 09:42:38 np0005625203.localdomain ceph-mon[286888]: 1: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005625200
Feb 20 09:42:38 np0005625203.localdomain ceph-mon[286888]: 2: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005625204
Feb 20 09:42:38 np0005625203.localdomain ceph-mon[286888]: 3: [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] mon.np0005625203
Feb 20 09:42:38 np0005625203.localdomain ceph-mon[286888]: fsmap cephfs:1 {0=mds.np0005625203.zsrwgk=up:active} 2 up:standby
Feb 20 09:42:38 np0005625203.localdomain ceph-mon[286888]: osdmap e85: 6 total, 6 up, 6 in
Feb 20 09:42:38 np0005625203.localdomain ceph-mon[286888]: mgrmap e19: np0005625201.mtnyvu(active, since 52s), standbys: np0005625199.ileebh, np0005625200.ypbkax, np0005625202.arwxwo, np0005625203.lonygy, np0005625204.exgrzx
Feb 20 09:42:38 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:42:38 np0005625203.localdomain ceph-mon[286888]: overall HEALTH_OK
Feb 20 09:42:38 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:39 np0005625203.localdomain ceph-mon[286888]: pgmap v31: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:42:39 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon crash.np0005625200 on np0005625200.localdomain
Feb 20 09:42:39 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:39 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:39 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625200.ypbkax", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:42:39 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "mgr services"} : dispatch
Feb 20 09:42:39 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:42:40 np0005625203.localdomain ceph-mon[286888]: Reconfiguring mgr.np0005625200.ypbkax (monmap changed)...
Feb 20 09:42:40 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon mgr.np0005625200.ypbkax on np0005625200.localdomain
Feb 20 09:42:40 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:41 np0005625203.localdomain ceph-mon[286888]: pgmap v32: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:42:41 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:41 np0005625203.localdomain ceph-mon[286888]: Reconfiguring mgr.np0005625201.mtnyvu (monmap changed)...
Feb 20 09:42:41 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625201.mtnyvu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:42:41 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "mgr services"} : dispatch
Feb 20 09:42:41 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:42:41 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon mgr.np0005625201.mtnyvu on np0005625201.localdomain
Feb 20 09:42:41 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:42 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@3(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:42:42 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:42 np0005625203.localdomain ceph-mon[286888]: Reconfiguring crash.np0005625201 (monmap changed)...
Feb 20 09:42:42 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625201", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:42:42 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:42:42 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon crash.np0005625201 on np0005625201.localdomain
Feb 20 09:42:42 np0005625203.localdomain ceph-mon[286888]: pgmap v33: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:42:42 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:42 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:42 np0005625203.localdomain ceph-mon[286888]: Reconfiguring crash.np0005625202 (monmap changed)...
Feb 20 09:42:42 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625202", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:42:42 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:42:42 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon crash.np0005625202 on np0005625202.localdomain
Feb 20 09:42:43 np0005625203.localdomain sshd[289813]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:42:44 np0005625203.localdomain sshd[289813]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:42:44 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:42:44 np0005625203.localdomain systemd[1]: tmp-crun.JJdpiT.mount: Deactivated successfully.
Feb 20 09:42:44 np0005625203.localdomain podman[289815]: 2026-02-20 09:42:44.407931546 +0000 UTC m=+0.080558032 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Feb 20 09:42:44 np0005625203.localdomain podman[289815]: 2026-02-20 09:42:44.443381316 +0000 UTC m=+0.116007802 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:42:44 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:42:44 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:44 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:44 np0005625203.localdomain ceph-mon[286888]: Reconfiguring osd.2 (monmap changed)...
Feb 20 09:42:44 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Feb 20 09:42:44 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:42:44 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon osd.2 on np0005625202.localdomain
Feb 20 09:42:44 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:42:45.343 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:42:45 np0005625203.localdomain ceph-mon[286888]: pgmap v34: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:42:45 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:45 np0005625203.localdomain ceph-mon[286888]: Reconfiguring osd.5 (monmap changed)...
Feb 20 09:42:45 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Feb 20 09:42:45 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:42:45 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon osd.5 on np0005625202.localdomain
Feb 20 09:42:45 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:45 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:45 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625202.akhmop", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 09:42:45 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:42:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:42:46.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:42:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:42:46.364 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:42:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:42:46.365 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:42:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:42:46.365 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:42:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:42:46.365 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Auditing locally available compute resources for np0005625203.localdomain (node: np0005625203.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 20 09:42:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:42:46.366 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:42:46 np0005625203.localdomain ceph-mon[286888]: Reconfiguring mds.mds.np0005625202.akhmop (monmap changed)...
Feb 20 09:42:46 np0005625203.localdomain sshd[289854]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:42:46 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon mds.mds.np0005625202.akhmop on np0005625202.localdomain
Feb 20 09:42:46 np0005625203.localdomain ceph-mon[286888]: from='client.? 172.18.0.106:0/198306316' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:42:46 np0005625203.localdomain ceph-mon[286888]: from='client.? 172.18.0.108:0/3587761072' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:42:46 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:46 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:46 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625202.arwxwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:42:46 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "mgr services"} : dispatch
Feb 20 09:42:46 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:42:46 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@3(peon) e8 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 20 09:42:46 np0005625203.localdomain ceph-mon[286888]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3696267818' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:42:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:42:46.800 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:42:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:42:46.949 279640 WARNING nova.virt.libvirt.driver [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 20 09:42:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:42:46.950 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Hypervisor/Node resource view: name=np0005625203.localdomain free_ram=12428MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 20 09:42:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:42:46.950 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:42:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:42:46.950 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:42:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:42:47.013 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 20 09:42:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:42:47.014 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Final resource view: name=np0005625203.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 20 09:42:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:42:47.043 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:42:47 np0005625203.localdomain sshd[289854]: Invalid user dixi from 185.196.11.208 port 59902
Feb 20 09:42:47 np0005625203.localdomain sshd[289854]: Received disconnect from 185.196.11.208 port 59902:11: Bye Bye [preauth]
Feb 20 09:42:47 np0005625203.localdomain sshd[289854]: Disconnected from invalid user dixi 185.196.11.208 port 59902 [preauth]
Feb 20 09:42:47 np0005625203.localdomain sudo[289878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:42:47 np0005625203.localdomain sudo[289878]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:42:47 np0005625203.localdomain sudo[289878]: pam_unix(sudo:session): session closed for user root
Feb 20 09:42:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:42:47.503 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:42:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:42:47.509 279640 DEBUG nova.compute.provider_tree [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Inventory has not changed in ProviderTree for provider: e5d5157a-2df2-4f51-b5fb-cd2da3a8584e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 20 09:42:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:42:47.538 279640 DEBUG nova.scheduler.client.report [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Inventory has not changed for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 20 09:42:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:42:47.540 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Compute_service record updated for np0005625203.localdomain:np0005625203.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 20 09:42:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:42:47.541 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:42:47 np0005625203.localdomain sudo[289898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:42:47 np0005625203.localdomain sudo[289898]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:42:47 np0005625203.localdomain ceph-mon[286888]: pgmap v35: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:42:47 np0005625203.localdomain ceph-mon[286888]: Reconfiguring mgr.np0005625202.arwxwo (monmap changed)...
Feb 20 09:42:47 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon mgr.np0005625202.arwxwo on np0005625202.localdomain
Feb 20 09:42:47 np0005625203.localdomain ceph-mon[286888]: from='client.? 172.18.0.106:0/2963859637' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:42:47 np0005625203.localdomain ceph-mon[286888]: from='client.? 172.18.0.107:0/3696267818' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:42:47 np0005625203.localdomain ceph-mon[286888]: from='client.? 172.18.0.108:0/724040001' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:42:47 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:47 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:47 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625203", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:42:47 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:42:47 np0005625203.localdomain ceph-mon[286888]: from='client.? 172.18.0.107:0/4259480945' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:42:47 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@3(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:42:47 np0005625203.localdomain podman[289934]: 
Feb 20 09:42:48 np0005625203.localdomain podman[289934]: 2026-02-20 09:42:48.000934441 +0000 UTC m=+0.075109602 container create ee95957b49bbf8957eee872bd2210c5c84d041ffea031bdd12d233d849180d6c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_cerf, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, vcs-type=git, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., distribution-scope=public, build-date=2026-02-09T10:25:24Z, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.42.2, version=7, description=Red Hat Ceph Storage 7, RELEASE=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, GIT_BRANCH=main, release=1770267347, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph)
Feb 20 09:42:48 np0005625203.localdomain podman[289934]: 2026-02-20 09:42:47.970150848 +0000 UTC m=+0.044326029 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 09:42:48 np0005625203.localdomain systemd[1]: Started libpod-conmon-ee95957b49bbf8957eee872bd2210c5c84d041ffea031bdd12d233d849180d6c.scope.
Feb 20 09:42:48 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:42:48 np0005625203.localdomain podman[289934]: 2026-02-20 09:42:48.11913254 +0000 UTC m=+0.193307671 container init ee95957b49bbf8957eee872bd2210c5c84d041ffea031bdd12d233d849180d6c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_cerf, RELEASE=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, vcs-type=git, build-date=2026-02-09T10:25:24Z, release=1770267347, CEPH_POINT_RELEASE=, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., io.buildah.version=1.42.2, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_CLEAN=True, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers)
Feb 20 09:42:48 np0005625203.localdomain systemd[1]: tmp-crun.0f9Gg5.mount: Deactivated successfully.
Feb 20 09:42:48 np0005625203.localdomain podman[289934]: 2026-02-20 09:42:48.135525173 +0000 UTC m=+0.209700324 container start ee95957b49bbf8957eee872bd2210c5c84d041ffea031bdd12d233d849180d6c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_cerf, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, CEPH_POINT_RELEASE=, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, GIT_BRANCH=main, architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, RELEASE=main, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, io.openshift.expose-services=, vendor=Red Hat, Inc.)
Feb 20 09:42:48 np0005625203.localdomain podman[289934]: 2026-02-20 09:42:48.135768641 +0000 UTC m=+0.209943802 container attach ee95957b49bbf8957eee872bd2210c5c84d041ffea031bdd12d233d849180d6c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_cerf, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, name=rhceph, vendor=Red Hat, Inc., release=1770267347, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, com.redhat.component=rhceph-container, GIT_CLEAN=True, version=7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, CEPH_POINT_RELEASE=, distribution-scope=public, build-date=2026-02-09T10:25:24Z, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph)
Feb 20 09:42:48 np0005625203.localdomain serene_cerf[289949]: 167 167
Feb 20 09:42:48 np0005625203.localdomain systemd[1]: libpod-ee95957b49bbf8957eee872bd2210c5c84d041ffea031bdd12d233d849180d6c.scope: Deactivated successfully.
Feb 20 09:42:48 np0005625203.localdomain podman[289934]: 2026-02-20 09:42:48.140185879 +0000 UTC m=+0.214361090 container died ee95957b49bbf8957eee872bd2210c5c84d041ffea031bdd12d233d849180d6c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_cerf, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, ceph=True, vcs-type=git, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, GIT_CLEAN=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, name=rhceph, build-date=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.42.2, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.created=2026-02-09T10:25:24Z, version=7, distribution-scope=public, release=1770267347, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc.)
Feb 20 09:42:48 np0005625203.localdomain podman[289954]: 2026-02-20 09:42:48.235939456 +0000 UTC m=+0.082801753 container remove ee95957b49bbf8957eee872bd2210c5c84d041ffea031bdd12d233d849180d6c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=serene_cerf, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-09T10:25:24Z, name=rhceph, distribution-scope=public, GIT_CLEAN=True, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main)
Feb 20 09:42:48 np0005625203.localdomain systemd[1]: libpod-conmon-ee95957b49bbf8957eee872bd2210c5c84d041ffea031bdd12d233d849180d6c.scope: Deactivated successfully.
Feb 20 09:42:48 np0005625203.localdomain sudo[289898]: pam_unix(sudo:session): session closed for user root
Feb 20 09:42:48 np0005625203.localdomain sudo[289970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:42:48 np0005625203.localdomain sudo[289970]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:42:48 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:42:48 np0005625203.localdomain sudo[289970]: pam_unix(sudo:session): session closed for user root
Feb 20 09:42:48 np0005625203.localdomain sudo[289989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:42:48 np0005625203.localdomain sudo[289989]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:42:48 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:42:48.541 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:42:48 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:42:48.541 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:42:48 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:42:48.542 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:42:48 np0005625203.localdomain podman[289988]: 2026-02-20 09:42:48.541726557 +0000 UTC m=+0.086146188 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:42:48 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:42:48.542 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:42:48 np0005625203.localdomain podman[289988]: 2026-02-20 09:42:48.608364022 +0000 UTC m=+0.152783663 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:42:48 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:42:48 np0005625203.localdomain ceph-mon[286888]: Reconfiguring crash.np0005625203 (monmap changed)...
Feb 20 09:42:48 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon crash.np0005625203 on np0005625203.localdomain
Feb 20 09:42:48 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:48 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:48 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Feb 20 09:42:48 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:42:48 np0005625203.localdomain podman[290045]: 2026-02-20 09:42:48.961316108 +0000 UTC m=+0.076927958 container create 2142f2ea86b71a203609770aa082256c3f102bac630c79f54b7a2c6bf772af11 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_leakey, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, io.openshift.expose-services=, version=7, io.buildah.version=1.42.2, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, release=1770267347, vendor=Red Hat, Inc., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, GIT_BRANCH=main, GIT_CLEAN=True, build-date=2026-02-09T10:25:24Z, distribution-scope=public, org.opencontainers.image.created=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, vcs-type=git, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Feb 20 09:42:48 np0005625203.localdomain systemd[1]: Started libpod-conmon-2142f2ea86b71a203609770aa082256c3f102bac630c79f54b7a2c6bf772af11.scope.
Feb 20 09:42:49 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-cb99c849c9fb9a32435a37d38c4b1f59e8740d1e6b04102aeb0d6598c61ae6ea-merged.mount: Deactivated successfully.
Feb 20 09:42:49 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:42:49 np0005625203.localdomain podman[290045]: 2026-02-20 09:42:49.027898673 +0000 UTC m=+0.143510523 container init 2142f2ea86b71a203609770aa082256c3f102bac630c79f54b7a2c6bf772af11 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_leakey, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, architecture=x86_64, build-date=2026-02-09T10:25:24Z, vcs-type=git, io.buildah.version=1.42.2, org.opencontainers.image.created=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, GIT_BRANCH=main, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, release=1770267347, description=Red Hat Ceph Storage 7, distribution-scope=public, CEPH_POINT_RELEASE=, io.openshift.expose-services=)
Feb 20 09:42:49 np0005625203.localdomain podman[290045]: 2026-02-20 09:42:48.929914006 +0000 UTC m=+0.045525886 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 09:42:49 np0005625203.localdomain podman[290045]: 2026-02-20 09:42:49.037857755 +0000 UTC m=+0.153469615 container start 2142f2ea86b71a203609770aa082256c3f102bac630c79f54b7a2c6bf772af11 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_leakey, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, io.buildah.version=1.42.2, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., build-date=2026-02-09T10:25:24Z, version=7)
Feb 20 09:42:49 np0005625203.localdomain podman[290045]: 2026-02-20 09:42:49.038113493 +0000 UTC m=+0.153725373 container attach 2142f2ea86b71a203609770aa082256c3f102bac630c79f54b7a2c6bf772af11 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_leakey, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, distribution-scope=public, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.buildah.version=1.42.2, io.openshift.expose-services=, com.redhat.component=rhceph-container, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, build-date=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, version=7, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, org.opencontainers.image.created=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347)
Feb 20 09:42:49 np0005625203.localdomain awesome_leakey[290060]: 167 167
Feb 20 09:42:49 np0005625203.localdomain systemd[1]: libpod-2142f2ea86b71a203609770aa082256c3f102bac630c79f54b7a2c6bf772af11.scope: Deactivated successfully.
Feb 20 09:42:49 np0005625203.localdomain podman[290045]: 2026-02-20 09:42:49.04121672 +0000 UTC m=+0.156828600 container died 2142f2ea86b71a203609770aa082256c3f102bac630c79f54b7a2c6bf772af11 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_leakey, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-09T10:25:24Z, architecture=x86_64, version=7, name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, io.buildah.version=1.42.2, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, release=1770267347, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_CLEAN=True)
Feb 20 09:42:49 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-d3494e4d6a0dcd3fc609b5d92e1df832d15db7d5fc5bdb99b9ca843e23bbe328-merged.mount: Deactivated successfully.
Feb 20 09:42:49 np0005625203.localdomain podman[290065]: 2026-02-20 09:42:49.145778292 +0000 UTC m=+0.093209358 container remove 2142f2ea86b71a203609770aa082256c3f102bac630c79f54b7a2c6bf772af11 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=awesome_leakey, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, version=7, com.redhat.component=rhceph-container, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-09T10:25:24Z, GIT_CLEAN=True, ceph=True, description=Red Hat Ceph Storage 7, name=rhceph, vendor=Red Hat, Inc., io.buildah.version=1.42.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64)
Feb 20 09:42:49 np0005625203.localdomain systemd[1]: libpod-conmon-2142f2ea86b71a203609770aa082256c3f102bac630c79f54b7a2c6bf772af11.scope: Deactivated successfully.
Feb 20 09:42:49 np0005625203.localdomain sudo[289989]: pam_unix(sudo:session): session closed for user root
Feb 20 09:42:49 np0005625203.localdomain sudo[290088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:42:49 np0005625203.localdomain sudo[290088]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:42:49 np0005625203.localdomain sudo[290088]: pam_unix(sudo:session): session closed for user root
Feb 20 09:42:49 np0005625203.localdomain sudo[290106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:42:49 np0005625203.localdomain sudo[290106]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:42:49 np0005625203.localdomain ceph-mon[286888]: pgmap v36: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:42:49 np0005625203.localdomain ceph-mon[286888]: Reconfiguring osd.1 (monmap changed)...
Feb 20 09:42:49 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon osd.1 on np0005625203.localdomain
Feb 20 09:42:49 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:49 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:49 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Feb 20 09:42:49 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:42:49 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:49 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 20 09:42:49 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:42:49 np0005625203.localdomain podman[290141]: 2026-02-20 09:42:49.945383499 +0000 UTC m=+0.073480842 container create ab545147072a67ac34db6a4deaaaf854db9a96a8e2a6ea998f2e7eb0cbc701df (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_ptolemy, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, io.buildah.version=1.42.2, org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, ceph=True, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, vcs-type=git, release=1770267347, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7)
Feb 20 09:42:49 np0005625203.localdomain systemd[1]: Started libpod-conmon-ab545147072a67ac34db6a4deaaaf854db9a96a8e2a6ea998f2e7eb0cbc701df.scope.
Feb 20 09:42:49 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:42:50 np0005625203.localdomain podman[290141]: 2026-02-20 09:42:50.00582444 +0000 UTC m=+0.133921783 container init ab545147072a67ac34db6a4deaaaf854db9a96a8e2a6ea998f2e7eb0cbc701df (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_ptolemy, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2026-02-09T10:25:24Z, version=7, description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, com.redhat.component=rhceph-container, RELEASE=main, vcs-type=git, ceph=True, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1770267347, name=rhceph, io.buildah.version=1.42.2, io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, GIT_CLEAN=True)
Feb 20 09:42:50 np0005625203.localdomain podman[290141]: 2026-02-20 09:42:50.015844454 +0000 UTC m=+0.143941797 container start ab545147072a67ac34db6a4deaaaf854db9a96a8e2a6ea998f2e7eb0cbc701df (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_ptolemy, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, io.openshift.expose-services=, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1770267347, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, build-date=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, architecture=x86_64, RELEASE=main, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.42.2, io.openshift.tags=rhceph ceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git)
Feb 20 09:42:50 np0005625203.localdomain podman[290141]: 2026-02-20 09:42:50.016141623 +0000 UTC m=+0.144238976 container attach ab545147072a67ac34db6a4deaaaf854db9a96a8e2a6ea998f2e7eb0cbc701df (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_ptolemy, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.created=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, description=Red Hat Ceph Storage 7, ceph=True, io.openshift.expose-services=)
Feb 20 09:42:50 np0005625203.localdomain podman[290141]: 2026-02-20 09:42:49.91635468 +0000 UTC m=+0.044452063 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 09:42:50 np0005625203.localdomain reverent_ptolemy[290156]: 167 167
Feb 20 09:42:50 np0005625203.localdomain systemd[1]: libpod-ab545147072a67ac34db6a4deaaaf854db9a96a8e2a6ea998f2e7eb0cbc701df.scope: Deactivated successfully.
Feb 20 09:42:50 np0005625203.localdomain podman[290141]: 2026-02-20 09:42:50.019257711 +0000 UTC m=+0.147355114 container died ab545147072a67ac34db6a4deaaaf854db9a96a8e2a6ea998f2e7eb0cbc701df (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_ptolemy, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.42.2, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, version=7, io.openshift.tags=rhceph ceph, vcs-type=git, name=rhceph, distribution-scope=public, architecture=x86_64, build-date=2026-02-09T10:25:24Z)
Feb 20 09:42:50 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-133bdb62c9357d36761a27cce68ea249533c674fb58b8074f056798bfd351f93-merged.mount: Deactivated successfully.
Feb 20 09:42:50 np0005625203.localdomain podman[290162]: 2026-02-20 09:42:50.11285427 +0000 UTC m=+0.085094884 container remove ab545147072a67ac34db6a4deaaaf854db9a96a8e2a6ea998f2e7eb0cbc701df (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_ptolemy, build-date=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, version=7, io.buildah.version=1.42.2, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., RELEASE=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.openshift.expose-services=, distribution-scope=public, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7)
Feb 20 09:42:50 np0005625203.localdomain systemd[1]: libpod-conmon-ab545147072a67ac34db6a4deaaaf854db9a96a8e2a6ea998f2e7eb0cbc701df.scope: Deactivated successfully.
Feb 20 09:42:50 np0005625203.localdomain sudo[290106]: pam_unix(sudo:session): session closed for user root
Feb 20 09:42:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:42:50.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:42:50 np0005625203.localdomain sudo[290185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:42:50 np0005625203.localdomain sudo[290185]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:42:50 np0005625203.localdomain sudo[290185]: pam_unix(sudo:session): session closed for user root
Feb 20 09:42:50 np0005625203.localdomain sudo[290203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:42:50 np0005625203.localdomain sudo[290203]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:42:50 np0005625203.localdomain ceph-mon[286888]: Reconfiguring osd.4 (monmap changed)...
Feb 20 09:42:50 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon osd.4 on np0005625203.localdomain
Feb 20 09:42:50 np0005625203.localdomain ceph-mon[286888]: from='client.27288 -' entity='client.admin' cmd=[{"prefix": "orch daemon add", "daemon_type": "mon", "placement": "np0005625202.localdomain:172.18.0.103", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 09:42:50 np0005625203.localdomain ceph-mon[286888]: Deploying daemon mon.np0005625202 on np0005625202.localdomain
Feb 20 09:42:50 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:50 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:50 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625203.zsrwgk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 09:42:50 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:42:50 np0005625203.localdomain podman[290238]: 
Feb 20 09:42:50 np0005625203.localdomain podman[290238]: 2026-02-20 09:42:50.918042361 +0000 UTC m=+0.075788993 container create 7d7a282f40ee243c1e2a835a5488d5df474a334416c3fde13fd10cebf366c48a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_curran, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2026-02-09T10:25:24Z, ceph=True, io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, distribution-scope=public, name=rhceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, version=7, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, vcs-type=git, GIT_BRANCH=main)
Feb 20 09:42:50 np0005625203.localdomain systemd[1]: Started libpod-conmon-7d7a282f40ee243c1e2a835a5488d5df474a334416c3fde13fd10cebf366c48a.scope.
Feb 20 09:42:50 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:42:50 np0005625203.localdomain podman[290238]: 2026-02-20 09:42:50.980927439 +0000 UTC m=+0.138674051 container init 7d7a282f40ee243c1e2a835a5488d5df474a334416c3fde13fd10cebf366c48a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_curran, org.opencontainers.image.created=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, GIT_CLEAN=True, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, io.buildah.version=1.42.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, build-date=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1770267347)
Feb 20 09:42:50 np0005625203.localdomain podman[290238]: 2026-02-20 09:42:50.885933126 +0000 UTC m=+0.043679788 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 09:42:50 np0005625203.localdomain podman[290238]: 2026-02-20 09:42:50.989842759 +0000 UTC m=+0.147589351 container start 7d7a282f40ee243c1e2a835a5488d5df474a334416c3fde13fd10cebf366c48a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_curran, io.buildah.version=1.42.2, version=7, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, GIT_CLEAN=True, vcs-type=git, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-09T10:25:24Z, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1770267347, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, build-date=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main)
Feb 20 09:42:50 np0005625203.localdomain podman[290238]: 2026-02-20 09:42:50.99021635 +0000 UTC m=+0.147962942 container attach 7d7a282f40ee243c1e2a835a5488d5df474a334416c3fde13fd10cebf366c48a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_curran, distribution-scope=public, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, vcs-type=git, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-09T10:25:24Z, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1770267347, RELEASE=main, name=rhceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2026-02-09T10:25:24Z)
Feb 20 09:42:50 np0005625203.localdomain sweet_curran[290253]: 167 167
Feb 20 09:42:50 np0005625203.localdomain systemd[1]: libpod-7d7a282f40ee243c1e2a835a5488d5df474a334416c3fde13fd10cebf366c48a.scope: Deactivated successfully.
Feb 20 09:42:50 np0005625203.localdomain podman[290238]: 2026-02-20 09:42:50.993521944 +0000 UTC m=+0.151268586 container died 7d7a282f40ee243c1e2a835a5488d5df474a334416c3fde13fd10cebf366c48a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_curran, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.42.2, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, version=7, architecture=x86_64, release=1770267347, name=rhceph, io.openshift.expose-services=, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., RELEASE=main, ceph=True, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 09:42:51 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-82d7436cd5833ab9d72c080aefcf27ddac7927289618d07e5712556ef734e656-merged.mount: Deactivated successfully.
Feb 20 09:42:51 np0005625203.localdomain podman[290258]: 2026-02-20 09:42:51.093964387 +0000 UTC m=+0.086382044 container remove 7d7a282f40ee243c1e2a835a5488d5df474a334416c3fde13fd10cebf366c48a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_curran, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.42.2, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2026-02-09T10:25:24Z, distribution-scope=public, com.redhat.component=rhceph-container, release=1770267347, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, io.openshift.tags=rhceph ceph, architecture=x86_64, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, io.openshift.expose-services=, vcs-type=git)
Feb 20 09:42:51 np0005625203.localdomain systemd[1]: libpod-conmon-7d7a282f40ee243c1e2a835a5488d5df474a334416c3fde13fd10cebf366c48a.scope: Deactivated successfully.
Feb 20 09:42:51 np0005625203.localdomain sudo[290203]: pam_unix(sudo:session): session closed for user root
Feb 20 09:42:51 np0005625203.localdomain sudo[290276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:42:51 np0005625203.localdomain sudo[290276]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:42:51 np0005625203.localdomain sudo[290276]: pam_unix(sudo:session): session closed for user root
Feb 20 09:42:51 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:42:51.343 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:42:51 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:42:51.344 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 20 09:42:51 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:42:51.344 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 20 09:42:51 np0005625203.localdomain sudo[290294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:42:51 np0005625203.localdomain sudo[290294]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:42:51 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:42:51.358 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 20 09:42:51 np0005625203.localdomain podman[290329]: 
Feb 20 09:42:51 np0005625203.localdomain podman[290329]: 2026-02-20 09:42:51.795999559 +0000 UTC m=+0.072588233 container create 086fe001762cc4758305d0db60326533cc736207fe2e4cb6d33329692d467754 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=frosty_shaw, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, version=7, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=, architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, vcs-type=git, io.buildah.version=1.42.2, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Feb 20 09:42:51 np0005625203.localdomain ceph-mon[286888]: pgmap v37: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:42:51 np0005625203.localdomain ceph-mon[286888]: Reconfiguring mds.mds.np0005625203.zsrwgk (monmap changed)...
Feb 20 09:42:51 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon mds.mds.np0005625203.zsrwgk on np0005625203.localdomain
Feb 20 09:42:51 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:51 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:51 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625203.lonygy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:42:51 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "mgr services"} : dispatch
Feb 20 09:42:51 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:42:51 np0005625203.localdomain systemd[1]: Started libpod-conmon-086fe001762cc4758305d0db60326533cc736207fe2e4cb6d33329692d467754.scope.
Feb 20 09:42:51 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:42:51 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:42:51 np0005625203.localdomain podman[290329]: 2026-02-20 09:42:51.767091335 +0000 UTC m=+0.043680049 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 09:42:51 np0005625203.localdomain podman[290329]: 2026-02-20 09:42:51.877142699 +0000 UTC m=+0.153731363 container init 086fe001762cc4758305d0db60326533cc736207fe2e4cb6d33329692d467754 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=frosty_shaw, version=7, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, distribution-scope=public, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, name=rhceph, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, org.opencontainers.image.created=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, RELEASE=main, CEPH_POINT_RELEASE=, build-date=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, release=1770267347, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Feb 20 09:42:51 np0005625203.localdomain podman[290329]: 2026-02-20 09:42:51.890447906 +0000 UTC m=+0.167036570 container start 086fe001762cc4758305d0db60326533cc736207fe2e4cb6d33329692d467754 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=frosty_shaw, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-09T10:25:24Z, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, GIT_BRANCH=main, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, release=1770267347, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.42.2, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, distribution-scope=public, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, ceph=True)
Feb 20 09:42:51 np0005625203.localdomain podman[290329]: 2026-02-20 09:42:51.891941833 +0000 UTC m=+0.168530517 container attach 086fe001762cc4758305d0db60326533cc736207fe2e4cb6d33329692d467754 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=frosty_shaw, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, distribution-scope=public, release=1770267347, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, RELEASE=main, version=7, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.42.2, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, com.redhat.component=rhceph-container, ceph=True, GIT_CLEAN=True, io.openshift.expose-services=, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Feb 20 09:42:51 np0005625203.localdomain frosty_shaw[290346]: 167 167
Feb 20 09:42:51 np0005625203.localdomain systemd[1]: libpod-086fe001762cc4758305d0db60326533cc736207fe2e4cb6d33329692d467754.scope: Deactivated successfully.
Feb 20 09:42:51 np0005625203.localdomain podman[290329]: 2026-02-20 09:42:51.894271625 +0000 UTC m=+0.170860349 container died 086fe001762cc4758305d0db60326533cc736207fe2e4cb6d33329692d467754 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=frosty_shaw, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, io.openshift.expose-services=, architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, ceph=True, vendor=Red Hat, Inc., name=rhceph, vcs-type=git, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2)
Feb 20 09:42:51 np0005625203.localdomain podman[290345]: 2026-02-20 09:42:51.946238112 +0000 UTC m=+0.085854239 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, vcs-type=git, config_id=openstack_network_exporter, architecture=x86_64, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, version=9.7, release=1770267347, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=ubi9/ubi-minimal, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9)
Feb 20 09:42:51 np0005625203.localdomain podman[290345]: 2026-02-20 09:42:51.967330131 +0000 UTC m=+0.106946218 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, container_name=openstack_network_exporter, build-date=2026-02-05T04:57:10Z, org.opencontainers.image.created=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, io.buildah.version=1.33.7, version=9.7, io.openshift.expose-services=, name=ubi9/ubi-minimal, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, release=1770267347, maintainer=Red Hat, Inc.)
Feb 20 09:42:51 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:42:52 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-e4c6672a45a4ed78d90a34075917c650ca5bfe1be96600216083f7ad69f31695-merged.mount: Deactivated successfully.
Feb 20 09:42:52 np0005625203.localdomain podman[290362]: 2026-02-20 09:42:52.059145405 +0000 UTC m=+0.150216321 container remove 086fe001762cc4758305d0db60326533cc736207fe2e4cb6d33329692d467754 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=frosty_shaw, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, com.redhat.component=rhceph-container, io.openshift.expose-services=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, build-date=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, distribution-scope=public, io.openshift.tags=rhceph ceph)
Feb 20 09:42:52 np0005625203.localdomain systemd[1]: libpod-conmon-086fe001762cc4758305d0db60326533cc736207fe2e4cb6d33329692d467754.scope: Deactivated successfully.
Feb 20 09:42:52 np0005625203.localdomain sudo[290294]: pam_unix(sudo:session): session closed for user root
Feb 20 09:42:52 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@3(peon) e8  adding peer [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] to list of hints
Feb 20 09:42:52 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@3(peon) e8  adding peer [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] to list of hints
Feb 20 09:42:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:42:52.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:42:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:42:52.343 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 20 09:42:52 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@3(peon) e8  adding peer [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] to list of hints
Feb 20 09:42:52 np0005625203.localdomain ceph-mgr[285471]: ms_deliver_dispatch: unhandled message 0x557a2fb01600 mon_map magic: 0 from mon.0 v2:172.18.0.105:3300/0
Feb 20 09:42:52 np0005625203.localdomain ceph-mon[286888]: log_channel(cluster) log [INF] : mon.np0005625203 calling monitor election
Feb 20 09:42:52 np0005625203.localdomain ceph-mon[286888]: paxos.3).electionLogic(32) init, last seen epoch 32
Feb 20 09:42:52 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@3(electing) e9 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 20 09:42:52 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@3(electing) e9 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 20 09:42:53 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:42:53 np0005625203.localdomain systemd[1]: tmp-crun.7Is2jJ.mount: Deactivated successfully.
Feb 20 09:42:53 np0005625203.localdomain podman[290386]: 2026-02-20 09:42:53.776113194 +0000 UTC m=+0.090682599 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_id=ceilometer_agent_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Feb 20 09:42:53 np0005625203.localdomain podman[290386]: 2026-02-20 09:42:53.788079498 +0000 UTC m=+0.102648913 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Feb 20 09:42:53 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: paxos.3).electionLogic(33) init, last seen epoch 33, mid-election, bumping
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@3(electing) e9 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@3(electing) e9 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@3(electing) e9 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: pgmap v38: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: Reconfiguring crash.np0005625204 (monmap changed)...
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon crash.np0005625204 on np0005625204.localdomain
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "mon metadata", "id": "np0005625200"} : dispatch
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "mon metadata", "id": "np0005625201"} : dispatch
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "mon metadata", "id": "np0005625202"} : dispatch
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: mon.np0005625201 calling monitor election
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: mon.np0005625200 calling monitor election
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: mon.np0005625204 calling monitor election
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "mon metadata", "id": "np0005625204"} : dispatch
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "mon metadata", "id": "np0005625202"} : dispatch
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: pgmap v39: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "mon metadata", "id": "np0005625202"} : dispatch
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "mon metadata", "id": "np0005625202"} : dispatch
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: pgmap v40: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "mon metadata", "id": "np0005625202"} : dispatch
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "mon metadata", "id": "np0005625202"} : dispatch
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: mon.np0005625201 is new leader, mons np0005625201,np0005625200,np0005625204 in quorum (ranks 0,1,2)
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: monmap epoch 9
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: last_changed 2026-02-20T09:42:52.462377+0000
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: created 2026-02-20T07:36:51.191305+0000
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: min_mon_release 18 (reef)
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: election_strategy: 1
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: 0: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005625201
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: 1: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005625200
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: 2: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005625204
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: 3: [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] mon.np0005625203
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: 4: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005625202
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: fsmap cephfs:1 {0=mds.np0005625203.zsrwgk=up:active} 2 up:standby
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: osdmap e85: 6 total, 6 up, 6 in
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: mgrmap e19: np0005625201.mtnyvu(active, since 71s), standbys: np0005625199.ileebh, np0005625200.ypbkax, np0005625202.arwxwo, np0005625203.lonygy, np0005625204.exgrzx
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: Health check failed: 2/5 mons down, quorum np0005625201,np0005625200,np0005625204 (MON_DOWN)
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: Health detail: HEALTH_WARN 2/5 mons down, quorum np0005625201,np0005625200,np0005625204
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: [WRN] MON_DOWN: 2/5 mons down, quorum np0005625201,np0005625200,np0005625204
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]:     mon.np0005625203 (rank 3) addr [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] is down (out of quorum)
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]:     mon.np0005625202 (rank 4) addr [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] is down (out of quorum)
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@3(peon) e9 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203 calling monitor election
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: mon.np0005625202 calling monitor election
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: mon.np0005625201 calling monitor election
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: mon.np0005625204 calling monitor election
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: mon.np0005625200 calling monitor election
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: mon.np0005625201 is new leader, mons np0005625201,np0005625200,np0005625204,np0005625203,np0005625202 in quorum (ranks 0,1,2,3,4)
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: monmap epoch 9
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: last_changed 2026-02-20T09:42:52.462377+0000
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: created 2026-02-20T07:36:51.191305+0000
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: min_mon_release 18 (reef)
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: election_strategy: 1
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: 0: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005625201
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: 1: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005625200
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: 2: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005625204
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: 3: [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] mon.np0005625203
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: 4: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005625202
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: fsmap cephfs:1 {0=mds.np0005625203.zsrwgk=up:active} 2 up:standby
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: osdmap e85: 6 total, 6 up, 6 in
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: mgrmap e19: np0005625201.mtnyvu(active, since 72s), standbys: np0005625199.ileebh, np0005625200.ypbkax, np0005625202.arwxwo, np0005625203.lonygy, np0005625204.exgrzx
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: Health check cleared: MON_DOWN (was: 2/5 mons down, quorum np0005625201,np0005625200,np0005625204)
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: Cluster is now healthy
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: overall HEALTH_OK
Feb 20 09:42:58 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:59 np0005625203.localdomain podman[240359]: time="2026-02-20T09:42:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:42:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:42:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154068 "" "Go-http-client/1.1"
Feb 20 09:42:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:42:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17779 "" "Go-http-client/1.1"
Feb 20 09:42:59 np0005625203.localdomain ceph-mon[286888]: pgmap v41: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:42:59 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:59 np0005625203.localdomain ceph-mon[286888]: Reconfiguring osd.3 (monmap changed)...
Feb 20 09:42:59 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Feb 20 09:42:59 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:42:59 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon osd.3 on np0005625204.localdomain
Feb 20 09:42:59 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "mon metadata", "id": "np0005625202"} : dispatch
Feb 20 09:42:59 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:59 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:42:59 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625204.wnsphl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 09:43:00 np0005625203.localdomain ceph-mon[286888]: Reconfiguring mds.mds.np0005625204.wnsphl (monmap changed)...
Feb 20 09:43:00 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:43:00 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon mds.mds.np0005625204.wnsphl on np0005625204.localdomain
Feb 20 09:43:00 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:43:00 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:43:00 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625204.exgrzx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:43:00 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "mgr services"} : dispatch
Feb 20 09:43:00 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:43:01 np0005625203.localdomain sudo[290405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:43:01 np0005625203.localdomain sudo[290405]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:01 np0005625203.localdomain sudo[290405]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:01 np0005625203.localdomain sudo[290423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 09:43:01 np0005625203.localdomain sudo[290423]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:01 np0005625203.localdomain ceph-mon[286888]: pgmap v42: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:43:01 np0005625203.localdomain ceph-mon[286888]: Reconfiguring mgr.np0005625204.exgrzx (monmap changed)...
Feb 20 09:43:01 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon mgr.np0005625204.exgrzx on np0005625204.localdomain
Feb 20 09:43:01 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:43:01 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:43:02 np0005625203.localdomain sudo[290423]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:02 np0005625203.localdomain sshd[290472]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:43:03 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@3(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:43:03 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:43:03 np0005625203.localdomain ceph-mon[286888]: from='client.? 172.18.0.32:0/2262134840' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:43:03 np0005625203.localdomain ceph-mon[286888]: from='client.? 172.18.0.32:0/2262134840' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:43:03 np0005625203.localdomain podman[290474]: 2026-02-20 09:43:03.211944078 +0000 UTC m=+0.080430767 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 09:43:03 np0005625203.localdomain podman[290474]: 2026-02-20 09:43:03.221425546 +0000 UTC m=+0.089912195 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 20 09:43:03 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 09:43:03 np0005625203.localdomain sshd[290472]: Invalid user admin from 34.131.211.42 port 37870
Feb 20 09:43:04 np0005625203.localdomain sudo[290497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Feb 20 09:43:04 np0005625203.localdomain sudo[290497]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:04 np0005625203.localdomain sudo[290497]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:04 np0005625203.localdomain ceph-mon[286888]: pgmap v43: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:43:04 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:43:04 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:43:04 np0005625203.localdomain ceph-mon[286888]: from='client.? 172.18.0.200:0/2943791233' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Feb 20 09:43:04 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:43:04 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 09:43:04 np0005625203.localdomain sudo[290515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph
Feb 20 09:43:04 np0005625203.localdomain sudo[290515]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:04 np0005625203.localdomain sudo[290515]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:04 np0005625203.localdomain sshd[290472]: Received disconnect from 34.131.211.42 port 37870:11: Bye Bye [preauth]
Feb 20 09:43:04 np0005625203.localdomain sshd[290472]: Disconnected from invalid user admin 34.131.211.42 port 37870 [preauth]
Feb 20 09:43:04 np0005625203.localdomain sudo[290533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:43:04 np0005625203.localdomain sudo[290533]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:04 np0005625203.localdomain sudo[290533]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:04 np0005625203.localdomain sudo[290551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:43:04 np0005625203.localdomain sudo[290551]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:04 np0005625203.localdomain sudo[290551]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:04 np0005625203.localdomain sudo[290569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:43:04 np0005625203.localdomain sudo[290569]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:04 np0005625203.localdomain sudo[290569]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:04 np0005625203.localdomain sudo[290603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:43:04 np0005625203.localdomain sudo[290603]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:04 np0005625203.localdomain sudo[290603]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:04 np0005625203.localdomain sudo[290621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:43:04 np0005625203.localdomain sudo[290621]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:04 np0005625203.localdomain sudo[290621]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:04 np0005625203.localdomain sudo[290639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Feb 20 09:43:04 np0005625203.localdomain sudo[290639]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:04 np0005625203.localdomain sudo[290639]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:04 np0005625203.localdomain sudo[290657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config
Feb 20 09:43:04 np0005625203.localdomain sudo[290657]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:04 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:43:04 np0005625203.localdomain sudo[290657]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:04 np0005625203.localdomain sudo[290681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config
Feb 20 09:43:04 np0005625203.localdomain sudo[290681]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:04 np0005625203.localdomain sudo[290681]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:04 np0005625203.localdomain podman[290675]: 2026-02-20 09:43:04.9299702 +0000 UTC m=+0.096912044 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 20 09:43:04 np0005625203.localdomain podman[290675]: 2026-02-20 09:43:04.939848579 +0000 UTC m=+0.106790423 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 20 09:43:04 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:43:04 np0005625203.localdomain sudo[290711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:43:05 np0005625203.localdomain sudo[290711]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:05 np0005625203.localdomain sudo[290711]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:05 np0005625203.localdomain sudo[290734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:43:05 np0005625203.localdomain sudo[290734]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:05 np0005625203.localdomain sudo[290734]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:05 np0005625203.localdomain sudo[290752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:43:05 np0005625203.localdomain sudo[290752]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:05 np0005625203.localdomain sudo[290752]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:05 np0005625203.localdomain ceph-mon[286888]: Updating np0005625200.localdomain:/etc/ceph/ceph.conf
Feb 20 09:43:05 np0005625203.localdomain ceph-mon[286888]: Updating np0005625201.localdomain:/etc/ceph/ceph.conf
Feb 20 09:43:05 np0005625203.localdomain ceph-mon[286888]: Updating np0005625202.localdomain:/etc/ceph/ceph.conf
Feb 20 09:43:05 np0005625203.localdomain ceph-mon[286888]: Updating np0005625203.localdomain:/etc/ceph/ceph.conf
Feb 20 09:43:05 np0005625203.localdomain ceph-mon[286888]: Updating np0005625204.localdomain:/etc/ceph/ceph.conf
Feb 20 09:43:05 np0005625203.localdomain ceph-mon[286888]: pgmap v44: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:43:05 np0005625203.localdomain ceph-mon[286888]: Updating np0005625201.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:43:05 np0005625203.localdomain ceph-mon[286888]: Updating np0005625200.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:43:05 np0005625203.localdomain ceph-mon[286888]: Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:43:05 np0005625203.localdomain ceph-mon[286888]: Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:43:05 np0005625203.localdomain ceph-mon[286888]: Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:43:05 np0005625203.localdomain sudo[290786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:43:05 np0005625203.localdomain sudo[290786]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:05 np0005625203.localdomain sudo[290786]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:05 np0005625203.localdomain sudo[290804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:43:05 np0005625203.localdomain sudo[290804]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:05 np0005625203.localdomain sudo[290804]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:05 np0005625203.localdomain sudo[290822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:43:05 np0005625203.localdomain sudo[290822]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:05 np0005625203.localdomain sudo[290822]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:05 np0005625203.localdomain sudo[290840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:43:05 np0005625203.localdomain sudo[290840]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:05 np0005625203.localdomain sudo[290840]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:06 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:43:06 np0005625203.localdomain ceph-mon[286888]: from='client.44119 -' entity='client.admin' cmd=[{"prefix": "orch", "action": "reconfig", "service_name": "osd.default_drive_group", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 09:43:06 np0005625203.localdomain ceph-mon[286888]: Reconfig service osd.default_drive_group
Feb 20 09:43:06 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:43:06 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:43:06 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:43:06 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:43:06 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:43:06 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:43:06 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:43:06 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:43:06 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:43:06 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:43:06 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:43:06 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:43:06 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:43:06 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:43:06 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:43:06 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:43:06 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:43:06 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:43:06 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:43:06 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 09:43:06 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:43:06 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:43:06 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:43:06 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625200", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:43:06 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:43:06 np0005625203.localdomain sshd[290858]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:43:06 np0005625203.localdomain sshd[290858]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:43:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:43:07 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 09:43:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:43:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:43:07 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 09:43:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:43:07 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@3(peon).osd e86 e86: 6 total, 6 up, 6 in
Feb 20 09:43:07 np0005625203.localdomain sshd[287628]: pam_unix(sshd:session): session closed for user ceph-admin
Feb 20 09:43:07 np0005625203.localdomain systemd[1]: session-63.scope: Deactivated successfully.
Feb 20 09:43:07 np0005625203.localdomain systemd[1]: session-63.scope: Consumed 18.387s CPU time.
Feb 20 09:43:07 np0005625203.localdomain systemd-logind[759]: Session 63 logged out. Waiting for processes to exit.
Feb 20 09:43:07 np0005625203.localdomain systemd-logind[759]: Removed session 63.
Feb 20 09:43:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:43:07.658 161112 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:43:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:43:07.658 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:43:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:43:07.659 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:43:07 np0005625203.localdomain ceph-mon[286888]: Reconfiguring crash.np0005625200 (monmap changed)...
Feb 20 09:43:07 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon crash.np0005625200 on np0005625200.localdomain
Feb 20 09:43:07 np0005625203.localdomain ceph-mon[286888]: pgmap v45: 177 pgs: 177 active+clean; 104 MiB data, 562 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:43:07 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:43:07 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' 
Feb 20 09:43:07 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625200.ypbkax", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:43:07 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "mgr services"} : dispatch
Feb 20 09:43:07 np0005625203.localdomain ceph-mon[286888]: from='mgr.14196 172.18.0.105:0/2584856206' entity='mgr.np0005625201.mtnyvu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:43:07 np0005625203.localdomain ceph-mon[286888]: from='client.? 172.18.0.200:0/863103056' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Feb 20 09:43:07 np0005625203.localdomain ceph-mon[286888]: from='client.? ' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Feb 20 09:43:07 np0005625203.localdomain ceph-mon[286888]: Activating manager daemon np0005625199.ileebh
Feb 20 09:43:07 np0005625203.localdomain ceph-mon[286888]: osdmap e86: 6 total, 6 up, 6 in
Feb 20 09:43:07 np0005625203.localdomain ceph-mon[286888]: from='client.? ' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished
Feb 20 09:43:07 np0005625203.localdomain ceph-mon[286888]: mgrmap e20: np0005625199.ileebh(active, starting, since 0.0630848s), standbys: np0005625200.ypbkax, np0005625202.arwxwo, np0005625203.lonygy, np0005625204.exgrzx
Feb 20 09:43:08 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@3(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:43:12 np0005625203.localdomain ceph-mon[286888]: Standby manager daemon np0005625201.mtnyvu started
Feb 20 09:43:13 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@3(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:43:13 np0005625203.localdomain ceph-mon[286888]: mgrmap e21: np0005625199.ileebh(active, starting, since 5s), standbys: np0005625200.ypbkax, np0005625202.arwxwo, np0005625203.lonygy, np0005625204.exgrzx, np0005625201.mtnyvu
Feb 20 09:43:13 np0005625203.localdomain sshd[290860]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:43:14 np0005625203.localdomain sshd[290860]: Invalid user pixel from 194.107.115.2 port 27506
Feb 20 09:43:14 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:43:14 np0005625203.localdomain podman[290862]: 2026-02-20 09:43:14.584141659 +0000 UTC m=+0.093284141 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb 20 09:43:14 np0005625203.localdomain podman[290862]: 2026-02-20 09:43:14.615719617 +0000 UTC m=+0.124862049 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3)
Feb 20 09:43:14 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:43:14 np0005625203.localdomain sshd[290860]: Received disconnect from 194.107.115.2 port 27506:11: Bye Bye [preauth]
Feb 20 09:43:14 np0005625203.localdomain sshd[290860]: Disconnected from invalid user pixel 194.107.115.2 port 27506 [preauth]
Feb 20 09:43:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:43:17.186 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:43:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:43:17.187 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:43:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:43:17.187 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:43:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:43:17.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:43:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:43:17.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:43:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:43:17.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:43:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:43:17.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:43:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:43:17.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:43:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:43:17.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:43:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:43:17.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:43:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:43:17.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:43:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:43:17.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:43:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:43:17.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:43:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:43:17.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:43:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:43:17.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:43:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:43:17.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:43:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:43:17.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:43:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:43:17.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:43:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:43:17.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:43:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:43:17.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:43:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:43:17.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:43:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:43:17.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:43:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:43:17.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:43:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:43:17.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:43:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:43:17.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:43:17 np0005625203.localdomain systemd[1]: Stopping User Manager for UID 1002...
Feb 20 09:43:17 np0005625203.localdomain systemd[26644]: Activating special unit Exit the Session...
Feb 20 09:43:17 np0005625203.localdomain systemd[26644]: Removed slice User Background Tasks Slice.
Feb 20 09:43:17 np0005625203.localdomain systemd[26644]: Stopped target Main User Target.
Feb 20 09:43:17 np0005625203.localdomain systemd[26644]: Stopped target Basic System.
Feb 20 09:43:17 np0005625203.localdomain systemd[26644]: Stopped target Paths.
Feb 20 09:43:17 np0005625203.localdomain systemd[26644]: Stopped target Sockets.
Feb 20 09:43:17 np0005625203.localdomain systemd[26644]: Stopped target Timers.
Feb 20 09:43:17 np0005625203.localdomain systemd[26644]: Stopped Mark boot as successful after the user session has run 2 minutes.
Feb 20 09:43:17 np0005625203.localdomain systemd[26644]: Stopped Daily Cleanup of User's Temporary Directories.
Feb 20 09:43:17 np0005625203.localdomain systemd[26644]: Closed D-Bus User Message Bus Socket.
Feb 20 09:43:17 np0005625203.localdomain systemd[26644]: Stopped Create User's Volatile Files and Directories.
Feb 20 09:43:17 np0005625203.localdomain systemd[26644]: Removed slice User Application Slice.
Feb 20 09:43:17 np0005625203.localdomain systemd[26644]: Reached target Shutdown.
Feb 20 09:43:17 np0005625203.localdomain systemd[26644]: Finished Exit the Session.
Feb 20 09:43:17 np0005625203.localdomain systemd[26644]: Reached target Exit the Session.
Feb 20 09:43:17 np0005625203.localdomain systemd[1]: user@1002.service: Deactivated successfully.
Feb 20 09:43:17 np0005625203.localdomain systemd[1]: Stopped User Manager for UID 1002.
Feb 20 09:43:17 np0005625203.localdomain systemd[1]: user@1002.service: Consumed 10.707s CPU time, read 0B from disk, written 7.0K to disk.
Feb 20 09:43:17 np0005625203.localdomain systemd[1]: Stopping User Runtime Directory /run/user/1002...
Feb 20 09:43:17 np0005625203.localdomain systemd[1]: run-user-1002.mount: Deactivated successfully.
Feb 20 09:43:17 np0005625203.localdomain systemd[1]: user-runtime-dir@1002.service: Deactivated successfully.
Feb 20 09:43:17 np0005625203.localdomain systemd[1]: Stopped User Runtime Directory /run/user/1002.
Feb 20 09:43:17 np0005625203.localdomain systemd[1]: Removed slice User Slice of UID 1002.
Feb 20 09:43:17 np0005625203.localdomain systemd[1]: user-1002.slice: Consumed 4min 248ms CPU time.
Feb 20 09:43:18 np0005625203.localdomain sshd[290880]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:43:18 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@3(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:43:18 np0005625203.localdomain sshd[290880]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:43:18 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:43:18 np0005625203.localdomain podman[290882]: 2026-02-20 09:43:18.768260943 +0000 UTC m=+0.083652088 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller)
Feb 20 09:43:18 np0005625203.localdomain podman[290882]: 2026-02-20 09:43:18.810359281 +0000 UTC m=+0.125750466 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3)
Feb 20 09:43:18 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:43:22 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:43:22 np0005625203.localdomain podman[290907]: 2026-02-20 09:43:22.758983997 +0000 UTC m=+0.077169257 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, architecture=x86_64, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, vcs-type=git, maintainer=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, name=ubi9/ubi-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., build-date=2026-02-05T04:57:10Z, release=1770267347, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.7, org.opencontainers.image.created=2026-02-05T04:57:10Z)
Feb 20 09:43:22 np0005625203.localdomain podman[290907]: 2026-02-20 09:43:22.775516834 +0000 UTC m=+0.093702134 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9/ubi-minimal, config_id=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, build-date=2026-02-05T04:57:10Z, version=9.7, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, managed_by=edpm_ansible, distribution-scope=public, io.openshift.expose-services=, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347)
Feb 20 09:43:22 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:43:23 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@3(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:43:24 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:43:24 np0005625203.localdomain podman[290928]: 2026-02-20 09:43:24.769118 +0000 UTC m=+0.079496890 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 20 09:43:24 np0005625203.localdomain podman[290928]: 2026-02-20 09:43:24.780814456 +0000 UTC m=+0.091193356 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 20 09:43:24 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:43:24 np0005625203.localdomain ceph-mon[286888]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0.
Feb 20 09:43:24 np0005625203.localdomain ceph-mon[286888]: rocksdb: (Original Log Time 2026/02/20-09:43:24.965496) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 20 09:43:24 np0005625203.localdomain ceph-mon[286888]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13
Feb 20 09:43:24 np0005625203.localdomain ceph-mon[286888]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580604965593, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 13828, "num_deletes": 770, "total_data_size": 23775030, "memory_usage": 24960048, "flush_reason": "Manual Compaction"}
Feb 20 09:43:24 np0005625203.localdomain ceph-mon[286888]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started
Feb 20 09:43:25 np0005625203.localdomain ceph-mon[286888]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580605026180, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 15075184, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6, "largest_seqno": 13833, "table_properties": {"data_size": 15012767, "index_size": 34791, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26565, "raw_key_size": 275335, "raw_average_key_size": 25, "raw_value_size": 14830100, "raw_average_value_size": 1396, "num_data_blocks": 1344, "num_entries": 10618, "num_filter_entries": 10618, "num_deletions": 765, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580482, "oldest_key_time": 1771580482, "file_creation_time": 1771580604, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a18f0433-5302-412e-a730-8e4f9cc01661", "db_session_id": "IVJC5Q80ONGS9Z85L01D", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}}
Feb 20 09:43:25 np0005625203.localdomain ceph-mon[286888]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 60753 microseconds, and 30677 cpu microseconds.
Feb 20 09:43:25 np0005625203.localdomain ceph-mon[286888]: rocksdb: (Original Log Time 2026/02/20-09:43:25.026247) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 15075184 bytes OK
Feb 20 09:43:25 np0005625203.localdomain ceph-mon[286888]: rocksdb: (Original Log Time 2026/02/20-09:43:25.026280) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started
Feb 20 09:43:25 np0005625203.localdomain ceph-mon[286888]: rocksdb: (Original Log Time 2026/02/20-09:43:25.028039) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done
Feb 20 09:43:25 np0005625203.localdomain ceph-mon[286888]: rocksdb: (Original Log Time 2026/02/20-09:43:25.028065) EVENT_LOG_v1 {"time_micros": 1771580605028057, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Feb 20 09:43:25 np0005625203.localdomain ceph-mon[286888]: rocksdb: (Original Log Time 2026/02/20-09:43:25.028092) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50
Feb 20 09:43:25 np0005625203.localdomain ceph-mon[286888]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 23687616, prev total WAL file size 23687616, number of live WAL files 2.
Feb 20 09:43:25 np0005625203.localdomain ceph-mon[286888]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 09:43:25 np0005625203.localdomain ceph-mon[286888]: rocksdb: (Original Log Time 2026/02/20-09:43:25.031954) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130303430' seq:72057594037927935, type:22 .. '7061786F73003130323932' seq:0, type:0; will stop at (end)
Feb 20 09:43:25 np0005625203.localdomain ceph-mon[286888]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00
Feb 20 09:43:25 np0005625203.localdomain ceph-mon[286888]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(14MB) 8(1887B)]
Feb 20 09:43:25 np0005625203.localdomain ceph-mon[286888]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580605032043, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 15077071, "oldest_snapshot_seqno": -1}
Feb 20 09:43:25 np0005625203.localdomain ceph-mon[286888]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 9856 keys, 15063365 bytes, temperature: kUnknown
Feb 20 09:43:25 np0005625203.localdomain ceph-mon[286888]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580605105120, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 15063365, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15002946, "index_size": 34718, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24645, "raw_key_size": 262283, "raw_average_key_size": 26, "raw_value_size": 14830158, "raw_average_value_size": 1504, "num_data_blocks": 1342, "num_entries": 9856, "num_filter_entries": 9856, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580482, "oldest_key_time": 0, "file_creation_time": 1771580605, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a18f0433-5302-412e-a730-8e4f9cc01661", "db_session_id": "IVJC5Q80ONGS9Z85L01D", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}}
Feb 20 09:43:25 np0005625203.localdomain ceph-mon[286888]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 20 09:43:25 np0005625203.localdomain ceph-mon[286888]: rocksdb: (Original Log Time 2026/02/20-09:43:25.105545) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 15063365 bytes
Feb 20 09:43:25 np0005625203.localdomain ceph-mon[286888]: rocksdb: (Original Log Time 2026/02/20-09:43:25.107399) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 205.9 rd, 205.8 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(14.4, 0.0 +0.0 blob) out(14.4 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 10623, records dropped: 767 output_compression: NoCompression
Feb 20 09:43:25 np0005625203.localdomain ceph-mon[286888]: rocksdb: (Original Log Time 2026/02/20-09:43:25.107432) EVENT_LOG_v1 {"time_micros": 1771580605107417, "job": 4, "event": "compaction_finished", "compaction_time_micros": 73208, "compaction_time_cpu_micros": 38988, "output_level": 6, "num_output_files": 1, "total_output_size": 15063365, "num_input_records": 10623, "num_output_records": 9856, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 20 09:43:25 np0005625203.localdomain ceph-mon[286888]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 09:43:25 np0005625203.localdomain ceph-mon[286888]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580605109514, "job": 4, "event": "table_file_deletion", "file_number": 14}
Feb 20 09:43:25 np0005625203.localdomain ceph-mon[286888]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 09:43:25 np0005625203.localdomain ceph-mon[286888]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580605109569, "job": 4, "event": "table_file_deletion", "file_number": 8}
Feb 20 09:43:25 np0005625203.localdomain ceph-mon[286888]: rocksdb: (Original Log Time 2026/02/20-09:43:25.031828) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:43:28 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@3(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:43:28 np0005625203.localdomain podman[240359]: time="2026-02-20T09:43:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:43:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:43:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154068 "" "Go-http-client/1.1"
Feb 20 09:43:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:43:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17783 "" "Go-http-client/1.1"
Feb 20 09:43:33 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@3(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:43:33 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:43:33 np0005625203.localdomain podman[290946]: 2026-02-20 09:43:33.774542856 +0000 UTC m=+0.089324747 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 20 09:43:33 np0005625203.localdomain podman[290946]: 2026-02-20 09:43:33.814559138 +0000 UTC m=+0.129340999 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 09:43:33 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 09:43:35 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:43:35 np0005625203.localdomain podman[290970]: 2026-02-20 09:43:35.77572825 +0000 UTC m=+0.087429658 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 20 09:43:35 np0005625203.localdomain podman[290970]: 2026-02-20 09:43:35.787258931 +0000 UTC m=+0.098960329 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 20 09:43:35 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:43:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:43:37 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 09:43:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:43:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:43:37 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 09:43:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:43:37 np0005625203.localdomain sshd[287228]: fatal: Timeout before authentication for 101.126.88.203 port 37502
Feb 20 09:43:38 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@3(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:43:38 np0005625203.localdomain sshd[290993]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:43:39 np0005625203.localdomain sshd[290993]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:43:39 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@3(peon).osd e87 e87: 6 total, 6 up, 6 in
Feb 20 09:43:40 np0005625203.localdomain ceph-mon[286888]: Activating manager daemon np0005625200.ypbkax
Feb 20 09:43:40 np0005625203.localdomain ceph-mon[286888]: Manager daemon np0005625199.ileebh is unresponsive, replacing it with standby daemon np0005625200.ypbkax
Feb 20 09:43:40 np0005625203.localdomain ceph-mon[286888]: osdmap e87: 6 total, 6 up, 6 in
Feb 20 09:43:40 np0005625203.localdomain ceph-mon[286888]: mgrmap e22: np0005625200.ypbkax(active, starting, since 0.0464926s), standbys: np0005625202.arwxwo, np0005625203.lonygy, np0005625204.exgrzx, np0005625201.mtnyvu
Feb 20 09:43:40 np0005625203.localdomain sshd[290995]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:43:40 np0005625203.localdomain sshd[290995]: Accepted publickey for ceph-admin from 192.168.122.104 port 43600 ssh2: RSA SHA256:ReOVVWzTOjcz49zJan4sHrxgFDyL1hzGQqhA1t8e47Y
Feb 20 09:43:40 np0005625203.localdomain systemd[1]: Created slice User Slice of UID 1002.
Feb 20 09:43:40 np0005625203.localdomain systemd[1]: Starting User Runtime Directory /run/user/1002...
Feb 20 09:43:40 np0005625203.localdomain systemd-logind[759]: New session 64 of user ceph-admin.
Feb 20 09:43:40 np0005625203.localdomain systemd[1]: Finished User Runtime Directory /run/user/1002.
Feb 20 09:43:40 np0005625203.localdomain systemd[1]: Starting User Manager for UID 1002...
Feb 20 09:43:40 np0005625203.localdomain systemd[290999]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Feb 20 09:43:40 np0005625203.localdomain systemd[290999]: Queued start job for default target Main User Target.
Feb 20 09:43:40 np0005625203.localdomain systemd[290999]: Created slice User Application Slice.
Feb 20 09:43:40 np0005625203.localdomain systemd[290999]: Started Mark boot as successful after the user session has run 2 minutes.
Feb 20 09:43:40 np0005625203.localdomain systemd[290999]: Started Daily Cleanup of User's Temporary Directories.
Feb 20 09:43:40 np0005625203.localdomain systemd[290999]: Reached target Paths.
Feb 20 09:43:40 np0005625203.localdomain systemd[290999]: Reached target Timers.
Feb 20 09:43:40 np0005625203.localdomain systemd[290999]: Starting D-Bus User Message Bus Socket...
Feb 20 09:43:40 np0005625203.localdomain systemd[290999]: Starting Create User's Volatile Files and Directories...
Feb 20 09:43:40 np0005625203.localdomain systemd[290999]: Listening on D-Bus User Message Bus Socket.
Feb 20 09:43:40 np0005625203.localdomain systemd[290999]: Reached target Sockets.
Feb 20 09:43:40 np0005625203.localdomain systemd[290999]: Finished Create User's Volatile Files and Directories.
Feb 20 09:43:40 np0005625203.localdomain systemd[290999]: Reached target Basic System.
Feb 20 09:43:40 np0005625203.localdomain systemd[290999]: Reached target Main User Target.
Feb 20 09:43:40 np0005625203.localdomain systemd[290999]: Startup finished in 149ms.
Feb 20 09:43:40 np0005625203.localdomain systemd[1]: Started User Manager for UID 1002.
Feb 20 09:43:40 np0005625203.localdomain systemd[1]: Started Session 64 of User ceph-admin.
Feb 20 09:43:40 np0005625203.localdomain sshd[290995]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Feb 20 09:43:40 np0005625203.localdomain sudo[291015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:43:40 np0005625203.localdomain sudo[291015]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:40 np0005625203.localdomain sudo[291015]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:40 np0005625203.localdomain sudo[291033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 check-host
Feb 20 09:43:40 np0005625203.localdomain sudo[291033]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:41 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "mon metadata", "id": "np0005625200"} : dispatch
Feb 20 09:43:41 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "mon metadata", "id": "np0005625201"} : dispatch
Feb 20 09:43:41 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "mon metadata", "id": "np0005625202"} : dispatch
Feb 20 09:43:41 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:43:41 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "mon metadata", "id": "np0005625204"} : dispatch
Feb 20 09:43:41 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "mds metadata", "who": "mds.np0005625202.akhmop"} : dispatch
Feb 20 09:43:41 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "mds metadata", "who": "mds.np0005625204.wnsphl"} : dispatch
Feb 20 09:43:41 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "mds metadata", "who": "mds.np0005625203.zsrwgk"} : dispatch
Feb 20 09:43:41 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "mgr metadata", "who": "np0005625200.ypbkax", "id": "np0005625200.ypbkax"} : dispatch
Feb 20 09:43:41 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "mgr metadata", "who": "np0005625202.arwxwo", "id": "np0005625202.arwxwo"} : dispatch
Feb 20 09:43:41 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "mgr metadata", "who": "np0005625203.lonygy", "id": "np0005625203.lonygy"} : dispatch
Feb 20 09:43:41 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "mgr metadata", "who": "np0005625204.exgrzx", "id": "np0005625204.exgrzx"} : dispatch
Feb 20 09:43:41 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "mgr metadata", "who": "np0005625201.mtnyvu", "id": "np0005625201.mtnyvu"} : dispatch
Feb 20 09:43:41 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 20 09:43:41 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 20 09:43:41 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 20 09:43:41 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "osd metadata", "id": 3} : dispatch
Feb 20 09:43:41 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "osd metadata", "id": 4} : dispatch
Feb 20 09:43:41 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "osd metadata", "id": 5} : dispatch
Feb 20 09:43:41 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "mds metadata"} : dispatch
Feb 20 09:43:41 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "osd metadata"} : dispatch
Feb 20 09:43:41 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "mon metadata"} : dispatch
Feb 20 09:43:41 np0005625203.localdomain ceph-mon[286888]: Manager daemon np0005625200.ypbkax is now available
Feb 20 09:43:41 np0005625203.localdomain ceph-mon[286888]: removing stray HostCache host record np0005625199.localdomain.devices.0
Feb 20 09:43:41 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005625199.localdomain.devices.0"} : dispatch
Feb 20 09:43:41 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005625199.localdomain.devices.0"} : dispatch
Feb 20 09:43:41 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005625199.localdomain.devices.0"}]': finished
Feb 20 09:43:41 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005625199.localdomain.devices.0"} : dispatch
Feb 20 09:43:41 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005625199.localdomain.devices.0"} : dispatch
Feb 20 09:43:41 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005625199.localdomain.devices.0"}]': finished
Feb 20 09:43:41 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005625200.ypbkax/mirror_snapshot_schedule"} : dispatch
Feb 20 09:43:41 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005625200.ypbkax/mirror_snapshot_schedule"} : dispatch
Feb 20 09:43:41 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005625200.ypbkax/trash_purge_schedule"} : dispatch
Feb 20 09:43:41 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005625200.ypbkax/trash_purge_schedule"} : dispatch
Feb 20 09:43:41 np0005625203.localdomain sudo[291033]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:41 np0005625203.localdomain sudo[291071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:43:41 np0005625203.localdomain sudo[291071]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:41 np0005625203.localdomain sudo[291071]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:41 np0005625203.localdomain sudo[291089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Feb 20 09:43:41 np0005625203.localdomain sudo[291089]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:42 np0005625203.localdomain podman[291178]: 2026-02-20 09:43:42.404145697 +0000 UTC m=+0.092542398 container exec f6bf94ad66e54cdd610286b233c639418eb2b995978f1a145d6cfa77a016071d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203, architecture=x86_64, build-date=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.42.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, release=1770267347, RELEASE=main, distribution-scope=public, com.redhat.component=rhceph-container, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_BRANCH=main, name=rhceph, CEPH_POINT_RELEASE=, vcs-type=git, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc.)
Feb 20 09:43:42 np0005625203.localdomain ceph-mon[286888]: mgrmap e23: np0005625200.ypbkax(active, since 1.12428s), standbys: np0005625202.arwxwo, np0005625203.lonygy, np0005625204.exgrzx, np0005625201.mtnyvu
Feb 20 09:43:42 np0005625203.localdomain ceph-mon[286888]: from='client.34262 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 20 09:43:42 np0005625203.localdomain ceph-mon[286888]: pgmap v5: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:43:42 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' 
Feb 20 09:43:42 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' 
Feb 20 09:43:42 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' 
Feb 20 09:43:42 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' 
Feb 20 09:43:42 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' 
Feb 20 09:43:42 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' 
Feb 20 09:43:42 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' 
Feb 20 09:43:42 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' 
Feb 20 09:43:42 np0005625203.localdomain ceph-mon[286888]: [20/Feb/2026:09:43:41] ENGINE Bus STARTING
Feb 20 09:43:42 np0005625203.localdomain podman[291178]: 2026-02-20 09:43:42.501245525 +0000 UTC m=+0.189642176 container exec_died f6bf94ad66e54cdd610286b233c639418eb2b995978f1a145d6cfa77a016071d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203, CEPH_POINT_RELEASE=, io.buildah.version=1.42.2, build-date=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1770267347, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, version=7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., name=rhceph, com.redhat.component=rhceph-container)
Feb 20 09:43:43 np0005625203.localdomain sudo[291089]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:43 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@3(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:43:43 np0005625203.localdomain sudo[291292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:43:43 np0005625203.localdomain sudo[291292]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:43 np0005625203.localdomain sudo[291292]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:43 np0005625203.localdomain sudo[291310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 09:43:43 np0005625203.localdomain sudo[291310]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:43 np0005625203.localdomain ceph-mon[286888]: [20/Feb/2026:09:43:41] ENGINE Serving on https://172.18.0.104:7150
Feb 20 09:43:43 np0005625203.localdomain ceph-mon[286888]: [20/Feb/2026:09:43:41] ENGINE Client ('172.18.0.104', 40458) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Feb 20 09:43:43 np0005625203.localdomain ceph-mon[286888]: [20/Feb/2026:09:43:41] ENGINE Serving on http://172.18.0.104:8765
Feb 20 09:43:43 np0005625203.localdomain ceph-mon[286888]: [20/Feb/2026:09:43:41] ENGINE Bus STARTED
Feb 20 09:43:43 np0005625203.localdomain ceph-mon[286888]: pgmap v6: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:43:43 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' 
Feb 20 09:43:43 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' 
Feb 20 09:43:43 np0005625203.localdomain ceph-mon[286888]: mgrmap e24: np0005625200.ypbkax(active, since 2s), standbys: np0005625202.arwxwo, np0005625203.lonygy, np0005625204.exgrzx, np0005625201.mtnyvu
Feb 20 09:43:43 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' 
Feb 20 09:43:43 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' 
Feb 20 09:43:43 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' 
Feb 20 09:43:43 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' 
Feb 20 09:43:43 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' 
Feb 20 09:43:43 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' 
Feb 20 09:43:43 np0005625203.localdomain sudo[291310]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:44 np0005625203.localdomain sudo[291359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:43:44 np0005625203.localdomain sudo[291359]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:44 np0005625203.localdomain sudo[291359]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:44 np0005625203.localdomain sudo[291377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 list-networks
Feb 20 09:43:44 np0005625203.localdomain sudo[291377]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:44 np0005625203.localdomain sudo[291377]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:44 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' 
Feb 20 09:43:44 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' 
Feb 20 09:43:44 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' 
Feb 20 09:43:44 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' 
Feb 20 09:43:44 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "config rm", "who": "osd/host:np0005625200", "name": "osd_memory_target"} : dispatch
Feb 20 09:43:44 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "config rm", "who": "osd/host:np0005625200", "name": "osd_memory_target"} : dispatch
Feb 20 09:43:44 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' 
Feb 20 09:43:44 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' 
Feb 20 09:43:44 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:43:44 np0005625203.localdomain systemd[1]: tmp-crun.0VbKtH.mount: Deactivated successfully.
Feb 20 09:43:44 np0005625203.localdomain podman[291414]: 2026-02-20 09:43:44.771931794 +0000 UTC m=+0.085178177 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 20 09:43:44 np0005625203.localdomain podman[291414]: 2026-02-20 09:43:44.783218687 +0000 UTC m=+0.096465090 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Feb 20 09:43:44 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:43:45 np0005625203.localdomain sudo[291433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Feb 20 09:43:45 np0005625203.localdomain sudo[291433]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:45 np0005625203.localdomain sudo[291433]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:45 np0005625203.localdomain sudo[291451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph
Feb 20 09:43:45 np0005625203.localdomain sudo[291451]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:45 np0005625203.localdomain sudo[291451]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:45 np0005625203.localdomain sudo[291469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:43:45 np0005625203.localdomain sudo[291469]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:45 np0005625203.localdomain sudo[291469]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:45 np0005625203.localdomain sudo[291487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:43:45 np0005625203.localdomain sudo[291487]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:45 np0005625203.localdomain sudo[291487]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:45 np0005625203.localdomain sudo[291505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:43:45 np0005625203.localdomain sudo[291505]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:45 np0005625203.localdomain sudo[291505]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:45 np0005625203.localdomain ceph-mon[286888]: pgmap v7: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:43:45 np0005625203.localdomain ceph-mon[286888]: from='client.44149 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 09:43:45 np0005625203.localdomain ceph-mon[286888]: Saving service mon spec with placement label:mon
Feb 20 09:43:45 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' 
Feb 20 09:43:45 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Feb 20 09:43:45 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Feb 20 09:43:45 np0005625203.localdomain ceph-mon[286888]: mgrmap e25: np0005625200.ypbkax(active, since 4s), standbys: np0005625202.arwxwo, np0005625203.lonygy, np0005625204.exgrzx, np0005625201.mtnyvu
Feb 20 09:43:45 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Feb 20 09:43:45 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Feb 20 09:43:45 np0005625203.localdomain ceph-mon[286888]: Adjusting osd_memory_target on np0005625203.localdomain to 836.6M
Feb 20 09:43:45 np0005625203.localdomain ceph-mon[286888]: Unable to set osd_memory_target on np0005625203.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Feb 20 09:43:45 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' 
Feb 20 09:43:45 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' 
Feb 20 09:43:45 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Feb 20 09:43:45 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' 
Feb 20 09:43:45 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Feb 20 09:43:45 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Feb 20 09:43:45 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Feb 20 09:43:45 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' 
Feb 20 09:43:45 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "config rm", "who": "osd/host:np0005625201", "name": "osd_memory_target"} : dispatch
Feb 20 09:43:45 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "config rm", "who": "osd/host:np0005625201", "name": "osd_memory_target"} : dispatch
Feb 20 09:43:45 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' 
Feb 20 09:43:45 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' 
Feb 20 09:43:45 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Feb 20 09:43:45 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Feb 20 09:43:45 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Feb 20 09:43:45 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Feb 20 09:43:45 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:43:45 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 09:43:45 np0005625203.localdomain sudo[291539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:43:45 np0005625203.localdomain sudo[291539]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:45 np0005625203.localdomain sudo[291539]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:45 np0005625203.localdomain sudo[291557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:43:45 np0005625203.localdomain sudo[291557]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:45 np0005625203.localdomain sudo[291557]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:45 np0005625203.localdomain sudo[291575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Feb 20 09:43:45 np0005625203.localdomain sudo[291575]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:45 np0005625203.localdomain sudo[291575]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:45 np0005625203.localdomain sudo[291593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config
Feb 20 09:43:45 np0005625203.localdomain sudo[291593]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:45 np0005625203.localdomain sudo[291593]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:45 np0005625203.localdomain sudo[291611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config
Feb 20 09:43:45 np0005625203.localdomain sudo[291611]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:45 np0005625203.localdomain sudo[291611]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:45 np0005625203.localdomain sudo[291629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:43:45 np0005625203.localdomain sudo[291629]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:45 np0005625203.localdomain sudo[291629]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:46 np0005625203.localdomain sudo[291647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:43:46 np0005625203.localdomain sudo[291647]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:46 np0005625203.localdomain sudo[291647]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:46 np0005625203.localdomain sudo[291665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:43:46 np0005625203.localdomain sudo[291665]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:46 np0005625203.localdomain sudo[291665]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:46 np0005625203.localdomain sudo[291699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:43:46 np0005625203.localdomain sudo[291699]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:46 np0005625203.localdomain sudo[291699]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:46 np0005625203.localdomain sudo[291717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:43:46 np0005625203.localdomain sudo[291717]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:46 np0005625203.localdomain sudo[291717]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:43:46.343 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:43:46 np0005625203.localdomain sudo[291735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:43:46 np0005625203.localdomain sudo[291735]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:46 np0005625203.localdomain sudo[291735]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:46 np0005625203.localdomain sudo[291753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Feb 20 09:43:46 np0005625203.localdomain sudo[291753]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:46 np0005625203.localdomain sudo[291753]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:46 np0005625203.localdomain sudo[291771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph
Feb 20 09:43:46 np0005625203.localdomain sudo[291771]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:46 np0005625203.localdomain sudo[291771]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:46 np0005625203.localdomain ceph-mon[286888]: Adjusting osd_memory_target on np0005625202.localdomain to 836.6M
Feb 20 09:43:46 np0005625203.localdomain ceph-mon[286888]: Unable to set osd_memory_target on np0005625202.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Feb 20 09:43:46 np0005625203.localdomain ceph-mon[286888]: Adjusting osd_memory_target on np0005625204.localdomain to 836.6M
Feb 20 09:43:46 np0005625203.localdomain ceph-mon[286888]: Unable to set osd_memory_target on np0005625204.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Feb 20 09:43:46 np0005625203.localdomain ceph-mon[286888]: Updating np0005625200.localdomain:/etc/ceph/ceph.conf
Feb 20 09:43:46 np0005625203.localdomain ceph-mon[286888]: Updating np0005625201.localdomain:/etc/ceph/ceph.conf
Feb 20 09:43:46 np0005625203.localdomain ceph-mon[286888]: Updating np0005625202.localdomain:/etc/ceph/ceph.conf
Feb 20 09:43:46 np0005625203.localdomain ceph-mon[286888]: Updating np0005625203.localdomain:/etc/ceph/ceph.conf
Feb 20 09:43:46 np0005625203.localdomain ceph-mon[286888]: Updating np0005625204.localdomain:/etc/ceph/ceph.conf
Feb 20 09:43:46 np0005625203.localdomain ceph-mon[286888]: from='client.27196 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005625202", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 20 09:43:46 np0005625203.localdomain ceph-mon[286888]: Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:43:46 np0005625203.localdomain ceph-mon[286888]: Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:43:46 np0005625203.localdomain ceph-mon[286888]: from='client.? 172.18.0.108:0/117158667' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:43:46 np0005625203.localdomain sudo[291789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.client.admin.keyring.new
Feb 20 09:43:46 np0005625203.localdomain sudo[291789]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:46 np0005625203.localdomain sudo[291789]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:46 np0005625203.localdomain sudo[291807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:43:46 np0005625203.localdomain sudo[291807]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:46 np0005625203.localdomain sudo[291807]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:46 np0005625203.localdomain sudo[291825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.client.admin.keyring.new
Feb 20 09:43:46 np0005625203.localdomain sudo[291825]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:46 np0005625203.localdomain sudo[291825]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:46 np0005625203.localdomain sudo[291859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.client.admin.keyring.new
Feb 20 09:43:46 np0005625203.localdomain sudo[291859]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:46 np0005625203.localdomain sudo[291859]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:46 np0005625203.localdomain sudo[291877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.client.admin.keyring.new
Feb 20 09:43:46 np0005625203.localdomain sudo[291877]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:46 np0005625203.localdomain sudo[291877]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:47 np0005625203.localdomain sudo[291895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Feb 20 09:43:47 np0005625203.localdomain sudo[291895]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:47 np0005625203.localdomain sudo[291895]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:47 np0005625203.localdomain sudo[291913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config
Feb 20 09:43:47 np0005625203.localdomain sudo[291913]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:47 np0005625203.localdomain sudo[291913]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:47 np0005625203.localdomain sudo[291931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config
Feb 20 09:43:47 np0005625203.localdomain sudo[291931]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:47 np0005625203.localdomain sudo[291931]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:47 np0005625203.localdomain sudo[291949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring.new
Feb 20 09:43:47 np0005625203.localdomain sudo[291949]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:47 np0005625203.localdomain sudo[291949]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:43:47.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:43:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:43:47.342 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:43:47 np0005625203.localdomain sudo[291967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:43:47 np0005625203.localdomain sudo[291967]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:47 np0005625203.localdomain sudo[291967]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:43:47.384 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:43:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:43:47.384 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:43:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:43:47.385 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:43:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:43:47.385 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Auditing locally available compute resources for np0005625203.localdomain (node: np0005625203.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 20 09:43:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:43:47.386 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:43:47 np0005625203.localdomain sudo[291985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring.new
Feb 20 09:43:47 np0005625203.localdomain sudo[291985]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:47 np0005625203.localdomain sudo[291985]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:47 np0005625203.localdomain sudo[292020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring.new
Feb 20 09:43:47 np0005625203.localdomain sudo[292020]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:47 np0005625203.localdomain sudo[292020]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:47 np0005625203.localdomain ceph-mon[286888]: Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:43:47 np0005625203.localdomain ceph-mon[286888]: Updating np0005625201.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:43:47 np0005625203.localdomain ceph-mon[286888]: Updating np0005625200.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:43:47 np0005625203.localdomain ceph-mon[286888]: pgmap v8: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:43:47 np0005625203.localdomain ceph-mon[286888]: Updating np0005625203.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 09:43:47 np0005625203.localdomain ceph-mon[286888]: Updating np0005625202.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 09:43:47 np0005625203.localdomain ceph-mon[286888]: Updating np0005625200.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 09:43:47 np0005625203.localdomain ceph-mon[286888]: Updating np0005625201.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 09:43:47 np0005625203.localdomain ceph-mon[286888]: Updating np0005625204.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 09:43:47 np0005625203.localdomain ceph-mon[286888]: from='client.? 172.18.0.108:0/1470667621' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:43:47 np0005625203.localdomain ceph-mon[286888]: from='client.? 172.18.0.200:0/3259045040' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Feb 20 09:43:47 np0005625203.localdomain sudo[292057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring.new
Feb 20 09:43:47 np0005625203.localdomain sudo[292057]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:47 np0005625203.localdomain sudo[292057]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:47 np0005625203.localdomain sudo[292075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring.new /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 09:43:47 np0005625203.localdomain sudo[292075]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:47 np0005625203.localdomain sudo[292075]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:43:47.843 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:43:48 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@3(peon) e9 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 20 09:43:48 np0005625203.localdomain ceph-mon[286888]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/2312330214' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:43:48 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:43:48.078 279640 WARNING nova.virt.libvirt.driver [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 20 09:43:48 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:43:48.080 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Hypervisor/Node resource view: name=np0005625203.localdomain free_ram=12432MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 20 09:43:48 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:43:48.080 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:43:48 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:43:48.080 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:43:48 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@3(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:43:48 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:43:48.156 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 20 09:43:48 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:43:48.157 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Final resource view: name=np0005625203.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 20 09:43:48 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:43:48.178 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:43:48 np0005625203.localdomain sudo[292096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:43:48 np0005625203.localdomain sudo[292096]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:48 np0005625203.localdomain sudo[292096]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:48 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@3(peon) e9 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 20 09:43:48 np0005625203.localdomain ceph-mon[286888]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2137806559' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:43:48 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:43:48.710 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:43:48 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:43:48.717 279640 DEBUG nova.compute.provider_tree [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Inventory has not changed in ProviderTree for provider: e5d5157a-2df2-4f51-b5fb-cd2da3a8584e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 20 09:43:48 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:43:48.731 279640 DEBUG nova.scheduler.client.report [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Inventory has not changed for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 20 09:43:48 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:43:48.734 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Compute_service record updated for np0005625203.localdomain:np0005625203.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 20 09:43:48 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:43:48.735 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:43:48 np0005625203.localdomain ceph-mon[286888]: Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 09:43:48 np0005625203.localdomain ceph-mon[286888]: Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 09:43:48 np0005625203.localdomain ceph-mon[286888]: Updating np0005625200.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 09:43:48 np0005625203.localdomain ceph-mon[286888]: Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 09:43:48 np0005625203.localdomain ceph-mon[286888]: Updating np0005625201.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 09:43:48 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' 
Feb 20 09:43:48 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' 
Feb 20 09:43:48 np0005625203.localdomain ceph-mon[286888]: from='client.? 172.18.0.107:0/1060514335' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:43:48 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' 
Feb 20 09:43:48 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' 
Feb 20 09:43:48 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' 
Feb 20 09:43:48 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' 
Feb 20 09:43:48 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' 
Feb 20 09:43:48 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' 
Feb 20 09:43:48 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' 
Feb 20 09:43:48 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' 
Feb 20 09:43:48 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' 
Feb 20 09:43:48 np0005625203.localdomain ceph-mon[286888]: from='client.? 172.18.0.106:0/2312330214' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:43:48 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' 
Feb 20 09:43:48 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 09:43:48 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 20 09:43:48 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Feb 20 09:43:48 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:43:48 np0005625203.localdomain ceph-mon[286888]: from='client.? 172.18.0.107:0/2137806559' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:43:48 np0005625203.localdomain ceph-mon[286888]: from='client.? 172.18.0.106:0/4196256663' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:43:49 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:43:49 np0005625203.localdomain podman[292135]: 2026-02-20 09:43:49.232729319 +0000 UTC m=+0.074913296 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:43:49 np0005625203.localdomain podman[292135]: 2026-02-20 09:43:49.271638457 +0000 UTC m=+0.113822404 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0)
Feb 20 09:43:49 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:43:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:43:49.731 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:43:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:43:49.732 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:43:49 np0005625203.localdomain ceph-mon[286888]: pgmap v9: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail; 30 KiB/s rd, 0 B/s wr, 16 op/s
Feb 20 09:43:49 np0005625203.localdomain ceph-mon[286888]: Reconfiguring mon.np0005625200 (monmap changed)...
Feb 20 09:43:49 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon mon.np0005625200 on np0005625200.localdomain
Feb 20 09:43:49 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' 
Feb 20 09:43:49 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' 
Feb 20 09:43:49 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625200.ypbkax", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:43:49 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625200.ypbkax", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:43:49 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "mgr services"} : dispatch
Feb 20 09:43:49 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:43:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:43:50.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:43:50 np0005625203.localdomain ceph-mon[286888]: Reconfiguring mgr.np0005625200.ypbkax (monmap changed)...
Feb 20 09:43:50 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon mgr.np0005625200.ypbkax on np0005625200.localdomain
Feb 20 09:43:50 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' 
Feb 20 09:43:50 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' 
Feb 20 09:43:50 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' 
Feb 20 09:43:50 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 20 09:43:50 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Feb 20 09:43:50 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:43:51 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:43:51.337 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:43:51 np0005625203.localdomain ceph-mon[286888]: pgmap v10: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail; 23 KiB/s rd, 0 B/s wr, 12 op/s
Feb 20 09:43:51 np0005625203.localdomain ceph-mon[286888]: Reconfiguring mon.np0005625201 (monmap changed)...
Feb 20 09:43:51 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon mon.np0005625201 on np0005625201.localdomain
Feb 20 09:43:51 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' 
Feb 20 09:43:51 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' 
Feb 20 09:43:51 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625201.mtnyvu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:43:51 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625201.mtnyvu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:43:51 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "mgr services"} : dispatch
Feb 20 09:43:51 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:43:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:43:52.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:43:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:43:52.341 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 20 09:43:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:43:52.342 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 20 09:43:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:43:52.358 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 20 09:43:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:43:52.358 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:43:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:43:52.359 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:43:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:43:52.359 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 20 09:43:52 np0005625203.localdomain ceph-mon[286888]: Reconfiguring mgr.np0005625201.mtnyvu (monmap changed)...
Feb 20 09:43:52 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon mgr.np0005625201.mtnyvu on np0005625201.localdomain
Feb 20 09:43:52 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' 
Feb 20 09:43:52 np0005625203.localdomain ceph-mon[286888]: from='client.? 172.18.0.200:0/3972118785' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json"} : dispatch
Feb 20 09:43:52 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' 
Feb 20 09:43:52 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625201", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:43:52 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625201", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:43:52 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:43:53 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@3(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:43:53 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:43:53 np0005625203.localdomain systemd[1]: tmp-crun.AjaynM.mount: Deactivated successfully.
Feb 20 09:43:53 np0005625203.localdomain podman[292158]: 2026-02-20 09:43:53.76394965 +0000 UTC m=+0.079635744 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, config_id=openstack_network_exporter, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.33.7, org.opencontainers.image.created=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, release=1770267347, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, managed_by=edpm_ansible, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, build-date=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.component=ubi9-minimal-container, distribution-scope=public, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Feb 20 09:43:53 np0005625203.localdomain podman[292158]: 2026-02-20 09:43:53.784259245 +0000 UTC m=+0.099945329 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, container_name=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, build-date=2026-02-05T04:57:10Z, managed_by=edpm_ansible, config_id=openstack_network_exporter, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, vcs-type=git, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, version=9.7, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z)
Feb 20 09:43:53 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:43:53 np0005625203.localdomain ceph-mon[286888]: pgmap v11: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 0 B/s wr, 10 op/s
Feb 20 09:43:53 np0005625203.localdomain ceph-mon[286888]: Reconfiguring crash.np0005625201 (monmap changed)...
Feb 20 09:43:53 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon crash.np0005625201 on np0005625201.localdomain
Feb 20 09:43:53 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' 
Feb 20 09:43:53 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' 
Feb 20 09:43:53 np0005625203.localdomain ceph-mon[286888]: Reconfiguring crash.np0005625202 (monmap changed)...
Feb 20 09:43:53 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625202", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:43:53 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 ' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625202", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:43:53 np0005625203.localdomain ceph-mon[286888]: from='mgr.24104 172.18.0.104:0/2092071211' entity='mgr.np0005625200.ypbkax' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:43:53 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon crash.np0005625202 on np0005625202.localdomain
Feb 20 09:43:54 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@3(peon).osd e88 e88: 6 total, 6 up, 6 in
Feb 20 09:43:54 np0005625203.localdomain sshd[290995]: pam_unix(sshd:session): session closed for user ceph-admin
Feb 20 09:43:54 np0005625203.localdomain systemd[1]: session-64.scope: Deactivated successfully.
Feb 20 09:43:54 np0005625203.localdomain systemd[1]: session-64.scope: Consumed 6.279s CPU time.
Feb 20 09:43:54 np0005625203.localdomain systemd-logind[759]: Session 64 logged out. Waiting for processes to exit.
Feb 20 09:43:54 np0005625203.localdomain systemd-logind[759]: Removed session 64.
Feb 20 09:43:54 np0005625203.localdomain sshd[292177]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:43:54 np0005625203.localdomain sshd[292177]: Accepted publickey for ceph-admin from 192.168.122.106 port 39468 ssh2: RSA SHA256:ReOVVWzTOjcz49zJan4sHrxgFDyL1hzGQqhA1t8e47Y
Feb 20 09:43:54 np0005625203.localdomain systemd-logind[759]: New session 66 of user ceph-admin.
Feb 20 09:43:54 np0005625203.localdomain systemd[1]: Started Session 66 of User ceph-admin.
Feb 20 09:43:54 np0005625203.localdomain sshd[292177]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Feb 20 09:43:54 np0005625203.localdomain sudo[292181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:43:54 np0005625203.localdomain sudo[292181]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:54 np0005625203.localdomain sudo[292181]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:54 np0005625203.localdomain sudo[292199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Feb 20 09:43:54 np0005625203.localdomain sudo[292199]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:54 np0005625203.localdomain ceph-mon[286888]: from='client.? 172.18.0.200:0/3880794004' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Feb 20 09:43:54 np0005625203.localdomain ceph-mon[286888]: Activating manager daemon np0005625202.arwxwo
Feb 20 09:43:54 np0005625203.localdomain ceph-mon[286888]: osdmap e88: 6 total, 6 up, 6 in
Feb 20 09:43:54 np0005625203.localdomain ceph-mon[286888]: from='client.? 172.18.0.200:0/3880794004' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished
Feb 20 09:43:54 np0005625203.localdomain ceph-mon[286888]: mgrmap e26: np0005625202.arwxwo(active, starting, since 0.0432458s), standbys: np0005625203.lonygy, np0005625204.exgrzx, np0005625201.mtnyvu
Feb 20 09:43:54 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625200"} : dispatch
Feb 20 09:43:54 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625201"} : dispatch
Feb 20 09:43:54 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625202"} : dispatch
Feb 20 09:43:54 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:43:54 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625204"} : dispatch
Feb 20 09:43:54 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mds metadata", "who": "mds.np0005625202.akhmop"} : dispatch
Feb 20 09:43:54 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mds metadata", "who": "mds.np0005625204.wnsphl"} : dispatch
Feb 20 09:43:54 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mds metadata", "who": "mds.np0005625203.zsrwgk"} : dispatch
Feb 20 09:43:54 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mgr metadata", "who": "np0005625202.arwxwo", "id": "np0005625202.arwxwo"} : dispatch
Feb 20 09:43:54 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mgr metadata", "who": "np0005625203.lonygy", "id": "np0005625203.lonygy"} : dispatch
Feb 20 09:43:54 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mgr metadata", "who": "np0005625204.exgrzx", "id": "np0005625204.exgrzx"} : dispatch
Feb 20 09:43:54 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mgr metadata", "who": "np0005625201.mtnyvu", "id": "np0005625201.mtnyvu"} : dispatch
Feb 20 09:43:54 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 20 09:43:54 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 20 09:43:54 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 20 09:43:54 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd metadata", "id": 3} : dispatch
Feb 20 09:43:54 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd metadata", "id": 4} : dispatch
Feb 20 09:43:54 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd metadata", "id": 5} : dispatch
Feb 20 09:43:54 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mds metadata"} : dispatch
Feb 20 09:43:54 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd metadata"} : dispatch
Feb 20 09:43:54 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata"} : dispatch
Feb 20 09:43:54 np0005625203.localdomain ceph-mon[286888]: Manager daemon np0005625202.arwxwo is now available
Feb 20 09:43:54 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005625202.arwxwo/mirror_snapshot_schedule"} : dispatch
Feb 20 09:43:54 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005625202.arwxwo/mirror_snapshot_schedule"} : dispatch
Feb 20 09:43:54 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005625202.arwxwo/trash_purge_schedule"} : dispatch
Feb 20 09:43:54 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005625202.arwxwo/trash_purge_schedule"} : dispatch
Feb 20 09:43:55 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:43:55 np0005625203.localdomain sshd[292266]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:43:55 np0005625203.localdomain podman[292258]: 2026-02-20 09:43:55.451336702 +0000 UTC m=+0.089551984 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127)
Feb 20 09:43:55 np0005625203.localdomain podman[292258]: 2026-02-20 09:43:55.461654135 +0000 UTC m=+0.099869447 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Feb 20 09:43:55 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:43:55 np0005625203.localdomain podman[292309]: 2026-02-20 09:43:55.698125616 +0000 UTC m=+0.101174417 container exec f6bf94ad66e54cdd610286b233c639418eb2b995978f1a145d6cfa77a016071d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., RELEASE=main, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, release=1770267347, org.opencontainers.image.created=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, ceph=True, vcs-type=git, com.redhat.component=rhceph-container, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, name=rhceph, io.openshift.tags=rhceph ceph, build-date=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Feb 20 09:43:55 np0005625203.localdomain podman[292309]: 2026-02-20 09:43:55.829266811 +0000 UTC m=+0.232315592 container exec_died f6bf94ad66e54cdd610286b233c639418eb2b995978f1a145d6cfa77a016071d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, GIT_BRANCH=main, build-date=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, architecture=x86_64, version=7, CEPH_POINT_RELEASE=, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, release=1770267347, org.opencontainers.image.created=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container)
Feb 20 09:43:56 np0005625203.localdomain ceph-mon[286888]: mgrmap e27: np0005625202.arwxwo(active, since 1.05623s), standbys: np0005625203.lonygy, np0005625204.exgrzx, np0005625201.mtnyvu
Feb 20 09:43:56 np0005625203.localdomain ceph-mon[286888]: [20/Feb/2026:09:43:55] ENGINE Bus STARTING
Feb 20 09:43:56 np0005625203.localdomain ceph-mon[286888]: [20/Feb/2026:09:43:55] ENGINE Serving on http://172.18.0.106:8765
Feb 20 09:43:56 np0005625203.localdomain ceph-mon[286888]: [20/Feb/2026:09:43:55] ENGINE Serving on https://172.18.0.106:7150
Feb 20 09:43:56 np0005625203.localdomain ceph-mon[286888]: [20/Feb/2026:09:43:55] ENGINE Bus STARTED
Feb 20 09:43:56 np0005625203.localdomain ceph-mon[286888]: [20/Feb/2026:09:43:55] ENGINE Client ('172.18.0.106', 42946) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Feb 20 09:43:56 np0005625203.localdomain sudo[292199]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:56 np0005625203.localdomain sudo[292427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:43:56 np0005625203.localdomain sudo[292427]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:56 np0005625203.localdomain sudo[292427]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:56 np0005625203.localdomain sudo[292445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 09:43:56 np0005625203.localdomain sudo[292445]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:56 np0005625203.localdomain sshd[292266]: Received disconnect from 118.99.80.29 port 4437:11: Bye Bye [preauth]
Feb 20 09:43:56 np0005625203.localdomain sshd[292266]: Disconnected from authenticating user root 118.99.80.29 port 4437 [preauth]
Feb 20 09:43:57 np0005625203.localdomain ceph-mon[286888]: pgmap v4: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:43:57 np0005625203.localdomain ceph-mon[286888]: mgrmap e28: np0005625202.arwxwo(active, since 2s), standbys: np0005625203.lonygy, np0005625204.exgrzx, np0005625201.mtnyvu
Feb 20 09:43:57 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:43:57 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:43:57 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:43:57 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:43:57 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:43:57 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:43:57 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:43:57 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:43:57 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:43:57 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:43:57 np0005625203.localdomain sudo[292445]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:57 np0005625203.localdomain sudo[292495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:43:57 np0005625203.localdomain sudo[292495]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:57 np0005625203.localdomain sudo[292495]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:57 np0005625203.localdomain sudo[292513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 list-networks
Feb 20 09:43:57 np0005625203.localdomain sudo[292513]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:57 np0005625203.localdomain sudo[292513]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:58 np0005625203.localdomain sudo[292550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Feb 20 09:43:58 np0005625203.localdomain sudo[292550]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:58 np0005625203.localdomain sudo[292550]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:58 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@3(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:43:58 np0005625203.localdomain sudo[292568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph
Feb 20 09:43:58 np0005625203.localdomain sudo[292568]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:58 np0005625203.localdomain sudo[292568]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:58 np0005625203.localdomain sudo[292586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:43:58 np0005625203.localdomain sudo[292586]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:58 np0005625203.localdomain sudo[292586]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:58 np0005625203.localdomain sudo[292604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:43:58 np0005625203.localdomain sudo[292604]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:58 np0005625203.localdomain sudo[292604]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:58 np0005625203.localdomain sudo[292622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:43:58 np0005625203.localdomain sudo[292622]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:58 np0005625203.localdomain sudo[292622]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:58 np0005625203.localdomain sudo[292656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:43:58 np0005625203.localdomain sudo[292656]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:58 np0005625203.localdomain sudo[292656]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:58 np0005625203.localdomain sudo[292674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:43:58 np0005625203.localdomain sudo[292674]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:58 np0005625203.localdomain sudo[292674]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:58 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:43:58 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:43:58 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd/host:np0005625201", "name": "osd_memory_target"} : dispatch
Feb 20 09:43:58 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd/host:np0005625201", "name": "osd_memory_target"} : dispatch
Feb 20 09:43:58 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:43:58 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:43:58 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd/host:np0005625200", "name": "osd_memory_target"} : dispatch
Feb 20 09:43:58 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd/host:np0005625200", "name": "osd_memory_target"} : dispatch
Feb 20 09:43:58 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:43:58 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:43:58 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:43:58 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:43:58 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:43:58 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Feb 20 09:43:58 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Feb 20 09:43:58 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Feb 20 09:43:58 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:43:58 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Feb 20 09:43:58 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Feb 20 09:43:58 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Feb 20 09:43:58 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Feb 20 09:43:58 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Feb 20 09:43:58 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Feb 20 09:43:58 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Feb 20 09:43:58 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Feb 20 09:43:58 np0005625203.localdomain ceph-mon[286888]: Adjusting osd_memory_target on np0005625204.localdomain to 836.6M
Feb 20 09:43:58 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Feb 20 09:43:58 np0005625203.localdomain ceph-mon[286888]: Adjusting osd_memory_target on np0005625203.localdomain to 836.6M
Feb 20 09:43:58 np0005625203.localdomain ceph-mon[286888]: Adjusting osd_memory_target on np0005625202.localdomain to 836.6M
Feb 20 09:43:58 np0005625203.localdomain ceph-mon[286888]: Unable to set osd_memory_target on np0005625204.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Feb 20 09:43:58 np0005625203.localdomain ceph-mon[286888]: Unable to set osd_memory_target on np0005625203.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Feb 20 09:43:58 np0005625203.localdomain ceph-mon[286888]: Unable to set osd_memory_target on np0005625202.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Feb 20 09:43:58 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:43:58 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 09:43:58 np0005625203.localdomain ceph-mon[286888]: Updating np0005625200.localdomain:/etc/ceph/ceph.conf
Feb 20 09:43:58 np0005625203.localdomain ceph-mon[286888]: Updating np0005625201.localdomain:/etc/ceph/ceph.conf
Feb 20 09:43:58 np0005625203.localdomain ceph-mon[286888]: Updating np0005625202.localdomain:/etc/ceph/ceph.conf
Feb 20 09:43:58 np0005625203.localdomain ceph-mon[286888]: Updating np0005625203.localdomain:/etc/ceph/ceph.conf
Feb 20 09:43:58 np0005625203.localdomain ceph-mon[286888]: Updating np0005625204.localdomain:/etc/ceph/ceph.conf
Feb 20 09:43:58 np0005625203.localdomain ceph-mon[286888]: pgmap v5: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:43:58 np0005625203.localdomain ceph-mon[286888]: mgrmap e29: np0005625202.arwxwo(active, since 4s), standbys: np0005625203.lonygy, np0005625204.exgrzx, np0005625201.mtnyvu
Feb 20 09:43:58 np0005625203.localdomain ceph-mon[286888]: Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:43:58 np0005625203.localdomain sudo[292692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Feb 20 09:43:58 np0005625203.localdomain sudo[292692]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:58 np0005625203.localdomain sudo[292692]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:58 np0005625203.localdomain sudo[292710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config
Feb 20 09:43:58 np0005625203.localdomain sudo[292710]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:58 np0005625203.localdomain sudo[292710]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:58 np0005625203.localdomain sudo[292728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config
Feb 20 09:43:58 np0005625203.localdomain sudo[292728]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:58 np0005625203.localdomain sudo[292728]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:58 np0005625203.localdomain sudo[292746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:43:58 np0005625203.localdomain sudo[292746]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:58 np0005625203.localdomain sudo[292746]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:58 np0005625203.localdomain podman[240359]: time="2026-02-20T09:43:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:43:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:43:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154068 "" "Go-http-client/1.1"
Feb 20 09:43:59 np0005625203.localdomain sudo[292764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:43:59 np0005625203.localdomain sudo[292764]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:59 np0005625203.localdomain sudo[292764]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:43:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17772 "" "Go-http-client/1.1"
Feb 20 09:43:59 np0005625203.localdomain sudo[292782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:43:59 np0005625203.localdomain sudo[292782]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:59 np0005625203.localdomain sudo[292782]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:59 np0005625203.localdomain sudo[292816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:43:59 np0005625203.localdomain sudo[292816]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:59 np0005625203.localdomain sudo[292816]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:59 np0005625203.localdomain sudo[292834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:43:59 np0005625203.localdomain sudo[292834]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:59 np0005625203.localdomain sudo[292834]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:59 np0005625203.localdomain sudo[292852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:43:59 np0005625203.localdomain sudo[292852]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:59 np0005625203.localdomain sudo[292852]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:59 np0005625203.localdomain sudo[292870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Feb 20 09:43:59 np0005625203.localdomain sudo[292870]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:59 np0005625203.localdomain sudo[292870]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:59 np0005625203.localdomain sudo[292888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph
Feb 20 09:43:59 np0005625203.localdomain sudo[292888]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:59 np0005625203.localdomain sudo[292888]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:59 np0005625203.localdomain sudo[292906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.client.admin.keyring.new
Feb 20 09:43:59 np0005625203.localdomain sudo[292906]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:59 np0005625203.localdomain sudo[292906]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:59 np0005625203.localdomain sudo[292924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:43:59 np0005625203.localdomain sudo[292924]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:59 np0005625203.localdomain sudo[292924]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:59 np0005625203.localdomain sudo[292942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.client.admin.keyring.new
Feb 20 09:43:59 np0005625203.localdomain sudo[292942]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:59 np0005625203.localdomain sudo[292942]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:59 np0005625203.localdomain sudo[292976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.client.admin.keyring.new
Feb 20 09:43:59 np0005625203.localdomain sudo[292976]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:59 np0005625203.localdomain sudo[292976]: pam_unix(sudo:session): session closed for user root
Feb 20 09:43:59 np0005625203.localdomain sudo[292994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.client.admin.keyring.new
Feb 20 09:43:59 np0005625203.localdomain sudo[292994]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:43:59 np0005625203.localdomain sudo[292994]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:00 np0005625203.localdomain ceph-mon[286888]: Updating np0005625201.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:44:00 np0005625203.localdomain ceph-mon[286888]: Updating np0005625200.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:44:00 np0005625203.localdomain ceph-mon[286888]: Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:44:00 np0005625203.localdomain ceph-mon[286888]: Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:44:00 np0005625203.localdomain ceph-mon[286888]: Updating np0005625202.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 09:44:00 np0005625203.localdomain ceph-mon[286888]: Updating np0005625201.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 09:44:00 np0005625203.localdomain ceph-mon[286888]: Standby manager daemon np0005625200.ypbkax started
Feb 20 09:44:00 np0005625203.localdomain ceph-mon[286888]: Updating np0005625200.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 09:44:00 np0005625203.localdomain ceph-mon[286888]: Updating np0005625203.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 09:44:00 np0005625203.localdomain ceph-mon[286888]: Updating np0005625204.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 09:44:00 np0005625203.localdomain sudo[293012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Feb 20 09:44:00 np0005625203.localdomain sudo[293012]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:00 np0005625203.localdomain sudo[293012]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:00 np0005625203.localdomain sudo[293030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config
Feb 20 09:44:00 np0005625203.localdomain sudo[293030]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:00 np0005625203.localdomain sudo[293030]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:00 np0005625203.localdomain sudo[293048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config
Feb 20 09:44:00 np0005625203.localdomain sudo[293048]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:00 np0005625203.localdomain sudo[293048]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:00 np0005625203.localdomain sudo[293066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring.new
Feb 20 09:44:00 np0005625203.localdomain sudo[293066]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:00 np0005625203.localdomain sudo[293066]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:00 np0005625203.localdomain sudo[293084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:44:00 np0005625203.localdomain sudo[293084]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:00 np0005625203.localdomain sudo[293084]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:00 np0005625203.localdomain sudo[293102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring.new
Feb 20 09:44:00 np0005625203.localdomain sudo[293102]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:00 np0005625203.localdomain sudo[293102]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:00 np0005625203.localdomain sudo[293136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring.new
Feb 20 09:44:00 np0005625203.localdomain sudo[293136]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:00 np0005625203.localdomain sudo[293136]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:00 np0005625203.localdomain sudo[293154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring.new
Feb 20 09:44:00 np0005625203.localdomain sudo[293154]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:00 np0005625203.localdomain sudo[293154]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:00 np0005625203.localdomain sudo[293172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring.new /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 09:44:00 np0005625203.localdomain sudo[293172]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:00 np0005625203.localdomain sudo[293172]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:01 np0005625203.localdomain ceph-mon[286888]: Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 09:44:01 np0005625203.localdomain ceph-mon[286888]: Updating np0005625201.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 09:44:01 np0005625203.localdomain ceph-mon[286888]: Updating np0005625200.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 09:44:01 np0005625203.localdomain ceph-mon[286888]: Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 09:44:01 np0005625203.localdomain ceph-mon[286888]: mgrmap e30: np0005625202.arwxwo(active, since 6s), standbys: np0005625203.lonygy, np0005625204.exgrzx, np0005625200.ypbkax, np0005625201.mtnyvu
Feb 20 09:44:01 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mgr metadata", "who": "np0005625200.ypbkax", "id": "np0005625200.ypbkax"} : dispatch
Feb 20 09:44:01 np0005625203.localdomain ceph-mon[286888]: pgmap v6: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:44:01 np0005625203.localdomain ceph-mon[286888]: Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 09:44:01 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:01 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:01 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:01 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:01 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:01 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:01 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:01 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:01 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:01 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:01 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:01 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 09:44:01 np0005625203.localdomain sudo[293190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:44:01 np0005625203.localdomain sudo[293190]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:01 np0005625203.localdomain sudo[293190]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:02 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625202", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:44:02 np0005625203.localdomain ceph-mon[286888]: Reconfiguring crash.np0005625202 (monmap changed)...
Feb 20 09:44:02 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625202", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:44:02 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:44:02 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon crash.np0005625202 on np0005625202.localdomain
Feb 20 09:44:02 np0005625203.localdomain ceph-mon[286888]: from='client.? 172.18.0.32:0/2943728084' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:44:02 np0005625203.localdomain ceph-mon[286888]: from='client.? 172.18.0.32:0/2943728084' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:44:03 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@3(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:44:03 np0005625203.localdomain ceph-mon[286888]: pgmap v7: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail; 30 KiB/s rd, 0 B/s wr, 16 op/s
Feb 20 09:44:03 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:03 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:03 np0005625203.localdomain ceph-mon[286888]: Reconfiguring osd.2 (monmap changed)...
Feb 20 09:44:03 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Feb 20 09:44:03 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:44:03 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon osd.2 on np0005625202.localdomain
Feb 20 09:44:04 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:04 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:04 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:04 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:04 np0005625203.localdomain ceph-mon[286888]: Reconfiguring osd.5 (monmap changed)...
Feb 20 09:44:04 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Feb 20 09:44:04 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:44:04 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon osd.5 on np0005625202.localdomain
Feb 20 09:44:04 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:04 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:44:04 np0005625203.localdomain podman[293208]: 2026-02-20 09:44:04.784132993 +0000 UTC m=+0.091429443 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 09:44:04 np0005625203.localdomain podman[293208]: 2026-02-20 09:44:04.82333328 +0000 UTC m=+0.130629720 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 09:44:04 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0.
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: (Original Log Time 2026/02/20-09:44:05.001719) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580645001788, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1806, "num_deletes": 254, "total_data_size": 9515829, "memory_usage": 10070512, "flush_reason": "Manual Compaction"}
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580645025980, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 5847284, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13840, "largest_seqno": 15639, "table_properties": {"data_size": 5839303, "index_size": 4614, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 20862, "raw_average_key_size": 22, "raw_value_size": 5822041, "raw_average_value_size": 6383, "num_data_blocks": 197, "num_entries": 912, "num_filter_entries": 912, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580605, "oldest_key_time": 1771580605, "file_creation_time": 1771580645, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a18f0433-5302-412e-a730-8e4f9cc01661", "db_session_id": "IVJC5Q80ONGS9Z85L01D", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}}
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 24317 microseconds, and 12945 cpu microseconds.
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: (Original Log Time 2026/02/20-09:44:05.026041) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 5847284 bytes OK
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: (Original Log Time 2026/02/20-09:44:05.026071) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: (Original Log Time 2026/02/20-09:44:05.028227) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: (Original Log Time 2026/02/20-09:44:05.028249) EVENT_LOG_v1 {"time_micros": 1771580645028243, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: (Original Log Time 2026/02/20-09:44:05.028272) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 9506701, prev total WAL file size 9514805, number of live WAL files 2.
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: (Original Log Time 2026/02/20-09:44:05.030080) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033353036' seq:72057594037927935, type:22 .. '6D6772737461740033373537' seq:0, type:0; will stop at (end)
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(5710KB)], [15(14MB)]
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580645030122, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 20910649, "oldest_snapshot_seqno": -1}
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 10235 keys, 18650134 bytes, temperature: kUnknown
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580645127307, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 18650134, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18590208, "index_size": 33265, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25605, "raw_key_size": 271622, "raw_average_key_size": 26, "raw_value_size": 18413877, "raw_average_value_size": 1799, "num_data_blocks": 1289, "num_entries": 10235, "num_filter_entries": 10235, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580482, "oldest_key_time": 0, "file_creation_time": 1771580645, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a18f0433-5302-412e-a730-8e4f9cc01661", "db_session_id": "IVJC5Q80ONGS9Z85L01D", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}}
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: (Original Log Time 2026/02/20-09:44:05.127765) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 18650134 bytes
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: (Original Log Time 2026/02/20-09:44:05.129610) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 214.9 rd, 191.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(5.6, 14.4 +0.0 blob) out(17.8 +0.0 blob), read-write-amplify(6.8) write-amplify(3.2) OK, records in: 10768, records dropped: 533 output_compression: NoCompression
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: (Original Log Time 2026/02/20-09:44:05.129641) EVENT_LOG_v1 {"time_micros": 1771580645129628, "job": 6, "event": "compaction_finished", "compaction_time_micros": 97317, "compaction_time_cpu_micros": 50598, "output_level": 6, "num_output_files": 1, "total_output_size": 18650134, "num_input_records": 10768, "num_output_records": 10235, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580645130747, "job": 6, "event": "table_file_deletion", "file_number": 17}
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580645133389, "job": 6, "event": "table_file_deletion", "file_number": 15}
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: (Original Log Time 2026/02/20-09:44:05.029981) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: (Original Log Time 2026/02/20-09:44:05.133520) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: (Original Log Time 2026/02/20-09:44:05.133529) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: (Original Log Time 2026/02/20-09:44:05.133533) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: (Original Log Time 2026/02/20-09:44:05.133537) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: (Original Log Time 2026/02/20-09:44:05.133540) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #19. Immutable memtables: 0.
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: (Original Log Time 2026/02/20-09:44:05.134256) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580645134303, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 275, "num_deletes": 264, "total_data_size": 20879, "memory_usage": 28328, "flush_reason": "Manual Compaction"}
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580645138713, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 13496, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15641, "largest_seqno": 15914, "table_properties": {"data_size": 11691, "index_size": 49, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4192, "raw_average_key_size": 15, "raw_value_size": 8092, "raw_average_value_size": 29, "num_data_blocks": 2, "num_entries": 274, "num_filter_entries": 274, "num_deletions": 264, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580645, "oldest_key_time": 1771580645, "file_creation_time": 1771580645, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a18f0433-5302-412e-a730-8e4f9cc01661", "db_session_id": "IVJC5Q80ONGS9Z85L01D", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 4559 microseconds, and 1750 cpu microseconds.
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: (Original Log Time 2026/02/20-09:44:05.138802) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 13496 bytes OK
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: (Original Log Time 2026/02/20-09:44:05.138852) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: (Original Log Time 2026/02/20-09:44:05.141097) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: (Original Log Time 2026/02/20-09:44:05.141132) EVENT_LOG_v1 {"time_micros": 1771580645141122, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: (Original Log Time 2026/02/20-09:44:05.141173) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 18714, prev total WAL file size 18714, number of live WAL files 2.
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: (Original Log Time 2026/02/20-09:44:05.141947) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760031303039' seq:72057594037927935, type:22 .. '6B760031323734' seq:0, type:0; will stop at (end)
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(13KB)], [18(17MB)]
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580645141995, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 18663630, "oldest_snapshot_seqno": -1}
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #21: 9972 keys, 17685982 bytes, temperature: kUnknown
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580645220766, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 17685982, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17629156, "index_size": 30805, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24965, "raw_key_size": 267681, "raw_average_key_size": 26, "raw_value_size": 17458601, "raw_average_value_size": 1750, "num_data_blocks": 1165, "num_entries": 9972, "num_filter_entries": 9972, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580482, "oldest_key_time": 0, "file_creation_time": 1771580645, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a18f0433-5302-412e-a730-8e4f9cc01661", "db_session_id": "IVJC5Q80ONGS9Z85L01D", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}}
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: (Original Log Time 2026/02/20-09:44:05.221211) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 17685982 bytes
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: (Original Log Time 2026/02/20-09:44:05.222813) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 236.4 rd, 224.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 17.8 +0.0 blob) out(16.9 +0.0 blob), read-write-amplify(2693.4) write-amplify(1310.5) OK, records in: 10509, records dropped: 537 output_compression: NoCompression
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: (Original Log Time 2026/02/20-09:44:05.222845) EVENT_LOG_v1 {"time_micros": 1771580645222832, "job": 8, "event": "compaction_finished", "compaction_time_micros": 78966, "compaction_time_cpu_micros": 45117, "output_level": 6, "num_output_files": 1, "total_output_size": 17685982, "num_input_records": 10509, "num_output_records": 9972, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580645223029, "job": 8, "event": "table_file_deletion", "file_number": 20}
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580645225619, "job": 8, "event": "table_file_deletion", "file_number": 18}
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: (Original Log Time 2026/02/20-09:44:05.141799) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: (Original Log Time 2026/02/20-09:44:05.225728) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: (Original Log Time 2026/02/20-09:44:05.225736) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: (Original Log Time 2026/02/20-09:44:05.225739) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: (Original Log Time 2026/02/20-09:44:05.225742) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: rocksdb: (Original Log Time 2026/02/20-09:44:05.225745) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: pgmap v8: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail; 23 KiB/s rd, 0 B/s wr, 12 op/s
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625202.akhmop", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: Reconfiguring mds.mds.np0005625202.akhmop (monmap changed)...
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625202.akhmop", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:44:05 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon mds.mds.np0005625202.akhmop on np0005625202.localdomain
Feb 20 09:44:06 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:06 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:06 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625202.arwxwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:44:06 np0005625203.localdomain ceph-mon[286888]: Reconfiguring mgr.np0005625202.arwxwo (monmap changed)...
Feb 20 09:44:06 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625202.arwxwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:44:06 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mgr services"} : dispatch
Feb 20 09:44:06 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:44:06 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon mgr.np0005625202.arwxwo on np0005625202.localdomain
Feb 20 09:44:06 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:06 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:06 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 20 09:44:06 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Feb 20 09:44:06 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:44:06 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:44:06 np0005625203.localdomain podman[293231]: 2026-02-20 09:44:06.759852759 +0000 UTC m=+0.080166720 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 20 09:44:06 np0005625203.localdomain podman[293231]: 2026-02-20 09:44:06.769647805 +0000 UTC m=+0.089961786 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 20 09:44:06 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:44:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:44:07 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 09:44:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:44:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:44:07 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 09:44:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:44:07 np0005625203.localdomain sudo[293255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:44:07 np0005625203.localdomain sudo[293255]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:07 np0005625203.localdomain sudo[293255]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:07 np0005625203.localdomain sudo[293273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:44:07 np0005625203.localdomain sudo[293273]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:07 np0005625203.localdomain ceph-mon[286888]: from='client.34378 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 20 09:44:07 np0005625203.localdomain ceph-mon[286888]: pgmap v9: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 0 B/s wr, 10 op/s
Feb 20 09:44:07 np0005625203.localdomain ceph-mon[286888]: Reconfiguring mon.np0005625202 (monmap changed)...
Feb 20 09:44:07 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon mon.np0005625202 on np0005625202.localdomain
Feb 20 09:44:07 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:07 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:07 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625203", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:44:07 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625203", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:44:07 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:44:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:44:07.659 161112 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:44:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:44:07.659 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:44:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:44:07.660 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:44:07 np0005625203.localdomain podman[293307]: 
Feb 20 09:44:07 np0005625203.localdomain podman[293307]: 2026-02-20 09:44:07.984966353 +0000 UTC m=+0.069878788 container create 281e7e14e6c412eb2d5915e5892471eedae074c85561c53cfe0f637836c5437f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=modest_perlman, org.opencontainers.image.created=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, architecture=x86_64, description=Red Hat Ceph Storage 7, distribution-scope=public, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, ceph=True, version=7, CEPH_POINT_RELEASE=, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, io.openshift.expose-services=, RELEASE=main, io.openshift.tags=rhceph ceph, build-date=2026-02-09T10:25:24Z)
Feb 20 09:44:08 np0005625203.localdomain systemd[1]: Started libpod-conmon-281e7e14e6c412eb2d5915e5892471eedae074c85561c53cfe0f637836c5437f.scope.
Feb 20 09:44:08 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:44:08 np0005625203.localdomain podman[293307]: 2026-02-20 09:44:08.055473669 +0000 UTC m=+0.140386104 container init 281e7e14e6c412eb2d5915e5892471eedae074c85561c53cfe0f637836c5437f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=modest_perlman, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, RELEASE=main, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, name=rhceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.42.2, distribution-scope=public, vcs-type=git, version=7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-09T10:25:24Z, release=1770267347, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc.)
Feb 20 09:44:08 np0005625203.localdomain podman[293307]: 2026-02-20 09:44:07.959476705 +0000 UTC m=+0.044389150 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 09:44:08 np0005625203.localdomain systemd[1]: tmp-crun.zNxXrn.mount: Deactivated successfully.
Feb 20 09:44:08 np0005625203.localdomain podman[293307]: 2026-02-20 09:44:08.07081387 +0000 UTC m=+0.155726305 container start 281e7e14e6c412eb2d5915e5892471eedae074c85561c53cfe0f637836c5437f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=modest_perlman, org.opencontainers.image.created=2026-02-09T10:25:24Z, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1770267347, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, vendor=Red Hat, Inc., build-date=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, architecture=x86_64, io.buildah.version=1.42.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git)
Feb 20 09:44:08 np0005625203.localdomain podman[293307]: 2026-02-20 09:44:08.071277414 +0000 UTC m=+0.156189889 container attach 281e7e14e6c412eb2d5915e5892471eedae074c85561c53cfe0f637836c5437f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=modest_perlman, build-date=2026-02-09T10:25:24Z, release=1770267347, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., name=rhceph, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, version=7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-09T10:25:24Z, ceph=True, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, io.buildah.version=1.42.2, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 09:44:08 np0005625203.localdomain systemd[1]: libpod-281e7e14e6c412eb2d5915e5892471eedae074c85561c53cfe0f637836c5437f.scope: Deactivated successfully.
Feb 20 09:44:08 np0005625203.localdomain modest_perlman[293322]: 167 167
Feb 20 09:44:08 np0005625203.localdomain podman[293307]: 2026-02-20 09:44:08.076208739 +0000 UTC m=+0.161121214 container died 281e7e14e6c412eb2d5915e5892471eedae074c85561c53cfe0f637836c5437f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=modest_perlman, com.redhat.component=rhceph-container, architecture=x86_64, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1770267347, GIT_BRANCH=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.42.2, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, ceph=True, version=7, RELEASE=main, name=rhceph, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-type=git)
Feb 20 09:44:08 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@3(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:44:08 np0005625203.localdomain podman[293327]: 2026-02-20 09:44:08.176670423 +0000 UTC m=+0.091625639 container remove 281e7e14e6c412eb2d5915e5892471eedae074c85561c53cfe0f637836c5437f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=modest_perlman, io.buildah.version=1.42.2, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2026-02-09T10:25:24Z, ceph=True, version=7, GIT_CLEAN=True, build-date=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, io.openshift.expose-services=, RELEASE=main, release=1770267347, CEPH_POINT_RELEASE=, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph)
Feb 20 09:44:08 np0005625203.localdomain systemd[1]: libpod-conmon-281e7e14e6c412eb2d5915e5892471eedae074c85561c53cfe0f637836c5437f.scope: Deactivated successfully.
Feb 20 09:44:08 np0005625203.localdomain sudo[293273]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:08 np0005625203.localdomain sudo[293343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:44:08 np0005625203.localdomain sudo[293343]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:08 np0005625203.localdomain sudo[293343]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:08 np0005625203.localdomain sudo[293361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:44:08 np0005625203.localdomain sudo[293361]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:08 np0005625203.localdomain ceph-mon[286888]: Reconfiguring crash.np0005625203 (monmap changed)...
Feb 20 09:44:08 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon crash.np0005625203 on np0005625203.localdomain
Feb 20 09:44:08 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:08 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:08 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Feb 20 09:44:08 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:44:08 np0005625203.localdomain podman[293396]: 
Feb 20 09:44:08 np0005625203.localdomain podman[293396]: 2026-02-20 09:44:08.89866127 +0000 UTC m=+0.074303786 container create 60dedc5c129daa673ccb58624e3a355818e55b3f12a85eb24484af976fb35755 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_jones, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.42.2, architecture=x86_64, name=rhceph, description=Red Hat Ceph Storage 7, ceph=True, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, build-date=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, GIT_BRANCH=main, release=1770267347, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git)
Feb 20 09:44:08 np0005625203.localdomain systemd[1]: Started libpod-conmon-60dedc5c129daa673ccb58624e3a355818e55b3f12a85eb24484af976fb35755.scope.
Feb 20 09:44:08 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:44:08 np0005625203.localdomain podman[293396]: 2026-02-20 09:44:08.964509891 +0000 UTC m=+0.140152417 container init 60dedc5c129daa673ccb58624e3a355818e55b3f12a85eb24484af976fb35755 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_jones, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, io.buildah.version=1.42.2, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1770267347, architecture=x86_64, com.redhat.component=rhceph-container, build-date=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main)
Feb 20 09:44:08 np0005625203.localdomain podman[293396]: 2026-02-20 09:44:08.871260012 +0000 UTC m=+0.046902528 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 09:44:08 np0005625203.localdomain podman[293396]: 2026-02-20 09:44:08.973541513 +0000 UTC m=+0.149184039 container start 60dedc5c129daa673ccb58624e3a355818e55b3f12a85eb24484af976fb35755 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_jones, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, RELEASE=main, name=rhceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, release=1770267347, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, org.opencontainers.image.created=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, architecture=x86_64, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, io.buildah.version=1.42.2, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main)
Feb 20 09:44:08 np0005625203.localdomain podman[293396]: 2026-02-20 09:44:08.973787601 +0000 UTC m=+0.149430127 container attach 60dedc5c129daa673ccb58624e3a355818e55b3f12a85eb24484af976fb35755 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_jones, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, release=1770267347, io.buildah.version=1.42.2, name=rhceph, version=7, ceph=True, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.created=2026-02-09T10:25:24Z, distribution-scope=public, GIT_CLEAN=True, architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, com.redhat.component=rhceph-container, build-date=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Feb 20 09:44:08 np0005625203.localdomain beautiful_jones[293411]: 167 167
Feb 20 09:44:08 np0005625203.localdomain systemd[1]: libpod-60dedc5c129daa673ccb58624e3a355818e55b3f12a85eb24484af976fb35755.scope: Deactivated successfully.
Feb 20 09:44:08 np0005625203.localdomain podman[293396]: 2026-02-20 09:44:08.980868593 +0000 UTC m=+0.156511159 container died 60dedc5c129daa673ccb58624e3a355818e55b3f12a85eb24484af976fb35755 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_jones, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, version=7, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_BRANCH=main, ceph=True, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1770267347, io.buildah.version=1.42.2, io.openshift.expose-services=, com.redhat.component=rhceph-container, build-date=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 09:44:08 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-ba1fcf284ce98be33bb6f59e3704eae1b22327068db584055cb4edd238214a38-merged.mount: Deactivated successfully.
Feb 20 09:44:09 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-a9d27f807cfb6d0c4e54a2a24a7960df894546ef374b8b1a61b04ae928ccae14-merged.mount: Deactivated successfully.
Feb 20 09:44:09 np0005625203.localdomain podman[293416]: 2026-02-20 09:44:09.07309241 +0000 UTC m=+0.083338850 container remove 60dedc5c129daa673ccb58624e3a355818e55b3f12a85eb24484af976fb35755 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_jones, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-09T10:25:24Z, release=1770267347, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, architecture=x86_64, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public)
Feb 20 09:44:09 np0005625203.localdomain systemd[1]: libpod-conmon-60dedc5c129daa673ccb58624e3a355818e55b3f12a85eb24484af976fb35755.scope: Deactivated successfully.
Feb 20 09:44:09 np0005625203.localdomain sudo[293361]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:09 np0005625203.localdomain sudo[293439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:44:09 np0005625203.localdomain sudo[293439]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:09 np0005625203.localdomain sudo[293439]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:09 np0005625203.localdomain sudo[293457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:44:09 np0005625203.localdomain sudo[293457]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:09 np0005625203.localdomain ceph-mon[286888]: pgmap v10: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s
Feb 20 09:44:09 np0005625203.localdomain ceph-mon[286888]: Reconfiguring osd.1 (monmap changed)...
Feb 20 09:44:09 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon osd.1 on np0005625203.localdomain
Feb 20 09:44:09 np0005625203.localdomain ceph-mon[286888]: from='client.27331 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005625200", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 20 09:44:09 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:09 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:09 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:09 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:09 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Feb 20 09:44:09 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:44:09 np0005625203.localdomain podman[293492]: 
Feb 20 09:44:09 np0005625203.localdomain podman[293492]: 2026-02-20 09:44:09.962849997 +0000 UTC m=+0.081583914 container create 27afadd31eadb56189e9714e6c150b5a0ebf62cdfa6e9466bef2ecec5131ed90 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_sutherland, ceph=True, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, name=rhceph, version=7, vcs-type=git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, build-date=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1770267347, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=)
Feb 20 09:44:09 np0005625203.localdomain systemd[1]: Started libpod-conmon-27afadd31eadb56189e9714e6c150b5a0ebf62cdfa6e9466bef2ecec5131ed90.scope.
Feb 20 09:44:10 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:44:10 np0005625203.localdomain podman[293492]: 2026-02-20 09:44:10.022110502 +0000 UTC m=+0.140844419 container init 27afadd31eadb56189e9714e6c150b5a0ebf62cdfa6e9466bef2ecec5131ed90 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_sutherland, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, architecture=x86_64, io.openshift.expose-services=, GIT_CLEAN=True, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, release=1770267347, distribution-scope=public, org.opencontainers.image.created=2026-02-09T10:25:24Z, build-date=2026-02-09T10:25:24Z)
Feb 20 09:44:10 np0005625203.localdomain podman[293492]: 2026-02-20 09:44:10.028533103 +0000 UTC m=+0.147267020 container start 27afadd31eadb56189e9714e6c150b5a0ebf62cdfa6e9466bef2ecec5131ed90 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_sutherland, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, description=Red Hat Ceph Storage 7, architecture=x86_64, ceph=True, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, release=1770267347, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Feb 20 09:44:10 np0005625203.localdomain podman[293492]: 2026-02-20 09:44:09.930136043 +0000 UTC m=+0.048870010 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 09:44:10 np0005625203.localdomain flamboyant_sutherland[293507]: 167 167
Feb 20 09:44:10 np0005625203.localdomain podman[293492]: 2026-02-20 09:44:10.028949476 +0000 UTC m=+0.147683393 container attach 27afadd31eadb56189e9714e6c150b5a0ebf62cdfa6e9466bef2ecec5131ed90 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_sutherland, CEPH_POINT_RELEASE=, io.buildah.version=1.42.2, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, name=rhceph, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, ceph=True, description=Red Hat Ceph Storage 7, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, GIT_BRANCH=main, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-09T10:25:24Z)
Feb 20 09:44:10 np0005625203.localdomain systemd[1]: libpod-27afadd31eadb56189e9714e6c150b5a0ebf62cdfa6e9466bef2ecec5131ed90.scope: Deactivated successfully.
Feb 20 09:44:10 np0005625203.localdomain podman[293492]: 2026-02-20 09:44:10.03644106 +0000 UTC m=+0.155175017 container died 27afadd31eadb56189e9714e6c150b5a0ebf62cdfa6e9466bef2ecec5131ed90 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_sutherland, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, com.redhat.component=rhceph-container, vcs-type=git, build-date=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, GIT_BRANCH=main, name=rhceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, distribution-scope=public, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7)
Feb 20 09:44:10 np0005625203.localdomain podman[293512]: 2026-02-20 09:44:10.129075419 +0000 UTC m=+0.088899412 container remove 27afadd31eadb56189e9714e6c150b5a0ebf62cdfa6e9466bef2ecec5131ed90 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=flamboyant_sutherland, version=7, build-date=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1770267347, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_CLEAN=True, vcs-type=git, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.buildah.version=1.42.2, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 09:44:10 np0005625203.localdomain systemd[1]: libpod-conmon-27afadd31eadb56189e9714e6c150b5a0ebf62cdfa6e9466bef2ecec5131ed90.scope: Deactivated successfully.
Feb 20 09:44:10 np0005625203.localdomain ceph-mgr[285471]: ms_deliver_dispatch: unhandled message 0x557a2fb01080 mon_map magic: 0 from mon.0 v2:172.18.0.105:3300/0
Feb 20 09:44:10 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@3(peon) e10  my rank is now 2 (was 3)
Feb 20 09:44:10 np0005625203.localdomain ceph-mon[286888]: log_channel(cluster) log [INF] : mon.np0005625203 calling monitor election
Feb 20 09:44:10 np0005625203.localdomain ceph-mon[286888]: paxos.2).electionLogic(38) init, last seen epoch 38
Feb 20 09:44:10 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@2(electing) e10 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 20 09:44:10 np0005625203.localdomain sudo[293457]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:11 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-1b50c4777442353dcc6b9559b882916c03e07382ce56eacafa0c58992db8e5cc-merged.mount: Deactivated successfully.
Feb 20 09:44:12 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@2(electing) e10 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 20 09:44:12 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@2(electing) e10 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 20 09:44:12 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@2(peon) e10 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 20 09:44:12 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon osd.4 on np0005625203.localdomain
Feb 20 09:44:12 np0005625203.localdomain ceph-mon[286888]: pgmap v11: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s
Feb 20 09:44:12 np0005625203.localdomain ceph-mon[286888]: from='client.27341 -' entity='client.admin' cmd=[{"prefix": "orch daemon rm", "names": ["mon.np0005625200"], "force": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 20 09:44:12 np0005625203.localdomain ceph-mon[286888]: Remove daemons mon.np0005625200
Feb 20 09:44:12 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "quorum_status"} : dispatch
Feb 20 09:44:12 np0005625203.localdomain ceph-mon[286888]: Safe to remove mon.np0005625200: new quorum should be ['np0005625201', 'np0005625204', 'np0005625203', 'np0005625202'] (from ['np0005625201', 'np0005625204', 'np0005625203', 'np0005625202'])
Feb 20 09:44:12 np0005625203.localdomain ceph-mon[286888]: Removing monitor np0005625200 from monmap...
Feb 20 09:44:12 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon rm", "name": "np0005625200"} : dispatch
Feb 20 09:44:12 np0005625203.localdomain ceph-mon[286888]: Removing daemon mon.np0005625200 from np0005625200.localdomain -- ports []
Feb 20 09:44:12 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203 calling monitor election
Feb 20 09:44:12 np0005625203.localdomain ceph-mon[286888]: mon.np0005625202 calling monitor election
Feb 20 09:44:12 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625201"} : dispatch
Feb 20 09:44:12 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625202"} : dispatch
Feb 20 09:44:12 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:44:12 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625204"} : dispatch
Feb 20 09:44:12 np0005625203.localdomain ceph-mon[286888]: mon.np0005625204 calling monitor election
Feb 20 09:44:12 np0005625203.localdomain ceph-mon[286888]: mon.np0005625201 calling monitor election
Feb 20 09:44:12 np0005625203.localdomain ceph-mon[286888]: mon.np0005625201 is new leader, mons np0005625201,np0005625204,np0005625203,np0005625202 in quorum (ranks 0,1,2,3)
Feb 20 09:44:12 np0005625203.localdomain ceph-mon[286888]: monmap epoch 10
Feb 20 09:44:12 np0005625203.localdomain ceph-mon[286888]: fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:44:12 np0005625203.localdomain ceph-mon[286888]: last_changed 2026-02-20T09:44:10.215299+0000
Feb 20 09:44:12 np0005625203.localdomain ceph-mon[286888]: created 2026-02-20T07:36:51.191305+0000
Feb 20 09:44:12 np0005625203.localdomain ceph-mon[286888]: min_mon_release 18 (reef)
Feb 20 09:44:12 np0005625203.localdomain ceph-mon[286888]: election_strategy: 1
Feb 20 09:44:12 np0005625203.localdomain ceph-mon[286888]: 0: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005625201
Feb 20 09:44:12 np0005625203.localdomain ceph-mon[286888]: 1: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005625204
Feb 20 09:44:12 np0005625203.localdomain ceph-mon[286888]: 2: [v2:172.18.0.107:3300/0,v1:172.18.0.107:6789/0] mon.np0005625203
Feb 20 09:44:12 np0005625203.localdomain ceph-mon[286888]: 3: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005625202
Feb 20 09:44:12 np0005625203.localdomain ceph-mon[286888]: fsmap cephfs:1 {0=mds.np0005625203.zsrwgk=up:active} 2 up:standby
Feb 20 09:44:12 np0005625203.localdomain ceph-mon[286888]: osdmap e88: 6 total, 6 up, 6 in
Feb 20 09:44:12 np0005625203.localdomain ceph-mon[286888]: mgrmap e30: np0005625202.arwxwo(active, since 18s), standbys: np0005625203.lonygy, np0005625204.exgrzx, np0005625200.ypbkax, np0005625201.mtnyvu
Feb 20 09:44:12 np0005625203.localdomain ceph-mon[286888]: overall HEALTH_OK
Feb 20 09:44:12 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:12 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:12 np0005625203.localdomain sudo[293536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:44:12 np0005625203.localdomain sudo[293536]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:12 np0005625203.localdomain sudo[293536]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:12 np0005625203.localdomain sudo[293554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:44:12 np0005625203.localdomain sudo[293554]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:12 np0005625203.localdomain podman[293589]: 
Feb 20 09:44:12 np0005625203.localdomain podman[293589]: 2026-02-20 09:44:12.992851662 +0000 UTC m=+0.082356570 container create c2256f167350737a7b0242d9f7f47d3807fc8f1f064613e8769491fd0c01b7e4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_faraday, GIT_BRANCH=main, build-date=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.42.2, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, description=Red Hat Ceph Storage 7, version=7, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Feb 20 09:44:13 np0005625203.localdomain systemd[1]: Started libpod-conmon-c2256f167350737a7b0242d9f7f47d3807fc8f1f064613e8769491fd0c01b7e4.scope.
Feb 20 09:44:13 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:44:13 np0005625203.localdomain podman[293589]: 2026-02-20 09:44:12.959409655 +0000 UTC m=+0.048914603 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 09:44:13 np0005625203.localdomain podman[293589]: 2026-02-20 09:44:13.066444945 +0000 UTC m=+0.155949853 container init c2256f167350737a7b0242d9f7f47d3807fc8f1f064613e8769491fd0c01b7e4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_faraday, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1770267347, CEPH_POINT_RELEASE=, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, build-date=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, org.opencontainers.image.created=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, ceph=True, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.42.2, GIT_BRANCH=main, vcs-type=git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, description=Red Hat Ceph Storage 7)
Feb 20 09:44:13 np0005625203.localdomain podman[293589]: 2026-02-20 09:44:13.074967242 +0000 UTC m=+0.164472160 container start c2256f167350737a7b0242d9f7f47d3807fc8f1f064613e8769491fd0c01b7e4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_faraday, io.openshift.tags=rhceph ceph, architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2026-02-09T10:25:24Z, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, vcs-type=git, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, release=1770267347)
Feb 20 09:44:13 np0005625203.localdomain podman[293589]: 2026-02-20 09:44:13.075273011 +0000 UTC m=+0.164777919 container attach c2256f167350737a7b0242d9f7f47d3807fc8f1f064613e8769491fd0c01b7e4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_faraday, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, build-date=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, release=1770267347, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, version=7, vcs-type=git)
Feb 20 09:44:13 np0005625203.localdomain adoring_faraday[293605]: 167 167
Feb 20 09:44:13 np0005625203.localdomain podman[293589]: 2026-02-20 09:44:13.078573264 +0000 UTC m=+0.168078212 container died c2256f167350737a7b0242d9f7f47d3807fc8f1f064613e8769491fd0c01b7e4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_faraday, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, release=1770267347, io.openshift.tags=rhceph ceph, io.buildah.version=1.42.2, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., build-date=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, CEPH_POINT_RELEASE=, RELEASE=main, version=7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, com.redhat.component=rhceph-container)
Feb 20 09:44:13 np0005625203.localdomain systemd[1]: libpod-c2256f167350737a7b0242d9f7f47d3807fc8f1f064613e8769491fd0c01b7e4.scope: Deactivated successfully.
Feb 20 09:44:13 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@2(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:44:13 np0005625203.localdomain podman[293610]: 2026-02-20 09:44:13.17460097 +0000 UTC m=+0.082068260 container remove c2256f167350737a7b0242d9f7f47d3807fc8f1f064613e8769491fd0c01b7e4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=adoring_faraday, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, GIT_BRANCH=main, distribution-scope=public, GIT_CLEAN=True, RELEASE=main, io.openshift.tags=rhceph ceph, ceph=True, io.buildah.version=1.42.2, release=1770267347, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2026-02-09T10:25:24Z, name=rhceph)
Feb 20 09:44:13 np0005625203.localdomain systemd[1]: libpod-conmon-c2256f167350737a7b0242d9f7f47d3807fc8f1f064613e8769491fd0c01b7e4.scope: Deactivated successfully.
Feb 20 09:44:13 np0005625203.localdomain sudo[293554]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:13 np0005625203.localdomain ceph-mon[286888]: pgmap v12: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s
Feb 20 09:44:13 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:13 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:13 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625203.zsrwgk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 09:44:13 np0005625203.localdomain ceph-mon[286888]: Reconfiguring mds.mds.np0005625203.zsrwgk (monmap changed)...
Feb 20 09:44:13 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625203.zsrwgk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 09:44:13 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:44:13 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon mds.mds.np0005625203.zsrwgk on np0005625203.localdomain
Feb 20 09:44:13 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:13 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:13 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625203.lonygy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:44:13 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625203.lonygy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:44:13 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mgr services"} : dispatch
Feb 20 09:44:13 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:44:13 np0005625203.localdomain sudo[293627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:44:13 np0005625203.localdomain sudo[293627]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:13 np0005625203.localdomain sudo[293627]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:13 np0005625203.localdomain sudo[293645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:44:13 np0005625203.localdomain sudo[293645]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:13 np0005625203.localdomain sshd[293663]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:44:13 np0005625203.localdomain sshd[293663]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:44:13 np0005625203.localdomain podman[293681]: 
Feb 20 09:44:13 np0005625203.localdomain podman[293681]: 2026-02-20 09:44:13.922513649 +0000 UTC m=+0.076102874 container create b6c9fb56fefd565a97b64ea92e04f5ca52d9700765421fad86ae41f98bbce3fe (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_curran, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, build-date=2026-02-09T10:25:24Z, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, name=rhceph, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Feb 20 09:44:13 np0005625203.localdomain systemd[1]: Started libpod-conmon-b6c9fb56fefd565a97b64ea92e04f5ca52d9700765421fad86ae41f98bbce3fe.scope.
Feb 20 09:44:13 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:44:13 np0005625203.localdomain podman[293681]: 2026-02-20 09:44:13.98297776 +0000 UTC m=+0.136566985 container init b6c9fb56fefd565a97b64ea92e04f5ca52d9700765421fad86ae41f98bbce3fe (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_curran, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, CEPH_POINT_RELEASE=, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, GIT_CLEAN=True, name=rhceph, io.buildah.version=1.42.2, distribution-scope=public, build-date=2026-02-09T10:25:24Z, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64)
Feb 20 09:44:13 np0005625203.localdomain podman[293681]: 2026-02-20 09:44:13.990922389 +0000 UTC m=+0.144511624 container start b6c9fb56fefd565a97b64ea92e04f5ca52d9700765421fad86ae41f98bbce3fe (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_curran, distribution-scope=public, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.42.2, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, release=1770267347, io.openshift.tags=rhceph ceph, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, vcs-type=git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-09T10:25:24Z, build-date=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, GIT_CLEAN=True, vendor=Red Hat, Inc., ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=)
Feb 20 09:44:13 np0005625203.localdomain podman[293681]: 2026-02-20 09:44:13.991955531 +0000 UTC m=+0.145544756 container attach b6c9fb56fefd565a97b64ea92e04f5ca52d9700765421fad86ae41f98bbce3fe (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_curran, io.buildah.version=1.42.2, RELEASE=main, io.openshift.expose-services=, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, GIT_CLEAN=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, build-date=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, GIT_BRANCH=main, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2026-02-09T10:25:24Z)
Feb 20 09:44:13 np0005625203.localdomain sweet_curran[293696]: 167 167
Feb 20 09:44:13 np0005625203.localdomain podman[293681]: 2026-02-20 09:44:13.892577011 +0000 UTC m=+0.046166286 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 09:44:13 np0005625203.localdomain systemd[1]: libpod-b6c9fb56fefd565a97b64ea92e04f5ca52d9700765421fad86ae41f98bbce3fe.scope: Deactivated successfully.
Feb 20 09:44:13 np0005625203.localdomain podman[293681]: 2026-02-20 09:44:13.996444702 +0000 UTC m=+0.150033967 container died b6c9fb56fefd565a97b64ea92e04f5ca52d9700765421fad86ae41f98bbce3fe (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_curran, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, architecture=x86_64, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-09T10:25:24Z, version=7, description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, RELEASE=main, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git)
Feb 20 09:44:14 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-21e034c1a1085a1273951d4fbfac9cb48033668ed2012c6e1f66b7c44d4dc7b9-merged.mount: Deactivated successfully.
Feb 20 09:44:14 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-dda66fcd6ccb557068ab7f2e09f8e30dc4dd074f8e761e4ff56d779fd825132a-merged.mount: Deactivated successfully.
Feb 20 09:44:14 np0005625203.localdomain podman[293701]: 2026-02-20 09:44:14.094672356 +0000 UTC m=+0.088165390 container remove b6c9fb56fefd565a97b64ea92e04f5ca52d9700765421fad86ae41f98bbce3fe (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_curran, name=rhceph, distribution-scope=public, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, version=7, ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-type=git, io.buildah.version=1.42.2, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1770267347, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, build-date=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main)
Feb 20 09:44:14 np0005625203.localdomain systemd[1]: libpod-conmon-b6c9fb56fefd565a97b64ea92e04f5ca52d9700765421fad86ae41f98bbce3fe.scope: Deactivated successfully.
Feb 20 09:44:14 np0005625203.localdomain sudo[293645]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:14 np0005625203.localdomain sudo[293717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:44:14 np0005625203.localdomain sudo[293717]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:14 np0005625203.localdomain sudo[293717]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:14 np0005625203.localdomain sudo[293735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:44:14 np0005625203.localdomain sudo[293735]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:14 np0005625203.localdomain ceph-mon[286888]: Reconfiguring mgr.np0005625203.lonygy (monmap changed)...
Feb 20 09:44:14 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon mgr.np0005625203.lonygy on np0005625203.localdomain
Feb 20 09:44:14 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:14 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:14 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 20 09:44:14 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Feb 20 09:44:14 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:44:14 np0005625203.localdomain podman[293770]: 
Feb 20 09:44:14 np0005625203.localdomain podman[293770]: 2026-02-20 09:44:14.855305943 +0000 UTC m=+0.070916831 container create 7ba985fbcd18b4d17689adacb20c23fa7bc27801569988ee968765b914b90f00 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_moser, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., io.buildah.version=1.42.2, io.openshift.expose-services=, vcs-type=git, GIT_CLEAN=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, name=rhceph, architecture=x86_64, version=7, build-date=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Feb 20 09:44:14 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:44:14 np0005625203.localdomain systemd[1]: Started libpod-conmon-7ba985fbcd18b4d17689adacb20c23fa7bc27801569988ee968765b914b90f00.scope.
Feb 20 09:44:14 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:44:14 np0005625203.localdomain podman[293770]: 2026-02-20 09:44:14.826659036 +0000 UTC m=+0.042269954 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 09:44:14 np0005625203.localdomain podman[293770]: 2026-02-20 09:44:14.929801384 +0000 UTC m=+0.145412262 container init 7ba985fbcd18b4d17689adacb20c23fa7bc27801569988ee968765b914b90f00 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_moser, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.created=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, RELEASE=main, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, distribution-scope=public, io.openshift.tags=rhceph ceph, vcs-type=git, io.buildah.version=1.42.2, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., GIT_BRANCH=main, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, io.openshift.expose-services=, release=1770267347, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 09:44:14 np0005625203.localdomain podman[293770]: 2026-02-20 09:44:14.939710735 +0000 UTC m=+0.155321623 container start 7ba985fbcd18b4d17689adacb20c23fa7bc27801569988ee968765b914b90f00 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_moser, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, io.openshift.tags=rhceph ceph, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, build-date=2026-02-09T10:25:24Z, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, name=rhceph, distribution-scope=public, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-type=git, GIT_BRANCH=main, io.buildah.version=1.42.2, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7)
Feb 20 09:44:14 np0005625203.localdomain podman[293770]: 2026-02-20 09:44:14.940069026 +0000 UTC m=+0.155679914 container attach 7ba985fbcd18b4d17689adacb20c23fa7bc27801569988ee968765b914b90f00 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_moser, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, name=rhceph, release=1770267347, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., io.buildah.version=1.42.2, distribution-scope=public, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, com.redhat.component=rhceph-container, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2026-02-09T10:25:24Z)
Feb 20 09:44:14 np0005625203.localdomain gallant_moser[293787]: 167 167
Feb 20 09:44:14 np0005625203.localdomain systemd[1]: libpod-7ba985fbcd18b4d17689adacb20c23fa7bc27801569988ee968765b914b90f00.scope: Deactivated successfully.
Feb 20 09:44:14 np0005625203.localdomain podman[293770]: 2026-02-20 09:44:14.94243968 +0000 UTC m=+0.158050578 container died 7ba985fbcd18b4d17689adacb20c23fa7bc27801569988ee968765b914b90f00 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_moser, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_BRANCH=main, io.buildah.version=1.42.2, com.redhat.component=rhceph-container, GIT_CLEAN=True, name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, ceph=True)
Feb 20 09:44:15 np0005625203.localdomain podman[293786]: 2026-02-20 09:44:15.029109213 +0000 UTC m=+0.132996444 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent)
Feb 20 09:44:15 np0005625203.localdomain podman[293786]: 2026-02-20 09:44:15.062792607 +0000 UTC m=+0.166679838 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 20 09:44:15 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:44:15 np0005625203.localdomain podman[293803]: 2026-02-20 09:44:15.156410247 +0000 UTC m=+0.201166337 container remove 7ba985fbcd18b4d17689adacb20c23fa7bc27801569988ee968765b914b90f00 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_moser, RELEASE=main, CEPH_POINT_RELEASE=, io.openshift.expose-services=, name=rhceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.42.2, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, architecture=x86_64, build-date=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, release=1770267347)
Feb 20 09:44:15 np0005625203.localdomain systemd[1]: libpod-conmon-7ba985fbcd18b4d17689adacb20c23fa7bc27801569988ee968765b914b90f00.scope: Deactivated successfully.
Feb 20 09:44:15 np0005625203.localdomain sudo[293735]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:15 np0005625203.localdomain ceph-mon[286888]: pgmap v13: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:44:15 np0005625203.localdomain ceph-mon[286888]: Reconfiguring mon.np0005625203 (monmap changed)...
Feb 20 09:44:15 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon mon.np0005625203 on np0005625203.localdomain
Feb 20 09:44:15 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:15 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:15 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:15 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625204", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:44:15 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625204", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:44:15 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:44:15 np0005625203.localdomain sshd[293824]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:44:15 np0005625203.localdomain sshd[293824]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:44:15 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-6812d3ca966377a5eafffc37edd29ac8d9a671db746fd123293a777fbb0dec5a-merged.mount: Deactivated successfully.
Feb 20 09:44:16 np0005625203.localdomain ceph-mon[286888]: from='client.27452 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005625200.localdomain", "label": "mon", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 09:44:16 np0005625203.localdomain ceph-mon[286888]: Removed label mon from host np0005625200.localdomain
Feb 20 09:44:16 np0005625203.localdomain ceph-mon[286888]: Reconfiguring crash.np0005625204 (monmap changed)...
Feb 20 09:44:16 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon crash.np0005625204 on np0005625204.localdomain
Feb 20 09:44:16 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:16 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:16 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Feb 20 09:44:16 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:44:17 np0005625203.localdomain ceph-mon[286888]: pgmap v14: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:44:17 np0005625203.localdomain ceph-mon[286888]: Reconfiguring osd.0 (monmap changed)...
Feb 20 09:44:17 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon osd.0 on np0005625204.localdomain
Feb 20 09:44:17 np0005625203.localdomain ceph-mon[286888]: from='client.44283 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005625200.localdomain", "label": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 09:44:17 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:17 np0005625203.localdomain ceph-mon[286888]: Removed label mgr from host np0005625200.localdomain
Feb 20 09:44:17 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:17 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:17 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:17 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:17 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Feb 20 09:44:17 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:44:18 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@2(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:44:18 np0005625203.localdomain ceph-mon[286888]: Reconfiguring osd.3 (monmap changed)...
Feb 20 09:44:18 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon osd.3 on np0005625204.localdomain
Feb 20 09:44:18 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:18 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:18 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:18 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:18 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:18 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625204.wnsphl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 09:44:18 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625204.wnsphl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 09:44:18 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:44:19 np0005625203.localdomain ceph-mon[286888]: from='client.27464 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005625200.localdomain", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 09:44:19 np0005625203.localdomain ceph-mon[286888]: Removed label _admin from host np0005625200.localdomain
Feb 20 09:44:19 np0005625203.localdomain ceph-mon[286888]: pgmap v15: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:44:19 np0005625203.localdomain ceph-mon[286888]: Reconfiguring mds.mds.np0005625204.wnsphl (monmap changed)...
Feb 20 09:44:19 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon mds.mds.np0005625204.wnsphl on np0005625204.localdomain
Feb 20 09:44:19 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:19 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:19 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625204.exgrzx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:44:19 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625204.exgrzx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:44:19 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mgr services"} : dispatch
Feb 20 09:44:19 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:44:19 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:44:19 np0005625203.localdomain podman[293826]: 2026-02-20 09:44:19.785156578 +0000 UTC m=+0.090002487 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:44:19 np0005625203.localdomain podman[293826]: 2026-02-20 09:44:19.858349309 +0000 UTC m=+0.163195248 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller)
Feb 20 09:44:19 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:44:20 np0005625203.localdomain ceph-mon[286888]: Reconfiguring mgr.np0005625204.exgrzx (monmap changed)...
Feb 20 09:44:20 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon mgr.np0005625204.exgrzx on np0005625204.localdomain
Feb 20 09:44:20 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:20 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:20 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 20 09:44:20 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Feb 20 09:44:20 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:44:21 np0005625203.localdomain ceph-mon[286888]: Reconfiguring mon.np0005625204 (monmap changed)...
Feb 20 09:44:21 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon mon.np0005625204 on np0005625204.localdomain
Feb 20 09:44:21 np0005625203.localdomain ceph-mon[286888]: pgmap v16: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:44:21 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:21 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:22 np0005625203.localdomain sudo[293851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Feb 20 09:44:22 np0005625203.localdomain sudo[293851]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:22 np0005625203.localdomain sudo[293851]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:22 np0005625203.localdomain sudo[293869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph
Feb 20 09:44:22 np0005625203.localdomain sudo[293869]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:22 np0005625203.localdomain sudo[293869]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:22 np0005625203.localdomain sudo[293887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:44:22 np0005625203.localdomain sudo[293887]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:22 np0005625203.localdomain sudo[293887]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:22 np0005625203.localdomain sudo[293905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:44:22 np0005625203.localdomain sudo[293905]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:22 np0005625203.localdomain sudo[293905]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:23 np0005625203.localdomain sudo[293923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:44:23 np0005625203.localdomain sudo[293923]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:23 np0005625203.localdomain sudo[293923]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:23 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@2(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:44:23 np0005625203.localdomain sudo[293957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:44:23 np0005625203.localdomain sudo[293957]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:23 np0005625203.localdomain sudo[293957]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:23 np0005625203.localdomain sudo[293975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:44:23 np0005625203.localdomain sudo[293975]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:23 np0005625203.localdomain sudo[293975]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:23 np0005625203.localdomain sudo[293993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Feb 20 09:44:23 np0005625203.localdomain sudo[293993]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:23 np0005625203.localdomain sudo[293993]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:23 np0005625203.localdomain sudo[294011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config
Feb 20 09:44:23 np0005625203.localdomain sudo[294011]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:23 np0005625203.localdomain sudo[294011]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:23 np0005625203.localdomain sudo[294029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config
Feb 20 09:44:23 np0005625203.localdomain sudo[294029]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:23 np0005625203.localdomain sudo[294029]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:23 np0005625203.localdomain ceph-mon[286888]: pgmap v17: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:44:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:44:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 09:44:23 np0005625203.localdomain ceph-mon[286888]: Removing np0005625200.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:44:23 np0005625203.localdomain ceph-mon[286888]: Updating np0005625201.localdomain:/etc/ceph/ceph.conf
Feb 20 09:44:23 np0005625203.localdomain ceph-mon[286888]: Updating np0005625202.localdomain:/etc/ceph/ceph.conf
Feb 20 09:44:23 np0005625203.localdomain ceph-mon[286888]: Updating np0005625203.localdomain:/etc/ceph/ceph.conf
Feb 20 09:44:23 np0005625203.localdomain ceph-mon[286888]: Updating np0005625204.localdomain:/etc/ceph/ceph.conf
Feb 20 09:44:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:23 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:23 np0005625203.localdomain sudo[294047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:44:23 np0005625203.localdomain sudo[294047]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:23 np0005625203.localdomain sudo[294047]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:23 np0005625203.localdomain sudo[294065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:44:23 np0005625203.localdomain sudo[294065]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:23 np0005625203.localdomain sudo[294065]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:23 np0005625203.localdomain sudo[294083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:44:23 np0005625203.localdomain sudo[294083]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:23 np0005625203.localdomain sudo[294083]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:23 np0005625203.localdomain sudo[294117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:44:23 np0005625203.localdomain sudo[294117]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:23 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:44:23 np0005625203.localdomain sudo[294117]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:24 np0005625203.localdomain sudo[294141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:44:24 np0005625203.localdomain sudo[294141]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:24 np0005625203.localdomain sudo[294141]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:24 np0005625203.localdomain podman[294135]: 2026-02-20 09:44:24.048188974 +0000 UTC m=+0.086352054 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., container_name=openstack_network_exporter, version=9.7, distribution-scope=public, name=ubi9/ubi-minimal, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, build-date=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, config_id=openstack_network_exporter, io.buildah.version=1.33.7, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Feb 20 09:44:24 np0005625203.localdomain podman[294135]: 2026-02-20 09:44:24.067430926 +0000 UTC m=+0.105594006 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.7, vcs-type=git, maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z, name=ubi9/ubi-minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, container_name=openstack_network_exporter, config_id=openstack_network_exporter, release=1770267347, io.openshift.tags=minimal rhel9, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2026-02-05T04:57:10Z)
Feb 20 09:44:24 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:44:24 np0005625203.localdomain sudo[294172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:44:24 np0005625203.localdomain sudo[294172]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:24 np0005625203.localdomain sudo[294172]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:24 np0005625203.localdomain ceph-mon[286888]: Removing np0005625200.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 09:44:24 np0005625203.localdomain ceph-mon[286888]: Removing np0005625200.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 09:44:24 np0005625203.localdomain ceph-mon[286888]: Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:44:24 np0005625203.localdomain ceph-mon[286888]: Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:44:24 np0005625203.localdomain ceph-mon[286888]: Updating np0005625201.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:44:24 np0005625203.localdomain ceph-mon[286888]: Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:44:24 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:24 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:24 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:24 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:24 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:24 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:24 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:24 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:24 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:25 np0005625203.localdomain ceph-mon[286888]: pgmap v18: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:44:25 np0005625203.localdomain ceph-mon[286888]: Removing daemon mgr.np0005625200.ypbkax from np0005625200.localdomain -- ports [8765]
Feb 20 09:44:25 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:44:25 np0005625203.localdomain podman[294191]: 2026-02-20 09:44:25.776576339 +0000 UTC m=+0.090198494 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_compute)
Feb 20 09:44:25 np0005625203.localdomain podman[294191]: 2026-02-20 09:44:25.817347426 +0000 UTC m=+0.130969581 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 20 09:44:25 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:44:27 np0005625203.localdomain sudo[294209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:44:27 np0005625203.localdomain sudo[294209]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:27 np0005625203.localdomain sudo[294209]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:27 np0005625203.localdomain ceph-mon[286888]: pgmap v19: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:44:27 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "mgr.np0005625200.ypbkax"} : dispatch
Feb 20 09:44:27 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "mgr.np0005625200.ypbkax"} : dispatch
Feb 20 09:44:27 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "mgr.np0005625200.ypbkax"}]': finished
Feb 20 09:44:27 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:27 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:27 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 09:44:28 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@2(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:44:28 np0005625203.localdomain ceph-mon[286888]: Removing key for mgr.np0005625200.ypbkax
Feb 20 09:44:28 np0005625203.localdomain podman[240359]: time="2026-02-20T09:44:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:44:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:44:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154068 "" "Go-http-client/1.1"
Feb 20 09:44:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:44:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17780 "" "Go-http-client/1.1"
Feb 20 09:44:29 np0005625203.localdomain sudo[294227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:44:29 np0005625203.localdomain sudo[294227]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:29 np0005625203.localdomain sudo[294227]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:29 np0005625203.localdomain ceph-mon[286888]: pgmap v20: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:44:29 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:29 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:29 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:44:29 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 09:44:29 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:29 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 09:44:29 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:29 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625200", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:44:29 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625200", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:44:29 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:44:29 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:29 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:30 np0005625203.localdomain ceph-mon[286888]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0.
Feb 20 09:44:30 np0005625203.localdomain ceph-mon[286888]: rocksdb: (Original Log Time 2026/02/20-09:44:30.047935) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 20 09:44:30 np0005625203.localdomain ceph-mon[286888]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22
Feb 20 09:44:30 np0005625203.localdomain ceph-mon[286888]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580670048017, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 1288, "num_deletes": 251, "total_data_size": 2103391, "memory_usage": 2147704, "flush_reason": "Manual Compaction"}
Feb 20 09:44:30 np0005625203.localdomain ceph-mon[286888]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started
Feb 20 09:44:30 np0005625203.localdomain ceph-mon[286888]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580670059715, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 1203162, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15919, "largest_seqno": 17202, "table_properties": {"data_size": 1197410, "index_size": 2903, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 15654, "raw_average_key_size": 22, "raw_value_size": 1184691, "raw_average_value_size": 1692, "num_data_blocks": 125, "num_entries": 700, "num_filter_entries": 700, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580645, "oldest_key_time": 1771580645, "file_creation_time": 1771580670, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a18f0433-5302-412e-a730-8e4f9cc01661", "db_session_id": "IVJC5Q80ONGS9Z85L01D", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Feb 20 09:44:30 np0005625203.localdomain ceph-mon[286888]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 11829 microseconds, and 4574 cpu microseconds.
Feb 20 09:44:30 np0005625203.localdomain ceph-mon[286888]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 20 09:44:30 np0005625203.localdomain ceph-mon[286888]: rocksdb: (Original Log Time 2026/02/20-09:44:30.059770) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 1203162 bytes OK
Feb 20 09:44:30 np0005625203.localdomain ceph-mon[286888]: rocksdb: (Original Log Time 2026/02/20-09:44:30.059797) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started
Feb 20 09:44:30 np0005625203.localdomain ceph-mon[286888]: rocksdb: (Original Log Time 2026/02/20-09:44:30.061941) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done
Feb 20 09:44:30 np0005625203.localdomain ceph-mon[286888]: rocksdb: (Original Log Time 2026/02/20-09:44:30.061964) EVENT_LOG_v1 {"time_micros": 1771580670061958, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 20 09:44:30 np0005625203.localdomain ceph-mon[286888]: rocksdb: (Original Log Time 2026/02/20-09:44:30.061989) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 20 09:44:30 np0005625203.localdomain ceph-mon[286888]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 2096692, prev total WAL file size 2096692, number of live WAL files 2.
Feb 20 09:44:30 np0005625203.localdomain ceph-mon[286888]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 09:44:30 np0005625203.localdomain ceph-mon[286888]: rocksdb: (Original Log Time 2026/02/20-09:44:30.063315) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130323931' seq:72057594037927935, type:22 .. '7061786F73003130353433' seq:0, type:0; will stop at (end)
Feb 20 09:44:30 np0005625203.localdomain ceph-mon[286888]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 20 09:44:30 np0005625203.localdomain ceph-mon[286888]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(1174KB)], [21(16MB)]
Feb 20 09:44:30 np0005625203.localdomain ceph-mon[286888]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580670063411, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 18889144, "oldest_snapshot_seqno": -1}
Feb 20 09:44:30 np0005625203.localdomain ceph-mon[286888]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 10134 keys, 15189185 bytes, temperature: kUnknown
Feb 20 09:44:30 np0005625203.localdomain ceph-mon[286888]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580670135148, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 24, "file_size": 15189185, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15132632, "index_size": 30148, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25349, "raw_key_size": 272356, "raw_average_key_size": 26, "raw_value_size": 14960551, "raw_average_value_size": 1476, "num_data_blocks": 1137, "num_entries": 10134, "num_filter_entries": 10134, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580482, "oldest_key_time": 0, "file_creation_time": 1771580670, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a18f0433-5302-412e-a730-8e4f9cc01661", "db_session_id": "IVJC5Q80ONGS9Z85L01D", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}}
Feb 20 09:44:30 np0005625203.localdomain ceph-mon[286888]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 20 09:44:30 np0005625203.localdomain ceph-mon[286888]: rocksdb: (Original Log Time 2026/02/20-09:44:30.135686) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 15189185 bytes
Feb 20 09:44:30 np0005625203.localdomain ceph-mon[286888]: rocksdb: (Original Log Time 2026/02/20-09:44:30.137681) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 262.6 rd, 211.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 16.9 +0.0 blob) out(14.5 +0.0 blob), read-write-amplify(28.3) write-amplify(12.6) OK, records in: 10672, records dropped: 538 output_compression: NoCompression
Feb 20 09:44:30 np0005625203.localdomain ceph-mon[286888]: rocksdb: (Original Log Time 2026/02/20-09:44:30.137714) EVENT_LOG_v1 {"time_micros": 1771580670137697, "job": 10, "event": "compaction_finished", "compaction_time_micros": 71937, "compaction_time_cpu_micros": 40244, "output_level": 6, "num_output_files": 1, "total_output_size": 15189185, "num_input_records": 10672, "num_output_records": 10134, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 20 09:44:30 np0005625203.localdomain ceph-mon[286888]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 09:44:30 np0005625203.localdomain ceph-mon[286888]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580670138157, "job": 10, "event": "table_file_deletion", "file_number": 23}
Feb 20 09:44:30 np0005625203.localdomain ceph-mon[286888]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 09:44:30 np0005625203.localdomain ceph-mon[286888]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580670141847, "job": 10, "event": "table_file_deletion", "file_number": 21}
Feb 20 09:44:30 np0005625203.localdomain ceph-mon[286888]: rocksdb: (Original Log Time 2026/02/20-09:44:30.063172) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:44:30 np0005625203.localdomain ceph-mon[286888]: rocksdb: (Original Log Time 2026/02/20-09:44:30.141952) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:44:30 np0005625203.localdomain ceph-mon[286888]: rocksdb: (Original Log Time 2026/02/20-09:44:30.141959) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:44:30 np0005625203.localdomain ceph-mon[286888]: rocksdb: (Original Log Time 2026/02/20-09:44:30.141963) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:44:30 np0005625203.localdomain ceph-mon[286888]: rocksdb: (Original Log Time 2026/02/20-09:44:30.141965) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:44:30 np0005625203.localdomain ceph-mon[286888]: rocksdb: (Original Log Time 2026/02/20-09:44:30.141968) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:44:30 np0005625203.localdomain ceph-mon[286888]: Reconfiguring crash.np0005625200 (monmap changed)...
Feb 20 09:44:30 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon crash.np0005625200 on np0005625200.localdomain
Feb 20 09:44:30 np0005625203.localdomain ceph-mon[286888]: from='client.27472 -' entity='client.admin' cmd=[{"prefix": "orch host drain", "hostname": "np0005625200.localdomain", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 09:44:30 np0005625203.localdomain ceph-mon[286888]: Added label _no_schedule to host np0005625200.localdomain
Feb 20 09:44:30 np0005625203.localdomain ceph-mon[286888]: Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005625200.localdomain
Feb 20 09:44:30 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:30 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:30 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 20 09:44:30 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Feb 20 09:44:30 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:44:31 np0005625203.localdomain sshd[294245]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:44:31 np0005625203.localdomain ceph-mon[286888]: pgmap v21: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:44:31 np0005625203.localdomain ceph-mon[286888]: Reconfiguring mon.np0005625201 (monmap changed)...
Feb 20 09:44:31 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon mon.np0005625201 on np0005625201.localdomain
Feb 20 09:44:31 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:31 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:31 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625201.mtnyvu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:44:31 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625201.mtnyvu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:44:31 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mgr services"} : dispatch
Feb 20 09:44:31 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:44:31 np0005625203.localdomain sshd[294245]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:44:32 np0005625203.localdomain ceph-mon[286888]: from='client.27484 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "host_pattern": "np0005625200.localdomain", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 20 09:44:32 np0005625203.localdomain ceph-mon[286888]: Reconfiguring mgr.np0005625201.mtnyvu (monmap changed)...
Feb 20 09:44:32 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon mgr.np0005625201.mtnyvu on np0005625201.localdomain
Feb 20 09:44:32 np0005625203.localdomain ceph-mon[286888]: pgmap v22: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:44:32 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:32 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:32 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625201", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:44:32 np0005625203.localdomain ceph-mon[286888]: Reconfiguring crash.np0005625201 (monmap changed)...
Feb 20 09:44:32 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625201", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:44:32 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:44:32 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon crash.np0005625201 on np0005625201.localdomain
Feb 20 09:44:32 np0005625203.localdomain ceph-mon[286888]: from='client.34398 -' entity='client.admin' cmd=[{"prefix": "orch host rm", "hostname": "np0005625200.localdomain", "force": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 20 09:44:32 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:32 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005625200.localdomain"} : dispatch
Feb 20 09:44:32 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005625200.localdomain"} : dispatch
Feb 20 09:44:32 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005625200.localdomain"}]': finished
Feb 20 09:44:32 np0005625203.localdomain ceph-mon[286888]: Removed host np0005625200.localdomain
Feb 20 09:44:33 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@2(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:44:34 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:34 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:34 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625202", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:44:34 np0005625203.localdomain ceph-mon[286888]: Reconfiguring crash.np0005625202 (monmap changed)...
Feb 20 09:44:34 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625202", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:44:34 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:44:34 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon crash.np0005625202 on np0005625202.localdomain
Feb 20 09:44:34 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:34 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:34 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Feb 20 09:44:34 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:44:34 np0005625203.localdomain sshd[294247]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:44:34 np0005625203.localdomain sshd[294247]: Accepted publickey for tripleo-admin from 192.168.122.11 port 45470 ssh2: RSA SHA256:uaxCfdsB9dLyL4znjsLCPDTT6O+6xZzjHRjczJFOA6k
Feb 20 09:44:34 np0005625203.localdomain systemd-logind[759]: New session 67 of user tripleo-admin.
Feb 20 09:44:34 np0005625203.localdomain systemd[1]: Created slice User Slice of UID 1003.
Feb 20 09:44:34 np0005625203.localdomain systemd[1]: Starting User Runtime Directory /run/user/1003...
Feb 20 09:44:34 np0005625203.localdomain systemd[1]: Finished User Runtime Directory /run/user/1003.
Feb 20 09:44:34 np0005625203.localdomain systemd[1]: Starting User Manager for UID 1003...
Feb 20 09:44:34 np0005625203.localdomain systemd[294251]: pam_unix(systemd-user:session): session opened for user tripleo-admin(uid=1003) by (uid=0)
Feb 20 09:44:34 np0005625203.localdomain systemd[294251]: Queued start job for default target Main User Target.
Feb 20 09:44:34 np0005625203.localdomain systemd[294251]: Created slice User Application Slice.
Feb 20 09:44:34 np0005625203.localdomain systemd[294251]: Started Mark boot as successful after the user session has run 2 minutes.
Feb 20 09:44:34 np0005625203.localdomain systemd[294251]: Started Daily Cleanup of User's Temporary Directories.
Feb 20 09:44:34 np0005625203.localdomain systemd[294251]: Reached target Paths.
Feb 20 09:44:34 np0005625203.localdomain systemd[294251]: Reached target Timers.
Feb 20 09:44:34 np0005625203.localdomain systemd[294251]: Starting D-Bus User Message Bus Socket...
Feb 20 09:44:34 np0005625203.localdomain systemd[294251]: Starting Create User's Volatile Files and Directories...
Feb 20 09:44:34 np0005625203.localdomain systemd[294251]: Listening on D-Bus User Message Bus Socket.
Feb 20 09:44:34 np0005625203.localdomain systemd[294251]: Finished Create User's Volatile Files and Directories.
Feb 20 09:44:34 np0005625203.localdomain systemd[294251]: Reached target Sockets.
Feb 20 09:44:34 np0005625203.localdomain systemd[294251]: Reached target Basic System.
Feb 20 09:44:34 np0005625203.localdomain systemd[294251]: Reached target Main User Target.
Feb 20 09:44:34 np0005625203.localdomain systemd[294251]: Startup finished in 161ms.
Feb 20 09:44:34 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:44:34 np0005625203.localdomain systemd[1]: Started User Manager for UID 1003.
Feb 20 09:44:34 np0005625203.localdomain systemd[1]: Started Session 67 of User tripleo-admin.
Feb 20 09:44:34 np0005625203.localdomain sshd[294247]: pam_unix(sshd:session): session opened for user tripleo-admin(uid=1003) by (uid=0)
Feb 20 09:44:35 np0005625203.localdomain podman[294266]: 2026-02-20 09:44:35.017463153 +0000 UTC m=+0.076318089 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 20 09:44:35 np0005625203.localdomain podman[294266]: 2026-02-20 09:44:35.024964157 +0000 UTC m=+0.083819103 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 09:44:35 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 09:44:35 np0005625203.localdomain ceph-mon[286888]: Reconfiguring osd.2 (monmap changed)...
Feb 20 09:44:35 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon osd.2 on np0005625202.localdomain
Feb 20 09:44:35 np0005625203.localdomain ceph-mon[286888]: pgmap v23: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:44:35 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:35 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:35 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Feb 20 09:44:35 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:44:35 np0005625203.localdomain sudo[294414]: tripleo-admin : TTY=pts/0 ; PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-tfxemtfvuidnhzyzetdopxhuhrvylxtv ; /usr/bin/python3 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1771580675.0350974-64081-9840633548684/AnsiballZ_lineinfile.py
Feb 20 09:44:35 np0005625203.localdomain sudo[294414]: pam_unix(sudo:session): session opened for user root(uid=0) by tripleo-admin(uid=1003)
Feb 20 09:44:35 np0005625203.localdomain python3[294416]: ansible-ansible.builtin.lineinfile Invoked with dest=/etc/os-net-config/tripleo_config.yaml insertafter=172.18.0 line=    - ip_netmask: 172.18.0.104/24 backup=True path=/etc/os-net-config/tripleo_config.yaml state=present backrefs=False create=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Feb 20 09:44:35 np0005625203.localdomain sudo[294414]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:36 np0005625203.localdomain ceph-mon[286888]: Reconfiguring osd.5 (monmap changed)...
Feb 20 09:44:36 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon osd.5 on np0005625202.localdomain
Feb 20 09:44:36 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:36 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:36 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625202.akhmop", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 09:44:36 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625202.akhmop", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 09:44:36 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:44:36 np0005625203.localdomain sudo[294560]: tripleo-admin : TTY=pts/0 ; PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-koxjvdpqgpylimgtqwsyxefgiueupayn ; /usr/bin/python3 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1771580676.0054948-64097-145756480429314/AnsiballZ_command.py
Feb 20 09:44:36 np0005625203.localdomain sudo[294560]: pam_unix(sudo:session): session opened for user root(uid=0) by tripleo-admin(uid=1003)
Feb 20 09:44:36 np0005625203.localdomain python3[294562]: ansible-ansible.legacy.command Invoked with _raw_params=ip a add 172.18.0.104/24 dev vlan21 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:44:36 np0005625203.localdomain sudo[294560]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:44:37 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 09:44:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:44:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:44:37 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 09:44:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:44:37 np0005625203.localdomain ceph-mon[286888]: Reconfiguring mds.mds.np0005625202.akhmop (monmap changed)...
Feb 20 09:44:37 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon mds.mds.np0005625202.akhmop on np0005625202.localdomain
Feb 20 09:44:37 np0005625203.localdomain ceph-mon[286888]: pgmap v24: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:44:37 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:37 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:37 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625202.arwxwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:44:37 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625202.arwxwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:44:37 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mgr services"} : dispatch
Feb 20 09:44:37 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:44:37 np0005625203.localdomain sudo[294705]: tripleo-admin : TTY=pts/0 ; PWD=/home/tripleo-admin ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-gzycympbnqlvllmbexjyockkdltgilsk ; /usr/bin/python3 /home/tripleo-admin/.ansible/tmp/ansible-tmp-1771580676.8026314-64108-61371810804443/AnsiballZ_command.py
Feb 20 09:44:37 np0005625203.localdomain sudo[294705]: pam_unix(sudo:session): session opened for user root(uid=0) by tripleo-admin(uid=1003)
Feb 20 09:44:37 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:44:37 np0005625203.localdomain podman[294708]: 2026-02-20 09:44:37.360367441 +0000 UTC m=+0.087011554 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 20 09:44:37 np0005625203.localdomain podman[294708]: 2026-02-20 09:44:37.405449742 +0000 UTC m=+0.132093865 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 20 09:44:37 np0005625203.localdomain python3[294707]: ansible-ansible.legacy.command Invoked with _raw_params=ping -W1 -c 3 172.18.0.104 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 09:44:37 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:44:38 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@2(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:44:38 np0005625203.localdomain ceph-mon[286888]: Reconfiguring mgr.np0005625202.arwxwo (monmap changed)...
Feb 20 09:44:38 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon mgr.np0005625202.arwxwo on np0005625202.localdomain
Feb 20 09:44:38 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:38 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:38 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 20 09:44:38 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Feb 20 09:44:38 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:44:38 np0005625203.localdomain sudo[294732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:44:38 np0005625203.localdomain sudo[294732]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:38 np0005625203.localdomain sudo[294732]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:38 np0005625203.localdomain sudo[294750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:44:38 np0005625203.localdomain sudo[294750]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:39 np0005625203.localdomain podman[294784]: 
Feb 20 09:44:39 np0005625203.localdomain podman[294784]: 2026-02-20 09:44:39.260357577 +0000 UTC m=+0.060555156 container create 355fbe63975ff049a4f85c46ec3787a588b3064840802ca46aaef1e087831c04 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_leavitt, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2026-02-09T10:25:24Z, ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, vcs-type=git, io.buildah.version=1.42.2, release=1770267347, GIT_CLEAN=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, io.openshift.tags=rhceph ceph, architecture=x86_64, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z)
Feb 20 09:44:39 np0005625203.localdomain systemd[1]: Started libpod-conmon-355fbe63975ff049a4f85c46ec3787a588b3064840802ca46aaef1e087831c04.scope.
Feb 20 09:44:39 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:44:39 np0005625203.localdomain podman[294784]: 2026-02-20 09:44:39.229042267 +0000 UTC m=+0.029239916 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 09:44:39 np0005625203.localdomain podman[294784]: 2026-02-20 09:44:39.33651088 +0000 UTC m=+0.136708459 container init 355fbe63975ff049a4f85c46ec3787a588b3064840802ca46aaef1e087831c04 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_leavitt, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, RELEASE=main, com.redhat.component=rhceph-container, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., ceph=True, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, build-date=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, GIT_CLEAN=True, name=rhceph, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, release=1770267347, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Feb 20 09:44:39 np0005625203.localdomain systemd[1]: tmp-crun.hT0g81.mount: Deactivated successfully.
Feb 20 09:44:39 np0005625203.localdomain podman[294784]: 2026-02-20 09:44:39.348601389 +0000 UTC m=+0.148798968 container start 355fbe63975ff049a4f85c46ec3787a588b3064840802ca46aaef1e087831c04 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_leavitt, io.buildah.version=1.42.2, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, vcs-type=git, version=7, vendor=Red Hat, Inc., ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=, build-date=2026-02-09T10:25:24Z, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, io.openshift.tags=rhceph ceph, distribution-scope=public, GIT_BRANCH=main, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, release=1770267347, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Feb 20 09:44:39 np0005625203.localdomain podman[294784]: 2026-02-20 09:44:39.348857157 +0000 UTC m=+0.149054786 container attach 355fbe63975ff049a4f85c46ec3787a588b3064840802ca46aaef1e087831c04 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_leavitt, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, vendor=Red Hat, Inc., build-date=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, release=1770267347, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, distribution-scope=public, ceph=True, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z)
Feb 20 09:44:39 np0005625203.localdomain affectionate_leavitt[294799]: 167 167
Feb 20 09:44:39 np0005625203.localdomain podman[294784]: 2026-02-20 09:44:39.352242383 +0000 UTC m=+0.152439972 container died 355fbe63975ff049a4f85c46ec3787a588b3064840802ca46aaef1e087831c04 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_leavitt, release=1770267347, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, GIT_CLEAN=True, build-date=2026-02-09T10:25:24Z, distribution-scope=public, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, CEPH_POINT_RELEASE=, ceph=True)
Feb 20 09:44:39 np0005625203.localdomain systemd[1]: libpod-355fbe63975ff049a4f85c46ec3787a588b3064840802ca46aaef1e087831c04.scope: Deactivated successfully.
Feb 20 09:44:39 np0005625203.localdomain podman[294804]: 2026-02-20 09:44:39.448165755 +0000 UTC m=+0.082494732 container remove 355fbe63975ff049a4f85c46ec3787a588b3064840802ca46aaef1e087831c04 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_leavitt, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, ceph=True, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, io.openshift.expose-services=, release=1770267347, version=7, io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, architecture=x86_64, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., GIT_CLEAN=True)
Feb 20 09:44:39 np0005625203.localdomain systemd[1]: libpod-conmon-355fbe63975ff049a4f85c46ec3787a588b3064840802ca46aaef1e087831c04.scope: Deactivated successfully.
Feb 20 09:44:39 np0005625203.localdomain sudo[294750]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:39 np0005625203.localdomain sudo[294705]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:39 np0005625203.localdomain sudo[294826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:44:39 np0005625203.localdomain sudo[294826]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:39 np0005625203.localdomain sudo[294826]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:39 np0005625203.localdomain ceph-mon[286888]: Reconfiguring mon.np0005625202 (monmap changed)...
Feb 20 09:44:39 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon mon.np0005625202 on np0005625202.localdomain
Feb 20 09:44:39 np0005625203.localdomain ceph-mon[286888]: pgmap v25: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:44:39 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:39 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:39 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625203", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:44:39 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625203", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:44:39 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:44:39 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:39 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:39 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Feb 20 09:44:39 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:44:39 np0005625203.localdomain sudo[294855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:44:39 np0005625203.localdomain sudo[294855]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:40 np0005625203.localdomain podman[294890]: 
Feb 20 09:44:40 np0005625203.localdomain podman[294890]: 2026-02-20 09:44:40.177922816 +0000 UTC m=+0.077186048 container create 26d3d405ca8f9ed1b956e474acab0fd3a5b57d08a9dc084c0a56224043e5540e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_pike, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, GIT_CLEAN=True, io.buildah.version=1.42.2, build-date=2026-02-09T10:25:24Z, GIT_BRANCH=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, distribution-scope=public, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=)
Feb 20 09:44:40 np0005625203.localdomain systemd[1]: Started libpod-conmon-26d3d405ca8f9ed1b956e474acab0fd3a5b57d08a9dc084c0a56224043e5540e.scope.
Feb 20 09:44:40 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:44:40 np0005625203.localdomain podman[294890]: 2026-02-20 09:44:40.237901633 +0000 UTC m=+0.137164865 container init 26d3d405ca8f9ed1b956e474acab0fd3a5b57d08a9dc084c0a56224043e5540e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_pike, org.opencontainers.image.created=2026-02-09T10:25:24Z, ceph=True, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, GIT_BRANCH=main, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.buildah.version=1.42.2, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, vcs-type=git, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-09T10:25:24Z, release=1770267347)
Feb 20 09:44:40 np0005625203.localdomain podman[294890]: 2026-02-20 09:44:40.145309175 +0000 UTC m=+0.044572497 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 09:44:40 np0005625203.localdomain podman[294890]: 2026-02-20 09:44:40.246828492 +0000 UTC m=+0.146091724 container start 26d3d405ca8f9ed1b956e474acab0fd3a5b57d08a9dc084c0a56224043e5540e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_pike, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2026-02-09T10:25:24Z, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, name=rhceph, distribution-scope=public, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True)
Feb 20 09:44:40 np0005625203.localdomain podman[294890]: 2026-02-20 09:44:40.247210595 +0000 UTC m=+0.146473847 container attach 26d3d405ca8f9ed1b956e474acab0fd3a5b57d08a9dc084c0a56224043e5540e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_pike, distribution-scope=public, GIT_CLEAN=True, ceph=True, io.openshift.tags=rhceph ceph, name=rhceph, GIT_BRANCH=main, version=7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, release=1770267347, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-09T10:25:24Z, RELEASE=main, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=)
Feb 20 09:44:40 np0005625203.localdomain kind_pike[294905]: 167 167
Feb 20 09:44:40 np0005625203.localdomain systemd[1]: libpod-26d3d405ca8f9ed1b956e474acab0fd3a5b57d08a9dc084c0a56224043e5540e.scope: Deactivated successfully.
Feb 20 09:44:40 np0005625203.localdomain podman[294890]: 2026-02-20 09:44:40.249912589 +0000 UTC m=+0.149175851 container died 26d3d405ca8f9ed1b956e474acab0fd3a5b57d08a9dc084c0a56224043e5540e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_pike, architecture=x86_64, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, GIT_CLEAN=True, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, distribution-scope=public, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.42.2, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True)
Feb 20 09:44:40 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-068b94cac4c569b4d3b4d81e4efa4ba9d59a00c86d4b621e66a7d78a4c04e31b-merged.mount: Deactivated successfully.
Feb 20 09:44:40 np0005625203.localdomain systemd[1]: tmp-crun.WxzxWa.mount: Deactivated successfully.
Feb 20 09:44:40 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-bd0b1aedb9c083bf7d03d41b44fea1ca5afdd2e083db3162b56f85329764a7b0-merged.mount: Deactivated successfully.
Feb 20 09:44:40 np0005625203.localdomain podman[294910]: 2026-02-20 09:44:40.363839294 +0000 UTC m=+0.101812517 container remove 26d3d405ca8f9ed1b956e474acab0fd3a5b57d08a9dc084c0a56224043e5540e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=kind_pike, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., release=1770267347, GIT_CLEAN=True, RELEASE=main, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=rhceph-container, build-date=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, ceph=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 09:44:40 np0005625203.localdomain systemd[1]: libpod-conmon-26d3d405ca8f9ed1b956e474acab0fd3a5b57d08a9dc084c0a56224043e5540e.scope: Deactivated successfully.
Feb 20 09:44:40 np0005625203.localdomain sudo[294855]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:40 np0005625203.localdomain ceph-mon[286888]: Reconfiguring crash.np0005625203 (monmap changed)...
Feb 20 09:44:40 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon crash.np0005625203 on np0005625203.localdomain
Feb 20 09:44:40 np0005625203.localdomain ceph-mon[286888]: Reconfiguring osd.1 (monmap changed)...
Feb 20 09:44:40 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon osd.1 on np0005625203.localdomain
Feb 20 09:44:40 np0005625203.localdomain ceph-mon[286888]: pgmap v26: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:44:40 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:40 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:40 np0005625203.localdomain ceph-mon[286888]: Reconfiguring osd.4 (monmap changed)...
Feb 20 09:44:40 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Feb 20 09:44:40 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:44:40 np0005625203.localdomain ceph-mon[286888]: Reconfiguring daemon osd.4 on np0005625203.localdomain
Feb 20 09:44:40 np0005625203.localdomain sudo[294933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:44:40 np0005625203.localdomain sudo[294933]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:40 np0005625203.localdomain sudo[294933]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:40 np0005625203.localdomain sudo[294951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:44:40 np0005625203.localdomain sudo[294951]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:41 np0005625203.localdomain podman[294985]: 
Feb 20 09:44:41 np0005625203.localdomain podman[294985]: 2026-02-20 09:44:41.197010742 +0000 UTC m=+0.072695646 container create ea8b2ab9cf7b153a2d1e376cfd723316e84d3998fcf87014d610359d3b5bb0b3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_visvesvaraya, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, release=1770267347, io.buildah.version=1.42.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, distribution-scope=public, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_BRANCH=main, ceph=True, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph)
Feb 20 09:44:41 np0005625203.localdomain systemd[1]: Started libpod-conmon-ea8b2ab9cf7b153a2d1e376cfd723316e84d3998fcf87014d610359d3b5bb0b3.scope.
Feb 20 09:44:41 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:44:41 np0005625203.localdomain podman[294985]: 2026-02-20 09:44:41.259891269 +0000 UTC m=+0.135576173 container init ea8b2ab9cf7b153a2d1e376cfd723316e84d3998fcf87014d610359d3b5bb0b3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_visvesvaraya, release=1770267347, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, distribution-scope=public, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.42.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, RELEASE=main, com.redhat.component=rhceph-container, name=rhceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, build-date=2026-02-09T10:25:24Z)
Feb 20 09:44:41 np0005625203.localdomain podman[294985]: 2026-02-20 09:44:41.168241291 +0000 UTC m=+0.043926255 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 09:44:41 np0005625203.localdomain podman[294985]: 2026-02-20 09:44:41.269905303 +0000 UTC m=+0.145590207 container start ea8b2ab9cf7b153a2d1e376cfd723316e84d3998fcf87014d610359d3b5bb0b3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_visvesvaraya, org.opencontainers.image.created=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1770267347, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, version=7, RELEASE=main, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, io.buildah.version=1.42.2, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., name=rhceph, io.openshift.expose-services=, GIT_BRANCH=main, architecture=x86_64, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-09T10:25:24Z, ceph=True)
Feb 20 09:44:41 np0005625203.localdomain podman[294985]: 2026-02-20 09:44:41.270194202 +0000 UTC m=+0.145879146 container attach ea8b2ab9cf7b153a2d1e376cfd723316e84d3998fcf87014d610359d3b5bb0b3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_visvesvaraya, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, vendor=Red Hat, Inc., name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.42.2, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-09T10:25:24Z, ceph=True, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, architecture=x86_64, distribution-scope=public, com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_CLEAN=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Feb 20 09:44:41 np0005625203.localdomain gallant_visvesvaraya[295000]: 167 167
Feb 20 09:44:41 np0005625203.localdomain systemd[1]: libpod-ea8b2ab9cf7b153a2d1e376cfd723316e84d3998fcf87014d610359d3b5bb0b3.scope: Deactivated successfully.
Feb 20 09:44:41 np0005625203.localdomain podman[294985]: 2026-02-20 09:44:41.27522123 +0000 UTC m=+0.150906154 container died ea8b2ab9cf7b153a2d1e376cfd723316e84d3998fcf87014d610359d3b5bb0b3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_visvesvaraya, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1770267347, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-09T10:25:24Z, GIT_BRANCH=main, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, version=7, vcs-type=git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Feb 20 09:44:41 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-343f5e05b02b8d75792518cb559000d56a86a1916fe1c8890e5f5ce181904636-merged.mount: Deactivated successfully.
Feb 20 09:44:41 np0005625203.localdomain podman[295005]: 2026-02-20 09:44:41.374351682 +0000 UTC m=+0.084890478 container remove ea8b2ab9cf7b153a2d1e376cfd723316e84d3998fcf87014d610359d3b5bb0b3 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=gallant_visvesvaraya, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-type=git, com.redhat.component=rhceph-container, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, release=1770267347, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.buildah.version=1.42.2, vendor=Red Hat, Inc., ceph=True, GIT_BRANCH=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, version=7)
Feb 20 09:44:41 np0005625203.localdomain systemd[1]: libpod-conmon-ea8b2ab9cf7b153a2d1e376cfd723316e84d3998fcf87014d610359d3b5bb0b3.scope: Deactivated successfully.
Feb 20 09:44:41 np0005625203.localdomain sudo[294951]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:41 np0005625203.localdomain ceph-mon[286888]: from='client.27496 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 09:44:41 np0005625203.localdomain ceph-mon[286888]: Saving service mon spec with placement label:mon
Feb 20 09:44:41 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:41 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:41 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:41 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:44:41 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 09:44:41 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:41 np0005625203.localdomain ceph-mon[286888]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 09:44:41 np0005625203.localdomain sudo[295028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:44:41 np0005625203.localdomain sudo[295028]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:41 np0005625203.localdomain sudo[295028]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:42 np0005625203.localdomain ceph-mon[286888]: from='client.27504 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005625203", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 20 09:44:42 np0005625203.localdomain ceph-mon[286888]: pgmap v27: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:44:43 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@2(peon).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:44:43 np0005625203.localdomain ceph-mgr[285471]: ms_deliver_dispatch: unhandled message 0x557a2fb01600 mon_map magic: 0 from mon.0 v2:172.18.0.105:3300/0
Feb 20 09:44:43 np0005625203.localdomain ceph-mon[286888]: mon.np0005625203@2(peon) e11  removed from monmap, suicide.
Feb 20 09:44:43 np0005625203.localdomain sudo[295046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:44:43 np0005625203.localdomain sudo[295046]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:43 np0005625203.localdomain sudo[295046]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:43 np0005625203.localdomain podman[295062]: 2026-02-20 09:44:43.463678325 +0000 UTC m=+0.068749083 container died 2e1f82df8ee48e4b2aedd8e09eb89a2296747266e4afbe8e1e657e76b746335f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mon-np0005625203, version=7, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, CEPH_POINT_RELEASE=, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.42.2, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.created=2026-02-09T10:25:24Z, distribution-scope=public, release=1770267347, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., io.openshift.expose-services=)
Feb 20 09:44:43 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-2c484d01ca7293d66e415b4d69c3f9078767a0d1b377dd3c014b20011e57a376-merged.mount: Deactivated successfully.
Feb 20 09:44:43 np0005625203.localdomain podman[295062]: 2026-02-20 09:44:43.500838568 +0000 UTC m=+0.105909276 container remove 2e1f82df8ee48e4b2aedd8e09eb89a2296747266e4afbe8e1e657e76b746335f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mon-np0005625203, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1770267347, vendor=Red Hat, Inc., distribution-scope=public, GIT_CLEAN=True, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, ceph=True, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, architecture=x86_64, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.42.2, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 09:44:43 np0005625203.localdomain sudo[295075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 rm-daemon --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8 --name mon.np0005625203 --force
Feb 20 09:44:43 np0005625203.localdomain sudo[295075]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:44 np0005625203.localdomain systemd[1]: ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8@mon.np0005625203.service: Deactivated successfully.
Feb 20 09:44:44 np0005625203.localdomain systemd[1]: Stopped Ceph mon.np0005625203 for a8557ee9-b55d-5519-942c-cf8f6172f1d8.
Feb 20 09:44:44 np0005625203.localdomain systemd[1]: ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8@mon.np0005625203.service: Consumed 7.613s CPU time.
Feb 20 09:44:44 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:44:44 np0005625203.localdomain systemd-sysv-generator[295227]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:44:44 np0005625203.localdomain systemd-rc-local-generator[295220]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:44:44 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:44:44 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 09:44:44 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:44:44 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:44:44 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:44:44 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 09:44:44 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:44:44 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:44:44 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:44:44 np0005625203.localdomain sudo[295075]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:45 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:44:45 np0005625203.localdomain podman[295233]: 2026-02-20 09:44:45.764762445 +0000 UTC m=+0.083561327 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Feb 20 09:44:45 np0005625203.localdomain podman[295233]: 2026-02-20 09:44:45.80136522 +0000 UTC m=+0.120164102 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_metadata_agent)
Feb 20 09:44:45 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:44:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:44:46.342 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:44:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:44:47.340 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:44:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:44:47.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:44:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:44:47.365 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:44:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:44:47.365 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:44:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:44:47.366 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:44:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:44:47.366 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Auditing locally available compute resources for np0005625203.localdomain (node: np0005625203.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 20 09:44:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:44:47.366 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:44:48 np0005625203.localdomain sudo[295262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Feb 20 09:44:48 np0005625203.localdomain sudo[295262]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:48 np0005625203.localdomain sudo[295262]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:48 np0005625203.localdomain sudo[295280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph
Feb 20 09:44:48 np0005625203.localdomain sudo[295280]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:48 np0005625203.localdomain sudo[295280]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:48 np0005625203.localdomain sudo[295298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:44:48 np0005625203.localdomain sudo[295298]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:48 np0005625203.localdomain sudo[295298]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:48 np0005625203.localdomain sudo[295316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:44:48 np0005625203.localdomain sudo[295316]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:48 np0005625203.localdomain sudo[295316]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:48 np0005625203.localdomain sudo[295334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:44:48 np0005625203.localdomain sudo[295334]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:48 np0005625203.localdomain sudo[295334]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:48 np0005625203.localdomain sudo[295375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:44:48 np0005625203.localdomain sudo[295375]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:48 np0005625203.localdomain sudo[295375]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:49 np0005625203.localdomain sudo[295395]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:44:49 np0005625203.localdomain sudo[295395]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:49 np0005625203.localdomain sudo[295395]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:49 np0005625203.localdomain sudo[295413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Feb 20 09:44:49 np0005625203.localdomain sudo[295413]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:49 np0005625203.localdomain sudo[295413]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:49 np0005625203.localdomain sudo[295431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config
Feb 20 09:44:49 np0005625203.localdomain sudo[295431]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:49 np0005625203.localdomain sudo[295431]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:44:49.255 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.889s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:44:49 np0005625203.localdomain sudo[295449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config
Feb 20 09:44:49 np0005625203.localdomain sudo[295449]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:49 np0005625203.localdomain sudo[295449]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:49 np0005625203.localdomain sudo[295469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:44:49 np0005625203.localdomain sudo[295469]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:49 np0005625203.localdomain sudo[295469]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:49 np0005625203.localdomain sudo[295487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:44:49 np0005625203.localdomain sudo[295487]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:49 np0005625203.localdomain sudo[295487]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:44:49.453 279640 WARNING nova.virt.libvirt.driver [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 20 09:44:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:44:49.454 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Hypervisor/Node resource view: name=np0005625203.localdomain free_ram=12466MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 20 09:44:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:44:49.454 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:44:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:44:49.455 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:44:49 np0005625203.localdomain sudo[295505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:44:49 np0005625203.localdomain sudo[295505]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:49 np0005625203.localdomain sudo[295505]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:44:49.533 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 20 09:44:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:44:49.534 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Final resource view: name=np0005625203.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 20 09:44:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:44:49.557 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:44:49 np0005625203.localdomain sudo[295539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:44:49 np0005625203.localdomain sudo[295539]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:49 np0005625203.localdomain sudo[295539]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:49 np0005625203.localdomain sudo[295558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:44:49 np0005625203.localdomain sudo[295558]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:49 np0005625203.localdomain sudo[295558]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:49 np0005625203.localdomain sudo[295595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:44:49 np0005625203.localdomain sudo[295595]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:49 np0005625203.localdomain sudo[295595]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:44:50.023 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:44:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:44:50.029 279640 DEBUG nova.compute.provider_tree [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Inventory has not changed in ProviderTree for provider: e5d5157a-2df2-4f51-b5fb-cd2da3a8584e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 20 09:44:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:44:50.044 279640 DEBUG nova.scheduler.client.report [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Inventory has not changed for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 20 09:44:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:44:50.046 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Compute_service record updated for np0005625203.localdomain:np0005625203.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 20 09:44:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:44:50.047 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.592s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:44:50 np0005625203.localdomain sudo[295615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:44:50 np0005625203.localdomain sudo[295615]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:50 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:44:50 np0005625203.localdomain sudo[295615]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:50 np0005625203.localdomain podman[295633]: 2026-02-20 09:44:50.223938929 +0000 UTC m=+0.074993569 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 20 09:44:50 np0005625203.localdomain podman[295633]: 2026-02-20 09:44:50.265334735 +0000 UTC m=+0.116389354 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 20 09:44:50 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:44:51 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:44:51.048 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:44:51 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:44:51.049 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:44:51 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:44:51.049 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:44:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:44:52.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:44:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:44:52.341 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 20 09:44:53 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:44:53.342 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:44:54 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:44:54.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:44:54 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:44:54.342 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 20 09:44:54 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:44:54.342 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 20 09:44:54 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:44:54.358 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 20 09:44:54 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:44:54 np0005625203.localdomain podman[295658]: 2026-02-20 09:44:54.758118112 +0000 UTC m=+0.075929568 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, release=1770267347, org.opencontainers.image.created=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.7, io.buildah.version=1.33.7, config_id=openstack_network_exporter, build-date=2026-02-05T04:57:10Z, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, managed_by=edpm_ansible, architecture=x86_64, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9/ubi-minimal, maintainer=Red Hat, Inc., vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Feb 20 09:44:54 np0005625203.localdomain podman[295658]: 2026-02-20 09:44:54.772848033 +0000 UTC m=+0.090659449 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, vcs-type=git, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.buildah.version=1.33.7, build-date=2026-02-05T04:57:10Z, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal, maintainer=Red Hat, Inc., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1770267347, architecture=x86_64, distribution-scope=public, com.redhat.component=ubi9-minimal-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Feb 20 09:44:54 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:44:56 np0005625203.localdomain sudo[295678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:44:56 np0005625203.localdomain sudo[295678]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:56 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:44:56 np0005625203.localdomain sudo[295678]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:56 np0005625203.localdomain sudo[295697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:44:56 np0005625203.localdomain sudo[295697]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:56 np0005625203.localdomain podman[295696]: 2026-02-20 09:44:56.556804198 +0000 UTC m=+0.078076604 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 20 09:44:56 np0005625203.localdomain podman[295696]: 2026-02-20 09:44:56.567559834 +0000 UTC m=+0.088832280 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:44:56 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:44:57 np0005625203.localdomain podman[295776]: 
Feb 20 09:44:57 np0005625203.localdomain podman[295776]: 2026-02-20 09:44:57.059729529 +0000 UTC m=+0.061631370 container create be212ab2e6a05a4296f7b0d23cb43ca9bbf64cd7141ecdf4d75fa4aa88907d5f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_golick, RELEASE=main, distribution-scope=public, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, release=1770267347, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, build-date=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, description=Red Hat Ceph Storage 7, version=7, io.buildah.version=1.42.2, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7)
Feb 20 09:44:57 np0005625203.localdomain systemd[1]: Started libpod-conmon-be212ab2e6a05a4296f7b0d23cb43ca9bbf64cd7141ecdf4d75fa4aa88907d5f.scope.
Feb 20 09:44:57 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:44:57 np0005625203.localdomain podman[295776]: 2026-02-20 09:44:57.029193563 +0000 UTC m=+0.031095444 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 09:44:57 np0005625203.localdomain podman[295776]: 2026-02-20 09:44:57.130788352 +0000 UTC m=+0.132690193 container init be212ab2e6a05a4296f7b0d23cb43ca9bbf64cd7141ecdf4d75fa4aa88907d5f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_golick, ceph=True, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, name=rhceph, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, RELEASE=main, release=1770267347, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph)
Feb 20 09:44:57 np0005625203.localdomain podman[295776]: 2026-02-20 09:44:57.143303564 +0000 UTC m=+0.145205415 container start be212ab2e6a05a4296f7b0d23cb43ca9bbf64cd7141ecdf4d75fa4aa88907d5f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_golick, com.redhat.component=rhceph-container, io.buildah.version=1.42.2, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1770267347, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., version=7, distribution-scope=public, GIT_BRANCH=main, RELEASE=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, architecture=x86_64, name=rhceph, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, build-date=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True)
Feb 20 09:44:57 np0005625203.localdomain podman[295776]: 2026-02-20 09:44:57.143584663 +0000 UTC m=+0.145486514 container attach be212ab2e6a05a4296f7b0d23cb43ca9bbf64cd7141ecdf4d75fa4aa88907d5f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_golick, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, architecture=x86_64, io.buildah.version=1.42.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, release=1770267347, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, CEPH_POINT_RELEASE=, build-date=2026-02-09T10:25:24Z, distribution-scope=public, RELEASE=main, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=, name=rhceph)
Feb 20 09:44:57 np0005625203.localdomain funny_golick[295791]: 167 167
Feb 20 09:44:57 np0005625203.localdomain systemd[1]: libpod-be212ab2e6a05a4296f7b0d23cb43ca9bbf64cd7141ecdf4d75fa4aa88907d5f.scope: Deactivated successfully.
Feb 20 09:44:57 np0005625203.localdomain podman[295776]: 2026-02-20 09:44:57.148306541 +0000 UTC m=+0.150208412 container died be212ab2e6a05a4296f7b0d23cb43ca9bbf64cd7141ecdf4d75fa4aa88907d5f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_golick, CEPH_POINT_RELEASE=, architecture=x86_64, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, vendor=Red Hat, Inc., release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, version=7, build-date=2026-02-09T10:25:24Z)
Feb 20 09:44:57 np0005625203.localdomain podman[295796]: 2026-02-20 09:44:57.231441033 +0000 UTC m=+0.075742572 container remove be212ab2e6a05a4296f7b0d23cb43ca9bbf64cd7141ecdf4d75fa4aa88907d5f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=funny_golick, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, description=Red Hat Ceph Storage 7, distribution-scope=public, name=rhceph, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, version=7, GIT_BRANCH=main, vendor=Red Hat, Inc., build-date=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 09:44:57 np0005625203.localdomain systemd[1]: libpod-conmon-be212ab2e6a05a4296f7b0d23cb43ca9bbf64cd7141ecdf4d75fa4aa88907d5f.scope: Deactivated successfully.
Feb 20 09:44:57 np0005625203.localdomain podman[295813]: 
Feb 20 09:44:57 np0005625203.localdomain podman[295813]: 2026-02-20 09:44:57.3131318 +0000 UTC m=+0.056585923 container create b0f3d7719e24a3f69c5240eef88102148600236617192704bc4f4008823d0fac (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=youthful_lamarr, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, vcs-type=git, distribution-scope=public, release=1770267347, architecture=x86_64, GIT_BRANCH=main, RELEASE=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_CLEAN=True, build-date=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container)
Feb 20 09:44:57 np0005625203.localdomain systemd[1]: Started libpod-conmon-b0f3d7719e24a3f69c5240eef88102148600236617192704bc4f4008823d0fac.scope.
Feb 20 09:44:57 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:44:57 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ec316eccd3807c4c8f02803eebd2e015e4deb0fe2c9f28124587c3e79b8ad3a/merged/tmp/config supports timestamps until 2038 (0x7fffffff)
Feb 20 09:44:57 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ec316eccd3807c4c8f02803eebd2e015e4deb0fe2c9f28124587c3e79b8ad3a/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Feb 20 09:44:57 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ec316eccd3807c4c8f02803eebd2e015e4deb0fe2c9f28124587c3e79b8ad3a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 20 09:44:57 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ec316eccd3807c4c8f02803eebd2e015e4deb0fe2c9f28124587c3e79b8ad3a/merged/var/lib/ceph/mon/ceph-np0005625203 supports timestamps until 2038 (0x7fffffff)
Feb 20 09:44:57 np0005625203.localdomain podman[295813]: 2026-02-20 09:44:57.364153807 +0000 UTC m=+0.107607930 container init b0f3d7719e24a3f69c5240eef88102148600236617192704bc4f4008823d0fac (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=youthful_lamarr, com.redhat.component=rhceph-container, version=7, release=1770267347, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, io.buildah.version=1.42.2, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, io.openshift.tags=rhceph ceph, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.created=2026-02-09T10:25:24Z, build-date=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, io.openshift.expose-services=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Feb 20 09:44:57 np0005625203.localdomain podman[295813]: 2026-02-20 09:44:57.369759752 +0000 UTC m=+0.113213885 container start b0f3d7719e24a3f69c5240eef88102148600236617192704bc4f4008823d0fac (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=youthful_lamarr, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, io.openshift.expose-services=, name=rhceph, build-date=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., GIT_BRANCH=main, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, com.redhat.component=rhceph-container, release=1770267347, io.buildah.version=1.42.2, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, ceph=True)
Feb 20 09:44:57 np0005625203.localdomain podman[295813]: 2026-02-20 09:44:57.369960148 +0000 UTC m=+0.113414281 container attach b0f3d7719e24a3f69c5240eef88102148600236617192704bc4f4008823d0fac (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=youthful_lamarr, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, version=7, io.k8s.description=Red Hat Ceph Storage 7, release=1770267347, GIT_BRANCH=main, distribution-scope=public, vcs-type=git, io.buildah.version=1.42.2, name=rhceph, build-date=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7)
Feb 20 09:44:57 np0005625203.localdomain podman[295813]: 2026-02-20 09:44:57.291473312 +0000 UTC m=+0.034927515 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 09:44:57 np0005625203.localdomain systemd[1]: libpod-b0f3d7719e24a3f69c5240eef88102148600236617192704bc4f4008823d0fac.scope: Deactivated successfully.
Feb 20 09:44:57 np0005625203.localdomain podman[295813]: 2026-02-20 09:44:57.472733874 +0000 UTC m=+0.216188077 container died b0f3d7719e24a3f69c5240eef88102148600236617192704bc4f4008823d0fac (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=youthful_lamarr, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, name=rhceph, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, release=1770267347, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, io.buildah.version=1.42.2, build-date=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Feb 20 09:44:57 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-b62d3465ebb1392b3ebabd4b7c9c1232590b08dd9db176eb8b2971a9953c2638-merged.mount: Deactivated successfully.
Feb 20 09:44:57 np0005625203.localdomain podman[295854]: 2026-02-20 09:44:57.556572808 +0000 UTC m=+0.075654018 container remove b0f3d7719e24a3f69c5240eef88102148600236617192704bc4f4008823d0fac (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=youthful_lamarr, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, GIT_BRANCH=main, CEPH_POINT_RELEASE=, name=rhceph, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2026-02-09T10:25:24Z, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, RELEASE=main, io.buildah.version=1.42.2, release=1770267347, io.openshift.expose-services=, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Feb 20 09:44:57 np0005625203.localdomain systemd[1]: libpod-conmon-b0f3d7719e24a3f69c5240eef88102148600236617192704bc4f4008823d0fac.scope: Deactivated successfully.
Feb 20 09:44:57 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:44:57 np0005625203.localdomain systemd-rc-local-generator[295907]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:44:57 np0005625203.localdomain systemd-sysv-generator[295910]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:44:57 np0005625203.localdomain sudo[295871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:44:57 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:44:57 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 09:44:57 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:44:57 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:44:57 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:44:57 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 09:44:57 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:44:57 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:44:57 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:44:57 np0005625203.localdomain sudo[295871]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:57 np0005625203.localdomain sudo[295871]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:58 np0005625203.localdomain systemd[1]: Reloading.
Feb 20 09:44:58 np0005625203.localdomain sudo[295927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:44:58 np0005625203.localdomain systemd-sysv-generator[295970]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Feb 20 09:44:58 np0005625203.localdomain systemd-rc-local-generator[295965]: /etc/rc.d/rc.local is not marked executable, skipping.
Feb 20 09:44:58 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtsecretd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:44:58 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtqemud.service:25: Failed to parse service type, ignoring: notify-reload
Feb 20 09:44:58 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtproxyd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:44:58 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnodedevd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:44:58 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/insights-client-boot.service:24: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 20 09:44:58 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtstoraged.service:20: Failed to parse service type, ignoring: notify-reload
Feb 20 09:44:58 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnwfilterd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:44:58 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtnetworkd.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:44:58 np0005625203.localdomain systemd[1]: /usr/lib/systemd/system/virtinterfaced.service:18: Failed to parse service type, ignoring: notify-reload
Feb 20 09:44:58 np0005625203.localdomain systemd[1]: Starting Ceph mon.np0005625203 for a8557ee9-b55d-5519-942c-cf8f6172f1d8...
Feb 20 09:44:58 np0005625203.localdomain sudo[295927]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:58 np0005625203.localdomain podman[296040]: 
Feb 20 09:44:58 np0005625203.localdomain podman[296040]: 2026-02-20 09:44:58.732556675 +0000 UTC m=+0.083047390 container create 1ca6fa81c11f37e53d2aac9133410246a5f7534cdcde45e5f0d2e2ebcf198e88 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mon-np0005625203, name=rhceph, RELEASE=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, architecture=x86_64, vendor=Red Hat, Inc., ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Feb 20 09:44:58 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a770f4c8f50544d88f9bddb5cb606d9e30ad85bf45962725f3b25a561445df99/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Feb 20 09:44:58 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a770f4c8f50544d88f9bddb5cb606d9e30ad85bf45962725f3b25a561445df99/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Feb 20 09:44:58 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a770f4c8f50544d88f9bddb5cb606d9e30ad85bf45962725f3b25a561445df99/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Feb 20 09:44:58 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a770f4c8f50544d88f9bddb5cb606d9e30ad85bf45962725f3b25a561445df99/merged/var/lib/ceph/mon/ceph-np0005625203 supports timestamps until 2038 (0x7fffffff)
Feb 20 09:44:58 np0005625203.localdomain podman[296040]: 2026-02-20 09:44:58.792081788 +0000 UTC m=+0.142572473 container init 1ca6fa81c11f37e53d2aac9133410246a5f7534cdcde45e5f0d2e2ebcf198e88 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mon-np0005625203, io.openshift.expose-services=, name=rhceph, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, description=Red Hat Ceph Storage 7, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, com.redhat.component=rhceph-container, io.buildah.version=1.42.2, GIT_CLEAN=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., release=1770267347, build-date=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers)
Feb 20 09:44:58 np0005625203.localdomain podman[296040]: 2026-02-20 09:44:58.802114082 +0000 UTC m=+0.152604757 container start 1ca6fa81c11f37e53d2aac9133410246a5f7534cdcde45e5f0d2e2ebcf198e88 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mon-np0005625203, build-date=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1770267347, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., GIT_CLEAN=True, description=Red Hat Ceph Storage 7, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, io.buildah.version=1.42.2, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=)
Feb 20 09:44:58 np0005625203.localdomain bash[296040]: 1ca6fa81c11f37e53d2aac9133410246a5f7534cdcde45e5f0d2e2ebcf198e88
Feb 20 09:44:58 np0005625203.localdomain podman[296040]: 2026-02-20 09:44:58.704312061 +0000 UTC m=+0.054802796 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 09:44:58 np0005625203.localdomain systemd[1]: Started Ceph mon.np0005625203 for a8557ee9-b55d-5519-942c-cf8f6172f1d8.
Feb 20 09:44:58 np0005625203.localdomain sudo[295697]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: set uid:gid to 167:167 (ceph:ceph)
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: ceph version 18.2.1-381.el9cp (984f410e2a30899deb131725765b62212b1621db) reef (stable), process ceph-mon, pid 2
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: pidfile_write: ignore empty --pid-file
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: load: jerasure load: lrc 
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: RocksDB version: 7.9.2
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: Git sha 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: Compile date 2026-02-06 00:00:00
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: DB SUMMARY
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: DB Session ID:  XSHOJ401GNN3F43CMPC3
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: CURRENT file:  CURRENT
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: IDENTITY file:  IDENTITY
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: SST files in /var/lib/ceph/mon/ceph-np0005625203/store.db dir, Total Num: 0, files: 
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-np0005625203/store.db: 000004.log size: 761 ; 
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                         Options.error_if_exists: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                       Options.create_if_missing: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                         Options.paranoid_checks: 1
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:             Options.flush_verify_memtable_count: 1
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                                     Options.env: 0x5619b9094a20
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                                      Options.fs: PosixFileSystem
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                                Options.info_log: 0x5619bae7ed20
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                Options.max_file_opening_threads: 16
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                              Options.statistics: (nil)
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                               Options.use_fsync: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                       Options.max_log_file_size: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                   Options.log_file_time_to_roll: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                       Options.keep_log_file_num: 1000
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                    Options.recycle_log_file_num: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                         Options.allow_fallocate: 1
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                        Options.allow_mmap_reads: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                       Options.allow_mmap_writes: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                        Options.use_direct_reads: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:          Options.create_missing_column_families: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                              Options.db_log_dir: 
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                                 Options.wal_dir: 
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                Options.table_cache_numshardbits: 6
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                         Options.WAL_ttl_seconds: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                       Options.WAL_size_limit_MB: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:             Options.manifest_preallocation_size: 4194304
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                     Options.is_fd_close_on_exec: 1
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                   Options.advise_random_on_open: 1
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                    Options.db_write_buffer_size: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                    Options.write_buffer_manager: 0x5619bae8f540
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:         Options.access_hint_on_compaction_start: 1
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                      Options.use_adaptive_mutex: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                            Options.rate_limiter: (nil)
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                       Options.wal_recovery_mode: 2
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                  Options.enable_thread_tracking: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                  Options.enable_pipelined_write: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                  Options.unordered_write: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:             Options.write_thread_max_yield_usec: 100
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                               Options.row_cache: None
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                              Options.wal_filter: None
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:             Options.avoid_flush_during_recovery: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:             Options.allow_ingest_behind: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:             Options.two_write_queues: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:             Options.manual_wal_flush: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:             Options.wal_compression: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:             Options.atomic_flush: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                 Options.persist_stats_to_disk: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                 Options.write_dbid_to_manifest: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                 Options.log_readahead_size: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                 Options.best_efforts_recovery: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:             Options.allow_data_in_errors: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:             Options.db_host_id: __hostname__
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:             Options.enforce_single_del_contracts: true
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:             Options.max_background_jobs: 2
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:             Options.max_background_compactions: -1
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:             Options.max_subcompactions: 1
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:             Options.delayed_write_rate : 16777216
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:             Options.max_total_wal_size: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                   Options.stats_dump_period_sec: 600
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                 Options.stats_persist_period_sec: 600
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                          Options.max_open_files: -1
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                          Options.bytes_per_sync: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                      Options.wal_bytes_per_sync: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                   Options.strict_bytes_per_sync: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:       Options.compaction_readahead_size: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                  Options.max_background_flushes: -1
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: Compression algorithms supported:
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:         kZSTD supported: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:         kXpressCompression supported: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:         kBZip2Compression supported: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:         kZSTDNotFinalCompression supported: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:         kLZ4Compression supported: 1
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:         kZlibCompression supported: 1
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:         kLZ4HCCompression supported: 1
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:         kSnappyCompression supported: 1
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: Fast CRC32 supported: Supported on x86
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: DMutex implementation: pthread_mutex_t
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-np0005625203/store.db/MANIFEST-000005
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:           Options.merge_operator: 
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:        Options.compaction_filter: None
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:        Options.compaction_filter_factory: None
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:  Options.sst_partitioner_factory: None
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:         Options.memtable_factory: SkipListFactory
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:            Options.table_factory: BlockBasedTable
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5619bae7e980)
                                                             cache_index_and_filter_blocks: 1
                                                             cache_index_and_filter_blocks_with_high_priority: 0
                                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                                             pin_top_level_index_and_filter: 1
                                                             index_type: 0
                                                             data_block_index_type: 0
                                                             index_shortening: 1
                                                             data_block_hash_table_util_ratio: 0.750000
                                                             checksum: 4
                                                             no_block_cache: 0
                                                             block_cache: 0x5619bae7b350
                                                             block_cache_name: BinnedLRUCache
                                                             block_cache_options:
                                                               capacity : 536870912
                                                               num_shard_bits : 4
                                                               strict_capacity_limit : 0
                                                               high_pri_pool_ratio: 0.000
                                                             block_cache_compressed: (nil)
                                                             persistent_cache: (nil)
                                                             block_size: 4096
                                                             block_size_deviation: 10
                                                             block_restart_interval: 16
                                                             index_block_restart_interval: 1
                                                             metadata_block_size: 4096
                                                             partition_filters: 0
                                                             use_delta_encoding: 1
                                                             filter_policy: bloomfilter
                                                             whole_key_filtering: 1
                                                             verify_compression: 0
                                                             read_amp_bytes_per_bit: 0
                                                             format_version: 5
                                                             enable_index_compression: 1
                                                             block_align: 0
                                                             max_auto_readahead_size: 262144
                                                             prepopulate_block_cache: 0
                                                             initial_auto_readahead_size: 8192
                                                             num_file_reads_for_auto_readahead: 2
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:        Options.write_buffer_size: 33554432
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:  Options.max_write_buffer_number: 2
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:          Options.compression: NoCompression
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                  Options.bottommost_compression: Disabled
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:       Options.prefix_extractor: nullptr
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:             Options.num_levels: 7
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:            Options.compression_opts.window_bits: -14
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                  Options.compression_opts.level: 32767
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:               Options.compression_opts.strategy: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:         Options.compression_opts.parallel_threads: 1
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                  Options.compression_opts.enabled: false
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:              Options.level0_stop_writes_trigger: 36
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                   Options.target_file_size_base: 67108864
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:             Options.target_file_size_multiplier: 1
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                        Options.arena_block_size: 1048576
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                Options.disable_auto_compactions: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                   Options.table_properties_collectors: 
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                   Options.inplace_update_support: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                 Options.inplace_update_num_locks: 10000
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:               Options.memtable_whole_key_filtering: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:   Options.memtable_huge_page_size: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                           Options.bloom_locality: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                    Options.max_successive_merges: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                Options.optimize_filters_for_hits: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                Options.paranoid_file_checks: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                Options.force_consistency_checks: 1
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                Options.report_bg_io_stats: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                               Options.ttl: 2592000
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:          Options.periodic_compaction_seconds: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:    Options.preserve_internal_time_seconds: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                       Options.enable_blob_files: false
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                           Options.min_blob_size: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                          Options.blob_file_size: 268435456
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                   Options.blob_compression_type: NoCompression
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:          Options.enable_blob_garbage_collection: false
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:          Options.blob_compaction_readahead_size: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb:                Options.blob_file_starting_level: 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-np0005625203/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d7091244-ec7b-4a4c-98bf-27480b1bb7f4
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580698871011, "job": 1, "event": "recovery_started", "wal_files": [4]}
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580698873818, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1887, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 773, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 651, "raw_average_value_size": 130, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580698, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d7091244-ec7b-4a4c-98bf-27480b1bb7f4", "db_session_id": "XSHOJ401GNN3F43CMPC3", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580698874023, "job": 1, "event": "recovery_finished"}
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5619baea2e00
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: DB pointer 0x5619baf98000
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203 does not exist in monmap, will attempt to join an existing cluster
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                           ** DB Stats **
                                                           Uptime(secs): 0.0 total, 0.0 interval
                                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                           
                                                           ** Compaction Stats [default] **
                                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                             L0      1/0    1.84 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                            Sum      1/0    1.84 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                           
                                                           ** Compaction Stats [default] **
                                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                           
                                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                           
                                                           Uptime(secs): 0.0 total, 0.0 interval
                                                           Flush(GB): cumulative 0.000, interval 0.000
                                                           AddFile(GB): cumulative 0.000, interval 0.000
                                                           AddFile(Total Files): cumulative 0, interval 0
                                                           AddFile(L0 Files): cumulative 0, interval 0
                                                           AddFile(Keys): cumulative 0, interval 0
                                                           Cumulative compaction: 0.00 GB write, 0.14 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                           Interval compaction: 0.00 GB write, 0.14 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                           Block cache BinnedLRUCache@0x5619bae7b350#2 capacity: 512.00 MB usage: 1.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.7e-05 secs_since: 0
                                                           Block cache entry stats(count,size,portion): DataBlock(1,0.95 KB,0.000181794%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                                           
                                                           ** File Read Latency Histogram By Level [default] **
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: using public_addr v2:172.18.0.104:0/0 -> [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0]
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: starting mon.np0005625203 rank -1 at public addrs [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] at bind addrs [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon_data /var/lib/ceph/mon/ceph-np0005625203 fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@-1(???) e0 preinit fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@-1(synchronizing) e11 sync_obtain_latest_monmap
Feb 20 09:44:58 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@-1(synchronizing) e11 sync_obtain_latest_monmap obtained monmap e11
Feb 20 09:44:58 np0005625203.localdomain podman[296092]: 
Feb 20 09:44:58 np0005625203.localdomain podman[296092]: 2026-02-20 09:44:58.945266242 +0000 UTC m=+0.062078663 container create a7869a843b18e34d1a3811ac327aa1dddabb004395514a5653bc7dbedace02ee (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_knuth, architecture=x86_64, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, distribution-scope=public, release=1770267347, build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, name=rhceph, version=7, RELEASE=main, io.openshift.tags=rhceph ceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Feb 20 09:44:58 np0005625203.localdomain systemd[1]: Started libpod-conmon-a7869a843b18e34d1a3811ac327aa1dddabb004395514a5653bc7dbedace02ee.scope.
Feb 20 09:44:58 np0005625203.localdomain podman[240359]: time="2026-02-20T09:44:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:44:59 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:44:59 np0005625203.localdomain podman[296092]: 2026-02-20 09:44:58.913077446 +0000 UTC m=+0.029889897 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 09:44:59 np0005625203.localdomain podman[296092]: 2026-02-20 09:44:59.030987216 +0000 UTC m=+0.147799647 container init a7869a843b18e34d1a3811ac327aa1dddabb004395514a5653bc7dbedace02ee (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_knuth, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, architecture=x86_64, ceph=True, name=rhceph, build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, RELEASE=main, release=1770267347, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, io.buildah.version=1.42.2, GIT_BRANCH=main)
Feb 20 09:44:59 np0005625203.localdomain podman[296092]: 2026-02-20 09:44:59.041585928 +0000 UTC m=+0.158398359 container start a7869a843b18e34d1a3811ac327aa1dddabb004395514a5653bc7dbedace02ee (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_knuth, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, vcs-type=git, GIT_CLEAN=True, release=1770267347, build-date=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, RELEASE=main, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, io.buildah.version=1.42.2, GIT_BRANCH=main, ceph=True)
Feb 20 09:44:59 np0005625203.localdomain podman[296092]: 2026-02-20 09:44:59.041945339 +0000 UTC m=+0.158757770 container attach a7869a843b18e34d1a3811ac327aa1dddabb004395514a5653bc7dbedace02ee (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_knuth, GIT_BRANCH=main, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, io.buildah.version=1.42.2, release=1770267347, version=7, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, distribution-scope=public, name=rhceph, build-date=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, GIT_CLEAN=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64)
Feb 20 09:44:59 np0005625203.localdomain musing_knuth[296124]: 167 167
Feb 20 09:44:59 np0005625203.localdomain systemd[1]: libpod-a7869a843b18e34d1a3811ac327aa1dddabb004395514a5653bc7dbedace02ee.scope: Deactivated successfully.
Feb 20 09:44:59 np0005625203.localdomain podman[296092]: 2026-02-20 09:44:59.046526082 +0000 UTC m=+0.163338543 container died a7869a843b18e34d1a3811ac327aa1dddabb004395514a5653bc7dbedace02ee (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_knuth, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, architecture=x86_64, ceph=True, RELEASE=main, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2026-02-09T10:25:24Z, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=rhceph ceph, release=1770267347, version=7, name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 09:44:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:44:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 155909 "" "Go-http-client/1.1"
Feb 20 09:44:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:44:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18089 "" "Go-http-client/1.1"
Feb 20 09:44:59 np0005625203.localdomain podman[296129]: 2026-02-20 09:44:59.155961217 +0000 UTC m=+0.094905632 container remove a7869a843b18e34d1a3811ac327aa1dddabb004395514a5653bc7dbedace02ee (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=musing_knuth, io.buildah.version=1.42.2, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 09:44:59 np0005625203.localdomain systemd[1]: libpod-conmon-a7869a843b18e34d1a3811ac327aa1dddabb004395514a5653bc7dbedace02ee.scope: Deactivated successfully.
Feb 20 09:44:59 np0005625203.localdomain sudo[295927]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:59 np0005625203.localdomain sudo[296143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@-1(synchronizing).mds e17 new map
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@-1(synchronizing).mds e17 print_map
                                                           e17
                                                           enable_multiple, ever_enabled_multiple: 1,1
                                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}
                                                           legacy client fscid: 1
                                                            
                                                           Filesystem 'cephfs' (1)
                                                           fs_name        cephfs
                                                           epoch        16
                                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                                           created        2026-02-20T07:58:28.398421+0000
                                                           modified        2026-02-20T09:40:14.722031+0000
                                                           tableserver        0
                                                           root        0
                                                           session_timeout        60
                                                           session_autoclose        300
                                                           max_file_size        1099511627776
                                                           required_client_features        {}
                                                           last_failure        0
                                                           last_failure_osd_epoch        83
                                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,12=quiesce subvolumes}
                                                           max_mds        1
                                                           in        0
                                                           up        {0=26854}
                                                           failed        
                                                           damaged        
                                                           stopped        
                                                           data_pools        [6]
                                                           metadata_pool        7
                                                           inline_data        disabled
                                                           balancer        
                                                           bal_rank_mask        -1
                                                           standby_count_wanted        1
                                                           qdb_cluster        leader: 26854 members: 26854
                                                           [mds.mds.np0005625203.zsrwgk{0:26854} state up:active seq 13 addr [v2:172.18.0.107:6808/3334119751,v1:172.18.0.107:6809/3334119751] compat {c=[1],r=[1],i=[17ff]}]
                                                            
                                                            
                                                           Standby daemons:
                                                            
                                                           [mds.mds.np0005625202.akhmop{-1:17124} state up:standby seq 1 addr [v2:172.18.0.106:6808/3865978972,v1:172.18.0.106:6809/3865978972] compat {c=[1],r=[1],i=[17ff]}]
                                                           [mds.mds.np0005625204.wnsphl{-1:26848} state up:standby seq 1 addr [v2:172.18.0.108:6808/2508223371,v1:172.18.0.108:6809/2508223371] compat {c=[1],r=[1],i=[17ff]}]
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@-1(synchronizing).osd e88 crush map has features 3314933000852226048, adjusting msgr requires
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: Reconfiguring osd.5 (monmap changed)...
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon osd.5 on np0005625202.localdomain
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625202.akhmop", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625202.akhmop", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: Reconfiguring mds.mds.np0005625202.akhmop (monmap changed)...
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon mds.mds.np0005625202.akhmop on np0005625202.localdomain
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: pgmap v24: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625202.arwxwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625202.arwxwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mgr services"} : dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: Reconfiguring mgr.np0005625202.arwxwo (monmap changed)...
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon mgr.np0005625202.arwxwo on np0005625202.localdomain
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: Reconfiguring mon.np0005625202 (monmap changed)...
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon mon.np0005625202 on np0005625202.localdomain
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: pgmap v25: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625203", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625203", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: Reconfiguring crash.np0005625203 (monmap changed)...
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon crash.np0005625203 on np0005625203.localdomain
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: Reconfiguring osd.1 (monmap changed)...
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon osd.1 on np0005625203.localdomain
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: pgmap v26: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: Reconfiguring osd.4 (monmap changed)...
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon osd.4 on np0005625203.localdomain
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='client.27496 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: Saving service mon spec with placement label:mon
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='client.27504 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005625203", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: pgmap v27: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: Remove daemons mon.np0005625203
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "quorum_status"} : dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: Safe to remove mon.np0005625203: new quorum should be ['np0005625201', 'np0005625204', 'np0005625202'] (from ['np0005625201', 'np0005625204', 'np0005625202'])
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: Removing monitor np0005625203 from monmap...
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon rm", "name": "np0005625203"} : dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: Removing daemon mon.np0005625203 from np0005625203.localdomain -- ports []
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: mon.np0005625204 calling monitor election
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: mon.np0005625202 calling monitor election
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625201"} : dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625202"} : dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625204"} : dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: pgmap v28: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: pgmap v29: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: pgmap v30: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: mon.np0005625204 calling monitor election
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: monmap epoch 11
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: last_changed 2026-02-20T09:44:43.337910+0000
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: created 2026-02-20T07:36:51.191305+0000
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: min_mon_release 18 (reef)
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: election_strategy: 1
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: 0: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005625201
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: 1: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005625204
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: 2: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005625202
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: fsmap cephfs:1 {0=mds.np0005625203.zsrwgk=up:active} 2 up:standby
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: osdmap e88: 6 total, 6 up, 6 in
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: mgrmap e30: np0005625202.arwxwo(active, since 54s), standbys: np0005625203.lonygy, np0005625204.exgrzx, np0005625200.ypbkax, np0005625201.mtnyvu
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: Health check failed: 1/3 mons down, quorum np0005625201,np0005625204 (MON_DOWN)
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: overall HEALTH_OK
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: mon.np0005625201 calling monitor election
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: mon.np0005625201 is new leader, mons np0005625201,np0005625204,np0005625202 in quorum (ranks 0,1,2)
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: monmap epoch 11
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: last_changed 2026-02-20T09:44:43.337910+0000
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: created 2026-02-20T07:36:51.191305+0000
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: min_mon_release 18 (reef)
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: election_strategy: 1
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: 0: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005625201
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: 1: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005625204
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: 2: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005625202
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: fsmap cephfs:1 {0=mds.np0005625203.zsrwgk=up:active} 2 up:standby
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: osdmap e88: 6 total, 6 up, 6 in
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: mgrmap e30: np0005625202.arwxwo(active, since 54s), standbys: np0005625203.lonygy, np0005625204.exgrzx, np0005625200.ypbkax, np0005625201.mtnyvu
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum np0005625201,np0005625204)
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: Cluster is now healthy
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: Updating np0005625201.localdomain:/etc/ceph/ceph.conf
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: Updating np0005625202.localdomain:/etc/ceph/ceph.conf
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: Updating np0005625203.localdomain:/etc/ceph/ceph.conf
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: Updating np0005625204.localdomain:/etc/ceph/ceph.conf
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: overall HEALTH_OK
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/1442551253' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/1469195323' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/3982866868' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: Updating np0005625201.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/4195120872' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/2014991144' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/1776286500' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625201.mtnyvu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625201.mtnyvu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mgr services"} : dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: pgmap v31: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: Reconfiguring mgr.np0005625201.mtnyvu (monmap changed)...
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon mgr.np0005625201.mtnyvu on np0005625201.localdomain
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625201", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@-1(synchronizing).osd e88 crush map has features 288514051259236352, adjusting msgr requires
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625201", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@-1(synchronizing).osd e88 crush map has features 288514051259236352, adjusting msgr requires
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@-1(synchronizing).osd e88 crush map has features 288514051259236352, adjusting msgr requires
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: Reconfiguring crash.np0005625201 (monmap changed)...
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon crash.np0005625201 on np0005625201.localdomain
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625202", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625202", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: Reconfiguring crash.np0005625202 (monmap changed)...
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon crash.np0005625202 on np0005625202.localdomain
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: pgmap v32: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: Reconfiguring osd.2 (monmap changed)...
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon osd.2 on np0005625202.localdomain
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: pgmap v33: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: Reconfiguring osd.5 (monmap changed)...
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon osd.5 on np0005625202.localdomain
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625202.akhmop", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: Reconfiguring mds.mds.np0005625202.akhmop (monmap changed)...
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625202.akhmop", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon mds.mds.np0005625202.akhmop on np0005625202.localdomain
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: pgmap v34: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='client.44339 -' entity='client.admin' cmd=[{"prefix": "orch daemon add", "daemon_type": "mon", "placement": "np0005625203.localdomain:172.18.0.104", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: Deploying daemon mon.np0005625203 on np0005625203.localdomain
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625202.arwxwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625202.arwxwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mgr services"} : dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: Reconfiguring mgr.np0005625202.arwxwo (monmap changed)...
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon mgr.np0005625202.arwxwo on np0005625202.localdomain
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625203", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625203", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: Reconfiguring crash.np0005625203 (monmap changed)...
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon crash.np0005625203 on np0005625203.localdomain
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: pgmap v35: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:44:59 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@-1(synchronizing).paxosservice(auth 1..38) refresh upgraded, format 0 -> 3
Feb 20 09:44:59 np0005625203.localdomain sudo[296143]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:59 np0005625203.localdomain sudo[296143]: pam_unix(sudo:session): session closed for user root
Feb 20 09:44:59 np0005625203.localdomain sudo[296161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:44:59 np0005625203.localdomain sudo[296161]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:44:59 np0005625203.localdomain podman[296195]: 
Feb 20 09:44:59 np0005625203.localdomain podman[296195]: 2026-02-20 09:44:59.932092968 +0000 UTC m=+0.080361255 container create b043d6f5e2edfda3b61d3e87f3334fa8a501da8f0e0d4a5634d094fb2e62024f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_booth, release=1770267347, ceph=True, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-09T10:25:24Z, distribution-scope=public, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.42.2, GIT_BRANCH=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, vcs-type=git, version=7, RELEASE=main, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, build-date=2026-02-09T10:25:24Z)
Feb 20 09:44:59 np0005625203.localdomain systemd[1]: Started libpod-conmon-b043d6f5e2edfda3b61d3e87f3334fa8a501da8f0e0d4a5634d094fb2e62024f.scope.
Feb 20 09:44:59 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:45:00 np0005625203.localdomain podman[296195]: 2026-02-20 09:45:00.001008506 +0000 UTC m=+0.149276783 container init b043d6f5e2edfda3b61d3e87f3334fa8a501da8f0e0d4a5634d094fb2e62024f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_booth, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, RELEASE=main, vendor=Red Hat, Inc., architecture=x86_64, ceph=True, build-date=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, distribution-scope=public, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=)
Feb 20 09:45:00 np0005625203.localdomain podman[296195]: 2026-02-20 09:44:59.90209888 +0000 UTC m=+0.050367207 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 09:45:00 np0005625203.localdomain podman[296195]: 2026-02-20 09:45:00.010711739 +0000 UTC m=+0.158980026 container start b043d6f5e2edfda3b61d3e87f3334fa8a501da8f0e0d4a5634d094fb2e62024f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_booth, CEPH_POINT_RELEASE=, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, vcs-type=git, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, release=1770267347, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, version=7, com.redhat.component=rhceph-container, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True)
Feb 20 09:45:00 np0005625203.localdomain podman[296195]: 2026-02-20 09:45:00.01103605 +0000 UTC m=+0.159304337 container attach b043d6f5e2edfda3b61d3e87f3334fa8a501da8f0e0d4a5634d094fb2e62024f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_booth, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, com.redhat.component=rhceph-container, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, version=7, io.openshift.tags=rhceph ceph, build-date=2026-02-09T10:25:24Z, GIT_BRANCH=main, vcs-type=git, description=Red Hat Ceph Storage 7, architecture=x86_64, ceph=True, CEPH_POINT_RELEASE=, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2, name=rhceph, distribution-scope=public)
Feb 20 09:45:00 np0005625203.localdomain silly_booth[296211]: 167 167
Feb 20 09:45:00 np0005625203.localdomain systemd[1]: libpod-b043d6f5e2edfda3b61d3e87f3334fa8a501da8f0e0d4a5634d094fb2e62024f.scope: Deactivated successfully.
Feb 20 09:45:00 np0005625203.localdomain podman[296195]: 2026-02-20 09:45:00.014204909 +0000 UTC m=+0.162473216 container died b043d6f5e2edfda3b61d3e87f3334fa8a501da8f0e0d4a5634d094fb2e62024f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_booth, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z, version=7, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., release=1770267347)
Feb 20 09:45:00 np0005625203.localdomain podman[296216]: 2026-02-20 09:45:00.09218177 +0000 UTC m=+0.067647379 container remove b043d6f5e2edfda3b61d3e87f3334fa8a501da8f0e0d4a5634d094fb2e62024f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=silly_booth, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, version=7, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, distribution-scope=public, io.buildah.version=1.42.2, RELEASE=main, com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, io.openshift.expose-services=, architecture=x86_64, release=1770267347, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, org.opencontainers.image.created=2026-02-09T10:25:24Z)
Feb 20 09:45:00 np0005625203.localdomain systemd[1]: libpod-conmon-b043d6f5e2edfda3b61d3e87f3334fa8a501da8f0e0d4a5634d094fb2e62024f.scope: Deactivated successfully.
Feb 20 09:45:00 np0005625203.localdomain sudo[296161]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:00 np0005625203.localdomain sudo[296239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:45:00 np0005625203.localdomain sudo[296239]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:00 np0005625203.localdomain sudo[296239]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:00 np0005625203.localdomain sudo[296257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:45:00 np0005625203.localdomain sudo[296257]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:00 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-9c588f0a4674dbeddd6208994d94f1de4449d14020748ac02d532322e6499ec7-merged.mount: Deactivated successfully.
Feb 20 09:45:00 np0005625203.localdomain podman[296291]: 
Feb 20 09:45:00 np0005625203.localdomain podman[296291]: 2026-02-20 09:45:00.935098901 +0000 UTC m=+0.081170962 container create 830644bf307b9905e8ab23c961e3c26c141178fb8929097b9c4be2489bd75bc5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_colden, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, version=7, description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, RELEASE=main, ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, GIT_BRANCH=main, io.buildah.version=1.42.2, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, release=1770267347)
Feb 20 09:45:00 np0005625203.localdomain systemd[1]: Started libpod-conmon-830644bf307b9905e8ab23c961e3c26c141178fb8929097b9c4be2489bd75bc5.scope.
Feb 20 09:45:00 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:45:00 np0005625203.localdomain podman[296291]: 2026-02-20 09:45:00.994654306 +0000 UTC m=+0.140726377 container init 830644bf307b9905e8ab23c961e3c26c141178fb8929097b9c4be2489bd75bc5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_colden, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.created=2026-02-09T10:25:24Z, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, RELEASE=main, io.buildah.version=1.42.2, ceph=True, name=rhceph, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 09:45:01 np0005625203.localdomain podman[296291]: 2026-02-20 09:45:00.90215865 +0000 UTC m=+0.048230711 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 09:45:01 np0005625203.localdomain podman[296291]: 2026-02-20 09:45:01.007016212 +0000 UTC m=+0.153088273 container start 830644bf307b9905e8ab23c961e3c26c141178fb8929097b9c4be2489bd75bc5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_colden, build-date=2026-02-09T10:25:24Z, release=1770267347, RELEASE=main, distribution-scope=public, version=7, com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, io.buildah.version=1.42.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-09T10:25:24Z, name=rhceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, vendor=Red Hat, Inc.)
Feb 20 09:45:01 np0005625203.localdomain podman[296291]: 2026-02-20 09:45:01.007376263 +0000 UTC m=+0.153448374 container attach 830644bf307b9905e8ab23c961e3c26c141178fb8929097b9c4be2489bd75bc5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_colden, io.openshift.tags=rhceph ceph, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, build-date=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1770267347, vcs-type=git, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, io.buildah.version=1.42.2, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, GIT_BRANCH=main, CEPH_POINT_RELEASE=)
Feb 20 09:45:01 np0005625203.localdomain elastic_colden[296306]: 167 167
Feb 20 09:45:01 np0005625203.localdomain systemd[1]: libpod-830644bf307b9905e8ab23c961e3c26c141178fb8929097b9c4be2489bd75bc5.scope: Deactivated successfully.
Feb 20 09:45:01 np0005625203.localdomain podman[296291]: 2026-02-20 09:45:01.010387997 +0000 UTC m=+0.156460068 container died 830644bf307b9905e8ab23c961e3c26c141178fb8929097b9c4be2489bd75bc5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_colden, architecture=x86_64, com.redhat.component=rhceph-container, version=7, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.created=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, release=1770267347, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, CEPH_POINT_RELEASE=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True)
Feb 20 09:45:01 np0005625203.localdomain podman[296311]: 2026-02-20 09:45:01.09418087 +0000 UTC m=+0.075739941 container remove 830644bf307b9905e8ab23c961e3c26c141178fb8929097b9c4be2489bd75bc5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=elastic_colden, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.42.2, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, distribution-scope=public, build-date=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, release=1770267347, name=rhceph, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, GIT_CLEAN=True, GIT_BRANCH=main, CEPH_POINT_RELEASE=)
Feb 20 09:45:01 np0005625203.localdomain systemd[1]: libpod-conmon-830644bf307b9905e8ab23c961e3c26c141178fb8929097b9c4be2489bd75bc5.scope: Deactivated successfully.
Feb 20 09:45:01 np0005625203.localdomain sudo[296257]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:01 np0005625203.localdomain sudo[296334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:45:01 np0005625203.localdomain sudo[296334]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:01 np0005625203.localdomain sudo[296334]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:01 np0005625203.localdomain sudo[296352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:45:01 np0005625203.localdomain sudo[296352]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:01 np0005625203.localdomain systemd[1]: tmp-crun.u7Fxo9.mount: Deactivated successfully.
Feb 20 09:45:01 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0851e7ccb6976252a1656102a56423c481de7b31bc74752166180e5754d74ea1-merged.mount: Deactivated successfully.
Feb 20 09:45:01 np0005625203.localdomain podman[296386]: 
Feb 20 09:45:01 np0005625203.localdomain podman[296386]: 2026-02-20 09:45:01.904482071 +0000 UTC m=+0.051771011 container create 431ad203d5ea8d37e6b550235e201c04f304285b4e63ea8bb632a4f565862c99 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_ramanujan, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, vendor=Red Hat, Inc., io.buildah.version=1.42.2, url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, io.openshift.expose-services=, build-date=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2026-02-09T10:25:24Z, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, version=7, GIT_CLEAN=True, GIT_BRANCH=main)
Feb 20 09:45:01 np0005625203.localdomain systemd[1]: Started libpod-conmon-431ad203d5ea8d37e6b550235e201c04f304285b4e63ea8bb632a4f565862c99.scope.
Feb 20 09:45:01 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:45:01 np0005625203.localdomain podman[296386]: 2026-02-20 09:45:01.965956465 +0000 UTC m=+0.113245395 container init 431ad203d5ea8d37e6b550235e201c04f304285b4e63ea8bb632a4f565862c99 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_ramanujan, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, org.opencontainers.image.created=2026-02-09T10:25:24Z, architecture=x86_64, GIT_CLEAN=True, ceph=True, RELEASE=main, io.openshift.tags=rhceph ceph, version=7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., distribution-scope=public, GIT_BRANCH=main, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, com.redhat.component=rhceph-container, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.42.2, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 09:45:01 np0005625203.localdomain podman[296386]: 2026-02-20 09:45:01.973044227 +0000 UTC m=+0.120333157 container start 431ad203d5ea8d37e6b550235e201c04f304285b4e63ea8bb632a4f565862c99 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_ramanujan, GIT_BRANCH=main, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, version=7, io.openshift.expose-services=, release=1770267347, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2026-02-09T10:25:24Z, ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.created=2026-02-09T10:25:24Z, architecture=x86_64, vendor=Red Hat, Inc., io.buildah.version=1.42.2, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, GIT_CLEAN=True, distribution-scope=public)
Feb 20 09:45:01 np0005625203.localdomain podman[296386]: 2026-02-20 09:45:01.973278004 +0000 UTC m=+0.120566984 container attach 431ad203d5ea8d37e6b550235e201c04f304285b4e63ea8bb632a4f565862c99 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_ramanujan, org.opencontainers.image.created=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, description=Red Hat Ceph Storage 7, distribution-scope=public, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1770267347, vcs-type=git, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, RELEASE=main, build-date=2026-02-09T10:25:24Z)
Feb 20 09:45:01 np0005625203.localdomain happy_ramanujan[296401]: 167 167
Feb 20 09:45:01 np0005625203.localdomain systemd[1]: libpod-431ad203d5ea8d37e6b550235e201c04f304285b4e63ea8bb632a4f565862c99.scope: Deactivated successfully.
Feb 20 09:45:01 np0005625203.localdomain podman[296386]: 2026-02-20 09:45:01.975923467 +0000 UTC m=+0.123212407 container died 431ad203d5ea8d37e6b550235e201c04f304285b4e63ea8bb632a4f565862c99 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_ramanujan, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, vcs-type=git, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., GIT_CLEAN=True, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, architecture=x86_64, io.openshift.expose-services=, name=rhceph, version=7, CEPH_POINT_RELEASE=, GIT_BRANCH=main)
Feb 20 09:45:01 np0005625203.localdomain podman[296386]: 2026-02-20 09:45:01.886037374 +0000 UTC m=+0.033326294 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 09:45:02 np0005625203.localdomain podman[296406]: 2026-02-20 09:45:02.042104609 +0000 UTC m=+0.059350269 container remove 431ad203d5ea8d37e6b550235e201c04f304285b4e63ea8bb632a4f565862c99 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=happy_ramanujan, GIT_BRANCH=main, name=rhceph, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, ceph=True, version=7, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, GIT_CLEAN=True, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.42.2, io.openshift.tags=rhceph ceph, io.openshift.expose-services=)
Feb 20 09:45:02 np0005625203.localdomain systemd[1]: libpod-conmon-431ad203d5ea8d37e6b550235e201c04f304285b4e63ea8bb632a4f565862c99.scope: Deactivated successfully.
Feb 20 09:45:02 np0005625203.localdomain sudo[296352]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:02 np0005625203.localdomain sudo[296422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:45:02 np0005625203.localdomain sudo[296422]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:02 np0005625203.localdomain sudo[296422]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:02 np0005625203.localdomain sshd[296439]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:45:02 np0005625203.localdomain sudo[296441]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:45:02 np0005625203.localdomain sudo[296441]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:02 np0005625203.localdomain podman[296475]: 
Feb 20 09:45:02 np0005625203.localdomain systemd[1]: tmp-crun.I6ZTq3.mount: Deactivated successfully.
Feb 20 09:45:02 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-e652319dd7c3c5e45a54a15088302d9623135d88d3bdf142ab937489f4b375ab-merged.mount: Deactivated successfully.
Feb 20 09:45:02 np0005625203.localdomain podman[296475]: 2026-02-20 09:45:02.75019957 +0000 UTC m=+0.080339064 container create ab2017f8e5638f2db15a0c8853d1d6dfbb2811027a648f91b3f58924705d2f12 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_fermi, io.buildah.version=1.42.2, release=1770267347, architecture=x86_64, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, io.openshift.expose-services=, version=7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2026-02-09T10:25:24Z, GIT_BRANCH=main, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, RELEASE=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph)
Feb 20 09:45:02 np0005625203.localdomain sshd[296439]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:45:02 np0005625203.localdomain systemd[1]: Started libpod-conmon-ab2017f8e5638f2db15a0c8853d1d6dfbb2811027a648f91b3f58924705d2f12.scope.
Feb 20 09:45:02 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:45:02 np0005625203.localdomain podman[296475]: 2026-02-20 09:45:02.714685699 +0000 UTC m=+0.044825213 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 09:45:02 np0005625203.localdomain podman[296475]: 2026-02-20 09:45:02.815269398 +0000 UTC m=+0.145408902 container init ab2017f8e5638f2db15a0c8853d1d6dfbb2811027a648f91b3f58924705d2f12 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_fermi, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.created=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, name=rhceph, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.42.2, vcs-type=git, GIT_BRANCH=main, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, distribution-scope=public, version=7, release=1770267347, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True)
Feb 20 09:45:02 np0005625203.localdomain podman[296475]: 2026-02-20 09:45:02.825982352 +0000 UTC m=+0.156121846 container start ab2017f8e5638f2db15a0c8853d1d6dfbb2811027a648f91b3f58924705d2f12 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_fermi, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, io.buildah.version=1.42.2, GIT_CLEAN=True, GIT_BRANCH=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, ceph=True, release=1770267347, build-date=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, io.openshift.expose-services=, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, io.openshift.tags=rhceph ceph)
Feb 20 09:45:02 np0005625203.localdomain podman[296475]: 2026-02-20 09:45:02.826196979 +0000 UTC m=+0.156336483 container attach ab2017f8e5638f2db15a0c8853d1d6dfbb2811027a648f91b3f58924705d2f12 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_fermi, io.k8s.description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, release=1770267347, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, io.buildah.version=1.42.2, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, vendor=Red Hat, Inc., GIT_BRANCH=main, distribution-scope=public, org.opencontainers.image.created=2026-02-09T10:25:24Z, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 09:45:02 np0005625203.localdomain systemd[1]: tmp-crun.WhzZEs.mount: Deactivated successfully.
Feb 20 09:45:02 np0005625203.localdomain objective_fermi[296490]: 167 167
Feb 20 09:45:02 np0005625203.localdomain systemd[1]: libpod-ab2017f8e5638f2db15a0c8853d1d6dfbb2811027a648f91b3f58924705d2f12.scope: Deactivated successfully.
Feb 20 09:45:02 np0005625203.localdomain podman[296475]: 2026-02-20 09:45:02.831860467 +0000 UTC m=+0.162000031 container died ab2017f8e5638f2db15a0c8853d1d6dfbb2811027a648f91b3f58924705d2f12 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_fermi, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, RELEASE=main, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, architecture=x86_64, org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, name=rhceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, release=1770267347, GIT_CLEAN=True, io.openshift.expose-services=, build-date=2026-02-09T10:25:24Z, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Feb 20 09:45:02 np0005625203.localdomain podman[296495]: 2026-02-20 09:45:02.920329775 +0000 UTC m=+0.075604257 container remove ab2017f8e5638f2db15a0c8853d1d6dfbb2811027a648f91b3f58924705d2f12 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=objective_fermi, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, release=1770267347, GIT_BRANCH=main, distribution-scope=public, description=Red Hat Ceph Storage 7, ceph=True, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, version=7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, GIT_CLEAN=True, name=rhceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, build-date=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 09:45:02 np0005625203.localdomain systemd[1]: libpod-conmon-ab2017f8e5638f2db15a0c8853d1d6dfbb2811027a648f91b3f58924705d2f12.scope: Deactivated successfully.
Feb 20 09:45:02 np0005625203.localdomain sudo[296441]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:03 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:45:03 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:45:03 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:45:03 np0005625203.localdomain ceph-mon[296066]: Reconfiguring osd.1 (monmap changed)...
Feb 20 09:45:03 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Feb 20 09:45:03 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:45:03 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon osd.1 on np0005625203.localdomain
Feb 20 09:45:03 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:45:03 np0005625203.localdomain ceph-mon[296066]: pgmap v36: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:45:03 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:45:03 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:45:03 np0005625203.localdomain ceph-mon[296066]: Reconfiguring osd.4 (monmap changed)...
Feb 20 09:45:03 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Feb 20 09:45:03 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:45:03 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon osd.4 on np0005625203.localdomain
Feb 20 09:45:03 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:45:03 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:45:03 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:45:03 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625203.zsrwgk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 09:45:03 np0005625203.localdomain ceph-mon[296066]: Reconfiguring mds.mds.np0005625203.zsrwgk (monmap changed)...
Feb 20 09:45:03 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625203.zsrwgk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 09:45:03 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:45:03 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon mds.mds.np0005625203.zsrwgk on np0005625203.localdomain
Feb 20 09:45:03 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:45:03 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/4145115626' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:45:03 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/4145115626' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:45:03 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:45:03 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:45:03 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625203.lonygy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:45:03 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625203.lonygy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:45:03 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mgr services"} : dispatch
Feb 20 09:45:03 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:45:03 np0005625203.localdomain ceph-mon[296066]: Reconfiguring mgr.np0005625203.lonygy (monmap changed)...
Feb 20 09:45:03 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon mgr.np0005625203.lonygy on np0005625203.localdomain
Feb 20 09:45:03 np0005625203.localdomain ceph-mon[296066]: pgmap v37: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:45:03 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:45:03 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:45:03 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:45:03 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625204", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:45:03 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625204", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:45:03 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:45:03 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-005a157f43a1bfe6563b15177ef755785903f210d313157b2a71172e9b4aa755-merged.mount: Deactivated successfully.
Feb 20 09:45:05 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:45:05 np0005625203.localdomain podman[296511]: 2026-02-20 09:45:05.764256573 +0000 UTC m=+0.081194052 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 09:45:05 np0005625203.localdomain podman[296511]: 2026-02-20 09:45:05.802251823 +0000 UTC m=+0.119189262 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 20 09:45:05 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 09:45:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:45:07 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 09:45:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:45:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:45:07 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 09:45:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:45:07 np0005625203.localdomain ceph-mon[296066]: Reconfiguring crash.np0005625204 (monmap changed)...
Feb 20 09:45:07 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon crash.np0005625204 on np0005625204.localdomain
Feb 20 09:45:07 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:45:07 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:45:07 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:45:07 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Feb 20 09:45:07 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:45:07 np0005625203.localdomain ceph-mon[296066]: Reconfiguring osd.0 (monmap changed)...
Feb 20 09:45:07 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon osd.0 on np0005625204.localdomain
Feb 20 09:45:07 np0005625203.localdomain ceph-mon[296066]: pgmap v38: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:45:07 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:45:07 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:45:07 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:45:07 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Feb 20 09:45:07 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:45:07 np0005625203.localdomain ceph-mon[296066]: Reconfiguring osd.3 (monmap changed)...
Feb 20 09:45:07 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon osd.3 on np0005625204.localdomain
Feb 20 09:45:07 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:45:07 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:45:07 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:45:07 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625204.wnsphl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 09:45:07 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625204.wnsphl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 09:45:07 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:45:07 np0005625203.localdomain ceph-mon[296066]: Reconfiguring mds.mds.np0005625204.wnsphl (monmap changed)...
Feb 20 09:45:07 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon mds.mds.np0005625204.wnsphl on np0005625204.localdomain
Feb 20 09:45:07 np0005625203.localdomain ceph-mon[296066]: pgmap v39: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:45:07 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:45:07 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:45:07 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:45:07 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625204.exgrzx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:45:07 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625204.exgrzx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:45:07 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mgr services"} : dispatch
Feb 20 09:45:07 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:45:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:45:07.660 161112 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:45:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:45:07.661 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:45:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:45:07.661 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:45:07 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:45:07 np0005625203.localdomain podman[296534]: 2026-02-20 09:45:07.762390522 +0000 UTC m=+0.071734646 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 20 09:45:07 np0005625203.localdomain podman[296534]: 2026-02-20 09:45:07.773157579 +0000 UTC m=+0.082501743 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 20 09:45:07 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:45:08 np0005625203.localdomain sudo[296556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:45:08 np0005625203.localdomain sudo[296556]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:08 np0005625203.localdomain sudo[296556]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:08 np0005625203.localdomain sudo[296574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Feb 20 09:45:08 np0005625203.localdomain sudo[296574]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:09 np0005625203.localdomain podman[296660]: 2026-02-20 09:45:09.06653489 +0000 UTC m=+0.070871109 container exec f6bf94ad66e54cdd610286b233c639418eb2b995978f1a145d6cfa77a016071d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203, release=1770267347, io.buildah.version=1.42.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., version=7, name=rhceph, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, build-date=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, com.redhat.component=rhceph-container, io.openshift.expose-services=, RELEASE=main)
Feb 20 09:45:09 np0005625203.localdomain podman[296660]: 2026-02-20 09:45:09.191609304 +0000 UTC m=+0.195945583 container exec_died f6bf94ad66e54cdd610286b233c639418eb2b995978f1a145d6cfa77a016071d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, RELEASE=main, io.buildah.version=1.42.2, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, release=1770267347, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, com.redhat.component=rhceph-container, build-date=2026-02-09T10:25:24Z, name=rhceph, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2026-02-09T10:25:24Z, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.openshift.expose-services=)
Feb 20 09:45:09 np0005625203.localdomain sudo[296574]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:09 np0005625203.localdomain sudo[296780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:45:09 np0005625203.localdomain sudo[296780]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:09 np0005625203.localdomain sudo[296780]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:09 np0005625203.localdomain sudo[296798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 09:45:09 np0005625203.localdomain sudo[296798]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:10 np0005625203.localdomain sudo[296798]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:10 np0005625203.localdomain sudo[296849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:45:10 np0005625203.localdomain sudo[296849]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:10 np0005625203.localdomain sudo[296849]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:11 np0005625203.localdomain ceph-mon[296066]: Reconfiguring mgr.np0005625204.exgrzx (monmap changed)...
Feb 20 09:45:11 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon mgr.np0005625204.exgrzx on np0005625204.localdomain
Feb 20 09:45:11 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:45:11 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:45:11 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:45:11 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:45:11 np0005625203.localdomain ceph-mon[296066]: pgmap v40: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:45:11 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:45:11 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:45:11 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:45:11 np0005625203.localdomain ceph-mon[296066]: pgmap v41: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:45:11 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.200:0/2738563585' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Feb 20 09:45:11 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:45:11 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:45:11 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 09:45:11 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:45:11 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 09:45:11 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@-1(probing) e11 handle_auth_request failed to assign global_id
Feb 20 09:45:11 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@-1(probing) e11 handle_auth_request failed to assign global_id
Feb 20 09:45:12 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@-1(probing) e11 handle_auth_request failed to assign global_id
Feb 20 09:45:12 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@-1(probing) e11 handle_auth_request failed to assign global_id
Feb 20 09:45:13 np0005625203.localdomain sshd[296867]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:45:14 np0005625203.localdomain sshd[296867]: Invalid user claude from 103.48.192.48 port 53743
Feb 20 09:45:14 np0005625203.localdomain sshd[296867]: Received disconnect from 103.48.192.48 port 53743:11: Bye Bye [preauth]
Feb 20 09:45:14 np0005625203.localdomain sshd[296867]: Disconnected from invalid user claude 103.48.192.48 port 53743 [preauth]
Feb 20 09:45:14 np0005625203.localdomain sudo[296869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:45:14 np0005625203.localdomain sudo[296869]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:14 np0005625203.localdomain sudo[296869]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:14 np0005625203.localdomain sudo[296887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Feb 20 09:45:14 np0005625203.localdomain sudo[296887]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:15 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:45:15 np0005625203.localdomain ceph-mon[296066]: pgmap v42: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:45:15 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:45:15 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:45:15 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:45:15 np0005625203.localdomain ceph-mon[296066]: pgmap v43: 177 pgs: 177 active+clean; 104 MiB data, 583 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:45:15 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:45:15 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:45:15 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:45:15 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:45:15 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:45:15 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:45:15 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:45:15 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:45:15 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:45:15 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:45:15 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:45:15 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:45:15 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:45:15 np0005625203.localdomain podman[296974]: 2026-02-20 09:45:15.726495365 +0000 UTC m=+0.086888711 container exec f6bf94ad66e54cdd610286b233c639418eb2b995978f1a145d6cfa77a016071d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.openshift.expose-services=, RELEASE=main, ceph=True, vcs-type=git, version=7, release=1770267347, org.opencontainers.image.created=2026-02-09T10:25:24Z, distribution-scope=public)
Feb 20 09:45:15 np0005625203.localdomain podman[296974]: 2026-02-20 09:45:15.851568549 +0000 UTC m=+0.211961895 container exec_died f6bf94ad66e54cdd610286b233c639418eb2b995978f1a145d6cfa77a016071d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, RELEASE=main, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., GIT_BRANCH=main, com.redhat.component=rhceph-container, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, ceph=True, release=1770267347, org.opencontainers.image.created=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.42.2, description=Red Hat Ceph Storage 7)
Feb 20 09:45:15 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:45:15 np0005625203.localdomain podman[297007]: 2026-02-20 09:45:15.990524299 +0000 UTC m=+0.085177908 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Feb 20 09:45:16 np0005625203.localdomain podman[297007]: 2026-02-20 09:45:16.050540337 +0000 UTC m=+0.145193896 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 20 09:45:16 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:45:16 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@-1(probing) e11 handle_auth_request failed to assign global_id
Feb 20 09:45:16 np0005625203.localdomain sudo[296887]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: mgr handle_mgr_map Activating!
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: mgr handle_mgr_map I am now activating
Feb 20 09:45:16 np0005625203.localdomain sshd[292177]: pam_unix(sshd:session): session closed for user ceph-admin
Feb 20 09:45:16 np0005625203.localdomain systemd[1]: session-66.scope: Deactivated successfully.
Feb 20 09:45:16 np0005625203.localdomain systemd[1]: session-66.scope: Consumed 26.929s CPU time.
Feb 20 09:45:16 np0005625203.localdomain systemd-logind[759]: Session 66 logged out. Waiting for processes to exit.
Feb 20 09:45:16 np0005625203.localdomain systemd-logind[759]: Removed session 66.
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: mgr load Constructed class from module: balancer
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: [balancer INFO root] Starting
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: [balancer INFO root] Optimize plan auto_2026-02-20_09:45:16
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: [cephadm WARNING root] removing stray HostCache host record np0005625200.localdomain.devices.0
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [WRN] : removing stray HostCache host record np0005625200.localdomain.devices.0
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: mgr load Constructed class from module: cephadm
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: mgr load Constructed class from module: crash
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: mgr load Constructed class from module: devicehealth
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: mgr load Constructed class from module: iostat
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: mgr load Constructed class from module: nfs
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: [devicehealth INFO root] Starting
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: mgr load Constructed class from module: orchestrator
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: mgr load Constructed class from module: pg_autoscaler
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: mgr load Constructed class from module: progress
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: [progress INFO root] Loading...
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f57db04bca0>, <progress.module.GhostEvent object at 0x7f57db04bcd0>, <progress.module.GhostEvent object at 0x7f57db04bd00>, <progress.module.GhostEvent object at 0x7f57db04bd30>, <progress.module.GhostEvent object at 0x7f57db04bd60>, <progress.module.GhostEvent object at 0x7f57db04bd90>, <progress.module.GhostEvent object at 0x7f57db04bdc0>, <progress.module.GhostEvent object at 0x7f57db04bdf0>, <progress.module.GhostEvent object at 0x7f57db04be20>, <progress.module.GhostEvent object at 0x7f57db04be50>, <progress.module.GhostEvent object at 0x7f57db04be80>, <progress.module.GhostEvent object at 0x7f57db04beb0>, <progress.module.GhostEvent object at 0x7f57db04bee0>, <progress.module.GhostEvent object at 0x7f57db04bf10>, <progress.module.GhostEvent object at 0x7f57db04bf40>, <progress.module.GhostEvent object at 0x7f57db04bf70>, <progress.module.GhostEvent object at 0x7f57db04bfa0>, <progress.module.GhostEvent object at 0x7f57db04bfd0>, <progress.module.GhostEvent object at 0x7f57db062040>, <progress.module.GhostEvent object at 0x7f57db062070>, <progress.module.GhostEvent object at 0x7f57db0620a0>, <progress.module.GhostEvent object at 0x7f57db0620d0>, <progress.module.GhostEvent object at 0x7f57db062100>, <progress.module.GhostEvent object at 0x7f57db062130>, <progress.module.GhostEvent object at 0x7f57db062160>, <progress.module.GhostEvent object at 0x7f57db062190>, <progress.module.GhostEvent object at 0x7f57db0621c0>, <progress.module.GhostEvent object at 0x7f57db0621f0>, <progress.module.GhostEvent object at 0x7f57db062220>, <progress.module.GhostEvent object at 0x7f57db062250>, <progress.module.GhostEvent object at 0x7f57db062280>, <progress.module.GhostEvent object at 0x7f57db0622b0>, <progress.module.GhostEvent object at 0x7f57db0622e0>, <progress.module.GhostEvent object at 0x7f57db062310>, <progress.module.GhostEvent object at 0x7f57db062340>, <progress.module.GhostEvent object at 0x7f57db062370>, <progress.module.GhostEvent object at 0x7f57db0623a0>, <progress.module.GhostEvent object at 0x7f57db0623d0>, <progress.module.GhostEvent object at 0x7f57db062400>, <progress.module.GhostEvent object at 0x7f57db062430>, <progress.module.GhostEvent object at 0x7f57db062460>, <progress.module.GhostEvent object at 0x7f57db062490>, <progress.module.GhostEvent object at 0x7f57db0624c0>, <progress.module.GhostEvent object at 0x7f57db0624f0>, <progress.module.GhostEvent object at 0x7f57db062520>, <progress.module.GhostEvent object at 0x7f57db062550>, <progress.module.GhostEvent object at 0x7f57db062580>, <progress.module.GhostEvent object at 0x7f57db0625b0>, <progress.module.GhostEvent object at 0x7f57db0625e0>, <progress.module.GhostEvent object at 0x7f57db062610>] historic events
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: [progress INFO root] Loaded OSDMap, ready.
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: [pg_autoscaler INFO root] _maybe_adjust
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: [rbd_support INFO root] recovery thread starting
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: [rbd_support INFO root] starting setup
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: mgr load Constructed class from module: rbd_support
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: mgr load Constructed class from module: restful
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: [restful INFO root] server_addr: :: server_port: 8003
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: [restful WARNING root] server not running: no certificate configured
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: mgr load Constructed class from module: status
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: mgr load Constructed class from module: telemetry
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: mgr load Constructed class from module: volumes
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: [rbd_support INFO root] PerfHandler: starting
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: [rbd_support INFO root] load_task_task: vms, start_after=
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: client.0 error registering admin socket command: (17) File exists
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: client.0 error registering admin socket command: (17) File exists
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: client.0 error registering admin socket command: (17) File exists
Feb 20 09:45:16 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:45:16.831+0000 7f57c5747640 -1 client.0 error registering admin socket command: (17) File exists
Feb 20 09:45:16 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:45:16.831+0000 7f57c5747640 -1 client.0 error registering admin socket command: (17) File exists
Feb 20 09:45:16 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:45:16.831+0000 7f57c5747640 -1 client.0 error registering admin socket command: (17) File exists
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: client.0 error registering admin socket command: (17) File exists
Feb 20 09:45:16 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:45:16.831+0000 7f57c5747640 -1 client.0 error registering admin socket command: (17) File exists
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: client.0 error registering admin socket command: (17) File exists
Feb 20 09:45:16 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:45:16.831+0000 7f57c5747640 -1 client.0 error registering admin socket command: (17) File exists
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: [rbd_support INFO root] load_task_task: volumes, start_after=
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: client.0 error registering admin socket command: (17) File exists
Feb 20 09:45:16 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:45:16.840+0000 7f57c2f42640 -1 client.0 error registering admin socket command: (17) File exists
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: [rbd_support INFO root] load_task_task: images, start_after=
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: client.0 error registering admin socket command: (17) File exists
Feb 20 09:45:16 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:45:16.841+0000 7f57c2f42640 -1 client.0 error registering admin socket command: (17) File exists
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: client.0 error registering admin socket command: (17) File exists
Feb 20 09:45:16 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:45:16.842+0000 7f57c2f42640 -1 client.0 error registering admin socket command: (17) File exists
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: client.0 error registering admin socket command: (17) File exists
Feb 20 09:45:16 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:45:16.842+0000 7f57c2f42640 -1 client.0 error registering admin socket command: (17) File exists
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: [rbd_support INFO root] load_task_task: backups, start_after=
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: client.0 error registering admin socket command: (17) File exists
Feb 20 09:45:16 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:45:16.843+0000 7f57c2f42640 -1 client.0 error registering admin socket command: (17) File exists
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: [rbd_support INFO root] TaskHandler: starting
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Feb 20 09:45:16 np0005625203.localdomain ceph-mgr[285471]: [rbd_support INFO root] setup complete
Feb 20 09:45:16 np0005625203.localdomain sshd[297250]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:45:17 np0005625203.localdomain sshd[297250]: Accepted publickey for ceph-admin from 192.168.122.107 port 49366 ssh2: RSA SHA256:ReOVVWzTOjcz49zJan4sHrxgFDyL1hzGQqhA1t8e47Y
Feb 20 09:45:17 np0005625203.localdomain systemd-logind[759]: New session 69 of user ceph-admin.
Feb 20 09:45:17 np0005625203.localdomain systemd[1]: Started Session 69 of User ceph-admin.
Feb 20 09:45:17 np0005625203.localdomain sshd[297250]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Feb 20 09:45:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:45:17.187 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:45:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:45:17.187 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:45:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:45:17.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:45:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:45:17.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:45:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:45:17.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:45:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:45:17.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:45:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:45:17.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:45:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:45:17.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:45:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:45:17.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:45:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:45:17.188 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:45:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:45:17.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:45:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:45:17.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:45:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:45:17.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:45:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:45:17.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:45:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:45:17.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:45:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:45:17.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:45:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:45:17.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:45:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:45:17.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:45:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:45:17.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:45:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:45:17.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:45:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:45:17.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:45:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:45:17.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:45:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:45:17.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:45:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:45:17.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:45:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:45:17.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:45:17 np0005625203.localdomain sudo[297254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:45:17 np0005625203.localdomain sudo[297254]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:17 np0005625203.localdomain sudo[297254]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:17 np0005625203.localdomain sudo[297272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Feb 20 09:45:17 np0005625203.localdomain sudo[297272]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:17 np0005625203.localdomain ceph-mgr[285471]: log_channel(cluster) log [DBG] : pgmap v3: 177 pgs: 177 active+clean; 104 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:45:18 np0005625203.localdomain systemd[1]: tmp-crun.qRMRgi.mount: Deactivated successfully.
Feb 20 09:45:18 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cherrypy.error] [20/Feb/2026:09:45:18] ENGINE Bus STARTING
Feb 20 09:45:18 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : [20/Feb/2026:09:45:18] ENGINE Bus STARTING
Feb 20 09:45:18 np0005625203.localdomain podman[297364]: 2026-02-20 09:45:18.134006276 +0000 UTC m=+0.112860483 container exec f6bf94ad66e54cdd610286b233c639418eb2b995978f1a145d6cfa77a016071d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, name=rhceph, build-date=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, ceph=True, GIT_BRANCH=main, distribution-scope=public, version=7, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, release=1770267347)
Feb 20 09:45:18 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cherrypy.error] [20/Feb/2026:09:45:18] ENGINE Serving on http://172.18.0.107:8765
Feb 20 09:45:18 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : [20/Feb/2026:09:45:18] ENGINE Serving on http://172.18.0.107:8765
Feb 20 09:45:18 np0005625203.localdomain podman[297364]: 2026-02-20 09:45:18.233338514 +0000 UTC m=+0.212192751 container exec_died f6bf94ad66e54cdd610286b233c639418eb2b995978f1a145d6cfa77a016071d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203, io.openshift.expose-services=, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, com.redhat.component=rhceph-container, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, distribution-scope=public, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, ceph=True, build-date=2026-02-09T10:25:24Z, release=1770267347, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, version=7)
Feb 20 09:45:18 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cherrypy.error] [20/Feb/2026:09:45:18] ENGINE Serving on https://172.18.0.107:7150
Feb 20 09:45:18 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : [20/Feb/2026:09:45:18] ENGINE Serving on https://172.18.0.107:7150
Feb 20 09:45:18 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cherrypy.error] [20/Feb/2026:09:45:18] ENGINE Bus STARTED
Feb 20 09:45:18 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : [20/Feb/2026:09:45:18] ENGINE Bus STARTED
Feb 20 09:45:18 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cherrypy.error] [20/Feb/2026:09:45:18] ENGINE Client ('172.18.0.107', 58052) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Feb 20 09:45:18 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : [20/Feb/2026:09:45:18] ENGINE Client ('172.18.0.107', 58052) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Feb 20 09:45:18 np0005625203.localdomain sshd[297476]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:45:18 np0005625203.localdomain ceph-mgr[285471]: log_channel(cluster) log [DBG] : pgmap v4: 177 pgs: 177 active+clean; 104 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:45:18 np0005625203.localdomain sudo[297272]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:18 np0005625203.localdomain sshd[297476]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:45:18 np0005625203.localdomain ceph-mgr[285471]: [devicehealth INFO root] Check health
Feb 20 09:45:18 np0005625203.localdomain sudo[297520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:45:18 np0005625203.localdomain sudo[297520]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:18 np0005625203.localdomain sudo[297520]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:19 np0005625203.localdomain sudo[297538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 09:45:19 np0005625203.localdomain sudo[297538]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:19 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@-1(synchronizing).osd e88 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Feb 20 09:45:19 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@-1(synchronizing).osd e88 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Feb 20 09:45:19 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@-1(synchronizing).osd e89 e89: 6 total, 6 up, 6 in
Feb 20 09:45:19 np0005625203.localdomain ceph-mon[296066]: from='client.27552 -' entity='client.admin' cmd=[{"prefix": "orch", "action": "reconfig", "service_name": "osd.default_drive_group", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 09:45:19 np0005625203.localdomain ceph-mon[296066]: Reconfig service osd.default_drive_group
Feb 20 09:45:19 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:45:19 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:45:19 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:45:19 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:45:19 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:45:19 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 09:45:19 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 ' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:45:19 np0005625203.localdomain ceph-mon[296066]: from='mgr.26977 172.18.0.106:0/872747270' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 09:45:19 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.200:0/2448153276' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Feb 20 09:45:19 np0005625203.localdomain ceph-mon[296066]: Activating manager daemon np0005625203.lonygy
Feb 20 09:45:19 np0005625203.localdomain ceph-mon[296066]: osdmap e89: 6 total, 6 up, 6 in
Feb 20 09:45:19 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.200:0/2448153276' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished
Feb 20 09:45:19 np0005625203.localdomain ceph-mon[296066]: mgrmap e31: np0005625203.lonygy(active, starting, since 0.0383473s), standbys: np0005625204.exgrzx, np0005625200.ypbkax, np0005625201.mtnyvu
Feb 20 09:45:19 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625201"} : dispatch
Feb 20 09:45:19 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625202"} : dispatch
Feb 20 09:45:19 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625204"} : dispatch
Feb 20 09:45:19 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mds metadata", "who": "mds.np0005625202.akhmop"} : dispatch
Feb 20 09:45:19 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mds metadata", "who": "mds.np0005625204.wnsphl"} : dispatch
Feb 20 09:45:19 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mds metadata", "who": "mds.np0005625203.zsrwgk"} : dispatch
Feb 20 09:45:19 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mgr metadata", "who": "np0005625203.lonygy", "id": "np0005625203.lonygy"} : dispatch
Feb 20 09:45:19 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mgr metadata", "who": "np0005625204.exgrzx", "id": "np0005625204.exgrzx"} : dispatch
Feb 20 09:45:19 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mgr metadata", "who": "np0005625200.ypbkax", "id": "np0005625200.ypbkax"} : dispatch
Feb 20 09:45:19 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mgr metadata", "who": "np0005625201.mtnyvu", "id": "np0005625201.mtnyvu"} : dispatch
Feb 20 09:45:19 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 20 09:45:19 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 20 09:45:19 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 20 09:45:19 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "osd metadata", "id": 3} : dispatch
Feb 20 09:45:19 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "osd metadata", "id": 4} : dispatch
Feb 20 09:45:19 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "osd metadata", "id": 5} : dispatch
Feb 20 09:45:19 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mds metadata"} : dispatch
Feb 20 09:45:19 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "osd metadata"} : dispatch
Feb 20 09:45:19 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata"} : dispatch
Feb 20 09:45:19 np0005625203.localdomain ceph-mon[296066]: Manager daemon np0005625203.lonygy is now available
Feb 20 09:45:19 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005625200.localdomain.devices.0"} : dispatch
Feb 20 09:45:19 np0005625203.localdomain ceph-mon[296066]: removing stray HostCache host record np0005625200.localdomain.devices.0
Feb 20 09:45:19 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005625200.localdomain.devices.0"} : dispatch
Feb 20 09:45:19 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005625200.localdomain.devices.0"}]': finished
Feb 20 09:45:19 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005625200.localdomain.devices.0"} : dispatch
Feb 20 09:45:19 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005625200.localdomain.devices.0"} : dispatch
Feb 20 09:45:19 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005625200.localdomain.devices.0"}]': finished
Feb 20 09:45:19 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005625203.lonygy/mirror_snapshot_schedule"} : dispatch
Feb 20 09:45:19 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005625203.lonygy/mirror_snapshot_schedule"} : dispatch
Feb 20 09:45:19 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005625203.lonygy/trash_purge_schedule"} : dispatch
Feb 20 09:45:19 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005625203.lonygy/trash_purge_schedule"} : dispatch
Feb 20 09:45:19 np0005625203.localdomain ceph-mon[296066]: mgrmap e32: np0005625203.lonygy(active, since 1.05473s), standbys: np0005625204.exgrzx, np0005625200.ypbkax, np0005625201.mtnyvu
Feb 20 09:45:19 np0005625203.localdomain ceph-mon[296066]: pgmap v3: 177 pgs: 177 active+clean; 104 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:45:19 np0005625203.localdomain ceph-mgr[285471]: mgr.server handle_open ignoring open from mon.np0005625203 172.18.0.107:0/2465459824; not ready for session (expect reconnect)
Feb 20 09:45:19 np0005625203.localdomain ceph-mgr[285471]: mgr finish mon failed to return metadata for mon.np0005625203: (2) No such file or directory
Feb 20 09:45:19 np0005625203.localdomain sudo[297538]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:19 np0005625203.localdomain sudo[297588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:45:19 np0005625203.localdomain sudo[297588]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:19 np0005625203.localdomain sudo[297588]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:19 np0005625203.localdomain sudo[297606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 list-networks
Feb 20 09:45:19 np0005625203.localdomain sudo[297606]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:20 np0005625203.localdomain sudo[297606]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:20 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO root] Adjusting osd_memory_target on np0005625203.localdomain to 836.6M
Feb 20 09:45:20 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005625203.localdomain to 836.6M
Feb 20 09:45:20 np0005625203.localdomain ceph-mgr[285471]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005625203.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Feb 20 09:45:20 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005625203.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Feb 20 09:45:20 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO root] Adjusting osd_memory_target on np0005625202.localdomain to 836.6M
Feb 20 09:45:20 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005625202.localdomain to 836.6M
Feb 20 09:45:20 np0005625203.localdomain ceph-mgr[285471]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005625202.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Feb 20 09:45:20 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005625202.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Feb 20 09:45:20 np0005625203.localdomain ceph-mgr[285471]: mgr.server handle_open ignoring open from mon.np0005625203 172.18.0.107:0/2465459824; not ready for session (expect reconnect)
Feb 20 09:45:20 np0005625203.localdomain ceph-mgr[285471]: mgr finish mon failed to return metadata for mon.np0005625203: (2) No such file or directory
Feb 20 09:45:20 np0005625203.localdomain ceph-mgr[285471]: log_channel(cluster) log [DBG] : pgmap v5: 177 pgs: 177 active+clean; 104 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:45:20 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:45:20 np0005625203.localdomain podman[297642]: 2026-02-20 09:45:20.742965692 +0000 UTC m=+0.063687695 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller)
Feb 20 09:45:20 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO root] Adjusting osd_memory_target on np0005625204.localdomain to 836.6M
Feb 20 09:45:20 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on np0005625204.localdomain to 836.6M
Feb 20 09:45:20 np0005625203.localdomain ceph-mgr[285471]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on np0005625204.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Feb 20 09:45:20 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on np0005625204.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Feb 20 09:45:20 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Updating np0005625201.localdomain:/etc/ceph/ceph.conf
Feb 20 09:45:20 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Updating np0005625201.localdomain:/etc/ceph/ceph.conf
Feb 20 09:45:20 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Updating np0005625202.localdomain:/etc/ceph/ceph.conf
Feb 20 09:45:20 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Updating np0005625203.localdomain:/etc/ceph/ceph.conf
Feb 20 09:45:20 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Updating np0005625202.localdomain:/etc/ceph/ceph.conf
Feb 20 09:45:20 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Updating np0005625204.localdomain:/etc/ceph/ceph.conf
Feb 20 09:45:20 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Updating np0005625203.localdomain:/etc/ceph/ceph.conf
Feb 20 09:45:20 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Updating np0005625204.localdomain:/etc/ceph/ceph.conf
Feb 20 09:45:20 np0005625203.localdomain podman[297642]: 2026-02-20 09:45:20.778223866 +0000 UTC m=+0.098945839 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:45:20 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:45:20 np0005625203.localdomain sudo[297668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Feb 20 09:45:20 np0005625203.localdomain sudo[297668]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:20 np0005625203.localdomain sudo[297668]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:20 np0005625203.localdomain sudo[297686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph
Feb 20 09:45:20 np0005625203.localdomain sudo[297686]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:20 np0005625203.localdomain sudo[297686]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:20 np0005625203.localdomain sudo[297704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:45:20 np0005625203.localdomain sudo[297704]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:20 np0005625203.localdomain sudo[297704]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:21 np0005625203.localdomain sudo[297722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:45:21 np0005625203.localdomain sudo[297722]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:21 np0005625203.localdomain sudo[297722]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:21 np0005625203.localdomain sudo[297740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:45:21 np0005625203.localdomain sudo[297740]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:21 np0005625203.localdomain sudo[297740]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:21 np0005625203.localdomain sudo[297774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:45:21 np0005625203.localdomain sudo[297774]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:21 np0005625203.localdomain sudo[297774]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:21 np0005625203.localdomain sudo[297792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:45:21 np0005625203.localdomain sudo[297792]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:21 np0005625203.localdomain sudo[297792]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:21 np0005625203.localdomain sudo[297810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Feb 20 09:45:21 np0005625203.localdomain sudo[297810]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:21 np0005625203.localdomain sudo[297810]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:21 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:45:21 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:45:21 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Updating np0005625201.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:45:21 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Updating np0005625201.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:45:21 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:45:21 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:45:21 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:45:21 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:45:21 np0005625203.localdomain sudo[297828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config
Feb 20 09:45:21 np0005625203.localdomain sudo[297828]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:21 np0005625203.localdomain sudo[297828]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:21 np0005625203.localdomain ceph-mgr[285471]: mgr.server handle_open ignoring open from mgr.np0005625202.arwxwo 172.18.0.106:0/1082098019; not ready for session (expect reconnect)
Feb 20 09:45:21 np0005625203.localdomain ceph-mgr[285471]: mgr.server handle_open ignoring open from mon.np0005625203 172.18.0.107:0/2465459824; not ready for session (expect reconnect)
Feb 20 09:45:21 np0005625203.localdomain ceph-mgr[285471]: mgr finish mon failed to return metadata for mon.np0005625203: (2) No such file or directory
Feb 20 09:45:21 np0005625203.localdomain ceph-mon[296066]: [20/Feb/2026:09:45:18] ENGINE Bus STARTING
Feb 20 09:45:21 np0005625203.localdomain ceph-mon[296066]: [20/Feb/2026:09:45:18] ENGINE Serving on http://172.18.0.107:8765
Feb 20 09:45:21 np0005625203.localdomain ceph-mon[296066]: [20/Feb/2026:09:45:18] ENGINE Serving on https://172.18.0.107:7150
Feb 20 09:45:21 np0005625203.localdomain ceph-mon[296066]: [20/Feb/2026:09:45:18] ENGINE Bus STARTED
Feb 20 09:45:21 np0005625203.localdomain ceph-mon[296066]: [20/Feb/2026:09:45:18] ENGINE Client ('172.18.0.107', 58052) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Feb 20 09:45:21 np0005625203.localdomain ceph-mon[296066]: pgmap v4: 177 pgs: 177 active+clean; 104 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:45:21 np0005625203.localdomain ceph-mon[296066]: mgrmap e33: np0005625203.lonygy(active, since 2s), standbys: np0005625204.exgrzx, np0005625200.ypbkax, np0005625201.mtnyvu
Feb 20 09:45:21 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:21 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:21 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:21 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:21 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:21 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:21 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:21 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:21 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:45:21 np0005625203.localdomain ceph-mon[296066]: mgrmap e34: np0005625203.lonygy(active, since 3s), standbys: np0005625204.exgrzx, np0005625201.mtnyvu
Feb 20 09:45:21 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:21 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:21 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Feb 20 09:45:21 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:21 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Feb 20 09:45:21 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:21 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:21 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Feb 20 09:45:21 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Feb 20 09:45:21 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Feb 20 09:45:21 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Feb 20 09:45:21 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:21 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config rm", "who": "osd/host:np0005625201", "name": "osd_memory_target"} : dispatch
Feb 20 09:45:21 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Feb 20 09:45:21 np0005625203.localdomain ceph-mon[296066]: Adjusting osd_memory_target on np0005625203.localdomain to 836.6M
Feb 20 09:45:21 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config rm", "who": "osd/host:np0005625201", "name": "osd_memory_target"} : dispatch
Feb 20 09:45:21 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Feb 20 09:45:21 np0005625203.localdomain ceph-mon[296066]: Unable to set osd_memory_target on np0005625203.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Feb 20 09:45:21 np0005625203.localdomain ceph-mon[296066]: Adjusting osd_memory_target on np0005625202.localdomain to 836.6M
Feb 20 09:45:21 np0005625203.localdomain ceph-mon[296066]: Unable to set osd_memory_target on np0005625202.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Feb 20 09:45:21 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:45:21 np0005625203.localdomain ceph-mon[296066]: pgmap v5: 177 pgs: 177 active+clean; 104 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:45:21 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:21 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:21 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Feb 20 09:45:21 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Feb 20 09:45:21 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Feb 20 09:45:21 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Feb 20 09:45:21 np0005625203.localdomain ceph-mon[296066]: Adjusting osd_memory_target on np0005625204.localdomain to 836.6M
Feb 20 09:45:21 np0005625203.localdomain ceph-mon[296066]: Unable to set osd_memory_target on np0005625204.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Feb 20 09:45:21 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:45:21 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 09:45:21 np0005625203.localdomain ceph-mon[296066]: Updating np0005625201.localdomain:/etc/ceph/ceph.conf
Feb 20 09:45:21 np0005625203.localdomain ceph-mon[296066]: Updating np0005625202.localdomain:/etc/ceph/ceph.conf
Feb 20 09:45:21 np0005625203.localdomain ceph-mon[296066]: Updating np0005625203.localdomain:/etc/ceph/ceph.conf
Feb 20 09:45:21 np0005625203.localdomain ceph-mon[296066]: Updating np0005625204.localdomain:/etc/ceph/ceph.conf
Feb 20 09:45:21 np0005625203.localdomain sudo[297846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config
Feb 20 09:45:21 np0005625203.localdomain sudo[297846]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:21 np0005625203.localdomain sudo[297846]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:21 np0005625203.localdomain sudo[297864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:45:21 np0005625203.localdomain sudo[297864]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:21 np0005625203.localdomain sudo[297864]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:21 np0005625203.localdomain sudo[297882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:45:21 np0005625203.localdomain sudo[297882]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:21 np0005625203.localdomain sudo[297882]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:21 np0005625203.localdomain sudo[297900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:45:21 np0005625203.localdomain sudo[297900]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:21 np0005625203.localdomain sudo[297900]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:21 np0005625203.localdomain sshd[297918]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:45:21 np0005625203.localdomain sudo[297936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:45:21 np0005625203.localdomain sudo[297936]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:21 np0005625203.localdomain sudo[297936]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:22 np0005625203.localdomain sudo[297954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:45:22 np0005625203.localdomain sudo[297954]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:22 np0005625203.localdomain sudo[297954]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:22 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Updating np0005625201.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 09:45:22 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Updating np0005625201.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 09:45:22 np0005625203.localdomain sudo[297972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:45:22 np0005625203.localdomain sudo[297972]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:22 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Updating np0005625202.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 09:45:22 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Updating np0005625202.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 09:45:22 np0005625203.localdomain sudo[297972]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:22 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Updating np0005625203.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 09:45:22 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Updating np0005625203.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 09:45:22 np0005625203.localdomain sudo[297990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Feb 20 09:45:22 np0005625203.localdomain sudo[297990]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:22 np0005625203.localdomain sudo[297990]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:22 np0005625203.localdomain sudo[298008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph
Feb 20 09:45:22 np0005625203.localdomain sudo[298008]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:22 np0005625203.localdomain sudo[298008]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:22 np0005625203.localdomain sshd[297918]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:45:22 np0005625203.localdomain sudo[298026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.client.admin.keyring.new
Feb 20 09:45:22 np0005625203.localdomain sudo[298026]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:22 np0005625203.localdomain sudo[298026]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:22 np0005625203.localdomain sudo[298044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:45:22 np0005625203.localdomain sudo[298044]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:22 np0005625203.localdomain sudo[298044]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:22 np0005625203.localdomain sudo[298062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.client.admin.keyring.new
Feb 20 09:45:22 np0005625203.localdomain sudo[298062]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:22 np0005625203.localdomain sudo[298062]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:22 np0005625203.localdomain ceph-mgr[285471]: mgr.server handle_open ignoring open from mon.np0005625203 172.18.0.107:0/2465459824; not ready for session (expect reconnect)
Feb 20 09:45:22 np0005625203.localdomain ceph-mgr[285471]: mgr finish mon failed to return metadata for mon.np0005625203: (2) No such file or directory
Feb 20 09:45:22 np0005625203.localdomain sudo[298096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.client.admin.keyring.new
Feb 20 09:45:22 np0005625203.localdomain ceph-mgr[285471]: log_channel(cluster) log [DBG] : pgmap v6: 177 pgs: 177 active+clean; 104 MiB data, 588 MiB used, 41 GiB / 42 GiB avail; 41 KiB/s rd, 0 B/s wr, 23 op/s
Feb 20 09:45:22 np0005625203.localdomain sudo[298096]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:22 np0005625203.localdomain sudo[298096]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:22 np0005625203.localdomain sudo[298114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.client.admin.keyring.new
Feb 20 09:45:22 np0005625203.localdomain sudo[298114]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:22 np0005625203.localdomain sudo[298114]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:22 np0005625203.localdomain sudo[298132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Feb 20 09:45:22 np0005625203.localdomain sudo[298132]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:22 np0005625203.localdomain sudo[298132]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:22 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 09:45:22 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 09:45:22 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 09:45:22 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 09:45:22 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Updating np0005625204.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 09:45:22 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Updating np0005625204.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 09:45:22 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Updating np0005625201.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 09:45:22 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Updating np0005625201.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 09:45:22 np0005625203.localdomain sudo[298150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config
Feb 20 09:45:22 np0005625203.localdomain sudo[298150]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:22 np0005625203.localdomain sudo[298150]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:22 np0005625203.localdomain sudo[298168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config
Feb 20 09:45:22 np0005625203.localdomain sudo[298168]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:22 np0005625203.localdomain sudo[298168]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:23 np0005625203.localdomain sudo[298186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring.new
Feb 20 09:45:23 np0005625203.localdomain sudo[298186]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:23 np0005625203.localdomain sudo[298186]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:23 np0005625203.localdomain sudo[298204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:45:23 np0005625203.localdomain sudo[298204]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:23 np0005625203.localdomain sudo[298204]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:23 np0005625203.localdomain sudo[298222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring.new
Feb 20 09:45:23 np0005625203.localdomain sudo[298222]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:23 np0005625203.localdomain sudo[298222]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:23 np0005625203.localdomain sudo[298256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring.new
Feb 20 09:45:23 np0005625203.localdomain sudo[298256]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:23 np0005625203.localdomain sudo[298256]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:23 np0005625203.localdomain sudo[298274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring.new
Feb 20 09:45:23 np0005625203.localdomain sudo[298274]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:23 np0005625203.localdomain sudo[298274]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:23 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 09:45:23 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 09:45:23 np0005625203.localdomain sudo[298292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring.new /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 09:45:23 np0005625203.localdomain sudo[298292]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:23 np0005625203.localdomain sudo[298292]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:23 np0005625203.localdomain ceph-mgr[285471]: mgr.server handle_open ignoring open from mon.np0005625203 172.18.0.107:0/2465459824; not ready for session (expect reconnect)
Feb 20 09:45:23 np0005625203.localdomain ceph-mgr[285471]: mgr finish mon failed to return metadata for mon.np0005625203: (2) No such file or directory
Feb 20 09:45:24 np0005625203.localdomain ceph-mgr[285471]: [progress INFO root] update: starting ev 2c8b6fa0-bdc0-4a63-9437-07b4ea0d6a70 (Updating node-proxy deployment (+4 -> 4))
Feb 20 09:45:24 np0005625203.localdomain ceph-mgr[285471]: [progress INFO root] complete: finished ev 2c8b6fa0-bdc0-4a63-9437-07b4ea0d6a70 (Updating node-proxy deployment (+4 -> 4))
Feb 20 09:45:24 np0005625203.localdomain ceph-mgr[285471]: [progress INFO root] Completed event 2c8b6fa0-bdc0-4a63-9437-07b4ea0d6a70 (Updating node-proxy deployment (+4 -> 4)) in 0 seconds
Feb 20 09:45:24 np0005625203.localdomain sudo[298310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:45:24 np0005625203.localdomain sudo[298310]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:24 np0005625203.localdomain sudo[298310]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:24 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.2 on np0005625202.localdomain
Feb 20 09:45:24 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.2 on np0005625202.localdomain
Feb 20 09:45:24 np0005625203.localdomain ceph-mgr[285471]: mgr.server handle_open ignoring open from mon.np0005625203 172.18.0.107:0/2465459824; not ready for session (expect reconnect)
Feb 20 09:45:24 np0005625203.localdomain ceph-mgr[285471]: mgr finish mon failed to return metadata for mon.np0005625203: (2) No such file or directory
Feb 20 09:45:24 np0005625203.localdomain ceph-mgr[285471]: log_channel(cluster) log [DBG] : pgmap v7: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail; 29 KiB/s rd, 0 B/s wr, 16 op/s
Feb 20 09:45:25 np0005625203.localdomain ceph-mgr[285471]: mgr.server handle_open ignoring open from mon.np0005625203 172.18.0.107:0/2465459824; not ready for session (expect reconnect)
Feb 20 09:45:25 np0005625203.localdomain ceph-mgr[285471]: mgr finish mon failed to return metadata for mon.np0005625203: (2) No such file or directory
Feb 20 09:45:25 np0005625203.localdomain ceph-mon[296066]: mgrmap e35: np0005625203.lonygy(active, since 4s), standbys: np0005625204.exgrzx, np0005625201.mtnyvu
Feb 20 09:45:25 np0005625203.localdomain ceph-mon[296066]: Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:45:25 np0005625203.localdomain ceph-mon[296066]: Updating np0005625201.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:45:25 np0005625203.localdomain ceph-mon[296066]: Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:45:25 np0005625203.localdomain ceph-mon[296066]: Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:45:25 np0005625203.localdomain ceph-mon[296066]: Standby manager daemon np0005625202.arwxwo started
Feb 20 09:45:25 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:45:25 np0005625203.localdomain ceph-mon[296066]: Updating np0005625201.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 09:45:25 np0005625203.localdomain ceph-mon[296066]: Updating np0005625202.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 09:45:25 np0005625203.localdomain ceph-mon[296066]: Updating np0005625203.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 09:45:25 np0005625203.localdomain ceph-mon[296066]: mgrmap e36: np0005625203.lonygy(active, since 5s), standbys: np0005625204.exgrzx, np0005625201.mtnyvu, np0005625202.arwxwo
Feb 20 09:45:25 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mgr metadata", "who": "np0005625202.arwxwo", "id": "np0005625202.arwxwo"} : dispatch
Feb 20 09:45:25 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:45:25 np0005625203.localdomain ceph-mon[296066]: pgmap v6: 177 pgs: 177 active+clean; 104 MiB data, 588 MiB used, 41 GiB / 42 GiB avail; 41 KiB/s rd, 0 B/s wr, 23 op/s
Feb 20 09:45:25 np0005625203.localdomain ceph-mon[296066]: Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 09:45:25 np0005625203.localdomain ceph-mon[296066]: Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 09:45:25 np0005625203.localdomain ceph-mon[296066]: Updating np0005625204.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 09:45:25 np0005625203.localdomain ceph-mon[296066]: Updating np0005625201.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 09:45:25 np0005625203.localdomain ceph-mon[296066]: Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 09:45:25 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:25 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:25 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:25 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:45:25 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:25 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:25 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:25 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:25 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:25 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:25 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 09:45:25 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Feb 20 09:45:25 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:45:25 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:45:25 np0005625203.localdomain podman[298328]: 2026-02-20 09:45:25.760047787 +0000 UTC m=+0.076137273 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, com.redhat.component=ubi9-minimal-container, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, distribution-scope=public, architecture=x86_64, container_name=openstack_network_exporter, name=ubi9/ubi-minimal, version=9.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z, build-date=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc.)
Feb 20 09:45:25 np0005625203.localdomain podman[298328]: 2026-02-20 09:45:25.804363715 +0000 UTC m=+0.120453151 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, distribution-scope=public, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1770267347, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, managed_by=edpm_ansible, build-date=2026-02-05T04:57:10Z, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=openstack_network_exporter, version=9.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, container_name=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z)
Feb 20 09:45:25 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:45:26 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.5 on np0005625202.localdomain
Feb 20 09:45:26 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.5 on np0005625202.localdomain
Feb 20 09:45:26 np0005625203.localdomain ceph-mgr[285471]: mgr.server handle_open ignoring open from mon.np0005625203 172.18.0.107:0/2465459824; not ready for session (expect reconnect)
Feb 20 09:45:26 np0005625203.localdomain ceph-mgr[285471]: mgr finish mon failed to return metadata for mon.np0005625203: (2) No such file or directory
Feb 20 09:45:26 np0005625203.localdomain ceph-mgr[285471]: log_channel(cluster) log [DBG] : pgmap v8: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail; 23 KiB/s rd, 0 B/s wr, 12 op/s
Feb 20 09:45:26 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:45:26 np0005625203.localdomain ceph-mgr[285471]: [progress INFO root] Writing back 50 completed events
Feb 20 09:45:26 np0005625203.localdomain podman[298347]: 2026-02-20 09:45:26.770627167 +0000 UTC m=+0.089105390 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3)
Feb 20 09:45:26 np0005625203.localdomain podman[298347]: 2026-02-20 09:45:26.807567153 +0000 UTC m=+0.126045346 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 20 09:45:26 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:45:27 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on np0005625203.localdomain
Feb 20 09:45:27 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on np0005625203.localdomain
Feb 20 09:45:27 np0005625203.localdomain sudo[298366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:45:27 np0005625203.localdomain sudo[298366]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:27 np0005625203.localdomain sudo[298366]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:27 np0005625203.localdomain ceph-mgr[285471]: mgr.server handle_open ignoring open from mon.np0005625203 172.18.0.107:0/2465459824; not ready for session (expect reconnect)
Feb 20 09:45:27 np0005625203.localdomain sudo[298384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:45:27 np0005625203.localdomain ceph-mgr[285471]: mgr finish mon failed to return metadata for mon.np0005625203: (2) No such file or directory
Feb 20 09:45:27 np0005625203.localdomain sudo[298384]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:27 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon osd.2 on np0005625202.localdomain
Feb 20 09:45:27 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:45:27 np0005625203.localdomain ceph-mon[296066]: pgmap v7: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail; 29 KiB/s rd, 0 B/s wr, 16 op/s
Feb 20 09:45:27 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:45:27 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:27 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:27 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:27 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:27 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Feb 20 09:45:27 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:45:27 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon osd.5 on np0005625202.localdomain
Feb 20 09:45:27 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:45:27 np0005625203.localdomain ceph-mon[296066]: pgmap v8: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail; 23 KiB/s rd, 0 B/s wr, 12 op/s
Feb 20 09:45:27 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:27 np0005625203.localdomain podman[298420]: 
Feb 20 09:45:28 np0005625203.localdomain podman[298420]: 2026-02-20 09:45:28.009960136 +0000 UTC m=+0.075171094 container create 749ee5bfa2cb8c7603e5a330b99ba4fbc3c89417e4d63c52e134712a7d3a487f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_moore, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, CEPH_POINT_RELEASE=, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., architecture=x86_64, org.opencontainers.image.created=2026-02-09T10:25:24Z, version=7, ceph=True, io.buildah.version=1.42.2, GIT_BRANCH=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2026-02-09T10:25:24Z, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Feb 20 09:45:28 np0005625203.localdomain systemd[1]: Started libpod-conmon-749ee5bfa2cb8c7603e5a330b99ba4fbc3c89417e4d63c52e134712a7d3a487f.scope.
Feb 20 09:45:28 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:45:28 np0005625203.localdomain podman[298420]: 2026-02-20 09:45:27.984777867 +0000 UTC m=+0.049988905 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 09:45:28 np0005625203.localdomain podman[298420]: 2026-02-20 09:45:28.095781202 +0000 UTC m=+0.160992190 container init 749ee5bfa2cb8c7603e5a330b99ba4fbc3c89417e4d63c52e134712a7d3a487f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_moore, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, name=rhceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, vcs-type=git, GIT_BRANCH=main, distribution-scope=public, version=7, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, io.openshift.expose-services=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, release=1770267347, build-date=2026-02-09T10:25:24Z)
Feb 20 09:45:28 np0005625203.localdomain podman[298420]: 2026-02-20 09:45:28.109539833 +0000 UTC m=+0.174750821 container start 749ee5bfa2cb8c7603e5a330b99ba4fbc3c89417e4d63c52e134712a7d3a487f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_moore, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, distribution-scope=public, architecture=x86_64, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_CLEAN=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, release=1770267347, CEPH_POINT_RELEASE=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, ceph=True, RELEASE=main, io.buildah.version=1.42.2)
Feb 20 09:45:28 np0005625203.localdomain podman[298420]: 2026-02-20 09:45:28.109842672 +0000 UTC m=+0.175053710 container attach 749ee5bfa2cb8c7603e5a330b99ba4fbc3c89417e4d63c52e134712a7d3a487f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_moore, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, GIT_CLEAN=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, ceph=True, vcs-type=git, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, com.redhat.component=rhceph-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, release=1770267347, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.buildah.version=1.42.2)
Feb 20 09:45:28 np0005625203.localdomain exciting_moore[298435]: 167 167
Feb 20 09:45:28 np0005625203.localdomain systemd[1]: libpod-749ee5bfa2cb8c7603e5a330b99ba4fbc3c89417e4d63c52e134712a7d3a487f.scope: Deactivated successfully.
Feb 20 09:45:28 np0005625203.localdomain podman[298420]: 2026-02-20 09:45:28.114331092 +0000 UTC m=+0.179542120 container died 749ee5bfa2cb8c7603e5a330b99ba4fbc3c89417e4d63c52e134712a7d3a487f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_moore, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, release=1770267347, architecture=x86_64, GIT_CLEAN=True, CEPH_POINT_RELEASE=, io.buildah.version=1.42.2, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, vcs-type=git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, name=rhceph)
Feb 20 09:45:28 np0005625203.localdomain ceph-mgr[285471]: log_channel(audit) log [DBG] : from='client.44411 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 20 09:45:28 np0005625203.localdomain podman[298440]: 2026-02-20 09:45:28.219918897 +0000 UTC m=+0.090010238 container remove 749ee5bfa2cb8c7603e5a330b99ba4fbc3c89417e4d63c52e134712a7d3a487f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=exciting_moore, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, ceph=True, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_BRANCH=main, architecture=x86_64, build-date=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., io.buildah.version=1.42.2, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, release=1770267347)
Feb 20 09:45:28 np0005625203.localdomain systemd[1]: libpod-conmon-749ee5bfa2cb8c7603e5a330b99ba4fbc3c89417e4d63c52e134712a7d3a487f.scope: Deactivated successfully.
Feb 20 09:45:28 np0005625203.localdomain sudo[298384]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:28 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.4 on np0005625203.localdomain
Feb 20 09:45:28 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.4 on np0005625203.localdomain
Feb 20 09:45:28 np0005625203.localdomain ceph-mgr[285471]: mgr.server handle_open ignoring open from mon.np0005625203 172.18.0.107:0/2465459824; not ready for session (expect reconnect)
Feb 20 09:45:28 np0005625203.localdomain ceph-mgr[285471]: mgr finish mon failed to return metadata for mon.np0005625203: (2) No such file or directory
Feb 20 09:45:28 np0005625203.localdomain sudo[298461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:45:28 np0005625203.localdomain sudo[298461]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:28 np0005625203.localdomain sudo[298461]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:28 np0005625203.localdomain sudo[298480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:45:28 np0005625203.localdomain sudo[298480]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:28 np0005625203.localdomain ceph-mgr[285471]: log_channel(cluster) log [DBG] : pgmap v9: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 0 B/s wr, 10 op/s
Feb 20 09:45:28 np0005625203.localdomain podman[240359]: time="2026-02-20T09:45:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:45:29 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-d47baae9f9939a95aade5514c16e4fc61ec62594c5787b50d403250bf2c30341-merged.mount: Deactivated successfully.
Feb 20 09:45:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:45:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154068 "" "Go-http-client/1.1"
Feb 20 09:45:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:45:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17786 "" "Go-http-client/1.1"
Feb 20 09:45:29 np0005625203.localdomain podman[298514]: 
Feb 20 09:45:29 np0005625203.localdomain podman[298514]: 2026-02-20 09:45:29.060127244 +0000 UTC m=+0.070592190 container create 5aa6067ae00c9cd6ecb8168df10af1118b0e9fa78e3c11cdda24457314ffbdd2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_cartwright, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., distribution-scope=public, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, build-date=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2, GIT_CLEAN=True, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph)
Feb 20 09:45:29 np0005625203.localdomain systemd[1]: Started libpod-conmon-5aa6067ae00c9cd6ecb8168df10af1118b0e9fa78e3c11cdda24457314ffbdd2.scope.
Feb 20 09:45:29 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:45:29 np0005625203.localdomain podman[298514]: 2026-02-20 09:45:29.119557944 +0000 UTC m=+0.130022920 container init 5aa6067ae00c9cd6ecb8168df10af1118b0e9fa78e3c11cdda24457314ffbdd2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_cartwright, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, name=rhceph, io.openshift.expose-services=, build-date=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, io.buildah.version=1.42.2, RELEASE=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, version=7, GIT_BRANCH=main, release=1770267347, architecture=x86_64, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-09T10:25:24Z, distribution-scope=public, vcs-type=git)
Feb 20 09:45:29 np0005625203.localdomain podman[298514]: 2026-02-20 09:45:29.026272694 +0000 UTC m=+0.036737700 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 09:45:29 np0005625203.localdomain podman[298514]: 2026-02-20 09:45:29.12996915 +0000 UTC m=+0.140434126 container start 5aa6067ae00c9cd6ecb8168df10af1118b0e9fa78e3c11cdda24457314ffbdd2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_cartwright, build-date=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, io.buildah.version=1.42.2, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, vendor=Red Hat, Inc., distribution-scope=public, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, ceph=True, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, GIT_CLEAN=True, version=7, description=Red Hat Ceph Storage 7)
Feb 20 09:45:29 np0005625203.localdomain podman[298514]: 2026-02-20 09:45:29.130202857 +0000 UTC m=+0.140667873 container attach 5aa6067ae00c9cd6ecb8168df10af1118b0e9fa78e3c11cdda24457314ffbdd2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_cartwright, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, RELEASE=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2026-02-09T10:25:24Z, distribution-scope=public, ceph=True, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, vcs-type=git, version=7)
Feb 20 09:45:29 np0005625203.localdomain sad_cartwright[298529]: 167 167
Feb 20 09:45:29 np0005625203.localdomain systemd[1]: libpod-5aa6067ae00c9cd6ecb8168df10af1118b0e9fa78e3c11cdda24457314ffbdd2.scope: Deactivated successfully.
Feb 20 09:45:29 np0005625203.localdomain podman[298514]: 2026-02-20 09:45:29.13220888 +0000 UTC m=+0.142673836 container died 5aa6067ae00c9cd6ecb8168df10af1118b0e9fa78e3c11cdda24457314ffbdd2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_cartwright, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, name=rhceph, com.redhat.component=rhceph-container, vcs-type=git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, build-date=2026-02-09T10:25:24Z, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.42.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_BRANCH=main, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Feb 20 09:45:29 np0005625203.localdomain podman[298534]: 2026-02-20 09:45:29.222870217 +0000 UTC m=+0.079339134 container remove 5aa6067ae00c9cd6ecb8168df10af1118b0e9fa78e3c11cdda24457314ffbdd2 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sad_cartwright, vcs-type=git, vendor=Red Hat, Inc., RELEASE=main, build-date=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, release=1770267347, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, distribution-scope=public, description=Red Hat Ceph Storage 7, version=7, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, io.buildah.version=1.42.2)
Feb 20 09:45:29 np0005625203.localdomain systemd[1]: libpod-conmon-5aa6067ae00c9cd6ecb8168df10af1118b0e9fa78e3c11cdda24457314ffbdd2.scope: Deactivated successfully.
Feb 20 09:45:29 np0005625203.localdomain sudo[298480]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:29 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on np0005625204.localdomain
Feb 20 09:45:29 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on np0005625204.localdomain
Feb 20 09:45:29 np0005625203.localdomain ceph-mgr[285471]: mgr.server handle_open ignoring open from mon.np0005625203 172.18.0.107:0/2465459824; not ready for session (expect reconnect)
Feb 20 09:45:29 np0005625203.localdomain ceph-mgr[285471]: mgr finish mon failed to return metadata for mon.np0005625203: (2) No such file or directory
Feb 20 09:45:29 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:29 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:29 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:29 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:29 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Feb 20 09:45:29 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:45:29 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon osd.1 on np0005625203.localdomain
Feb 20 09:45:29 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:45:29 np0005625203.localdomain ceph-mon[296066]: from='client.44411 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 20 09:45:29 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:29 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:29 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:29 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:29 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Feb 20 09:45:29 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:45:29 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon osd.4 on np0005625203.localdomain
Feb 20 09:45:29 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:45:29 np0005625203.localdomain ceph-mon[296066]: pgmap v9: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 0 B/s wr, 10 op/s
Feb 20 09:45:30 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-1810e7ecb066b0afca83479e5eb5859cd78055d16dd521f2768f27486fefc244-merged.mount: Deactivated successfully.
Feb 20 09:45:30 np0005625203.localdomain ceph-mgr[285471]: mgr.server handle_open ignoring open from mon.np0005625203 172.18.0.107:0/2465459824; not ready for session (expect reconnect)
Feb 20 09:45:30 np0005625203.localdomain ceph-mgr[285471]: mgr finish mon failed to return metadata for mon.np0005625203: (2) No such file or directory
Feb 20 09:45:30 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.3 on np0005625204.localdomain
Feb 20 09:45:30 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.3 on np0005625204.localdomain
Feb 20 09:45:30 np0005625203.localdomain ceph-mgr[285471]: log_channel(cluster) log [DBG] : pgmap v10: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s
Feb 20 09:45:30 np0005625203.localdomain ceph-mgr[285471]: log_channel(audit) log [DBG] : from='client.27612 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 09:45:30 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO root] Saving service mon spec with placement label:mon
Feb 20 09:45:30 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Saving service mon spec with placement label:mon
Feb 20 09:45:31 np0005625203.localdomain ceph-mgr[285471]: mgr.server handle_open ignoring open from mon.np0005625203 172.18.0.107:0/2465459824; not ready for session (expect reconnect)
Feb 20 09:45:31 np0005625203.localdomain ceph-mgr[285471]: mgr finish mon failed to return metadata for mon.np0005625203: (2) No such file or directory
Feb 20 09:45:31 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005625204 (monmap changed)...
Feb 20 09:45:31 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005625204 (monmap changed)...
Feb 20 09:45:31 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005625204 on np0005625204.localdomain
Feb 20 09:45:31 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005625204 on np0005625204.localdomain
Feb 20 09:45:31 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:31 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:31 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:31 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:31 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Feb 20 09:45:31 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:45:31 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon osd.0 on np0005625204.localdomain
Feb 20 09:45:31 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:45:31 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:31 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:31 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:31 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:31 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:45:31 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Feb 20 09:45:31 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:45:31 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon osd.3 on np0005625204.localdomain
Feb 20 09:45:31 np0005625203.localdomain ceph-mon[296066]: pgmap v10: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s
Feb 20 09:45:31 np0005625203.localdomain ceph-mon[296066]: from='client.27612 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 09:45:31 np0005625203.localdomain ceph-mon[296066]: Saving service mon spec with placement label:mon
Feb 20 09:45:31 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:31 np0005625203.localdomain sshd[298558]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:45:32 np0005625203.localdomain sshd[298558]: Received disconnect from 5.253.59.68 port 34900:11: Bye Bye [preauth]
Feb 20 09:45:32 np0005625203.localdomain sshd[298558]: Disconnected from authenticating user root 5.253.59.68 port 34900 [preauth]
Feb 20 09:45:32 np0005625203.localdomain ceph-mgr[285471]: log_channel(audit) log [DBG] : from='client.27618 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005625203", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 20 09:45:32 np0005625203.localdomain ceph-mgr[285471]: mgr.server handle_open ignoring open from mon.np0005625203 172.18.0.107:0/2465459824; not ready for session (expect reconnect)
Feb 20 09:45:32 np0005625203.localdomain ceph-mgr[285471]: mgr finish mon failed to return metadata for mon.np0005625203: (2) No such file or directory
Feb 20 09:45:32 np0005625203.localdomain ceph-mgr[285471]: [progress INFO root] update: starting ev 9a8ca13d-dd78-4985-9997-816257107306 (Updating node-proxy deployment (+4 -> 4))
Feb 20 09:45:32 np0005625203.localdomain ceph-mgr[285471]: [progress INFO root] complete: finished ev 9a8ca13d-dd78-4985-9997-816257107306 (Updating node-proxy deployment (+4 -> 4))
Feb 20 09:45:32 np0005625203.localdomain ceph-mgr[285471]: [progress INFO root] Completed event 9a8ca13d-dd78-4985-9997-816257107306 (Updating node-proxy deployment (+4 -> 4)) in 0 seconds
Feb 20 09:45:32 np0005625203.localdomain ceph-mgr[285471]: log_channel(cluster) log [DBG] : pgmap v11: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s
Feb 20 09:45:32 np0005625203.localdomain sudo[298560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:45:32 np0005625203.localdomain sudo[298560]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:32 np0005625203.localdomain sudo[298560]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:32 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005625201 (monmap changed)...
Feb 20 09:45:32 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005625201 (monmap changed)...
Feb 20 09:45:32 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005625201 on np0005625201.localdomain
Feb 20 09:45:32 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005625201 on np0005625201.localdomain
Feb 20 09:45:33 np0005625203.localdomain ceph-mgr[285471]: mgr.server handle_open ignoring open from mon.np0005625203 172.18.0.107:0/2465459824; not ready for session (expect reconnect)
Feb 20 09:45:33 np0005625203.localdomain ceph-mgr[285471]: mgr finish mon failed to return metadata for mon.np0005625203: (2) No such file or directory
Feb 20 09:45:33 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005625202 (monmap changed)...
Feb 20 09:45:33 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005625202 (monmap changed)...
Feb 20 09:45:33 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005625202 on np0005625202.localdomain
Feb 20 09:45:33 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005625202 on np0005625202.localdomain
Feb 20 09:45:34 np0005625203.localdomain ceph-mgr[285471]: mgr.server handle_open ignoring open from mon.np0005625203 172.18.0.107:0/2465459824; not ready for session (expect reconnect)
Feb 20 09:45:34 np0005625203.localdomain ceph-mgr[285471]: mgr finish mon failed to return metadata for mon.np0005625203: (2) No such file or directory
Feb 20 09:45:34 np0005625203.localdomain ceph-mgr[285471]: log_channel(cluster) log [DBG] : pgmap v12: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:45:35 np0005625203.localdomain ceph-mgr[285471]: mgr.server handle_open ignoring open from mon.np0005625203 172.18.0.107:0/2465459824; not ready for session (expect reconnect)
Feb 20 09:45:35 np0005625203.localdomain ceph-mgr[285471]: mgr finish mon failed to return metadata for mon.np0005625203: (2) No such file or directory
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0.
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:45:35.701168) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580735701351, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 11865, "num_deletes": 291, "total_data_size": 21063187, "memory_usage": 21951936, "flush_reason": "Manual Compaction"}
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: Reconfiguring mon.np0005625204 (monmap changed)...
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon mon.np0005625204 on np0005625204.localdomain
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: from='client.27618 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005625203", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: pgmap v11: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: Reconfiguring mon.np0005625201 (monmap changed)...
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon mon.np0005625201 on np0005625201.localdomain
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: Reconfiguring mon.np0005625202 (monmap changed)...
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon mon.np0005625202 on np0005625202.localdomain
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.200:0/191592331' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580735777253, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 18263132, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6, "largest_seqno": 11870, "table_properties": {"data_size": 18199526, "index_size": 34444, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28101, "raw_key_size": 297262, "raw_average_key_size": 26, "raw_value_size": 18008760, "raw_average_value_size": 1604, "num_data_blocks": 1320, "num_entries": 11224, "num_filter_entries": 11224, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580698, "oldest_key_time": 1771580698, "file_creation_time": 1771580735, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d7091244-ec7b-4a4c-98bf-27480b1bb7f4", "db_session_id": "XSHOJ401GNN3F43CMPC3", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}}
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 76151 microseconds, and 35849 cpu microseconds.
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:45:35.777345) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 18263132 bytes OK
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:45:35.777377) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:45:35.779706) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:45:35.779771) EVENT_LOG_v1 {"time_micros": 1771580735779752, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:45:35.779806) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 20980525, prev total WAL file size 20980525, number of live WAL files 2.
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:45:35.784374) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130353432' seq:72057594037927935, type:22 .. '7061786F73003130373934' seq:0, type:0; will stop at (end)
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(17MB) 8(1887B)]
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580735784492, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 18265019, "oldest_snapshot_seqno": -1}
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 10973 keys, 18259771 bytes, temperature: kUnknown
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580735879586, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 18259771, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18196858, "index_size": 34396, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27461, "raw_key_size": 292406, "raw_average_key_size": 26, "raw_value_size": 18009411, "raw_average_value_size": 1641, "num_data_blocks": 1319, "num_entries": 10973, "num_filter_entries": 10973, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580698, "oldest_key_time": 0, "file_creation_time": 1771580735, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d7091244-ec7b-4a4c-98bf-27480b1bb7f4", "db_session_id": "XSHOJ401GNN3F43CMPC3", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}}
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:45:35.879969) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 18259771 bytes
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:45:35.881956) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 191.9 rd, 191.8 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(17.4, 0.0 +0.0 blob) out(17.4 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 11229, records dropped: 256 output_compression: NoCompression
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:45:35.881985) EVENT_LOG_v1 {"time_micros": 1771580735881973, "job": 4, "event": "compaction_finished", "compaction_time_micros": 95181, "compaction_time_cpu_micros": 45812, "output_level": 6, "num_output_files": 1, "total_output_size": 18259771, "num_input_records": 11229, "num_output_records": 10973, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580735884541, "job": 4, "event": "table_file_deletion", "file_number": 14}
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580735884601, "job": 4, "event": "table_file_deletion", "file_number": 8}
Feb 20 09:45:35 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:45:35.784260) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:45:36 np0005625203.localdomain ceph-mgr[285471]: mgr.server handle_open ignoring open from mon.np0005625203 172.18.0.107:0/2465459824; not ready for session (expect reconnect)
Feb 20 09:45:36 np0005625203.localdomain ceph-mgr[285471]: mgr finish mon failed to return metadata for mon.np0005625203: (2) No such file or directory
Feb 20 09:45:36 np0005625203.localdomain ceph-mgr[285471]: log_channel(cluster) log [DBG] : pgmap v13: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:45:36 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:45:36 np0005625203.localdomain ceph-mgr[285471]: [progress INFO root] Writing back 50 completed events
Feb 20 09:45:36 np0005625203.localdomain podman[298579]: 2026-02-20 09:45:36.780086946 +0000 UTC m=+0.090523474 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 09:45:36 np0005625203.localdomain podman[298579]: 2026-02-20 09:45:36.793138434 +0000 UTC m=+0.103574922 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 20 09:45:36 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 09:45:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:45:37 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 09:45:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:45:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:45:37 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 09:45:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:45:37 np0005625203.localdomain ceph-mgr[285471]: mgr.server handle_open ignoring open from mon.np0005625203 172.18.0.107:0/2465459824; not ready for session (expect reconnect)
Feb 20 09:45:37 np0005625203.localdomain ceph-mgr[285471]: mgr finish mon failed to return metadata for mon.np0005625203: (2) No such file or directory
Feb 20 09:45:38 np0005625203.localdomain ceph-mgr[285471]: mgr.server handle_open ignoring open from mon.np0005625203 172.18.0.107:0/2465459824; not ready for session (expect reconnect)
Feb 20 09:45:38 np0005625203.localdomain ceph-mgr[285471]: mgr finish mon failed to return metadata for mon.np0005625203: (2) No such file or directory
Feb 20 09:45:38 np0005625203.localdomain ceph-mgr[285471]: log_channel(cluster) log [DBG] : pgmap v14: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:45:38 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:45:38 np0005625203.localdomain podman[298602]: 2026-02-20 09:45:38.763099711 +0000 UTC m=+0.075823644 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 20 09:45:38 np0005625203.localdomain podman[298602]: 2026-02-20 09:45:38.772502525 +0000 UTC m=+0.085226438 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 20 09:45:38 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:45:39 np0005625203.localdomain ceph-mgr[285471]: mgr finish mon failed to return metadata for mon.np0005625203: (22) Invalid argument
Feb 20 09:45:39 np0005625203.localdomain ceph-mgr[285471]: mgr.server handle_open ignoring open from mon.np0005625203 172.18.0.107:0/2465459824; not ready for session (expect reconnect)
Feb 20 09:45:39 np0005625203.localdomain ceph-mgr[285471]: mgr finish mon failed to return metadata for mon.np0005625203: (22) Invalid argument
Feb 20 09:45:39 np0005625203.localdomain sshd[294272]: Received disconnect from 192.168.122.11 port 45470:11: disconnected by user
Feb 20 09:45:39 np0005625203.localdomain sshd[294272]: Disconnected from user tripleo-admin 192.168.122.11 port 45470
Feb 20 09:45:39 np0005625203.localdomain sshd[294247]: pam_unix(sshd:session): session closed for user tripleo-admin
Feb 20 09:45:39 np0005625203.localdomain systemd[1]: session-67.scope: Deactivated successfully.
Feb 20 09:45:39 np0005625203.localdomain systemd[1]: session-67.scope: Consumed 1.824s CPU time.
Feb 20 09:45:39 np0005625203.localdomain systemd-logind[759]: Session 67 logged out. Waiting for processes to exit.
Feb 20 09:45:39 np0005625203.localdomain systemd-logind[759]: Removed session 67.
Feb 20 09:45:39 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@-1(probing) e12  my rank is now 3 (was -1)
Feb 20 09:45:39 np0005625203.localdomain ceph-mon[296066]: log_channel(cluster) log [INF] : mon.np0005625203 calling monitor election
Feb 20 09:45:39 np0005625203.localdomain ceph-mon[296066]: paxos.3).electionLogic(0) init, first boot, initializing epoch at 1 
Feb 20 09:45:39 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@3(electing) e12 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 20 09:45:40 np0005625203.localdomain ceph-mgr[285471]: mgr.server handle_open ignoring open from mon.np0005625203 172.18.0.107:0/2465459824; not ready for session (expect reconnect)
Feb 20 09:45:40 np0005625203.localdomain ceph-mgr[285471]: mgr finish mon failed to return metadata for mon.np0005625203: (22) Invalid argument
Feb 20 09:45:40 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@3(electing) e12 handle_auth_request failed to assign global_id
Feb 20 09:45:40 np0005625203.localdomain ceph-mgr[285471]: log_channel(cluster) log [DBG] : pgmap v15: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:45:40 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@3(electing) e12 handle_auth_request failed to assign global_id
Feb 20 09:45:41 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@3(electing) e12 handle_auth_request failed to assign global_id
Feb 20 09:45:41 np0005625203.localdomain ceph-mgr[285471]: mgr.server handle_open ignoring open from mon.np0005625203 172.18.0.107:0/2465459824; not ready for session (expect reconnect)
Feb 20 09:45:41 np0005625203.localdomain ceph-mgr[285471]: mgr finish mon failed to return metadata for mon.np0005625203: (22) Invalid argument
Feb 20 09:45:41 np0005625203.localdomain sshd[298625]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:45:42 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@3(electing) e12 handle_auth_request failed to assign global_id
Feb 20 09:45:42 np0005625203.localdomain ceph-mgr[285471]: mgr.server handle_open ignoring open from mon.np0005625203 172.18.0.107:0/2465459824; not ready for session (expect reconnect)
Feb 20 09:45:42 np0005625203.localdomain ceph-mgr[285471]: mgr finish mon failed to return metadata for mon.np0005625203: (22) Invalid argument
Feb 20 09:45:42 np0005625203.localdomain ceph-mgr[285471]: log_channel(cluster) log [DBG] : pgmap v16: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:45:43 np0005625203.localdomain sshd[298625]: Invalid user ivan from 103.61.123.132 port 55974
Feb 20 09:45:43 np0005625203.localdomain sshd[298625]: Received disconnect from 103.61.123.132 port 55974:11: Bye Bye [preauth]
Feb 20 09:45:43 np0005625203.localdomain sshd[298625]: Disconnected from invalid user ivan 103.61.123.132 port 55974 [preauth]
Feb 20 09:45:43 np0005625203.localdomain ceph-mgr[285471]: mgr.server handle_open ignoring open from mon.np0005625203 172.18.0.107:0/2465459824; not ready for session (expect reconnect)
Feb 20 09:45:43 np0005625203.localdomain ceph-mgr[285471]: mgr finish mon failed to return metadata for mon.np0005625203: (22) Invalid argument
Feb 20 09:45:44 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@3(electing) e12 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 20 09:45:44 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@3(peon) e12 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Feb 20 09:45:44 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@3(peon) e12 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Feb 20 09:45:44 np0005625203.localdomain ceph-mon[296066]: pgmap v13: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:45:44 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:44 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:45:44 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:45:44 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@3(peon) e12 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 20 09:45:44 np0005625203.localdomain ceph-mon[296066]: mgrc update_daemon_metadata mon.np0005625203 metadata {addrs=[v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.1-381.el9cp (984f410e2a30899deb131725765b62212b1621db) reef (stable),ceph_version_short=18.2.1-381.el9cp,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=np0005625203.localdomain,container_image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=rhel,distro_description=Red Hat Enterprise Linux 9.7 (Plow),distro_version=9.7,hostname=np0005625203.localdomain,kernel_description=#1 SMP PREEMPT_DYNAMIC Wed Apr 12 10:45:03 EDT 2023,kernel_version=5.14.0-284.11.1.el9_2.x86_64,mem_swap_kb=1048572,mem_total_kb=16116612,os=Linux}
Feb 20 09:45:44 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@3(peon) e12 handle_auth_request failed to assign global_id
Feb 20 09:45:44 np0005625203.localdomain ceph-mon[296066]: mon.np0005625201 calling monitor election
Feb 20 09:45:44 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625201"} : dispatch
Feb 20 09:45:44 np0005625203.localdomain ceph-mon[296066]: mon.np0005625204 calling monitor election
Feb 20 09:45:44 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625202"} : dispatch
Feb 20 09:45:44 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:45:44 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625204"} : dispatch
Feb 20 09:45:44 np0005625203.localdomain ceph-mon[296066]: mon.np0005625202 calling monitor election
Feb 20 09:45:44 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:45:44 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203 calling monitor election
Feb 20 09:45:44 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:45:44 np0005625203.localdomain ceph-mon[296066]: pgmap v15: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:45:44 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:45:44 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:45:44 np0005625203.localdomain ceph-mon[296066]: pgmap v16: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:45:44 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:45:44 np0005625203.localdomain ceph-mon[296066]: mon.np0005625201 is new leader, mons np0005625201,np0005625204,np0005625202,np0005625203 in quorum (ranks 0,1,2,3)
Feb 20 09:45:44 np0005625203.localdomain ceph-mon[296066]: monmap epoch 12
Feb 20 09:45:44 np0005625203.localdomain ceph-mon[296066]: fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:45:44 np0005625203.localdomain ceph-mon[296066]: last_changed 2026-02-20T09:45:39.346453+0000
Feb 20 09:45:44 np0005625203.localdomain ceph-mon[296066]: created 2026-02-20T07:36:51.191305+0000
Feb 20 09:45:44 np0005625203.localdomain ceph-mon[296066]: min_mon_release 18 (reef)
Feb 20 09:45:44 np0005625203.localdomain ceph-mon[296066]: election_strategy: 1
Feb 20 09:45:44 np0005625203.localdomain ceph-mon[296066]: 0: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005625201
Feb 20 09:45:44 np0005625203.localdomain ceph-mon[296066]: 1: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005625204
Feb 20 09:45:44 np0005625203.localdomain ceph-mon[296066]: 2: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005625202
Feb 20 09:45:44 np0005625203.localdomain ceph-mon[296066]: 3: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005625203
Feb 20 09:45:44 np0005625203.localdomain ceph-mon[296066]: fsmap cephfs:1 {0=mds.np0005625203.zsrwgk=up:active} 2 up:standby
Feb 20 09:45:44 np0005625203.localdomain ceph-mon[296066]: osdmap e89: 6 total, 6 up, 6 in
Feb 20 09:45:44 np0005625203.localdomain ceph-mon[296066]: mgrmap e36: np0005625203.lonygy(active, since 27s), standbys: np0005625204.exgrzx, np0005625201.mtnyvu, np0005625202.arwxwo
Feb 20 09:45:44 np0005625203.localdomain ceph-mon[296066]: overall HEALTH_OK
Feb 20 09:45:44 np0005625203.localdomain ceph-mgr[285471]: mgr.server handle_open ignoring open from mon.np0005625203 172.18.0.107:0/2465459824; not ready for session (expect reconnect)
Feb 20 09:45:44 np0005625203.localdomain ceph-mgr[285471]: log_channel(cluster) log [DBG] : pgmap v17: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:45:44 np0005625203.localdomain ceph-mgr[285471]: log_channel(audit) log [DBG] : from='client.27628 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005625201", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 20 09:45:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:45:45.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:45:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:45:45.342 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 20 09:45:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:45:45.366 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 20 09:45:45 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:45:45 np0005625203.localdomain ceph-mon[296066]: pgmap v17: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:45:45 np0005625203.localdomain ceph-mon[296066]: from='client.27628 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005625201", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 20 09:45:45 np0005625203.localdomain ceph-mgr[285471]: mgr.server handle_report got status from non-daemon mon.np0005625203
Feb 20 09:45:45 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:45:45.546+0000 7f57f2921640 -1 mgr.server handle_report got status from non-daemon mon.np0005625203
Feb 20 09:45:45 np0005625203.localdomain sshd[298627]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:45:46 np0005625203.localdomain ceph-mgr[285471]: log_channel(audit) log [DBG] : from='client.27636 -' entity='client.admin' cmd=[{"prefix": "orch daemon rm", "names": ["mon.np0005625201"], "force": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 20 09:45:46 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO root] Remove daemons mon.np0005625201
Feb 20 09:45:46 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Remove daemons mon.np0005625201
Feb 20 09:45:46 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.services.cephadmservice] Safe to remove mon.np0005625201: new quorum should be ['np0005625204', 'np0005625202', 'np0005625203'] (from ['np0005625204', 'np0005625202', 'np0005625203'])
Feb 20 09:45:46 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Safe to remove mon.np0005625201: new quorum should be ['np0005625204', 'np0005625202', 'np0005625203'] (from ['np0005625204', 'np0005625202', 'np0005625203'])
Feb 20 09:45:46 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.services.cephadmservice] Removing monitor np0005625201 from monmap...
Feb 20 09:45:46 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Removing monitor np0005625201 from monmap...
Feb 20 09:45:46 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Removing daemon mon.np0005625201 from np0005625201.localdomain -- ports []
Feb 20 09:45:46 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Removing daemon mon.np0005625201 from np0005625201.localdomain -- ports []
Feb 20 09:45:46 np0005625203.localdomain ceph-mgr[285471]: client.34441 ms_handle_reset on v2:172.18.0.108:3300/0
Feb 20 09:45:46 np0005625203.localdomain ceph-mgr[285471]: client.0 ms_handle_reset on v2:172.18.0.103:3300/0
Feb 20 09:45:46 np0005625203.localdomain ceph-mgr[285471]: client.0 ms_handle_reset on v2:172.18.0.103:3300/0
Feb 20 09:45:46 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@3(peon) e13  my rank is now 2 (was 3)
Feb 20 09:45:46 np0005625203.localdomain ceph-mgr[285471]: client.34441 ms_handle_reset on v2:172.18.0.104:3300/0
Feb 20 09:45:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:45:46.366 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:45:46 np0005625203.localdomain ceph-mon[296066]: log_channel(cluster) log [INF] : mon.np0005625203 calling monitor election
Feb 20 09:45:46 np0005625203.localdomain ceph-mon[296066]: paxos.2).electionLogic(50) init, last seen epoch 50
Feb 20 09:45:46 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@2(electing) e13 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 20 09:45:46 np0005625203.localdomain ceph-mgr[285471]: client.27594 ms_handle_reset on v2:172.18.0.105:3300/0
Feb 20 09:45:46 np0005625203.localdomain ceph-mgr[285471]: --2- 172.18.0.107:0/334797078 >> [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] conn(0x557a39fa4400 0x557a3acf2680 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request get_initial_auth_request returned -2
Feb 20 09:45:46 np0005625203.localdomain ceph-mgr[285471]: --2- 172.18.0.107:0/2403717933 >> [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] conn(0x557a39fa5000 0x557a3acf1b80 unknown :-1 s=AUTH_CONNECTING pgs=0 cs=0 l=1 rev1=1 crypto rx=0 tx=0 comp rx=0 tx=0).send_auth_request get_initial_auth_request returned -2
Feb 20 09:45:46 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@2(electing) e13 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 20 09:45:46 np0005625203.localdomain ceph-mgr[285471]: log_channel(cluster) log [DBG] : pgmap v18: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:45:46 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:45:46 np0005625203.localdomain ceph-mgr[285471]: [volumes INFO mgr_util] scanning for idle connections..
Feb 20 09:45:46 np0005625203.localdomain ceph-mgr[285471]: [volumes INFO mgr_util] cleaning up connections: []
Feb 20 09:45:46 np0005625203.localdomain systemd[290999]: Starting Mark boot as successful...
Feb 20 09:45:46 np0005625203.localdomain systemd[1]: tmp-crun.gUvTmU.mount: Deactivated successfully.
Feb 20 09:45:46 np0005625203.localdomain podman[298629]: 2026-02-20 09:45:46.804150561 +0000 UTC m=+0.097917475 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 20 09:45:46 np0005625203.localdomain systemd[290999]: Finished Mark boot as successful.
Feb 20 09:45:46 np0005625203.localdomain podman[298629]: 2026-02-20 09:45:46.812274036 +0000 UTC m=+0.106040970 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:45:46 np0005625203.localdomain ceph-mgr[285471]: [volumes INFO mgr_util] scanning for idle connections..
Feb 20 09:45:46 np0005625203.localdomain ceph-mgr[285471]: [volumes INFO mgr_util] cleaning up connections: []
Feb 20 09:45:46 np0005625203.localdomain ceph-mgr[285471]: [volumes INFO mgr_util] scanning for idle connections..
Feb 20 09:45:46 np0005625203.localdomain ceph-mgr[285471]: [volumes INFO mgr_util] cleaning up connections: []
Feb 20 09:45:46 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:45:47 np0005625203.localdomain sshd[298627]: Invalid user vncuser from 152.32.129.236 port 51144
Feb 20 09:45:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:45:47.343 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:45:47 np0005625203.localdomain sshd[298627]: Received disconnect from 152.32.129.236 port 51144:11: Bye Bye [preauth]
Feb 20 09:45:47 np0005625203.localdomain sshd[298627]: Disconnected from invalid user vncuser 152.32.129.236 port 51144 [preauth]
Feb 20 09:45:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:45:47.362 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:45:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:45:47.363 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:45:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:45:47.363 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:45:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:45:47.363 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Auditing locally available compute resources for np0005625203.localdomain (node: np0005625203.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 20 09:45:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:45:47.364 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:45:47 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@2(electing) e13 handle_auth_request failed to assign global_id
Feb 20 09:45:47 np0005625203.localdomain sshd[298659]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:45:47 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@2(electing) e13 handle_auth_request failed to assign global_id
Feb 20 09:45:48 np0005625203.localdomain sshd[298659]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:45:48 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@2(electing) e13 handle_auth_request failed to assign global_id
Feb 20 09:45:48 np0005625203.localdomain sshd[298661]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:45:48 np0005625203.localdomain ceph-mgr[285471]: log_channel(cluster) log [DBG] : pgmap v19: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:45:48 np0005625203.localdomain sshd[298661]: Received disconnect from 185.196.11.208 port 46490:11: Bye Bye [preauth]
Feb 20 09:45:48 np0005625203.localdomain sshd[298661]: Disconnected from authenticating user root 185.196.11.208 port 46490 [preauth]
Feb 20 09:45:48 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@2(electing) e13 handle_auth_request failed to assign global_id
Feb 20 09:45:49 np0005625203.localdomain systemd[1]: Stopping User Manager for UID 1003...
Feb 20 09:45:49 np0005625203.localdomain systemd[294251]: Activating special unit Exit the Session...
Feb 20 09:45:49 np0005625203.localdomain systemd[294251]: Stopped target Main User Target.
Feb 20 09:45:49 np0005625203.localdomain systemd[294251]: Stopped target Basic System.
Feb 20 09:45:49 np0005625203.localdomain systemd[294251]: Stopped target Paths.
Feb 20 09:45:49 np0005625203.localdomain systemd[294251]: Stopped target Sockets.
Feb 20 09:45:49 np0005625203.localdomain systemd[294251]: Stopped target Timers.
Feb 20 09:45:49 np0005625203.localdomain systemd[294251]: Stopped Mark boot as successful after the user session has run 2 minutes.
Feb 20 09:45:49 np0005625203.localdomain systemd[294251]: Stopped Daily Cleanup of User's Temporary Directories.
Feb 20 09:45:49 np0005625203.localdomain systemd[294251]: Closed D-Bus User Message Bus Socket.
Feb 20 09:45:49 np0005625203.localdomain systemd[294251]: Stopped Create User's Volatile Files and Directories.
Feb 20 09:45:49 np0005625203.localdomain systemd[294251]: Removed slice User Application Slice.
Feb 20 09:45:49 np0005625203.localdomain systemd[294251]: Reached target Shutdown.
Feb 20 09:45:49 np0005625203.localdomain systemd[294251]: Finished Exit the Session.
Feb 20 09:45:49 np0005625203.localdomain systemd[294251]: Reached target Exit the Session.
Feb 20 09:45:49 np0005625203.localdomain systemd[1]: user@1003.service: Deactivated successfully.
Feb 20 09:45:49 np0005625203.localdomain systemd[1]: Stopped User Manager for UID 1003.
Feb 20 09:45:49 np0005625203.localdomain systemd[1]: Stopping User Runtime Directory /run/user/1003...
Feb 20 09:45:49 np0005625203.localdomain systemd[1]: run-user-1003.mount: Deactivated successfully.
Feb 20 09:45:49 np0005625203.localdomain systemd[1]: user-runtime-dir@1003.service: Deactivated successfully.
Feb 20 09:45:49 np0005625203.localdomain systemd[1]: Stopped User Runtime Directory /run/user/1003.
Feb 20 09:45:49 np0005625203.localdomain systemd[1]: Removed slice User Slice of UID 1003.
Feb 20 09:45:49 np0005625203.localdomain systemd[1]: user-1003.slice: Consumed 2.385s CPU time.
Feb 20 09:45:50 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@2(electing) e13 handle_auth_request failed to assign global_id
Feb 20 09:45:50 np0005625203.localdomain ceph-mgr[285471]: log_channel(cluster) log [DBG] : pgmap v20: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:45:50 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@2(electing) e13 handle_auth_request failed to assign global_id
Feb 20 09:45:50 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@2(electing) e13 handle_auth_request failed to assign global_id
Feb 20 09:45:50 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@2(electing) e13 handle_auth_request failed to assign global_id
Feb 20 09:45:51 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@2(electing) e13 handle_auth_request failed to assign global_id
Feb 20 09:45:51 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@2(electing) e13 handle_auth_request failed to assign global_id
Feb 20 09:45:51 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Updating np0005625201.localdomain:/etc/ceph/ceph.conf
Feb 20 09:45:51 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Updating np0005625201.localdomain:/etc/ceph/ceph.conf
Feb 20 09:45:51 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Updating np0005625202.localdomain:/etc/ceph/ceph.conf
Feb 20 09:45:51 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Updating np0005625203.localdomain:/etc/ceph/ceph.conf
Feb 20 09:45:51 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Updating np0005625202.localdomain:/etc/ceph/ceph.conf
Feb 20 09:45:51 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Updating np0005625204.localdomain:/etc/ceph/ceph.conf
Feb 20 09:45:51 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Updating np0005625203.localdomain:/etc/ceph/ceph.conf
Feb 20 09:45:51 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Updating np0005625204.localdomain:/etc/ceph/ceph.conf
Feb 20 09:45:51 np0005625203.localdomain sudo[298665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Feb 20 09:45:51 np0005625203.localdomain sudo[298665]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:51 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:45:51 np0005625203.localdomain sudo[298665]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:51 np0005625203.localdomain sudo[298689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph
Feb 20 09:45:51 np0005625203.localdomain sudo[298689]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:51 np0005625203.localdomain sudo[298689]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:51 np0005625203.localdomain podman[298683]: 2026-02-20 09:45:51.61651307 +0000 UTC m=+0.103872272 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:45:51 np0005625203.localdomain podman[298683]: 2026-02-20 09:45:51.709950612 +0000 UTC m=+0.197309794 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Feb 20 09:45:51 np0005625203.localdomain sudo[298717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:45:51 np0005625203.localdomain sudo[298717]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:51 np0005625203.localdomain sudo[298717]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:51 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:45:51 np0005625203.localdomain sudo[298744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:45:51 np0005625203.localdomain sudo[298744]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:51 np0005625203.localdomain sudo[298744]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:51 np0005625203.localdomain sudo[298762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:45:51 np0005625203.localdomain sudo[298762]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:51 np0005625203.localdomain sudo[298762]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:51 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@2(electing) e13 handle_auth_request failed to assign global_id
Feb 20 09:45:51 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@2(electing) e13 handle_auth_request failed to assign global_id
Feb 20 09:45:51 np0005625203.localdomain sudo[298797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:45:51 np0005625203.localdomain sudo[298797]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:51 np0005625203.localdomain sudo[298797]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:52 np0005625203.localdomain sudo[298824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:45:52 np0005625203.localdomain sudo[298824]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:52 np0005625203.localdomain sudo[298824]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:52 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@2(electing) e13 handle_auth_request failed to assign global_id
Feb 20 09:45:52 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:45:52 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:45:52 np0005625203.localdomain sudo[298842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Feb 20 09:45:52 np0005625203.localdomain sudo[298842]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:52 np0005625203.localdomain sudo[298842]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:52 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:45:52 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:45:52 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:45:52 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:45:52 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Updating np0005625201.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:45:52 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Updating np0005625201.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:45:52 np0005625203.localdomain sudo[298860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config
Feb 20 09:45:52 np0005625203.localdomain sudo[298860]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:52 np0005625203.localdomain sudo[298860]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:45:52.247 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.884s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:45:52 np0005625203.localdomain sudo[298878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config
Feb 20 09:45:52 np0005625203.localdomain sudo[298878]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:52 np0005625203.localdomain sudo[298878]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:52 np0005625203.localdomain sudo[298898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:45:52 np0005625203.localdomain sudo[298898]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:52 np0005625203.localdomain sudo[298898]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:45:52.464 279640 WARNING nova.virt.libvirt.driver [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 20 09:45:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:45:52.466 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Hypervisor/Node resource view: name=np0005625203.localdomain free_ram=12367MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 20 09:45:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:45:52.467 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:45:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:45:52.467 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:45:52 np0005625203.localdomain sudo[298916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:45:52 np0005625203.localdomain sudo[298916]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:52 np0005625203.localdomain sudo[298916]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:52 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@2(electing) e13 handle_auth_request failed to assign global_id
Feb 20 09:45:52 np0005625203.localdomain sudo[298934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:45:52 np0005625203.localdomain sudo[298934]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:52 np0005625203.localdomain ceph-mon[296066]: paxos.2).electionLogic(51) init, last seen epoch 51, mid-election, bumping
Feb 20 09:45:52 np0005625203.localdomain sudo[298934]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:52 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@2(electing) e13 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 20 09:45:52 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@2(electing) e13 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 20 09:45:52 np0005625203.localdomain ceph-mgr[285471]: log_channel(cluster) log [DBG] : pgmap v21: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:45:52 np0005625203.localdomain sudo[298968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:45:52 np0005625203.localdomain sudo[298968]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:52 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@2(electing) e13 handle_auth_request failed to assign global_id
Feb 20 09:45:52 np0005625203.localdomain sudo[298968]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:45:52.718 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 20 09:45:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:45:52.719 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Final resource view: name=np0005625203.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 20 09:45:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:45:52.784 279640 DEBUG nova.scheduler.client.report [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Refreshing inventories for resource provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 20 09:45:52 np0005625203.localdomain sudo[298986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:45:52 np0005625203.localdomain sudo[298986]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:52 np0005625203.localdomain sudo[298986]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:45:52.800 279640 DEBUG nova.scheduler.client.report [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Updating ProviderTree inventory for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 20 09:45:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:45:52.800 279640 DEBUG nova.compute.provider_tree [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Updating inventory in ProviderTree for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 20 09:45:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:45:52.852 279640 DEBUG nova.scheduler.client.report [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Refreshing aggregate associations for resource provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 20 09:45:52 np0005625203.localdomain sudo[299004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:45:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:45:52.873 279640 DEBUG nova.scheduler.client.report [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Refreshing trait associations for resource provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e, traits: COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_ABM,HW_CPU_X86_SSE42,HW_CPU_X86_MMX,HW_CPU_X86_AMD_SVM,HW_CPU_X86_BMI,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VOLUME_EXTEND,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSSE3,HW_CPU_X86_CLMUL,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE4A,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_F16C,HW_CPU_X86_SSE2,COMPUTE_NODE,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 20 09:45:52 np0005625203.localdomain sudo[299004]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:52 np0005625203.localdomain sudo[299004]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:45:52.888 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:45:52 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@2(electing) e13 handle_auth_request failed to assign global_id
Feb 20 09:45:53 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@2(electing) e13 handle_auth_request failed to assign global_id
Feb 20 09:45:53 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@2(electing) e13 handle_auth_request failed to assign global_id
Feb 20 09:45:53 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@2(electing) e13 handle_auth_request failed to assign global_id
Feb 20 09:45:53 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@2(electing) e13 handle_auth_request failed to assign global_id
Feb 20 09:45:53 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@2(electing) e13 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 20 09:45:53 np0005625203.localdomain ceph-mds[282126]: mds.beacon.mds.np0005625203.zsrwgk missed beacon ack from the monitors
Feb 20 09:45:53 np0005625203.localdomain ceph-mon[296066]: Remove daemons mon.np0005625201
Feb 20 09:45:53 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "quorum_status"} : dispatch
Feb 20 09:45:53 np0005625203.localdomain ceph-mon[296066]: Safe to remove mon.np0005625201: new quorum should be ['np0005625204', 'np0005625202', 'np0005625203'] (from ['np0005625204', 'np0005625202', 'np0005625203'])
Feb 20 09:45:53 np0005625203.localdomain ceph-mon[296066]: Removing monitor np0005625201 from monmap...
Feb 20 09:45:53 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon rm", "name": "np0005625201"} : dispatch
Feb 20 09:45:53 np0005625203.localdomain ceph-mon[296066]: Removing daemon mon.np0005625201 from np0005625201.localdomain -- ports []
Feb 20 09:45:53 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625202"} : dispatch
Feb 20 09:45:53 np0005625203.localdomain ceph-mon[296066]: mon.np0005625204 calling monitor election
Feb 20 09:45:53 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:45:53 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625204"} : dispatch
Feb 20 09:45:53 np0005625203.localdomain ceph-mon[296066]: mon.np0005625202 calling monitor election
Feb 20 09:45:53 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:45:53 np0005625203.localdomain ceph-mon[296066]: pgmap v18: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:45:53 np0005625203.localdomain ceph-mon[296066]: pgmap v19: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:45:53 np0005625203.localdomain ceph-mon[296066]: pgmap v20: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:45:53 np0005625203.localdomain ceph-mon[296066]: mon.np0005625204 is new leader, mons np0005625204,np0005625202 in quorum (ranks 0,1)
Feb 20 09:45:53 np0005625203.localdomain ceph-mon[296066]: monmap epoch 13
Feb 20 09:45:53 np0005625203.localdomain ceph-mon[296066]: fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:45:53 np0005625203.localdomain ceph-mon[296066]: last_changed 2026-02-20T09:45:46.327222+0000
Feb 20 09:45:53 np0005625203.localdomain ceph-mon[296066]: created 2026-02-20T07:36:51.191305+0000
Feb 20 09:45:53 np0005625203.localdomain ceph-mon[296066]: min_mon_release 18 (reef)
Feb 20 09:45:53 np0005625203.localdomain ceph-mon[296066]: election_strategy: 1
Feb 20 09:45:53 np0005625203.localdomain ceph-mon[296066]: 0: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005625204
Feb 20 09:45:53 np0005625203.localdomain ceph-mon[296066]: 1: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005625202
Feb 20 09:45:53 np0005625203.localdomain ceph-mon[296066]: 2: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005625203
Feb 20 09:45:53 np0005625203.localdomain ceph-mon[296066]: fsmap cephfs:1 {0=mds.np0005625203.zsrwgk=up:active} 2 up:standby
Feb 20 09:45:53 np0005625203.localdomain ceph-mon[296066]: osdmap e89: 6 total, 6 up, 6 in
Feb 20 09:45:53 np0005625203.localdomain ceph-mon[296066]: mgrmap e36: np0005625203.lonygy(active, since 34s), standbys: np0005625204.exgrzx, np0005625201.mtnyvu, np0005625202.arwxwo
Feb 20 09:45:53 np0005625203.localdomain ceph-mon[296066]: Health check failed: 1/3 mons down, quorum np0005625204,np0005625202 (MON_DOWN)
Feb 20 09:45:53 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 09:45:53 np0005625203.localdomain ceph-mon[296066]: Health detail: HEALTH_WARN 1/3 mons down, quorum np0005625204,np0005625202
Feb 20 09:45:53 np0005625203.localdomain ceph-mon[296066]: [WRN] MON_DOWN: 1/3 mons down, quorum np0005625204,np0005625202
Feb 20 09:45:53 np0005625203.localdomain ceph-mon[296066]:     mon.np0005625203 (rank 2) addr [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] is down (out of quorum)
Feb 20 09:45:53 np0005625203.localdomain ceph-mon[296066]: Updating np0005625201.localdomain:/etc/ceph/ceph.conf
Feb 20 09:45:53 np0005625203.localdomain ceph-mon[296066]: Updating np0005625202.localdomain:/etc/ceph/ceph.conf
Feb 20 09:45:53 np0005625203.localdomain ceph-mon[296066]: Updating np0005625203.localdomain:/etc/ceph/ceph.conf
Feb 20 09:45:53 np0005625203.localdomain ceph-mon[296066]: Updating np0005625204.localdomain:/etc/ceph/ceph.conf
Feb 20 09:45:53 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/1126955229' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:45:53 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/3661915845' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:45:53 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@2(peon) e13 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 20 09:45:53 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@2(peon) e13 handle_auth_request failed to assign global_id
Feb 20 09:45:53 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@2(peon) e13 handle_auth_request failed to assign global_id
Feb 20 09:45:53 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@2(peon).osd e89 _set_new_cache_sizes cache_size:1019650440 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:45:53 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@2(peon) e13 handle_auth_request failed to assign global_id
Feb 20 09:45:53 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:45:53.977 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:45:53 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:45:53.984 279640 DEBUG nova.compute.provider_tree [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Inventory has not changed in ProviderTree for provider: e5d5157a-2df2-4f51-b5fb-cd2da3a8584e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 20 09:45:54 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:45:54.001 279640 DEBUG nova.scheduler.client.report [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Inventory has not changed for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 20 09:45:54 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:45:54.003 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Compute_service record updated for np0005625203.localdomain:np0005625203.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 20 09:45:54 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:45:54.003 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.536s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:45:54 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:45:54.004 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:45:54 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:45:54.004 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 20 09:45:54 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:45:54.017 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:45:54 np0005625203.localdomain ceph-mgr[285471]: [progress INFO root] update: starting ev 045b7ef5-38a0-4463-9205-b7877e5f60bc (Updating mon deployment (+1 -> 4))
Feb 20 09:45:54 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Deploying daemon mon.np0005625201 on np0005625201.localdomain
Feb 20 09:45:54 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Deploying daemon mon.np0005625201 on np0005625201.localdomain
Feb 20 09:45:54 np0005625203.localdomain ceph-mgr[285471]: log_channel(audit) log [DBG] : from='client.44464 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005625201.localdomain", "label": "mon", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 09:45:54 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO root] Removed label mon from host np0005625201.localdomain
Feb 20 09:45:54 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Removed label mon from host np0005625201.localdomain
Feb 20 09:45:54 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203 calling monitor election
Feb 20 09:45:54 np0005625203.localdomain ceph-mon[296066]: Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:45:54 np0005625203.localdomain ceph-mon[296066]: Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:45:54 np0005625203.localdomain ceph-mon[296066]: Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:45:54 np0005625203.localdomain ceph-mon[296066]: Updating np0005625201.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:45:54 np0005625203.localdomain ceph-mon[296066]: mon.np0005625202 calling monitor election
Feb 20 09:45:54 np0005625203.localdomain ceph-mon[296066]: pgmap v21: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:45:54 np0005625203.localdomain ceph-mon[296066]: mon.np0005625204 calling monitor election
Feb 20 09:45:54 np0005625203.localdomain ceph-mon[296066]: mon.np0005625204 is new leader, mons np0005625204,np0005625202,np0005625203 in quorum (ranks 0,1,2)
Feb 20 09:45:54 np0005625203.localdomain ceph-mon[296066]: monmap epoch 13
Feb 20 09:45:54 np0005625203.localdomain ceph-mon[296066]: fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:45:54 np0005625203.localdomain ceph-mon[296066]: last_changed 2026-02-20T09:45:46.327222+0000
Feb 20 09:45:54 np0005625203.localdomain ceph-mon[296066]: created 2026-02-20T07:36:51.191305+0000
Feb 20 09:45:54 np0005625203.localdomain ceph-mon[296066]: min_mon_release 18 (reef)
Feb 20 09:45:54 np0005625203.localdomain ceph-mon[296066]: election_strategy: 1
Feb 20 09:45:54 np0005625203.localdomain ceph-mon[296066]: 0: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005625204
Feb 20 09:45:54 np0005625203.localdomain ceph-mon[296066]: 1: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005625202
Feb 20 09:45:54 np0005625203.localdomain ceph-mon[296066]: 2: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005625203
Feb 20 09:45:54 np0005625203.localdomain ceph-mon[296066]: fsmap cephfs:1 {0=mds.np0005625203.zsrwgk=up:active} 2 up:standby
Feb 20 09:45:54 np0005625203.localdomain ceph-mon[296066]: osdmap e89: 6 total, 6 up, 6 in
Feb 20 09:45:54 np0005625203.localdomain ceph-mon[296066]: mgrmap e36: np0005625203.lonygy(active, since 37s), standbys: np0005625204.exgrzx, np0005625201.mtnyvu, np0005625202.arwxwo
Feb 20 09:45:54 np0005625203.localdomain ceph-mon[296066]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum np0005625204,np0005625202)
Feb 20 09:45:54 np0005625203.localdomain ceph-mon[296066]: Cluster is now healthy
Feb 20 09:45:54 np0005625203.localdomain ceph-mon[296066]: overall HEALTH_OK
Feb 20 09:45:54 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/2284102821' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:45:54 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/3624088180' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:45:54 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:54 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:54 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:54 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:54 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:54 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:54 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:54 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:54 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:54 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 20 09:45:54 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Feb 20 09:45:54 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:45:54 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:54 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/3402059240' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:45:54 np0005625203.localdomain ceph-mgr[285471]: log_channel(cluster) log [DBG] : pgmap v22: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:45:55 np0005625203.localdomain ceph-mon[296066]: Deploying daemon mon.np0005625201 on np0005625201.localdomain
Feb 20 09:45:55 np0005625203.localdomain ceph-mon[296066]: from='client.44464 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005625201.localdomain", "label": "mon", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 09:45:55 np0005625203.localdomain ceph-mon[296066]: Removed label mon from host np0005625201.localdomain
Feb 20 09:45:55 np0005625203.localdomain ceph-mon[296066]: pgmap v22: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:45:55 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/1060203723' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:45:55 np0005625203.localdomain ceph-mgr[285471]: log_channel(audit) log [DBG] : from='client.44473 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005625201.localdomain", "label": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 09:45:55 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO root] Removed label mgr from host np0005625201.localdomain
Feb 20 09:45:55 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Removed label mgr from host np0005625201.localdomain
Feb 20 09:45:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:45:56.027 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:45:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:45:56.029 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:45:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:45:56.029 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 20 09:45:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:45:56.029 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 20 09:45:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:45:56.052 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 20 09:45:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:45:56.052 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:45:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:45:56.053 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:45:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:45:56.054 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:45:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:45:56.054 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:45:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:45:56.055 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:45:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:45:56.055 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 20 09:45:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:45:56.364 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:45:56 np0005625203.localdomain ceph-mgr[285471]: log_channel(cluster) log [DBG] : pgmap v23: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:45:56 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:45:56 np0005625203.localdomain ceph-mon[296066]: from='client.44473 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005625201.localdomain", "label": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 09:45:56 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:45:56 np0005625203.localdomain ceph-mon[296066]: Removed label mgr from host np0005625201.localdomain
Feb 20 09:45:56 np0005625203.localdomain podman[299044]: 2026-02-20 09:45:56.769016954 +0000 UTC m=+0.079475138 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., io.openshift.expose-services=, io.buildah.version=1.33.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, config_id=openstack_network_exporter, name=ubi9/ubi-minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, architecture=x86_64, build-date=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1770267347, vendor=Red Hat, Inc., vcs-type=git, version=9.7, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers)
Feb 20 09:45:56 np0005625203.localdomain podman[299044]: 2026-02-20 09:45:56.783939012 +0000 UTC m=+0.094397156 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, vendor=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, build-date=2026-02-05T04:57:10Z, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, distribution-scope=public, release=1770267347, org.opencontainers.image.created=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible)
Feb 20 09:45:56 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:45:56 np0005625203.localdomain ceph-mgr[285471]: log_channel(audit) log [DBG] : from='client.44479 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005625201.localdomain", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 09:45:56 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO root] Removed label _admin from host np0005625201.localdomain
Feb 20 09:45:56 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Removed label _admin from host np0005625201.localdomain
Feb 20 09:45:57 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@2(peon) e13  adding peer [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to list of hints
Feb 20 09:45:57 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@2(peon) e13  adding peer [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to list of hints
Feb 20 09:45:57 np0005625203.localdomain ceph-mgr[285471]: [progress INFO root] complete: finished ev 045b7ef5-38a0-4463-9205-b7877e5f60bc (Updating mon deployment (+1 -> 4))
Feb 20 09:45:57 np0005625203.localdomain ceph-mgr[285471]: [progress INFO root] Completed event 045b7ef5-38a0-4463-9205-b7877e5f60bc (Updating mon deployment (+1 -> 4)) in 3 seconds
Feb 20 09:45:57 np0005625203.localdomain ceph-mgr[285471]: [progress INFO root] update: starting ev c5d20d35-e578-4e01-a8c4-4925f945dcbe (Updating node-proxy deployment (+4 -> 4))
Feb 20 09:45:57 np0005625203.localdomain ceph-mgr[285471]: [progress INFO root] complete: finished ev c5d20d35-e578-4e01-a8c4-4925f945dcbe (Updating node-proxy deployment (+4 -> 4))
Feb 20 09:45:57 np0005625203.localdomain ceph-mgr[285471]: [progress INFO root] Completed event c5d20d35-e578-4e01-a8c4-4925f945dcbe (Updating node-proxy deployment (+4 -> 4)) in 0 seconds
Feb 20 09:45:57 np0005625203.localdomain sudo[299063]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:45:57 np0005625203.localdomain sudo[299063]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:45:57 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:45:57 np0005625203.localdomain sudo[299063]: pam_unix(sudo:session): session closed for user root
Feb 20 09:45:57 np0005625203.localdomain podman[299080]: 2026-02-20 09:45:57.459929569 +0000 UTC m=+0.087206160 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 20 09:45:57 np0005625203.localdomain podman[299080]: 2026-02-20 09:45:57.474472304 +0000 UTC m=+0.101748815 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, config_id=ceilometer_agent_compute, managed_by=edpm_ansible)
Feb 20 09:45:57 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:45:57 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@2(peon) e13  adding peer [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] to list of hints
Feb 20 09:45:57 np0005625203.localdomain ceph-mgr[285471]: mgr.server handle_open ignoring open from mon.np0005625201 172.18.0.105:0/2255462366; not ready for session (expect reconnect)
Feb 20 09:45:57 np0005625203.localdomain ceph-mgr[285471]: mgr finish mon failed to return metadata for mon.np0005625201: (2) No such file or directory
Feb 20 09:45:57 np0005625203.localdomain ceph-mgr[285471]: mgr finish mon failed to return metadata for mon.np0005625201: (22) Invalid argument
Feb 20 09:45:57 np0005625203.localdomain ceph-mon[296066]: log_channel(cluster) log [INF] : mon.np0005625203 calling monitor election
Feb 20 09:45:57 np0005625203.localdomain ceph-mon[296066]: paxos.2).electionLogic(56) init, last seen epoch 56
Feb 20 09:45:57 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@2(electing) e14 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 20 09:45:58 np0005625203.localdomain ceph-mgr[285471]: mgr.server handle_open ignoring open from mon.np0005625201 172.18.0.105:0/2255462366; not ready for session (expect reconnect)
Feb 20 09:45:58 np0005625203.localdomain ceph-mgr[285471]: mgr finish mon failed to return metadata for mon.np0005625201: (22) Invalid argument
Feb 20 09:45:58 np0005625203.localdomain ceph-mgr[285471]: log_channel(cluster) log [DBG] : pgmap v24: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:45:59 np0005625203.localdomain podman[240359]: time="2026-02-20T09:45:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:45:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:45:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154068 "" "Go-http-client/1.1"
Feb 20 09:45:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:45:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17784 "" "Go-http-client/1.1"
Feb 20 09:45:59 np0005625203.localdomain ceph-mgr[285471]: mgr.server handle_open ignoring open from mon.np0005625201 172.18.0.105:0/2255462366; not ready for session (expect reconnect)
Feb 20 09:45:59 np0005625203.localdomain ceph-mgr[285471]: mgr finish mon failed to return metadata for mon.np0005625201: (22) Invalid argument
Feb 20 09:46:00 np0005625203.localdomain ceph-mgr[285471]: mgr.server handle_open ignoring open from mon.np0005625201 172.18.0.105:0/2255462366; not ready for session (expect reconnect)
Feb 20 09:46:00 np0005625203.localdomain ceph-mgr[285471]: mgr finish mon failed to return metadata for mon.np0005625201: (22) Invalid argument
Feb 20 09:46:00 np0005625203.localdomain ceph-mgr[285471]: log_channel(cluster) log [DBG] : pgmap v25: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:46:01 np0005625203.localdomain ceph-mgr[285471]: mgr.server handle_open ignoring open from mon.np0005625201 172.18.0.105:0/2255462366; not ready for session (expect reconnect)
Feb 20 09:46:01 np0005625203.localdomain ceph-mgr[285471]: mgr finish mon failed to return metadata for mon.np0005625201: (22) Invalid argument
Feb 20 09:46:01 np0005625203.localdomain ceph-mgr[285471]: [progress INFO root] Writing back 50 completed events
Feb 20 09:46:02 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@2(electing) e14 handle_auth_request failed to assign global_id
Feb 20 09:46:02 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@2(electing) e14 handle_auth_request failed to assign global_id
Feb 20 09:46:02 np0005625203.localdomain ceph-mgr[285471]: mgr.server handle_open ignoring open from mon.np0005625201 172.18.0.105:0/2255462366; not ready for session (expect reconnect)
Feb 20 09:46:02 np0005625203.localdomain ceph-mgr[285471]: mgr finish mon failed to return metadata for mon.np0005625201: (22) Invalid argument
Feb 20 09:46:02 np0005625203.localdomain ceph-mon[296066]: paxos.2).electionLogic(57) init, last seen epoch 57, mid-election, bumping
Feb 20 09:46:02 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@2(electing) e14 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 20 09:46:02 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@2(electing) e14 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 20 09:46:02 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@2(peon) e14 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 20 09:46:02 np0005625203.localdomain ceph-mgr[285471]: log_channel(cluster) log [DBG] : pgmap v26: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:46:02 np0005625203.localdomain ceph-mon[296066]: from='client.44479 -' entity='client.admin' cmd=[{"prefix": "orch host label rm", "hostname": "np0005625201.localdomain", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 09:46:02 np0005625203.localdomain ceph-mon[296066]: Removed label _admin from host np0005625201.localdomain
Feb 20 09:46:02 np0005625203.localdomain ceph-mon[296066]: mon.np0005625204 calling monitor election
Feb 20 09:46:02 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625201"} : dispatch
Feb 20 09:46:02 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625202"} : dispatch
Feb 20 09:46:02 np0005625203.localdomain ceph-mon[296066]: mon.np0005625202 calling monitor election
Feb 20 09:46:02 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203 calling monitor election
Feb 20 09:46:02 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:46:02 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625204"} : dispatch
Feb 20 09:46:02 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625201"} : dispatch
Feb 20 09:46:02 np0005625203.localdomain ceph-mon[296066]: pgmap v24: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:46:02 np0005625203.localdomain ceph-mon[296066]: mon.np0005625201 calling monitor election
Feb 20 09:46:02 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625201"} : dispatch
Feb 20 09:46:02 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625201"} : dispatch
Feb 20 09:46:02 np0005625203.localdomain ceph-mon[296066]: pgmap v25: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:46:02 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625201"} : dispatch
Feb 20 09:46:02 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625201"} : dispatch
Feb 20 09:46:02 np0005625203.localdomain ceph-mon[296066]: mon.np0005625204 is new leader, mons np0005625204,np0005625202,np0005625203,np0005625201 in quorum (ranks 0,1,2,3)
Feb 20 09:46:02 np0005625203.localdomain ceph-mon[296066]: monmap epoch 14
Feb 20 09:46:02 np0005625203.localdomain ceph-mon[296066]: fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:46:02 np0005625203.localdomain ceph-mon[296066]: last_changed 2026-02-20T09:45:57.556107+0000
Feb 20 09:46:02 np0005625203.localdomain ceph-mon[296066]: created 2026-02-20T07:36:51.191305+0000
Feb 20 09:46:02 np0005625203.localdomain ceph-mon[296066]: min_mon_release 18 (reef)
Feb 20 09:46:02 np0005625203.localdomain ceph-mon[296066]: election_strategy: 1
Feb 20 09:46:02 np0005625203.localdomain ceph-mon[296066]: 0: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005625204
Feb 20 09:46:02 np0005625203.localdomain ceph-mon[296066]: 1: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005625202
Feb 20 09:46:02 np0005625203.localdomain ceph-mon[296066]: 2: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005625203
Feb 20 09:46:02 np0005625203.localdomain ceph-mon[296066]: 3: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005625201
Feb 20 09:46:02 np0005625203.localdomain ceph-mon[296066]: fsmap cephfs:1 {0=mds.np0005625203.zsrwgk=up:active} 2 up:standby
Feb 20 09:46:02 np0005625203.localdomain ceph-mon[296066]: osdmap e89: 6 total, 6 up, 6 in
Feb 20 09:46:02 np0005625203.localdomain ceph-mon[296066]: mgrmap e36: np0005625203.lonygy(active, since 46s), standbys: np0005625204.exgrzx, np0005625201.mtnyvu, np0005625202.arwxwo
Feb 20 09:46:02 np0005625203.localdomain ceph-mon[296066]: overall HEALTH_OK
Feb 20 09:46:02 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:02 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:02 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Removing np0005625201.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:46:02 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Removing np0005625201.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:46:02 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Updating np0005625202.localdomain:/etc/ceph/ceph.conf
Feb 20 09:46:02 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Updating np0005625202.localdomain:/etc/ceph/ceph.conf
Feb 20 09:46:02 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Updating np0005625203.localdomain:/etc/ceph/ceph.conf
Feb 20 09:46:02 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Updating np0005625203.localdomain:/etc/ceph/ceph.conf
Feb 20 09:46:02 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Updating np0005625204.localdomain:/etc/ceph/ceph.conf
Feb 20 09:46:02 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Updating np0005625204.localdomain:/etc/ceph/ceph.conf
Feb 20 09:46:02 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@2(peon) e14 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 20 09:46:02 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3151353263' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:46:02 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@2(peon) e14 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 20 09:46:02 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3151353263' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:46:02 np0005625203.localdomain sudo[299100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Feb 20 09:46:02 np0005625203.localdomain sudo[299100]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:02 np0005625203.localdomain sudo[299100]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:02 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Removing np0005625201.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 09:46:02 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Removing np0005625201.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 09:46:02 np0005625203.localdomain sudo[299118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph
Feb 20 09:46:02 np0005625203.localdomain sudo[299118]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:02 np0005625203.localdomain sudo[299118]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:02 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Removing np0005625201.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 09:46:02 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Removing np0005625201.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 09:46:02 np0005625203.localdomain sudo[299136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:46:02 np0005625203.localdomain sudo[299136]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:02 np0005625203.localdomain sudo[299136]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:03 np0005625203.localdomain sudo[299154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:46:03 np0005625203.localdomain sudo[299154]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:03 np0005625203.localdomain sudo[299154]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:03 np0005625203.localdomain sudo[299172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:46:03 np0005625203.localdomain sudo[299172]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:03 np0005625203.localdomain sudo[299172]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:03 np0005625203.localdomain sudo[299206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:46:03 np0005625203.localdomain sudo[299206]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:03 np0005625203.localdomain sudo[299206]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:03 np0005625203.localdomain sudo[299224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:46:03 np0005625203.localdomain sudo[299224]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:03 np0005625203.localdomain sudo[299224]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:03 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:46:03 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:46:03 np0005625203.localdomain sudo[299242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Feb 20 09:46:03 np0005625203.localdomain sudo[299242]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:03 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:46:03 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:46:03 np0005625203.localdomain sudo[299242]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:03 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:46:03 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:46:03 np0005625203.localdomain sudo[299260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config
Feb 20 09:46:03 np0005625203.localdomain sudo[299260]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:03 np0005625203.localdomain sudo[299260]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:03 np0005625203.localdomain ceph-mgr[285471]: mgr.server handle_open ignoring open from mon.np0005625201 172.18.0.105:0/2255462366; not ready for session (expect reconnect)
Feb 20 09:46:03 np0005625203.localdomain sudo[299278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config
Feb 20 09:46:03 np0005625203.localdomain sudo[299278]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:03 np0005625203.localdomain sudo[299278]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:03 np0005625203.localdomain sudo[299296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:46:03 np0005625203.localdomain ceph-mon[296066]: pgmap v26: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:46:03 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:03 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:46:03 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 09:46:03 np0005625203.localdomain ceph-mon[296066]: Removing np0005625201.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:46:03 np0005625203.localdomain ceph-mon[296066]: Updating np0005625202.localdomain:/etc/ceph/ceph.conf
Feb 20 09:46:03 np0005625203.localdomain ceph-mon[296066]: Updating np0005625203.localdomain:/etc/ceph/ceph.conf
Feb 20 09:46:03 np0005625203.localdomain ceph-mon[296066]: Updating np0005625204.localdomain:/etc/ceph/ceph.conf
Feb 20 09:46:03 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/3151353263' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:46:03 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/3151353263' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:46:03 np0005625203.localdomain ceph-mon[296066]: Removing np0005625201.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 09:46:03 np0005625203.localdomain ceph-mon[296066]: Removing np0005625201.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 09:46:03 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:03 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:03 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625201"} : dispatch
Feb 20 09:46:03 np0005625203.localdomain sudo[299296]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:03 np0005625203.localdomain sudo[299296]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:03 np0005625203.localdomain sudo[299314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:46:03 np0005625203.localdomain sudo[299314]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:03 np0005625203.localdomain sudo[299314]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:03 np0005625203.localdomain sudo[299332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:46:03 np0005625203.localdomain sudo[299332]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:03 np0005625203.localdomain sudo[299332]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:03 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@2(peon).osd e89 _set_new_cache_sizes cache_size:1020048559 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:46:03 np0005625203.localdomain sudo[299366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:46:04 np0005625203.localdomain sudo[299366]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:04 np0005625203.localdomain sudo[299366]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:04 np0005625203.localdomain sudo[299384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:46:04 np0005625203.localdomain sudo[299384]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:04 np0005625203.localdomain sudo[299384]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:04 np0005625203.localdomain sudo[299402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:46:04 np0005625203.localdomain sudo[299402]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:04 np0005625203.localdomain sudo[299402]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:04 np0005625203.localdomain ceph-mgr[285471]: mgr.server handle_report got status from non-daemon mon.np0005625201
Feb 20 09:46:04 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:46:04.565+0000 7f57f2921640 -1 mgr.server handle_report got status from non-daemon mon.np0005625201
Feb 20 09:46:04 np0005625203.localdomain ceph-mgr[285471]: log_channel(cluster) log [DBG] : pgmap v27: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:46:04 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:46:04.906 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:46:05 np0005625203.localdomain ceph-mon[296066]: Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:46:05 np0005625203.localdomain ceph-mon[296066]: Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:46:05 np0005625203.localdomain ceph-mon[296066]: Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:46:05 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:05 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:05 np0005625203.localdomain ceph-mgr[285471]: [progress INFO root] update: starting ev e5567d8c-04b0-486d-91e8-d175eff2ca8a (Updating mgr deployment (-1 -> 3))
Feb 20 09:46:05 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Removing daemon mgr.np0005625201.mtnyvu from np0005625201.localdomain -- ports [8765]
Feb 20 09:46:05 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Removing daemon mgr.np0005625201.mtnyvu from np0005625201.localdomain -- ports [8765]
Feb 20 09:46:06 np0005625203.localdomain ceph-mon[296066]: pgmap v27: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:46:06 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:06 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:06 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:06 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:06 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:06 np0005625203.localdomain ceph-mon[296066]: Removing daemon mgr.np0005625201.mtnyvu from np0005625201.localdomain -- ports [8765]
Feb 20 09:46:06 np0005625203.localdomain ceph-mgr[285471]: log_channel(cluster) log [DBG] : pgmap v28: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:46:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:46:07 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 09:46:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:46:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:46:07 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 09:46:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:46:07 np0005625203.localdomain ceph-mon[296066]: pgmap v28: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:46:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:46:07.662 161112 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:46:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:46:07.662 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:46:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:46:07.663 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:46:07 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:46:07 np0005625203.localdomain podman[299420]: 2026-02-20 09:46:07.774575571 +0000 UTC m=+0.086032954 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 20 09:46:07 np0005625203.localdomain podman[299420]: 2026-02-20 09:46:07.788411944 +0000 UTC m=+0.099869317 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 20 09:46:07 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 09:46:08 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.np0005625201.mtnyvu
Feb 20 09:46:08 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Removing key for mgr.np0005625201.mtnyvu
Feb 20 09:46:08 np0005625203.localdomain ceph-mgr[285471]: [progress INFO root] complete: finished ev e5567d8c-04b0-486d-91e8-d175eff2ca8a (Updating mgr deployment (-1 -> 3))
Feb 20 09:46:08 np0005625203.localdomain ceph-mgr[285471]: [progress INFO root] Completed event e5567d8c-04b0-486d-91e8-d175eff2ca8a (Updating mgr deployment (-1 -> 3)) in 3 seconds
Feb 20 09:46:08 np0005625203.localdomain ceph-mgr[285471]: [progress INFO root] update: starting ev dcbd7c7b-ba3e-4734-b5e2-83ce17c3ddee (Updating mon deployment (-1 -> 3))
Feb 20 09:46:08 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.services.cephadmservice] Safe to remove mon.np0005625201: new quorum should be ['np0005625204', 'np0005625202', 'np0005625203'] (from ['np0005625204', 'np0005625202', 'np0005625203'])
Feb 20 09:46:08 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Safe to remove mon.np0005625201: new quorum should be ['np0005625204', 'np0005625202', 'np0005625203'] (from ['np0005625204', 'np0005625202', 'np0005625203'])
Feb 20 09:46:08 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.services.cephadmservice] Removing monitor np0005625201 from monmap...
Feb 20 09:46:08 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Removing monitor np0005625201 from monmap...
Feb 20 09:46:08 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Removing daemon mon.np0005625201 from np0005625201.localdomain -- ports []
Feb 20 09:46:08 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Removing daemon mon.np0005625201 from np0005625201.localdomain -- ports []
Feb 20 09:46:08 np0005625203.localdomain ceph-mon[296066]: log_channel(cluster) log [INF] : mon.np0005625203 calling monitor election
Feb 20 09:46:08 np0005625203.localdomain ceph-mon[296066]: paxos.2).electionLogic(60) init, last seen epoch 60
Feb 20 09:46:08 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@2(electing) e15 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 20 09:46:08 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@2(electing) e15 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 20 09:46:08 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@2(peon) e15 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 20 09:46:08 np0005625203.localdomain ceph-mgr[285471]: log_channel(cluster) log [DBG] : pgmap v29: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:46:08 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054641 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:46:08 np0005625203.localdomain ceph-mgr[285471]: log_channel(audit) log [DBG] : from='client.44494 -' entity='client.admin' cmd=[{"prefix": "orch host drain", "hostname": "np0005625201.localdomain", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 09:46:08 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO root] Added label _no_schedule to host np0005625201.localdomain
Feb 20 09:46:08 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Added label _no_schedule to host np0005625201.localdomain
Feb 20 09:46:08 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO root] Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005625201.localdomain
Feb 20 09:46:08 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005625201.localdomain
Feb 20 09:46:09 np0005625203.localdomain ceph-mon[296066]: Removing key for mgr.np0005625201.mtnyvu
Feb 20 09:46:09 np0005625203.localdomain ceph-mon[296066]: Safe to remove mon.np0005625201: new quorum should be ['np0005625204', 'np0005625202', 'np0005625203'] (from ['np0005625204', 'np0005625202', 'np0005625203'])
Feb 20 09:46:09 np0005625203.localdomain ceph-mon[296066]: Removing monitor np0005625201 from monmap...
Feb 20 09:46:09 np0005625203.localdomain ceph-mon[296066]: Removing daemon mon.np0005625201 from np0005625201.localdomain -- ports []
Feb 20 09:46:09 np0005625203.localdomain ceph-mon[296066]: mon.np0005625204 calling monitor election
Feb 20 09:46:09 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203 calling monitor election
Feb 20 09:46:09 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625202"} : dispatch
Feb 20 09:46:09 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:46:09 np0005625203.localdomain ceph-mon[296066]: mon.np0005625202 calling monitor election
Feb 20 09:46:09 np0005625203.localdomain ceph-mon[296066]: mon.np0005625204 is new leader, mons np0005625204,np0005625202,np0005625203 in quorum (ranks 0,1,2)
Feb 20 09:46:09 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625204"} : dispatch
Feb 20 09:46:09 np0005625203.localdomain ceph-mon[296066]: monmap epoch 15
Feb 20 09:46:09 np0005625203.localdomain ceph-mon[296066]: fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:46:09 np0005625203.localdomain ceph-mon[296066]: last_changed 2026-02-20T09:46:08.177805+0000
Feb 20 09:46:09 np0005625203.localdomain ceph-mon[296066]: created 2026-02-20T07:36:51.191305+0000
Feb 20 09:46:09 np0005625203.localdomain ceph-mon[296066]: min_mon_release 18 (reef)
Feb 20 09:46:09 np0005625203.localdomain ceph-mon[296066]: election_strategy: 1
Feb 20 09:46:09 np0005625203.localdomain ceph-mon[296066]: 0: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005625204
Feb 20 09:46:09 np0005625203.localdomain ceph-mon[296066]: 1: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005625202
Feb 20 09:46:09 np0005625203.localdomain ceph-mon[296066]: 2: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005625203
Feb 20 09:46:09 np0005625203.localdomain ceph-mon[296066]: fsmap cephfs:1 {0=mds.np0005625203.zsrwgk=up:active} 2 up:standby
Feb 20 09:46:09 np0005625203.localdomain ceph-mon[296066]: osdmap e89: 6 total, 6 up, 6 in
Feb 20 09:46:09 np0005625203.localdomain ceph-mon[296066]: mgrmap e36: np0005625203.lonygy(active, since 51s), standbys: np0005625204.exgrzx, np0005625201.mtnyvu, np0005625202.arwxwo
Feb 20 09:46:09 np0005625203.localdomain ceph-mon[296066]: overall HEALTH_OK
Feb 20 09:46:09 np0005625203.localdomain ceph-mon[296066]: pgmap v29: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:46:09 np0005625203.localdomain ceph-mon[296066]: from='client.44494 -' entity='client.admin' cmd=[{"prefix": "orch host drain", "hostname": "np0005625201.localdomain", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 09:46:09 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:09 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:09 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:46:09 np0005625203.localdomain systemd[1]: tmp-crun.Os6vQt.mount: Deactivated successfully.
Feb 20 09:46:09 np0005625203.localdomain podman[299442]: 2026-02-20 09:46:09.770987934 +0000 UTC m=+0.081177253 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 20 09:46:09 np0005625203.localdomain podman[299442]: 2026-02-20 09:46:09.779635322 +0000 UTC m=+0.089824631 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 20 09:46:09 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:46:09 np0005625203.localdomain ceph-mgr[285471]: [progress INFO root] complete: finished ev dcbd7c7b-ba3e-4734-b5e2-83ce17c3ddee (Updating mon deployment (-1 -> 3))
Feb 20 09:46:09 np0005625203.localdomain ceph-mgr[285471]: [progress INFO root] Completed event dcbd7c7b-ba3e-4734-b5e2-83ce17c3ddee (Updating mon deployment (-1 -> 3)) in 2 seconds
Feb 20 09:46:09 np0005625203.localdomain ceph-mgr[285471]: [progress INFO root] update: starting ev 72a1ba2e-224f-4b66-96e1-6d946d87a2cf (Updating node-proxy deployment (+3 -> 3))
Feb 20 09:46:09 np0005625203.localdomain ceph-mgr[285471]: [progress INFO root] complete: finished ev 72a1ba2e-224f-4b66-96e1-6d946d87a2cf (Updating node-proxy deployment (+3 -> 3))
Feb 20 09:46:09 np0005625203.localdomain ceph-mgr[285471]: [progress INFO root] Completed event 72a1ba2e-224f-4b66-96e1-6d946d87a2cf (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Feb 20 09:46:10 np0005625203.localdomain sudo[299464]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:46:10 np0005625203.localdomain sudo[299464]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:10 np0005625203.localdomain sudo[299464]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:10 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0.
Feb 20 09:46:10 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:46:10.265746) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 20 09:46:10 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16
Feb 20 09:46:10 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580770265782, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1123, "num_deletes": 256, "total_data_size": 1317514, "memory_usage": 1338720, "flush_reason": "Manual Compaction"}
Feb 20 09:46:10 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started
Feb 20 09:46:10 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580770277257, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 842561, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11871, "largest_seqno": 12993, "table_properties": {"data_size": 837545, "index_size": 2231, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14235, "raw_average_key_size": 21, "raw_value_size": 825894, "raw_average_value_size": 1236, "num_data_blocks": 91, "num_entries": 668, "num_filter_entries": 668, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580739, "oldest_key_time": 1771580739, "file_creation_time": 1771580770, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d7091244-ec7b-4a4c-98bf-27480b1bb7f4", "db_session_id": "XSHOJ401GNN3F43CMPC3", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}}
Feb 20 09:46:10 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 11560 microseconds, and 3541 cpu microseconds.
Feb 20 09:46:10 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 20 09:46:10 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:46:10.277303) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 842561 bytes OK
Feb 20 09:46:10 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:46:10.277327) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started
Feb 20 09:46:10 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:46:10.279317) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done
Feb 20 09:46:10 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:46:10.279338) EVENT_LOG_v1 {"time_micros": 1771580770279331, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 20 09:46:10 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:46:10.279359) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 20 09:46:10 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 1311341, prev total WAL file size 1311341, number of live WAL files 2.
Feb 20 09:46:10 np0005625203.localdomain ceph-mon[296066]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 09:46:10 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:46:10.280086) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760031323733' seq:72057594037927935, type:22 .. '6B760031353239' seq:0, type:0; will stop at (end)
Feb 20 09:46:10 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 20 09:46:10 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(822KB)], [15(17MB)]
Feb 20 09:46:10 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580770280129, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 19102332, "oldest_snapshot_seqno": -1}
Feb 20 09:46:10 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 11094 keys, 18043057 bytes, temperature: kUnknown
Feb 20 09:46:10 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580770373776, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 18043057, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17980151, "index_size": 34069, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27781, "raw_key_size": 297635, "raw_average_key_size": 26, "raw_value_size": 17791117, "raw_average_value_size": 1603, "num_data_blocks": 1284, "num_entries": 11094, "num_filter_entries": 11094, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580698, "oldest_key_time": 0, "file_creation_time": 1771580770, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d7091244-ec7b-4a4c-98bf-27480b1bb7f4", "db_session_id": "XSHOJ401GNN3F43CMPC3", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}}
Feb 20 09:46:10 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 20 09:46:10 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:46:10.374177) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 18043057 bytes
Feb 20 09:46:10 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:46:10.376180) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 203.6 rd, 192.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 17.4 +0.0 blob) out(17.2 +0.0 blob), read-write-amplify(44.1) write-amplify(21.4) OK, records in: 11641, records dropped: 547 output_compression: NoCompression
Feb 20 09:46:10 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:46:10.376210) EVENT_LOG_v1 {"time_micros": 1771580770376197, "job": 6, "event": "compaction_finished", "compaction_time_micros": 93831, "compaction_time_cpu_micros": 46312, "output_level": 6, "num_output_files": 1, "total_output_size": 18043057, "num_input_records": 11641, "num_output_records": 11094, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 20 09:46:10 np0005625203.localdomain ceph-mon[296066]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 09:46:10 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580770376592, "job": 6, "event": "table_file_deletion", "file_number": 17}
Feb 20 09:46:10 np0005625203.localdomain ceph-mon[296066]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 09:46:10 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580770379390, "job": 6, "event": "table_file_deletion", "file_number": 15}
Feb 20 09:46:10 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:46:10.280036) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:46:10 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:46:10.379517) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:46:10 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:46:10.379525) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:46:10 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:46:10.379527) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:46:10 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:46:10.379531) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:46:10 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:46:10.379533) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:46:10 np0005625203.localdomain ceph-mgr[285471]: log_channel(cluster) log [DBG] : pgmap v30: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:46:10 np0005625203.localdomain ceph-mgr[285471]: log_channel(audit) log [DBG] : from='client.44497 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "host_pattern": "np0005625201.localdomain", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 20 09:46:10 np0005625203.localdomain ceph-mon[296066]: Added label _no_schedule to host np0005625201.localdomain
Feb 20 09:46:10 np0005625203.localdomain ceph-mon[296066]: Added label SpecialHostLabels.DRAIN_CONF_KEYRING to host np0005625201.localdomain
Feb 20 09:46:10 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:10 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:10 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 09:46:11 np0005625203.localdomain sshd[299482]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:46:11 np0005625203.localdomain sshd[299482]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:46:11 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Updating np0005625202.localdomain:/etc/ceph/ceph.conf
Feb 20 09:46:11 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Updating np0005625202.localdomain:/etc/ceph/ceph.conf
Feb 20 09:46:11 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Updating np0005625203.localdomain:/etc/ceph/ceph.conf
Feb 20 09:46:11 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Updating np0005625204.localdomain:/etc/ceph/ceph.conf
Feb 20 09:46:11 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Updating np0005625203.localdomain:/etc/ceph/ceph.conf
Feb 20 09:46:11 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Updating np0005625204.localdomain:/etc/ceph/ceph.conf
Feb 20 09:46:11 np0005625203.localdomain sudo[299484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Feb 20 09:46:11 np0005625203.localdomain sudo[299484]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:11 np0005625203.localdomain sudo[299484]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:11 np0005625203.localdomain sudo[299502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph
Feb 20 09:46:11 np0005625203.localdomain sudo[299502]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:11 np0005625203.localdomain sudo[299502]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:11 np0005625203.localdomain sudo[299520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:46:11 np0005625203.localdomain sudo[299520]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:11 np0005625203.localdomain sudo[299520]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:11 np0005625203.localdomain sudo[299538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:46:11 np0005625203.localdomain sudo[299538]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:11 np0005625203.localdomain ceph-mon[296066]: pgmap v30: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:46:11 np0005625203.localdomain ceph-mon[296066]: from='client.44497 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "host_pattern": "np0005625201.localdomain", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 20 09:46:11 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:11 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:11 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:46:11 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 09:46:11 np0005625203.localdomain sudo[299538]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:11 np0005625203.localdomain sudo[299556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:46:11 np0005625203.localdomain sudo[299556]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:12 np0005625203.localdomain sudo[299556]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:12 np0005625203.localdomain ceph-mgr[285471]: log_channel(audit) log [DBG] : from='client.44503 -' entity='client.admin' cmd=[{"prefix": "orch host rm", "hostname": "np0005625201.localdomain", "force": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 20 09:46:12 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO root] Removed host np0005625201.localdomain
Feb 20 09:46:12 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Removed host np0005625201.localdomain
Feb 20 09:46:12 np0005625203.localdomain sudo[299590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:46:12 np0005625203.localdomain sudo[299590]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:12 np0005625203.localdomain sudo[299590]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:12 np0005625203.localdomain sudo[299608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:46:12 np0005625203.localdomain sudo[299608]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:12 np0005625203.localdomain sudo[299608]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:12 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:46:12 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:46:12 np0005625203.localdomain sudo[299626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Feb 20 09:46:12 np0005625203.localdomain sudo[299626]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:12 np0005625203.localdomain sudo[299626]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:12 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:46:12 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:46:12 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:46:12 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:46:12 np0005625203.localdomain sudo[299644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config
Feb 20 09:46:12 np0005625203.localdomain sudo[299644]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:12 np0005625203.localdomain sudo[299644]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:12 np0005625203.localdomain sudo[299662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config
Feb 20 09:46:12 np0005625203.localdomain sudo[299662]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:12 np0005625203.localdomain sudo[299662]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:12 np0005625203.localdomain sudo[299680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:46:12 np0005625203.localdomain sudo[299680]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:12 np0005625203.localdomain sudo[299680]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:12 np0005625203.localdomain sudo[299698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:46:12 np0005625203.localdomain sudo[299698]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:12 np0005625203.localdomain sudo[299698]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:12 np0005625203.localdomain sudo[299716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:46:12 np0005625203.localdomain ceph-mgr[285471]: log_channel(cluster) log [DBG] : pgmap v31: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:46:12 np0005625203.localdomain sudo[299716]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:12 np0005625203.localdomain ceph-mgr[285471]: [progress INFO root] Writing back 50 completed events
Feb 20 09:46:12 np0005625203.localdomain sudo[299716]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:12 np0005625203.localdomain sudo[299750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:46:12 np0005625203.localdomain sudo[299750]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:12 np0005625203.localdomain sudo[299750]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:12 np0005625203.localdomain sudo[299768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:46:12 np0005625203.localdomain sudo[299768]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:12 np0005625203.localdomain sudo[299768]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:12 np0005625203.localdomain ceph-mon[296066]: Updating np0005625202.localdomain:/etc/ceph/ceph.conf
Feb 20 09:46:12 np0005625203.localdomain ceph-mon[296066]: Updating np0005625203.localdomain:/etc/ceph/ceph.conf
Feb 20 09:46:12 np0005625203.localdomain ceph-mon[296066]: Updating np0005625204.localdomain:/etc/ceph/ceph.conf
Feb 20 09:46:12 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:12 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005625201.localdomain"} : dispatch
Feb 20 09:46:12 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005625201.localdomain"} : dispatch
Feb 20 09:46:12 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005625201.localdomain"}]': finished
Feb 20 09:46:12 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:12 np0005625203.localdomain sudo[299786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:46:12 np0005625203.localdomain sudo[299786]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:12 np0005625203.localdomain sudo[299786]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:13 np0005625203.localdomain ceph-mgr[285471]: [progress INFO root] update: starting ev 038a5eef-db84-4414-84c3-96a96635bd62 (Updating node-proxy deployment (+3 -> 3))
Feb 20 09:46:13 np0005625203.localdomain ceph-mgr[285471]: [progress INFO root] complete: finished ev 038a5eef-db84-4414-84c3-96a96635bd62 (Updating node-proxy deployment (+3 -> 3))
Feb 20 09:46:13 np0005625203.localdomain ceph-mgr[285471]: [progress INFO root] Completed event 038a5eef-db84-4414-84c3-96a96635bd62 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Feb 20 09:46:13 np0005625203.localdomain sudo[299804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:46:13 np0005625203.localdomain sudo[299804]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:13 np0005625203.localdomain sudo[299804]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:13 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005625202 (monmap changed)...
Feb 20 09:46:13 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005625202 (monmap changed)...
Feb 20 09:46:13 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005625202 on np0005625202.localdomain
Feb 20 09:46:13 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005625202 on np0005625202.localdomain
Feb 20 09:46:13 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054730 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:46:13 np0005625203.localdomain ceph-mon[296066]: from='client.44503 -' entity='client.admin' cmd=[{"prefix": "orch host rm", "hostname": "np0005625201.localdomain", "force": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 20 09:46:13 np0005625203.localdomain ceph-mon[296066]: Removed host np0005625201.localdomain
Feb 20 09:46:13 np0005625203.localdomain ceph-mon[296066]: Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:46:13 np0005625203.localdomain ceph-mon[296066]: Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:46:13 np0005625203.localdomain ceph-mon[296066]: Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:46:13 np0005625203.localdomain ceph-mon[296066]: pgmap v31: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:46:13 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:13 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:13 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:13 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:13 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:13 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:13 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:13 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 09:46:13 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625202", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:46:13 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625202", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:46:13 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:46:14 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring osd.2 (monmap changed)...
Feb 20 09:46:14 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring osd.2 (monmap changed)...
Feb 20 09:46:14 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.2 on np0005625202.localdomain
Feb 20 09:46:14 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.2 on np0005625202.localdomain
Feb 20 09:46:14 np0005625203.localdomain ceph-mgr[285471]: log_channel(cluster) log [DBG] : pgmap v32: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:46:14 np0005625203.localdomain ceph-mon[296066]: Reconfiguring crash.np0005625202 (monmap changed)...
Feb 20 09:46:14 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon crash.np0005625202 on np0005625202.localdomain
Feb 20 09:46:14 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:14 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:14 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Feb 20 09:46:14 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:46:15 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring osd.5 (monmap changed)...
Feb 20 09:46:15 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring osd.5 (monmap changed)...
Feb 20 09:46:15 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.5 on np0005625202.localdomain
Feb 20 09:46:15 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.5 on np0005625202.localdomain
Feb 20 09:46:15 np0005625203.localdomain ceph-mon[296066]: Reconfiguring osd.2 (monmap changed)...
Feb 20 09:46:15 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon osd.2 on np0005625202.localdomain
Feb 20 09:46:15 np0005625203.localdomain ceph-mon[296066]: pgmap v32: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:46:15 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:15 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:15 np0005625203.localdomain ceph-mon[296066]: Reconfiguring osd.5 (monmap changed)...
Feb 20 09:46:15 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Feb 20 09:46:15 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:46:15 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon osd.5 on np0005625202.localdomain
Feb 20 09:46:16 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005625202.akhmop (monmap changed)...
Feb 20 09:46:16 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005625202.akhmop (monmap changed)...
Feb 20 09:46:16 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005625202.akhmop on np0005625202.localdomain
Feb 20 09:46:16 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005625202.akhmop on np0005625202.localdomain
Feb 20 09:46:16 np0005625203.localdomain ceph-mgr[285471]: log_channel(cluster) log [DBG] : pgmap v33: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:46:16 np0005625203.localdomain ceph-mgr[285471]: [balancer INFO root] Optimize plan auto_2026-02-20_09:46:16
Feb 20 09:46:16 np0005625203.localdomain ceph-mgr[285471]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Feb 20 09:46:16 np0005625203.localdomain ceph-mgr[285471]: [balancer INFO root] do_upmap
Feb 20 09:46:16 np0005625203.localdomain ceph-mgr[285471]: [balancer INFO root] pools ['backups', 'manila_metadata', 'manila_data', 'vms', 'volumes', 'images', '.mgr']
Feb 20 09:46:16 np0005625203.localdomain ceph-mgr[285471]: [balancer INFO root] prepared 0/10 changes
Feb 20 09:46:16 np0005625203.localdomain ceph-mgr[285471]: [pg_autoscaler INFO root] _maybe_adjust
Feb 20 09:46:16 np0005625203.localdomain ceph-mgr[285471]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Feb 20 09:46:16 np0005625203.localdomain ceph-mgr[285471]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 3.080724804578448e-05 of space, bias 1.0, pg target 0.006161449609156895 quantized to 1 (current 1)
Feb 20 09:46:16 np0005625203.localdomain ceph-mgr[285471]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Feb 20 09:46:16 np0005625203.localdomain ceph-mgr[285471]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003325274375348967 of space, bias 1.0, pg target 0.6650548750697934 quantized to 32 (current 32)
Feb 20 09:46:16 np0005625203.localdomain ceph-mgr[285471]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Feb 20 09:46:16 np0005625203.localdomain ceph-mgr[285471]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 20 09:46:16 np0005625203.localdomain ceph-mgr[285471]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Feb 20 09:46:16 np0005625203.localdomain ceph-mgr[285471]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014449417225013959 of space, bias 1.0, pg target 0.2885066972594454 quantized to 32 (current 32)
Feb 20 09:46:16 np0005625203.localdomain ceph-mgr[285471]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Feb 20 09:46:16 np0005625203.localdomain ceph-mgr[285471]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 20 09:46:16 np0005625203.localdomain ceph-mgr[285471]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Feb 20 09:46:16 np0005625203.localdomain ceph-mgr[285471]: [pg_autoscaler INFO root] Pool 'manila_data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Feb 20 09:46:16 np0005625203.localdomain ceph-mgr[285471]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 45071990784
Feb 20 09:46:16 np0005625203.localdomain ceph-mgr[285471]: [pg_autoscaler INFO root] Pool 'manila_metadata' root_id -1 using 2.453674623115578e-06 of space, bias 4.0, pg target 0.0019596681323283084 quantized to 16 (current 16)
Feb 20 09:46:16 np0005625203.localdomain ceph-mgr[285471]: [volumes INFO mgr_util] scanning for idle connections..
Feb 20 09:46:16 np0005625203.localdomain ceph-mgr[285471]: [volumes INFO mgr_util] cleaning up connections: []
Feb 20 09:46:16 np0005625203.localdomain ceph-mgr[285471]: [volumes INFO mgr_util] scanning for idle connections..
Feb 20 09:46:16 np0005625203.localdomain ceph-mgr[285471]: [volumes INFO mgr_util] cleaning up connections: []
Feb 20 09:46:16 np0005625203.localdomain ceph-mgr[285471]: [volumes INFO mgr_util] scanning for idle connections..
Feb 20 09:46:16 np0005625203.localdomain ceph-mgr[285471]: [volumes INFO mgr_util] cleaning up connections: []
Feb 20 09:46:16 np0005625203.localdomain ceph-mgr[285471]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Feb 20 09:46:16 np0005625203.localdomain ceph-mgr[285471]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 20 09:46:16 np0005625203.localdomain ceph-mgr[285471]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 20 09:46:16 np0005625203.localdomain ceph-mgr[285471]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 20 09:46:16 np0005625203.localdomain ceph-mgr[285471]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 20 09:46:16 np0005625203.localdomain ceph-mgr[285471]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Feb 20 09:46:16 np0005625203.localdomain ceph-mgr[285471]: [rbd_support INFO root] load_schedules: vms, start_after=
Feb 20 09:46:16 np0005625203.localdomain ceph-mgr[285471]: [rbd_support INFO root] load_schedules: volumes, start_after=
Feb 20 09:46:16 np0005625203.localdomain ceph-mgr[285471]: [rbd_support INFO root] load_schedules: images, start_after=
Feb 20 09:46:16 np0005625203.localdomain ceph-mgr[285471]: [rbd_support INFO root] load_schedules: backups, start_after=
Feb 20 09:46:17 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005625202.arwxwo (monmap changed)...
Feb 20 09:46:17 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005625202.arwxwo (monmap changed)...
Feb 20 09:46:17 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005625202.arwxwo on np0005625202.localdomain
Feb 20 09:46:17 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005625202.arwxwo on np0005625202.localdomain
Feb 20 09:46:17 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:17 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:17 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625202.akhmop", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 09:46:17 np0005625203.localdomain ceph-mon[296066]: Reconfiguring mds.mds.np0005625202.akhmop (monmap changed)...
Feb 20 09:46:17 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625202.akhmop", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 09:46:17 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:46:17 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon mds.mds.np0005625202.akhmop on np0005625202.localdomain
Feb 20 09:46:17 np0005625203.localdomain ceph-mon[296066]: pgmap v33: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:46:17 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:17 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:17 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625202.arwxwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:46:17 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625202.arwxwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:46:17 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mgr services"} : dispatch
Feb 20 09:46:17 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:46:17 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:46:17 np0005625203.localdomain ceph-mgr[285471]: [progress INFO root] Writing back 50 completed events
Feb 20 09:46:17 np0005625203.localdomain systemd[1]: tmp-crun.Ey1yXf.mount: Deactivated successfully.
Feb 20 09:46:17 np0005625203.localdomain podman[299822]: 2026-02-20 09:46:17.756220852 +0000 UTC m=+0.070842563 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Feb 20 09:46:17 np0005625203.localdomain podman[299822]: 2026-02-20 09:46:17.76198242 +0000 UTC m=+0.076604121 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 20 09:46:17 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:46:17 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring mon.np0005625202 (monmap changed)...
Feb 20 09:46:17 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring mon.np0005625202 (monmap changed)...
Feb 20 09:46:18 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.np0005625202 on np0005625202.localdomain
Feb 20 09:46:18 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.np0005625202 on np0005625202.localdomain
Feb 20 09:46:18 np0005625203.localdomain ceph-mgr[285471]: log_channel(cluster) log [DBG] : pgmap v34: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:46:18 np0005625203.localdomain ceph-mon[296066]: Reconfiguring mgr.np0005625202.arwxwo (monmap changed)...
Feb 20 09:46:18 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon mgr.np0005625202.arwxwo on np0005625202.localdomain
Feb 20 09:46:18 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:18 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:18 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:18 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 20 09:46:18 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Feb 20 09:46:18 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:46:18 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005625203 (monmap changed)...
Feb 20 09:46:18 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005625203 (monmap changed)...
Feb 20 09:46:18 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005625203 on np0005625203.localdomain
Feb 20 09:46:18 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005625203 on np0005625203.localdomain
Feb 20 09:46:18 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:46:18 np0005625203.localdomain sudo[299840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:46:18 np0005625203.localdomain sudo[299840]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:18 np0005625203.localdomain sudo[299840]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:18 np0005625203.localdomain sudo[299858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:46:18 np0005625203.localdomain sudo[299858]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:19 np0005625203.localdomain podman[299894]: 
Feb 20 09:46:19 np0005625203.localdomain podman[299894]: 2026-02-20 09:46:19.434093156 +0000 UTC m=+0.067139569 container create 008eabde59771f7dcc699b3daa73051b589627b3a5cc142216c738f442c96aac (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_franklin, build-date=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, version=7, io.buildah.version=1.42.2, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, release=1770267347, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, ceph=True, io.openshift.expose-services=, distribution-scope=public, name=rhceph)
Feb 20 09:46:19 np0005625203.localdomain systemd[1]: Started libpod-conmon-008eabde59771f7dcc699b3daa73051b589627b3a5cc142216c738f442c96aac.scope.
Feb 20 09:46:19 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:46:19 np0005625203.localdomain podman[299894]: 2026-02-20 09:46:19.50823724 +0000 UTC m=+0.141283633 container init 008eabde59771f7dcc699b3daa73051b589627b3a5cc142216c738f442c96aac (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_franklin, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.openshift.expose-services=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, version=7, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-09T10:25:24Z, GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, ceph=True, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, vcs-type=git, io.buildah.version=1.42.2, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1770267347)
Feb 20 09:46:19 np0005625203.localdomain podman[299894]: 2026-02-20 09:46:19.410084043 +0000 UTC m=+0.043130446 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 09:46:19 np0005625203.localdomain podman[299894]: 2026-02-20 09:46:19.519591732 +0000 UTC m=+0.152638095 container start 008eabde59771f7dcc699b3daa73051b589627b3a5cc142216c738f442c96aac (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_franklin, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, io.openshift.expose-services=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, architecture=x86_64, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., release=1770267347, distribution-scope=public, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, build-date=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Feb 20 09:46:19 np0005625203.localdomain podman[299894]: 2026-02-20 09:46:19.51986167 +0000 UTC m=+0.152908053 container attach 008eabde59771f7dcc699b3daa73051b589627b3a5cc142216c738f442c96aac (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_franklin, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, distribution-scope=public, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2026-02-09T10:25:24Z, GIT_CLEAN=True, ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1770267347, io.k8s.description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, io.buildah.version=1.42.2, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc.)
Feb 20 09:46:19 np0005625203.localdomain beautiful_franklin[299910]: 167 167
Feb 20 09:46:19 np0005625203.localdomain systemd[1]: libpod-008eabde59771f7dcc699b3daa73051b589627b3a5cc142216c738f442c96aac.scope: Deactivated successfully.
Feb 20 09:46:19 np0005625203.localdomain podman[299894]: 2026-02-20 09:46:19.524253326 +0000 UTC m=+0.157299769 container died 008eabde59771f7dcc699b3daa73051b589627b3a5cc142216c738f442c96aac (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_franklin, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., io.buildah.version=1.42.2, name=rhceph, GIT_CLEAN=True, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, GIT_BRANCH=main, distribution-scope=public, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Feb 20 09:46:19 np0005625203.localdomain podman[299915]: 2026-02-20 09:46:19.618171743 +0000 UTC m=+0.084568179 container remove 008eabde59771f7dcc699b3daa73051b589627b3a5cc142216c738f442c96aac (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=beautiful_franklin, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, build-date=2026-02-09T10:25:24Z, name=rhceph, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.42.2, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, RELEASE=main, ceph=True, GIT_BRANCH=main, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=)
Feb 20 09:46:19 np0005625203.localdomain systemd[1]: libpod-conmon-008eabde59771f7dcc699b3daa73051b589627b3a5cc142216c738f442c96aac.scope: Deactivated successfully.
Feb 20 09:46:19 np0005625203.localdomain sudo[299858]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:19 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Feb 20 09:46:19 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Feb 20 09:46:19 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on np0005625203.localdomain
Feb 20 09:46:19 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on np0005625203.localdomain
Feb 20 09:46:19 np0005625203.localdomain sudo[299931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:46:19 np0005625203.localdomain sudo[299931]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:19 np0005625203.localdomain sudo[299931]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:19 np0005625203.localdomain ceph-mon[296066]: Reconfiguring mon.np0005625202 (monmap changed)...
Feb 20 09:46:19 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon mon.np0005625202 on np0005625202.localdomain
Feb 20 09:46:19 np0005625203.localdomain ceph-mon[296066]: pgmap v34: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:46:19 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:19 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:19 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625203", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:46:19 np0005625203.localdomain ceph-mon[296066]: Reconfiguring crash.np0005625203 (monmap changed)...
Feb 20 09:46:19 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625203", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:46:19 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:46:19 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon crash.np0005625203 on np0005625203.localdomain
Feb 20 09:46:19 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:19 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:19 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Feb 20 09:46:19 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:46:19 np0005625203.localdomain sudo[299949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:46:19 np0005625203.localdomain sudo[299949]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:20 np0005625203.localdomain podman[299985]: 
Feb 20 09:46:20 np0005625203.localdomain podman[299985]: 2026-02-20 09:46:20.309319031 +0000 UTC m=+0.080508963 container create 99f2cd4b409e73b2c218296cdd86a9d5126f197bb0d8e049acaf9d8ee318d8cc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=heuristic_aryabhata, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, RELEASE=main, ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, name=rhceph, GIT_CLEAN=True, distribution-scope=public, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2, io.openshift.expose-services=, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, vcs-type=git, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Feb 20 09:46:20 np0005625203.localdomain systemd[1]: Started libpod-conmon-99f2cd4b409e73b2c218296cdd86a9d5126f197bb0d8e049acaf9d8ee318d8cc.scope.
Feb 20 09:46:20 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:46:20 np0005625203.localdomain podman[299985]: 2026-02-20 09:46:20.372382703 +0000 UTC m=+0.143572625 container init 99f2cd4b409e73b2c218296cdd86a9d5126f197bb0d8e049acaf9d8ee318d8cc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=heuristic_aryabhata, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2026-02-09T10:25:24Z, vcs-type=git, version=7, com.redhat.component=rhceph-container, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2026-02-09T10:25:24Z, name=rhceph, GIT_CLEAN=True, release=1770267347, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 09:46:20 np0005625203.localdomain podman[299985]: 2026-02-20 09:46:20.275652499 +0000 UTC m=+0.046842431 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 09:46:20 np0005625203.localdomain podman[299985]: 2026-02-20 09:46:20.381186795 +0000 UTC m=+0.152376717 container start 99f2cd4b409e73b2c218296cdd86a9d5126f197bb0d8e049acaf9d8ee318d8cc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=heuristic_aryabhata, build-date=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, io.buildah.version=1.42.2, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1770267347, version=7, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, io.openshift.expose-services=, GIT_BRANCH=main)
Feb 20 09:46:20 np0005625203.localdomain podman[299985]: 2026-02-20 09:46:20.381472734 +0000 UTC m=+0.152662746 container attach 99f2cd4b409e73b2c218296cdd86a9d5126f197bb0d8e049acaf9d8ee318d8cc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=heuristic_aryabhata, release=1770267347, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, ceph=True, GIT_CLEAN=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, build-date=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7)
Feb 20 09:46:20 np0005625203.localdomain heuristic_aryabhata[300000]: 167 167
Feb 20 09:46:20 np0005625203.localdomain systemd[1]: libpod-99f2cd4b409e73b2c218296cdd86a9d5126f197bb0d8e049acaf9d8ee318d8cc.scope: Deactivated successfully.
Feb 20 09:46:20 np0005625203.localdomain podman[299985]: 2026-02-20 09:46:20.384414825 +0000 UTC m=+0.155604757 container died 99f2cd4b409e73b2c218296cdd86a9d5126f197bb0d8e049acaf9d8ee318d8cc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=heuristic_aryabhata, CEPH_POINT_RELEASE=, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_BRANCH=main, RELEASE=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, release=1770267347, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2, distribution-scope=public, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, vcs-type=git, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git)
Feb 20 09:46:20 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-94f095d080abd8e5cf1bd3219477fccf13a886326164b5a6fb64b49aff9be108-merged.mount: Deactivated successfully.
Feb 20 09:46:20 np0005625203.localdomain systemd[1]: tmp-crun.8lCJYt.mount: Deactivated successfully.
Feb 20 09:46:20 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-27f159de13b86341a883251b3e0d05b834c29b6fcbb2cbedef65059e5adfda1a-merged.mount: Deactivated successfully.
Feb 20 09:46:20 np0005625203.localdomain podman[300005]: 2026-02-20 09:46:20.482648895 +0000 UTC m=+0.090341046 container remove 99f2cd4b409e73b2c218296cdd86a9d5126f197bb0d8e049acaf9d8ee318d8cc (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=heuristic_aryabhata, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, name=rhceph, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, architecture=x86_64, build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., GIT_CLEAN=True, release=1770267347, RELEASE=main, CEPH_POINT_RELEASE=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, io.buildah.version=1.42.2, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, ceph=True)
Feb 20 09:46:20 np0005625203.localdomain systemd[1]: libpod-conmon-99f2cd4b409e73b2c218296cdd86a9d5126f197bb0d8e049acaf9d8ee318d8cc.scope: Deactivated successfully.
Feb 20 09:46:20 np0005625203.localdomain sudo[299949]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:20 np0005625203.localdomain ceph-mgr[285471]: log_channel(cluster) log [DBG] : pgmap v35: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:46:20 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring osd.4 (monmap changed)...
Feb 20 09:46:20 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring osd.4 (monmap changed)...
Feb 20 09:46:20 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.4 on np0005625203.localdomain
Feb 20 09:46:20 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.4 on np0005625203.localdomain
Feb 20 09:46:20 np0005625203.localdomain sudo[300029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:46:20 np0005625203.localdomain sudo[300029]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:20 np0005625203.localdomain sudo[300029]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:20 np0005625203.localdomain sudo[300047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:46:20 np0005625203.localdomain ceph-mon[296066]: Reconfiguring osd.1 (monmap changed)...
Feb 20 09:46:20 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon osd.1 on np0005625203.localdomain
Feb 20 09:46:20 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:20 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:20 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Feb 20 09:46:20 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:46:20 np0005625203.localdomain sudo[300047]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:21 np0005625203.localdomain podman[300081]: 
Feb 20 09:46:21 np0005625203.localdomain podman[300081]: 2026-02-20 09:46:21.312169266 +0000 UTC m=+0.085300380 container create cd189d6c43422f45bb2a57ec69a4c46d401959d0a3572e61325b20de71d8de2f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_diffie, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1770267347, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, build-date=2026-02-09T10:25:24Z, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, distribution-scope=public, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.42.2, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_BRANCH=main, CEPH_POINT_RELEASE=)
Feb 20 09:46:21 np0005625203.localdomain systemd[1]: Started libpod-conmon-cd189d6c43422f45bb2a57ec69a4c46d401959d0a3572e61325b20de71d8de2f.scope.
Feb 20 09:46:21 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:46:21 np0005625203.localdomain podman[300081]: 2026-02-20 09:46:21.378323273 +0000 UTC m=+0.151454387 container init cd189d6c43422f45bb2a57ec69a4c46d401959d0a3572e61325b20de71d8de2f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_diffie, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, CEPH_POINT_RELEASE=, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_CLEAN=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, build-date=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1770267347, com.redhat.component=rhceph-container, ceph=True)
Feb 20 09:46:21 np0005625203.localdomain podman[300081]: 2026-02-20 09:46:21.280559438 +0000 UTC m=+0.053690622 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 09:46:21 np0005625203.localdomain podman[300081]: 2026-02-20 09:46:21.388663234 +0000 UTC m=+0.161794368 container start cd189d6c43422f45bb2a57ec69a4c46d401959d0a3572e61325b20de71d8de2f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_diffie, RELEASE=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1770267347, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, vcs-type=git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-09T10:25:24Z, GIT_CLEAN=True, io.buildah.version=1.42.2, ceph=True, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc.)
Feb 20 09:46:21 np0005625203.localdomain podman[300081]: 2026-02-20 09:46:21.388983844 +0000 UTC m=+0.162114978 container attach cd189d6c43422f45bb2a57ec69a4c46d401959d0a3572e61325b20de71d8de2f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_diffie, description=Red Hat Ceph Storage 7, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, RELEASE=main, io.buildah.version=1.42.2, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, CEPH_POINT_RELEASE=, architecture=x86_64, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Feb 20 09:46:21 np0005625203.localdomain romantic_diffie[300096]: 167 167
Feb 20 09:46:21 np0005625203.localdomain systemd[1]: libpod-cd189d6c43422f45bb2a57ec69a4c46d401959d0a3572e61325b20de71d8de2f.scope: Deactivated successfully.
Feb 20 09:46:21 np0005625203.localdomain podman[300081]: 2026-02-20 09:46:21.393466362 +0000 UTC m=+0.166597526 container died cd189d6c43422f45bb2a57ec69a4c46d401959d0a3572e61325b20de71d8de2f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_diffie, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.42.2, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, GIT_BRANCH=main, vendor=Red Hat, Inc., build-date=2026-02-09T10:25:24Z, RELEASE=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, version=7, distribution-scope=public, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, release=1770267347, GIT_CLEAN=True)
Feb 20 09:46:21 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-730cb845691dce18d4aa3f00b089a204dbe36579fdc930852836d4bfae3705d1-merged.mount: Deactivated successfully.
Feb 20 09:46:21 np0005625203.localdomain podman[300101]: 2026-02-20 09:46:21.492856538 +0000 UTC m=+0.085164297 container remove cd189d6c43422f45bb2a57ec69a4c46d401959d0a3572e61325b20de71d8de2f (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=romantic_diffie, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, architecture=x86_64, release=1770267347, io.buildah.version=1.42.2, build-date=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, GIT_BRANCH=main, RELEASE=main, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True)
Feb 20 09:46:21 np0005625203.localdomain systemd[1]: libpod-conmon-cd189d6c43422f45bb2a57ec69a4c46d401959d0a3572e61325b20de71d8de2f.scope: Deactivated successfully.
Feb 20 09:46:21 np0005625203.localdomain ceph-mgr[285471]: log_channel(audit) log [DBG] : from='client.54131 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 09:46:21 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO root] Saving service mon spec with placement label:mon
Feb 20 09:46:21 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Saving service mon spec with placement label:mon
Feb 20 09:46:21 np0005625203.localdomain sudo[300047]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:21 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005625203.zsrwgk (monmap changed)...
Feb 20 09:46:21 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005625203.zsrwgk (monmap changed)...
Feb 20 09:46:21 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005625203.zsrwgk on np0005625203.localdomain
Feb 20 09:46:21 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005625203.zsrwgk on np0005625203.localdomain
Feb 20 09:46:21 np0005625203.localdomain sudo[300124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:46:21 np0005625203.localdomain sudo[300124]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:21 np0005625203.localdomain sudo[300124]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:21 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:46:21 np0005625203.localdomain ceph-mon[296066]: pgmap v35: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:46:21 np0005625203.localdomain ceph-mon[296066]: Reconfiguring osd.4 (monmap changed)...
Feb 20 09:46:21 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon osd.4 on np0005625203.localdomain
Feb 20 09:46:21 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:21 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:21 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:21 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625203.zsrwgk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 09:46:21 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625203.zsrwgk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 09:46:21 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:46:21 np0005625203.localdomain sudo[300143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:46:21 np0005625203.localdomain sudo[300143]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:21 np0005625203.localdomain podman[300142]: 2026-02-20 09:46:21.933332379 +0000 UTC m=+0.117331432 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb 20 09:46:21 np0005625203.localdomain podman[300142]: 2026-02-20 09:46:21.972441989 +0000 UTC m=+0.156441052 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:46:21 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:46:22 np0005625203.localdomain podman[300198]: 
Feb 20 09:46:22 np0005625203.localdomain podman[300198]: 2026-02-20 09:46:22.343607485 +0000 UTC m=+0.075372103 container create d7fd7fc10d49238e45abab0f008d1cab7489aed28059741e97366e9251cda7a4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_goodall, io.openshift.tags=rhceph ceph, version=7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1770267347, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.buildah.version=1.42.2, build-date=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, io.openshift.expose-services=, ceph=True, name=rhceph, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-09T10:25:24Z)
Feb 20 09:46:22 np0005625203.localdomain systemd[1]: Started libpod-conmon-d7fd7fc10d49238e45abab0f008d1cab7489aed28059741e97366e9251cda7a4.scope.
Feb 20 09:46:22 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:46:22 np0005625203.localdomain podman[300198]: 2026-02-20 09:46:22.409425633 +0000 UTC m=+0.141190291 container init d7fd7fc10d49238e45abab0f008d1cab7489aed28059741e97366e9251cda7a4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_goodall, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, GIT_BRANCH=main, GIT_CLEAN=True, vcs-type=git, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, name=rhceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, RELEASE=main, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, architecture=x86_64, CEPH_POINT_RELEASE=)
Feb 20 09:46:22 np0005625203.localdomain podman[300198]: 2026-02-20 09:46:22.312961227 +0000 UTC m=+0.044725875 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 09:46:22 np0005625203.localdomain podman[300198]: 2026-02-20 09:46:22.421170016 +0000 UTC m=+0.152934634 container start d7fd7fc10d49238e45abab0f008d1cab7489aed28059741e97366e9251cda7a4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_goodall, CEPH_POINT_RELEASE=, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, io.openshift.tags=rhceph ceph, release=1770267347, version=7, org.opencontainers.image.created=2026-02-09T10:25:24Z, distribution-scope=public, vcs-type=git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.buildah.version=1.42.2, com.redhat.component=rhceph-container, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, io.openshift.expose-services=, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image.)
Feb 20 09:46:22 np0005625203.localdomain podman[300198]: 2026-02-20 09:46:22.421757914 +0000 UTC m=+0.153522542 container attach d7fd7fc10d49238e45abab0f008d1cab7489aed28059741e97366e9251cda7a4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_goodall, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, CEPH_POINT_RELEASE=, GIT_BRANCH=main, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, ceph=True, build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7)
Feb 20 09:46:22 np0005625203.localdomain practical_goodall[300214]: 167 167
Feb 20 09:46:22 np0005625203.localdomain systemd[1]: libpod-d7fd7fc10d49238e45abab0f008d1cab7489aed28059741e97366e9251cda7a4.scope: Deactivated successfully.
Feb 20 09:46:22 np0005625203.localdomain podman[300198]: 2026-02-20 09:46:22.424995814 +0000 UTC m=+0.156760512 container died d7fd7fc10d49238e45abab0f008d1cab7489aed28059741e97366e9251cda7a4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_goodall, io.buildah.version=1.42.2, vcs-type=git, RELEASE=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, name=rhceph, build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, release=1770267347, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, org.opencontainers.image.created=2026-02-09T10:25:24Z)
Feb 20 09:46:22 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0ae884c285a6bae9b27693d98de1fbea1c6d517ce38620cd134276af93999708-merged.mount: Deactivated successfully.
Feb 20 09:46:22 np0005625203.localdomain podman[300219]: 2026-02-20 09:46:22.528384864 +0000 UTC m=+0.091157912 container remove d7fd7fc10d49238e45abab0f008d1cab7489aed28059741e97366e9251cda7a4 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=practical_goodall, build-date=2026-02-09T10:25:24Z, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, version=7, release=1770267347, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, RELEASE=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=)
Feb 20 09:46:22 np0005625203.localdomain systemd[1]: libpod-conmon-d7fd7fc10d49238e45abab0f008d1cab7489aed28059741e97366e9251cda7a4.scope: Deactivated successfully.
Feb 20 09:46:22 np0005625203.localdomain sudo[300143]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:22 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005625203.lonygy (monmap changed)...
Feb 20 09:46:22 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005625203.lonygy (monmap changed)...
Feb 20 09:46:22 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005625203.lonygy on np0005625203.localdomain
Feb 20 09:46:22 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005625203.lonygy on np0005625203.localdomain
Feb 20 09:46:22 np0005625203.localdomain ceph-mgr[285471]: log_channel(cluster) log [DBG] : pgmap v36: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:46:22 np0005625203.localdomain sudo[300236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:46:22 np0005625203.localdomain sudo[300236]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:22 np0005625203.localdomain sudo[300236]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:22 np0005625203.localdomain sudo[300254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:46:22 np0005625203.localdomain sudo[300254]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:22 np0005625203.localdomain ceph-mon[296066]: from='client.54131 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 09:46:22 np0005625203.localdomain ceph-mon[296066]: Saving service mon spec with placement label:mon
Feb 20 09:46:22 np0005625203.localdomain ceph-mon[296066]: Reconfiguring mds.mds.np0005625203.zsrwgk (monmap changed)...
Feb 20 09:46:22 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon mds.mds.np0005625203.zsrwgk on np0005625203.localdomain
Feb 20 09:46:22 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:22 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:22 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625203.lonygy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:46:22 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625203.lonygy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:46:22 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mgr services"} : dispatch
Feb 20 09:46:22 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:46:23 np0005625203.localdomain ceph-mgr[285471]: log_channel(audit) log [DBG] : from='client.44512 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005625204", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 20 09:46:23 np0005625203.localdomain podman[300290]: 
Feb 20 09:46:23 np0005625203.localdomain podman[300290]: 2026-02-20 09:46:23.194708534 +0000 UTC m=+0.076010493 container create 93052912c06e1197256d2a9626864b738c1e0bada06ab345f29b35f64d94c45e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_zhukovsky, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, com.redhat.component=rhceph-container, RELEASE=main, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, name=rhceph, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, GIT_BRANCH=main, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, release=1770267347, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, GIT_CLEAN=True, distribution-scope=public, CEPH_POINT_RELEASE=, architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 09:46:23 np0005625203.localdomain systemd[1]: Started libpod-conmon-93052912c06e1197256d2a9626864b738c1e0bada06ab345f29b35f64d94c45e.scope.
Feb 20 09:46:23 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:46:23 np0005625203.localdomain podman[300290]: 2026-02-20 09:46:23.259561011 +0000 UTC m=+0.140862970 container init 93052912c06e1197256d2a9626864b738c1e0bada06ab345f29b35f64d94c45e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_zhukovsky, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, RELEASE=main, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, io.buildah.version=1.42.2, name=rhceph, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, build-date=2026-02-09T10:25:24Z, architecture=x86_64, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, io.openshift.expose-services=, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1770267347, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Feb 20 09:46:23 np0005625203.localdomain podman[300290]: 2026-02-20 09:46:23.162453696 +0000 UTC m=+0.043755695 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 09:46:23 np0005625203.localdomain podman[300290]: 2026-02-20 09:46:23.268392495 +0000 UTC m=+0.149694454 container start 93052912c06e1197256d2a9626864b738c1e0bada06ab345f29b35f64d94c45e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_zhukovsky, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, io.buildah.version=1.42.2, ceph=True, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, distribution-scope=public, io.openshift.tags=rhceph ceph, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1770267347, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 09:46:23 np0005625203.localdomain podman[300290]: 2026-02-20 09:46:23.268681984 +0000 UTC m=+0.149983973 container attach 93052912c06e1197256d2a9626864b738c1e0bada06ab345f29b35f64d94c45e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_zhukovsky, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., release=1770267347, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, RELEASE=main, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, name=rhceph, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, io.openshift.expose-services=, io.buildah.version=1.42.2, description=Red Hat Ceph Storage 7)
Feb 20 09:46:23 np0005625203.localdomain admiring_zhukovsky[300305]: 167 167
Feb 20 09:46:23 np0005625203.localdomain systemd[1]: libpod-93052912c06e1197256d2a9626864b738c1e0bada06ab345f29b35f64d94c45e.scope: Deactivated successfully.
Feb 20 09:46:23 np0005625203.localdomain podman[300290]: 2026-02-20 09:46:23.274315318 +0000 UTC m=+0.155617297 container died 93052912c06e1197256d2a9626864b738c1e0bada06ab345f29b35f64d94c45e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_zhukovsky, description=Red Hat Ceph Storage 7, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, vendor=Red Hat, Inc., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2026-02-09T10:25:24Z, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, GIT_BRANCH=main, distribution-scope=public, io.buildah.version=1.42.2, io.openshift.tags=rhceph ceph, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1770267347, io.openshift.expose-services=, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=)
Feb 20 09:46:23 np0005625203.localdomain podman[300310]: 2026-02-20 09:46:23.37808615 +0000 UTC m=+0.086805418 container remove 93052912c06e1197256d2a9626864b738c1e0bada06ab345f29b35f64d94c45e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=admiring_zhukovsky, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, name=rhceph, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, io.buildah.version=1.42.2, RELEASE=main, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-02-09T10:25:24Z, distribution-scope=public, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, version=7)
Feb 20 09:46:23 np0005625203.localdomain systemd[1]: libpod-conmon-93052912c06e1197256d2a9626864b738c1e0bada06ab345f29b35f64d94c45e.scope: Deactivated successfully.
Feb 20 09:46:23 np0005625203.localdomain sudo[300254]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:23 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f00aee3080d1a2389bf3c667b1b9112f0b3b1fbe1349402d580a8e67b72f12d8-merged.mount: Deactivated successfully.
Feb 20 09:46:23 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005625204 (monmap changed)...
Feb 20 09:46:23 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005625204 (monmap changed)...
Feb 20 09:46:23 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005625204 on np0005625204.localdomain
Feb 20 09:46:23 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005625204 on np0005625204.localdomain
Feb 20 09:46:23 np0005625203.localdomain ceph-mon[296066]: Reconfiguring mgr.np0005625203.lonygy (monmap changed)...
Feb 20 09:46:23 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon mgr.np0005625203.lonygy on np0005625203.localdomain
Feb 20 09:46:23 np0005625203.localdomain ceph-mon[296066]: pgmap v36: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:46:23 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:23 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:23 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625204", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:46:23 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625204", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:46:23 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:46:23 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@2(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:46:24 np0005625203.localdomain ceph-mgr[285471]: log_channel(audit) log [DBG] : from='client.44515 -' entity='client.admin' cmd=[{"prefix": "orch daemon rm", "names": ["mon.np0005625204"], "force": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 20 09:46:24 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO root] Remove daemons mon.np0005625204
Feb 20 09:46:24 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Remove daemons mon.np0005625204
Feb 20 09:46:24 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.services.cephadmservice] Safe to remove mon.np0005625204: new quorum should be ['np0005625202', 'np0005625203'] (from ['np0005625202', 'np0005625203'])
Feb 20 09:46:24 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Safe to remove mon.np0005625204: new quorum should be ['np0005625202', 'np0005625203'] (from ['np0005625202', 'np0005625203'])
Feb 20 09:46:24 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.services.cephadmservice] Removing monitor np0005625204 from monmap...
Feb 20 09:46:24 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Removing monitor np0005625204 from monmap...
Feb 20 09:46:24 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Removing daemon mon.np0005625204 from np0005625204.localdomain -- ports []
Feb 20 09:46:24 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Removing daemon mon.np0005625204 from np0005625204.localdomain -- ports []
Feb 20 09:46:24 np0005625203.localdomain ceph-mgr[285471]: client.0 ms_handle_reset on v2:172.18.0.103:3300/0
Feb 20 09:46:24 np0005625203.localdomain ceph-mgr[285471]: client.0 ms_handle_reset on v2:172.18.0.103:3300/0
Feb 20 09:46:24 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@2(peon) e16  my rank is now 1 (was 2)
Feb 20 09:46:24 np0005625203.localdomain ceph-mgr[285471]: client.0 ms_handle_reset on v2:172.18.0.104:3300/0
Feb 20 09:46:24 np0005625203.localdomain ceph-mgr[285471]: client.0 ms_handle_reset on v2:172.18.0.104:3300/0
Feb 20 09:46:24 np0005625203.localdomain ceph-mgr[285471]: client.34441 ms_handle_reset on v2:172.18.0.104:3300/0
Feb 20 09:46:24 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(probing) e16 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625202"} v 0)
Feb 20 09:46:24 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625202"} : dispatch
Feb 20 09:46:24 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(probing) e16 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625203"} v 0)
Feb 20 09:46:24 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:46:24 np0005625203.localdomain ceph-mgr[285471]: client.27594 ms_handle_reset on v2:172.18.0.108:3300/0
Feb 20 09:46:24 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(probing) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0)
Feb 20 09:46:24 np0005625203.localdomain ceph-mon[296066]: log_channel(cluster) log [INF] : mon.np0005625203 calling monitor election
Feb 20 09:46:24 np0005625203.localdomain ceph-mon[296066]: paxos.1).electionLogic(62) init, last seen epoch 62
Feb 20 09:46:24 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(electing) e16 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 20 09:46:24 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(electing) e16 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 20 09:46:24 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 20 09:46:24 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0)
Feb 20 09:46:24 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)...
Feb 20 09:46:24 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)...
Feb 20 09:46:24 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Feb 20 09:46:24 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Feb 20 09:46:24 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 09:46:24 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:46:24 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on np0005625204.localdomain
Feb 20 09:46:24 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on np0005625204.localdomain
Feb 20 09:46:24 np0005625203.localdomain ceph-mgr[285471]: log_channel(cluster) log [DBG] : pgmap v37: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:46:24 np0005625203.localdomain ceph-mon[296066]: Reconfiguring crash.np0005625204 (monmap changed)...
Feb 20 09:46:24 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon crash.np0005625204 on np0005625204.localdomain
Feb 20 09:46:24 np0005625203.localdomain ceph-mon[296066]: from='client.44515 -' entity='client.admin' cmd=[{"prefix": "orch daemon rm", "names": ["mon.np0005625204"], "force": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 20 09:46:24 np0005625203.localdomain ceph-mon[296066]: Remove daemons mon.np0005625204
Feb 20 09:46:24 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "quorum_status"} : dispatch
Feb 20 09:46:24 np0005625203.localdomain ceph-mon[296066]: Safe to remove mon.np0005625204: new quorum should be ['np0005625202', 'np0005625203'] (from ['np0005625202', 'np0005625203'])
Feb 20 09:46:24 np0005625203.localdomain ceph-mon[296066]: Removing monitor np0005625204 from monmap...
Feb 20 09:46:24 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon rm", "name": "np0005625204"} : dispatch
Feb 20 09:46:24 np0005625203.localdomain ceph-mon[296066]: Removing daemon mon.np0005625204 from np0005625204.localdomain -- ports []
Feb 20 09:46:24 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625202"} : dispatch
Feb 20 09:46:24 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:46:24 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203 calling monitor election
Feb 20 09:46:24 np0005625203.localdomain ceph-mon[296066]: mon.np0005625202 calling monitor election
Feb 20 09:46:24 np0005625203.localdomain ceph-mon[296066]: mon.np0005625202 is new leader, mons np0005625202,np0005625203 in quorum (ranks 0,1)
Feb 20 09:46:24 np0005625203.localdomain ceph-mon[296066]: monmap epoch 16
Feb 20 09:46:24 np0005625203.localdomain ceph-mon[296066]: fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:46:24 np0005625203.localdomain ceph-mon[296066]: last_changed 2026-02-20T09:46:24.360760+0000
Feb 20 09:46:24 np0005625203.localdomain ceph-mon[296066]: created 2026-02-20T07:36:51.191305+0000
Feb 20 09:46:24 np0005625203.localdomain ceph-mon[296066]: min_mon_release 18 (reef)
Feb 20 09:46:24 np0005625203.localdomain ceph-mon[296066]: election_strategy: 1
Feb 20 09:46:24 np0005625203.localdomain ceph-mon[296066]: 0: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005625202
Feb 20 09:46:24 np0005625203.localdomain ceph-mon[296066]: 1: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005625203
Feb 20 09:46:24 np0005625203.localdomain ceph-mon[296066]: fsmap cephfs:1 {0=mds.np0005625203.zsrwgk=up:active} 2 up:standby
Feb 20 09:46:24 np0005625203.localdomain ceph-mon[296066]: osdmap e89: 6 total, 6 up, 6 in
Feb 20 09:46:24 np0005625203.localdomain ceph-mon[296066]: mgrmap e36: np0005625203.lonygy(active, since 68s), standbys: np0005625204.exgrzx, np0005625201.mtnyvu, np0005625202.arwxwo
Feb 20 09:46:24 np0005625203.localdomain ceph-mon[296066]: overall HEALTH_OK
Feb 20 09:46:24 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:24 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:24 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Feb 20 09:46:24 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:46:25 np0005625203.localdomain ceph-mon[296066]: Reconfiguring osd.0 (monmap changed)...
Feb 20 09:46:25 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon osd.0 on np0005625204.localdomain
Feb 20 09:46:25 np0005625203.localdomain ceph-mon[296066]: pgmap v37: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:46:26 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0)
Feb 20 09:46:26 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0)
Feb 20 09:46:26 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring osd.3 (monmap changed)...
Feb 20 09:46:26 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring osd.3 (monmap changed)...
Feb 20 09:46:26 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command({"prefix": "auth get", "entity": "osd.3"} v 0)
Feb 20 09:46:26 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Feb 20 09:46:26 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 09:46:26 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:46:26 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.3 on np0005625204.localdomain
Feb 20 09:46:26 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.3 on np0005625204.localdomain
Feb 20 09:46:26 np0005625203.localdomain ceph-mgr[285471]: log_channel(cluster) log [DBG] : pgmap v38: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:46:27 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:27 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:27 np0005625203.localdomain ceph-mon[296066]: Reconfiguring osd.3 (monmap changed)...
Feb 20 09:46:27 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Feb 20 09:46:27 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:46:27 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon osd.3 on np0005625204.localdomain
Feb 20 09:46:27 np0005625203.localdomain ceph-mon[296066]: pgmap v38: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:46:27 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0)
Feb 20 09:46:27 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0)
Feb 20 09:46:27 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005625204.wnsphl (monmap changed)...
Feb 20 09:46:27 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005625204.wnsphl (monmap changed)...
Feb 20 09:46:27 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005625204.wnsphl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Feb 20 09:46:27 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625204.wnsphl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 09:46:27 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 09:46:27 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:46:27 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005625204.wnsphl on np0005625204.localdomain
Feb 20 09:46:27 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005625204.wnsphl on np0005625204.localdomain
Feb 20 09:46:27 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:46:27 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:46:27 np0005625203.localdomain podman[300329]: 2026-02-20 09:46:27.777836606 +0000 UTC m=+0.089977665 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, build-date=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1770267347, name=ubi9/ubi-minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.tags=minimal rhel9, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, version=9.7, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=)
Feb 20 09:46:27 np0005625203.localdomain podman[300329]: 2026-02-20 09:46:27.790309753 +0000 UTC m=+0.102450812 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, org.opencontainers.image.created=2026-02-05T04:57:10Z, version=9.7, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, io.openshift.expose-services=, config_id=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', 
'/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, build-date=2026-02-05T04:57:10Z, maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64)
Feb 20 09:46:27 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:46:27 np0005625203.localdomain podman[300328]: 2026-02-20 09:46:27.879572235 +0000 UTC m=+0.191464886 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_compute, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute)
Feb 20 09:46:27 np0005625203.localdomain podman[300328]: 2026-02-20 09:46:27.916318723 +0000 UTC m=+0.228211394 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127)
Feb 20 09:46:27 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:46:28 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0)
Feb 20 09:46:28 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0)
Feb 20 09:46:28 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005625204.exgrzx (monmap changed)...
Feb 20 09:46:28 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005625204.exgrzx (monmap changed)...
Feb 20 09:46:28 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005625204.exgrzx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Feb 20 09:46:28 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625204.exgrzx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:46:28 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb 20 09:46:28 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mgr services"} : dispatch
Feb 20 09:46:28 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 09:46:28 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:46:28 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005625204.exgrzx on np0005625204.localdomain
Feb 20 09:46:28 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005625204.exgrzx on np0005625204.localdomain
Feb 20 09:46:28 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:28 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:28 np0005625203.localdomain ceph-mon[296066]: Reconfiguring mds.mds.np0005625204.wnsphl (monmap changed)...
Feb 20 09:46:28 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625204.wnsphl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 09:46:28 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625204.wnsphl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 09:46:28 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:46:28 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon mds.mds.np0005625204.wnsphl on np0005625204.localdomain
Feb 20 09:46:28 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:28 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:28 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625204.exgrzx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:46:28 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625204.exgrzx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:46:28 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mgr services"} : dispatch
Feb 20 09:46:28 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:46:28 np0005625203.localdomain ceph-mgr[285471]: log_channel(cluster) log [DBG] : pgmap v39: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:46:28 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:46:28 np0005625203.localdomain podman[240359]: time="2026-02-20T09:46:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:46:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:46:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154068 "" "Go-http-client/1.1"
Feb 20 09:46:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:46:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17792 "" "Go-http-client/1.1"
Feb 20 09:46:29 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0)
Feb 20 09:46:29 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0)
Feb 20 09:46:29 np0005625203.localdomain sudo[300367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:46:29 np0005625203.localdomain sudo[300367]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:29 np0005625203.localdomain sudo[300367]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:29 np0005625203.localdomain sudo[300385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 09:46:29 np0005625203.localdomain sudo[300385]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:29 np0005625203.localdomain ceph-mon[296066]: Reconfiguring mgr.np0005625204.exgrzx (monmap changed)...
Feb 20 09:46:29 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon mgr.np0005625204.exgrzx on np0005625204.localdomain
Feb 20 09:46:29 np0005625203.localdomain ceph-mon[296066]: pgmap v39: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:46:29 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:29 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:30 np0005625203.localdomain sudo[300385]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:30 np0005625203.localdomain sshd[300436]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:46:30 np0005625203.localdomain sshd[300436]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:46:30 np0005625203.localdomain ceph-mgr[285471]: log_channel(cluster) log [DBG] : pgmap v40: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:46:30 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0)
Feb 20 09:46:30 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0)
Feb 20 09:46:31 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 09:46:31 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:46:31 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 20 09:46:31 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 09:46:31 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Updating np0005625202.localdomain:/etc/ceph/ceph.conf
Feb 20 09:46:31 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Updating np0005625202.localdomain:/etc/ceph/ceph.conf
Feb 20 09:46:31 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Updating np0005625203.localdomain:/etc/ceph/ceph.conf
Feb 20 09:46:31 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Updating np0005625203.localdomain:/etc/ceph/ceph.conf
Feb 20 09:46:31 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Updating np0005625204.localdomain:/etc/ceph/ceph.conf
Feb 20 09:46:31 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Updating np0005625204.localdomain:/etc/ceph/ceph.conf
Feb 20 09:46:31 np0005625203.localdomain sshd[300452]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:46:31 np0005625203.localdomain sudo[300438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Feb 20 09:46:31 np0005625203.localdomain sudo[300438]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:31 np0005625203.localdomain sudo[300438]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:31 np0005625203.localdomain ceph-mon[296066]: pgmap v40: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:46:31 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:31 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:31 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:46:31 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 09:46:31 np0005625203.localdomain sudo[300457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph
Feb 20 09:46:31 np0005625203.localdomain sudo[300457]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:31 np0005625203.localdomain sudo[300457]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:32 np0005625203.localdomain sudo[300475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:46:32 np0005625203.localdomain sudo[300475]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:32 np0005625203.localdomain sudo[300475]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:32 np0005625203.localdomain sudo[300493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:46:32 np0005625203.localdomain sudo[300493]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:32 np0005625203.localdomain sudo[300493]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:32 np0005625203.localdomain sudo[300512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:46:32 np0005625203.localdomain sudo[300512]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:32 np0005625203.localdomain sudo[300512]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:32 np0005625203.localdomain sudo[300546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:46:32 np0005625203.localdomain sudo[300546]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:32 np0005625203.localdomain sudo[300546]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:32 np0005625203.localdomain sudo[300564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:46:32 np0005625203.localdomain sudo[300564]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:32 np0005625203.localdomain sudo[300564]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:32 np0005625203.localdomain sshd[300452]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:46:32 np0005625203.localdomain sudo[300582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Feb 20 09:46:32 np0005625203.localdomain sudo[300582]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:32 np0005625203.localdomain sudo[300582]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:32 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:46:32 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:46:32 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:46:32 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:46:32 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:46:32 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:46:32 np0005625203.localdomain sudo[300600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config
Feb 20 09:46:32 np0005625203.localdomain sudo[300600]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:32 np0005625203.localdomain sudo[300600]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:32 np0005625203.localdomain sudo[300618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config
Feb 20 09:46:32 np0005625203.localdomain sudo[300618]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:32 np0005625203.localdomain sudo[300618]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:32 np0005625203.localdomain ceph-mgr[285471]: log_channel(cluster) log [DBG] : pgmap v41: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:46:32 np0005625203.localdomain sudo[300636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:46:32 np0005625203.localdomain sudo[300636]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:32 np0005625203.localdomain sudo[300636]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:32 np0005625203.localdomain sudo[300654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:46:32 np0005625203.localdomain sudo[300654]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:32 np0005625203.localdomain sudo[300654]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:32 np0005625203.localdomain sudo[300672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:46:32 np0005625203.localdomain sudo[300672]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:32 np0005625203.localdomain sudo[300672]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:32 np0005625203.localdomain ceph-mon[296066]: Updating np0005625202.localdomain:/etc/ceph/ceph.conf
Feb 20 09:46:32 np0005625203.localdomain ceph-mon[296066]: Updating np0005625203.localdomain:/etc/ceph/ceph.conf
Feb 20 09:46:32 np0005625203.localdomain ceph-mon[296066]: Updating np0005625204.localdomain:/etc/ceph/ceph.conf
Feb 20 09:46:33 np0005625203.localdomain sudo[300706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:46:33 np0005625203.localdomain sudo[300706]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:33 np0005625203.localdomain sudo[300706]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:33 np0005625203.localdomain sudo[300724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:46:33 np0005625203.localdomain sudo[300724]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:33 np0005625203.localdomain sudo[300724]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:33 np0005625203.localdomain sudo[300742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:46:33 np0005625203.localdomain sudo[300742]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:33 np0005625203.localdomain sudo[300742]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:33 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0)
Feb 20 09:46:33 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0)
Feb 20 09:46:33 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0)
Feb 20 09:46:33 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0)
Feb 20 09:46:33 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0)
Feb 20 09:46:33 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0)
Feb 20 09:46:33 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 20 09:46:33 np0005625203.localdomain ceph-mgr[285471]: [progress INFO root] update: starting ev 5fdf9d8b-0ec9-4cba-8829-ecc56276508b (Updating node-proxy deployment (+3 -> 3))
Feb 20 09:46:33 np0005625203.localdomain ceph-mgr[285471]: [progress INFO root] complete: finished ev 5fdf9d8b-0ec9-4cba-8829-ecc56276508b (Updating node-proxy deployment (+3 -> 3))
Feb 20 09:46:33 np0005625203.localdomain ceph-mgr[285471]: [progress INFO root] Completed event 5fdf9d8b-0ec9-4cba-8829-ecc56276508b (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Feb 20 09:46:33 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 20 09:46:33 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 09:46:33 np0005625203.localdomain sudo[300760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:46:33 np0005625203.localdomain sudo[300760]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:33 np0005625203.localdomain sudo[300760]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:33 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005625202 (monmap changed)...
Feb 20 09:46:33 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005625202 (monmap changed)...
Feb 20 09:46:33 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005625202", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Feb 20 09:46:33 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625202", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:46:33 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 09:46:33 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:46:33 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005625202 on np0005625202.localdomain
Feb 20 09:46:33 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005625202 on np0005625202.localdomain
Feb 20 09:46:33 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:46:33 np0005625203.localdomain ceph-mon[296066]: Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:46:33 np0005625203.localdomain ceph-mon[296066]: Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:46:33 np0005625203.localdomain ceph-mon[296066]: Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:46:33 np0005625203.localdomain ceph-mon[296066]: pgmap v41: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:46:33 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:33 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:33 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:33 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:33 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:33 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:33 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:33 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 09:46:33 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625202", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:46:33 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625202", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:46:33 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:46:34 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0)
Feb 20 09:46:34 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0)
Feb 20 09:46:34 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring osd.2 (monmap changed)...
Feb 20 09:46:34 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring osd.2 (monmap changed)...
Feb 20 09:46:34 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Feb 20 09:46:34 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Feb 20 09:46:34 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 09:46:34 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:46:34 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.2 on np0005625202.localdomain
Feb 20 09:46:34 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.2 on np0005625202.localdomain
Feb 20 09:46:34 np0005625203.localdomain ceph-mgr[285471]: log_channel(cluster) log [DBG] : pgmap v42: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:46:34 np0005625203.localdomain ceph-mon[296066]: Reconfiguring crash.np0005625202 (monmap changed)...
Feb 20 09:46:34 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon crash.np0005625202 on np0005625202.localdomain
Feb 20 09:46:34 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:34 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:34 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Feb 20 09:46:34 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:46:35 np0005625203.localdomain sshd[300778]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:46:35 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0)
Feb 20 09:46:35 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0)
Feb 20 09:46:35 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring osd.5 (monmap changed)...
Feb 20 09:46:35 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring osd.5 (monmap changed)...
Feb 20 09:46:35 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command({"prefix": "auth get", "entity": "osd.5"} v 0)
Feb 20 09:46:35 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Feb 20 09:46:35 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 09:46:35 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:46:35 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.5 on np0005625202.localdomain
Feb 20 09:46:35 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.5 on np0005625202.localdomain
Feb 20 09:46:35 np0005625203.localdomain ceph-mon[296066]: Reconfiguring osd.2 (monmap changed)...
Feb 20 09:46:35 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon osd.2 on np0005625202.localdomain
Feb 20 09:46:35 np0005625203.localdomain ceph-mon[296066]: pgmap v42: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:46:35 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:35 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:35 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Feb 20 09:46:35 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:46:36 np0005625203.localdomain sshd[300778]: Invalid user claude from 194.107.115.2 port 34670
Feb 20 09:46:36 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0)
Feb 20 09:46:36 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0)
Feb 20 09:46:36 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005625202.akhmop (monmap changed)...
Feb 20 09:46:36 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005625202.akhmop (monmap changed)...
Feb 20 09:46:36 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005625202.akhmop", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Feb 20 09:46:36 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625202.akhmop", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 09:46:36 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 09:46:36 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:46:36 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005625202.akhmop on np0005625202.localdomain
Feb 20 09:46:36 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005625202.akhmop on np0005625202.localdomain
Feb 20 09:46:36 np0005625203.localdomain sshd[300778]: Received disconnect from 194.107.115.2 port 34670:11: Bye Bye [preauth]
Feb 20 09:46:36 np0005625203.localdomain sshd[300778]: Disconnected from invalid user claude 194.107.115.2 port 34670 [preauth]
Feb 20 09:46:36 np0005625203.localdomain ceph-mgr[285471]: log_channel(cluster) log [DBG] : pgmap v43: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:46:36 np0005625203.localdomain ceph-mon[296066]: Reconfiguring osd.5 (monmap changed)...
Feb 20 09:46:36 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon osd.5 on np0005625202.localdomain
Feb 20 09:46:36 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:36 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:36 np0005625203.localdomain ceph-mon[296066]: Reconfiguring mds.mds.np0005625202.akhmop (monmap changed)...
Feb 20 09:46:36 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625202.akhmop", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 09:46:36 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625202.akhmop", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 09:46:36 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:46:36 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon mds.mds.np0005625202.akhmop on np0005625202.localdomain
Feb 20 09:46:36 np0005625203.localdomain ceph-mon[296066]: pgmap v43: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:46:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:46:37 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 09:46:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:46:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:46:37 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 09:46:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:46:37 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0)
Feb 20 09:46:37 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #19. Immutable memtables: 0.
Feb 20 09:46:37 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:46:37.296108) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 20 09:46:37 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19
Feb 20 09:46:37 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580797296155, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 1314, "num_deletes": 257, "total_data_size": 2254242, "memory_usage": 2285056, "flush_reason": "Manual Compaction"}
Feb 20 09:46:37 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started
Feb 20 09:46:37 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580797308654, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 1274777, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12998, "largest_seqno": 14307, "table_properties": {"data_size": 1268926, "index_size": 3001, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 15422, "raw_average_key_size": 21, "raw_value_size": 1255990, "raw_average_value_size": 1756, "num_data_blocks": 131, "num_entries": 715, "num_filter_entries": 715, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580770, "oldest_key_time": 1771580770, "file_creation_time": 1771580797, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d7091244-ec7b-4a4c-98bf-27480b1bb7f4", "db_session_id": "XSHOJ401GNN3F43CMPC3", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Feb 20 09:46:37 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 12600 microseconds, and 4194 cpu microseconds.
Feb 20 09:46:37 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 20 09:46:37 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:46:37.308705) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 1274777 bytes OK
Feb 20 09:46:37 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:46:37.308731) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started
Feb 20 09:46:37 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:46:37.312197) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done
Feb 20 09:46:37 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:46:37.312222) EVENT_LOG_v1 {"time_micros": 1771580797312215, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 20 09:46:37 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:46:37.312245) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 20 09:46:37 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 2247391, prev total WAL file size 2247715, number of live WAL files 2.
Feb 20 09:46:37 np0005625203.localdomain ceph-mon[296066]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 09:46:37 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:46:37.313119) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033353139' seq:72057594037927935, type:22 .. '6C6F676D0033373732' seq:0, type:0; will stop at (end)
Feb 20 09:46:37 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 20 09:46:37 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(1244KB)], [18(17MB)]
Feb 20 09:46:37 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580797313179, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 19317834, "oldest_snapshot_seqno": -1}
Feb 20 09:46:37 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0)
Feb 20 09:46:37 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005625202.arwxwo (monmap changed)...
Feb 20 09:46:37 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005625202.arwxwo (monmap changed)...
Feb 20 09:46:37 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005625202.arwxwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Feb 20 09:46:37 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625202.arwxwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:46:37 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb 20 09:46:37 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mgr services"} : dispatch
Feb 20 09:46:37 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 09:46:37 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:46:37 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005625202.arwxwo on np0005625202.localdomain
Feb 20 09:46:37 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005625202.arwxwo on np0005625202.localdomain
Feb 20 09:46:37 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #21: 11265 keys, 19176781 bytes, temperature: kUnknown
Feb 20 09:46:37 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580797400434, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 19176781, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 19111058, "index_size": 36438, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28229, "raw_key_size": 303020, "raw_average_key_size": 26, "raw_value_size": 18917405, "raw_average_value_size": 1679, "num_data_blocks": 1386, "num_entries": 11265, "num_filter_entries": 11265, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580698, "oldest_key_time": 0, "file_creation_time": 1771580797, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d7091244-ec7b-4a4c-98bf-27480b1bb7f4", "db_session_id": "XSHOJ401GNN3F43CMPC3", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}}
Feb 20 09:46:37 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 20 09:46:37 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:46:37.400759) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 19176781 bytes
Feb 20 09:46:37 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:46:37.402277) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 221.1 rd, 219.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 17.2 +0.0 blob) out(18.3 +0.0 blob), read-write-amplify(30.2) write-amplify(15.0) OK, records in: 11809, records dropped: 544 output_compression: NoCompression
Feb 20 09:46:37 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:46:37.402299) EVENT_LOG_v1 {"time_micros": 1771580797402289, "job": 8, "event": "compaction_finished", "compaction_time_micros": 87369, "compaction_time_cpu_micros": 50335, "output_level": 6, "num_output_files": 1, "total_output_size": 19176781, "num_input_records": 11809, "num_output_records": 11265, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 20 09:46:37 np0005625203.localdomain ceph-mon[296066]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 09:46:37 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580797402597, "job": 8, "event": "table_file_deletion", "file_number": 20}
Feb 20 09:46:37 np0005625203.localdomain ceph-mon[296066]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 09:46:37 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580797404616, "job": 8, "event": "table_file_deletion", "file_number": 18}
Feb 20 09:46:37 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:46:37.313014) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:46:37 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:46:37.404731) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:46:37 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:46:37.404742) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:46:37 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:46:37.404746) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:46:37 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:46:37.404749) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:46:37 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:46:37.404766) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:46:37 np0005625203.localdomain ceph-mgr[285471]: log_channel(audit) log [DBG] : from='client.44518 -' entity='client.admin' cmd=[{"prefix": "orch daemon add", "daemon_type": "mon", "placement": "np0005625204.localdomain:172.18.0.105", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 09:46:37 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Feb 20 09:46:37 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Feb 20 09:46:37 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 20 09:46:37 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 09:46:37 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:46:37 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Deploying daemon mon.np0005625204 on np0005625204.localdomain
Feb 20 09:46:37 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Deploying daemon mon.np0005625204 on np0005625204.localdomain
Feb 20 09:46:37 np0005625203.localdomain ceph-mgr[285471]: [progress INFO root] Writing back 50 completed events
Feb 20 09:46:37 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb 20 09:46:38 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0)
Feb 20 09:46:38 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0)
Feb 20 09:46:38 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005625203 (monmap changed)...
Feb 20 09:46:38 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005625203 (monmap changed)...
Feb 20 09:46:38 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005625203", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Feb 20 09:46:38 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625203", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:46:38 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 09:46:38 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:46:38 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005625203 on np0005625203.localdomain
Feb 20 09:46:38 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005625203 on np0005625203.localdomain
Feb 20 09:46:38 np0005625203.localdomain sudo[300780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:46:38 np0005625203.localdomain sudo[300780]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:38 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:46:38 np0005625203.localdomain sudo[300780]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:38 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:38 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:38 np0005625203.localdomain ceph-mon[296066]: Reconfiguring mgr.np0005625202.arwxwo (monmap changed)...
Feb 20 09:46:38 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625202.arwxwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:46:38 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625202.arwxwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:46:38 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mgr services"} : dispatch
Feb 20 09:46:38 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:46:38 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon mgr.np0005625202.arwxwo on np0005625202.localdomain
Feb 20 09:46:38 np0005625203.localdomain ceph-mon[296066]: from='client.44518 -' entity='client.admin' cmd=[{"prefix": "orch daemon add", "daemon_type": "mon", "placement": "np0005625204.localdomain:172.18.0.105", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 09:46:38 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:38 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 20 09:46:38 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:46:38 np0005625203.localdomain ceph-mon[296066]: Deploying daemon mon.np0005625204 on np0005625204.localdomain
Feb 20 09:46:38 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:38 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:38 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:38 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625203", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:46:38 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625203", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:46:38 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:46:38 np0005625203.localdomain sudo[300799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:46:38 np0005625203.localdomain sudo[300799]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:38 np0005625203.localdomain podman[300798]: 2026-02-20 09:46:38.400628315 +0000 UTC m=+0.104055541 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 09:46:38 np0005625203.localdomain podman[300798]: 2026-02-20 09:46:38.41533729 +0000 UTC m=+0.118764596 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 20 09:46:38 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 09:46:38 np0005625203.localdomain ceph-mgr[285471]: log_channel(cluster) log [DBG] : pgmap v44: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:46:38 np0005625203.localdomain podman[300854]: 
Feb 20 09:46:38 np0005625203.localdomain podman[300854]: 2026-02-20 09:46:38.854000555 +0000 UTC m=+0.076797207 container create 2b91df449c2fad0da28ca93e0a2489cfa44d2c6a76c743ab7fa335dfd9b23803 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_golick, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, version=7, vendor=Red Hat, Inc., build-date=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, release=1770267347, io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, RELEASE=main, io.openshift.tags=rhceph ceph, vcs-type=git, com.redhat.component=rhceph-container, io.openshift.expose-services=, distribution-scope=public, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=)
Feb 20 09:46:38 np0005625203.localdomain systemd[1]: Started libpod-conmon-2b91df449c2fad0da28ca93e0a2489cfa44d2c6a76c743ab7fa335dfd9b23803.scope.
Feb 20 09:46:38 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:46:38 np0005625203.localdomain podman[300854]: 2026-02-20 09:46:38.821086786 +0000 UTC m=+0.043883498 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 09:46:38 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:46:38 np0005625203.localdomain podman[300854]: 2026-02-20 09:46:38.959790979 +0000 UTC m=+0.182587641 container init 2b91df449c2fad0da28ca93e0a2489cfa44d2c6a76c743ab7fa335dfd9b23803 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_golick, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, description=Red Hat Ceph Storage 7, distribution-scope=public, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, build-date=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=rhceph, ceph=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, vcs-type=git, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1770267347)
Feb 20 09:46:38 np0005625203.localdomain podman[300854]: 2026-02-20 09:46:38.970655626 +0000 UTC m=+0.193452308 container start 2b91df449c2fad0da28ca93e0a2489cfa44d2c6a76c743ab7fa335dfd9b23803 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_golick, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, release=1770267347, io.buildah.version=1.42.2, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, name=rhceph, architecture=x86_64, GIT_CLEAN=True, CEPH_POINT_RELEASE=, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7)
Feb 20 09:46:38 np0005625203.localdomain podman[300854]: 2026-02-20 09:46:38.971923674 +0000 UTC m=+0.194720386 container attach 2b91df449c2fad0da28ca93e0a2489cfa44d2c6a76c743ab7fa335dfd9b23803 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_golick, GIT_CLEAN=True, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.tags=rhceph ceph, ceph=True, description=Red Hat Ceph Storage 7, release=1770267347, io.openshift.expose-services=, RELEASE=main, architecture=x86_64, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, GIT_BRANCH=main, io.buildah.version=1.42.2, org.opencontainers.image.created=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., build-date=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.component=rhceph-container, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Feb 20 09:46:38 np0005625203.localdomain stupefied_golick[300869]: 167 167
Feb 20 09:46:38 np0005625203.localdomain systemd[1]: libpod-2b91df449c2fad0da28ca93e0a2489cfa44d2c6a76c743ab7fa335dfd9b23803.scope: Deactivated successfully.
Feb 20 09:46:38 np0005625203.localdomain podman[300854]: 2026-02-20 09:46:38.975922469 +0000 UTC m=+0.198719211 container died 2b91df449c2fad0da28ca93e0a2489cfa44d2c6a76c743ab7fa335dfd9b23803 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_golick, vendor=Red Hat, Inc., io.openshift.expose-services=, name=rhceph, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.42.2, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, RELEASE=main, io.openshift.tags=rhceph ceph, release=1770267347, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, GIT_BRANCH=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, build-date=2026-02-09T10:25:24Z, architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2026-02-09T10:25:24Z)
Feb 20 09:46:39 np0005625203.localdomain podman[300874]: 2026-02-20 09:46:39.048299758 +0000 UTC m=+0.058687378 container remove 2b91df449c2fad0da28ca93e0a2489cfa44d2c6a76c743ab7fa335dfd9b23803 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=stupefied_golick, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, version=7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2026-02-09T10:25:24Z, io.openshift.expose-services=)
Feb 20 09:46:39 np0005625203.localdomain systemd[1]: libpod-conmon-2b91df449c2fad0da28ca93e0a2489cfa44d2c6a76c743ab7fa335dfd9b23803.scope: Deactivated successfully.
Feb 20 09:46:39 np0005625203.localdomain sudo[300799]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:39 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0)
Feb 20 09:46:39 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0)
Feb 20 09:46:39 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Feb 20 09:46:39 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Feb 20 09:46:39 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Feb 20 09:46:39 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Feb 20 09:46:39 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 09:46:39 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:46:39 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on np0005625203.localdomain
Feb 20 09:46:39 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on np0005625203.localdomain
Feb 20 09:46:39 np0005625203.localdomain sudo[300891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:46:39 np0005625203.localdomain sudo[300891]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:39 np0005625203.localdomain sudo[300891]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:39 np0005625203.localdomain sudo[300909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:46:39 np0005625203.localdomain sudo[300909]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:39 np0005625203.localdomain ceph-mon[296066]: Reconfiguring crash.np0005625203 (monmap changed)...
Feb 20 09:46:39 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon crash.np0005625203 on np0005625203.localdomain
Feb 20 09:46:39 np0005625203.localdomain ceph-mon[296066]: pgmap v44: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:46:39 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:39 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:39 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Feb 20 09:46:39 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:46:39 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-1210e889fa7d57f91431bbe36ff4a4d54858873eca856fabdaa763677025a274-merged.mount: Deactivated successfully.
Feb 20 09:46:39 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0)
Feb 20 09:46:39 np0005625203.localdomain podman[300944]: 
Feb 20 09:46:39 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0)
Feb 20 09:46:39 np0005625203.localdomain podman[300944]: 2026-02-20 09:46:39.820758033 +0000 UTC m=+0.077915192 container create 41f21fe68e292d492b0261021d1d4d879658bee886f7077af17e8a91a70fbb31 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_banach, io.buildah.version=1.42.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, release=1770267347, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, ceph=True, vcs-type=git, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, RELEASE=main, build-date=2026-02-09T10:25:24Z, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_CLEAN=True, io.openshift.tags=rhceph ceph)
Feb 20 09:46:39 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:46:39 np0005625203.localdomain systemd[1]: Started libpod-conmon-41f21fe68e292d492b0261021d1d4d879658bee886f7077af17e8a91a70fbb31.scope.
Feb 20 09:46:39 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:46:39 np0005625203.localdomain podman[300944]: 2026-02-20 09:46:39.788071231 +0000 UTC m=+0.045228400 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 09:46:39 np0005625203.localdomain podman[300944]: 2026-02-20 09:46:39.889272894 +0000 UTC m=+0.146430053 container init 41f21fe68e292d492b0261021d1d4d879658bee886f7077af17e8a91a70fbb31 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_banach, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, release=1770267347, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_BRANCH=main, CEPH_POINT_RELEASE=, architecture=x86_64, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, vcs-type=git, build-date=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, name=rhceph, ceph=True, description=Red Hat Ceph Storage 7)
Feb 20 09:46:39 np0005625203.localdomain podman[300944]: 2026-02-20 09:46:39.898552901 +0000 UTC m=+0.155710070 container start 41f21fe68e292d492b0261021d1d4d879658bee886f7077af17e8a91a70fbb31 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_banach, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, io.buildah.version=1.42.2, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, RELEASE=main, GIT_BRANCH=main, build-date=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, name=rhceph, ceph=True)
Feb 20 09:46:39 np0005625203.localdomain podman[300944]: 2026-02-20 09:46:39.898905632 +0000 UTC m=+0.156062831 container attach 41f21fe68e292d492b0261021d1d4d879658bee886f7077af17e8a91a70fbb31 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_banach, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, ceph=True, build-date=2026-02-09T10:25:24Z, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, version=7, io.openshift.tags=rhceph ceph, release=1770267347, description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Feb 20 09:46:39 np0005625203.localdomain interesting_banach[300960]: 167 167
Feb 20 09:46:39 np0005625203.localdomain systemd[1]: libpod-41f21fe68e292d492b0261021d1d4d879658bee886f7077af17e8a91a70fbb31.scope: Deactivated successfully.
Feb 20 09:46:39 np0005625203.localdomain podman[300944]: 2026-02-20 09:46:39.904925188 +0000 UTC m=+0.162082347 container died 41f21fe68e292d492b0261021d1d4d879658bee886f7077af17e8a91a70fbb31 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_banach, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., name=rhceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, release=1770267347, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_CLEAN=True, build-date=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, vcs-type=git, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Feb 20 09:46:39 np0005625203.localdomain podman[300959]: 2026-02-20 09:46:39.965980307 +0000 UTC m=+0.100941264 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 20 09:46:39 np0005625203.localdomain podman[300959]: 2026-02-20 09:46:39.979546947 +0000 UTC m=+0.114507874 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 20 09:46:40 np0005625203.localdomain podman[300975]: 2026-02-20 09:46:40.013936001 +0000 UTC m=+0.095676502 container remove 41f21fe68e292d492b0261021d1d4d879658bee886f7077af17e8a91a70fbb31 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=interesting_banach, vendor=Red Hat, Inc., ceph=True, release=1770267347, vcs-type=git, version=7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.created=2026-02-09T10:25:24Z, distribution-scope=public, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, description=Red Hat Ceph Storage 7, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=)
Feb 20 09:46:40 np0005625203.localdomain systemd[1]: libpod-conmon-41f21fe68e292d492b0261021d1d4d879658bee886f7077af17e8a91a70fbb31.scope: Deactivated successfully.
Feb 20 09:46:40 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:46:40 np0005625203.localdomain sudo[300909]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:40 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0)
Feb 20 09:46:40 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0)
Feb 20 09:46:40 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring osd.4 (monmap changed)...
Feb 20 09:46:40 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring osd.4 (monmap changed)...
Feb 20 09:46:40 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command({"prefix": "auth get", "entity": "osd.4"} v 0)
Feb 20 09:46:40 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Feb 20 09:46:40 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 09:46:40 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:46:40 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.4 on np0005625203.localdomain
Feb 20 09:46:40 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.4 on np0005625203.localdomain
Feb 20 09:46:40 np0005625203.localdomain sudo[301009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:46:40 np0005625203.localdomain sudo[301009]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:40 np0005625203.localdomain sudo[301009]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:40 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-138c050d67445b5e345b131523cd6641a2dc694ef758be53420d38f1db02d10d-merged.mount: Deactivated successfully.
Feb 20 09:46:40 np0005625203.localdomain sudo[301027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:46:40 np0005625203.localdomain sudo[301027]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:40 np0005625203.localdomain ceph-mgr[285471]: log_channel(cluster) log [DBG] : pgmap v45: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:46:40 np0005625203.localdomain ceph-mon[296066]: Reconfiguring osd.1 (monmap changed)...
Feb 20 09:46:40 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon osd.1 on np0005625203.localdomain
Feb 20 09:46:40 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:40 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:40 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:40 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:40 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Feb 20 09:46:40 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:46:40 np0005625203.localdomain podman[301062]: 
Feb 20 09:46:40 np0005625203.localdomain podman[301062]: 2026-02-20 09:46:40.975133506 +0000 UTC m=+0.079793730 container create 12b4cb59d437687fb6645b35f2e84056ee6d0eb77eb727d80054f05fad8f2e7c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_vaughan, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_BRANCH=main, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, RELEASE=main, description=Red Hat Ceph Storage 7, name=rhceph, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, build-date=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., GIT_CLEAN=True, version=7, vcs-type=git, io.buildah.version=1.42.2, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.openshift.tags=rhceph ceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container)
Feb 20 09:46:41 np0005625203.localdomain systemd[1]: Started libpod-conmon-12b4cb59d437687fb6645b35f2e84056ee6d0eb77eb727d80054f05fad8f2e7c.scope.
Feb 20 09:46:41 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:46:41 np0005625203.localdomain podman[301062]: 2026-02-20 09:46:41.039936251 +0000 UTC m=+0.144596475 container init 12b4cb59d437687fb6645b35f2e84056ee6d0eb77eb727d80054f05fad8f2e7c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_vaughan, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, GIT_CLEAN=True, distribution-scope=public, vendor=Red Hat, Inc., version=7, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, release=1770267347, build-date=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 09:46:41 np0005625203.localdomain podman[301062]: 2026-02-20 09:46:40.943622111 +0000 UTC m=+0.048282415 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 09:46:41 np0005625203.localdomain podman[301062]: 2026-02-20 09:46:41.04925882 +0000 UTC m=+0.153919014 container start 12b4cb59d437687fb6645b35f2e84056ee6d0eb77eb727d80054f05fad8f2e7c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_vaughan, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, RELEASE=main, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, GIT_BRANCH=main, distribution-scope=public, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7)
Feb 20 09:46:41 np0005625203.localdomain podman[301062]: 2026-02-20 09:46:41.049546369 +0000 UTC m=+0.154206603 container attach 12b4cb59d437687fb6645b35f2e84056ee6d0eb77eb727d80054f05fad8f2e7c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_vaughan, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_CLEAN=True, name=rhceph, io.openshift.expose-services=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.component=rhceph-container, io.buildah.version=1.42.2, org.opencontainers.image.created=2026-02-09T10:25:24Z, architecture=x86_64, RELEASE=main, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, distribution-scope=public, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., ceph=True, build-date=2026-02-09T10:25:24Z)
Feb 20 09:46:41 np0005625203.localdomain sweet_vaughan[301079]: 167 167
Feb 20 09:46:41 np0005625203.localdomain podman[301062]: 2026-02-20 09:46:41.052112719 +0000 UTC m=+0.156772943 container died 12b4cb59d437687fb6645b35f2e84056ee6d0eb77eb727d80054f05fad8f2e7c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_vaughan, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, version=7, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, release=1770267347, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, build-date=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7)
Feb 20 09:46:41 np0005625203.localdomain systemd[1]: libpod-12b4cb59d437687fb6645b35f2e84056ee6d0eb77eb727d80054f05fad8f2e7c.scope: Deactivated successfully.
Feb 20 09:46:41 np0005625203.localdomain podman[301084]: 2026-02-20 09:46:41.1200273 +0000 UTC m=+0.059044578 container remove 12b4cb59d437687fb6645b35f2e84056ee6d0eb77eb727d80054f05fad8f2e7c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=sweet_vaughan, name=rhceph, vcs-type=git, architecture=x86_64, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, version=7, io.openshift.tags=rhceph ceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-09T10:25:24Z, RELEASE=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, distribution-scope=public, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.openshift.expose-services=, vendor=Red Hat, Inc., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, CEPH_POINT_RELEASE=, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.42.2, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 09:46:41 np0005625203.localdomain systemd[1]: libpod-conmon-12b4cb59d437687fb6645b35f2e84056ee6d0eb77eb727d80054f05fad8f2e7c.scope: Deactivated successfully.
Feb 20 09:46:41 np0005625203.localdomain sudo[301027]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:41 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0)
Feb 20 09:46:41 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0)
Feb 20 09:46:41 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005625203.zsrwgk (monmap changed)...
Feb 20 09:46:41 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005625203.zsrwgk (monmap changed)...
Feb 20 09:46:41 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005625203.zsrwgk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Feb 20 09:46:41 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625203.zsrwgk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 09:46:41 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 09:46:41 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:46:41 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005625203.zsrwgk on np0005625203.localdomain
Feb 20 09:46:41 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005625203.zsrwgk on np0005625203.localdomain
Feb 20 09:46:41 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-125566f702cf074fcf4678eae20aee079a8bc55ae06ed632bab92fe555c24d3c-merged.mount: Deactivated successfully.
Feb 20 09:46:41 np0005625203.localdomain sudo[301106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:46:41 np0005625203.localdomain sudo[301106]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:41 np0005625203.localdomain sudo[301106]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:41 np0005625203.localdomain sudo[301124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:46:41 np0005625203.localdomain sudo[301124]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:41 np0005625203.localdomain ceph-mon[296066]: Reconfiguring osd.4 (monmap changed)...
Feb 20 09:46:41 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon osd.4 on np0005625203.localdomain
Feb 20 09:46:41 np0005625203.localdomain ceph-mon[296066]: pgmap v45: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:46:41 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:41 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:41 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625203.zsrwgk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 09:46:41 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625203.zsrwgk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 09:46:41 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:46:42 np0005625203.localdomain podman[301158]: 
Feb 20 09:46:42 np0005625203.localdomain podman[301158]: 2026-02-20 09:46:42.06324523 +0000 UTC m=+0.127911540 container create 69d400b18b9fd7725c2b4276c15144c04dc69788a091acfa7d715fe6bf86dc3e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_newton, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, io.openshift.tags=rhceph ceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, name=rhceph, RELEASE=main, release=1770267347, com.redhat.component=rhceph-container, vcs-type=git, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, build-date=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Feb 20 09:46:42 np0005625203.localdomain podman[301158]: 2026-02-20 09:46:41.991284702 +0000 UTC m=+0.055951042 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 09:46:42 np0005625203.localdomain systemd[1]: Started libpod-conmon-69d400b18b9fd7725c2b4276c15144c04dc69788a091acfa7d715fe6bf86dc3e.scope.
Feb 20 09:46:42 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:46:42 np0005625203.localdomain podman[301158]: 2026-02-20 09:46:42.136844697 +0000 UTC m=+0.201511007 container init 69d400b18b9fd7725c2b4276c15144c04dc69788a091acfa7d715fe6bf86dc3e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_newton, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., vcs-type=git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, io.buildah.version=1.42.2, io.openshift.expose-services=, GIT_CLEAN=True, release=1770267347, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.created=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, GIT_BRANCH=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, architecture=x86_64, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-09T10:25:24Z, RELEASE=main, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7)
Feb 20 09:46:42 np0005625203.localdomain podman[301158]: 2026-02-20 09:46:42.146526116 +0000 UTC m=+0.211192416 container start 69d400b18b9fd7725c2b4276c15144c04dc69788a091acfa7d715fe6bf86dc3e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_newton, io.openshift.tags=rhceph ceph, name=rhceph, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, vcs-type=git, io.buildah.version=1.42.2, description=Red Hat Ceph Storage 7, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, com.redhat.component=rhceph-container, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z)
Feb 20 09:46:42 np0005625203.localdomain podman[301158]: 2026-02-20 09:46:42.146816555 +0000 UTC m=+0.211482865 container attach 69d400b18b9fd7725c2b4276c15144c04dc69788a091acfa7d715fe6bf86dc3e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_newton, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, release=1770267347, ceph=True, version=7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.tags=rhceph ceph, vcs-type=git, org.opencontainers.image.created=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, io.openshift.expose-services=, name=rhceph, distribution-scope=public, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, architecture=x86_64)
Feb 20 09:46:42 np0005625203.localdomain pensive_newton[301173]: 167 167
Feb 20 09:46:42 np0005625203.localdomain systemd[1]: libpod-69d400b18b9fd7725c2b4276c15144c04dc69788a091acfa7d715fe6bf86dc3e.scope: Deactivated successfully.
Feb 20 09:46:42 np0005625203.localdomain podman[301158]: 2026-02-20 09:46:42.150563432 +0000 UTC m=+0.215229742 container died 69d400b18b9fd7725c2b4276c15144c04dc69788a091acfa7d715fe6bf86dc3e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_newton, io.buildah.version=1.42.2, build-date=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, version=7, architecture=x86_64, GIT_CLEAN=True, name=rhceph, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, ceph=True, description=Red Hat Ceph Storage 7, release=1770267347, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-09T10:25:24Z)
Feb 20 09:46:42 np0005625203.localdomain podman[301178]: 2026-02-20 09:46:42.251290239 +0000 UTC m=+0.088316735 container remove 69d400b18b9fd7725c2b4276c15144c04dc69788a091acfa7d715fe6bf86dc3e (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=pensive_newton, io.openshift.expose-services=, GIT_CLEAN=True, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.42.2, vcs-type=git, CEPH_POINT_RELEASE=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-09T10:25:24Z, ceph=True, RELEASE=main, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, name=rhceph, io.openshift.tags=rhceph ceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 09:46:42 np0005625203.localdomain systemd[1]: libpod-conmon-69d400b18b9fd7725c2b4276c15144c04dc69788a091acfa7d715fe6bf86dc3e.scope: Deactivated successfully.
Feb 20 09:46:42 np0005625203.localdomain sudo[301124]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:42 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0)
Feb 20 09:46:42 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0)
Feb 20 09:46:42 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring mgr.np0005625203.lonygy (monmap changed)...
Feb 20 09:46:42 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring mgr.np0005625203.lonygy (monmap changed)...
Feb 20 09:46:42 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.np0005625203.lonygy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Feb 20 09:46:42 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625203.lonygy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:46:42 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb 20 09:46:42 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mgr services"} : dispatch
Feb 20 09:46:42 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 09:46:42 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:46:42 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.np0005625203.lonygy on np0005625203.localdomain
Feb 20 09:46:42 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.np0005625203.lonygy on np0005625203.localdomain
Feb 20 09:46:42 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-b8cbecd70a55cb2f2bf8037c01e0f2e6696e59b7ee5a975fa23875adacc57599-merged.mount: Deactivated successfully.
Feb 20 09:46:42 np0005625203.localdomain sudo[301195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:46:42 np0005625203.localdomain sudo[301195]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:42 np0005625203.localdomain sudo[301195]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:42 np0005625203.localdomain sudo[301213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:46:42 np0005625203.localdomain sudo[301213]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:42 np0005625203.localdomain ceph-mgr[285471]: log_channel(cluster) log [DBG] : pgmap v46: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:46:42 np0005625203.localdomain ceph-mon[296066]: Reconfiguring mds.mds.np0005625203.zsrwgk (monmap changed)...
Feb 20 09:46:42 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon mds.mds.np0005625203.zsrwgk on np0005625203.localdomain
Feb 20 09:46:42 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:42 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:42 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625203.lonygy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:46:42 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625203.lonygy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:46:42 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mgr services"} : dispatch
Feb 20 09:46:42 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:46:42 np0005625203.localdomain podman[301248]: 
Feb 20 09:46:42 np0005625203.localdomain podman[301248]: 2026-02-20 09:46:42.920672774 +0000 UTC m=+0.053026462 container create 461254e8b6c550368d5e70de2d59eaada045794a8760a9be98fbaec0f4872d75 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_wu, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, com.redhat.component=rhceph-container, org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-09T10:25:24Z, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, name=rhceph, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, RELEASE=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.buildah.version=1.42.2, release=1770267347, version=7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 09:46:42 np0005625203.localdomain systemd[1]: Started libpod-conmon-461254e8b6c550368d5e70de2d59eaada045794a8760a9be98fbaec0f4872d75.scope.
Feb 20 09:46:42 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:46:42 np0005625203.localdomain podman[301248]: 2026-02-20 09:46:42.979736602 +0000 UTC m=+0.112090310 container init 461254e8b6c550368d5e70de2d59eaada045794a8760a9be98fbaec0f4872d75 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_wu, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, name=rhceph, io.buildah.version=1.42.2, version=7, io.openshift.tags=rhceph ceph, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, ceph=True, build-date=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, release=1770267347, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, GIT_CLEAN=True, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, description=Red Hat Ceph Storage 7)
Feb 20 09:46:42 np0005625203.localdomain podman[301248]: 2026-02-20 09:46:42.991668051 +0000 UTC m=+0.124021769 container start 461254e8b6c550368d5e70de2d59eaada045794a8760a9be98fbaec0f4872d75 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_wu, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-type=git, io.openshift.tags=rhceph ceph, org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, architecture=x86_64, io.buildah.version=1.42.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, RELEASE=main, name=rhceph, io.openshift.expose-services=, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True)
Feb 20 09:46:42 np0005625203.localdomain affectionate_wu[301263]: 167 167
Feb 20 09:46:42 np0005625203.localdomain systemd[1]: libpod-461254e8b6c550368d5e70de2d59eaada045794a8760a9be98fbaec0f4872d75.scope: Deactivated successfully.
Feb 20 09:46:42 np0005625203.localdomain podman[301248]: 2026-02-20 09:46:42.896346361 +0000 UTC m=+0.028700129 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 09:46:42 np0005625203.localdomain podman[301248]: 2026-02-20 09:46:42.994764556 +0000 UTC m=+0.127118264 container attach 461254e8b6c550368d5e70de2d59eaada045794a8760a9be98fbaec0f4872d75 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_wu, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., GIT_CLEAN=True, architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-09T10:25:24Z, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, release=1770267347, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2026-02-09T10:25:24Z, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, CEPH_POINT_RELEASE=, GIT_BRANCH=main, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.k8s.description=Red Hat Ceph Storage 7)
Feb 20 09:46:42 np0005625203.localdomain podman[301248]: 2026-02-20 09:46:42.998663208 +0000 UTC m=+0.131016936 container died 461254e8b6c550368d5e70de2d59eaada045794a8760a9be98fbaec0f4872d75 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_wu, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, name=rhceph, GIT_BRANCH=main, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, architecture=x86_64, io.buildah.version=1.42.2, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, version=7, org.opencontainers.image.created=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Feb 20 09:46:43 np0005625203.localdomain podman[301268]: 2026-02-20 09:46:43.090391946 +0000 UTC m=+0.082603807 container remove 461254e8b6c550368d5e70de2d59eaada045794a8760a9be98fbaec0f4872d75 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=affectionate_wu, name=rhceph, vendor=Red Hat, Inc., release=1770267347, CEPH_POINT_RELEASE=, version=7, io.buildah.version=1.42.2, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=rhceph ceph, vcs-type=git, architecture=x86_64, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, RELEASE=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_BRANCH=main, build-date=2026-02-09T10:25:24Z, ceph=True)
Feb 20 09:46:43 np0005625203.localdomain systemd[1]: libpod-conmon-461254e8b6c550368d5e70de2d59eaada045794a8760a9be98fbaec0f4872d75.scope: Deactivated successfully.
Feb 20 09:46:43 np0005625203.localdomain sudo[301213]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:43 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0)
Feb 20 09:46:43 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0)
Feb 20 09:46:43 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005625204 (monmap changed)...
Feb 20 09:46:43 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005625204 (monmap changed)...
Feb 20 09:46:43 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005625204", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Feb 20 09:46:43 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625204", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:46:43 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 09:46:43 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:46:43 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005625204 on np0005625204.localdomain
Feb 20 09:46:43 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005625204 on np0005625204.localdomain
Feb 20 09:46:43 np0005625203.localdomain systemd[1]: tmp-crun.SRgvWH.mount: Deactivated successfully.
Feb 20 09:46:43 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-c55df4e3032ea98549679d4c4074ca134b2f8be286e7158f4c8c63126c93b135-merged.mount: Deactivated successfully.
Feb 20 09:46:43 np0005625203.localdomain ceph-mon[296066]: Reconfiguring mgr.np0005625203.lonygy (monmap changed)...
Feb 20 09:46:43 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon mgr.np0005625203.lonygy on np0005625203.localdomain
Feb 20 09:46:43 np0005625203.localdomain ceph-mon[296066]: pgmap v46: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:46:43 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:43 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:43 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625204", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:46:43 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625204", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:46:43 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:46:43 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:46:44 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0)
Feb 20 09:46:44 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0)
Feb 20 09:46:44 np0005625203.localdomain ceph-mgr[285471]: log_channel(cluster) log [DBG] : pgmap v47: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:46:44 np0005625203.localdomain ceph-mon[296066]: Reconfiguring crash.np0005625204 (monmap changed)...
Feb 20 09:46:44 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon crash.np0005625204 on np0005625204.localdomain
Feb 20 09:46:44 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:44 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:45 np0005625203.localdomain ceph-mon[296066]: pgmap v47: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:46:45 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0)
Feb 20 09:46:45 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0)
Feb 20 09:46:46 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 09:46:46 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:46:46 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 20 09:46:46 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 09:46:46 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 20 09:46:46 np0005625203.localdomain ceph-mgr[285471]: [progress INFO root] update: starting ev 6b47ed69-2425-4668-9a8e-69493e840cb1 (Updating node-proxy deployment (+3 -> 3))
Feb 20 09:46:46 np0005625203.localdomain ceph-mgr[285471]: [progress INFO root] complete: finished ev 6b47ed69-2425-4668-9a8e-69493e840cb1 (Updating node-proxy deployment (+3 -> 3))
Feb 20 09:46:46 np0005625203.localdomain ceph-mgr[285471]: [progress INFO root] Completed event 6b47ed69-2425-4668-9a8e-69493e840cb1 (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Feb 20 09:46:46 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 20 09:46:46 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 09:46:46 np0005625203.localdomain sudo[301283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:46:46 np0005625203.localdomain sudo[301283]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:46 np0005625203.localdomain sudo[301283]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:46:46.363 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:46:46 np0005625203.localdomain ceph-mgr[285471]: mgr.server handle_open ignoring open from mon.np0005625204 172.18.0.108:0/1674332196; not ready for session (expect reconnect)
Feb 20 09:46:46 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e16 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625204"} v 0)
Feb 20 09:46:46 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625204"} : dispatch
Feb 20 09:46:46 np0005625203.localdomain ceph-mgr[285471]: mgr finish mon failed to return metadata for mon.np0005625204: (2) No such file or directory
Feb 20 09:46:46 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(probing) e17 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625202"} v 0)
Feb 20 09:46:46 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625202"} : dispatch
Feb 20 09:46:46 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(probing) e17 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625203"} v 0)
Feb 20 09:46:46 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:46:46 np0005625203.localdomain ceph-mon[296066]: log_channel(cluster) log [INF] : mon.np0005625203 calling monitor election
Feb 20 09:46:46 np0005625203.localdomain ceph-mon[296066]: paxos.1).electionLogic(64) init, last seen epoch 64
Feb 20 09:46:46 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(electing) e17 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 20 09:46:46 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(electing) e17 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625204"} v 0)
Feb 20 09:46:46 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625204"} : dispatch
Feb 20 09:46:46 np0005625203.localdomain ceph-mgr[285471]: mgr finish mon failed to return metadata for mon.np0005625204: (22) Invalid argument
Feb 20 09:46:46 np0005625203.localdomain ceph-mgr[285471]: log_channel(cluster) log [DBG] : pgmap v48: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:46:46 np0005625203.localdomain ceph-mgr[285471]: [volumes INFO mgr_util] scanning for idle connections..
Feb 20 09:46:46 np0005625203.localdomain ceph-mgr[285471]: [volumes INFO mgr_util] cleaning up connections: []
Feb 20 09:46:46 np0005625203.localdomain ceph-mgr[285471]: [volumes INFO mgr_util] scanning for idle connections..
Feb 20 09:46:46 np0005625203.localdomain ceph-mgr[285471]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f57cb76c370>)]
Feb 20 09:46:46 np0005625203.localdomain ceph-mgr[285471]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Feb 20 09:46:46 np0005625203.localdomain ceph-mgr[285471]: [volumes INFO mgr_util] scanning for idle connections..
Feb 20 09:46:46 np0005625203.localdomain ceph-mgr[285471]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7f57cb76c190>)]
Feb 20 09:46:46 np0005625203.localdomain ceph-mgr[285471]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Feb 20 09:46:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:46:47.342 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:46:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:46:47.360 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:46:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:46:47.361 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:46:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:46:47.361 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:46:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:46:47.361 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Auditing locally available compute resources for np0005625203.localdomain (node: np0005625203.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 20 09:46:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:46:47.362 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:46:47 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(electing) e17 handle_auth_request failed to assign global_id
Feb 20 09:46:47 np0005625203.localdomain ceph-mgr[285471]: mgr.server handle_open ignoring open from mon.np0005625204 172.18.0.108:0/1674332196; not ready for session (expect reconnect)
Feb 20 09:46:47 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(electing) e17 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625204"} v 0)
Feb 20 09:46:47 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625204"} : dispatch
Feb 20 09:46:47 np0005625203.localdomain ceph-mgr[285471]: mgr finish mon failed to return metadata for mon.np0005625204: (22) Invalid argument
Feb 20 09:46:47 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(electing) e17 handle_auth_request failed to assign global_id
Feb 20 09:46:47 np0005625203.localdomain ceph-mgr[285471]: [progress INFO root] Writing back 50 completed events
Feb 20 09:46:47 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(electing) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb 20 09:46:47 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(electing) e17 handle_auth_request failed to assign global_id
Feb 20 09:46:48 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(electing) e17 handle_auth_request failed to assign global_id
Feb 20 09:46:48 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(electing) e17 handle_auth_request failed to assign global_id
Feb 20 09:46:48 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(electing) e17 handle_auth_request failed to assign global_id
Feb 20 09:46:48 np0005625203.localdomain ceph-mgr[285471]: mgr.server handle_open ignoring open from mon.np0005625204 172.18.0.108:0/1674332196; not ready for session (expect reconnect)
Feb 20 09:46:48 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(electing) e17 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625204"} v 0)
Feb 20 09:46:48 np0005625203.localdomain ceph-mgr[285471]: mgr finish mon failed to return metadata for mon.np0005625204: (22) Invalid argument
Feb 20 09:46:48 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625204"} : dispatch
Feb 20 09:46:48 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:46:48 np0005625203.localdomain ceph-mgr[285471]: log_channel(cluster) log [DBG] : pgmap v49: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail; 255 B/s wr, 0 op/s
Feb 20 09:46:48 np0005625203.localdomain podman[301312]: 2026-02-20 09:46:48.786106677 +0000 UTC m=+0.095431745 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Feb 20 09:46:48 np0005625203.localdomain podman[301312]: 2026-02-20 09:46:48.824447303 +0000 UTC m=+0.133772401 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:46:48 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:46:48 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(electing) e17 handle_auth_request failed to assign global_id
Feb 20 09:46:49 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(electing) e17 handle_auth_request failed to assign global_id
Feb 20 09:46:49 np0005625203.localdomain ceph-mgr[285471]: mgr.server handle_open ignoring open from mon.np0005625204 172.18.0.108:0/1674332196; not ready for session (expect reconnect)
Feb 20 09:46:49 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(electing) e17 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625204"} v 0)
Feb 20 09:46:49 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625204"} : dispatch
Feb 20 09:46:49 np0005625203.localdomain ceph-mgr[285471]: mgr finish mon failed to return metadata for mon.np0005625204: (22) Invalid argument
Feb 20 09:46:50 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(electing) e17 handle_auth_request failed to assign global_id
Feb 20 09:46:50 np0005625203.localdomain ceph-mgr[285471]: mgr.server handle_open ignoring open from mon.np0005625204 172.18.0.108:0/1674332196; not ready for session (expect reconnect)
Feb 20 09:46:50 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(electing) e17 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625204"} v 0)
Feb 20 09:46:50 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625204"} : dispatch
Feb 20 09:46:50 np0005625203.localdomain ceph-mgr[285471]: mgr finish mon failed to return metadata for mon.np0005625204: (22) Invalid argument
Feb 20 09:46:50 np0005625203.localdomain ceph-mgr[285471]: log_channel(cluster) log [DBG] : pgmap v50: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail; 255 B/s wr, 0 op/s
Feb 20 09:46:50 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(electing) e17 handle_auth_request failed to assign global_id
Feb 20 09:46:50 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(electing) e17 handle_auth_request failed to assign global_id
Feb 20 09:46:51 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(electing) e17 handle_auth_request failed to assign global_id
Feb 20 09:46:51 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(electing) e17 handle_auth_request failed to assign global_id
Feb 20 09:46:51 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(electing) e17 handle_auth_request failed to assign global_id
Feb 20 09:46:51 np0005625203.localdomain ceph-mgr[285471]: mgr.server handle_open ignoring open from mon.np0005625204 172.18.0.108:0/1674332196; not ready for session (expect reconnect)
Feb 20 09:46:51 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(electing) e17 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625204"} v 0)
Feb 20 09:46:51 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625204"} : dispatch
Feb 20 09:46:51 np0005625203.localdomain ceph-mgr[285471]: mgr finish mon failed to return metadata for mon.np0005625204: (22) Invalid argument
Feb 20 09:46:51 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(electing) e17 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 20 09:46:51 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Feb 20 09:46:51 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625204"} : dispatch
Feb 20 09:46:51 np0005625203.localdomain ceph-mon[296066]: mon.np0005625202 calling monitor election
Feb 20 09:46:51 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625202"} : dispatch
Feb 20 09:46:51 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:46:51 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203 calling monitor election
Feb 20 09:46:51 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625204"} : dispatch
Feb 20 09:46:51 np0005625203.localdomain ceph-mon[296066]: pgmap v48: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:46:51 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625204"} : dispatch
Feb 20 09:46:51 np0005625203.localdomain ceph-mon[296066]: mon.np0005625204 calling monitor election
Feb 20 09:46:51 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625204"} : dispatch
Feb 20 09:46:51 np0005625203.localdomain ceph-mon[296066]: pgmap v49: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail; 255 B/s wr, 0 op/s
Feb 20 09:46:51 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625204"} : dispatch
Feb 20 09:46:51 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625204"} : dispatch
Feb 20 09:46:51 np0005625203.localdomain ceph-mon[296066]: pgmap v50: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail; 255 B/s wr, 0 op/s
Feb 20 09:46:51 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625204"} : dispatch
Feb 20 09:46:51 np0005625203.localdomain ceph-mon[296066]: mon.np0005625202 is new leader, mons np0005625202,np0005625203,np0005625204 in quorum (ranks 0,1,2)
Feb 20 09:46:51 np0005625203.localdomain ceph-mon[296066]: monmap epoch 17
Feb 20 09:46:51 np0005625203.localdomain ceph-mon[296066]: fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:46:51 np0005625203.localdomain ceph-mon[296066]: last_changed 2026-02-20T09:46:46.606881+0000
Feb 20 09:46:51 np0005625203.localdomain ceph-mon[296066]: created 2026-02-20T07:36:51.191305+0000
Feb 20 09:46:51 np0005625203.localdomain ceph-mon[296066]: min_mon_release 18 (reef)
Feb 20 09:46:51 np0005625203.localdomain ceph-mon[296066]: election_strategy: 1
Feb 20 09:46:51 np0005625203.localdomain ceph-mon[296066]: 0: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005625202
Feb 20 09:46:51 np0005625203.localdomain ceph-mon[296066]: 1: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005625203
Feb 20 09:46:51 np0005625203.localdomain ceph-mon[296066]: 2: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005625204
Feb 20 09:46:51 np0005625203.localdomain ceph-mon[296066]: fsmap cephfs:1 {0=mds.np0005625203.zsrwgk=up:active} 2 up:standby
Feb 20 09:46:51 np0005625203.localdomain ceph-mon[296066]: osdmap e89: 6 total, 6 up, 6 in
Feb 20 09:46:51 np0005625203.localdomain ceph-mon[296066]: mgrmap e36: np0005625203.lonygy(active, since 95s), standbys: np0005625204.exgrzx, np0005625201.mtnyvu, np0005625202.arwxwo
Feb 20 09:46:51 np0005625203.localdomain ceph-mon[296066]: overall HEALTH_OK
Feb 20 09:46:51 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:52 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Feb 20 09:46:52 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.200:0/1873611277' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Feb 20 09:46:52 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 20 09:46:52 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/198507552' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:46:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:46:52.238 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.877s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:46:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:46:52.451 279640 WARNING nova.virt.libvirt.driver [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 20 09:46:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:46:52.453 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Hypervisor/Node resource view: name=np0005625203.localdomain free_ram=12390MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 20 09:46:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:46:52.453 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:46:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:46:52.454 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:46:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:46:52.515 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 20 09:46:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:46:52.515 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Final resource view: name=np0005625203.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 20 09:46:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:46:52.538 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:46:52 np0005625203.localdomain ceph-mgr[285471]: mgr.server handle_open ignoring open from mon.np0005625204 172.18.0.108:0/1674332196; not ready for session (expect reconnect)
Feb 20 09:46:52 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon metadata", "id": "np0005625204"} v 0)
Feb 20 09:46:52 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625204"} : dispatch
Feb 20 09:46:52 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:46:52 np0005625203.localdomain ceph-mgr[285471]: log_channel(cluster) log [DBG] : pgmap v51: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail; 255 B/s wr, 0 op/s
Feb 20 09:46:52 np0005625203.localdomain ceph-mon[296066]: mgrmap e37: np0005625203.lonygy(active, since 95s), standbys: np0005625204.exgrzx, np0005625201.mtnyvu, np0005625202.arwxwo
Feb 20 09:46:52 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.200:0/1873611277' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Feb 20 09:46:52 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/2223140922' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:46:52 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/198507552' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:46:52 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "mon metadata", "id": "np0005625204"} : dispatch
Feb 20 09:46:52 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/542821844' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:46:52 np0005625203.localdomain podman[301341]: 2026-02-20 09:46:52.786840617 +0000 UTC m=+0.101601176 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller)
Feb 20 09:46:52 np0005625203.localdomain podman[301341]: 2026-02-20 09:46:52.854994506 +0000 UTC m=+0.169755075 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 20 09:46:52 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3691043963' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:46:53 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:46:53.033 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:46:53 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:46:53.039 279640 DEBUG nova.compute.provider_tree [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Inventory has not changed in ProviderTree for provider: e5d5157a-2df2-4f51-b5fb-cd2da3a8584e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 20 09:46:53 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:46:53.053 279640 DEBUG nova.scheduler.client.report [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Inventory has not changed for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 20 09:46:53 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:46:53.056 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Compute_service record updated for np0005625203.localdomain:np0005625203.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 20 09:46:53 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:46:53.056 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:46:53 np0005625203.localdomain ceph-mgr[285471]: log_channel(audit) log [DBG] : from='client.44535 -' entity='client.admin' cmd=[{"prefix": "orch", "action": "reconfig", "service_name": "osd.default_drive_group", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 09:46:53 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO root] Reconfig service osd.default_drive_group
Feb 20 09:46:53 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfig service osd.default_drive_group
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0)
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0)
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0)
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0)
Feb 20 09:46:53 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Updating np0005625202.localdomain:/etc/ceph/ceph.conf
Feb 20 09:46:53 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Updating np0005625202.localdomain:/etc/ceph/ceph.conf
Feb 20 09:46:53 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Updating np0005625203.localdomain:/etc/ceph/ceph.conf
Feb 20 09:46:53 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Updating np0005625204.localdomain:/etc/ceph/ceph.conf
Feb 20 09:46:53 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Updating np0005625203.localdomain:/etc/ceph/ceph.conf
Feb 20 09:46:53 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Updating np0005625204.localdomain:/etc/ceph/ceph.conf
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0)
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0)
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0)
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0)
Feb 20 09:46:53 np0005625203.localdomain sudo[301389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Feb 20 09:46:53 np0005625203.localdomain sudo[301389]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:53 np0005625203.localdomain sudo[301389]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0)
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0)
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0)
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0.
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:46:53.520783) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580813520833, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 819, "num_deletes": 251, "total_data_size": 1868838, "memory_usage": 1975032, "flush_reason": "Manual Compaction"}
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580813529592, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 1121562, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14312, "largest_seqno": 15126, "table_properties": {"data_size": 1117455, "index_size": 1770, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10667, "raw_average_key_size": 21, "raw_value_size": 1108686, "raw_average_value_size": 2239, "num_data_blocks": 75, "num_entries": 495, "num_filter_entries": 495, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580797, "oldest_key_time": 1771580797, "file_creation_time": 1771580813, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d7091244-ec7b-4a4c-98bf-27480b1bb7f4", "db_session_id": "XSHOJ401GNN3F43CMPC3", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 8871 microseconds, and 4564 cpu microseconds.
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:46:53.529651) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 1121562 bytes OK
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:46:53.529676) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:46:53.531576) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:46:53.531602) EVENT_LOG_v1 {"time_micros": 1771580813531596, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:46:53.531628) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 1864282, prev total WAL file size 1866154, number of live WAL files 2.
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:46:53.534753) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130373933' seq:72057594037927935, type:22 .. '7061786F73003131303435' seq:0, type:0; will stop at (end)
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(1095KB)], [21(18MB)]
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580813534806, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 20298343, "oldest_snapshot_seqno": -1}
Feb 20 09:46:53 np0005625203.localdomain sudo[301407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph
Feb 20 09:46:53 np0005625203.localdomain sudo[301407]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:53 np0005625203.localdomain sudo[301407]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0)
Feb 20 09:46:53 np0005625203.localdomain sudo[301425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:46:53 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:46:53.613+0000 7f57f2921640 -1 mgr.server handle_report got status from non-daemon mon.np0005625204
Feb 20 09:46:53 np0005625203.localdomain sudo[301425]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 11224 keys, 16369134 bytes, temperature: kUnknown
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580813614754, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 24, "file_size": 16369134, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16305457, "index_size": 34520, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28101, "raw_key_size": 302953, "raw_average_key_size": 26, "raw_value_size": 16114251, "raw_average_value_size": 1435, "num_data_blocks": 1304, "num_entries": 11224, "num_filter_entries": 11224, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580698, "oldest_key_time": 0, "file_creation_time": 1771580813, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d7091244-ec7b-4a4c-98bf-27480b1bb7f4", "db_session_id": "XSHOJ401GNN3F43CMPC3", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}}
Feb 20 09:46:53 np0005625203.localdomain sudo[301425]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 20 09:46:53 np0005625203.localdomain ceph-mgr[285471]: mgr.server handle_report got status from non-daemon mon.np0005625204
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:46:53.615136) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 16369134 bytes
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:46:53.617090) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 253.4 rd, 204.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 18.3 +0.0 blob) out(15.6 +0.0 blob), read-write-amplify(32.7) write-amplify(14.6) OK, records in: 11760, records dropped: 536 output_compression: NoCompression
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:46:53.617121) EVENT_LOG_v1 {"time_micros": 1771580813617106, "job": 10, "event": "compaction_finished", "compaction_time_micros": 80093, "compaction_time_cpu_micros": 50477, "output_level": 6, "num_output_files": 1, "total_output_size": 16369134, "num_input_records": 11760, "num_output_records": 11224, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:46:53.534646) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:46:53.617339) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:46:53.617345) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:46:53.617347) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:46:53.617349) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:46:53.617351) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580813617524, "job": 0, "event": "table_file_deletion", "file_number": 23}
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580813620321, "job": 0, "event": "table_file_deletion", "file_number": 21}
Feb 20 09:46:53 np0005625203.localdomain sudo[301443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:46:53 np0005625203.localdomain sudo[301443]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:53 np0005625203.localdomain sudo[301443]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:53 np0005625203.localdomain sudo[301461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:46:53 np0005625203.localdomain sudo[301461]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:53 np0005625203.localdomain sudo[301461]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: pgmap v51: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail; 255 B/s wr, 0 op/s
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/1604508930' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/3691043963' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/1416339821' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:53 np0005625203.localdomain sudo[301495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:46:53 np0005625203.localdomain sudo[301495]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:53 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:46:53 np0005625203.localdomain sudo[301495]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:53 np0005625203.localdomain sudo[301513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:46:53 np0005625203.localdomain sudo[301513]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:53 np0005625203.localdomain sudo[301513]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:54 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:46:54 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:46:54 np0005625203.localdomain sudo[301531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Feb 20 09:46:54 np0005625203.localdomain sudo[301531]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:54 np0005625203.localdomain sudo[301531]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:54 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:46:54 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:46:54 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:46:54 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:46:54 np0005625203.localdomain sudo[301549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config
Feb 20 09:46:54 np0005625203.localdomain sudo[301549]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:54 np0005625203.localdomain sudo[301549]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:54 np0005625203.localdomain sudo[301567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config
Feb 20 09:46:54 np0005625203.localdomain sudo[301567]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:54 np0005625203.localdomain sudo[301567]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:54 np0005625203.localdomain sudo[301585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:46:54 np0005625203.localdomain sudo[301585]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:54 np0005625203.localdomain sudo[301585]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:54 np0005625203.localdomain sudo[301603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:46:54 np0005625203.localdomain sudo[301603]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:54 np0005625203.localdomain sudo[301603]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:54 np0005625203.localdomain sudo[301621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:46:54 np0005625203.localdomain sudo[301621]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:54 np0005625203.localdomain sudo[301621]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:54 np0005625203.localdomain sudo[301655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:46:54 np0005625203.localdomain sudo[301655]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:54 np0005625203.localdomain sudo[301655]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:54 np0005625203.localdomain ceph-mgr[285471]: log_channel(cluster) log [DBG] : pgmap v52: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail; 255 B/s wr, 0 op/s
Feb 20 09:46:54 np0005625203.localdomain sudo[301673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:46:54 np0005625203.localdomain sudo[301673]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:54 np0005625203.localdomain sudo[301673]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:54 np0005625203.localdomain sudo[301691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:46:54 np0005625203.localdomain sudo[301691]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:54 np0005625203.localdomain sudo[301691]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:54 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain.devices.0}] v 0)
Feb 20 09:46:54 np0005625203.localdomain ceph-mon[296066]: from='client.44535 -' entity='client.admin' cmd=[{"prefix": "orch", "action": "reconfig", "service_name": "osd.default_drive_group", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 09:46:54 np0005625203.localdomain ceph-mon[296066]: Reconfig service osd.default_drive_group
Feb 20 09:46:54 np0005625203.localdomain ceph-mon[296066]: Updating np0005625202.localdomain:/etc/ceph/ceph.conf
Feb 20 09:46:54 np0005625203.localdomain ceph-mon[296066]: Updating np0005625203.localdomain:/etc/ceph/ceph.conf
Feb 20 09:46:54 np0005625203.localdomain ceph-mon[296066]: Updating np0005625204.localdomain:/etc/ceph/ceph.conf
Feb 20 09:46:54 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0)
Feb 20 09:46:54 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625203.localdomain}] v 0)
Feb 20 09:46:54 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0)
Feb 20 09:46:54 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain.devices.0}] v 0)
Feb 20 09:46:54 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625204.localdomain}] v 0)
Feb 20 09:46:54 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Feb 20 09:46:54 np0005625203.localdomain ceph-mgr[285471]: [progress INFO root] update: starting ev 8366d6fc-0203-44cd-994c-ea5150abd01c (Updating node-proxy deployment (+3 -> 3))
Feb 20 09:46:54 np0005625203.localdomain ceph-mgr[285471]: [progress INFO root] complete: finished ev 8366d6fc-0203-44cd-994c-ea5150abd01c (Updating node-proxy deployment (+3 -> 3))
Feb 20 09:46:54 np0005625203.localdomain ceph-mgr[285471]: [progress INFO root] Completed event 8366d6fc-0203-44cd-994c-ea5150abd01c (Updating node-proxy deployment (+3 -> 3)) in 0 seconds
Feb 20 09:46:54 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Feb 20 09:46:54 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 09:46:55 np0005625203.localdomain sudo[301709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:46:55 np0005625203.localdomain sudo[301709]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:55 np0005625203.localdomain sudo[301709]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:55 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring crash.np0005625202 (monmap changed)...
Feb 20 09:46:55 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring crash.np0005625202 (monmap changed)...
Feb 20 09:46:55 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.np0005625202", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Feb 20 09:46:55 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625202", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:46:55 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 09:46:55 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:46:55 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.np0005625202 on np0005625202.localdomain
Feb 20 09:46:55 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.np0005625202 on np0005625202.localdomain
Feb 20 09:46:55 np0005625203.localdomain ceph-mon[296066]: Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:46:55 np0005625203.localdomain ceph-mon[296066]: Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:46:55 np0005625203.localdomain ceph-mon[296066]: Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:46:55 np0005625203.localdomain ceph-mon[296066]: pgmap v52: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail; 255 B/s wr, 0 op/s
Feb 20 09:46:55 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:55 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:55 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:55 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:55 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:55 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:55 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:55 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 09:46:55 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625202", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:46:55 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625202", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:46:55 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:46:56 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0)
Feb 20 09:46:56 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0)
Feb 20 09:46:56 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring osd.2 (monmap changed)...
Feb 20 09:46:56 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring osd.2 (monmap changed)...
Feb 20 09:46:56 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Feb 20 09:46:56 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Feb 20 09:46:56 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 09:46:56 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:46:56 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.2 on np0005625202.localdomain
Feb 20 09:46:56 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.2 on np0005625202.localdomain
Feb 20 09:46:56 np0005625203.localdomain ceph-mgr[285471]: log_channel(cluster) log [DBG] : pgmap v53: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail; 255 B/s wr, 0 op/s
Feb 20 09:46:56 np0005625203.localdomain ceph-mgr[285471]: [progress INFO root] Writing back 50 completed events
Feb 20 09:46:56 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Feb 20 09:46:56 np0005625203.localdomain ceph-mon[296066]: Reconfiguring crash.np0005625202 (monmap changed)...
Feb 20 09:46:56 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon crash.np0005625202 on np0005625202.localdomain
Feb 20 09:46:56 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:56 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:56 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Feb 20 09:46:56 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:46:56 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:56 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0)
Feb 20 09:46:57 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0)
Feb 20 09:46:57 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0)
Feb 20 09:46:57 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0)
Feb 20 09:46:57 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring osd.5 (monmap changed)...
Feb 20 09:46:57 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring osd.5 (monmap changed)...
Feb 20 09:46:57 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "auth get", "entity": "osd.5"} v 0)
Feb 20 09:46:57 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Feb 20 09:46:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:46:57.057 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:46:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:46:57.057 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:46:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:46:57.058 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 20 09:46:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:46:57.058 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 20 09:46:57 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 09:46:57 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:46:57 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.5 on np0005625202.localdomain
Feb 20 09:46:57 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.5 on np0005625202.localdomain
Feb 20 09:46:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:46:57.081 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 20 09:46:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:46:57.082 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:46:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:46:57.082 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:46:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:46:57.083 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:46:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:46:57.083 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:46:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:46:57.083 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:46:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:46:57.084 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 20 09:46:57 np0005625203.localdomain ceph-mon[296066]: Reconfiguring osd.2 (monmap changed)...
Feb 20 09:46:57 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon osd.2 on np0005625202.localdomain
Feb 20 09:46:57 np0005625203.localdomain ceph-mon[296066]: pgmap v53: 177 pgs: 177 active+clean; 104 MiB data, 587 MiB used, 41 GiB / 42 GiB avail; 255 B/s wr, 0 op/s
Feb 20 09:46:57 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:57 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:57 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:57 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:57 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get", "entity": "osd.5"} : dispatch
Feb 20 09:46:57 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:46:58 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0)
Feb 20 09:46:58 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0)
Feb 20 09:46:58 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain.devices.0}] v 0)
Feb 20 09:46:58 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.np0005625202.localdomain}] v 0)
Feb 20 09:46:58 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring mds.mds.np0005625202.akhmop (monmap changed)...
Feb 20 09:46:58 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring mds.mds.np0005625202.akhmop (monmap changed)...
Feb 20 09:46:58 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.mds.np0005625202.akhmop", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Feb 20 09:46:58 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [INF] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625202.akhmop", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 09:46:58 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 09:46:58 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:46:58 np0005625203.localdomain ceph-mgr[285471]: [cephadm INFO cephadm.serve] Reconfiguring daemon mds.mds.np0005625202.akhmop on np0005625202.localdomain
Feb 20 09:46:58 np0005625203.localdomain ceph-mgr[285471]: log_channel(cephadm) log [INF] : Reconfiguring daemon mds.mds.np0005625202.akhmop on np0005625202.localdomain
Feb 20 09:46:58 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e90 e90: 6 total, 6 up, 6 in
Feb 20 09:46:58 np0005625203.localdomain ceph-mgr[285471]: mgr handle_mgr_map I was active but no longer am
Feb 20 09:46:58 np0005625203.localdomain ceph-mgr[285471]: mgr respawn  e: '/usr/bin/ceph-mgr'
Feb 20 09:46:58 np0005625203.localdomain ceph-mgr[285471]: mgr respawn  0: '/usr/bin/ceph-mgr'
Feb 20 09:46:58 np0005625203.localdomain ceph-mgr[285471]: mgr respawn  1: '-n'
Feb 20 09:46:58 np0005625203.localdomain ceph-mgr[285471]: mgr respawn  2: 'mgr.np0005625203.lonygy'
Feb 20 09:46:58 np0005625203.localdomain ceph-mgr[285471]: mgr respawn  3: '-f'
Feb 20 09:46:58 np0005625203.localdomain ceph-mgr[285471]: mgr respawn  4: '--setuser'
Feb 20 09:46:58 np0005625203.localdomain ceph-mgr[285471]: mgr respawn  5: 'ceph'
Feb 20 09:46:58 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:46:58.253+0000 7f584eb32640 -1 mgr handle_mgr_map I was active but no longer am
Feb 20 09:46:58 np0005625203.localdomain ceph-mgr[285471]: mgr respawn  6: '--setgroup'
Feb 20 09:46:58 np0005625203.localdomain ceph-mgr[285471]: mgr respawn  7: 'ceph'
Feb 20 09:46:58 np0005625203.localdomain ceph-mgr[285471]: mgr respawn  8: '--default-log-to-file=false'
Feb 20 09:46:58 np0005625203.localdomain ceph-mgr[285471]: mgr respawn  9: '--default-log-to-journald=true'
Feb 20 09:46:58 np0005625203.localdomain ceph-mgr[285471]: mgr respawn  10: '--default-log-to-stderr=false'
Feb 20 09:46:58 np0005625203.localdomain ceph-mgr[285471]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Feb 20 09:46:58 np0005625203.localdomain ceph-mgr[285471]: mgr respawn  exe_path /proc/self/exe
Feb 20 09:46:58 np0005625203.localdomain sshd[297250]: pam_unix(sshd:session): session closed for user ceph-admin
Feb 20 09:46:58 np0005625203.localdomain systemd[1]: session-69.scope: Deactivated successfully.
Feb 20 09:46:58 np0005625203.localdomain systemd[1]: session-69.scope: Consumed 23.456s CPU time.
Feb 20 09:46:58 np0005625203.localdomain systemd-logind[759]: Session 69 logged out. Waiting for processes to exit.
Feb 20 09:46:58 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:46:58 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:46:58 np0005625203.localdomain systemd-logind[759]: Removed session 69.
Feb 20 09:46:58 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: ignoring --setuser ceph since I am not root
Feb 20 09:46:58 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: ignoring --setgroup ceph since I am not root
Feb 20 09:46:58 np0005625203.localdomain ceph-mgr[285471]: ceph version 18.2.1-381.el9cp (984f410e2a30899deb131725765b62212b1621db) reef (stable), process ceph-mgr, pid 2
Feb 20 09:46:58 np0005625203.localdomain ceph-mgr[285471]: pidfile_write: ignore empty --pid-file
Feb 20 09:46:58 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'alerts'
Feb 20 09:46:58 np0005625203.localdomain podman[301727]: 2026-02-20 09:46:58.442437838 +0000 UTC m=+0.089109307 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_compute, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible)
Feb 20 09:46:58 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:46:58.477+0000 7f432c787140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Feb 20 09:46:58 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Feb 20 09:46:58 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'balancer'
Feb 20 09:46:58 np0005625203.localdomain podman[301727]: 2026-02-20 09:46:58.482309093 +0000 UTC m=+0.128980532 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute)
Feb 20 09:46:58 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:46:58 np0005625203.localdomain podman[301729]: 2026-02-20 09:46:58.495950155 +0000 UTC m=+0.138709904 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, build-date=2026-02-05T04:57:10Z, maintainer=Red Hat, Inc., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, io.openshift.expose-services=, vcs-type=git, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, release=1770267347, architecture=x86_64, 
io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., name=ubi9/ubi-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.7)
Feb 20 09:46:58 np0005625203.localdomain podman[301729]: 2026-02-20 09:46:58.536280113 +0000 UTC m=+0.179039832 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, release=1770267347, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, architecture=x86_64, build-date=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, config_id=openstack_network_exporter, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, name=ubi9/ubi-minimal, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, distribution-scope=public, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 
'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z)
Feb 20 09:46:58 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:46:58 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Feb 20 09:46:58 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:46:58.554+0000 7f432c787140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Feb 20 09:46:58 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'cephadm'
Feb 20 09:46:58 np0005625203.localdomain sshd[301790]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:46:58 np0005625203.localdomain sshd[301790]: Accepted publickey for ceph-admin from 192.168.122.108 port 58400 ssh2: RSA SHA256:ReOVVWzTOjcz49zJan4sHrxgFDyL1hzGQqhA1t8e47Y
Feb 20 09:46:58 np0005625203.localdomain systemd-logind[759]: New session 70 of user ceph-admin.
Feb 20 09:46:58 np0005625203.localdomain systemd[1]: Started Session 70 of User ceph-admin.
Feb 20 09:46:58 np0005625203.localdomain sshd[301790]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Feb 20 09:46:58 np0005625203.localdomain ceph-mon[296066]: Reconfiguring osd.5 (monmap changed)...
Feb 20 09:46:58 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon osd.5 on np0005625202.localdomain
Feb 20 09:46:58 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:58 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:58 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:58 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' 
Feb 20 09:46:58 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625202.akhmop", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 09:46:58 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 ' entity='mgr.np0005625203.lonygy' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625202.akhmop", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 09:46:58 np0005625203.localdomain ceph-mon[296066]: from='mgr.26986 172.18.0.107:0/211199661' entity='mgr.np0005625203.lonygy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:46:58 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.200:0/1025406798' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Feb 20 09:46:58 np0005625203.localdomain ceph-mon[296066]: Activating manager daemon np0005625204.exgrzx
Feb 20 09:46:58 np0005625203.localdomain ceph-mon[296066]: osdmap e90: 6 total, 6 up, 6 in
Feb 20 09:46:58 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.200:0/1025406798' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished
Feb 20 09:46:58 np0005625203.localdomain ceph-mon[296066]: mgrmap e38: np0005625204.exgrzx(active, starting, since 0.0312476s), standbys: np0005625201.mtnyvu, np0005625202.arwxwo
Feb 20 09:46:58 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "mon metadata", "id": "np0005625202"} : dispatch
Feb 20 09:46:58 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:46:58 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "mon metadata", "id": "np0005625204"} : dispatch
Feb 20 09:46:58 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "mds metadata", "who": "mds.np0005625202.akhmop"} : dispatch
Feb 20 09:46:58 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "mds metadata", "who": "mds.np0005625204.wnsphl"} : dispatch
Feb 20 09:46:58 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "mds metadata", "who": "mds.np0005625203.zsrwgk"} : dispatch
Feb 20 09:46:58 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "mgr metadata", "who": "np0005625204.exgrzx", "id": "np0005625204.exgrzx"} : dispatch
Feb 20 09:46:58 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "mgr metadata", "who": "np0005625201.mtnyvu", "id": "np0005625201.mtnyvu"} : dispatch
Feb 20 09:46:58 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "mgr metadata", "who": "np0005625202.arwxwo", "id": "np0005625202.arwxwo"} : dispatch
Feb 20 09:46:58 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 20 09:46:58 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 20 09:46:58 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 20 09:46:58 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "osd metadata", "id": 3} : dispatch
Feb 20 09:46:58 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "osd metadata", "id": 4} : dispatch
Feb 20 09:46:58 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "osd metadata", "id": 5} : dispatch
Feb 20 09:46:58 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "mds metadata"} : dispatch
Feb 20 09:46:58 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "osd metadata"} : dispatch
Feb 20 09:46:58 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "mon metadata"} : dispatch
Feb 20 09:46:58 np0005625203.localdomain ceph-mon[296066]: Manager daemon np0005625204.exgrzx is now available
Feb 20 09:46:58 np0005625203.localdomain ceph-mon[296066]: removing stray HostCache host record np0005625201.localdomain.devices.0
Feb 20 09:46:58 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005625201.localdomain.devices.0"} : dispatch
Feb 20 09:46:58 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005625201.localdomain.devices.0"} : dispatch
Feb 20 09:46:58 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005625201.localdomain.devices.0"}]': finished
Feb 20 09:46:58 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005625201.localdomain.devices.0"} : dispatch
Feb 20 09:46:58 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' cmd={"prefix":"config-key del","key":"mgr/cephadm/host.np0005625201.localdomain.devices.0"} : dispatch
Feb 20 09:46:58 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' cmd='[{"prefix":"config-key del","key":"mgr/cephadm/host.np0005625201.localdomain.devices.0"}]': finished
Feb 20 09:46:58 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005625204.exgrzx/mirror_snapshot_schedule"} : dispatch
Feb 20 09:46:58 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005625204.exgrzx/mirror_snapshot_schedule"} : dispatch
Feb 20 09:46:58 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005625204.exgrzx/trash_purge_schedule"} : dispatch
Feb 20 09:46:58 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005625204.exgrzx/trash_purge_schedule"} : dispatch
Feb 20 09:46:58 np0005625203.localdomain sudo[301794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:46:58 np0005625203.localdomain sudo[301794]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:58 np0005625203.localdomain sudo[301794]: pam_unix(sudo:session): session closed for user root
Feb 20 09:46:58 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:46:58 np0005625203.localdomain sudo[301812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Feb 20 09:46:58 np0005625203.localdomain sudo[301812]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:46:58 np0005625203.localdomain podman[240359]: time="2026-02-20T09:46:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:46:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:46:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154068 "" "Go-http-client/1.1"
Feb 20 09:46:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:46:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17786 "" "Go-http-client/1.1"
Feb 20 09:46:59 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'crash'
Feb 20 09:46:59 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Module crash has missing NOTIFY_TYPES member
Feb 20 09:46:59 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'dashboard'
Feb 20 09:46:59 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:46:59.304+0000 7f432c787140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Feb 20 09:46:59 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'devicehealth'
Feb 20 09:46:59 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Feb 20 09:46:59 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:46:59.838+0000 7f432c787140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Feb 20 09:46:59 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'diskprediction_local'
Feb 20 09:46:59 np0005625203.localdomain podman[301907]: 2026-02-20 09:46:59.856078536 +0000 UTC m=+0.106882588 container exec f6bf94ad66e54cdd610286b233c639418eb2b995978f1a145d6cfa77a016071d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203, io.openshift.tags=rhceph ceph, version=7, com.redhat.component=rhceph-container, ceph=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-type=git, GIT_BRANCH=main, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, distribution-scope=public, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, RELEASE=main, release=1770267347, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, name=rhceph)
Feb 20 09:46:59 np0005625203.localdomain sshd[301925]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:46:59 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Feb 20 09:46:59 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Feb 20 09:46:59 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]:   from numpy import show_config as show_numpy_config
Feb 20 09:46:59 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Feb 20 09:46:59 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:46:59.973+0000 7f432c787140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Feb 20 09:46:59 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'influx'
Feb 20 09:46:59 np0005625203.localdomain podman[301907]: 2026-02-20 09:46:59.982484868 +0000 UTC m=+0.233288970 container exec_died f6bf94ad66e54cdd610286b233c639418eb2b995978f1a145d6cfa77a016071d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=rhceph-container, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, ceph=True, GIT_BRANCH=main, release=1770267347, io.k8s.description=Red Hat Ceph Storage 7, build-date=2026-02-09T10:25:24Z, vcs-type=git, GIT_CLEAN=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.42.2, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, architecture=x86_64, version=7, RELEASE=main, CEPH_POINT_RELEASE=)
Feb 20 09:47:00 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Module influx has missing NOTIFY_TYPES member
Feb 20 09:47:00 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:47:00.032+0000 7f432c787140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Feb 20 09:47:00 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'insights'
Feb 20 09:47:00 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'iostat'
Feb 20 09:47:00 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Feb 20 09:47:00 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:47:00.144+0000 7f432c787140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Feb 20 09:47:00 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'k8sevents'
Feb 20 09:47:00 np0005625203.localdomain ceph-mon[296066]: mgrmap e39: np0005625204.exgrzx(active, since 1.04393s), standbys: np0005625201.mtnyvu, np0005625202.arwxwo
Feb 20 09:47:00 np0005625203.localdomain ceph-mon[296066]: pgmap v3: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:47:00 np0005625203.localdomain sshd[301925]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:47:00 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'localpool'
Feb 20 09:47:00 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'mds_autoscaler'
Feb 20 09:47:00 np0005625203.localdomain sudo[301812]: pam_unix(sudo:session): session closed for user root
Feb 20 09:47:00 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'mirroring'
Feb 20 09:47:00 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'nfs'
Feb 20 09:47:00 np0005625203.localdomain sudo[302030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:47:00 np0005625203.localdomain sudo[302030]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:47:00 np0005625203.localdomain sudo[302030]: pam_unix(sudo:session): session closed for user root
Feb 20 09:47:00 np0005625203.localdomain sudo[302048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 09:47:00 np0005625203.localdomain sudo[302048]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:47:00 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:47:00.868+0000 7f432c787140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Feb 20 09:47:00 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Feb 20 09:47:00 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'orchestrator'
Feb 20 09:47:01 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Feb 20 09:47:01 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:47:01.012+0000 7f432c787140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Feb 20 09:47:01 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'osd_perf_query'
Feb 20 09:47:01 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Feb 20 09:47:01 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:47:01.074+0000 7f432c787140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Feb 20 09:47:01 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'osd_support'
Feb 20 09:47:01 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Feb 20 09:47:01 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:47:01.129+0000 7f432c787140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Feb 20 09:47:01 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'pg_autoscaler'
Feb 20 09:47:01 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Feb 20 09:47:01 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:47:01.194+0000 7f432c787140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Feb 20 09:47:01 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'progress'
Feb 20 09:47:01 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Module progress has missing NOTIFY_TYPES member
Feb 20 09:47:01 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:47:01.252+0000 7f432c787140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Feb 20 09:47:01 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'prometheus'
Feb 20 09:47:01 np0005625203.localdomain ceph-mon[296066]: [20/Feb/2026:09:46:59] ENGINE Bus STARTING
Feb 20 09:47:01 np0005625203.localdomain ceph-mon[296066]: [20/Feb/2026:09:46:59] ENGINE Serving on http://172.18.0.108:8765
Feb 20 09:47:01 np0005625203.localdomain ceph-mon[296066]: [20/Feb/2026:09:46:59] ENGINE Serving on https://172.18.0.108:7150
Feb 20 09:47:01 np0005625203.localdomain ceph-mon[296066]: [20/Feb/2026:09:46:59] ENGINE Bus STARTED
Feb 20 09:47:01 np0005625203.localdomain ceph-mon[296066]: [20/Feb/2026:09:46:59] ENGINE Client ('172.18.0.108', 51320) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Feb 20 09:47:01 np0005625203.localdomain ceph-mon[296066]: pgmap v4: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:47:01 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:01 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:01 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:01 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:01 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:01 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:01 np0005625203.localdomain sudo[302048]: pam_unix(sudo:session): session closed for user root
Feb 20 09:47:01 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Feb 20 09:47:01 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'rbd_support'
Feb 20 09:47:01 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:47:01.548+0000 7f432c787140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Feb 20 09:47:01 np0005625203.localdomain sudo[302099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:47:01 np0005625203.localdomain sudo[302099]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:47:01 np0005625203.localdomain sudo[302099]: pam_unix(sudo:session): session closed for user root
Feb 20 09:47:01 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Feb 20 09:47:01 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:47:01.629+0000 7f432c787140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Feb 20 09:47:01 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'restful'
Feb 20 09:47:01 np0005625203.localdomain sudo[302117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 list-networks
Feb 20 09:47:01 np0005625203.localdomain sudo[302117]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:47:01 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'rgw'
Feb 20 09:47:01 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Feb 20 09:47:01 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:47:01.943+0000 7f432c787140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Feb 20 09:47:01 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'rook'
Feb 20 09:47:02 np0005625203.localdomain sudo[302117]: pam_unix(sudo:session): session closed for user root
Feb 20 09:47:02 np0005625203.localdomain sudo[302154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Feb 20 09:47:02 np0005625203.localdomain sudo[302154]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:47:02 np0005625203.localdomain sudo[302154]: pam_unix(sudo:session): session closed for user root
Feb 20 09:47:02 np0005625203.localdomain sudo[302172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph
Feb 20 09:47:02 np0005625203.localdomain sudo[302172]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:47:02 np0005625203.localdomain sudo[302172]: pam_unix(sudo:session): session closed for user root
Feb 20 09:47:02 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Module rook has missing NOTIFY_TYPES member
Feb 20 09:47:02 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:47:02.341+0000 7f432c787140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Feb 20 09:47:02 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'selftest'
Feb 20 09:47:02 np0005625203.localdomain sudo[302190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:47:02 np0005625203.localdomain sudo[302190]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:47:02 np0005625203.localdomain sudo[302190]: pam_unix(sudo:session): session closed for user root
Feb 20 09:47:02 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Feb 20 09:47:02 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:47:02.401+0000 7f432c787140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Feb 20 09:47:02 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'snap_schedule'
Feb 20 09:47:02 np0005625203.localdomain sudo[302208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:47:02 np0005625203.localdomain sudo[302208]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:47:02 np0005625203.localdomain sudo[302208]: pam_unix(sudo:session): session closed for user root
Feb 20 09:47:02 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'stats'
Feb 20 09:47:02 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'status'
Feb 20 09:47:02 np0005625203.localdomain sudo[302226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:47:02 np0005625203.localdomain sudo[302226]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:47:02 np0005625203.localdomain sudo[302226]: pam_unix(sudo:session): session closed for user root
Feb 20 09:47:02 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Module status has missing NOTIFY_TYPES member
Feb 20 09:47:02 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:47:02.595+0000 7f432c787140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Feb 20 09:47:02 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'telegraf'
Feb 20 09:47:02 np0005625203.localdomain ceph-mon[296066]: mgrmap e40: np0005625204.exgrzx(active, since 3s), standbys: np0005625201.mtnyvu, np0005625202.arwxwo
Feb 20 09:47:02 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:02 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:02 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Feb 20 09:47:02 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:02 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/3721573066' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:47:02 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Feb 20 09:47:02 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/3721573066' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:47:02 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Feb 20 09:47:02 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:02 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Feb 20 09:47:02 np0005625203.localdomain ceph-mon[296066]: Adjusting osd_memory_target on np0005625202.localdomain to 836.6M
Feb 20 09:47:02 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Feb 20 09:47:02 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Feb 20 09:47:02 np0005625203.localdomain ceph-mon[296066]: Unable to set osd_memory_target on np0005625202.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Feb 20 09:47:02 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Feb 20 09:47:02 np0005625203.localdomain ceph-mon[296066]: Adjusting osd_memory_target on np0005625203.localdomain to 836.6M
Feb 20 09:47:02 np0005625203.localdomain ceph-mon[296066]: Unable to set osd_memory_target on np0005625203.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Feb 20 09:47:02 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Feb 20 09:47:02 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:02 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Feb 20 09:47:02 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Feb 20 09:47:02 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:02 np0005625203.localdomain ceph-mon[296066]: Adjusting osd_memory_target on np0005625204.localdomain to 836.6M
Feb 20 09:47:02 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Feb 20 09:47:02 np0005625203.localdomain ceph-mon[296066]: Unable to set osd_memory_target on np0005625204.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Feb 20 09:47:02 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:47:02 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Feb 20 09:47:02 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 09:47:02 np0005625203.localdomain ceph-mon[296066]: Updating np0005625202.localdomain:/etc/ceph/ceph.conf
Feb 20 09:47:02 np0005625203.localdomain ceph-mon[296066]: Updating np0005625203.localdomain:/etc/ceph/ceph.conf
Feb 20 09:47:02 np0005625203.localdomain ceph-mon[296066]: Updating np0005625204.localdomain:/etc/ceph/ceph.conf
Feb 20 09:47:02 np0005625203.localdomain ceph-mon[296066]: pgmap v5: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:47:02 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Feb 20 09:47:02 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:47:02.653+0000 7f432c787140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Feb 20 09:47:02 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'telemetry'
Feb 20 09:47:02 np0005625203.localdomain sudo[302260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:47:02 np0005625203.localdomain sudo[302260]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:47:02 np0005625203.localdomain sudo[302260]: pam_unix(sudo:session): session closed for user root
Feb 20 09:47:02 np0005625203.localdomain sudo[302278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:47:02 np0005625203.localdomain sudo[302278]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:47:02 np0005625203.localdomain sudo[302278]: pam_unix(sudo:session): session closed for user root
Feb 20 09:47:02 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Feb 20 09:47:02 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:47:02.781+0000 7f432c787140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Feb 20 09:47:02 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'test_orchestrator'
Feb 20 09:47:02 np0005625203.localdomain sudo[302296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Feb 20 09:47:02 np0005625203.localdomain sudo[302296]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:47:02 np0005625203.localdomain sudo[302296]: pam_unix(sudo:session): session closed for user root
Feb 20 09:47:02 np0005625203.localdomain sudo[302314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config
Feb 20 09:47:02 np0005625203.localdomain sudo[302314]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:47:02 np0005625203.localdomain sudo[302314]: pam_unix(sudo:session): session closed for user root
Feb 20 09:47:02 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Feb 20 09:47:02 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:47:02.925+0000 7f432c787140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Feb 20 09:47:02 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'volumes'
Feb 20 09:47:02 np0005625203.localdomain sudo[302332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config
Feb 20 09:47:03 np0005625203.localdomain sudo[302332]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:47:03 np0005625203.localdomain sudo[302332]: pam_unix(sudo:session): session closed for user root
Feb 20 09:47:03 np0005625203.localdomain sudo[302350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:47:03 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Feb 20 09:47:03 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:47:03.119+0000 7f432c787140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Feb 20 09:47:03 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Loading python module 'zabbix'
Feb 20 09:47:03 np0005625203.localdomain ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-mgr-np0005625203-lonygy[285467]: 2026-02-20T09:47:03.176+0000 7f432c787140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Feb 20 09:47:03 np0005625203.localdomain ceph-mgr[285471]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Feb 20 09:47:03 np0005625203.localdomain ceph-mgr[285471]: ms_deliver_dispatch: unhandled message 0x55b04926f600 mon_map magic: 0 from mon.1 v2:172.18.0.104:3300/0
Feb 20 09:47:03 np0005625203.localdomain ceph-mgr[285471]: client.0 ms_handle_reset on v2:172.18.0.108:6810/689946273
Feb 20 09:47:03 np0005625203.localdomain sudo[302350]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:47:03 np0005625203.localdomain sudo[302350]: pam_unix(sudo:session): session closed for user root
Feb 20 09:47:03 np0005625203.localdomain sudo[302368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:47:03 np0005625203.localdomain sudo[302368]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:47:03 np0005625203.localdomain sudo[302368]: pam_unix(sudo:session): session closed for user root
Feb 20 09:47:03 np0005625203.localdomain sudo[302386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:47:03 np0005625203.localdomain sudo[302386]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:47:03 np0005625203.localdomain sudo[302386]: pam_unix(sudo:session): session closed for user root
Feb 20 09:47:03 np0005625203.localdomain sudo[302420]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:47:03 np0005625203.localdomain sudo[302420]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:47:03 np0005625203.localdomain sudo[302420]: pam_unix(sudo:session): session closed for user root
Feb 20 09:47:03 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:47:03 np0005625203.localdomain sudo[302438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:47:03 np0005625203.localdomain sudo[302438]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:47:03 np0005625203.localdomain sudo[302438]: pam_unix(sudo:session): session closed for user root
Feb 20 09:47:04 np0005625203.localdomain sshd[302472]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:47:04 np0005625203.localdomain sudo[302456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:47:04 np0005625203.localdomain sudo[302456]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:47:04 np0005625203.localdomain sudo[302456]: pam_unix(sudo:session): session closed for user root
Feb 20 09:47:04 np0005625203.localdomain sudo[302476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Feb 20 09:47:04 np0005625203.localdomain sudo[302476]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:47:04 np0005625203.localdomain sudo[302476]: pam_unix(sudo:session): session closed for user root
Feb 20 09:47:04 np0005625203.localdomain ceph-mgr[285471]: client.0 ms_handle_reset on v2:172.18.0.108:6810/689946273
Feb 20 09:47:04 np0005625203.localdomain sudo[302494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph
Feb 20 09:47:04 np0005625203.localdomain sudo[302494]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:47:04 np0005625203.localdomain sudo[302494]: pam_unix(sudo:session): session closed for user root
Feb 20 09:47:04 np0005625203.localdomain sudo[302512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.client.admin.keyring.new
Feb 20 09:47:04 np0005625203.localdomain sudo[302512]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:47:04 np0005625203.localdomain sudo[302512]: pam_unix(sudo:session): session closed for user root
Feb 20 09:47:04 np0005625203.localdomain sudo[302530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:47:04 np0005625203.localdomain sudo[302530]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:47:04 np0005625203.localdomain sudo[302530]: pam_unix(sudo:session): session closed for user root
Feb 20 09:47:04 np0005625203.localdomain sudo[302548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.client.admin.keyring.new
Feb 20 09:47:04 np0005625203.localdomain sudo[302548]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:47:04 np0005625203.localdomain sudo[302548]: pam_unix(sudo:session): session closed for user root
Feb 20 09:47:04 np0005625203.localdomain sudo[302582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.client.admin.keyring.new
Feb 20 09:47:04 np0005625203.localdomain sudo[302582]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:47:04 np0005625203.localdomain sudo[302582]: pam_unix(sudo:session): session closed for user root
Feb 20 09:47:04 np0005625203.localdomain sudo[302600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.client.admin.keyring.new
Feb 20 09:47:04 np0005625203.localdomain sudo[302600]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:47:04 np0005625203.localdomain sudo[302600]: pam_unix(sudo:session): session closed for user root
Feb 20 09:47:04 np0005625203.localdomain ceph-mon[296066]: Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:47:04 np0005625203.localdomain ceph-mon[296066]: Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:47:04 np0005625203.localdomain ceph-mon[296066]: Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:47:04 np0005625203.localdomain ceph-mon[296066]: Updating np0005625202.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 09:47:04 np0005625203.localdomain ceph-mon[296066]: Updating np0005625204.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 09:47:04 np0005625203.localdomain ceph-mon[296066]: Standby manager daemon np0005625203.lonygy started
Feb 20 09:47:04 np0005625203.localdomain ceph-mon[296066]: Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 09:47:04 np0005625203.localdomain ceph-mon[296066]: Updating np0005625203.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 09:47:04 np0005625203.localdomain ceph-mon[296066]: Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 09:47:04 np0005625203.localdomain ceph-mon[296066]: pgmap v6: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail; 41 KiB/s rd, 0 B/s wr, 23 op/s
Feb 20 09:47:04 np0005625203.localdomain sudo[302618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Feb 20 09:47:04 np0005625203.localdomain sudo[302618]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:47:04 np0005625203.localdomain sudo[302618]: pam_unix(sudo:session): session closed for user root
Feb 20 09:47:04 np0005625203.localdomain sudo[302636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config
Feb 20 09:47:04 np0005625203.localdomain sudo[302636]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:47:04 np0005625203.localdomain sudo[302636]: pam_unix(sudo:session): session closed for user root
Feb 20 09:47:04 np0005625203.localdomain sudo[302654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config
Feb 20 09:47:04 np0005625203.localdomain sudo[302654]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:47:04 np0005625203.localdomain sudo[302654]: pam_unix(sudo:session): session closed for user root
Feb 20 09:47:04 np0005625203.localdomain sudo[302672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring.new
Feb 20 09:47:04 np0005625203.localdomain sudo[302672]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:47:04 np0005625203.localdomain sudo[302672]: pam_unix(sudo:session): session closed for user root
Feb 20 09:47:05 np0005625203.localdomain sudo[302690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:47:05 np0005625203.localdomain sudo[302690]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:47:05 np0005625203.localdomain sudo[302690]: pam_unix(sudo:session): session closed for user root
Feb 20 09:47:05 np0005625203.localdomain sudo[302708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring.new
Feb 20 09:47:05 np0005625203.localdomain sudo[302708]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:47:05 np0005625203.localdomain sudo[302708]: pam_unix(sudo:session): session closed for user root
Feb 20 09:47:05 np0005625203.localdomain sudo[302742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring.new
Feb 20 09:47:05 np0005625203.localdomain sudo[302742]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:47:05 np0005625203.localdomain sudo[302742]: pam_unix(sudo:session): session closed for user root
Feb 20 09:47:05 np0005625203.localdomain sshd[302472]: Invalid user n8n from 34.131.211.42 port 40138
Feb 20 09:47:05 np0005625203.localdomain sudo[302760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring.new
Feb 20 09:47:05 np0005625203.localdomain sudo[302760]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:47:05 np0005625203.localdomain sudo[302760]: pam_unix(sudo:session): session closed for user root
Feb 20 09:47:05 np0005625203.localdomain sudo[302778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring.new /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 09:47:05 np0005625203.localdomain sudo[302778]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:47:05 np0005625203.localdomain sudo[302778]: pam_unix(sudo:session): session closed for user root
Feb 20 09:47:05 np0005625203.localdomain sshd[302472]: Received disconnect from 34.131.211.42 port 40138:11: Bye Bye [preauth]
Feb 20 09:47:05 np0005625203.localdomain sshd[302472]: Disconnected from invalid user n8n 34.131.211.42 port 40138 [preauth]
Feb 20 09:47:05 np0005625203.localdomain ceph-mon[296066]: mgrmap e41: np0005625204.exgrzx(active, since 6s), standbys: np0005625201.mtnyvu, np0005625202.arwxwo, np0005625203.lonygy
Feb 20 09:47:05 np0005625203.localdomain sudo[302796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:47:05 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "mgr metadata", "who": "np0005625203.lonygy", "id": "np0005625203.lonygy"} : dispatch
Feb 20 09:47:05 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:05 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:05 np0005625203.localdomain ceph-mon[296066]: Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 09:47:05 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:05 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:05 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:05 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:05 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:05 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 09:47:05 np0005625203.localdomain sudo[302796]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:47:05 np0005625203.localdomain sudo[302796]: pam_unix(sudo:session): session closed for user root
Feb 20 09:47:06 np0005625203.localdomain ceph-mon[296066]: pgmap v7: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 0 B/s wr, 18 op/s
Feb 20 09:47:06 np0005625203.localdomain ceph-mon[296066]: Reconfiguring mds.mds.np0005625202.akhmop (monmap changed)...
Feb 20 09:47:06 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625202.akhmop", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 09:47:06 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:47:06 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon mds.mds.np0005625202.akhmop on np0005625202.localdomain
Feb 20 09:47:06 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625202.akhmop", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 09:47:06 np0005625203.localdomain ceph-mon[296066]: Health check failed: 1 stray daemon(s) not managed by cephadm (CEPHADM_STRAY_DAEMON)
Feb 20 09:47:06 np0005625203.localdomain ceph-mon[296066]: Health check failed: 1 stray host(s) with 1 daemon(s) not managed by cephadm (CEPHADM_STRAY_HOST)
Feb 20 09:47:06 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:06 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625202.arwxwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:47:06 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:06 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "mgr services"} : dispatch
Feb 20 09:47:06 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:47:06 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625202.arwxwo", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:47:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:47:07 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 09:47:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:47:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:47:07 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 09:47:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:47:07 np0005625203.localdomain sudo[302814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:47:07 np0005625203.localdomain sudo[302814]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:47:07 np0005625203.localdomain sudo[302814]: pam_unix(sudo:session): session closed for user root
Feb 20 09:47:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:47:07.663 161112 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:47:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:47:07.663 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:47:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:47:07.664 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:47:07 np0005625203.localdomain sudo[302832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:47:07 np0005625203.localdomain sudo[302832]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:47:08 np0005625203.localdomain podman[302868]: 
Feb 20 09:47:08 np0005625203.localdomain podman[302868]: 2026-02-20 09:47:08.146070982 +0000 UTC m=+0.081527743 container create 3f63dd7945c32daa3880e9de6d3896cf07931b7f93dfd0172ee901924c541c7c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_ishizaka, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, io.buildah.version=1.42.2, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, distribution-scope=public, ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, build-date=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=rhceph ceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, release=1770267347, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=7, GIT_BRANCH=main, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=)
Feb 20 09:47:08 np0005625203.localdomain systemd[1]: Started libpod-conmon-3f63dd7945c32daa3880e9de6d3896cf07931b7f93dfd0172ee901924c541c7c.scope.
Feb 20 09:47:08 np0005625203.localdomain podman[302868]: 2026-02-20 09:47:08.110962976 +0000 UTC m=+0.046419767 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 09:47:08 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:47:08 np0005625203.localdomain podman[302868]: 2026-02-20 09:47:08.238279396 +0000 UTC m=+0.173736167 container init 3f63dd7945c32daa3880e9de6d3896cf07931b7f93dfd0172ee901924c541c7c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_ishizaka, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, description=Red Hat Ceph Storage 7, release=1770267347, io.openshift.tags=rhceph ceph, architecture=x86_64, version=7, GIT_BRANCH=main, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, com.redhat.component=rhceph-container, distribution-scope=public, io.buildah.version=1.42.2, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, CEPH_POINT_RELEASE=, GIT_CLEAN=True, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git)
Feb 20 09:47:08 np0005625203.localdomain podman[302868]: 2026-02-20 09:47:08.250262927 +0000 UTC m=+0.185719698 container start 3f63dd7945c32daa3880e9de6d3896cf07931b7f93dfd0172ee901924c541c7c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_ishizaka, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, GIT_CLEAN=True, release=1770267347, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2, version=7, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, architecture=x86_64, build-date=2026-02-09T10:25:24Z, distribution-scope=public, RELEASE=main, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers)
Feb 20 09:47:08 np0005625203.localdomain podman[302868]: 2026-02-20 09:47:08.250546646 +0000 UTC m=+0.186003457 container attach 3f63dd7945c32daa3880e9de6d3896cf07931b7f93dfd0172ee901924c541c7c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_ishizaka, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, GIT_CLEAN=True, distribution-scope=public, vcs-type=git, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main, io.buildah.version=1.42.2, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1770267347, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9)
Feb 20 09:47:08 np0005625203.localdomain reverent_ishizaka[302882]: 167 167
Feb 20 09:47:08 np0005625203.localdomain systemd[1]: libpod-3f63dd7945c32daa3880e9de6d3896cf07931b7f93dfd0172ee901924c541c7c.scope: Deactivated successfully.
Feb 20 09:47:08 np0005625203.localdomain podman[302868]: 2026-02-20 09:47:08.255948972 +0000 UTC m=+0.191405793 container died 3f63dd7945c32daa3880e9de6d3896cf07931b7f93dfd0172ee901924c541c7c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_ishizaka, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, architecture=x86_64, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_BRANCH=main, vcs-type=git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main, io.buildah.version=1.42.2, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, name=rhceph, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, build-date=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, io.openshift.expose-services=, distribution-scope=public)
Feb 20 09:47:08 np0005625203.localdomain ceph-mon[296066]: Reconfiguring mgr.np0005625202.arwxwo (monmap changed)...
Feb 20 09:47:08 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon mgr.np0005625202.arwxwo on np0005625202.localdomain
Feb 20 09:47:08 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:08 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625203", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:47:08 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:08 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:47:08 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625203", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:47:08 np0005625203.localdomain podman[302887]: 2026-02-20 09:47:08.360390945 +0000 UTC m=+0.091800472 container remove 3f63dd7945c32daa3880e9de6d3896cf07931b7f93dfd0172ee901924c541c7c (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=reverent_ishizaka, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, io.openshift.expose-services=, ceph=True, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, GIT_BRANCH=main, vcs-type=git, com.redhat.component=rhceph-container, io.buildah.version=1.42.2, architecture=x86_64, build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, release=1770267347, io.k8s.description=Red Hat Ceph Storage 7)
Feb 20 09:47:08 np0005625203.localdomain systemd[1]: libpod-conmon-3f63dd7945c32daa3880e9de6d3896cf07931b7f93dfd0172ee901924c541c7c.scope: Deactivated successfully.
Feb 20 09:47:08 np0005625203.localdomain sudo[302832]: pam_unix(sudo:session): session closed for user root
Feb 20 09:47:08 np0005625203.localdomain sudo[302903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:47:08 np0005625203.localdomain sudo[302903]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:47:08 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:47:08 np0005625203.localdomain sudo[302903]: pam_unix(sudo:session): session closed for user root
Feb 20 09:47:08 np0005625203.localdomain sudo[302922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:47:08 np0005625203.localdomain sudo[302922]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:47:08 np0005625203.localdomain podman[302921]: 2026-02-20 09:47:08.668006055 +0000 UTC m=+0.094073812 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 09:47:08 np0005625203.localdomain podman[302921]: 2026-02-20 09:47:08.684291008 +0000 UTC m=+0.110358755 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 20 09:47:08 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 09:47:08 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:47:09 np0005625203.localdomain podman[302978]: 
Feb 20 09:47:09 np0005625203.localdomain podman[302978]: 2026-02-20 09:47:09.087841307 +0000 UTC m=+0.075367673 container create e0debddc7e875c50bc36fff61e2b16aff88ef4690bebdae178c22b2c4060fa4d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=amazing_einstein, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, io.buildah.version=1.42.2, ceph=True, version=7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, architecture=x86_64, io.k8s.description=Red Hat Ceph Storage 7, distribution-scope=public, GIT_BRANCH=main, release=1770267347, com.redhat.component=rhceph-container, CEPH_POINT_RELEASE=, build-date=2026-02-09T10:25:24Z, io.openshift.expose-services=, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, RELEASE=main)
Feb 20 09:47:09 np0005625203.localdomain systemd[1]: Started libpod-conmon-e0debddc7e875c50bc36fff61e2b16aff88ef4690bebdae178c22b2c4060fa4d.scope.
Feb 20 09:47:09 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:47:09 np0005625203.localdomain podman[302978]: 2026-02-20 09:47:09.053973699 +0000 UTC m=+0.041500115 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 09:47:09 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-bcf3b006dcb03a544dabebf0a4feb12f413b7c3efbddc42fd0ae1e8e92a70a8a-merged.mount: Deactivated successfully.
Feb 20 09:47:09 np0005625203.localdomain podman[302978]: 2026-02-20 09:47:09.157658768 +0000 UTC m=+0.145185154 container init e0debddc7e875c50bc36fff61e2b16aff88ef4690bebdae178c22b2c4060fa4d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=amazing_einstein, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, io.buildah.version=1.42.2, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.tags=rhceph ceph, ceph=True, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, distribution-scope=public, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, description=Red Hat Ceph Storage 7, GIT_BRANCH=main, com.redhat.component=rhceph-container, RELEASE=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2026-02-09T10:25:24Z, vcs-type=git, CEPH_POINT_RELEASE=, io.k8s.description=Red Hat Ceph Storage 7, release=1770267347, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Feb 20 09:47:09 np0005625203.localdomain podman[302978]: 2026-02-20 09:47:09.166273484 +0000 UTC m=+0.153799830 container start e0debddc7e875c50bc36fff61e2b16aff88ef4690bebdae178c22b2c4060fa4d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=amazing_einstein, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, version=7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, org.opencontainers.image.created=2026-02-09T10:25:24Z, architecture=x86_64, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., release=1770267347, io.openshift.expose-services=, build-date=2026-02-09T10:25:24Z, vcs-type=git, ceph=True, io.buildah.version=1.42.2, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, io.openshift.tags=rhceph ceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main)
Feb 20 09:47:09 np0005625203.localdomain systemd[1]: libpod-e0debddc7e875c50bc36fff61e2b16aff88ef4690bebdae178c22b2c4060fa4d.scope: Deactivated successfully.
Feb 20 09:47:09 np0005625203.localdomain podman[302978]: 2026-02-20 09:47:09.166516833 +0000 UTC m=+0.154043169 container attach e0debddc7e875c50bc36fff61e2b16aff88ef4690bebdae178c22b2c4060fa4d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=amazing_einstein, GIT_REPO=https://github.com/ceph/ceph-container.git, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, CEPH_POINT_RELEASE=, org.opencontainers.image.created=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.expose-services=, RELEASE=main, architecture=x86_64, distribution-scope=public, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, release=1770267347, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2, GIT_CLEAN=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, vendor=Red Hat, Inc.)
Feb 20 09:47:09 np0005625203.localdomain amazing_einstein[302991]: 167 167
Feb 20 09:47:09 np0005625203.localdomain podman[302978]: 2026-02-20 09:47:09.173989773 +0000 UTC m=+0.161516169 container died e0debddc7e875c50bc36fff61e2b16aff88ef4690bebdae178c22b2c4060fa4d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=amazing_einstein, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, vcs-type=git, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, com.redhat.component=rhceph-container, org.opencontainers.image.created=2026-02-09T10:25:24Z, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, version=7, vendor=Red Hat, Inc., io.k8s.description=Red Hat Ceph Storage 7, GIT_CLEAN=True, RELEASE=main, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2, GIT_BRANCH=main, release=1770267347, build-date=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 09:47:09 np0005625203.localdomain systemd[1]: tmp-crun.Imgkmv.mount: Deactivated successfully.
Feb 20 09:47:09 np0005625203.localdomain podman[302998]: 2026-02-20 09:47:09.282681587 +0000 UTC m=+0.099011005 container remove e0debddc7e875c50bc36fff61e2b16aff88ef4690bebdae178c22b2c4060fa4d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=amazing_einstein, io.buildah.version=1.42.2, RELEASE=main, release=1770267347, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, vcs-type=git, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, distribution-scope=public, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7)
Feb 20 09:47:09 np0005625203.localdomain systemd[1]: libpod-conmon-e0debddc7e875c50bc36fff61e2b16aff88ef4690bebdae178c22b2c4060fa4d.scope: Deactivated successfully.
Feb 20 09:47:09 np0005625203.localdomain ceph-mon[296066]: pgmap v8: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail; 25 KiB/s rd, 0 B/s wr, 14 op/s
Feb 20 09:47:09 np0005625203.localdomain ceph-mon[296066]: Reconfiguring crash.np0005625203 (monmap changed)...
Feb 20 09:47:09 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon crash.np0005625203 on np0005625203.localdomain
Feb 20 09:47:09 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:09 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:09 np0005625203.localdomain ceph-mon[296066]: Reconfiguring osd.1 (monmap changed)...
Feb 20 09:47:09 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Feb 20 09:47:09 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:09 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:47:09 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon osd.1 on np0005625203.localdomain
Feb 20 09:47:09 np0005625203.localdomain sudo[302922]: pam_unix(sudo:session): session closed for user root
Feb 20 09:47:09 np0005625203.localdomain sudo[303021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:47:09 np0005625203.localdomain sudo[303021]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:47:09 np0005625203.localdomain sudo[303021]: pam_unix(sudo:session): session closed for user root
Feb 20 09:47:09 np0005625203.localdomain sudo[303039]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:47:09 np0005625203.localdomain sudo[303039]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:47:10 np0005625203.localdomain podman[303075]: 
Feb 20 09:47:10 np0005625203.localdomain podman[303075]: 2026-02-20 09:47:10.094356165 +0000 UTC m=+0.081128671 container create fe8fb83e0d4f458238a84f8a102bbedd21eabacbd0afdca28d48e8fd98efa1b5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=priceless_elbakyan, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, RELEASE=main, build-date=2026-02-09T10:25:24Z, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, version=7, GIT_BRANCH=main, release=1770267347, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, ceph=True, io.openshift.expose-services=, architecture=x86_64, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., GIT_CLEAN=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Feb 20 09:47:10 np0005625203.localdomain systemd[1]: Started libpod-conmon-fe8fb83e0d4f458238a84f8a102bbedd21eabacbd0afdca28d48e8fd98efa1b5.scope.
Feb 20 09:47:10 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:47:10 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:47:10 np0005625203.localdomain podman[303075]: 2026-02-20 09:47:10.154248569 +0000 UTC m=+0.141021075 container init fe8fb83e0d4f458238a84f8a102bbedd21eabacbd0afdca28d48e8fd98efa1b5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=priceless_elbakyan, CEPH_POINT_RELEASE=, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, description=Red Hat Ceph Storage 7, RELEASE=main, io.buildah.version=1.42.2, com.redhat.component=rhceph-container, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, name=rhceph, build-date=2026-02-09T10:25:24Z, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_REPO=https://github.com/ceph/ceph-container.git, vendor=Red Hat, Inc., GIT_BRANCH=main, release=1770267347, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-type=git, distribution-scope=public, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Feb 20 09:47:10 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f9e715a9a63eb0c8521bfd09b4c95aa29d5c9c5eab31ac7471dd82a755fbc100-merged.mount: Deactivated successfully.
Feb 20 09:47:10 np0005625203.localdomain podman[303075]: 2026-02-20 09:47:10.063199901 +0000 UTC m=+0.049972487 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 09:47:10 np0005625203.localdomain podman[303075]: 2026-02-20 09:47:10.165074754 +0000 UTC m=+0.151847260 container start fe8fb83e0d4f458238a84f8a102bbedd21eabacbd0afdca28d48e8fd98efa1b5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=priceless_elbakyan, ceph=True, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_CLEAN=True, io.buildah.version=1.42.2, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, CEPH_POINT_RELEASE=, vcs-type=git, com.redhat.component=rhceph-container, version=7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, release=1770267347, org.opencontainers.image.created=2026-02-09T10:25:24Z, build-date=2026-02-09T10:25:24Z, RELEASE=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., architecture=x86_64)
Feb 20 09:47:10 np0005625203.localdomain podman[303075]: 2026-02-20 09:47:10.165346432 +0000 UTC m=+0.152118938 container attach fe8fb83e0d4f458238a84f8a102bbedd21eabacbd0afdca28d48e8fd98efa1b5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=priceless_elbakyan, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7, RELEASE=main, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, name=rhceph, description=Red Hat Ceph Storage 7, io.k8s.description=Red Hat Ceph Storage 7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, build-date=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, distribution-scope=public, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, architecture=x86_64, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., GIT_CLEAN=True, io.openshift.expose-services=)
Feb 20 09:47:10 np0005625203.localdomain priceless_elbakyan[303091]: 167 167
Feb 20 09:47:10 np0005625203.localdomain podman[303075]: 2026-02-20 09:47:10.169264203 +0000 UTC m=+0.156036689 container died fe8fb83e0d4f458238a84f8a102bbedd21eabacbd0afdca28d48e8fd98efa1b5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=priceless_elbakyan, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_BRANCH=main, io.buildah.version=1.42.2, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, CEPH_POINT_RELEASE=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=rhceph-container, io.openshift.expose-services=, GIT_CLEAN=True, distribution-scope=public, vcs-type=git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, name=rhceph, build-date=2026-02-09T10:25:24Z, org.opencontainers.image.created=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1770267347)
Feb 20 09:47:10 np0005625203.localdomain systemd[1]: libpod-fe8fb83e0d4f458238a84f8a102bbedd21eabacbd0afdca28d48e8fd98efa1b5.scope: Deactivated successfully.
Feb 20 09:47:10 np0005625203.localdomain podman[303106]: 2026-02-20 09:47:10.290521376 +0000 UTC m=+0.107765846 container remove fe8fb83e0d4f458238a84f8a102bbedd21eabacbd0afdca28d48e8fd98efa1b5 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=priceless_elbakyan, GIT_REPO=https://github.com/ceph/ceph-container.git, io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=, version=7, CEPH_POINT_RELEASE=, name=rhceph, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_BRANCH=main, vcs-type=git, GIT_CLEAN=True, ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, release=1770267347, build-date=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., distribution-scope=public, io.buildah.version=1.42.2, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, RELEASE=main)
Feb 20 09:47:10 np0005625203.localdomain systemd[1]: libpod-conmon-fe8fb83e0d4f458238a84f8a102bbedd21eabacbd0afdca28d48e8fd98efa1b5.scope: Deactivated successfully.
Feb 20 09:47:10 np0005625203.localdomain podman[303093]: 2026-02-20 09:47:10.249251499 +0000 UTC m=+0.103227596 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 20 09:47:10 np0005625203.localdomain podman[303093]: 2026-02-20 09:47:10.333605009 +0000 UTC m=+0.187581106 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 20 09:47:10 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:47:10 np0005625203.localdomain sudo[303039]: pam_unix(sudo:session): session closed for user root
Feb 20 09:47:10 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:10 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:10 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:10 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:10 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "auth get", "entity": "osd.4"} : dispatch
Feb 20 09:47:10 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:47:10 np0005625203.localdomain sudo[303142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:47:10 np0005625203.localdomain sudo[303142]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:47:10 np0005625203.localdomain sudo[303142]: pam_unix(sudo:session): session closed for user root
Feb 20 09:47:10 np0005625203.localdomain sudo[303160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:47:10 np0005625203.localdomain sudo[303160]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:47:11 np0005625203.localdomain systemd[1]: tmp-crun.8TKPMM.mount: Deactivated successfully.
Feb 20 09:47:11 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0d810028b1d7cfdd0ebd632937f14f6e63aeec51ba9db0870ba03eadca6c7fef-merged.mount: Deactivated successfully.
Feb 20 09:47:11 np0005625203.localdomain podman[303194]: 
Feb 20 09:47:11 np0005625203.localdomain podman[303194]: 2026-02-20 09:47:11.17830442 +0000 UTC m=+0.079848132 container create 66035cc6a6911af25d63c10623a24db34701308ad78becc2868e8c143dc2a2ae (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_dhawan, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, RELEASE=main, io.openshift.expose-services=, io.buildah.version=1.42.2, name=rhceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, release=1770267347, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_CLEAN=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, build-date=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, description=Red Hat Ceph Storage 7, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.k8s.description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, GIT_REPO=https://github.com/ceph/ceph-container.git, ceph=True)
Feb 20 09:47:11 np0005625203.localdomain systemd[1]: Started libpod-conmon-66035cc6a6911af25d63c10623a24db34701308ad78becc2868e8c143dc2a2ae.scope.
Feb 20 09:47:11 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:47:11 np0005625203.localdomain podman[303194]: 2026-02-20 09:47:11.145233686 +0000 UTC m=+0.046777438 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 09:47:11 np0005625203.localdomain podman[303194]: 2026-02-20 09:47:11.255722295 +0000 UTC m=+0.157266017 container init 66035cc6a6911af25d63c10623a24db34701308ad78becc2868e8c143dc2a2ae (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_dhawan, vendor=Red Hat, Inc., org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_CLEAN=True, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=rhceph-container, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, release=1770267347, io.openshift.tags=rhceph ceph, RELEASE=main, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.buildah.version=1.42.2)
Feb 20 09:47:11 np0005625203.localdomain podman[303194]: 2026-02-20 09:47:11.268013016 +0000 UTC m=+0.169556798 container start 66035cc6a6911af25d63c10623a24db34701308ad78becc2868e8c143dc2a2ae (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_dhawan, url=https://catalog.redhat.com/en/search?searchType=containers, name=rhceph, ceph=True, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, GIT_REPO=https://github.com/ceph/ceph-container.git, release=1770267347, vendor=Red Hat, Inc., description=Red Hat Ceph Storage 7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.description=Red Hat Ceph Storage 7, version=7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.42.2, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, io.openshift.tags=rhceph ceph, CEPH_POINT_RELEASE=, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., build-date=2026-02-09T10:25:24Z, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2026-02-09T10:25:24Z, com.redhat.component=rhceph-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, vcs-type=git, GIT_BRANCH=main, RELEASE=main)
Feb 20 09:47:11 np0005625203.localdomain podman[303194]: 2026-02-20 09:47:11.268615815 +0000 UTC m=+0.170159577 container attach 66035cc6a6911af25d63c10623a24db34701308ad78becc2868e8c143dc2a2ae (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_dhawan, RELEASE=main, release=1770267347, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, name=rhceph, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., io.openshift.expose-services=, version=7, com.redhat.component=rhceph-container, GIT_BRANCH=main, vendor=Red Hat, Inc., ceph=True, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, description=Red Hat Ceph Storage 7, io.buildah.version=1.42.2, vcs-type=git, build-date=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0)
Feb 20 09:47:11 np0005625203.localdomain quirky_dhawan[303209]: 167 167
Feb 20 09:47:11 np0005625203.localdomain systemd[1]: libpod-66035cc6a6911af25d63c10623a24db34701308ad78becc2868e8c143dc2a2ae.scope: Deactivated successfully.
Feb 20 09:47:11 np0005625203.localdomain podman[303194]: 2026-02-20 09:47:11.273128055 +0000 UTC m=+0.174671797 container died 66035cc6a6911af25d63c10623a24db34701308ad78becc2868e8c143dc2a2ae (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_dhawan, release=1770267347, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.created=2026-02-09T10:25:24Z, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, CEPH_POINT_RELEASE=, RELEASE=main, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.42.2, GIT_CLEAN=True, name=rhceph, ceph=True, build-date=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, io.openshift.tags=rhceph ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7, com.redhat.component=rhceph-container, vcs-type=git)
Feb 20 09:47:11 np0005625203.localdomain podman[303214]: 2026-02-20 09:47:11.365005107 +0000 UTC m=+0.080260734 container remove 66035cc6a6911af25d63c10623a24db34701308ad78becc2868e8c143dc2a2ae (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=quirky_dhawan, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, name=rhceph, ceph=True, com.redhat.component=rhceph-container, io.openshift.tags=rhceph ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, CEPH_POINT_RELEASE=, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, url=https://catalog.redhat.com/en/search?searchType=containers, version=7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public, release=1770267347, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-09T10:25:24Z, io.buildah.version=1.42.2, io.k8s.description=Red Hat Ceph Storage 7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, GIT_BRANCH=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 09:47:11 np0005625203.localdomain systemd[1]: libpod-conmon-66035cc6a6911af25d63c10623a24db34701308ad78becc2868e8c143dc2a2ae.scope: Deactivated successfully.
Feb 20 09:47:11 np0005625203.localdomain sudo[303160]: pam_unix(sudo:session): session closed for user root
Feb 20 09:47:11 np0005625203.localdomain ceph-mon[296066]: pgmap v9: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 0 B/s wr, 11 op/s
Feb 20 09:47:11 np0005625203.localdomain ceph-mon[296066]: Reconfiguring osd.4 (monmap changed)...
Feb 20 09:47:11 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon osd.4 on np0005625203.localdomain
Feb 20 09:47:11 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:11 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:11 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:11 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625203.zsrwgk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 09:47:11 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:11 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:47:11 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625203.zsrwgk", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 09:47:11 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:11 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625203.lonygy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:47:11 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:11 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "mgr services"} : dispatch
Feb 20 09:47:11 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:47:11 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625203.lonygy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:47:11 np0005625203.localdomain sudo[303230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:47:11 np0005625203.localdomain sudo[303230]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:47:11 np0005625203.localdomain sudo[303230]: pam_unix(sudo:session): session closed for user root
Feb 20 09:47:11 np0005625203.localdomain sudo[303248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:47:11 np0005625203.localdomain sudo[303248]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:47:12 np0005625203.localdomain podman[303282]: 
Feb 20 09:47:12 np0005625203.localdomain podman[303282]: 2026-02-20 09:47:12.062501012 +0000 UTC m=+0.079440359 container create 5b7b89c75671ac8a2b49028b91d6bc1630eb85aa2618e184fd66c119fdce3d9a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_archimedes, io.k8s.description=Red Hat Ceph Storage 7, version=7, org.opencontainers.image.created=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, name=rhceph, GIT_BRANCH=main, ceph=True, release=1770267347, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, build-date=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.buildah.version=1.42.2, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=rhceph ceph, architecture=x86_64, vendor=Red Hat, Inc.)
Feb 20 09:47:12 np0005625203.localdomain systemd[1]: Started libpod-conmon-5b7b89c75671ac8a2b49028b91d6bc1630eb85aa2618e184fd66c119fdce3d9a.scope.
Feb 20 09:47:12 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:47:12 np0005625203.localdomain podman[303282]: 2026-02-20 09:47:12.124622515 +0000 UTC m=+0.141561832 container init 5b7b89c75671ac8a2b49028b91d6bc1630eb85aa2618e184fd66c119fdce3d9a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_archimedes, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=7, io.openshift.expose-services=, GIT_REPO=https://github.com/ceph/ceph-container.git, architecture=x86_64, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, com.redhat.component=rhceph-container, distribution-scope=public, GIT_BRANCH=main, build-date=2026-02-09T10:25:24Z, ceph=True, release=1770267347, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_CLEAN=True, name=rhceph, RELEASE=main, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, vcs-type=git, io.buildah.version=1.42.2, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, io.openshift.tags=rhceph ceph, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.created=2026-02-09T10:25:24Z)
Feb 20 09:47:12 np0005625203.localdomain podman[303282]: 2026-02-20 09:47:12.030299636 +0000 UTC m=+0.047238973 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 09:47:12 np0005625203.localdomain podman[303282]: 2026-02-20 09:47:12.132366225 +0000 UTC m=+0.149305542 container start 5b7b89c75671ac8a2b49028b91d6bc1630eb85aa2618e184fd66c119fdce3d9a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_archimedes, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-09T10:25:24Z, vendor=Red Hat, Inc., version=7, description=Red Hat Ceph Storage 7, RELEASE=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, CEPH_POINT_RELEASE=, io.buildah.version=1.42.2, name=rhceph, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.tags=rhceph ceph, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, architecture=x86_64, GIT_BRANCH=main, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, release=1770267347, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z)
Feb 20 09:47:12 np0005625203.localdomain podman[303282]: 2026-02-20 09:47:12.133073196 +0000 UTC m=+0.150012513 container attach 5b7b89c75671ac8a2b49028b91d6bc1630eb85aa2618e184fd66c119fdce3d9a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_archimedes, vendor=Red Hat, Inc., GIT_CLEAN=True, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, architecture=x86_64, vcs-type=git, name=rhceph, GIT_BRANCH=main, org.opencontainers.image.created=2026-02-09T10:25:24Z, RELEASE=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.buildah.version=1.42.2, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., release=1770267347, description=Red Hat Ceph Storage 7, version=7, io.openshift.tags=rhceph ceph)
Feb 20 09:47:12 np0005625203.localdomain magical_archimedes[303297]: 167 167
Feb 20 09:47:12 np0005625203.localdomain systemd[1]: libpod-5b7b89c75671ac8a2b49028b91d6bc1630eb85aa2618e184fd66c119fdce3d9a.scope: Deactivated successfully.
Feb 20 09:47:12 np0005625203.localdomain podman[303282]: 2026-02-20 09:47:12.136907895 +0000 UTC m=+0.153847232 container died 5b7b89c75671ac8a2b49028b91d6bc1630eb85aa2618e184fd66c119fdce3d9a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_archimedes, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, vcs-type=git, io.openshift.tags=rhceph ceph, com.redhat.component=rhceph-container, build-date=2026-02-09T10:25:24Z, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, distribution-scope=public, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, io.k8s.description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, version=7, release=1770267347, name=rhceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, CEPH_POINT_RELEASE=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, ceph=True, description=Red Hat Ceph Storage 7, architecture=x86_64, vendor=Red Hat, Inc., summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., RELEASE=main)
Feb 20 09:47:12 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-c8fedfe168cc5e4069eb24061493ce338d0c0194de6d7a14124566b3c1f6e4a8-merged.mount: Deactivated successfully.
Feb 20 09:47:12 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-b7e9ad6e415facdcf574193b8f40fc5816be6e9b38fb95bb201bd2289661a6d0-merged.mount: Deactivated successfully.
Feb 20 09:47:12 np0005625203.localdomain podman[303302]: 2026-02-20 09:47:12.255837105 +0000 UTC m=+0.111038266 container remove 5b7b89c75671ac8a2b49028b91d6bc1630eb85aa2618e184fd66c119fdce3d9a (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=magical_archimedes, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.tags=rhceph ceph, ceph=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_CLEAN=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., vcs-type=git, build-date=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, vendor=Red Hat, Inc., io.buildah.version=1.42.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=Red Hat Ceph Storage 7, version=7, RELEASE=main, GIT_BRANCH=main, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=Red Hat Ceph Storage 7, architecture=x86_64, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 09:47:12 np0005625203.localdomain systemd[1]: libpod-conmon-5b7b89c75671ac8a2b49028b91d6bc1630eb85aa2618e184fd66c119fdce3d9a.scope: Deactivated successfully.
Feb 20 09:47:12 np0005625203.localdomain sudo[303248]: pam_unix(sudo:session): session closed for user root
Feb 20 09:47:12 np0005625203.localdomain ceph-mon[296066]: Reconfiguring mds.mds.np0005625203.zsrwgk (monmap changed)...
Feb 20 09:47:12 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon mds.mds.np0005625203.zsrwgk on np0005625203.localdomain
Feb 20 09:47:12 np0005625203.localdomain ceph-mon[296066]: Reconfiguring mgr.np0005625203.lonygy (monmap changed)...
Feb 20 09:47:12 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon mgr.np0005625203.lonygy on np0005625203.localdomain
Feb 20 09:47:12 np0005625203.localdomain ceph-mon[296066]: pgmap v10: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail; 18 KiB/s rd, 0 B/s wr, 10 op/s
Feb 20 09:47:12 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:12 np0005625203.localdomain ceph-mon[296066]: Reconfiguring crash.np0005625204 (monmap changed)...
Feb 20 09:47:12 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625204", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:47:12 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:12 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:47:12 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon crash.np0005625204 on np0005625204.localdomain
Feb 20 09:47:12 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "auth get-or-create", "entity": "client.crash.np0005625204", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Feb 20 09:47:13 np0005625203.localdomain sshd[303317]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:47:13 np0005625203.localdomain sshd[303317]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:47:13 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:47:14 np0005625203.localdomain ceph-mon[296066]: from='client.44559 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 20 09:47:14 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:14 np0005625203.localdomain ceph-mon[296066]: Reconfiguring osd.0 (monmap changed)...
Feb 20 09:47:14 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Feb 20 09:47:14 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:14 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:47:14 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon osd.0 on np0005625204.localdomain
Feb 20 09:47:14 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:15 np0005625203.localdomain ceph-mon[296066]: pgmap v11: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail; 18 KiB/s rd, 0 B/s wr, 10 op/s
Feb 20 09:47:15 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:15 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:15 np0005625203.localdomain ceph-mon[296066]: Reconfiguring osd.3 (monmap changed)...
Feb 20 09:47:15 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "auth get", "entity": "osd.3"} : dispatch
Feb 20 09:47:15 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:15 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:47:15 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon osd.3 on np0005625204.localdomain
Feb 20 09:47:16 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:16 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:16 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:16 np0005625203.localdomain ceph-mon[296066]: Reconfiguring mds.mds.np0005625204.wnsphl (monmap changed)...
Feb 20 09:47:16 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625204.wnsphl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 09:47:16 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:16 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:47:16 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon mds.mds.np0005625204.wnsphl on np0005625204.localdomain
Feb 20 09:47:16 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "auth get-or-create", "entity": "mds.mds.np0005625204.wnsphl", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Feb 20 09:47:16 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:47:17.189 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:47:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:47:17.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:47:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:47:17.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:47:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:47:17.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:47:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:47:17.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:47:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:47:17.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:47:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:47:17.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:47:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:47:17.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:47:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:47:17.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:47:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:47:17.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:47:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:47:17.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:47:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:47:17.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:47:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:47:17.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:47:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:47:17.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:47:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:47:17.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:47:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:47:17.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:47:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:47:17.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:47:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:47:17.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:47:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:47:17.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:47:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:47:17.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:47:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:47:17.193 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:47:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:47:17.193 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:47:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:47:17.193 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:47:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:47:17.193 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:47:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:47:17.193 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:47:17 np0005625203.localdomain ceph-mon[296066]: pgmap v12: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:47:17 np0005625203.localdomain ceph-mon[296066]: from='client.44562 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 09:47:17 np0005625203.localdomain ceph-mon[296066]: Saving service mon spec with placement label:mon
Feb 20 09:47:17 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:17 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:17 np0005625203.localdomain ceph-mon[296066]: Reconfiguring mgr.np0005625204.exgrzx (monmap changed)...
Feb 20 09:47:17 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625204.exgrzx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:47:17 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "mgr services"} : dispatch
Feb 20 09:47:17 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:47:17 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "auth get-or-create", "entity": "mgr.np0005625204.exgrzx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Feb 20 09:47:17 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon mgr.np0005625204.exgrzx on np0005625204.localdomain
Feb 20 09:47:17 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:18 np0005625203.localdomain ceph-mon[296066]: from='client.44571 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "daemon_id": "np0005625204", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 20 09:47:18 np0005625203.localdomain ceph-mon[296066]: Reconfiguring mon.np0005625204 (monmap changed)...
Feb 20 09:47:18 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:18 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 20 09:47:18 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Feb 20 09:47:18 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:47:18 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon mon.np0005625204 on np0005625204.localdomain
Feb 20 09:47:18 np0005625203.localdomain sudo[303319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:47:18 np0005625203.localdomain sudo[303319]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:47:18 np0005625203.localdomain sudo[303319]: pam_unix(sudo:session): session closed for user root
Feb 20 09:47:18 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:47:19 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:47:19 np0005625203.localdomain systemd[1]: tmp-crun.V82830.mount: Deactivated successfully.
Feb 20 09:47:19 np0005625203.localdomain podman[303337]: 2026-02-20 09:47:19.258222704 +0000 UTC m=+0.094243548 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Feb 20 09:47:19 np0005625203.localdomain podman[303337]: 2026-02-20 09:47:19.268570564 +0000 UTC m=+0.104591458 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Feb 20 09:47:19 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:47:19 np0005625203.localdomain ceph-mon[296066]: pgmap v13: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:47:19 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:19 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:47:19 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 09:47:19 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:19 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:19 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 09:47:19 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:19 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 20 09:47:19 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Feb 20 09:47:19 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:47:19 np0005625203.localdomain sudo[303355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:47:19 np0005625203.localdomain sudo[303355]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:47:19 np0005625203.localdomain sudo[303355]: pam_unix(sudo:session): session closed for user root
Feb 20 09:47:19 np0005625203.localdomain sudo[303373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 _orch deploy --fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:47:19 np0005625203.localdomain sudo[303373]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:47:20 np0005625203.localdomain podman[303408]: 
Feb 20 09:47:20 np0005625203.localdomain podman[303408]: 2026-02-20 09:47:20.293642657 +0000 UTC m=+0.079095169 container create becaaef5f5cffb910bada72f3470698e82d072f7b434d747deecf130e3723124 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=xenodochial_hellman, name=rhceph, com.redhat.component=rhceph-container, description=Red Hat Ceph Storage 7, version=7, build-date=2026-02-09T10:25:24Z, vcs-type=git, GIT_BRANCH=main, GIT_CLEAN=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, CEPH_POINT_RELEASE=, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, distribution-scope=public, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, release=1770267347, io.buildah.version=1.42.2)
Feb 20 09:47:20 np0005625203.localdomain systemd[1]: Started libpod-conmon-becaaef5f5cffb910bada72f3470698e82d072f7b434d747deecf130e3723124.scope.
Feb 20 09:47:20 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:47:20 np0005625203.localdomain podman[303408]: 2026-02-20 09:47:20.259900812 +0000 UTC m=+0.045353344 image pull  registry.redhat.io/rhceph/rhceph-7-rhel9:latest
Feb 20 09:47:20 np0005625203.localdomain podman[303408]: 2026-02-20 09:47:20.363217289 +0000 UTC m=+0.148669791 container init becaaef5f5cffb910bada72f3470698e82d072f7b434d747deecf130e3723124 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=xenodochial_hellman, vcs-type=git, GIT_BRANCH=main, io.openshift.tags=rhceph ceph, release=1770267347, RELEASE=main, com.redhat.component=rhceph-container, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=7, GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, CEPH_POINT_RELEASE=, io.buildah.version=1.42.2, description=Red Hat Ceph Storage 7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, vendor=Red Hat, Inc., GIT_REPO=https://github.com/ceph/ceph-container.git, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, ceph=True, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=rhceph, build-date=2026-02-09T10:25:24Z)
Feb 20 09:47:20 np0005625203.localdomain podman[303408]: 2026-02-20 09:47:20.372852008 +0000 UTC m=+0.158304520 container start becaaef5f5cffb910bada72f3470698e82d072f7b434d747deecf130e3723124 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=xenodochial_hellman, name=rhceph, build-date=2026-02-09T10:25:24Z, RELEASE=main, GIT_CLEAN=True, CEPH_POINT_RELEASE=, vcs-type=git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.tags=rhceph ceph, description=Red Hat Ceph Storage 7, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.component=rhceph-container, ceph=True, GIT_REPO=https://github.com/ceph/ceph-container.git, org.opencontainers.image.created=2026-02-09T10:25:24Z, release=1770267347, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, io.openshift.expose-services=, distribution-scope=public, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7, version=7, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.42.2)
Feb 20 09:47:20 np0005625203.localdomain podman[303408]: 2026-02-20 09:47:20.373096716 +0000 UTC m=+0.158549258 container attach becaaef5f5cffb910bada72f3470698e82d072f7b434d747deecf130e3723124 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=xenodochial_hellman, GIT_CLEAN=True, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, release=1770267347, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, com.redhat.component=rhceph-container, org.opencontainers.image.created=2026-02-09T10:25:24Z, description=Red Hat Ceph Storage 7, architecture=x86_64, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., RELEASE=main, io.buildah.version=1.42.2, CEPH_POINT_RELEASE=, version=7, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.expose-services=, build-date=2026-02-09T10:25:24Z, io.openshift.tags=rhceph ceph, GIT_REPO=https://github.com/ceph/ceph-container.git, name=rhceph, GIT_BRANCH=main)
Feb 20 09:47:20 np0005625203.localdomain xenodochial_hellman[303422]: 167 167
Feb 20 09:47:20 np0005625203.localdomain systemd[1]: libpod-becaaef5f5cffb910bada72f3470698e82d072f7b434d747deecf130e3723124.scope: Deactivated successfully.
Feb 20 09:47:20 np0005625203.localdomain podman[303408]: 2026-02-20 09:47:20.376699187 +0000 UTC m=+0.162151719 container died becaaef5f5cffb910bada72f3470698e82d072f7b434d747deecf130e3723124 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=xenodochial_hellman, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., name=rhceph, vcs-type=git, release=1770267347, GIT_CLEAN=True, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, RELEASE=main, io.k8s.description=Red Hat Ceph Storage 7, architecture=x86_64, io.openshift.tags=rhceph ceph, GIT_BRANCH=main, io.buildah.version=1.42.2, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, url=https://catalog.redhat.com/en/search?searchType=containers, ceph=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, CEPH_POINT_RELEASE=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., distribution-scope=public, build-date=2026-02-09T10:25:24Z, GIT_REPO=https://github.com/ceph/ceph-container.git, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, version=7)
Feb 20 09:47:20 np0005625203.localdomain podman[303427]: 2026-02-20 09:47:20.473220354 +0000 UTC m=+0.084422934 container remove becaaef5f5cffb910bada72f3470698e82d072f7b434d747deecf130e3723124 (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=xenodochial_hellman, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, build-date=2026-02-09T10:25:24Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, io.openshift.tags=rhceph ceph, RELEASE=main, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vendor=Red Hat, Inc., CEPH_POINT_RELEASE=, architecture=x86_64, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, GIT_BRANCH=main, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, ceph=True, vcs-type=git, com.redhat.component=rhceph-container, GIT_CLEAN=True, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.42.2, org.opencontainers.image.created=2026-02-09T10:25:24Z, name=rhceph, distribution-scope=public, io.k8s.description=Red Hat Ceph Storage 7, description=Red Hat Ceph Storage 7, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, GIT_REPO=https://github.com/ceph/ceph-container.git, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., version=7)
Feb 20 09:47:20 np0005625203.localdomain systemd[1]: libpod-conmon-becaaef5f5cffb910bada72f3470698e82d072f7b434d747deecf130e3723124.scope: Deactivated successfully.
Feb 20 09:47:20 np0005625203.localdomain ceph-mon[296066]: Reconfiguring mon.np0005625202 (monmap changed)...
Feb 20 09:47:20 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon mon.np0005625202 on np0005625202.localdomain
Feb 20 09:47:20 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:20 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Feb 20 09:47:20 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:20 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Feb 20 09:47:20 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:47:20 np0005625203.localdomain sudo[303373]: pam_unix(sudo:session): session closed for user root
Feb 20 09:47:21 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f132ad69928f60d71a5f105001f0c1aa528610d572d89432e1357c6648dfd83b-merged.mount: Deactivated successfully.
Feb 20 09:47:21 np0005625203.localdomain ceph-mon[296066]: pgmap v14: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:47:21 np0005625203.localdomain ceph-mon[296066]: Reconfiguring mon.np0005625203 (monmap changed)...
Feb 20 09:47:21 np0005625203.localdomain ceph-mon[296066]: Reconfiguring daemon mon.np0005625203 on np0005625203.localdomain
Feb 20 09:47:21 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:21 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:23 np0005625203.localdomain ceph-mon[296066]: pgmap v15: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:47:23 np0005625203.localdomain ceph-mon[296066]: mgrmap e42: np0005625204.exgrzx(active, since 24s), standbys: np0005625202.arwxwo, np0005625203.lonygy
Feb 20 09:47:23 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:47:23 np0005625203.localdomain systemd[1]: tmp-crun.BJyGi8.mount: Deactivated successfully.
Feb 20 09:47:23 np0005625203.localdomain podman[303444]: 2026-02-20 09:47:23.7875072 +0000 UTC m=+0.099682256 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true)
Feb 20 09:47:23 np0005625203.localdomain podman[303444]: 2026-02-20 09:47:23.825532946 +0000 UTC m=+0.137708012 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 20 09:47:23 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:47:23 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:47:24 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:47:25 np0005625203.localdomain ceph-mon[296066]: pgmap v16: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:47:27 np0005625203.localdomain ceph-mon[296066]: pgmap v17: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:47:28 np0005625203.localdomain ceph-mon[296066]: pgmap v18: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:47:28 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:47:28 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:47:28 np0005625203.localdomain podman[303470]: 2026-02-20 09:47:28.770254327 +0000 UTC m=+0.087030414 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_compute, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 20 09:47:28 np0005625203.localdomain podman[303470]: 2026-02-20 09:47:28.783274881 +0000 UTC m=+0.100050998 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_id=ceilometer_agent_compute, managed_by=edpm_ansible)
Feb 20 09:47:28 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:47:28 np0005625203.localdomain podman[303471]: 2026-02-20 09:47:28.873163092 +0000 UTC m=+0.187241995 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, managed_by=edpm_ansible, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2026-02-05T04:57:10Z, name=ubi9/ubi-minimal, vcs-type=git, container_name=openstack_network_exporter, vendor=Red Hat, Inc., io.buildah.version=1.33.7, architecture=x86_64, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., org.opencontainers.image.created=2026-02-05T04:57:10Z, version=9.7)
Feb 20 09:47:28 np0005625203.localdomain podman[303471]: 2026-02-20 09:47:28.891195571 +0000 UTC m=+0.205274494 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, build-date=2026-02-05T04:57:10Z, org.opencontainers.image.created=2026-02-05T04:57:10Z, architecture=x86_64, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., release=1770267347, io.openshift.tags=minimal rhel9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, container_name=openstack_network_exporter, version=9.7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Feb 20 09:47:28 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:47:28 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:47:28 np0005625203.localdomain podman[240359]: time="2026-02-20T09:47:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:47:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:47:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154068 "" "Go-http-client/1.1"
Feb 20 09:47:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:47:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17779 "" "Go-http-client/1.1"
Feb 20 09:47:30 np0005625203.localdomain ceph-mon[296066]: pgmap v19: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:47:33 np0005625203.localdomain ceph-mon[296066]: pgmap v20: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:47:33 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:47:35 np0005625203.localdomain ceph-mon[296066]: pgmap v21: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:47:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:47:37 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 09:47:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:47:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:47:37 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 09:47:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:47:37 np0005625203.localdomain ceph-mon[296066]: pgmap v22: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:47:38 np0005625203.localdomain sshd[303511]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:47:38 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:47:39 np0005625203.localdomain ceph-mon[296066]: pgmap v23: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:47:39 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:47:39 np0005625203.localdomain podman[303513]: 2026-02-20 09:47:39.761331264 +0000 UTC m=+0.080195543 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 09:47:39 np0005625203.localdomain podman[303513]: 2026-02-20 09:47:39.770306852 +0000 UTC m=+0.089171121 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 20 09:47:39 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 09:47:39 np0005625203.localdomain sshd[303511]: Invalid user vps from 118.99.80.29 port 21376
Feb 20 09:47:40 np0005625203.localdomain sshd[303511]: Received disconnect from 118.99.80.29 port 21376:11: Bye Bye [preauth]
Feb 20 09:47:40 np0005625203.localdomain sshd[303511]: Disconnected from invalid user vps 118.99.80.29 port 21376 [preauth]
Feb 20 09:47:40 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:47:40 np0005625203.localdomain podman[303535]: 2026-02-20 09:47:40.765292893 +0000 UTC m=+0.081560585 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 20 09:47:40 np0005625203.localdomain podman[303535]: 2026-02-20 09:47:40.773676873 +0000 UTC m=+0.089944565 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 20 09:47:40 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:47:41 np0005625203.localdomain ceph-mon[296066]: pgmap v24: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:47:43 np0005625203.localdomain ceph-mon[296066]: pgmap v25: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:47:43 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:47:45 np0005625203.localdomain ceph-mon[296066]: pgmap v26: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:47:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:47:46.342 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:47:46 np0005625203.localdomain ceph-mon[296066]: pgmap v27: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:47:48 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:47:48.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:47:48 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:47:48.358 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:47:48 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:47:48.359 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:47:48 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:47:48.359 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:47:48 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:47:48.359 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Auditing locally available compute resources for np0005625203.localdomain (node: np0005625203.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 20 09:47:48 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:47:48.360 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:47:48 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 20 09:47:48 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/4031544902' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:47:48 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:47:48.817 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:47:48 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:47:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:47:49.025 279640 WARNING nova.virt.libvirt.driver [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 20 09:47:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:47:49.027 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Hypervisor/Node resource view: name=np0005625203.localdomain free_ram=12416MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 20 09:47:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:47:49.028 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:47:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:47:49.028 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:47:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:47:49.090 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 20 09:47:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:47:49.090 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Final resource view: name=np0005625203.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 20 09:47:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:47:49.114 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:47:49 np0005625203.localdomain ceph-mon[296066]: pgmap v28: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:47:49 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/4031544902' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:47:49 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/3781215475' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:47:49 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 20 09:47:49 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1476854404' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:47:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:47:49.604 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:47:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:47:49.611 279640 DEBUG nova.compute.provider_tree [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Inventory has not changed in ProviderTree for provider: e5d5157a-2df2-4f51-b5fb-cd2da3a8584e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 20 09:47:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:47:49.629 279640 DEBUG nova.scheduler.client.report [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Inventory has not changed for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 20 09:47:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:47:49.631 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Compute_service record updated for np0005625203.localdomain:np0005625203.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 20 09:47:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:47:49.631 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:47:49 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:47:49 np0005625203.localdomain sshd[303609]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:47:49 np0005625203.localdomain podman[303603]: 2026-02-20 09:47:49.756680697 +0000 UTC m=+0.077130828 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Feb 20 09:47:49 np0005625203.localdomain podman[303603]: 2026-02-20 09:47:49.786413927 +0000 UTC m=+0.106864028 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Feb 20 09:47:49 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:47:50 np0005625203.localdomain sshd[303609]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:47:50 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/1476854404' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:47:50 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/190951353' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:47:51 np0005625203.localdomain ceph-mon[296066]: pgmap v29: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:47:52 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/1447312241' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:47:53 np0005625203.localdomain ceph-mon[296066]: pgmap v30: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:47:53 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/3443497250' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:47:53 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:47:53.633 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:47:53 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:47:53.633 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:47:53 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:47:54 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:47:54.338 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:47:54 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:47:54 np0005625203.localdomain podman[303623]: 2026-02-20 09:47:54.767399172 +0000 UTC m=+0.084592079 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 20 09:47:54 np0005625203.localdomain podman[303623]: 2026-02-20 09:47:54.849767291 +0000 UTC m=+0.166960208 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260127, container_name=ovn_controller)
Feb 20 09:47:54 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:47:55 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:47:55.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:47:55 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:47:55.342 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:47:55 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:47:55.342 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:47:55 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:47:55.342 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 20 09:47:55 np0005625203.localdomain ceph-mon[296066]: pgmap v31: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:47:55 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.200:0/3650643353' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Feb 20 09:47:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:47:56.343 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:47:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:47:56.343 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 20 09:47:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:47:56.343 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 20 09:47:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:47:56.361 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 20 09:47:56 np0005625203.localdomain sshd[303649]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:47:56 np0005625203.localdomain sshd[303650]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:47:56 np0005625203.localdomain sshd[303649]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:47:57 np0005625203.localdomain sshd[303650]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:47:57 np0005625203.localdomain ceph-mon[296066]: pgmap v32: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:47:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:47:58.356 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:47:58 np0005625203.localdomain ceph-mon[296066]: pgmap v33: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:47:58 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:47:58 np0005625203.localdomain podman[240359]: time="2026-02-20T09:47:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:47:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:47:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154068 "" "Go-http-client/1.1"
Feb 20 09:47:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:47:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17793 "" "Go-http-client/1.1"
Feb 20 09:47:59 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:47:59 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:47:59 np0005625203.localdomain systemd[1]: tmp-crun.aZ9V4i.mount: Deactivated successfully.
Feb 20 09:47:59 np0005625203.localdomain podman[303653]: 2026-02-20 09:47:59.772979467 +0000 UTC m=+0.092299858 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Feb 20 09:47:59 np0005625203.localdomain podman[303653]: 2026-02-20 09:47:59.808252159 +0000 UTC m=+0.127572560 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_id=ceilometer_agent_compute)
Feb 20 09:47:59 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:47:59 np0005625203.localdomain podman[303654]: 2026-02-20 09:47:59.815453791 +0000 UTC m=+0.132271174 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, org.opencontainers.image.created=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, build-date=2026-02-05T04:57:10Z, managed_by=edpm_ansible, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9/ubi-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, config_id=openstack_network_exporter, vcs-type=git, release=1770267347, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.7)
Feb 20 09:47:59 np0005625203.localdomain podman[303654]: 2026-02-20 09:47:59.895723685 +0000 UTC m=+0.212541068 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, io.buildah.version=1.33.7, build-date=2026-02-05T04:57:10Z, maintainer=Red Hat, Inc., version=9.7, managed_by=edpm_ansible, container_name=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, config_id=openstack_network_exporter, distribution-scope=public, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9/ubi-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347, architecture=x86_64)
Feb 20 09:47:59 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:48:00 np0005625203.localdomain ceph-mon[296066]: pgmap v34: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:48:01 np0005625203.localdomain ceph-mon[296066]: from='client.44589 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 20 09:48:02 np0005625203.localdomain ceph-mon[296066]: pgmap v35: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:48:02 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/2727431459' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:48:02 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/2727431459' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:48:03 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:48:04 np0005625203.localdomain ceph-mon[296066]: pgmap v36: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:48:06 np0005625203.localdomain ceph-mon[296066]: pgmap v37: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:48:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:48:07 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 09:48:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:48:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:48:07 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 09:48:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:48:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:48:07.664 161112 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:48:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:48:07.664 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:48:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:48:07.665 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:48:08 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.200:0/2546430745' entity='client.admin' cmd={"prefix": "config dump", "format": "json"} : dispatch
Feb 20 09:48:08 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:48:09 np0005625203.localdomain ceph-mon[296066]: pgmap v38: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:48:10 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:48:10 np0005625203.localdomain podman[303692]: 2026-02-20 09:48:10.758545024 +0000 UTC m=+0.080156362 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 09:48:10 np0005625203.localdomain podman[303692]: 2026-02-20 09:48:10.797340454 +0000 UTC m=+0.118951812 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 09:48:10 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 09:48:10 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:48:10 np0005625203.localdomain podman[303714]: 2026-02-20 09:48:10.915066148 +0000 UTC m=+0.079009647 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 20 09:48:10 np0005625203.localdomain podman[303714]: 2026-02-20 09:48:10.922456896 +0000 UTC m=+0.086400425 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 20 09:48:10 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:48:11 np0005625203.localdomain ceph-mon[296066]: pgmap v39: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:48:13 np0005625203.localdomain ceph-mon[296066]: pgmap v40: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:48:13 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:48:14 np0005625203.localdomain ceph-mon[296066]: pgmap v41: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:48:14 np0005625203.localdomain ceph-mon[296066]: from='client.44610 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Feb 20 09:48:14 np0005625203.localdomain sshd[303737]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:48:15 np0005625203.localdomain sshd[303737]: Invalid user ubuntu from 5.253.59.68 port 38484
Feb 20 09:48:15 np0005625203.localdomain sshd[303737]: Received disconnect from 5.253.59.68 port 38484:11: Bye Bye [preauth]
Feb 20 09:48:15 np0005625203.localdomain sshd[303737]: Disconnected from invalid user ubuntu 5.253.59.68 port 38484 [preauth]
Feb 20 09:48:16 np0005625203.localdomain ceph-mon[296066]: pgmap v42: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:48:18 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:48:19 np0005625203.localdomain ceph-mon[296066]: pgmap v43: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:48:20 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 09:48:20 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.200:0/712476891' entity='client.admin' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:48:20 np0005625203.localdomain sudo[303739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:48:20 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:48:20 np0005625203.localdomain sudo[303739]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:48:20 np0005625203.localdomain sudo[303739]: pam_unix(sudo:session): session closed for user root
Feb 20 09:48:20 np0005625203.localdomain sudo[303762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 09:48:20 np0005625203.localdomain sudo[303762]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:48:20 np0005625203.localdomain podman[303756]: 2026-02-20 09:48:20.769039315 +0000 UTC m=+0.083905798 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent)
Feb 20 09:48:20 np0005625203.localdomain podman[303756]: 2026-02-20 09:48:20.802235922 +0000 UTC m=+0.117102415 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Feb 20 09:48:20 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:48:21 np0005625203.localdomain sudo[303762]: pam_unix(sudo:session): session closed for user root
Feb 20 09:48:21 np0005625203.localdomain ceph-mon[296066]: pgmap v44: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:48:21 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.200:0/712476891' entity='client.admin' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:48:21 np0005625203.localdomain sudo[303825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:48:21 np0005625203.localdomain sudo[303825]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:48:21 np0005625203.localdomain sudo[303825]: pam_unix(sudo:session): session closed for user root
Feb 20 09:48:22 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:48:22 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 09:48:22 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 ' entity='mgr.np0005625204.exgrzx' 
Feb 20 09:48:22 np0005625203.localdomain ceph-mon[296066]: from='mgr.27007 172.18.0.108:0/150842184' entity='mgr.np0005625204.exgrzx' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 09:48:23 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e91 e91: 6 total, 6 up, 6 in
Feb 20 09:48:23 np0005625203.localdomain sshd[301790]: pam_unix(sshd:session): session closed for user ceph-admin
Feb 20 09:48:23 np0005625203.localdomain systemd[1]: session-70.scope: Deactivated successfully.
Feb 20 09:48:23 np0005625203.localdomain systemd[1]: session-70.scope: Consumed 11.697s CPU time.
Feb 20 09:48:23 np0005625203.localdomain systemd-logind[759]: Session 70 logged out. Waiting for processes to exit.
Feb 20 09:48:23 np0005625203.localdomain systemd-logind[759]: Removed session 70.
Feb 20 09:48:23 np0005625203.localdomain ceph-mon[296066]: pgmap v45: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:48:23 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.200:0/2835510203' entity='client.admin' cmd={"prefix": "mgr fail"} : dispatch
Feb 20 09:48:23 np0005625203.localdomain ceph-mon[296066]: Activating manager daemon np0005625202.arwxwo
Feb 20 09:48:23 np0005625203.localdomain ceph-mon[296066]: osdmap e91: 6 total, 6 up, 6 in
Feb 20 09:48:23 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.200:0/2835510203' entity='client.admin' cmd='[{"prefix": "mgr fail"}]': finished
Feb 20 09:48:23 np0005625203.localdomain ceph-mon[296066]: mgrmap e43: np0005625202.arwxwo(active, starting, since 0.0325637s), standbys: np0005625203.lonygy
Feb 20 09:48:23 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625202"} : dispatch
Feb 20 09:48:23 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625203"} : dispatch
Feb 20 09:48:23 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata", "id": "np0005625204"} : dispatch
Feb 20 09:48:23 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mds metadata", "who": "mds.np0005625202.akhmop"} : dispatch
Feb 20 09:48:23 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mds metadata", "who": "mds.np0005625204.wnsphl"} : dispatch
Feb 20 09:48:23 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mds metadata", "who": "mds.np0005625203.zsrwgk"} : dispatch
Feb 20 09:48:23 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mgr metadata", "who": "np0005625202.arwxwo", "id": "np0005625202.arwxwo"} : dispatch
Feb 20 09:48:23 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mgr metadata", "who": "np0005625203.lonygy", "id": "np0005625203.lonygy"} : dispatch
Feb 20 09:48:23 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Feb 20 09:48:23 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Feb 20 09:48:23 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Feb 20 09:48:23 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd metadata", "id": 3} : dispatch
Feb 20 09:48:23 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd metadata", "id": 4} : dispatch
Feb 20 09:48:23 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd metadata", "id": 5} : dispatch
Feb 20 09:48:23 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mds metadata"} : dispatch
Feb 20 09:48:23 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd metadata"} : dispatch
Feb 20 09:48:23 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mon metadata"} : dispatch
Feb 20 09:48:23 np0005625203.localdomain ceph-mon[296066]: Manager daemon np0005625202.arwxwo is now available
Feb 20 09:48:23 np0005625203.localdomain sshd[303843]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:48:23 np0005625203.localdomain sshd[303843]: Accepted publickey for ceph-admin from 192.168.122.106 port 59108 ssh2: RSA SHA256:ReOVVWzTOjcz49zJan4sHrxgFDyL1hzGQqhA1t8e47Y
Feb 20 09:48:23 np0005625203.localdomain systemd-logind[759]: New session 71 of user ceph-admin.
Feb 20 09:48:23 np0005625203.localdomain systemd[1]: Started Session 71 of User ceph-admin.
Feb 20 09:48:23 np0005625203.localdomain sshd[303843]: pam_unix(sshd:session): session opened for user ceph-admin(uid=1002) by (uid=0)
Feb 20 09:48:23 np0005625203.localdomain sudo[303847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:48:23 np0005625203.localdomain sudo[303847]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:48:23 np0005625203.localdomain sudo[303847]: pam_unix(sudo:session): session closed for user root
Feb 20 09:48:23 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:48:23 np0005625203.localdomain sudo[303865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Feb 20 09:48:23 np0005625203.localdomain sudo[303865]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:48:24 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005625202.arwxwo/mirror_snapshot_schedule"} : dispatch
Feb 20 09:48:24 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/np0005625202.arwxwo/trash_purge_schedule"} : dispatch
Feb 20 09:48:24 np0005625203.localdomain ceph-mon[296066]: mgrmap e44: np0005625202.arwxwo(active, since 1.04816s), standbys: np0005625203.lonygy
Feb 20 09:48:24 np0005625203.localdomain podman[303956]: 2026-02-20 09:48:24.714609688 +0000 UTC m=+0.088489280 container exec f6bf94ad66e54cdd610286b233c639418eb2b995978f1a145d6cfa77a016071d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203, GIT_REPO=https://github.com/ceph/ceph-container.git, build-date=2026-02-09T10:25:24Z, distribution-scope=public, release=1770267347, io.buildah.version=1.42.2, description=Red Hat Ceph Storage 7, com.redhat.component=rhceph-container, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, architecture=x86_64, version=7, GIT_CLEAN=True, io.k8s.description=Red Hat Ceph Storage 7, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, RELEASE=main, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, ceph=True, org.opencontainers.image.created=2026-02-09T10:25:24Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=rhceph ceph, name=rhceph, GIT_BRANCH=main)
Feb 20 09:48:24 np0005625203.localdomain podman[303956]: 2026-02-20 09:48:24.823376133 +0000 UTC m=+0.197255715 container exec_died f6bf94ad66e54cdd610286b233c639418eb2b995978f1a145d6cfa77a016071d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203, com.redhat.component=rhceph-container, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, io.openshift.expose-services=, GIT_BRANCH=main, description=Red Hat Ceph Storage 7, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., CEPH_POINT_RELEASE=, architecture=x86_64, build-date=2026-02-09T10:25:24Z, version=7, io.k8s.description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, release=1770267347, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.42.2, ceph=True, GIT_CLEAN=True, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_REPO=https://github.com/ceph/ceph-container.git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, org.opencontainers.image.created=2026-02-09T10:25:24Z, vcs-type=git, distribution-scope=public, name=rhceph, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 09:48:24 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:48:24 np0005625203.localdomain podman[304002]: 2026-02-20 09:48:24.997032967 +0000 UTC m=+0.094617808 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20260127, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Feb 20 09:48:25 np0005625203.localdomain podman[304002]: 2026-02-20 09:48:25.077288951 +0000 UTC m=+0.174873792 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Feb 20 09:48:25 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:48:25 np0005625203.localdomain sudo[303865]: pam_unix(sudo:session): session closed for user root
Feb 20 09:48:25 np0005625203.localdomain ceph-mon[296066]: pgmap v3: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:48:26 np0005625203.localdomain sudo[304099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:48:26 np0005625203.localdomain sudo[304099]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:48:26 np0005625203.localdomain sudo[304099]: pam_unix(sudo:session): session closed for user root
Feb 20 09:48:26 np0005625203.localdomain sudo[304117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 09:48:26 np0005625203.localdomain sudo[304117]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:48:26 np0005625203.localdomain sudo[304117]: pam_unix(sudo:session): session closed for user root
Feb 20 09:48:26 np0005625203.localdomain ceph-mon[296066]: [20/Feb/2026:09:48:24] ENGINE Bus STARTING
Feb 20 09:48:26 np0005625203.localdomain ceph-mon[296066]: [20/Feb/2026:09:48:25] ENGINE Serving on http://172.18.0.106:8765
Feb 20 09:48:26 np0005625203.localdomain ceph-mon[296066]: [20/Feb/2026:09:48:25] ENGINE Serving on https://172.18.0.106:7150
Feb 20 09:48:26 np0005625203.localdomain ceph-mon[296066]: [20/Feb/2026:09:48:25] ENGINE Client ('172.18.0.106', 54518) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Feb 20 09:48:26 np0005625203.localdomain ceph-mon[296066]: [20/Feb/2026:09:48:25] ENGINE Bus STARTED
Feb 20 09:48:26 np0005625203.localdomain ceph-mon[296066]: Health check cleared: CEPHADM_STRAY_DAEMON (was: 1 stray daemon(s) not managed by cephadm)
Feb 20 09:48:26 np0005625203.localdomain ceph-mon[296066]: Health check cleared: CEPHADM_STRAY_HOST (was: 1 stray host(s) with 1 daemon(s) not managed by cephadm)
Feb 20 09:48:26 np0005625203.localdomain ceph-mon[296066]: Cluster is now healthy
Feb 20 09:48:26 np0005625203.localdomain ceph-mon[296066]: pgmap v4: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:48:26 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:48:26 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:48:26 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:48:26 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:48:26 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:48:26 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:48:26 np0005625203.localdomain sudo[304166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:48:26 np0005625203.localdomain sudo[304166]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:48:26 np0005625203.localdomain sudo[304166]: pam_unix(sudo:session): session closed for user root
Feb 20 09:48:27 np0005625203.localdomain sudo[304184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 list-networks
Feb 20 09:48:27 np0005625203.localdomain sudo[304184]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:48:27 np0005625203.localdomain sudo[304184]: pam_unix(sudo:session): session closed for user root
Feb 20 09:48:27 np0005625203.localdomain sudo[304222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Feb 20 09:48:27 np0005625203.localdomain sudo[304222]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:48:27 np0005625203.localdomain sudo[304222]: pam_unix(sudo:session): session closed for user root
Feb 20 09:48:27 np0005625203.localdomain sudo[304240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph
Feb 20 09:48:27 np0005625203.localdomain sudo[304240]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:48:27 np0005625203.localdomain sudo[304240]: pam_unix(sudo:session): session closed for user root
Feb 20 09:48:27 np0005625203.localdomain sudo[304258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:48:27 np0005625203.localdomain sudo[304258]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:48:27 np0005625203.localdomain sudo[304258]: pam_unix(sudo:session): session closed for user root
Feb 20 09:48:27 np0005625203.localdomain sudo[304276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:48:27 np0005625203.localdomain sudo[304276]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:48:27 np0005625203.localdomain sudo[304276]: pam_unix(sudo:session): session closed for user root
Feb 20 09:48:28 np0005625203.localdomain sudo[304294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:48:28 np0005625203.localdomain sudo[304294]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:48:28 np0005625203.localdomain sudo[304294]: pam_unix(sudo:session): session closed for user root
Feb 20 09:48:28 np0005625203.localdomain sudo[304328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:48:28 np0005625203.localdomain sudo[304328]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:48:28 np0005625203.localdomain sudo[304328]: pam_unix(sudo:session): session closed for user root
Feb 20 09:48:28 np0005625203.localdomain sudo[304346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new
Feb 20 09:48:28 np0005625203.localdomain sudo[304346]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:48:28 np0005625203.localdomain sudo[304346]: pam_unix(sudo:session): session closed for user root
Feb 20 09:48:28 np0005625203.localdomain sudo[304364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Feb 20 09:48:28 np0005625203.localdomain sudo[304364]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:48:28 np0005625203.localdomain sudo[304364]: pam_unix(sudo:session): session closed for user root
Feb 20 09:48:28 np0005625203.localdomain sudo[304382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config
Feb 20 09:48:28 np0005625203.localdomain sudo[304382]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:48:28 np0005625203.localdomain sudo[304382]: pam_unix(sudo:session): session closed for user root
Feb 20 09:48:28 np0005625203.localdomain sudo[304400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config
Feb 20 09:48:28 np0005625203.localdomain sudo[304400]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:48:28 np0005625203.localdomain sudo[304400]: pam_unix(sudo:session): session closed for user root
Feb 20 09:48:28 np0005625203.localdomain ceph-mon[296066]: pgmap v5: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:48:28 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:48:28 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:48:28 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Feb 20 09:48:28 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Feb 20 09:48:28 np0005625203.localdomain ceph-mon[296066]: Adjusting osd_memory_target on np0005625204.localdomain to 836.6M
Feb 20 09:48:28 np0005625203.localdomain ceph-mon[296066]: Unable to set osd_memory_target on np0005625204.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Feb 20 09:48:28 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:48:28 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:48:28 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:48:28 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Feb 20 09:48:28 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:48:28 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Feb 20 09:48:28 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Feb 20 09:48:28 np0005625203.localdomain ceph-mon[296066]: Adjusting osd_memory_target on np0005625202.localdomain to 836.6M
Feb 20 09:48:28 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Feb 20 09:48:28 np0005625203.localdomain ceph-mon[296066]: Unable to set osd_memory_target on np0005625202.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Feb 20 09:48:28 np0005625203.localdomain ceph-mon[296066]: Adjusting osd_memory_target on np0005625203.localdomain to 836.6M
Feb 20 09:48:28 np0005625203.localdomain ceph-mon[296066]: Unable to set osd_memory_target on np0005625203.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Feb 20 09:48:28 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:48:28 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 09:48:28 np0005625203.localdomain ceph-mon[296066]: Updating np0005625202.localdomain:/etc/ceph/ceph.conf
Feb 20 09:48:28 np0005625203.localdomain ceph-mon[296066]: Updating np0005625203.localdomain:/etc/ceph/ceph.conf
Feb 20 09:48:28 np0005625203.localdomain ceph-mon[296066]: Updating np0005625204.localdomain:/etc/ceph/ceph.conf
Feb 20 09:48:28 np0005625203.localdomain sudo[304418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:48:28 np0005625203.localdomain sudo[304418]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:48:28 np0005625203.localdomain sudo[304418]: pam_unix(sudo:session): session closed for user root
Feb 20 09:48:28 np0005625203.localdomain sudo[304436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:48:28 np0005625203.localdomain sudo[304436]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:48:28 np0005625203.localdomain sudo[304436]: pam_unix(sudo:session): session closed for user root
Feb 20 09:48:28 np0005625203.localdomain sudo[304454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:48:28 np0005625203.localdomain sudo[304454]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:48:28 np0005625203.localdomain sudo[304454]: pam_unix(sudo:session): session closed for user root
Feb 20 09:48:28 np0005625203.localdomain sudo[304488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:48:28 np0005625203.localdomain sudo[304488]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:48:28 np0005625203.localdomain sudo[304488]: pam_unix(sudo:session): session closed for user root
Feb 20 09:48:28 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:48:28 np0005625203.localdomain sudo[304506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new
Feb 20 09:48:28 np0005625203.localdomain sudo[304506]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:48:28 np0005625203.localdomain sudo[304506]: pam_unix(sudo:session): session closed for user root
Feb 20 09:48:28 np0005625203.localdomain podman[240359]: time="2026-02-20T09:48:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:48:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:48:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154068 "" "Go-http-client/1.1"
Feb 20 09:48:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:48:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17800 "" "Go-http-client/1.1"
Feb 20 09:48:29 np0005625203.localdomain sudo[304524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf.new /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:48:29 np0005625203.localdomain sudo[304524]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:48:29 np0005625203.localdomain sudo[304524]: pam_unix(sudo:session): session closed for user root
Feb 20 09:48:29 np0005625203.localdomain sudo[304542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Feb 20 09:48:29 np0005625203.localdomain sudo[304542]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:48:29 np0005625203.localdomain sudo[304542]: pam_unix(sudo:session): session closed for user root
Feb 20 09:48:29 np0005625203.localdomain sudo[304560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph
Feb 20 09:48:29 np0005625203.localdomain sudo[304560]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:48:29 np0005625203.localdomain sudo[304560]: pam_unix(sudo:session): session closed for user root
Feb 20 09:48:29 np0005625203.localdomain sudo[304578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.client.admin.keyring.new
Feb 20 09:48:29 np0005625203.localdomain sudo[304578]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:48:29 np0005625203.localdomain sudo[304578]: pam_unix(sudo:session): session closed for user root
Feb 20 09:48:29 np0005625203.localdomain sudo[304596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:48:29 np0005625203.localdomain sudo[304596]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:48:29 np0005625203.localdomain sudo[304596]: pam_unix(sudo:session): session closed for user root
Feb 20 09:48:29 np0005625203.localdomain sudo[304614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.client.admin.keyring.new
Feb 20 09:48:29 np0005625203.localdomain sudo[304614]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:48:29 np0005625203.localdomain sudo[304614]: pam_unix(sudo:session): session closed for user root
Feb 20 09:48:29 np0005625203.localdomain ceph-mon[296066]: Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:48:29 np0005625203.localdomain ceph-mon[296066]: Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:48:29 np0005625203.localdomain ceph-mon[296066]: Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.conf
Feb 20 09:48:29 np0005625203.localdomain ceph-mon[296066]: Standby manager daemon np0005625204.exgrzx started
Feb 20 09:48:29 np0005625203.localdomain ceph-mon[296066]: mgrmap e45: np0005625202.arwxwo(active, since 5s), standbys: np0005625203.lonygy, np0005625204.exgrzx
Feb 20 09:48:29 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "mgr metadata", "who": "np0005625204.exgrzx", "id": "np0005625204.exgrzx"} : dispatch
Feb 20 09:48:29 np0005625203.localdomain ceph-mon[296066]: Updating np0005625202.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 09:48:29 np0005625203.localdomain sudo[304648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.client.admin.keyring.new
Feb 20 09:48:29 np0005625203.localdomain sudo[304648]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:48:29 np0005625203.localdomain sudo[304648]: pam_unix(sudo:session): session closed for user root
Feb 20 09:48:29 np0005625203.localdomain sudo[304666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.client.admin.keyring.new
Feb 20 09:48:29 np0005625203.localdomain sudo[304666]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:48:29 np0005625203.localdomain sudo[304666]: pam_unix(sudo:session): session closed for user root
Feb 20 09:48:29 np0005625203.localdomain sudo[304684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Feb 20 09:48:29 np0005625203.localdomain sudo[304684]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:48:29 np0005625203.localdomain sudo[304684]: pam_unix(sudo:session): session closed for user root
Feb 20 09:48:29 np0005625203.localdomain sudo[304702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config
Feb 20 09:48:29 np0005625203.localdomain sudo[304702]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:48:29 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:48:29 np0005625203.localdomain sudo[304702]: pam_unix(sudo:session): session closed for user root
Feb 20 09:48:29 np0005625203.localdomain sudo[304721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config
Feb 20 09:48:29 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:48:29 np0005625203.localdomain sudo[304721]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:48:29 np0005625203.localdomain sudo[304721]: pam_unix(sudo:session): session closed for user root
Feb 20 09:48:29 np0005625203.localdomain podman[304720]: 2026-02-20 09:48:29.976251887 +0000 UTC m=+0.092717990 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 20 09:48:29 np0005625203.localdomain podman[304720]: 2026-02-20 09:48:29.99120533 +0000 UTC m=+0.107671423 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 20 09:48:30 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:48:30 np0005625203.localdomain sudo[304763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring.new
Feb 20 09:48:30 np0005625203.localdomain sudo[304763]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:48:30 np0005625203.localdomain sudo[304763]: pam_unix(sudo:session): session closed for user root
Feb 20 09:48:30 np0005625203.localdomain podman[304753]: 2026-02-20 09:48:30.075260331 +0000 UTC m=+0.097998404 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.buildah.version=1.33.7, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2026-02-05T04:57:10Z, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, name=ubi9/ubi-minimal, vcs-type=git, release=1770267347, distribution-scope=public, architecture=x86_64)
Feb 20 09:48:30 np0005625203.localdomain podman[304753]: 2026-02-20 09:48:30.09040706 +0000 UTC m=+0.113145153 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, distribution-scope=public, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2026-02-05T04:57:10Z, org.opencontainers.image.created=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, version=9.7, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, io.buildah.version=1.33.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Feb 20 09:48:30 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:48:30 np0005625203.localdomain sudo[304792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8
Feb 20 09:48:30 np0005625203.localdomain sudo[304792]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:48:30 np0005625203.localdomain sudo[304792]: pam_unix(sudo:session): session closed for user root
Feb 20 09:48:30 np0005625203.localdomain sudo[304812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring.new
Feb 20 09:48:30 np0005625203.localdomain sudo[304812]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:48:30 np0005625203.localdomain sudo[304812]: pam_unix(sudo:session): session closed for user root
Feb 20 09:48:30 np0005625203.localdomain sudo[304846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring.new
Feb 20 09:48:30 np0005625203.localdomain sudo[304846]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:48:30 np0005625203.localdomain sudo[304846]: pam_unix(sudo:session): session closed for user root
Feb 20 09:48:30 np0005625203.localdomain sudo[304864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring.new
Feb 20 09:48:30 np0005625203.localdomain sudo[304864]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:48:30 np0005625203.localdomain sudo[304864]: pam_unix(sudo:session): session closed for user root
Feb 20 09:48:30 np0005625203.localdomain sudo[304882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-a8557ee9-b55d-5519-942c-cf8f6172f1d8/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring.new /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 09:48:30 np0005625203.localdomain sudo[304882]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:48:30 np0005625203.localdomain sudo[304882]: pam_unix(sudo:session): session closed for user root
Feb 20 09:48:30 np0005625203.localdomain ceph-mon[296066]: Updating np0005625203.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 09:48:30 np0005625203.localdomain ceph-mon[296066]: Updating np0005625204.localdomain:/etc/ceph/ceph.client.admin.keyring
Feb 20 09:48:30 np0005625203.localdomain ceph-mon[296066]: pgmap v6: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:48:30 np0005625203.localdomain ceph-mon[296066]: Updating np0005625202.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 09:48:30 np0005625203.localdomain ceph-mon[296066]: Updating np0005625203.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 09:48:30 np0005625203.localdomain ceph-mon[296066]: Updating np0005625204.localdomain:/var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/config/ceph.client.admin.keyring
Feb 20 09:48:30 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:48:30 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:48:30 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:48:30 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:48:30 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:48:30 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:48:30 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:48:30 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 09:48:30 np0005625203.localdomain sudo[304900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:48:30 np0005625203.localdomain sudo[304900]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:48:30 np0005625203.localdomain sudo[304900]: pam_unix(sudo:session): session closed for user root
Feb 20 09:48:31 np0005625203.localdomain sudo[304918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:48:31 np0005625203.localdomain sudo[304918]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:48:31 np0005625203.localdomain sudo[304918]: pam_unix(sudo:session): session closed for user root
Feb 20 09:48:31 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:48:31 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 09:48:31 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:48:31 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 09:48:31 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #25. Immutable memtables: 0.
Feb 20 09:48:31 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:48:31.931721) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 20 09:48:31 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 25
Feb 20 09:48:31 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580911931766, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 2719, "num_deletes": 256, "total_data_size": 8670897, "memory_usage": 8956584, "flush_reason": "Manual Compaction"}
Feb 20 09:48:31 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #26: started
Feb 20 09:48:31 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580911954423, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 26, "file_size": 5220534, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15131, "largest_seqno": 17845, "table_properties": {"data_size": 5209498, "index_size": 6901, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3077, "raw_key_size": 27059, "raw_average_key_size": 22, "raw_value_size": 5185912, "raw_average_value_size": 4264, "num_data_blocks": 299, "num_entries": 1216, "num_filter_entries": 1216, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580813, "oldest_key_time": 1771580813, "file_creation_time": 1771580911, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d7091244-ec7b-4a4c-98bf-27480b1bb7f4", "db_session_id": "XSHOJ401GNN3F43CMPC3", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Feb 20 09:48:31 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 22854 microseconds, and 10854 cpu microseconds.
Feb 20 09:48:31 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 20 09:48:31 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:48:31.954554) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #26: 5220534 bytes OK
Feb 20 09:48:31 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:48:31.954601) [db/memtable_list.cc:519] [default] Level-0 commit table #26 started
Feb 20 09:48:31 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:48:31.956331) [db/memtable_list.cc:722] [default] Level-0 commit table #26: memtable #1 done
Feb 20 09:48:31 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:48:31.956359) EVENT_LOG_v1 {"time_micros": 1771580911956352, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 20 09:48:31 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:48:31.956393) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 20 09:48:31 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 8657884, prev total WAL file size 8657884, number of live WAL files 2.
Feb 20 09:48:31 np0005625203.localdomain ceph-mon[296066]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000022.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 09:48:31 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:48:31.958557) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131303434' seq:72057594037927935, type:22 .. '7061786F73003131323936' seq:0, type:0; will stop at (end)
Feb 20 09:48:31 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 20 09:48:31 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [26(5098KB)], [24(15MB)]
Feb 20 09:48:31 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580911958634, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [26], "files_L6": [24], "score": -1, "input_data_size": 21589668, "oldest_snapshot_seqno": -1}
Feb 20 09:48:32 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #27: 11900 keys, 18468293 bytes, temperature: kUnknown
Feb 20 09:48:32 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580912043400, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 27, "file_size": 18468293, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18399561, "index_size": 37911, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29765, "raw_key_size": 319025, "raw_average_key_size": 26, "raw_value_size": 18196028, "raw_average_value_size": 1529, "num_data_blocks": 1448, "num_entries": 11900, "num_filter_entries": 11900, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580698, "oldest_key_time": 0, "file_creation_time": 1771580911, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d7091244-ec7b-4a4c-98bf-27480b1bb7f4", "db_session_id": "XSHOJ401GNN3F43CMPC3", "orig_file_number": 27, "seqno_to_time_mapping": "N/A"}}
Feb 20 09:48:32 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 20 09:48:32 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:48:32.043764) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 18468293 bytes
Feb 20 09:48:32 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:48:32.045635) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 254.4 rd, 217.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(5.0, 15.6 +0.0 blob) out(17.6 +0.0 blob), read-write-amplify(7.7) write-amplify(3.5) OK, records in: 12440, records dropped: 540 output_compression: NoCompression
Feb 20 09:48:32 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:48:32.045669) EVENT_LOG_v1 {"time_micros": 1771580912045653, "job": 12, "event": "compaction_finished", "compaction_time_micros": 84851, "compaction_time_cpu_micros": 49326, "output_level": 6, "num_output_files": 1, "total_output_size": 18468293, "num_input_records": 12440, "num_output_records": 11900, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 20 09:48:32 np0005625203.localdomain ceph-mon[296066]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 09:48:32 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580912046695, "job": 12, "event": "table_file_deletion", "file_number": 26}
Feb 20 09:48:32 np0005625203.localdomain ceph-mon[296066]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000024.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 09:48:32 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771580912049897, "job": 12, "event": "table_file_deletion", "file_number": 24}
Feb 20 09:48:32 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:48:31.958433) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:48:32 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:48:32.050031) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:48:32 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:48:32.050041) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:48:32 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:48:32.050044) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:48:32 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:48:32.050047) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:48:32 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:48:32.050050) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:48:32 np0005625203.localdomain ceph-mon[296066]: pgmap v7: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail; 30 KiB/s rd, 0 B/s wr, 16 op/s
Feb 20 09:48:33 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:48:34 np0005625203.localdomain ceph-mon[296066]: pgmap v8: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail; 23 KiB/s rd, 0 B/s wr, 12 op/s
Feb 20 09:48:34 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:48:36 np0005625203.localdomain ceph-mon[296066]: pgmap v9: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 0 B/s wr, 10 op/s
Feb 20 09:48:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:48:37 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 09:48:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:48:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:48:37 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 09:48:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:48:38 np0005625203.localdomain ceph-mon[296066]: pgmap v10: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s
Feb 20 09:48:38 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:48:39 np0005625203.localdomain sshd[304936]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:48:40 np0005625203.localdomain sshd[304938]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:48:40 np0005625203.localdomain sshd[304936]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:48:40 np0005625203.localdomain sshd[304940]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:48:40 np0005625203.localdomain ceph-mon[296066]: pgmap v11: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s
Feb 20 09:48:40 np0005625203.localdomain sshd[304940]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:48:40 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:48:40 np0005625203.localdomain systemd[290999]: Created slice User Background Tasks Slice.
Feb 20 09:48:40 np0005625203.localdomain podman[304942]: 2026-02-20 09:48:40.973766023 +0000 UTC m=+0.095384124 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 20 09:48:40 np0005625203.localdomain systemd[290999]: Starting Cleanup of User's Temporary Files and Directories...
Feb 20 09:48:40 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:48:41 np0005625203.localdomain systemd[290999]: Finished Cleanup of User's Temporary Files and Directories.
Feb 20 09:48:41 np0005625203.localdomain sshd[304938]: Invalid user admin from 185.196.11.208 port 39848
Feb 20 09:48:41 np0005625203.localdomain podman[304942]: 2026-02-20 09:48:41.010288043 +0000 UTC m=+0.131906104 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 09:48:41 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 09:48:41 np0005625203.localdomain systemd[1]: tmp-crun.07N2iC.mount: Deactivated successfully.
Feb 20 09:48:41 np0005625203.localdomain podman[304963]: 2026-02-20 09:48:41.088158732 +0000 UTC m=+0.092839204 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 20 09:48:41 np0005625203.localdomain sshd[304938]: Received disconnect from 185.196.11.208 port 39848:11: Bye Bye [preauth]
Feb 20 09:48:41 np0005625203.localdomain sshd[304938]: Disconnected from invalid user admin 185.196.11.208 port 39848 [preauth]
Feb 20 09:48:41 np0005625203.localdomain podman[304963]: 2026-02-20 09:48:41.121004429 +0000 UTC m=+0.125684901 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 20 09:48:41 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:48:42 np0005625203.localdomain ceph-mon[296066]: pgmap v12: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 0 B/s wr, 9 op/s
Feb 20 09:48:43 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:48:45 np0005625203.localdomain ceph-mon[296066]: pgmap v13: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:48:46 np0005625203.localdomain ceph-mon[296066]: pgmap v14: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:48:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:48:46.342 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:48:46 np0005625203.localdomain sshd[304991]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:48:48 np0005625203.localdomain sshd[304991]: Invalid user marcio from 103.48.192.48 port 27989
Feb 20 09:48:48 np0005625203.localdomain ceph-mon[296066]: pgmap v15: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:48:48 np0005625203.localdomain sshd[304991]: Received disconnect from 103.48.192.48 port 27989:11: Bye Bye [preauth]
Feb 20 09:48:48 np0005625203.localdomain sshd[304991]: Disconnected from invalid user marcio 103.48.192.48 port 27989 [preauth]
Feb 20 09:48:48 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:48:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:48:49.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:48:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:48:49.579 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:48:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:48:49.580 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:48:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:48:49.580 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:48:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:48:49.580 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Auditing locally available compute resources for np0005625203.localdomain (node: np0005625203.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 20 09:48:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:48:49.581 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:48:50 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 20 09:48:50 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/786824728' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:48:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:48:50.039 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:48:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:48:50.241 279640 WARNING nova.virt.libvirt.driver [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 20 09:48:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:48:50.243 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Hypervisor/Node resource view: name=np0005625203.localdomain free_ram=12430MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 20 09:48:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:48:50.243 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:48:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:48:50.244 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:48:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:48:50.320 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 20 09:48:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:48:50.321 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Final resource view: name=np0005625203.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 20 09:48:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:48:50.336 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:48:50 np0005625203.localdomain ceph-mon[296066]: pgmap v16: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:48:50 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/786824728' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:48:50 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/918867819' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:48:50 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 20 09:48:50 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/49463867' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:48:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:48:50.817 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:48:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:48:50.825 279640 DEBUG nova.compute.provider_tree [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Inventory has not changed in ProviderTree for provider: e5d5157a-2df2-4f51-b5fb-cd2da3a8584e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 20 09:48:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:48:50.882 279640 DEBUG nova.scheduler.client.report [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Inventory has not changed for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 20 09:48:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:48:50.884 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Compute_service record updated for np0005625203.localdomain:np0005625203.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 20 09:48:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:48:50.885 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:48:51 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/49463867' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:48:51 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/825169528' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:48:51 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:48:51 np0005625203.localdomain podman[305037]: 2026-02-20 09:48:51.767776272 +0000 UTC m=+0.082217116 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:48:51 np0005625203.localdomain podman[305037]: 2026-02-20 09:48:51.798067829 +0000 UTC m=+0.112508663 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:48:51 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:48:52 np0005625203.localdomain ceph-mon[296066]: pgmap v17: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:48:53 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/3903298265' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:48:53 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:48:54 np0005625203.localdomain sshd[305055]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:48:54 np0005625203.localdomain ceph-mon[296066]: pgmap v18: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:48:54 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/1696424530' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:48:54 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:48:54.885 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:48:54 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:48:54.886 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:48:55 np0005625203.localdomain sshd[305055]: Invalid user n8n from 152.32.129.236 port 44410
Feb 20 09:48:55 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:48:55 np0005625203.localdomain podman[305057]: 2026-02-20 09:48:55.288430484 +0000 UTC m=+0.085896909 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Feb 20 09:48:55 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:48:55.338 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:48:55 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:48:55.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:48:55 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:48:55.341 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 20 09:48:55 np0005625203.localdomain podman[305057]: 2026-02-20 09:48:55.360271437 +0000 UTC m=+0.157737852 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 20 09:48:55 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:48:55 np0005625203.localdomain sshd[305055]: Received disconnect from 152.32.129.236 port 44410:11: Bye Bye [preauth]
Feb 20 09:48:55 np0005625203.localdomain sshd[305055]: Disconnected from invalid user n8n 152.32.129.236 port 44410 [preauth]
Feb 20 09:48:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:48:56.342 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:48:56 np0005625203.localdomain ceph-mon[296066]: pgmap v19: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:48:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:48:57.342 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:48:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:48:58.342 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:48:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:48:58.343 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 20 09:48:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:48:58.343 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 20 09:48:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:48:58.357 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 20 09:48:58 np0005625203.localdomain ceph-mon[296066]: pgmap v20: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:48:58 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:48:58 np0005625203.localdomain podman[240359]: time="2026-02-20T09:48:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:48:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:48:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154068 "" "Go-http-client/1.1"
Feb 20 09:48:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:48:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17792 "" "Go-http-client/1.1"
Feb 20 09:49:00 np0005625203.localdomain ceph-mon[296066]: pgmap v21: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:49:00 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:49:00 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:49:00 np0005625203.localdomain podman[305081]: 2026-02-20 09:49:00.7719516 +0000 UTC m=+0.088710397 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_compute)
Feb 20 09:49:00 np0005625203.localdomain podman[305081]: 2026-02-20 09:49:00.784194208 +0000 UTC m=+0.100953025 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3)
Feb 20 09:49:00 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:49:00 np0005625203.localdomain podman[305082]: 2026-02-20 09:49:00.873061449 +0000 UTC m=+0.185756429 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, name=ubi9/ubi-minimal, config_id=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, architecture=x86_64, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2026-02-05T04:57:10Z, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.33.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-05T04:57:10Z, container_name=openstack_network_exporter)
Feb 20 09:49:00 np0005625203.localdomain podman[305082]: 2026-02-20 09:49:00.891464028 +0000 UTC m=+0.204159048 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, container_name=openstack_network_exporter, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, org.opencontainers.image.created=2026-02-05T04:57:10Z, name=ubi9/ubi-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.7, config_id=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, managed_by=edpm_ansible, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c)
Feb 20 09:49:00 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:49:02 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 20 09:49:02 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3831557669' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:49:02 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 20 09:49:02 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3831557669' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:49:02 np0005625203.localdomain ceph-mon[296066]: pgmap v22: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:49:02 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/3831557669' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:49:02 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/3831557669' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:49:03 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:49:04 np0005625203.localdomain ceph-mon[296066]: pgmap v23: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:49:06 np0005625203.localdomain ceph-mon[296066]: pgmap v24: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:49:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:49:07 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 09:49:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:49:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:49:07 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 09:49:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:49:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:49:07.665 161112 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:49:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:49:07.666 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:49:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:49:07.667 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:49:08 np0005625203.localdomain ceph-mon[296066]: pgmap v25: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:49:08 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:49:10 np0005625203.localdomain ceph-mon[296066]: pgmap v26: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:49:11 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:49:11 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:49:11 np0005625203.localdomain podman[305121]: 2026-02-20 09:49:11.77292822 +0000 UTC m=+0.087104196 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 20 09:49:11 np0005625203.localdomain podman[305121]: 2026-02-20 09:49:11.805397104 +0000 UTC m=+0.119573100 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 20 09:49:11 np0005625203.localdomain systemd[1]: tmp-crun.XEHYCy.mount: Deactivated successfully.
Feb 20 09:49:11 np0005625203.localdomain podman[305122]: 2026-02-20 09:49:11.822118972 +0000 UTC m=+0.133694088 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 20 09:49:11 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 09:49:11 np0005625203.localdomain podman[305122]: 2026-02-20 09:49:11.833317418 +0000 UTC m=+0.144892564 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 20 09:49:11 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:49:12 np0005625203.localdomain ceph-mon[296066]: pgmap v27: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:49:13 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:49:14 np0005625203.localdomain ceph-mon[296066]: pgmap v28: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:49:16 np0005625203.localdomain ceph-mon[296066]: pgmap v29: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:49:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:49:17.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:49:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:49:17.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:49:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:49:17.190 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:49:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:49:17.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:49:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:49:17.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:49:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:49:17.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:49:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:49:17.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:49:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:49:17.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:49:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:49:17.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:49:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:49:17.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:49:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:49:17.191 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:49:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:49:17.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:49:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:49:17.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:49:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:49:17.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:49:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:49:17.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:49:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:49:17.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:49:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:49:17.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:49:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:49:17.192 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:49:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:49:17.193 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:49:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:49:17.193 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:49:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:49:17.193 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:49:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:49:17.193 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:49:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:49:17.193 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:49:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:49:17.193 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:49:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:49:17.193 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:49:18 np0005625203.localdomain ceph-mon[296066]: pgmap v30: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:49:18 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:49:20 np0005625203.localdomain sshd[305167]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:49:20 np0005625203.localdomain ceph-mon[296066]: pgmap v31: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:49:22 np0005625203.localdomain sshd[305167]: Invalid user sshuser from 103.61.123.132 port 52622
Feb 20 09:49:22 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:49:22 np0005625203.localdomain podman[305169]: 2026-02-20 09:49:22.161215644 +0000 UTC m=+0.079263153 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:49:22 np0005625203.localdomain podman[305169]: 2026-02-20 09:49:22.195362301 +0000 UTC m=+0.113409790 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 20 09:49:22 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:49:22 np0005625203.localdomain sshd[305167]: Received disconnect from 103.61.123.132 port 52622:11: Bye Bye [preauth]
Feb 20 09:49:22 np0005625203.localdomain sshd[305167]: Disconnected from invalid user sshuser 103.61.123.132 port 52622 [preauth]
Feb 20 09:49:22 np0005625203.localdomain ceph-mon[296066]: pgmap v32: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:49:23 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:49:25 np0005625203.localdomain sshd[305187]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:49:25 np0005625203.localdomain sshd[305188]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:49:25 np0005625203.localdomain sshd[305188]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:49:25 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:49:25 np0005625203.localdomain podman[305191]: 2026-02-20 09:49:25.724705021 +0000 UTC m=+0.082505144 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 20 09:49:25 np0005625203.localdomain podman[305191]: 2026-02-20 09:49:25.789385083 +0000 UTC m=+0.147185196 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Feb 20 09:49:25 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:49:25 np0005625203.localdomain sshd[305187]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:49:26 np0005625203.localdomain ceph-mon[296066]: pgmap v33: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:49:27 np0005625203.localdomain ceph-mon[296066]: pgmap v34: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:49:28 np0005625203.localdomain ceph-mon[296066]: pgmap v35: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:49:28 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:49:28 np0005625203.localdomain podman[240359]: time="2026-02-20T09:49:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:49:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:49:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154068 "" "Go-http-client/1.1"
Feb 20 09:49:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:49:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17794 "" "Go-http-client/1.1"
Feb 20 09:49:30 np0005625203.localdomain ceph-mon[296066]: pgmap v36: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:49:31 np0005625203.localdomain sudo[305215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:49:31 np0005625203.localdomain sudo[305215]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:49:31 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:49:31 np0005625203.localdomain sudo[305215]: pam_unix(sudo:session): session closed for user root
Feb 20 09:49:31 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:49:31 np0005625203.localdomain sudo[305235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 09:49:31 np0005625203.localdomain sudo[305235]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:49:31 np0005625203.localdomain podman[305234]: 2026-02-20 09:49:31.392584503 +0000 UTC m=+0.093385510 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9/ubi-minimal, io.openshift.tags=minimal rhel9, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.buildah.version=1.33.7, build-date=2026-02-05T04:57:10Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1770267347, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, managed_by=edpm_ansible, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, config_id=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, container_name=openstack_network_exporter, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc.)
Feb 20 09:49:31 np0005625203.localdomain podman[305234]: 2026-02-20 09:49:31.405273166 +0000 UTC m=+0.106074163 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1770267347, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9/ubi-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-05T04:57:10Z, vendor=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64, container_name=openstack_network_exporter, build-date=2026-02-05T04:57:10Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.7, maintainer=Red Hat, Inc., vcs-type=git)
Feb 20 09:49:31 np0005625203.localdomain systemd[1]: tmp-crun.4ciETv.mount: Deactivated successfully.
Feb 20 09:49:31 np0005625203.localdomain podman[305233]: 2026-02-20 09:49:31.454386176 +0000 UTC m=+0.158424953 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ceilometer_agent_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible)
Feb 20 09:49:31 np0005625203.localdomain podman[305233]: 2026-02-20 09:49:31.467354768 +0000 UTC m=+0.171393595 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_id=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 
Base Image)
Feb 20 09:49:31 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:49:31 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:49:32 np0005625203.localdomain sudo[305235]: pam_unix(sudo:session): session closed for user root
Feb 20 09:49:32 np0005625203.localdomain sudo[305323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:49:32 np0005625203.localdomain sudo[305323]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:49:32 np0005625203.localdomain sudo[305323]: pam_unix(sudo:session): session closed for user root
Feb 20 09:49:32 np0005625203.localdomain sshd[305341]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:49:32 np0005625203.localdomain ceph-mon[296066]: pgmap v37: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:49:32 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:49:32 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 09:49:32 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:49:32 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 09:49:33 np0005625203.localdomain sshd[305341]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:49:33 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:49:34 np0005625203.localdomain ceph-mon[296066]: pgmap v38: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:49:34 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:49:36 np0005625203.localdomain ceph-mon[296066]: pgmap v39: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:49:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:49:37 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 09:49:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:49:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:49:37 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 09:49:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:49:38 np0005625203.localdomain ceph-mon[296066]: pgmap v40: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:49:38 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:49:40 np0005625203.localdomain ceph-mon[296066]: pgmap v41: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:49:42 np0005625203.localdomain ceph-mon[296066]: pgmap v42: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:49:42 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:49:42 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:49:42 np0005625203.localdomain podman[305344]: 2026-02-20 09:49:42.773371358 +0000 UTC m=+0.086054584 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 20 09:49:42 np0005625203.localdomain podman[305344]: 2026-02-20 09:49:42.783951726 +0000 UTC m=+0.096635032 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 20 09:49:42 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:49:42 np0005625203.localdomain systemd[1]: tmp-crun.ncasA7.mount: Deactivated successfully.
Feb 20 09:49:42 np0005625203.localdomain podman[305343]: 2026-02-20 09:49:42.831239158 +0000 UTC m=+0.143976006 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 20 09:49:42 np0005625203.localdomain podman[305343]: 2026-02-20 09:49:42.868374538 +0000 UTC m=+0.181111346 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 09:49:42 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 09:49:43 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:49:44 np0005625203.localdomain ceph-mon[296066]: pgmap v43: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:49:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:49:46.342 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:49:46 np0005625203.localdomain ceph-mon[296066]: pgmap v44: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:49:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 20 09:49:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 7800.1 total, 600.0 interval
                                                          Cumulative writes: 5116 writes, 22K keys, 5116 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
                                                          Cumulative WAL: 5116 writes, 788 syncs, 6.49 writes per sync, written: 0.02 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 212 writes, 497 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.47 MB, 0.00 MB/s
                                                          Interval WAL: 212 writes, 99 syncs, 2.14 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 20 09:49:48 np0005625203.localdomain ceph-mon[296066]: pgmap v45: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:49:48 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:49:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:49:49.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:49:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:49:49.360 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:49:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:49:49.361 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:49:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:49:49.361 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:49:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:49:49.362 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Auditing locally available compute resources for np0005625203.localdomain (node: np0005625203.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 20 09:49:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:49:49.362 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:49:49 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 20 09:49:49 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1780740115' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:49:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:49:49.775 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:49:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:49:49.991 279640 WARNING nova.virt.libvirt.driver [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 20 09:49:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:49:49.992 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Hypervisor/Node resource view: name=np0005625203.localdomain free_ram=12429MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": 
null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 20 09:49:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:49:49.993 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:49:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:49:49.993 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:49:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:49:50.103 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 20 09:49:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:49:50.103 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Final resource view: name=np0005625203.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 20 09:49:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:49:50.128 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:49:50 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 20 09:49:50 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2135499472' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:49:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:49:50.564 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:49:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:49:50.569 279640 DEBUG nova.compute.provider_tree [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Inventory has not changed in ProviderTree for provider: e5d5157a-2df2-4f51-b5fb-cd2da3a8584e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 20 09:49:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:49:50.586 279640 DEBUG nova.scheduler.client.report [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Inventory has not changed for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 20 09:49:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:49:50.588 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Compute_service record updated for np0005625203.localdomain:np0005625203.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 20 09:49:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:49:50.588 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:49:50 np0005625203.localdomain ceph-mon[296066]: pgmap v46: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:49:50 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/1780740115' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:49:50 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/2135499472' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:49:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 20 09:49:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 7800.1 total, 600.0 interval
                                                          Cumulative writes: 5978 writes, 25K keys, 5978 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
                                                          Cumulative WAL: 5978 writes, 818 syncs, 7.31 writes per sync, written: 0.02 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 51 writes, 170 keys, 51 commit groups, 1.0 writes per commit group, ingest: 0.23 MB, 0.00 MB/s
                                                          Interval WAL: 51 writes, 21 syncs, 2.43 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 20 09:49:52 np0005625203.localdomain ceph-mon[296066]: pgmap v47: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:49:52 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/1972084818' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:49:52 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:49:52 np0005625203.localdomain podman[305431]: 2026-02-20 09:49:52.768992479 +0000 UTC m=+0.084672731 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:49:52 np0005625203.localdomain podman[305431]: 2026-02-20 09:49:52.780285699 +0000 UTC m=+0.095965951 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:49:52 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:49:52 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:49:52.970 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'c2:c2:14', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '46:d0:69:88:97:47'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:49:52 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:49:52.971 161112 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 20 09:49:53 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/1147410803' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:49:53 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:49:54 np0005625203.localdomain ceph-mon[296066]: pgmap v48: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:49:54 np0005625203.localdomain ceph-mon[296066]: mgrmap e46: np0005625202.arwxwo(active, since 90s), standbys: np0005625203.lonygy, np0005625204.exgrzx
Feb 20 09:49:55 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:49:55.590 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:49:55 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:49:55.590 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:49:55 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/2042522384' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:49:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:49:56.339 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:49:56 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:49:56.431 262775 INFO oslo.privsep.daemon [None req-10a9f5e7-8348-4b2b-9f84-1a68ed77aca9 - - - - - -] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpwklj7x35/privsep.sock']
Feb 20 09:49:56 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:49:56 np0005625203.localdomain podman[305453]: 2026-02-20 09:49:56.774090034 +0000 UTC m=+0.089620024 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Feb 20 09:49:56 np0005625203.localdomain ceph-mon[296066]: pgmap v49: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail; 255 B/s wr, 0 op/s
Feb 20 09:49:56 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/4152313636' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:49:56 np0005625203.localdomain podman[305453]: 2026-02-20 09:49:56.818288251 +0000 UTC m=+0.133818271 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb 20 09:49:56 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:49:57 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:49:57.127 262775 INFO oslo.privsep.daemon [None req-10a9f5e7-8348-4b2b-9f84-1a68ed77aca9 - - - - - -] Spawned new privsep daemon via rootwrap
Feb 20 09:49:57 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:49:56.980 305476 INFO oslo.privsep.daemon [-] privsep daemon starting
Feb 20 09:49:57 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:49:56.986 305476 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Feb 20 09:49:57 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:49:56.990 305476 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Feb 20 09:49:57 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:49:56.990 305476 INFO oslo.privsep.daemon [-] privsep daemon running as pid 305476
Feb 20 09:49:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:49:57.342 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:49:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:49:57.343 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:49:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:49:57.343 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 20 09:49:57 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:49:57.684 262775 INFO oslo.privsep.daemon [None req-10a9f5e7-8348-4b2b-9f84-1a68ed77aca9 - - - - - -] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpdm_k96cv/privsep.sock']
Feb 20 09:49:58 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:49:58.276 262775 INFO oslo.privsep.daemon [None req-10a9f5e7-8348-4b2b-9f84-1a68ed77aca9 - - - - - -] Spawned new privsep daemon via rootwrap
Feb 20 09:49:58 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:49:58.176 305485 INFO oslo.privsep.daemon [-] privsep daemon starting
Feb 20 09:49:58 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:49:58.181 305485 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Feb 20 09:49:58 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:49:58.185 305485 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Feb 20 09:49:58 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:49:58.185 305485 INFO oslo.privsep.daemon [-] privsep daemon running as pid 305485
Feb 20 09:49:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:49:58.343 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:49:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:49:58.343 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 20 09:49:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:49:58.344 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 20 09:49:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:49:58.363 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 20 09:49:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:49:58.363 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:49:58 np0005625203.localdomain ceph-mon[296066]: pgmap v50: 177 pgs: 177 active+clean; 105 MiB data, 588 MiB used, 41 GiB / 42 GiB avail; 255 B/s wr, 0 op/s
Feb 20 09:49:58 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:49:58 np0005625203.localdomain podman[240359]: time="2026-02-20T09:49:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:49:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:49:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 154068 "" "Go-http-client/1.1"
Feb 20 09:49:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:49:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 17795 "" "Go-http-client/1.1"
Feb 20 09:49:59 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:49:59.402 262775 INFO oslo.privsep.daemon [None req-10a9f5e7-8348-4b2b-9f84-1a68ed77aca9 - - - - - -] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpr1sx5jz7/privsep.sock']
Feb 20 09:49:59 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e92 e92: 6 total, 6 up, 6 in
Feb 20 09:50:00 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:50:00.020 262775 INFO oslo.privsep.daemon [None req-10a9f5e7-8348-4b2b-9f84-1a68ed77aca9 - - - - - -] Spawned new privsep daemon via rootwrap
Feb 20 09:50:00 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:49:59.903 305497 INFO oslo.privsep.daemon [-] privsep daemon starting
Feb 20 09:50:00 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:49:59.909 305497 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Feb 20 09:50:00 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:49:59.913 305497 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Feb 20 09:50:00 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:49:59.913 305497 INFO oslo.privsep.daemon [-] privsep daemon running as pid 305497
Feb 20 09:50:00 np0005625203.localdomain ceph-mon[296066]: pgmap v51: 177 pgs: 177 active+clean; 121 MiB data, 624 MiB used, 41 GiB / 42 GiB avail; 8.2 KiB/s rd, 1.3 MiB/s wr, 12 op/s
Feb 20 09:50:00 np0005625203.localdomain ceph-mon[296066]: osdmap e92: 6 total, 6 up, 6 in
Feb 20 09:50:00 np0005625203.localdomain ceph-mon[296066]: overall HEALTH_OK
Feb 20 09:50:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:50:01.358 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:50:01 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:50:01.584 262775 INFO neutron.agent.linux.ip_lib [None req-10a9f5e7-8348-4b2b-9f84-1a68ed77aca9 - - - - - -] Device tapd81a5002-fd cannot be used as it has no MAC address
Feb 20 09:50:01 np0005625203.localdomain kernel: device tapd81a5002-fd entered promiscuous mode
Feb 20 09:50:01 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581001.6642] manager: (tapd81a5002-fd): new Generic device (/org/freedesktop/NetworkManager/Devices/13)
Feb 20 09:50:01 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:50:01Z|00024|binding|INFO|Claiming lport d81a5002-fda1-45a7-a6a5-d789fe5ac339 for this chassis.
Feb 20 09:50:01 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:50:01Z|00025|binding|INFO|d81a5002-fda1-45a7-a6a5-d789fe5ac339: Claiming unknown
Feb 20 09:50:01 np0005625203.localdomain systemd-udevd[305512]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 09:50:01 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:50:01 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:50:01.679 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.199.3/24', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-c4120566-7052-4b98-a25f-da35ec67db10', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c4120566-7052-4b98-a25f-da35ec67db10', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '264535d5b76c4b86bf9c7436214b5148', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=17d383ed-ed56-41f0-a495-5dc69ced740d, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=d81a5002-fda1-45a7-a6a5-d789fe5ac339) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:50:01 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:50:01.682 161112 INFO neutron.agent.ovn.metadata.agent [-] Port d81a5002-fda1-45a7-a6a5-d789fe5ac339 in datapath c4120566-7052-4b98-a25f-da35ec67db10 bound to our chassis
Feb 20 09:50:01 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:50:01 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:50:01.686 161112 DEBUG neutron.agent.ovn.metadata.agent [-] Port d11ee00e-01c0-426f-b00e-466338a67f5d IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Feb 20 09:50:01 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:50:01.687 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c4120566-7052-4b98-a25f-da35ec67db10, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:50:01 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:50:01.688 161112 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpkwv1yanr/privsep.sock']
Feb 20 09:50:01 np0005625203.localdomain virtnodedevd[228478]: libvirt version: 11.10.0, package: 4.el9 (builder@centos.org, 2026-01-29-15:25:17, )
Feb 20 09:50:01 np0005625203.localdomain virtnodedevd[228478]: hostname: np0005625203.localdomain
Feb 20 09:50:01 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapd81a5002-fd: No such device
Feb 20 09:50:01 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapd81a5002-fd: No such device
Feb 20 09:50:01 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapd81a5002-fd: No such device
Feb 20 09:50:01 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:50:01Z|00026|binding|INFO|Setting lport d81a5002-fda1-45a7-a6a5-d789fe5ac339 ovn-installed in OVS
Feb 20 09:50:01 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:50:01Z|00027|binding|INFO|Setting lport d81a5002-fda1-45a7-a6a5-d789fe5ac339 up in Southbound
Feb 20 09:50:01 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapd81a5002-fd: No such device
Feb 20 09:50:01 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapd81a5002-fd: No such device
Feb 20 09:50:01 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapd81a5002-fd: No such device
Feb 20 09:50:01 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapd81a5002-fd: No such device
Feb 20 09:50:01 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapd81a5002-fd: No such device
Feb 20 09:50:01 np0005625203.localdomain podman[305516]: 2026-02-20 09:50:01.793537299 +0000 UTC m=+0.094553387 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=minimal rhel9, org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, config_id=openstack_network_exporter, build-date=2026-02-05T04:57:10Z, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, container_name=openstack_network_exporter, architecture=x86_64, maintainer=Red Hat, Inc., release=1770267347, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9/ubi-minimal, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, version=9.7, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Feb 20 09:50:01 np0005625203.localdomain podman[305516]: 2026-02-20 09:50:01.812558997 +0000 UTC m=+0.113575045 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, name=ubi9/ubi-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1770267347, config_id=openstack_network_exporter, vcs-type=git, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, container_name=openstack_network_exporter, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., org.opencontainers.image.created=2026-02-05T04:57:10Z, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2026-02-05T04:57:10Z, maintainer=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.7, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 09:50:01 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:50:01 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e93 e93: 6 total, 6 up, 6 in
Feb 20 09:50:01 np0005625203.localdomain systemd[1]: tmp-crun.rBpzyd.mount: Deactivated successfully.
Feb 20 09:50:01 np0005625203.localdomain podman[305515]: 2026-02-20 09:50:01.881485491 +0000 UTC m=+0.180197228 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260127)
Feb 20 09:50:01 np0005625203.localdomain podman[305515]: 2026-02-20 09:50:01.919306771 +0000 UTC m=+0.218018558 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 20 09:50:01 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:50:01 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:50:01.974 161112 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1e4d60e6-0be0-4143-b488-1b391fbc71ef, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 20 09:50:02 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 20 09:50:02 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/188009214' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:50:02 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 20 09:50:02 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/188009214' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:50:02 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:50:02.378 161112 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Feb 20 09:50:02 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:50:02.380 161112 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpkwv1yanr/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Feb 20 09:50:02 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:50:02.248 305605 INFO oslo.privsep.daemon [-] privsep daemon starting
Feb 20 09:50:02 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:50:02.254 305605 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Feb 20 09:50:02 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:50:02.257 305605 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Feb 20 09:50:02 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:50:02.257 305605 INFO oslo.privsep.daemon [-] privsep daemon running as pid 305605
Feb 20 09:50:02 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:50:02.384 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[0396f0a2-bb5d-4c69-9359-02a5729c2605]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:50:02 np0005625203.localdomain podman[305636]: 
Feb 20 09:50:02 np0005625203.localdomain podman[305636]: 2026-02-20 09:50:02.810363556 +0000 UTC m=+0.100517432 container create fe1d53eac2b03858c4d10b6eeac9a377363648a4a17b5850a47ef6c93cec77dd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c4120566-7052-4b98-a25f-da35ec67db10, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 20 09:50:02 np0005625203.localdomain ceph-mon[296066]: pgmap v53: 177 pgs: 177 active+clean; 125 MiB data, 632 MiB used, 41 GiB / 42 GiB avail; 9.9 KiB/s rd, 2.0 MiB/s wr, 14 op/s
Feb 20 09:50:02 np0005625203.localdomain ceph-mon[296066]: osdmap e93: 6 total, 6 up, 6 in
Feb 20 09:50:02 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/188009214' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:50:02 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/188009214' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:50:02 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:50:02.853 305605 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:50:02 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:50:02.853 305605 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:50:02 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:50:02.853 305605 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:50:02 np0005625203.localdomain podman[305636]: 2026-02-20 09:50:02.76011045 +0000 UTC m=+0.050264326 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:50:02 np0005625203.localdomain systemd[1]: Started libpod-conmon-fe1d53eac2b03858c4d10b6eeac9a377363648a4a17b5850a47ef6c93cec77dd.scope.
Feb 20 09:50:02 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:50:02 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d09abd8fb1e64b927d24717449d3ceaddb91f2b1974d6c79e6cd50a21505965e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:50:02 np0005625203.localdomain podman[305636]: 2026-02-20 09:50:02.899278518 +0000 UTC m=+0.189432394 container init fe1d53eac2b03858c4d10b6eeac9a377363648a4a17b5850a47ef6c93cec77dd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c4120566-7052-4b98-a25f-da35ec67db10, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:50:02 np0005625203.localdomain podman[305636]: 2026-02-20 09:50:02.917519883 +0000 UTC m=+0.207673759 container start fe1d53eac2b03858c4d10b6eeac9a377363648a4a17b5850a47ef6c93cec77dd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c4120566-7052-4b98-a25f-da35ec67db10, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 20 09:50:02 np0005625203.localdomain dnsmasq[305655]: started, version 2.85 cachesize 150
Feb 20 09:50:02 np0005625203.localdomain dnsmasq[305655]: DNS service limited to local subnets
Feb 20 09:50:02 np0005625203.localdomain dnsmasq[305655]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:50:02 np0005625203.localdomain dnsmasq[305655]: warning: no upstream servers configured
Feb 20 09:50:02 np0005625203.localdomain dnsmasq-dhcp[305655]: DHCP, static leases only on 192.168.199.0, lease time 1d
Feb 20 09:50:02 np0005625203.localdomain dnsmasq[305655]: read /var/lib/neutron/dhcp/c4120566-7052-4b98-a25f-da35ec67db10/addn_hosts - 0 addresses
Feb 20 09:50:02 np0005625203.localdomain dnsmasq-dhcp[305655]: read /var/lib/neutron/dhcp/c4120566-7052-4b98-a25f-da35ec67db10/host
Feb 20 09:50:02 np0005625203.localdomain dnsmasq-dhcp[305655]: read /var/lib/neutron/dhcp/c4120566-7052-4b98-a25f-da35ec67db10/opts
Feb 20 09:50:02 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:50:02.944 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[c0705c22-9c38-4019-970c-ddb018488d56]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:50:03 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:50:03.266 262775 INFO neutron.agent.dhcp.agent [None req-392fe465-b819-492d-8c55-361eab09a321 - - - - - -] DHCP configuration for ports {'caee8d7d-a822-4d74-bb1e-85d4cf17b591'} is completed
Feb 20 09:50:03 np0005625203.localdomain systemd[1]: tmp-crun.v2N0Mm.mount: Deactivated successfully.
Feb 20 09:50:03 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:50:04 np0005625203.localdomain ceph-mon[296066]: pgmap v55: 177 pgs: 177 active+clean; 125 MiB data, 632 MiB used, 41 GiB / 42 GiB avail; 12 KiB/s rd, 2.6 MiB/s wr, 18 op/s
Feb 20 09:50:06 np0005625203.localdomain ceph-mon[296066]: pgmap v56: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Feb 20 09:50:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:50:07 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 09:50:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:50:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:50:07 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 09:50:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:50:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:50:07.666 161112 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:50:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:50:07.668 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:50:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:50:07.668 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:50:08 np0005625203.localdomain ceph-mon[296066]: pgmap v57: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 3.1 MiB/s wr, 29 op/s
Feb 20 09:50:08 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:50:09 np0005625203.localdomain sshd[305656]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:50:10 np0005625203.localdomain sshd[305656]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:50:10 np0005625203.localdomain ceph-mon[296066]: pgmap v58: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 2.1 MiB/s wr, 24 op/s
Feb 20 09:50:12 np0005625203.localdomain ceph-mon[296066]: pgmap v59: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail; 16 KiB/s rd, 2.0 MiB/s wr, 23 op/s
Feb 20 09:50:13 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:50:13 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:50:13 np0005625203.localdomain podman[305659]: 2026-02-20 09:50:13.777299378 +0000 UTC m=+0.093754422 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 20 09:50:13 np0005625203.localdomain podman[305659]: 2026-02-20 09:50:13.816419289 +0000 UTC m=+0.132874333 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 20 09:50:13 np0005625203.localdomain podman[305660]: 2026-02-20 09:50:13.829448393 +0000 UTC m=+0.140361165 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 20 09:50:13 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 09:50:13 np0005625203.localdomain podman[305660]: 2026-02-20 09:50:13.864343682 +0000 UTC m=+0.175256464 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 20 09:50:13 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:50:13 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:50:15 np0005625203.localdomain ceph-mon[296066]: pgmap v60: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail; 14 KiB/s rd, 1.8 MiB/s wr, 20 op/s
Feb 20 09:50:17 np0005625203.localdomain ceph-mon[296066]: pgmap v61: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail; 14 KiB/s rd, 1.7 MiB/s wr, 19 op/s
Feb 20 09:50:18 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:50:19 np0005625203.localdomain ceph-mon[296066]: pgmap v62: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:50:21 np0005625203.localdomain ceph-mon[296066]: pgmap v63: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:50:23 np0005625203.localdomain ceph-mon[296066]: pgmap v64: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:50:23 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:50:23 np0005625203.localdomain systemd[1]: tmp-crun.tFIbo8.mount: Deactivated successfully.
Feb 20 09:50:23 np0005625203.localdomain podman[305703]: 2026-02-20 09:50:23.780200455 +0000 UTC m=+0.096278122 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Feb 20 09:50:23 np0005625203.localdomain podman[305703]: 2026-02-20 09:50:23.789269915 +0000 UTC m=+0.105347632 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 20 09:50:23 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:50:23 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:50:24 np0005625203.localdomain sshd[305721]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:50:25 np0005625203.localdomain ceph-mon[296066]: pgmap v65: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:50:25 np0005625203.localdomain sshd[305721]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:50:27 np0005625203.localdomain ceph-mon[296066]: pgmap v66: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:50:27 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:50:27 np0005625203.localdomain systemd[1]: tmp-crun.prtKnn.mount: Deactivated successfully.
Feb 20 09:50:27 np0005625203.localdomain podman[305723]: 2026-02-20 09:50:27.774612519 +0000 UTC m=+0.088397087 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Feb 20 09:50:27 np0005625203.localdomain podman[305723]: 2026-02-20 09:50:27.838515076 +0000 UTC m=+0.152299644 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127)
Feb 20 09:50:27 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:50:28 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:50:28 np0005625203.localdomain podman[240359]: time="2026-02-20T09:50:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:50:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:50:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 155894 "" "Go-http-client/1.1"
Feb 20 09:50:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:50:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18275 "" "Go-http-client/1.1"
Feb 20 09:50:29 np0005625203.localdomain ceph-mon[296066]: pgmap v67: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:50:31 np0005625203.localdomain ceph-mon[296066]: pgmap v68: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:50:31 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:50:31Z|00028|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Feb 20 09:50:32 np0005625203.localdomain sudo[305749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:50:32 np0005625203.localdomain sudo[305749]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:50:32 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:50:32 np0005625203.localdomain sudo[305749]: pam_unix(sudo:session): session closed for user root
Feb 20 09:50:32 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:50:32 np0005625203.localdomain sudo[305769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 09:50:32 np0005625203.localdomain sudo[305769]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:50:32 np0005625203.localdomain podman[305768]: 2026-02-20 09:50:32.589701829 +0000 UTC m=+0.091479751 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, vcs-type=git, build-date=2026-02-05T04:57:10Z, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.33.7, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9/ubi-minimal, release=1770267347, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-05T04:57:10Z, version=9.7, maintainer=Red Hat, Inc., config_id=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, container_name=openstack_network_exporter, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Feb 20 09:50:32 np0005625203.localdomain podman[305768]: 2026-02-20 09:50:32.627636099 +0000 UTC m=+0.129413981 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.7, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, org.opencontainers.image.created=2026-02-05T04:57:10Z, distribution-scope=public, name=ubi9/ubi-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, managed_by=edpm_ansible, io.openshift.expose-services=, vcs-type=git, com.redhat.component=ubi9-minimal-container, release=1770267347, 
build-date=2026-02-05T04:57:10Z, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c)
Feb 20 09:50:32 np0005625203.localdomain systemd[1]: tmp-crun.VlQy3K.mount: Deactivated successfully.
Feb 20 09:50:32 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:50:32 np0005625203.localdomain podman[305767]: 2026-02-20 09:50:32.663517266 +0000 UTC m=+0.166490426 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260127, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 20 09:50:32 np0005625203.localdomain podman[305767]: 2026-02-20 09:50:32.67921685 +0000 UTC m=+0.182190010 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, 
config_id=ceilometer_agent_compute)
Feb 20 09:50:32 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:50:33 np0005625203.localdomain sudo[305769]: pam_unix(sudo:session): session closed for user root
Feb 20 09:50:33 np0005625203.localdomain ceph-mon[296066]: pgmap v69: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:50:33 np0005625203.localdomain sudo[305857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:50:33 np0005625203.localdomain sudo[305857]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:50:33 np0005625203.localdomain sudo[305857]: pam_unix(sudo:session): session closed for user root
Feb 20 09:50:33 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:50:34 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:50:34 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 09:50:34 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:50:34 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 09:50:34 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:50:35 np0005625203.localdomain ceph-mon[296066]: pgmap v70: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:50:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:50:37 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 09:50:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:50:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:50:37 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 09:50:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:50:37 np0005625203.localdomain ceph-mon[296066]: pgmap v71: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:50:38 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:50:39 np0005625203.localdomain ceph-mon[296066]: pgmap v72: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:50:40 np0005625203.localdomain ceph-mon[296066]: pgmap v73: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:50:42 np0005625203.localdomain ceph-mon[296066]: pgmap v74: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:50:43 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:50:44 np0005625203.localdomain ceph-mon[296066]: pgmap v75: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:50:44 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:50:44 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:50:44 np0005625203.localdomain podman[305875]: 2026-02-20 09:50:44.779857199 +0000 UTC m=+0.087057156 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 20 09:50:44 np0005625203.localdomain podman[305875]: 2026-02-20 09:50:44.788470484 +0000 UTC m=+0.095670401 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 09:50:44 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 09:50:44 np0005625203.localdomain podman[305876]: 2026-02-20 09:50:44.835962299 +0000 UTC m=+0.142537557 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 20 09:50:44 np0005625203.localdomain podman[305876]: 2026-02-20 09:50:44.872436714 +0000 UTC m=+0.179011912 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 20 09:50:44 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:50:46 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:50:46.024 262775 INFO neutron.agent.linux.ip_lib [None req-33b8f9b5-fbad-4491-9bc4-67efad23753f - - - - - -] Device tap3b176488-ec cannot be used as it has no MAC address
Feb 20 09:50:46 np0005625203.localdomain kernel: device tap3b176488-ec entered promiscuous mode
Feb 20 09:50:46 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581046.1031] manager: (tap3b176488-ec): new Generic device (/org/freedesktop/NetworkManager/Devices/14)
Feb 20 09:50:46 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:50:46Z|00029|binding|INFO|Claiming lport 3b176488-ecb3-4c4f-a254-2be6a57d131c for this chassis.
Feb 20 09:50:46 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:50:46Z|00030|binding|INFO|3b176488-ecb3-4c4f-a254-2be6a57d131c: Claiming unknown
Feb 20 09:50:46 np0005625203.localdomain systemd-udevd[305931]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 09:50:46 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:50:46.114 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a966116e4ddf4bdea0571a1bb751916e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=30955323-f649-483f-8215-a2b2b9707d5e, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=3b176488-ecb3-4c4f-a254-2be6a57d131c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:50:46 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:50:46.116 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 3b176488-ecb3-4c4f-a254-2be6a57d131c in datapath 82c5dcbb-e77d-4af1-bf3e-89ecf6e35192 bound to our chassis
Feb 20 09:50:46 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:50:46.118 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 82c5dcbb-e77d-4af1-bf3e-89ecf6e35192 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:50:46 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:50:46.120 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[f9fb1908-c13c-4e28-aa4a-9519d433e7c3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:50:46 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:50:46Z|00031|binding|INFO|Setting lport 3b176488-ecb3-4c4f-a254-2be6a57d131c ovn-installed in OVS
Feb 20 09:50:46 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:50:46Z|00032|binding|INFO|Setting lport 3b176488-ecb3-4c4f-a254-2be6a57d131c up in Southbound
Feb 20 09:50:46 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap3b176488-ec: No such device
Feb 20 09:50:46 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap3b176488-ec: No such device
Feb 20 09:50:46 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap3b176488-ec: No such device
Feb 20 09:50:46 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap3b176488-ec: No such device
Feb 20 09:50:46 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap3b176488-ec: No such device
Feb 20 09:50:46 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap3b176488-ec: No such device
Feb 20 09:50:46 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap3b176488-ec: No such device
Feb 20 09:50:46 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap3b176488-ec: No such device
Feb 20 09:50:46 np0005625203.localdomain ceph-mon[296066]: pgmap v76: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:50:47 np0005625203.localdomain podman[306002]: 
Feb 20 09:50:47 np0005625203.localdomain podman[306002]: 2026-02-20 09:50:47.168828968 +0000 UTC m=+0.101577545 container create 4e1d46874fb2ac789dfa58942ad624227043ba69613dff4939453948ff9a5d2f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:50:47 np0005625203.localdomain podman[306002]: 2026-02-20 09:50:47.120269999 +0000 UTC m=+0.053018606 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:50:47 np0005625203.localdomain systemd[1]: Started libpod-conmon-4e1d46874fb2ac789dfa58942ad624227043ba69613dff4939453948ff9a5d2f.scope.
Feb 20 09:50:47 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:50:47 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5520f4d6ab6ff12edf32111abd1576ee01576e40db83e45c16f10c5e9befd1f5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:50:47 np0005625203.localdomain podman[306002]: 2026-02-20 09:50:47.259188014 +0000 UTC m=+0.191936621 container init 4e1d46874fb2ac789dfa58942ad624227043ba69613dff4939453948ff9a5d2f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 20 09:50:47 np0005625203.localdomain podman[306002]: 2026-02-20 09:50:47.267392907 +0000 UTC m=+0.200141484 container start 4e1d46874fb2ac789dfa58942ad624227043ba69613dff4939453948ff9a5d2f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:50:47 np0005625203.localdomain dnsmasq[306019]: started, version 2.85 cachesize 150
Feb 20 09:50:47 np0005625203.localdomain dnsmasq[306019]: DNS service limited to local subnets
Feb 20 09:50:47 np0005625203.localdomain dnsmasq[306019]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:50:47 np0005625203.localdomain dnsmasq[306019]: warning: no upstream servers configured
Feb 20 09:50:47 np0005625203.localdomain dnsmasq-dhcp[306019]: DHCP, static leases only on 10.100.0.0, lease time 1d
Feb 20 09:50:47 np0005625203.localdomain dnsmasq[306019]: read /var/lib/neutron/dhcp/82c5dcbb-e77d-4af1-bf3e-89ecf6e35192/addn_hosts - 0 addresses
Feb 20 09:50:47 np0005625203.localdomain dnsmasq-dhcp[306019]: read /var/lib/neutron/dhcp/82c5dcbb-e77d-4af1-bf3e-89ecf6e35192/host
Feb 20 09:50:47 np0005625203.localdomain dnsmasq-dhcp[306019]: read /var/lib/neutron/dhcp/82c5dcbb-e77d-4af1-bf3e-89ecf6e35192/opts
Feb 20 09:50:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:50:47.342 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:50:47 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:50:47.470 262775 INFO neutron.agent.dhcp.agent [None req-4d60d1a4-1af3-4941-b3c0-c48261fd932b - - - - - -] DHCP configuration for ports {'b6bbb6c0-ef13-4100-9a72-6d01c8b15be6'} is completed
Feb 20 09:50:48 np0005625203.localdomain ceph-mon[296066]: pgmap v77: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:50:48 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:50:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:50:49.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:50:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:50:49.360 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:50:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:50:49.360 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:50:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:50:49.361 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:50:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:50:49.361 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Auditing locally available compute resources for np0005625203.localdomain (node: np0005625203.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 20 09:50:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:50:49.361 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:50:49 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 20 09:50:49 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1559223443' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:50:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:50:49.800 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:50:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:50:49.974 279640 WARNING nova.virt.libvirt.driver [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 20 09:50:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:50:49.975 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Hypervisor/Node resource view: name=np0005625203.localdomain free_ram=12061MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": 
null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 20 09:50:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:50:49.975 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:50:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:50:49.975 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:50:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:50:50.038 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 20 09:50:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:50:50.039 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Final resource view: name=np0005625203.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 20 09:50:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:50:50.062 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:50:50 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 20 09:50:50 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1430418607' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:50:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:50:50.533 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:50:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:50:50.538 279640 DEBUG nova.compute.provider_tree [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Inventory has not changed in ProviderTree for provider: e5d5157a-2df2-4f51-b5fb-cd2da3a8584e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 20 09:50:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:50:50.551 279640 DEBUG nova.scheduler.client.report [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Inventory has not changed for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 20 09:50:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:50:50.553 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Compute_service record updated for np0005625203.localdomain:np0005625203.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 20 09:50:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:50:50.553 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.577s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:50:50 np0005625203.localdomain ceph-mon[296066]: pgmap v78: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:50:50 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/1559223443' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:50:50 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/1430418607' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:50:51 np0005625203.localdomain sshd[306064]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:50:51 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:50:51.412 262775 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:50:50Z, description=, device_id=38311c27-406d-4e99-b88f-f014ece8535b, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4ea3040>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4ea8d30>], id=618787f8-db73-45f2-9647-7d9fd1679f03, ip_allocation=immediate, mac_address=fa:16:3e:0a:3e:a7, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:50:44Z, description=, dns_domain=, id=82c5dcbb-e77d-4af1-bf3e-89ecf6e35192, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-LiveAutoBlockMigrationV225Test-813516866-network, port_security_enabled=True, project_id=a966116e4ddf4bdea0571a1bb751916e, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=6261, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=266, status=ACTIVE, subnets=['6b75f14d-6a18-4445-ab09-be7bd5a594a7'], tags=[], tenant_id=a966116e4ddf4bdea0571a1bb751916e, updated_at=2026-02-20T09:50:45Z, vlan_transparent=None, network_id=82c5dcbb-e77d-4af1-bf3e-89ecf6e35192, port_security_enabled=False, project_id=a966116e4ddf4bdea0571a1bb751916e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=346, status=DOWN, tags=[], tenant_id=a966116e4ddf4bdea0571a1bb751916e, updated_at=2026-02-20T09:50:50Z on network 82c5dcbb-e77d-4af1-bf3e-89ecf6e35192
Feb 20 09:50:51 np0005625203.localdomain dnsmasq[306019]: read /var/lib/neutron/dhcp/82c5dcbb-e77d-4af1-bf3e-89ecf6e35192/addn_hosts - 1 addresses
Feb 20 09:50:51 np0005625203.localdomain dnsmasq-dhcp[306019]: read /var/lib/neutron/dhcp/82c5dcbb-e77d-4af1-bf3e-89ecf6e35192/host
Feb 20 09:50:51 np0005625203.localdomain podman[306082]: 2026-02-20 09:50:51.651103345 +0000 UTC m=+0.066558924 container kill 4e1d46874fb2ac789dfa58942ad624227043ba69613dff4939453948ff9a5d2f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 20 09:50:51 np0005625203.localdomain dnsmasq-dhcp[306019]: read /var/lib/neutron/dhcp/82c5dcbb-e77d-4af1-bf3e-89ecf6e35192/opts
Feb 20 09:50:51 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:50:51.838 262775 INFO neutron.agent.dhcp.agent [None req-90688b8d-623d-4425-ab03-52438b53915c - - - - - -] DHCP configuration for ports {'618787f8-db73-45f2-9647-7d9fd1679f03'} is completed
Feb 20 09:50:51 np0005625203.localdomain sshd[306064]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:50:52 np0005625203.localdomain ceph-mon[296066]: pgmap v79: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:50:52 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/2038880473' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:50:53 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:50:53.148 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'c2:c2:14', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '46:d0:69:88:97:47'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:50:53 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:50:53.149 161112 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 20 09:50:53 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/1986532880' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:50:53 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:50:53.925 262775 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:50:50Z, description=, device_id=38311c27-406d-4e99-b88f-f014ece8535b, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4eac730>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4eacd30>], id=618787f8-db73-45f2-9647-7d9fd1679f03, ip_allocation=immediate, mac_address=fa:16:3e:0a:3e:a7, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:50:44Z, description=, dns_domain=, id=82c5dcbb-e77d-4af1-bf3e-89ecf6e35192, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-LiveAutoBlockMigrationV225Test-813516866-network, port_security_enabled=True, project_id=a966116e4ddf4bdea0571a1bb751916e, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=6261, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=266, status=ACTIVE, subnets=['6b75f14d-6a18-4445-ab09-be7bd5a594a7'], tags=[], tenant_id=a966116e4ddf4bdea0571a1bb751916e, updated_at=2026-02-20T09:50:45Z, vlan_transparent=None, network_id=82c5dcbb-e77d-4af1-bf3e-89ecf6e35192, port_security_enabled=False, project_id=a966116e4ddf4bdea0571a1bb751916e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=346, status=DOWN, tags=[], tenant_id=a966116e4ddf4bdea0571a1bb751916e, updated_at=2026-02-20T09:50:50Z on network 82c5dcbb-e77d-4af1-bf3e-89ecf6e35192
Feb 20 09:50:53 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:50:54 np0005625203.localdomain podman[306120]: 2026-02-20 09:50:54.162312954 +0000 UTC m=+0.065271853 container kill 4e1d46874fb2ac789dfa58942ad624227043ba69613dff4939453948ff9a5d2f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Feb 20 09:50:54 np0005625203.localdomain dnsmasq[306019]: read /var/lib/neutron/dhcp/82c5dcbb-e77d-4af1-bf3e-89ecf6e35192/addn_hosts - 1 addresses
Feb 20 09:50:54 np0005625203.localdomain dnsmasq-dhcp[306019]: read /var/lib/neutron/dhcp/82c5dcbb-e77d-4af1-bf3e-89ecf6e35192/host
Feb 20 09:50:54 np0005625203.localdomain dnsmasq-dhcp[306019]: read /var/lib/neutron/dhcp/82c5dcbb-e77d-4af1-bf3e-89ecf6e35192/opts
Feb 20 09:50:54 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:50:54 np0005625203.localdomain podman[306135]: 2026-02-20 09:50:54.284854994 +0000 UTC m=+0.087566322 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Feb 20 09:50:54 np0005625203.localdomain podman[306135]: 2026-02-20 09:50:54.314550289 +0000 UTC m=+0.117261637 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Feb 20 09:50:54 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:50:54 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:50:54.342 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:50:54 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:50:54.343 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:50:54 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:50:54.343 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 20 09:50:54 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:50:54.362 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 20 09:50:54 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:50:54.406 262775 INFO neutron.agent.dhcp.agent [None req-dc4cc41b-9d9f-482e-be63-6a9752c05061 - - - - - -] DHCP configuration for ports {'618787f8-db73-45f2-9647-7d9fd1679f03'} is completed
Feb 20 09:50:54 np0005625203.localdomain ceph-mon[296066]: pgmap v80: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:50:56 np0005625203.localdomain sshd[306158]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:50:56 np0005625203.localdomain sshd[306158]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:50:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:50:56.362 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:50:56 np0005625203.localdomain ceph-mon[296066]: pgmap v81: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:50:56 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:50:56Z|00033|ovn_bfd|INFO|Enabled BFD on interface ovn-0c414b-0
Feb 20 09:50:56 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:50:56Z|00034|ovn_bfd|INFO|Enabled BFD on interface ovn-2df8cc-0
Feb 20 09:50:56 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:50:56Z|00035|ovn_bfd|INFO|Enabled BFD on interface ovn-2275c3-0
Feb 20 09:50:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:50:57.342 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:50:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:50:57.342 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:50:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:50:57.343 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 20 09:50:57 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/2966131561' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:50:57 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/3661911017' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:50:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:50:58.338 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:50:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:50:58.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:50:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:50:58.341 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 20 09:50:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:50:58.342 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 20 09:50:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:50:58.357 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 20 09:50:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:50:58.357 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:50:58 np0005625203.localdomain ceph-mon[296066]: pgmap v82: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:50:58 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/306112953' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:50:58 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:50:58 np0005625203.localdomain podman[306162]: 2026-02-20 09:50:58.799317475 +0000 UTC m=+0.115247666 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:50:58 np0005625203.localdomain podman[306162]: 2026-02-20 09:50:58.864359281 +0000 UTC m=+0.180289452 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 20 09:50:58 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:50:58 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:50:58 np0005625203.localdomain podman[240359]: time="2026-02-20T09:50:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:50:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:50:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157718 "" "Go-http-client/1.1"
Feb 20 09:50:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:50:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18754 "" "Go-http-client/1.1"
Feb 20 09:51:00 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:00.152 161112 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1e4d60e6-0be0-4143-b488-1b391fbc71ef, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 20 09:51:00 np0005625203.localdomain ceph-mon[296066]: pgmap v83: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Feb 20 09:51:01 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:51:01.423 262775 INFO neutron.agent.linux.ip_lib [None req-b0a6ea6a-df40-45bd-b284-f70ce73a375e - - - - - -] Device tap189f50d0-ec cannot be used as it has no MAC address
Feb 20 09:51:01 np0005625203.localdomain kernel: device tap189f50d0-ec entered promiscuous mode
Feb 20 09:51:01 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581061.4620] manager: (tap189f50d0-ec): new Generic device (/org/freedesktop/NetworkManager/Devices/15)
Feb 20 09:51:01 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:51:01Z|00036|binding|INFO|Claiming lport 189f50d0-ec3c-4391-ae4a-06afd6ad7a16 for this chassis.
Feb 20 09:51:01 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:51:01Z|00037|binding|INFO|189f50d0-ec3c-4391-ae4a-06afd6ad7a16: Claiming unknown
Feb 20 09:51:01 np0005625203.localdomain systemd-udevd[306199]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 09:51:01 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:01.482 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-aeac20da-1ef4-4e07-847a-a0d1f8a80ad9', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aeac20da-1ef4-4e07-847a-a0d1f8a80ad9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '65685fb154414740b6b5e1276111b8bb', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a19a018-28c8-4ea6-8726-adf082e39248, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=189f50d0-ec3c-4391-ae4a-06afd6ad7a16) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:51:01 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:01.487 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 189f50d0-ec3c-4391-ae4a-06afd6ad7a16 in datapath aeac20da-1ef4-4e07-847a-a0d1f8a80ad9 bound to our chassis
Feb 20 09:51:01 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:01.490 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network aeac20da-1ef4-4e07-847a-a0d1f8a80ad9 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:51:01 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:01.491 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[152e3161-b81c-43a5-8a29-bd50e1da8914]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:51:01 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:51:01Z|00038|binding|INFO|Setting lport 189f50d0-ec3c-4391-ae4a-06afd6ad7a16 ovn-installed in OVS
Feb 20 09:51:01 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:51:01Z|00039|binding|INFO|Setting lport 189f50d0-ec3c-4391-ae4a-06afd6ad7a16 up in Southbound
Feb 20 09:51:01 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap189f50d0-ec: No such device
Feb 20 09:51:01 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap189f50d0-ec: No such device
Feb 20 09:51:01 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap189f50d0-ec: No such device
Feb 20 09:51:01 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap189f50d0-ec: No such device
Feb 20 09:51:01 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap189f50d0-ec: No such device
Feb 20 09:51:01 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap189f50d0-ec: No such device
Feb 20 09:51:01 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap189f50d0-ec: No such device
Feb 20 09:51:01 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap189f50d0-ec: No such device
Feb 20 09:51:01 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/4020676950' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:51:02 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 20 09:51:02 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2106398093' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:51:02 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 20 09:51:02 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2106398093' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:51:02 np0005625203.localdomain podman[306270]: 
Feb 20 09:51:02 np0005625203.localdomain podman[306270]: 2026-02-20 09:51:02.460711567 +0000 UTC m=+0.090850703 container create d53174bdce0f9eef72ccb1d87f96b7a358c96717cf9a4e0ad5736eaa2a4bf98f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-aeac20da-1ef4-4e07-847a-a0d1f8a80ad9, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, io.buildah.version=1.41.3)
Feb 20 09:51:02 np0005625203.localdomain systemd[1]: Started libpod-conmon-d53174bdce0f9eef72ccb1d87f96b7a358c96717cf9a4e0ad5736eaa2a4bf98f.scope.
Feb 20 09:51:02 np0005625203.localdomain podman[306270]: 2026-02-20 09:51:02.416345909 +0000 UTC m=+0.046485075 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:51:02 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:51:02 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a8043a7ae832302aaf67084eba3cbada4e8d8137db62aea245334f7b2fefefa/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:51:02 np0005625203.localdomain podman[306270]: 2026-02-20 09:51:02.56033726 +0000 UTC m=+0.190476406 container init d53174bdce0f9eef72ccb1d87f96b7a358c96717cf9a4e0ad5736eaa2a4bf98f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-aeac20da-1ef4-4e07-847a-a0d1f8a80ad9, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 20 09:51:02 np0005625203.localdomain podman[306270]: 2026-02-20 09:51:02.568941535 +0000 UTC m=+0.199080671 container start d53174bdce0f9eef72ccb1d87f96b7a358c96717cf9a4e0ad5736eaa2a4bf98f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-aeac20da-1ef4-4e07-847a-a0d1f8a80ad9, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:51:02 np0005625203.localdomain dnsmasq[306288]: started, version 2.85 cachesize 150
Feb 20 09:51:02 np0005625203.localdomain dnsmasq[306288]: DNS service limited to local subnets
Feb 20 09:51:02 np0005625203.localdomain dnsmasq[306288]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:51:02 np0005625203.localdomain dnsmasq[306288]: warning: no upstream servers configured
Feb 20 09:51:02 np0005625203.localdomain dnsmasq-dhcp[306288]: DHCP, static leases only on 10.100.0.0, lease time 1d
Feb 20 09:51:02 np0005625203.localdomain dnsmasq[306288]: read /var/lib/neutron/dhcp/aeac20da-1ef4-4e07-847a-a0d1f8a80ad9/addn_hosts - 0 addresses
Feb 20 09:51:02 np0005625203.localdomain dnsmasq-dhcp[306288]: read /var/lib/neutron/dhcp/aeac20da-1ef4-4e07-847a-a0d1f8a80ad9/host
Feb 20 09:51:02 np0005625203.localdomain dnsmasq-dhcp[306288]: read /var/lib/neutron/dhcp/aeac20da-1ef4-4e07-847a-a0d1f8a80ad9/opts
Feb 20 09:51:02 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:51:02 np0005625203.localdomain ceph-mon[296066]: pgmap v84: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Feb 20 09:51:02 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/1455703923' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:51:02 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/2106398093' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:51:02 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/2106398093' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:51:02 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:51:02.732 262775 INFO neutron.agent.dhcp.agent [None req-9e6d28ca-4e9c-43f1-a7fb-7291684d4b01 - - - - - -] DHCP configuration for ports {'e52a8019-e1fd-446d-8dad-02b0427ea74f'} is completed
Feb 20 09:51:02 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:51:02 np0005625203.localdomain podman[306289]: 2026-02-20 09:51:02.782591614 +0000 UTC m=+0.094042371 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, com.redhat.component=ubi9-minimal-container, release=1770267347, container_name=openstack_network_exporter, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-05T04:57:10Z, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-type=git, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, name=ubi9/ubi-minimal, distribution-scope=public, io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, managed_by=edpm_ansible, org.opencontainers.image.created=2026-02-05T04:57:10Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=9.7)
Feb 20 09:51:02 np0005625203.localdomain podman[306289]: 2026-02-20 09:51:02.797084201 +0000 UTC m=+0.108534958 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, build-date=2026-02-05T04:57:10Z, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9/ubi-minimal, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, version=9.7, distribution-scope=public, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': 
['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., release=1770267347, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c)
Feb 20 09:51:02 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:51:02 np0005625203.localdomain podman[306307]: 2026-02-20 09:51:02.886693515 +0000 UTC m=+0.094346711 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 20 09:51:02 np0005625203.localdomain podman[306307]: 2026-02-20 09:51:02.923274443 +0000 UTC m=+0.130927599 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible)
Feb 20 09:51:02 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:51:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:03.342 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:51:04 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:51:04 np0005625203.localdomain ceph-mon[296066]: pgmap v85: 177 pgs: 177 active+clean; 145 MiB data, 711 MiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Feb 20 09:51:05 np0005625203.localdomain sshd[306329]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:51:06 np0005625203.localdomain sshd[306329]: Received disconnect from 5.253.59.68 port 42370:11: Bye Bye [preauth]
Feb 20 09:51:06 np0005625203.localdomain sshd[306329]: Disconnected from authenticating user root 5.253.59.68 port 42370 [preauth]
Feb 20 09:51:06 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:06.356 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:51:06 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:06.356 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 20 09:51:06 np0005625203.localdomain ceph-mon[296066]: pgmap v86: 177 pgs: 177 active+clean; 192 MiB data, 775 MiB used, 41 GiB / 42 GiB avail; 3.1 MiB/s rd, 1.8 MiB/s wr, 89 op/s
Feb 20 09:51:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:51:07 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 09:51:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:51:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:51:07 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 09:51:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:51:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:07.667 161112 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:51:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:07.668 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:51:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:07.668 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:51:08 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:51:08.061 262775 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:51:07Z, description=, device_id=bf5809dc-d63e-49cb-96cb-266b8d503e70, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4ed9580>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4ed93a0>], id=18934921-06d2-4418-9f20-888609cb6f5e, ip_allocation=immediate, mac_address=fa:16:3e:fa:29:ff, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:50:59Z, description=, dns_domain=, id=aeac20da-1ef4-4e07-847a-a0d1f8a80ad9, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-LiveMigrationTest-1967081879-network, port_security_enabled=True, project_id=65685fb154414740b6b5e1276111b8bb, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=713, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=431, status=ACTIVE, subnets=['546e800b-b217-4b2b-973a-388a2cd2b30f'], tags=[], tenant_id=65685fb154414740b6b5e1276111b8bb, updated_at=2026-02-20T09:51:00Z, vlan_transparent=None, network_id=aeac20da-1ef4-4e07-847a-a0d1f8a80ad9, port_security_enabled=False, project_id=65685fb154414740b6b5e1276111b8bb, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=478, status=DOWN, tags=[], tenant_id=65685fb154414740b6b5e1276111b8bb, updated_at=2026-02-20T09:51:07Z on network aeac20da-1ef4-4e07-847a-a0d1f8a80ad9
Feb 20 09:51:08 np0005625203.localdomain dnsmasq[306288]: read /var/lib/neutron/dhcp/aeac20da-1ef4-4e07-847a-a0d1f8a80ad9/addn_hosts - 1 addresses
Feb 20 09:51:08 np0005625203.localdomain dnsmasq-dhcp[306288]: read /var/lib/neutron/dhcp/aeac20da-1ef4-4e07-847a-a0d1f8a80ad9/host
Feb 20 09:51:08 np0005625203.localdomain dnsmasq-dhcp[306288]: read /var/lib/neutron/dhcp/aeac20da-1ef4-4e07-847a-a0d1f8a80ad9/opts
Feb 20 09:51:08 np0005625203.localdomain podman[306348]: 2026-02-20 09:51:08.295111826 +0000 UTC m=+0.062272981 container kill d53174bdce0f9eef72ccb1d87f96b7a358c96717cf9a4e0ad5736eaa2a4bf98f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-aeac20da-1ef4-4e07-847a-a0d1f8a80ad9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 20 09:51:08 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:51:08.487 262775 INFO neutron.agent.dhcp.agent [None req-5ff944b7-94b8-4edc-9c7e-8229d3572e64 - - - - - -] DHCP configuration for ports {'18934921-06d2-4418-9f20-888609cb6f5e'} is completed
Feb 20 09:51:08 np0005625203.localdomain ceph-mon[296066]: pgmap v87: 177 pgs: 177 active+clean; 192 MiB data, 775 MiB used, 41 GiB / 42 GiB avail; 3.1 MiB/s rd, 1.8 MiB/s wr, 89 op/s
Feb 20 09:51:09 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:51:09 np0005625203.localdomain sshd[306369]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:51:10 np0005625203.localdomain ceph-mon[296066]: pgmap v88: 177 pgs: 177 active+clean; 192 MiB data, 775 MiB used, 41 GiB / 42 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 107 op/s
Feb 20 09:51:11 np0005625203.localdomain sshd[306369]: Received disconnect from 34.131.211.42 port 51380:11: Bye Bye [preauth]
Feb 20 09:51:11 np0005625203.localdomain sshd[306369]: Disconnected from authenticating user root 34.131.211.42 port 51380 [preauth]
Feb 20 09:51:11 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:51:11.235 262775 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:51:07Z, description=, device_id=bf5809dc-d63e-49cb-96cb-266b8d503e70, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4ef2940>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4ef2a60>], id=18934921-06d2-4418-9f20-888609cb6f5e, ip_allocation=immediate, mac_address=fa:16:3e:fa:29:ff, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:50:59Z, description=, dns_domain=, id=aeac20da-1ef4-4e07-847a-a0d1f8a80ad9, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-LiveMigrationTest-1967081879-network, port_security_enabled=True, project_id=65685fb154414740b6b5e1276111b8bb, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=713, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=431, status=ACTIVE, subnets=['546e800b-b217-4b2b-973a-388a2cd2b30f'], tags=[], tenant_id=65685fb154414740b6b5e1276111b8bb, updated_at=2026-02-20T09:51:00Z, vlan_transparent=None, network_id=aeac20da-1ef4-4e07-847a-a0d1f8a80ad9, port_security_enabled=False, project_id=65685fb154414740b6b5e1276111b8bb, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=478, status=DOWN, tags=[], tenant_id=65685fb154414740b6b5e1276111b8bb, updated_at=2026-02-20T09:51:07Z on network aeac20da-1ef4-4e07-847a-a0d1f8a80ad9
Feb 20 09:51:11 np0005625203.localdomain podman[306388]: 2026-02-20 09:51:11.449062908 +0000 UTC m=+0.057496884 container kill d53174bdce0f9eef72ccb1d87f96b7a358c96717cf9a4e0ad5736eaa2a4bf98f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-aeac20da-1ef4-4e07-847a-a0d1f8a80ad9, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Feb 20 09:51:11 np0005625203.localdomain dnsmasq[306288]: read /var/lib/neutron/dhcp/aeac20da-1ef4-4e07-847a-a0d1f8a80ad9/addn_hosts - 1 addresses
Feb 20 09:51:11 np0005625203.localdomain dnsmasq-dhcp[306288]: read /var/lib/neutron/dhcp/aeac20da-1ef4-4e07-847a-a0d1f8a80ad9/host
Feb 20 09:51:11 np0005625203.localdomain dnsmasq-dhcp[306288]: read /var/lib/neutron/dhcp/aeac20da-1ef4-4e07-847a-a0d1f8a80ad9/opts
Feb 20 09:51:11 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:51:11.642 262775 INFO neutron.agent.dhcp.agent [None req-1c83bc12-d4b0-489c-8637-010e577aa37e - - - - - -] DHCP configuration for ports {'18934921-06d2-4418-9f20-888609cb6f5e'} is completed
Feb 20 09:51:11 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:51:11.700 2 INFO neutron.agent.securitygroups_rpc [None req-12bd9327-2dd3-43c4-b987-ac4cbf3c449a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Security group member updated ['07d2fe18-fbbf-4547-931e-bb55f378bade']
Feb 20 09:51:11 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:51:11.924 262775 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:51:11Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4fe6370>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4f810d0>], id=89472e1e-6ca6-404e-8ec3-7651099fb248, ip_allocation=immediate, mac_address=fa:16:3e:00:f6:87, name=tempest-parent-254587356, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:50:44Z, description=, dns_domain=, id=82c5dcbb-e77d-4af1-bf3e-89ecf6e35192, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-LiveAutoBlockMigrationV225Test-813516866-network, port_security_enabled=True, project_id=a966116e4ddf4bdea0571a1bb751916e, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=6261, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=266, status=ACTIVE, subnets=['6b75f14d-6a18-4445-ab09-be7bd5a594a7'], tags=[], tenant_id=a966116e4ddf4bdea0571a1bb751916e, updated_at=2026-02-20T09:50:45Z, vlan_transparent=None, network_id=82c5dcbb-e77d-4af1-bf3e-89ecf6e35192, port_security_enabled=True, project_id=a966116e4ddf4bdea0571a1bb751916e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['07d2fe18-fbbf-4547-931e-bb55f378bade'], standard_attr_id=498, status=DOWN, tags=[], tenant_id=a966116e4ddf4bdea0571a1bb751916e, updated_at=2026-02-20T09:51:11Z on network 82c5dcbb-e77d-4af1-bf3e-89ecf6e35192
Feb 20 09:51:12 np0005625203.localdomain podman[306427]: 2026-02-20 09:51:12.122951512 +0000 UTC m=+0.047357981 container kill 4e1d46874fb2ac789dfa58942ad624227043ba69613dff4939453948ff9a5d2f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 20 09:51:12 np0005625203.localdomain dnsmasq[306019]: read /var/lib/neutron/dhcp/82c5dcbb-e77d-4af1-bf3e-89ecf6e35192/addn_hosts - 2 addresses
Feb 20 09:51:12 np0005625203.localdomain dnsmasq-dhcp[306019]: read /var/lib/neutron/dhcp/82c5dcbb-e77d-4af1-bf3e-89ecf6e35192/host
Feb 20 09:51:12 np0005625203.localdomain dnsmasq-dhcp[306019]: read /var/lib/neutron/dhcp/82c5dcbb-e77d-4af1-bf3e-89ecf6e35192/opts
Feb 20 09:51:12 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:51:12.395 262775 INFO neutron.agent.dhcp.agent [None req-2abde755-873d-4db2-a7e7-79d51f60fe30 - - - - - -] DHCP configuration for ports {'89472e1e-6ca6-404e-8ec3-7651099fb248'} is completed
Feb 20 09:51:12 np0005625203.localdomain ceph-mon[296066]: pgmap v89: 177 pgs: 177 active+clean; 192 MiB data, 775 MiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 20 09:51:13 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Feb 20 09:51:14 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:51:14 np0005625203.localdomain ceph-mon[296066]: pgmap v90: 177 pgs: 177 active+clean; 192 MiB data, 775 MiB used, 41 GiB / 42 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Feb 20 09:51:14 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:51:14.650 262775 INFO neutron.agent.linux.ip_lib [None req-16ef1d83-ef93-4aba-8886-8b82cb24b0fa - - - - - -] Device tap8a6b3cf6-d1 cannot be used as it has no MAC address
Feb 20 09:51:14 np0005625203.localdomain kernel: device tap8a6b3cf6-d1 entered promiscuous mode
Feb 20 09:51:14 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581074.6901] manager: (tap8a6b3cf6-d1): new Generic device (/org/freedesktop/NetworkManager/Devices/16)
Feb 20 09:51:14 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:51:14Z|00040|binding|INFO|Claiming lport 8a6b3cf6-d133-4989-ba72-56ce2d9fea97 for this chassis.
Feb 20 09:51:14 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:51:14Z|00041|binding|INFO|8a6b3cf6-d133-4989-ba72-56ce2d9fea97: Claiming unknown
Feb 20 09:51:14 np0005625203.localdomain systemd-udevd[306462]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 09:51:14 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:14.706 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '19.80.0.2/24', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-5faf2589-b0d7-486e-a56b-df0762273b7b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5faf2589-b0d7-486e-a56b-df0762273b7b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a966116e4ddf4bdea0571a1bb751916e', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b8083fe1-977d-4fae-94f3-b03c7096c58a, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=8a6b3cf6-d133-4989-ba72-56ce2d9fea97) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:51:14 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:14.708 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 8a6b3cf6-d133-4989-ba72-56ce2d9fea97 in datapath 5faf2589-b0d7-486e-a56b-df0762273b7b bound to our chassis
Feb 20 09:51:14 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:14.711 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 5faf2589-b0d7-486e-a56b-df0762273b7b or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:51:14 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:14.713 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[63c30a1b-4ed5-4a0e-974f-ce47a3a3b01d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:51:14 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap8a6b3cf6-d1: No such device
Feb 20 09:51:14 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap8a6b3cf6-d1: No such device
Feb 20 09:51:14 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:51:14Z|00042|binding|INFO|Setting lport 8a6b3cf6-d133-4989-ba72-56ce2d9fea97 ovn-installed in OVS
Feb 20 09:51:14 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:51:14Z|00043|binding|INFO|Setting lport 8a6b3cf6-d133-4989-ba72-56ce2d9fea97 up in Southbound
Feb 20 09:51:14 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap8a6b3cf6-d1: No such device
Feb 20 09:51:14 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap8a6b3cf6-d1: No such device
Feb 20 09:51:14 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap8a6b3cf6-d1: No such device
Feb 20 09:51:14 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap8a6b3cf6-d1: No such device
Feb 20 09:51:14 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap8a6b3cf6-d1: No such device
Feb 20 09:51:14 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap8a6b3cf6-d1: No such device
Feb 20 09:51:15 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e94 e94: 6 total, 6 up, 6 in
Feb 20 09:51:15 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:51:15 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:51:15 np0005625203.localdomain podman[306533]: 
Feb 20 09:51:15 np0005625203.localdomain podman[306533]: 2026-02-20 09:51:15.721929638 +0000 UTC m=+0.101112949 container create ea9344bdba2d4ba253b851a086248ae71526326c594f0cec261b73ff532a4e53 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5faf2589-b0d7-486e-a56b-df0762273b7b, tcib_managed=true, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:51:15 np0005625203.localdomain podman[306533]: 2026-02-20 09:51:15.669835642 +0000 UTC m=+0.049019003 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:51:15 np0005625203.localdomain systemd[1]: Started libpod-conmon-ea9344bdba2d4ba253b851a086248ae71526326c594f0cec261b73ff532a4e53.scope.
Feb 20 09:51:15 np0005625203.localdomain systemd[1]: tmp-crun.rbMzx8.mount: Deactivated successfully.
Feb 20 09:51:15 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:51:15 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39d7cce47dcf4f69a50c2a5b594635426d74e4e6822197e4d9312e677ef07418/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:51:15 np0005625203.localdomain podman[306545]: 2026-02-20 09:51:15.805555067 +0000 UTC m=+0.112390687 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 20 09:51:15 np0005625203.localdomain podman[306544]: 2026-02-20 09:51:15.824468981 +0000 UTC m=+0.132758616 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 20 09:51:15 np0005625203.localdomain podman[306544]: 2026-02-20 09:51:15.838293147 +0000 UTC m=+0.146582872 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 20 09:51:15 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 09:51:15 np0005625203.localdomain podman[306533]: 2026-02-20 09:51:15.861143032 +0000 UTC m=+0.240326353 container init ea9344bdba2d4ba253b851a086248ae71526326c594f0cec261b73ff532a4e53 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5faf2589-b0d7-486e-a56b-df0762273b7b, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Feb 20 09:51:15 np0005625203.localdomain sshd[306593]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:51:15 np0005625203.localdomain podman[306533]: 2026-02-20 09:51:15.874405171 +0000 UTC m=+0.253588462 container start ea9344bdba2d4ba253b851a086248ae71526326c594f0cec261b73ff532a4e53 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5faf2589-b0d7-486e-a56b-df0762273b7b, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:51:15 np0005625203.localdomain dnsmasq[306596]: started, version 2.85 cachesize 150
Feb 20 09:51:15 np0005625203.localdomain dnsmasq[306596]: DNS service limited to local subnets
Feb 20 09:51:15 np0005625203.localdomain dnsmasq[306596]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:51:15 np0005625203.localdomain dnsmasq[306596]: warning: no upstream servers configured
Feb 20 09:51:15 np0005625203.localdomain dnsmasq-dhcp[306596]: DHCP, static leases only on 19.80.0.0, lease time 1d
Feb 20 09:51:15 np0005625203.localdomain dnsmasq[306596]: read /var/lib/neutron/dhcp/5faf2589-b0d7-486e-a56b-df0762273b7b/addn_hosts - 0 addresses
Feb 20 09:51:15 np0005625203.localdomain dnsmasq-dhcp[306596]: read /var/lib/neutron/dhcp/5faf2589-b0d7-486e-a56b-df0762273b7b/host
Feb 20 09:51:15 np0005625203.localdomain dnsmasq-dhcp[306596]: read /var/lib/neutron/dhcp/5faf2589-b0d7-486e-a56b-df0762273b7b/opts
Feb 20 09:51:15 np0005625203.localdomain podman[306545]: 2026-02-20 09:51:15.895633955 +0000 UTC m=+0.202469585 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 20 09:51:15 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:51:16 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:51:16.066 262775 INFO neutron.agent.dhcp.agent [None req-710c589a-a0e7-43a3-b7c5-452ea952fba2 - - - - - -] DHCP configuration for ports {'3bb75901-4106-4229-b593-83c4bfd80b13'} is completed
Feb 20 09:51:16 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:51:16.264 2 INFO neutron.agent.securitygroups_rpc [None req-dd3e0c14-3c22-4790-87f5-ba03a5ef1aea ba15d0e9919d4594a2e6e9d6b3414a5e e704aae5b1ba49d59262f9aa0c366fb4 - - default default] Security group member updated ['6a912071-fd9c-4d5f-8453-7f993db3506d']
Feb 20 09:51:16 np0005625203.localdomain sshd[306593]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:51:16 np0005625203.localdomain ceph-mon[296066]: pgmap v91: 177 pgs: 177 active+clean; 217 MiB data, 865 MiB used, 41 GiB / 42 GiB avail; 2.2 MiB/s rd, 3.8 MiB/s wr, 149 op/s
Feb 20 09:51:16 np0005625203.localdomain ceph-mon[296066]: osdmap e94: 6 total, 6 up, 6 in
Feb 20 09:51:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:51:17.194 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:51:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:51:17.195 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:51:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:51:17.195 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:51:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:51:17.196 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:51:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:51:17.196 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:51:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:51:17.196 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:51:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:51:17.196 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:51:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:51:17.197 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:51:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:51:17.197 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:51:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:51:17.197 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:51:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:51:17.197 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:51:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:51:17.197 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:51:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:51:17.198 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:51:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:51:17.198 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:51:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:51:17.198 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:51:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:51:17.198 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:51:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:51:17.199 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:51:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:51:17.199 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:51:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:51:17.199 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:51:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:51:17.199 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:51:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:51:17.200 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:51:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:51:17.200 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:51:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:51:17.200 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:51:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:51:17.200 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:51:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:51:17.200 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:51:17 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:51:17.446 2 INFO neutron.agent.securitygroups_rpc [None req-c36d1673-2dec-447b-a8b3-50030e0a0823 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Security group member updated ['07d2fe18-fbbf-4547-931e-bb55f378bade']
Feb 20 09:51:17 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:51:17.482 262775 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:51:17Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4e49520>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4e49c10>], id=533acac2-f7ea-4ecb-b927-c6780a91a0a2, ip_allocation=immediate, mac_address=fa:16:3e:94:06:ec, name=tempest-subport-1194540045, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:51:12Z, description=, dns_domain=, id=5faf2589-b0d7-486e-a56b-df0762273b7b, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-subport_net-1256438256, port_security_enabled=True, project_id=a966116e4ddf4bdea0571a1bb751916e, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=32844, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=506, status=ACTIVE, subnets=['5a5e309d-1125-4018-a85f-bff82eb71cdf'], tags=[], tenant_id=a966116e4ddf4bdea0571a1bb751916e, updated_at=2026-02-20T09:51:13Z, vlan_transparent=None, network_id=5faf2589-b0d7-486e-a56b-df0762273b7b, port_security_enabled=True, project_id=a966116e4ddf4bdea0571a1bb751916e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['07d2fe18-fbbf-4547-931e-bb55f378bade'], standard_attr_id=525, status=DOWN, tags=[], tenant_id=a966116e4ddf4bdea0571a1bb751916e, updated_at=2026-02-20T09:51:17Z on network 5faf2589-b0d7-486e-a56b-df0762273b7b
Feb 20 09:51:17 np0005625203.localdomain dnsmasq[306596]: read /var/lib/neutron/dhcp/5faf2589-b0d7-486e-a56b-df0762273b7b/addn_hosts - 1 addresses
Feb 20 09:51:17 np0005625203.localdomain podman[306612]: 2026-02-20 09:51:17.718943029 +0000 UTC m=+0.064955264 container kill ea9344bdba2d4ba253b851a086248ae71526326c594f0cec261b73ff532a4e53 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5faf2589-b0d7-486e-a56b-df0762273b7b, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 20 09:51:17 np0005625203.localdomain dnsmasq-dhcp[306596]: read /var/lib/neutron/dhcp/5faf2589-b0d7-486e-a56b-df0762273b7b/host
Feb 20 09:51:17 np0005625203.localdomain dnsmasq-dhcp[306596]: read /var/lib/neutron/dhcp/5faf2589-b0d7-486e-a56b-df0762273b7b/opts
Feb 20 09:51:17 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:51:17.922 262775 INFO neutron.agent.dhcp.agent [None req-a502786c-76ed-476d-9f21-f5458a7b0dcb - - - - - -] DHCP configuration for ports {'533acac2-f7ea-4ecb-b927-c6780a91a0a2'} is completed
Feb 20 09:51:18 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e95 e95: 6 total, 6 up, 6 in
Feb 20 09:51:18 np0005625203.localdomain ceph-mon[296066]: pgmap v93: 177 pgs: 177 active+clean; 217 MiB data, 865 MiB used, 41 GiB / 42 GiB avail; 1.0 MiB/s rd, 2.4 MiB/s wr, 80 op/s
Feb 20 09:51:18 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:51:18.770 2 INFO neutron.agent.securitygroups_rpc [req-3c77ea9c-030b-4c3f-a6b2-e9f761f0d591 req-ae9f50d3-4bb2-48d4-a279-bccc17ebbc38 19c6a0af0d664b5d92fdce6a6ecdbcc4 5ce7589beebc4b9187ac7a68f3264776 - - default default] Security group rule updated ['ddf49fd2-9d36-4d8c-9b90-f70fbafa6560']
Feb 20 09:51:19 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:51:19 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:51:19.366 2 INFO neutron.agent.securitygroups_rpc [req-cf23cb9e-603b-4426-8e72-b88eccda31be req-4c57a8df-f22b-4186-8db6-2e7fa9aa1e7d 19c6a0af0d664b5d92fdce6a6ecdbcc4 5ce7589beebc4b9187ac7a68f3264776 - - default default] Security group rule updated ['ddf49fd2-9d36-4d8c-9b90-f70fbafa6560']
Feb 20 09:51:19 np0005625203.localdomain ceph-mon[296066]: osdmap e95: 6 total, 6 up, 6 in
Feb 20 09:51:19 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:51:19.909 2 INFO neutron.agent.securitygroups_rpc [None req-b04d749d-19a2-4f89-bafc-552dc6778fc9 ba15d0e9919d4594a2e6e9d6b3414a5e e704aae5b1ba49d59262f9aa0c366fb4 - - default default] Security group member updated ['6a912071-fd9c-4d5f-8453-7f993db3506d']
Feb 20 09:51:20 np0005625203.localdomain sshd[306632]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:51:20 np0005625203.localdomain ceph-mon[296066]: pgmap v95: 177 pgs: 177 active+clean; 236 MiB data, 872 MiB used, 41 GiB / 42 GiB avail; 5.3 MiB/s rd, 4.1 MiB/s wr, 168 op/s
Feb 20 09:51:20 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e96 e96: 6 total, 6 up, 6 in
Feb 20 09:51:21 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:21.349 279640 DEBUG oslo_concurrency.lockutils [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Acquiring lock "e6ab74b8-b495-4363-8d40-2356596c895c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:51:21 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:21.350 279640 DEBUG oslo_concurrency.lockutils [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Lock "e6ab74b8-b495-4363-8d40-2356596c895c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:51:21 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:21.369 279640 DEBUG nova.compute.manager [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Feb 20 09:51:21 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:21.612 279640 DEBUG oslo_concurrency.lockutils [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:51:21 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:21.613 279640 DEBUG oslo_concurrency.lockutils [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:51:21 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:21.618 279640 DEBUG nova.virt.hardware [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 20 09:51:21 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:21.619 279640 INFO nova.compute.claims [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Claim successful on node np0005625203.localdomain
Feb 20 09:51:21 np0005625203.localdomain ceph-mon[296066]: osdmap e96: 6 total, 6 up, 6 in
Feb 20 09:51:21 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e97 e97: 6 total, 6 up, 6 in
Feb 20 09:51:21 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:21.742 279640 DEBUG nova.scheduler.client.report [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Refreshing inventories for resource provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 20 09:51:21 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:21.800 279640 DEBUG nova.scheduler.client.report [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Updating ProviderTree inventory for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 20 09:51:21 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:21.801 279640 DEBUG nova.compute.provider_tree [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Updating inventory in ProviderTree for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 20 09:51:21 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:21.815 279640 DEBUG nova.scheduler.client.report [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Refreshing aggregate associations for resource provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 20 09:51:21 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:21.850 279640 DEBUG nova.scheduler.client.report [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Refreshing trait associations for resource provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e, traits: COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_ABM,HW_CPU_X86_SSE42,HW_CPU_X86_MMX,HW_CPU_X86_AMD_SVM,HW_CPU_X86_BMI,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VOLUME_EXTEND,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSSE3,HW_CPU_X86_CLMUL,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE4A,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_F16C,HW_CPU_X86_SSE2,COMPUTE_NODE,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 20 09:51:21 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:21.896 279640 DEBUG oslo_concurrency.processutils [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:51:21 np0005625203.localdomain sshd[306632]: Invalid user oracle from 118.99.80.29 port 18655
Feb 20 09:51:22 np0005625203.localdomain sshd[306632]: Received disconnect from 118.99.80.29 port 18655:11: Bye Bye [preauth]
Feb 20 09:51:22 np0005625203.localdomain sshd[306632]: Disconnected from invalid user oracle 118.99.80.29 port 18655 [preauth]
Feb 20 09:51:22 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 20 09:51:22 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3950457057' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:51:22 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:22.348 279640 DEBUG oslo_concurrency.processutils [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:51:22 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:22.356 279640 DEBUG nova.compute.provider_tree [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Inventory has not changed in ProviderTree for provider: e5d5157a-2df2-4f51-b5fb-cd2da3a8584e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 20 09:51:22 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:22.373 279640 DEBUG nova.scheduler.client.report [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Inventory has not changed for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 20 09:51:22 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:22.396 279640 DEBUG oslo_concurrency.lockutils [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:51:22 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:22.397 279640 DEBUG nova.compute.manager [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Feb 20 09:51:22 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:22.448 279640 DEBUG nova.compute.manager [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Feb 20 09:51:22 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:22.449 279640 DEBUG nova.network.neutron [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Feb 20 09:51:22 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:22.461 279640 INFO nova.virt.libvirt.driver [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Feb 20 09:51:22 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:22.479 279640 DEBUG nova.compute.manager [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Feb 20 09:51:22 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:22.573 279640 DEBUG nova.compute.manager [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Feb 20 09:51:22 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:22.575 279640 DEBUG nova.virt.libvirt.driver [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 20 09:51:22 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:22.575 279640 INFO nova.virt.libvirt.driver [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Creating image(s)
Feb 20 09:51:22 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:22.619 279640 DEBUG nova.storage.rbd_utils [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] rbd image e6ab74b8-b495-4363-8d40-2356596c895c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 20 09:51:22 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:22.662 279640 DEBUG nova.storage.rbd_utils [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] rbd image e6ab74b8-b495-4363-8d40-2356596c895c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 20 09:51:22 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:22.706 279640 DEBUG nova.storage.rbd_utils [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] rbd image e6ab74b8-b495-4363-8d40-2356596c895c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 20 09:51:22 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:22.711 279640 DEBUG oslo_concurrency.lockutils [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Acquiring lock "3692da63af034f7d594aac7c4b8eda10742f09b0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:51:22 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:22.713 279640 DEBUG oslo_concurrency.lockutils [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Lock "3692da63af034f7d594aac7c4b8eda10742f09b0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:51:22 np0005625203.localdomain ceph-mon[296066]: pgmap v97: 177 pgs: 177 active+clean; 236 MiB data, 872 MiB used, 41 GiB / 42 GiB avail; 6.5 MiB/s rd, 1.4 MiB/s wr, 126 op/s
Feb 20 09:51:22 np0005625203.localdomain ceph-mon[296066]: osdmap e97: 6 total, 6 up, 6 in
Feb 20 09:51:22 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/3950457057' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:51:23 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:23.208 279640 DEBUG nova.virt.libvirt.imagebackend [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Image locations are: [{'url': 'rbd://a8557ee9-b55d-5519-942c-cf8f6172f1d8/images/06bd71fd-c415-45d9-b669-46209b7ca2f4/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://a8557ee9-b55d-5519-942c-cf8f6172f1d8/images/06bd71fd-c415-45d9-b669-46209b7ca2f4/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Feb 20 09:51:23 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:23.636 279640 WARNING oslo_policy.policy [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Feb 20 09:51:23 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:23.637 279640 WARNING oslo_policy.policy [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Feb 20 09:51:23 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:23.642 279640 DEBUG nova.policy [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0db48d5f6f5e44fc93154cf4b34a94e0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a966116e4ddf4bdea0571a1bb751916e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Feb 20 09:51:23 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:23.975 279640 DEBUG oslo_concurrency.processutils [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3692da63af034f7d594aac7c4b8eda10742f09b0.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:51:24 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:24.066 279640 DEBUG oslo_concurrency.processutils [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3692da63af034f7d594aac7c4b8eda10742f09b0.part --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:51:24 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:24.068 279640 DEBUG nova.virt.images [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] 06bd71fd-c415-45d9-b669-46209b7ca2f4 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Feb 20 09:51:24 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:24.070 279640 DEBUG nova.privsep.utils [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Feb 20 09:51:24 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:24.071 279640 DEBUG oslo_concurrency.processutils [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/3692da63af034f7d594aac7c4b8eda10742f09b0.part /var/lib/nova/instances/_base/3692da63af034f7d594aac7c4b8eda10742f09b0.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:51:24 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:24.274 279640 DEBUG oslo_concurrency.processutils [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/3692da63af034f7d594aac7c4b8eda10742f09b0.part /var/lib/nova/instances/_base/3692da63af034f7d594aac7c4b8eda10742f09b0.converted" returned: 0 in 0.203s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:51:24 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:24.279 279640 DEBUG oslo_concurrency.processutils [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3692da63af034f7d594aac7c4b8eda10742f09b0.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:51:24 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:51:24 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:24.351 279640 DEBUG oslo_concurrency.processutils [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3692da63af034f7d594aac7c4b8eda10742f09b0.converted --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:51:24 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:24.352 279640 DEBUG oslo_concurrency.lockutils [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Lock "3692da63af034f7d594aac7c4b8eda10742f09b0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:51:24 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:24.390 279640 DEBUG nova.storage.rbd_utils [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] rbd image e6ab74b8-b495-4363-8d40-2356596c895c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 20 09:51:24 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:24.396 279640 DEBUG oslo_concurrency.processutils [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/3692da63af034f7d594aac7c4b8eda10742f09b0 e6ab74b8-b495-4363-8d40-2356596c895c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:51:24 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:51:24 np0005625203.localdomain ceph-mon[296066]: pgmap v99: 177 pgs: 177 active+clean; 236 MiB data, 872 MiB used, 41 GiB / 42 GiB avail; 6.5 MiB/s rd, 1.4 MiB/s wr, 126 op/s
Feb 20 09:51:24 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/3777396642' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:51:24 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/2724867462' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:51:24 np0005625203.localdomain podman[306764]: 2026-02-20 09:51:24.804595078 +0000 UTC m=+0.088383209 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS)
Feb 20 09:51:24 np0005625203.localdomain podman[306764]: 2026-02-20 09:51:24.810420767 +0000 UTC m=+0.094208908 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Feb 20 09:51:24 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:51:25 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:24.998 279640 DEBUG oslo_concurrency.processutils [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/3692da63af034f7d594aac7c4b8eda10742f09b0 e6ab74b8-b495-4363-8d40-2356596c895c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:51:25 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:51:25.007 262775 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=np0005625203.localdomain, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:51:11Z, description=, device_id=e6ab74b8-b495-4363-8d40-2356596c895c, device_owner=compute:nova, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4ef9a30>], dns_domain=, dns_name=tempest-liveautoblockmigrationv225test-server-1557569525, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4ef9340>], id=89472e1e-6ca6-404e-8ec3-7651099fb248, ip_allocation=immediate, mac_address=fa:16:3e:00:f6:87, name=tempest-parent-254587356, network_id=82c5dcbb-e77d-4af1-bf3e-89ecf6e35192, port_security_enabled=True, project_id=a966116e4ddf4bdea0571a1bb751916e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['07d2fe18-fbbf-4547-931e-bb55f378bade'], standard_attr_id=498, status=DOWN, tags=[], tenant_id=a966116e4ddf4bdea0571a1bb751916e, trunk_details=sub_ports=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4e49880>], trunk_id=9784f938-17c7-4c74-9956-6e0be6058c3d, updated_at=2026-02-20T09:51:24Z on network 82c5dcbb-e77d-4af1-bf3e-89ecf6e35192
Feb 20 09:51:25 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:25.140 279640 DEBUG nova.storage.rbd_utils [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] resizing rbd image e6ab74b8-b495-4363-8d40-2356596c895c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Feb 20 09:51:25 np0005625203.localdomain podman[306851]: 2026-02-20 09:51:25.259845169 +0000 UTC m=+0.077335527 container kill 4e1d46874fb2ac789dfa58942ad624227043ba69613dff4939453948ff9a5d2f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:51:25 np0005625203.localdomain systemd[1]: tmp-crun.nDSkn9.mount: Deactivated successfully.
Feb 20 09:51:25 np0005625203.localdomain dnsmasq[306019]: read /var/lib/neutron/dhcp/82c5dcbb-e77d-4af1-bf3e-89ecf6e35192/addn_hosts - 2 addresses
Feb 20 09:51:25 np0005625203.localdomain dnsmasq-dhcp[306019]: read /var/lib/neutron/dhcp/82c5dcbb-e77d-4af1-bf3e-89ecf6e35192/host
Feb 20 09:51:25 np0005625203.localdomain dnsmasq-dhcp[306019]: read /var/lib/neutron/dhcp/82c5dcbb-e77d-4af1-bf3e-89ecf6e35192/opts
Feb 20 09:51:25 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:25.320 279640 DEBUG nova.objects.instance [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Lazy-loading 'migration_context' on Instance uuid e6ab74b8-b495-4363-8d40-2356596c895c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 20 09:51:25 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:25.348 279640 DEBUG nova.virt.libvirt.driver [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 20 09:51:25 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:25.349 279640 DEBUG nova.virt.libvirt.driver [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Ensure instance console log exists: /var/lib/nova/instances/e6ab74b8-b495-4363-8d40-2356596c895c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 20 09:51:25 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:25.349 279640 DEBUG oslo_concurrency.lockutils [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:51:25 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:25.350 279640 DEBUG oslo_concurrency.lockutils [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:51:25 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:25.350 279640 DEBUG oslo_concurrency.lockutils [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:51:25 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:51:25.479 262775 INFO neutron.agent.dhcp.agent [None req-091c9201-2b22-44c3-a01e-c9fdb1a9033a - - - - - -] DHCP configuration for ports {'89472e1e-6ca6-404e-8ec3-7651099fb248'} is completed
Feb 20 09:51:25 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e98 e98: 6 total, 6 up, 6 in
Feb 20 09:51:26 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:26.406 279640 DEBUG nova.network.neutron [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Successfully updated port: 89472e1e-6ca6-404e-8ec3-7651099fb248 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Feb 20 09:51:26 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:26.424 279640 DEBUG oslo_concurrency.lockutils [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Acquiring lock "refresh_cache-e6ab74b8-b495-4363-8d40-2356596c895c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 20 09:51:26 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:26.425 279640 DEBUG oslo_concurrency.lockutils [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Acquired lock "refresh_cache-e6ab74b8-b495-4363-8d40-2356596c895c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 20 09:51:26 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:26.425 279640 DEBUG nova.network.neutron [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 20 09:51:26 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:26.543 279640 DEBUG nova.compute.manager [req-470f5ab4-1524-4fef-962b-d13e5be83db4 req-a917033b-bc6b-47b5-9bbe-bda4adaca9ae d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Received event network-changed-89472e1e-6ca6-404e-8ec3-7651099fb248 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 20 09:51:26 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:26.544 279640 DEBUG nova.compute.manager [req-470f5ab4-1524-4fef-962b-d13e5be83db4 req-a917033b-bc6b-47b5-9bbe-bda4adaca9ae d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Refreshing instance network info cache due to event network-changed-89472e1e-6ca6-404e-8ec3-7651099fb248. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 20 09:51:26 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:26.544 279640 DEBUG oslo_concurrency.lockutils [req-470f5ab4-1524-4fef-962b-d13e5be83db4 req-a917033b-bc6b-47b5-9bbe-bda4adaca9ae d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Acquiring lock "refresh_cache-e6ab74b8-b495-4363-8d40-2356596c895c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 20 09:51:26 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:26.559 279640 DEBUG nova.network.neutron [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 20 09:51:26 np0005625203.localdomain ceph-mon[296066]: pgmap v100: 177 pgs: 177 active+clean; 224 MiB data, 868 MiB used, 41 GiB / 42 GiB avail; 10 MiB/s rd, 7.1 MiB/s wr, 301 op/s
Feb 20 09:51:26 np0005625203.localdomain ceph-mon[296066]: osdmap e98: 6 total, 6 up, 6 in
Feb 20 09:51:27 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:27.181 279640 DEBUG nova.network.neutron [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Updating instance_info_cache with network_info: [{"id": "89472e1e-6ca6-404e-8ec3-7651099fb248", "address": "fa:16:3e:00:f6:87", "network": {"id": "82c5dcbb-e77d-4af1-bf3e-89ecf6e35192", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-813516866-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "a966116e4ddf4bdea0571a1bb751916e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89472e1e-6c", "ovs_interfaceid": "89472e1e-6ca6-404e-8ec3-7651099fb248", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 20 09:51:27 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:27.238 279640 DEBUG oslo_concurrency.lockutils [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Releasing lock "refresh_cache-e6ab74b8-b495-4363-8d40-2356596c895c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 20 09:51:27 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:27.238 279640 DEBUG nova.compute.manager [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Instance network_info: |[{"id": "89472e1e-6ca6-404e-8ec3-7651099fb248", "address": "fa:16:3e:00:f6:87", "network": {"id": "82c5dcbb-e77d-4af1-bf3e-89ecf6e35192", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-813516866-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "a966116e4ddf4bdea0571a1bb751916e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89472e1e-6c", "ovs_interfaceid": "89472e1e-6ca6-404e-8ec3-7651099fb248", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Feb 20 09:51:27 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:27.240 279640 DEBUG oslo_concurrency.lockutils [req-470f5ab4-1524-4fef-962b-d13e5be83db4 req-a917033b-bc6b-47b5-9bbe-bda4adaca9ae d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Acquired lock "refresh_cache-e6ab74b8-b495-4363-8d40-2356596c895c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 20 09:51:27 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:27.240 279640 DEBUG nova.network.neutron [req-470f5ab4-1524-4fef-962b-d13e5be83db4 req-a917033b-bc6b-47b5-9bbe-bda4adaca9ae d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Refreshing network info cache for port 89472e1e-6ca6-404e-8ec3-7651099fb248 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 20 09:51:27 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:27.246 279640 DEBUG nova.virt.libvirt.driver [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Start _get_guest_xml network_info=[{"id": "89472e1e-6ca6-404e-8ec3-7651099fb248", "address": "fa:16:3e:00:f6:87", "network": {"id": "82c5dcbb-e77d-4af1-bf3e-89ecf6e35192", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-813516866-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "a966116e4ddf4bdea0571a1bb751916e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89472e1e-6c", "ovs_interfaceid": "89472e1e-6ca6-404e-8ec3-7651099fb248", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-20T09:49:57Z,direct_url=<?>,disk_format='qcow2',id=06bd71fd-c415-45d9-b669-46209b7ca2f4,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='91bce661d685472eb3e7cacab17bf52a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-20T09:49:59Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'size': 0, 'encryption_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'image_id': '06bd71fd-c415-45d9-b669-46209b7ca2f4'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 20 09:51:27 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:27.253 279640 WARNING nova.virt.libvirt.driver [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 20 09:51:27 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:27.256 279640 DEBUG nova.virt.libvirt.host [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Searching host: 'np0005625203.localdomain' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 20 09:51:27 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:27.257 279640 DEBUG nova.virt.libvirt.host [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 20 09:51:27 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:27.259 279640 DEBUG nova.virt.libvirt.host [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Searching host: 'np0005625203.localdomain' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 20 09:51:27 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:27.260 279640 DEBUG nova.virt.libvirt.host [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 20 09:51:27 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:27.261 279640 DEBUG nova.virt.libvirt.driver [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 20 09:51:27 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:27.261 279640 DEBUG nova.virt.hardware [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-20T09:49:56Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='40a6f41a-8891-4900-942e-688a656af142',id=5,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-02-20T09:49:57Z,direct_url=<?>,disk_format='qcow2',id=06bd71fd-c415-45d9-b669-46209b7ca2f4,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='91bce661d685472eb3e7cacab17bf52a',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-02-20T09:49:59Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 20 09:51:27 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:27.262 279640 DEBUG nova.virt.hardware [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 20 09:51:27 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:27.262 279640 DEBUG nova.virt.hardware [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 20 09:51:27 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:27.263 279640 DEBUG nova.virt.hardware [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 20 09:51:27 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:27.263 279640 DEBUG nova.virt.hardware [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 20 09:51:27 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:27.263 279640 DEBUG nova.virt.hardware [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 20 09:51:27 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:27.264 279640 DEBUG nova.virt.hardware [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 20 09:51:27 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:27.264 279640 DEBUG nova.virt.hardware [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 20 09:51:27 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:27.265 279640 DEBUG nova.virt.hardware [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 20 09:51:27 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:27.265 279640 DEBUG nova.virt.hardware [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 20 09:51:27 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:27.266 279640 DEBUG nova.virt.hardware [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 20 09:51:27 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:27.272 279640 DEBUG nova.privsep.utils [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Feb 20 09:51:27 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:27.272 279640 DEBUG oslo_concurrency.processutils [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:51:27 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e99 e99: 6 total, 6 up, 6 in
Feb 20 09:51:27 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 09:51:27 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/4036231401' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:51:27 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:27.736 279640 DEBUG oslo_concurrency.processutils [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:51:27 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:27.790 279640 DEBUG nova.storage.rbd_utils [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] rbd image e6ab74b8-b495-4363-8d40-2356596c895c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 20 09:51:27 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:27.798 279640 DEBUG oslo_concurrency.processutils [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:51:27 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/3066824957' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:51:27 np0005625203.localdomain ceph-mon[296066]: osdmap e99: 6 total, 6 up, 6 in
Feb 20 09:51:27 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/4036231401' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:51:28 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 09:51:28 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3035586964' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:28.215 279640 DEBUG oslo_concurrency.processutils [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:28.219 279640 DEBUG nova.virt.libvirt.vif [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-20T09:51:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-1557569525',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='np0005625203.localdomain',hostname='tempest-liveautoblockmigrationv225test-server-1557569525',id=7,image_ref='06bd71fd-c415-45d9-b669-46209b7ca2f4',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='np0005625203.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005625203.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a966116e4ddf4bdea0571a1bb751916e',ramdisk_id='',reservation_id='r-ty4xcmp0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='06bd71fd-c415-45d9-b669-46209b7ca2f4',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-425062890',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-425062890-proj
ect-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-20T09:51:22Z,user_data=None,user_id='0db48d5f6f5e44fc93154cf4b34a94e0',uuid=e6ab74b8-b495-4363-8d40-2356596c895c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "89472e1e-6ca6-404e-8ec3-7651099fb248", "address": "fa:16:3e:00:f6:87", "network": {"id": "82c5dcbb-e77d-4af1-bf3e-89ecf6e35192", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-813516866-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "a966116e4ddf4bdea0571a1bb751916e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89472e1e-6c", "ovs_interfaceid": "89472e1e-6ca6-404e-8ec3-7651099fb248", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:28.220 279640 DEBUG nova.network.os_vif_util [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Converting VIF {"id": "89472e1e-6ca6-404e-8ec3-7651099fb248", "address": "fa:16:3e:00:f6:87", "network": {"id": "82c5dcbb-e77d-4af1-bf3e-89ecf6e35192", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-813516866-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "a966116e4ddf4bdea0571a1bb751916e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89472e1e-6c", "ovs_interfaceid": "89472e1e-6ca6-404e-8ec3-7651099fb248", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:28.221 279640 DEBUG nova.network.os_vif_util [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:f6:87,bridge_name='br-int',has_traffic_filtering=True,id=89472e1e-6ca6-404e-8ec3-7651099fb248,network=Network(82c5dcbb-e77d-4af1-bf3e-89ecf6e35192),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap89472e1e-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:28.225 279640 DEBUG nova.objects.instance [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Lazy-loading 'pci_devices' on Instance uuid e6ab74b8-b495-4363-8d40-2356596c895c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:28.229 279640 DEBUG nova.network.neutron [req-470f5ab4-1524-4fef-962b-d13e5be83db4 req-a917033b-bc6b-47b5-9bbe-bda4adaca9ae d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Updated VIF entry in instance network info cache for port 89472e1e-6ca6-404e-8ec3-7651099fb248. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:28.229 279640 DEBUG nova.network.neutron [req-470f5ab4-1524-4fef-962b-d13e5be83db4 req-a917033b-bc6b-47b5-9bbe-bda4adaca9ae d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Updating instance_info_cache with network_info: [{"id": "89472e1e-6ca6-404e-8ec3-7651099fb248", "address": "fa:16:3e:00:f6:87", "network": {"id": "82c5dcbb-e77d-4af1-bf3e-89ecf6e35192", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-813516866-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "a966116e4ddf4bdea0571a1bb751916e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89472e1e-6c", "ovs_interfaceid": "89472e1e-6ca6-404e-8ec3-7651099fb248", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:28.256 279640 DEBUG nova.virt.libvirt.driver [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] End _get_guest_xml xml=<domain type="kvm">
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:   <uuid>e6ab74b8-b495-4363-8d40-2356596c895c</uuid>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:   <name>instance-00000007</name>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:   <memory>131072</memory>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:   <vcpu>1</vcpu>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:   <metadata>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:       <nova:name>tempest-LiveAutoBlockMigrationV225Test-server-1557569525</nova:name>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:       <nova:creationTime>2026-02-20 09:51:27</nova:creationTime>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:       <nova:flavor name="m1.nano">
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:         <nova:memory>128</nova:memory>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:         <nova:disk>1</nova:disk>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:         <nova:swap>0</nova:swap>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:         <nova:ephemeral>0</nova:ephemeral>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:         <nova:vcpus>1</nova:vcpus>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:       </nova:flavor>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:       <nova:owner>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:         <nova:user uuid="0db48d5f6f5e44fc93154cf4b34a94e0">tempest-LiveAutoBlockMigrationV225Test-425062890-project-member</nova:user>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:         <nova:project uuid="a966116e4ddf4bdea0571a1bb751916e">tempest-LiveAutoBlockMigrationV225Test-425062890</nova:project>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:       </nova:owner>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:       <nova:root type="image" uuid="06bd71fd-c415-45d9-b669-46209b7ca2f4"/>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:       <nova:ports>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:         <nova:port uuid="89472e1e-6ca6-404e-8ec3-7651099fb248">
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:         </nova:port>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:       </nova:ports>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:     </nova:instance>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:   </metadata>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:   <sysinfo type="smbios">
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:     <system>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:       <entry name="manufacturer">RDO</entry>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:       <entry name="product">OpenStack Compute</entry>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:       <entry name="serial">e6ab74b8-b495-4363-8d40-2356596c895c</entry>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:       <entry name="uuid">e6ab74b8-b495-4363-8d40-2356596c895c</entry>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:       <entry name="family">Virtual Machine</entry>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:     </system>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:   </sysinfo>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:   <os>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:     <boot dev="hd"/>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:     <smbios mode="sysinfo"/>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:   </os>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:   <features>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:     <acpi/>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:     <apic/>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:     <vmcoreinfo/>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:   </features>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:   <clock offset="utc">
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:     <timer name="pit" tickpolicy="delay"/>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:     <timer name="hpet" present="no"/>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:   </clock>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:   <cpu mode="host-model" match="exact">
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:     <topology sockets="1" cores="1" threads="1"/>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:   </cpu>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:   <devices>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:     <disk type="network" device="disk">
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:       <driver type="raw" cache="none"/>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:       <source protocol="rbd" name="vms/e6ab74b8-b495-4363-8d40-2356596c895c_disk">
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:         <host name="172.18.0.103" port="6789"/>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:         <host name="172.18.0.104" port="6789"/>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:         <host name="172.18.0.105" port="6789"/>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:       </source>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:       <auth username="openstack">
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:         <secret type="ceph" uuid="a8557ee9-b55d-5519-942c-cf8f6172f1d8"/>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:       </auth>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:       <target dev="vda" bus="virtio"/>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:     </disk>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:     <disk type="network" device="cdrom">
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:       <driver type="raw" cache="none"/>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:       <source protocol="rbd" name="vms/e6ab74b8-b495-4363-8d40-2356596c895c_disk.config">
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:         <host name="172.18.0.103" port="6789"/>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:         <host name="172.18.0.104" port="6789"/>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:         <host name="172.18.0.105" port="6789"/>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:       </source>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:       <auth username="openstack">
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:         <secret type="ceph" uuid="a8557ee9-b55d-5519-942c-cf8f6172f1d8"/>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:       </auth>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:       <target dev="sda" bus="sata"/>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:     </disk>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:     <interface type="ethernet">
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:       <mac address="fa:16:3e:00:f6:87"/>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:       <model type="virtio"/>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:       <driver name="vhost" rx_queue_size="512"/>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:       <mtu size="1442"/>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:       <target dev="tap89472e1e-6c"/>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:     </interface>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:     <serial type="pty">
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:       <log file="/var/lib/nova/instances/e6ab74b8-b495-4363-8d40-2356596c895c/console.log" append="off"/>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:     </serial>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:     <video>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:       <model type="virtio"/>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:     </video>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:     <input type="tablet" bus="usb"/>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:     <rng model="virtio">
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:       <backend model="random">/dev/urandom</backend>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:     </rng>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:     <controller type="pci" model="pcie-root"/>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:     <controller type="pci" model="pcie-root-port"/>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:     <controller type="pci" model="pcie-root-port"/>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:     <controller type="pci" model="pcie-root-port"/>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:     <controller type="pci" model="pcie-root-port"/>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:     <controller type="pci" model="pcie-root-port"/>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:     <controller type="pci" model="pcie-root-port"/>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:     <controller type="pci" model="pcie-root-port"/>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:     <controller type="pci" model="pcie-root-port"/>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:     <controller type="pci" model="pcie-root-port"/>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:     <controller type="pci" model="pcie-root-port"/>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:     <controller type="pci" model="pcie-root-port"/>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:     <controller type="pci" model="pcie-root-port"/>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:     <controller type="pci" model="pcie-root-port"/>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:     <controller type="pci" model="pcie-root-port"/>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:     <controller type="pci" model="pcie-root-port"/>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:     <controller type="pci" model="pcie-root-port"/>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:     <controller type="pci" model="pcie-root-port"/>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:     <controller type="pci" model="pcie-root-port"/>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:     <controller type="pci" model="pcie-root-port"/>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:     <controller type="pci" model="pcie-root-port"/>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:     <controller type="pci" model="pcie-root-port"/>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:     <controller type="pci" model="pcie-root-port"/>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:     <controller type="pci" model="pcie-root-port"/>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:     <controller type="pci" model="pcie-root-port"/>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:     <controller type="usb" index="0"/>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:     <memballoon model="virtio">
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:       <stats period="10"/>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:     </memballoon>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:   </devices>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]: </domain>
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:28.258 279640 DEBUG nova.compute.manager [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Preparing to wait for external event network-vif-plugged-89472e1e-6ca6-404e-8ec3-7651099fb248 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:28.258 279640 DEBUG oslo_concurrency.lockutils [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Acquiring lock "e6ab74b8-b495-4363-8d40-2356596c895c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:28.259 279640 DEBUG oslo_concurrency.lockutils [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Lock "e6ab74b8-b495-4363-8d40-2356596c895c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:28.259 279640 DEBUG oslo_concurrency.lockutils [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Lock "e6ab74b8-b495-4363-8d40-2356596c895c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:28.261 279640 DEBUG nova.virt.libvirt.vif [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-02-20T09:51:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-1557569525',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='np0005625203.localdomain',hostname='tempest-liveautoblockmigrationv225test-server-1557569525',id=7,image_ref='06bd71fd-c415-45d9-b669-46209b7ca2f4',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='np0005625203.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005625203.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a966116e4ddf4bdea0571a1bb751916e',ramdisk_id='',reservation_id='r-ty4xcmp0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='06bd71fd-c415-45d9-b669-46209b7ca2f4',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-425062890',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-425062890-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-02-20T09:51:22Z,user_data=None,user_id='0db48d5f6f5e44fc93154cf4b34a94e0',uuid=e6ab74b8-b495-4363-8d40-2356596c895c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "89472e1e-6ca6-404e-8ec3-7651099fb248", "address": "fa:16:3e:00:f6:87", "network": {"id": "82c5dcbb-e77d-4af1-bf3e-89ecf6e35192", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-813516866-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "a966116e4ddf4bdea0571a1bb751916e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89472e1e-6c", "ovs_interfaceid": "89472e1e-6ca6-404e-8ec3-7651099fb248", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:28.261 279640 DEBUG nova.network.os_vif_util [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Converting VIF {"id": "89472e1e-6ca6-404e-8ec3-7651099fb248", "address": "fa:16:3e:00:f6:87", "network": {"id": "82c5dcbb-e77d-4af1-bf3e-89ecf6e35192", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-813516866-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "a966116e4ddf4bdea0571a1bb751916e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89472e1e-6c", "ovs_interfaceid": "89472e1e-6ca6-404e-8ec3-7651099fb248", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:28.262 279640 DEBUG nova.network.os_vif_util [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:f6:87,bridge_name='br-int',has_traffic_filtering=True,id=89472e1e-6ca6-404e-8ec3-7651099fb248,network=Network(82c5dcbb-e77d-4af1-bf3e-89ecf6e35192),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap89472e1e-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:28.263 279640 DEBUG os_vif [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:f6:87,bridge_name='br-int',has_traffic_filtering=True,id=89472e1e-6ca6-404e-8ec3-7651099fb248,network=Network(82c5dcbb-e77d-4af1-bf3e-89ecf6e35192),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap89472e1e-6c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:28.304 279640 DEBUG oslo_concurrency.lockutils [req-470f5ab4-1524-4fef-962b-d13e5be83db4 req-a917033b-bc6b-47b5-9bbe-bda4adaca9ae d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Releasing lock "refresh_cache-e6ab74b8-b495-4363-8d40-2356596c895c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:28.350 279640 DEBUG ovsdbapp.backend.ovs_idl [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:28.350 279640 DEBUG ovsdbapp.backend.ovs_idl [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:28.351 279640 DEBUG ovsdbapp.backend.ovs_idl [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:28.351 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:28.352 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] [POLLOUT] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:28.352 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:28.353 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:28.354 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:28.359 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:28.376 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:28.376 279640 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:28.377 279640 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:28.379 279640 INFO oslo.privsep.daemon [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpuorhep_3/privsep.sock']
Feb 20 09:51:28 np0005625203.localdomain ceph-mon[296066]: pgmap v102: 177 pgs: 177 active+clean; 224 MiB data, 868 MiB used, 41 GiB / 42 GiB avail; 4.4 MiB/s rd, 5.9 MiB/s wr, 192 op/s
Feb 20 09:51:28 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/3035586964' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:51:28 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e100 e100: 6 total, 6 up, 6 in
Feb 20 09:51:28 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:28.896 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:51:28 np0005625203.localdomain podman[240359]: time="2026-02-20T09:51:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:51:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:51:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 161364 "" "Go-http-client/1.1"
Feb 20 09:51:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:51:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19698 "" "Go-http-client/1.1"
Feb 20 09:51:29 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:29.153 279640 INFO oslo.privsep.daemon [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Spawned new privsep daemon via rootwrap
Feb 20 09:51:29 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:28.968 306959 INFO oslo.privsep.daemon [-] privsep daemon starting
Feb 20 09:51:29 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:28.972 306959 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Feb 20 09:51:29 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:28.975 306959 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Feb 20 09:51:29 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:28.975 306959 INFO oslo.privsep.daemon [-] privsep daemon running as pid 306959
Feb 20 09:51:29 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:51:29 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:29.452 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:51:29 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:29.452 279640 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap89472e1e-6c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 20 09:51:29 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:29.453 279640 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap89472e1e-6c, col_values=(('external_ids', {'iface-id': '89472e1e-6ca6-404e-8ec3-7651099fb248', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:00:f6:87', 'vm-uuid': 'e6ab74b8-b495-4363-8d40-2356596c895c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 20 09:51:29 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:29.455 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:51:29 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:29.461 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 09:51:29 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:29.465 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:51:29 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:29.467 279640 INFO os_vif [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:f6:87,bridge_name='br-int',has_traffic_filtering=True,id=89472e1e-6ca6-404e-8ec3-7651099fb248,network=Network(82c5dcbb-e77d-4af1-bf3e-89ecf6e35192),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap89472e1e-6c')
Feb 20 09:51:29 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:29.534 279640 DEBUG nova.virt.libvirt.driver [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 20 09:51:29 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:29.534 279640 DEBUG nova.virt.libvirt.driver [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 20 09:51:29 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:29.535 279640 DEBUG nova.virt.libvirt.driver [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] No VIF found with MAC fa:16:3e:00:f6:87, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Feb 20 09:51:29 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:29.536 279640 INFO nova.virt.libvirt.driver [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Using config drive
Feb 20 09:51:29 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:29.580 279640 DEBUG nova.storage.rbd_utils [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] rbd image e6ab74b8-b495-4363-8d40-2356596c895c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 20 09:51:29 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:51:29 np0005625203.localdomain systemd[1]: tmp-crun.ll7n60.mount: Deactivated successfully.
Feb 20 09:51:29 np0005625203.localdomain podman[306983]: 2026-02-20 09:51:29.791107708 +0000 UTC m=+0.103036518 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Feb 20 09:51:29 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:29.833 279640 INFO nova.virt.libvirt.driver [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Creating config drive at /var/lib/nova/instances/e6ab74b8-b495-4363-8d40-2356596c895c/disk.config
Feb 20 09:51:29 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:29.839 279640 DEBUG oslo_concurrency.processutils [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e6ab74b8-b495-4363-8d40-2356596c895c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmprjqqwv6i execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:51:29 np0005625203.localdomain podman[306983]: 2026-02-20 09:51:29.843555616 +0000 UTC m=+0.155484426 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Feb 20 09:51:29 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:51:29 np0005625203.localdomain ceph-mon[296066]: osdmap e100: 6 total, 6 up, 6 in
Feb 20 09:51:29 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/3117282067' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:51:29 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/3784175978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:51:29 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:29.974 279640 DEBUG oslo_concurrency.processutils [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e6ab74b8-b495-4363-8d40-2356596c895c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmprjqqwv6i" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:51:30 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:30.016 279640 DEBUG nova.storage.rbd_utils [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] rbd image e6ab74b8-b495-4363-8d40-2356596c895c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 20 09:51:30 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:30.023 279640 DEBUG oslo_concurrency.processutils [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e6ab74b8-b495-4363-8d40-2356596c895c/disk.config e6ab74b8-b495-4363-8d40-2356596c895c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:51:30 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:30.260 279640 DEBUG oslo_concurrency.processutils [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e6ab74b8-b495-4363-8d40-2356596c895c/disk.config e6ab74b8-b495-4363-8d40-2356596c895c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.238s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:51:30 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:30.261 279640 INFO nova.virt.libvirt.driver [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Deleting local config drive /var/lib/nova/instances/e6ab74b8-b495-4363-8d40-2356596c895c/disk.config because it was imported into RBD.
Feb 20 09:51:30 np0005625203.localdomain systemd[1]: Started libvirt secret daemon.
Feb 20 09:51:30 np0005625203.localdomain kernel: tun: Universal TUN/TAP device driver, 1.6
Feb 20 09:51:30 np0005625203.localdomain kernel: device tap89472e1e-6c entered promiscuous mode
Feb 20 09:51:30 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581090.4127] manager: (tap89472e1e-6c): new Tun device (/org/freedesktop/NetworkManager/Devices/17)
Feb 20 09:51:30 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:51:30Z|00044|binding|INFO|Claiming lport 89472e1e-6ca6-404e-8ec3-7651099fb248 for this chassis.
Feb 20 09:51:30 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:51:30Z|00045|binding|INFO|89472e1e-6ca6-404e-8ec3-7651099fb248: Claiming fa:16:3e:00:f6:87 10.100.0.6
Feb 20 09:51:30 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:51:30Z|00046|binding|INFO|Claiming lport 533acac2-f7ea-4ecb-b927-c6780a91a0a2 for this chassis.
Feb 20 09:51:30 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:51:30Z|00047|binding|INFO|533acac2-f7ea-4ecb-b927-c6780a91a0a2: Claiming fa:16:3e:94:06:ec 19.80.0.250
Feb 20 09:51:30 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:30.414 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:51:30 np0005625203.localdomain systemd-udevd[307079]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 09:51:30 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:30.429 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:f6:87 10.100.0.6'], port_security=['fa:16:3e:00:f6:87 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-254587356', 'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'e6ab74b8-b495-4363-8d40-2356596c895c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-254587356', 'neutron:project_id': 'a966116e4ddf4bdea0571a1bb751916e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '07d2fe18-fbbf-4547-931e-bb55f378bade', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=30955323-f649-483f-8215-a2b2b9707d5e, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=89472e1e-6ca6-404e-8ec3-7651099fb248) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:51:30 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:30.432 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:94:06:ec 19.80.0.250'], port_security=['fa:16:3e:94:06:ec 19.80.0.250'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['89472e1e-6ca6-404e-8ec3-7651099fb248'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-1194540045', 'neutron:cidrs': '19.80.0.250/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5faf2589-b0d7-486e-a56b-df0762273b7b', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-1194540045', 'neutron:project_id': 'a966116e4ddf4bdea0571a1bb751916e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '07d2fe18-fbbf-4547-931e-bb55f378bade', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=b8083fe1-977d-4fae-94f3-b03c7096c58a, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=533acac2-f7ea-4ecb-b927-c6780a91a0a2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:51:30 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:30.434 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 89472e1e-6ca6-404e-8ec3-7651099fb248 in datapath 82c5dcbb-e77d-4af1-bf3e-89ecf6e35192 bound to our chassis
Feb 20 09:51:30 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581090.4368] device (tap89472e1e-6c): state change: unmanaged -> unavailable (reason 'connection-assumed', sys-iface-state: 'external')
Feb 20 09:51:30 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581090.4374] device (tap89472e1e-6c): state change: unavailable -> disconnected (reason 'none', sys-iface-state: 'external')
Feb 20 09:51:30 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:51:30Z|00048|binding|INFO|Setting lport 89472e1e-6ca6-404e-8ec3-7651099fb248 ovn-installed in OVS
Feb 20 09:51:30 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:51:30Z|00049|binding|INFO|Setting lport 89472e1e-6ca6-404e-8ec3-7651099fb248 up in Southbound
Feb 20 09:51:30 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:51:30Z|00050|binding|INFO|Setting lport 533acac2-f7ea-4ecb-b927-c6780a91a0a2 up in Southbound
Feb 20 09:51:30 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:30.438 161112 DEBUG neutron.agent.ovn.metadata.agent [-] Port 7a825d6e-f7b2-47e9-9544-3aa61f8eb23e IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Feb 20 09:51:30 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:30.440 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:51:30 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:30.439 161112 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 82c5dcbb-e77d-4af1-bf3e-89ecf6e35192
Feb 20 09:51:30 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:30.463 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:51:30 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:30.472 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:51:30 np0005625203.localdomain systemd-machined[204853]: New machine qemu-1-instance-00000007.
Feb 20 09:51:30 np0005625203.localdomain systemd[1]: Started Virtual Machine qemu-1-instance-00000007.
Feb 20 09:51:30 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:30.711 279640 DEBUG nova.compute.manager [req-b50a18e7-08ad-4c02-83cb-d4ada647cbcf req-1b0cff6b-0582-4cd6-bd3a-32a5db5be0c2 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Received event network-vif-plugged-89472e1e-6ca6-404e-8ec3-7651099fb248 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 20 09:51:30 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:30.712 279640 DEBUG oslo_concurrency.lockutils [req-b50a18e7-08ad-4c02-83cb-d4ada647cbcf req-1b0cff6b-0582-4cd6-bd3a-32a5db5be0c2 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Acquiring lock "e6ab74b8-b495-4363-8d40-2356596c895c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:51:30 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:30.712 279640 DEBUG oslo_concurrency.lockutils [req-b50a18e7-08ad-4c02-83cb-d4ada647cbcf req-1b0cff6b-0582-4cd6-bd3a-32a5db5be0c2 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Lock "e6ab74b8-b495-4363-8d40-2356596c895c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:51:30 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:30.713 279640 DEBUG oslo_concurrency.lockutils [req-b50a18e7-08ad-4c02-83cb-d4ada647cbcf req-1b0cff6b-0582-4cd6-bd3a-32a5db5be0c2 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Lock "e6ab74b8-b495-4363-8d40-2356596c895c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:51:30 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:30.713 279640 DEBUG nova.compute.manager [req-b50a18e7-08ad-4c02-83cb-d4ada647cbcf req-1b0cff6b-0582-4cd6-bd3a-32a5db5be0c2 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Processing event network-vif-plugged-89472e1e-6ca6-404e-8ec3-7651099fb248 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Feb 20 09:51:30 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:30.829 279640 DEBUG nova.virt.driver [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] Emitting event <LifecycleEvent: 1771581090.8290048, e6ab74b8-b495-4363-8d40-2356596c895c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 20 09:51:30 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:30.830 279640 INFO nova.compute.manager [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] VM Started (Lifecycle Event)
Feb 20 09:51:30 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:30.834 279640 DEBUG nova.compute.manager [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 20 09:51:30 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:30.838 279640 DEBUG nova.virt.libvirt.driver [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 20 09:51:30 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:30.862 279640 DEBUG nova.compute.manager [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 20 09:51:30 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:30.864 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[b0e35174-a56f-4527-84fe-23f2984e31bb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:51:30 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:30.865 161112 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap82c5dcbb-e1 in ovnmeta-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 20 09:51:30 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:30.867 279640 INFO nova.virt.libvirt.driver [-] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Instance spawned successfully.
Feb 20 09:51:30 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:30.867 305605 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap82c5dcbb-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 20 09:51:30 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:30.867 279640 DEBUG nova.virt.libvirt.driver [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Feb 20 09:51:30 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:30.867 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[f3cdd414-ce20-466c-ba55-f23a8c8ca4d2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:51:30 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:30.869 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[b32f98a7-b7eb-4b84-a577-7693a9679950]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:51:30 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:30.872 279640 DEBUG nova.compute.manager [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 20 09:51:30 np0005625203.localdomain ceph-mon[296066]: pgmap v105: 177 pgs: 177 active+clean; 356 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 16 MiB/s rd, 18 MiB/s wr, 555 op/s
Feb 20 09:51:30 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/1957332596' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:51:30 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/4245632626' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:51:30 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:30.895 279640 INFO nova.compute.manager [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 20 09:51:30 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:30.896 279640 DEBUG nova.virt.driver [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] Emitting event <LifecycleEvent: 1771581090.8334906, e6ab74b8-b495-4363-8d40-2356596c895c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 20 09:51:30 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:30.897 279640 INFO nova.compute.manager [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] VM Paused (Lifecycle Event)
Feb 20 09:51:30 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:30.903 161363 DEBUG oslo.privsep.daemon [-] privsep: reply[dd4e918b-db41-4447-9860-175c06c658c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:51:30 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:30.904 279640 DEBUG nova.virt.libvirt.driver [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 20 09:51:30 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:30.905 279640 DEBUG nova.virt.libvirt.driver [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 20 09:51:30 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:30.906 279640 DEBUG nova.virt.libvirt.driver [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 20 09:51:30 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e101 e101: 6 total, 6 up, 6 in
Feb 20 09:51:30 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:30.906 279640 DEBUG nova.virt.libvirt.driver [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 20 09:51:30 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:30.907 279640 DEBUG nova.virt.libvirt.driver [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 20 09:51:30 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:30.908 279640 DEBUG nova.virt.libvirt.driver [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Feb 20 09:51:30 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:30.928 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[0c2ff56d-54c8-47d7-b447-f0063b42b333]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:51:30 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:30.929 279640 DEBUG nova.compute.manager [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 20 09:51:30 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:30.931 161112 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmp3nc1modl/privsep.sock']
Feb 20 09:51:30 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:30.934 279640 DEBUG nova.virt.driver [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] Emitting event <LifecycleEvent: 1771581090.8697433, e6ab74b8-b495-4363-8d40-2356596c895c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 20 09:51:30 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:30.934 279640 INFO nova.compute.manager [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] VM Resumed (Lifecycle Event)
Feb 20 09:51:30 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:30.957 279640 DEBUG nova.compute.manager [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 20 09:51:30 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:30.962 279640 DEBUG nova.compute.manager [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 20 09:51:30 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:30.991 279640 INFO nova.compute.manager [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 20 09:51:31 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:31.008 279640 INFO nova.compute.manager [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Took 8.43 seconds to spawn the instance on the hypervisor.
Feb 20 09:51:31 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:31.009 279640 DEBUG nova.compute.manager [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 20 09:51:31 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:31.126 279640 INFO nova.compute.manager [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Took 9.71 seconds to build instance.
Feb 20 09:51:31 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:31.168 279640 DEBUG oslo_concurrency.lockutils [None req-d7783042-a2ce-4920-8031-97609d9ffe9a 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Lock "e6ab74b8-b495-4363-8d40-2356596c895c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.819s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:51:31 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:31.596 161112 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Feb 20 09:51:31 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:31.598 161112 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp3nc1modl/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Feb 20 09:51:31 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:31.459 307143 INFO oslo.privsep.daemon [-] privsep daemon starting
Feb 20 09:51:31 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:31.465 307143 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Feb 20 09:51:31 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:31.469 307143 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Feb 20 09:51:31 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:31.469 307143 INFO oslo.privsep.daemon [-] privsep daemon running as pid 307143
Feb 20 09:51:31 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:31.602 307143 DEBUG oslo.privsep.daemon [-] privsep: reply[1a3ba40f-cc24-4649-9105-96f33f3caa74]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:51:31 np0005625203.localdomain ceph-mon[296066]: osdmap e101: 6 total, 6 up, 6 in
Feb 20 09:51:32 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:32.032 307143 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:51:32 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:32.032 307143 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:51:32 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:32.032 307143 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:51:32 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e102 e102: 6 total, 6 up, 6 in
Feb 20 09:51:32 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:32.559 307143 DEBUG oslo.privsep.daemon [-] privsep: reply[0c2117ea-118f-417b-9e13-79d0d340b3c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:51:32 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581092.5865] manager: (tap82c5dcbb-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/18)
Feb 20 09:51:32 np0005625203.localdomain systemd-udevd[307078]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 09:51:32 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:32.590 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[26b43627-82e9-42b1-af9e-b0adb34e32ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:51:32 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:32.627 307143 DEBUG oslo.privsep.daemon [-] privsep: reply[7139ff33-f4d3-44e9-ac83-690bca935a7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:51:32 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:32.631 307143 DEBUG oslo.privsep.daemon [-] privsep: reply[2d33fd09-134b-47bc-be87-6c6cf059962e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:51:32 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581092.6547] device (tap82c5dcbb-e0): carrier: link connected
Feb 20 09:51:32 np0005625203.localdomain kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap82c5dcbb-e1: link becomes ready
Feb 20 09:51:32 np0005625203.localdomain kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap82c5dcbb-e0: link becomes ready
Feb 20 09:51:32 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:32.660 307143 DEBUG oslo.privsep.daemon [-] privsep: reply[de2f163b-881a-401e-8f7c-c9dfc66072e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:51:32 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:32.686 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[66f9f516-7eb6-4bed-bd88-f3eb4804797b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap82c5dcbb-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:4b:80:dd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', 
{'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1165087, 'reachable_time': 39473, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 
'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307170, 'error': None, 'target': 'ovnmeta-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:51:32 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:32.707 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[b6c971c7-46fd-48d4-bc65-59e631b56242]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4b:80dd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1165087, 'tstamp': 1165087}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307171, 'error': None, 'target': 'ovnmeta-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:51:32 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:32.728 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[8962b53d-7fba-4276-8e59-0f2d8849fbc2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap82c5dcbb-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:4b:80:dd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', 
{'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1165087, 'reachable_time': 39473, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 
'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 307172, 'error': None, 'target': 'ovnmeta-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:51:32 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:32.765 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[2e71fb02-431b-4440-b239-5e1ec3a1fad1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:51:32 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:32.792 279640 DEBUG nova.compute.manager [req-9671d10b-793e-45cb-be11-60b05572acac req-99b4123a-ff00-49a1-9c18-31c4a2c638a4 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Received event network-vif-plugged-89472e1e-6ca6-404e-8ec3-7651099fb248 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 20 09:51:32 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:32.792 279640 DEBUG oslo_concurrency.lockutils [req-9671d10b-793e-45cb-be11-60b05572acac req-99b4123a-ff00-49a1-9c18-31c4a2c638a4 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Acquiring lock "e6ab74b8-b495-4363-8d40-2356596c895c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:51:32 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:32.793 279640 DEBUG oslo_concurrency.lockutils [req-9671d10b-793e-45cb-be11-60b05572acac req-99b4123a-ff00-49a1-9c18-31c4a2c638a4 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Lock "e6ab74b8-b495-4363-8d40-2356596c895c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:51:32 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:32.793 279640 DEBUG oslo_concurrency.lockutils [req-9671d10b-793e-45cb-be11-60b05572acac req-99b4123a-ff00-49a1-9c18-31c4a2c638a4 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Lock "e6ab74b8-b495-4363-8d40-2356596c895c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:51:32 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:32.794 279640 DEBUG nova.compute.manager [req-9671d10b-793e-45cb-be11-60b05572acac req-99b4123a-ff00-49a1-9c18-31c4a2c638a4 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] No waiting events found dispatching network-vif-plugged-89472e1e-6ca6-404e-8ec3-7651099fb248 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 20 09:51:32 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:32.794 279640 WARNING nova.compute.manager [req-9671d10b-793e-45cb-be11-60b05572acac req-99b4123a-ff00-49a1-9c18-31c4a2c638a4 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Received unexpected event network-vif-plugged-89472e1e-6ca6-404e-8ec3-7651099fb248 for instance with vm_state active and task_state None.
Feb 20 09:51:32 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:32.840 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[28171634-2e8c-4d7a-a1af-56f6747fd842]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:51:32 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:32.843 161112 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap82c5dcbb-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 20 09:51:32 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:32.843 161112 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 20 09:51:32 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:32.844 161112 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap82c5dcbb-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 20 09:51:32 np0005625203.localdomain kernel: device tap82c5dcbb-e0 entered promiscuous mode
Feb 20 09:51:32 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:32.882 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:51:32 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:32.888 161112 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap82c5dcbb-e0, col_values=(('external_ids', {'iface-id': 'b6bbb6c0-ef13-4100-9a72-6d01c8b15be6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 20 09:51:32 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:51:32Z|00051|binding|INFO|Releasing lport b6bbb6c0-ef13-4100-9a72-6d01c8b15be6 from this chassis (sb_readonly=0)
Feb 20 09:51:32 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:32.891 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:51:32 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:32.906 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:51:32 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:32.908 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:51:32 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:32.908 161112 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/82c5dcbb-e77d-4af1-bf3e-89ecf6e35192.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/82c5dcbb-e77d-4af1-bf3e-89ecf6e35192.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 20 09:51:32 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:32.910 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[676d6edd-5f2e-4833-9748-654c5b88e13f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:51:32 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:32.911 161112 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 20 09:51:32 np0005625203.localdomain ovn_metadata_agent[161107]: global
Feb 20 09:51:32 np0005625203.localdomain ovn_metadata_agent[161107]:     log         /dev/log local0 debug
Feb 20 09:51:32 np0005625203.localdomain ovn_metadata_agent[161107]:     log-tag     haproxy-metadata-proxy-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192
Feb 20 09:51:32 np0005625203.localdomain ovn_metadata_agent[161107]:     user        root
Feb 20 09:51:32 np0005625203.localdomain ovn_metadata_agent[161107]:     group       root
Feb 20 09:51:32 np0005625203.localdomain ovn_metadata_agent[161107]:     maxconn     1024
Feb 20 09:51:32 np0005625203.localdomain ovn_metadata_agent[161107]:     pidfile     /var/lib/neutron/external/pids/82c5dcbb-e77d-4af1-bf3e-89ecf6e35192.pid.haproxy
Feb 20 09:51:32 np0005625203.localdomain ovn_metadata_agent[161107]:     daemon
Feb 20 09:51:32 np0005625203.localdomain ovn_metadata_agent[161107]: 
Feb 20 09:51:32 np0005625203.localdomain ovn_metadata_agent[161107]: defaults
Feb 20 09:51:32 np0005625203.localdomain ovn_metadata_agent[161107]:     log global
Feb 20 09:51:32 np0005625203.localdomain ovn_metadata_agent[161107]:     mode http
Feb 20 09:51:32 np0005625203.localdomain ovn_metadata_agent[161107]:     option httplog
Feb 20 09:51:32 np0005625203.localdomain ovn_metadata_agent[161107]:     option dontlognull
Feb 20 09:51:32 np0005625203.localdomain ovn_metadata_agent[161107]:     option http-server-close
Feb 20 09:51:32 np0005625203.localdomain ovn_metadata_agent[161107]:     option forwardfor
Feb 20 09:51:32 np0005625203.localdomain ovn_metadata_agent[161107]:     retries                 3
Feb 20 09:51:32 np0005625203.localdomain ovn_metadata_agent[161107]:     timeout http-request    30s
Feb 20 09:51:32 np0005625203.localdomain ovn_metadata_agent[161107]:     timeout connect         30s
Feb 20 09:51:32 np0005625203.localdomain ovn_metadata_agent[161107]:     timeout client          32s
Feb 20 09:51:32 np0005625203.localdomain ovn_metadata_agent[161107]:     timeout server          32s
Feb 20 09:51:32 np0005625203.localdomain ovn_metadata_agent[161107]:     timeout http-keep-alive 30s
Feb 20 09:51:32 np0005625203.localdomain ovn_metadata_agent[161107]: 
Feb 20 09:51:32 np0005625203.localdomain ovn_metadata_agent[161107]: 
Feb 20 09:51:32 np0005625203.localdomain ovn_metadata_agent[161107]: listen listener
Feb 20 09:51:32 np0005625203.localdomain ovn_metadata_agent[161107]:     bind 169.254.169.254:80
Feb 20 09:51:32 np0005625203.localdomain ovn_metadata_agent[161107]:     server metadata /var/lib/neutron/metadata_proxy
Feb 20 09:51:32 np0005625203.localdomain ovn_metadata_agent[161107]:     http-request add-header X-OVN-Network-ID 82c5dcbb-e77d-4af1-bf3e-89ecf6e35192
Feb 20 09:51:32 np0005625203.localdomain ovn_metadata_agent[161107]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 20 09:51:32 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:32.912 161112 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192', 'env', 'PROCESS_TAG=haproxy-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/82c5dcbb-e77d-4af1-bf3e-89ecf6e35192.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 20 09:51:32 np0005625203.localdomain ceph-mon[296066]: pgmap v107: 177 pgs: 177 active+clean; 356 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 12 MiB/s rd, 13 MiB/s wr, 364 op/s
Feb 20 09:51:32 np0005625203.localdomain ceph-mon[296066]: osdmap e102: 6 total, 6 up, 6 in
Feb 20 09:51:33 np0005625203.localdomain podman[307205]: 
Feb 20 09:51:33 np0005625203.localdomain podman[307205]: 2026-02-20 09:51:33.433460684 +0000 UTC m=+0.100681337 container create d22f2e886340336d6f83388c0eaf09e697593b901cd5ba6e161238af92cc1c72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb 20 09:51:33 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:51:33 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:51:33 np0005625203.localdomain podman[307205]: 2026-02-20 09:51:33.381557522 +0000 UTC m=+0.048778205 image pull  quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 20 09:51:33 np0005625203.localdomain systemd[1]: Started libpod-conmon-d22f2e886340336d6f83388c0eaf09e697593b901cd5ba6e161238af92cc1c72.scope.
Feb 20 09:51:33 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:51:33 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10f7aee7f279e5a09af144cb9ca9cd6f4ccd347285934878a16ed70fc4dc46a3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:51:33 np0005625203.localdomain podman[307205]: 2026-02-20 09:51:33.534372426 +0000 UTC m=+0.201593079 container init d22f2e886340336d6f83388c0eaf09e697593b901cd5ba6e161238af92cc1c72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:51:33 np0005625203.localdomain podman[307205]: 2026-02-20 09:51:33.55104333 +0000 UTC m=+0.218263983 container start d22f2e886340336d6f83388c0eaf09e697593b901cd5ba6e161238af92cc1c72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Feb 20 09:51:33 np0005625203.localdomain neutron-haproxy-ovnmeta-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192[307221]: [NOTICE]   (307246) : New worker (307248) forked
Feb 20 09:51:33 np0005625203.localdomain neutron-haproxy-ovnmeta-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192[307221]: [NOTICE]   (307246) : Loading success.
Feb 20 09:51:33 np0005625203.localdomain podman[307219]: 2026-02-20 09:51:33.604256631 +0000 UTC m=+0.113573694 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, 
org.label-schema.build-date=20260127)
Feb 20 09:51:33 np0005625203.localdomain podman[307219]: 2026-02-20 09:51:33.611127643 +0000 UTC m=+0.120444706 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, 
tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:51:33 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:33.621 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 533acac2-f7ea-4ecb-b927-c6780a91a0a2 in datapath 5faf2589-b0d7-486e-a56b-df0762273b7b unbound from our chassis
Feb 20 09:51:33 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:33.625 161112 DEBUG neutron.agent.ovn.metadata.agent [-] Port f483cdd1-50ea-4e33-bf71-c6b2770be63f IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Feb 20 09:51:33 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:33.627 161112 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5faf2589-b0d7-486e-a56b-df0762273b7b
Feb 20 09:51:33 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:51:33 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:33.641 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[1a87194e-c0e5-48f7-b615-baec9ef374fe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:51:33 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:33.643 161112 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5faf2589-b1 in ovnmeta-5faf2589-b0d7-486e-a56b-df0762273b7b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Feb 20 09:51:33 np0005625203.localdomain sudo[307257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:51:33 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:33.646 305605 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5faf2589-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Feb 20 09:51:33 np0005625203.localdomain sudo[307257]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:51:33 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:33.646 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[81f28ba3-08ca-49cf-b28f-003473fff92d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:51:33 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:33.647 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[ec664cbd-177f-45b0-8e7c-3df67da234b5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:51:33 np0005625203.localdomain sudo[307257]: pam_unix(sudo:session): session closed for user root
Feb 20 09:51:33 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:33.674 161363 DEBUG oslo.privsep.daemon [-] privsep: reply[6fb52f19-1abf-4b02-8df0-0d1d02ff8c7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:51:33 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:33.694 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[736a3e4f-59b4-493a-83f6-5e13c74abd9c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:51:33 np0005625203.localdomain podman[307220]: 2026-02-20 09:51:33.712023514 +0000 UTC m=+0.219813289 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.buildah.version=1.33.7, vcs-type=git, version=9.7, distribution-scope=public, build-date=2026-02-05T04:57:10Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, name=ubi9/ubi-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, maintainer=Red Hat, Inc., architecture=x86_64, container_name=openstack_network_exporter, config_id=openstack_network_exporter, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, org.opencontainers.image.created=2026-02-05T04:57:10Z)
Feb 20 09:51:33 np0005625203.localdomain sudo[307286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 09:51:33 np0005625203.localdomain sudo[307286]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:51:33 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:33.725 307143 DEBUG oslo.privsep.daemon [-] privsep: reply[b96a7d17-5295-41c9-a6a6-0d19f1e996cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:51:33 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581093.7341] manager: (tap5faf2589-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/19)
Feb 20 09:51:33 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:33.735 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[76ba504c-e5e0-46aa-9253-a209a08608a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:51:33 np0005625203.localdomain systemd-udevd[307160]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 09:51:33 np0005625203.localdomain podman[307220]: 2026-02-20 09:51:33.752463741 +0000 UTC m=+0.260253466 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, managed_by=edpm_ansible, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, config_id=openstack_network_exporter, release=1770267347, distribution-scope=public, org.opencontainers.image.created=2026-02-05T04:57:10Z, version=9.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, maintainer=Red Hat, Inc., build-date=2026-02-05T04:57:10Z)
Feb 20 09:51:33 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:51:33 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:33.772 307143 DEBUG oslo.privsep.daemon [-] privsep: reply[1bd13074-0645-44e8-b972-2c593a4a8b2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:51:33 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:33.781 307143 DEBUG oslo.privsep.daemon [-] privsep: reply[d99f65f9-66fd-4060-b4ef-f3772b987d38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:51:33 np0005625203.localdomain kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap5faf2589-b1: link becomes ready
Feb 20 09:51:33 np0005625203.localdomain kernel: IPv6: ADDRCONF(NETDEV_CHANGE): tap5faf2589-b0: link becomes ready
Feb 20 09:51:33 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581093.8132] device (tap5faf2589-b0): carrier: link connected
Feb 20 09:51:33 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:33.823 307143 DEBUG oslo.privsep.daemon [-] privsep: reply[6bb85fe4-2da2-4cd2-897e-953555726b3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:51:33 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:33.843 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[347661ec-ae91-410e-a37c-e556b457af00]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5faf2589-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:ca:19:53'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', 
{'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1165203, 'reachable_time': 34481, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 
'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307319, 'error': None, 'target': 'ovnmeta-5faf2589-b0d7-486e-a56b-df0762273b7b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:51:33 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:33.863 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[4714d7c2-df91-4975-8720-6e50073b7b7e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feca:1953'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 1165203, 'tstamp': 1165203}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307320, 'error': None, 'target': 'ovnmeta-5faf2589-b0d7-486e-a56b-df0762273b7b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:51:33 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:33.884 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[f794c29f-0e12-4774-849e-e41061d80afb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5faf2589-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', 'fa:16:3e:ca:19:53'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', 
{'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1165203, 'reachable_time': 34481, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 
'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1400, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 307321, 'error': None, 'target': 'ovnmeta-5faf2589-b0d7-486e-a56b-df0762273b7b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:51:33 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:33.938 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:51:33 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:33.951 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[897b5b07-fc7e-446b-9175-8f2286435344]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:51:34 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:34.029 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[7ffeb869-0499-470a-aece-1f0d41acaaac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:51:34 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:34.033 161112 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5faf2589-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 20 09:51:34 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:34.034 161112 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Feb 20 09:51:34 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:34.035 161112 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5faf2589-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 20 09:51:34 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:34.038 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:51:34 np0005625203.localdomain kernel: device tap5faf2589-b0 entered promiscuous mode
Feb 20 09:51:34 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:34.043 161112 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5faf2589-b0, col_values=(('external_ids', {'iface-id': '3bb75901-4106-4229-b593-83c4bfd80b13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 20 09:51:34 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:34.045 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:51:34 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:51:34Z|00052|binding|INFO|Releasing lport 3bb75901-4106-4229-b593-83c4bfd80b13 from this chassis (sb_readonly=0)
Feb 20 09:51:34 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:34.048 161112 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5faf2589-b0d7-486e-a56b-df0762273b7b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5faf2589-b0d7-486e-a56b-df0762273b7b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Feb 20 09:51:34 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:34.052 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:51:34 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:34.051 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[42252307-0cc6-41c7-8f52-3f43ddafa158]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:51:34 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:34.053 161112 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Feb 20 09:51:34 np0005625203.localdomain ovn_metadata_agent[161107]: global
Feb 20 09:51:34 np0005625203.localdomain ovn_metadata_agent[161107]:     log         /dev/log local0 debug
Feb 20 09:51:34 np0005625203.localdomain ovn_metadata_agent[161107]:     log-tag     haproxy-metadata-proxy-5faf2589-b0d7-486e-a56b-df0762273b7b
Feb 20 09:51:34 np0005625203.localdomain ovn_metadata_agent[161107]:     user        root
Feb 20 09:51:34 np0005625203.localdomain ovn_metadata_agent[161107]:     group       root
Feb 20 09:51:34 np0005625203.localdomain ovn_metadata_agent[161107]:     maxconn     1024
Feb 20 09:51:34 np0005625203.localdomain ovn_metadata_agent[161107]:     pidfile     /var/lib/neutron/external/pids/5faf2589-b0d7-486e-a56b-df0762273b7b.pid.haproxy
Feb 20 09:51:34 np0005625203.localdomain ovn_metadata_agent[161107]:     daemon
Feb 20 09:51:34 np0005625203.localdomain ovn_metadata_agent[161107]: 
Feb 20 09:51:34 np0005625203.localdomain ovn_metadata_agent[161107]: defaults
Feb 20 09:51:34 np0005625203.localdomain ovn_metadata_agent[161107]:     log global
Feb 20 09:51:34 np0005625203.localdomain ovn_metadata_agent[161107]:     mode http
Feb 20 09:51:34 np0005625203.localdomain ovn_metadata_agent[161107]:     option httplog
Feb 20 09:51:34 np0005625203.localdomain ovn_metadata_agent[161107]:     option dontlognull
Feb 20 09:51:34 np0005625203.localdomain ovn_metadata_agent[161107]:     option http-server-close
Feb 20 09:51:34 np0005625203.localdomain ovn_metadata_agent[161107]:     option forwardfor
Feb 20 09:51:34 np0005625203.localdomain ovn_metadata_agent[161107]:     retries                 3
Feb 20 09:51:34 np0005625203.localdomain ovn_metadata_agent[161107]:     timeout http-request    30s
Feb 20 09:51:34 np0005625203.localdomain ovn_metadata_agent[161107]:     timeout connect         30s
Feb 20 09:51:34 np0005625203.localdomain ovn_metadata_agent[161107]:     timeout client          32s
Feb 20 09:51:34 np0005625203.localdomain ovn_metadata_agent[161107]:     timeout server          32s
Feb 20 09:51:34 np0005625203.localdomain ovn_metadata_agent[161107]:     timeout http-keep-alive 30s
Feb 20 09:51:34 np0005625203.localdomain ovn_metadata_agent[161107]: 
Feb 20 09:51:34 np0005625203.localdomain ovn_metadata_agent[161107]: 
Feb 20 09:51:34 np0005625203.localdomain ovn_metadata_agent[161107]: listen listener
Feb 20 09:51:34 np0005625203.localdomain ovn_metadata_agent[161107]:     bind 169.254.169.254:80
Feb 20 09:51:34 np0005625203.localdomain ovn_metadata_agent[161107]:     server metadata /var/lib/neutron/metadata_proxy
Feb 20 09:51:34 np0005625203.localdomain ovn_metadata_agent[161107]:     http-request add-header X-OVN-Network-ID 5faf2589-b0d7-486e-a56b-df0762273b7b
Feb 20 09:51:34 np0005625203.localdomain ovn_metadata_agent[161107]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Feb 20 09:51:34 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:34.056 161112 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5faf2589-b0d7-486e-a56b-df0762273b7b', 'env', 'PROCESS_TAG=haproxy-5faf2589-b0d7-486e-a56b-df0762273b7b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5faf2589-b0d7-486e-a56b-df0762273b7b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Feb 20 09:51:34 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:51:34 np0005625203.localdomain systemd[1]: tmp-crun.gMoTbv.mount: Deactivated successfully.
Feb 20 09:51:34 np0005625203.localdomain sudo[307286]: pam_unix(sudo:session): session closed for user root
Feb 20 09:51:34 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:34.456 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:51:34 np0005625203.localdomain podman[307377]: 
Feb 20 09:51:34 np0005625203.localdomain podman[307377]: 2026-02-20 09:51:34.471418035 +0000 UTC m=+0.086880671 container create b100c44f51474fe887f58a656278c1a0b24a655fd716bf19e5637253258e4bf6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5faf2589-b0d7-486e-a56b-df0762273b7b, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:51:34 np0005625203.localdomain systemd[1]: Started libpod-conmon-b100c44f51474fe887f58a656278c1a0b24a655fd716bf19e5637253258e4bf6.scope.
Feb 20 09:51:34 np0005625203.localdomain podman[307377]: 2026-02-20 09:51:34.431191135 +0000 UTC m=+0.046653801 image pull  quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Feb 20 09:51:34 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:51:34 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e58fb35e63672d32b606647541beae7760ee9489eb7796bb9fcd6b658c338b83/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:51:34 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:34.568 279640 DEBUG nova.virt.libvirt.driver [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Check if temp file /var/lib/nova/instances/tmpm4ea8dj2 exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065
Feb 20 09:51:34 np0005625203.localdomain podman[307377]: 2026-02-20 09:51:34.56919709 +0000 UTC m=+0.184659716 container init b100c44f51474fe887f58a656278c1a0b24a655fd716bf19e5637253258e4bf6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5faf2589-b0d7-486e-a56b-df0762273b7b, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127)
Feb 20 09:51:34 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:34.569 279640 DEBUG nova.compute.manager [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=13312,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpm4ea8dj2',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='e6ab74b8-b495-4363-8d40-2356596c895c',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587
Feb 20 09:51:34 np0005625203.localdomain podman[307377]: 2026-02-20 09:51:34.579431606 +0000 UTC m=+0.194894252 container start b100c44f51474fe887f58a656278c1a0b24a655fd716bf19e5637253258e4bf6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5faf2589-b0d7-486e-a56b-df0762273b7b, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 20 09:51:34 np0005625203.localdomain neutron-haproxy-ovnmeta-5faf2589-b0d7-486e-a56b-df0762273b7b[307399]: [NOTICE]   (307403) : New worker (307405) forked
Feb 20 09:51:34 np0005625203.localdomain neutron-haproxy-ovnmeta-5faf2589-b0d7-486e-a56b-df0762273b7b[307399]: [NOTICE]   (307403) : Loading success.
Feb 20 09:51:34 np0005625203.localdomain sudo[307414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:51:34 np0005625203.localdomain sudo[307414]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:51:34 np0005625203.localdomain sudo[307414]: pam_unix(sudo:session): session closed for user root
Feb 20 09:51:34 np0005625203.localdomain ceph-mon[296066]: pgmap v109: 177 pgs: 177 active+clean; 356 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 12 MiB/s rd, 12 MiB/s wr, 347 op/s
Feb 20 09:51:34 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:51:34 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 09:51:34 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:51:34 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 09:51:34 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #28. Immutable memtables: 0.
Feb 20 09:51:34 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:51:34.981218) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 20 09:51:34 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 28
Feb 20 09:51:34 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581094981307, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 2473, "num_deletes": 254, "total_data_size": 3616053, "memory_usage": 3674544, "flush_reason": "Manual Compaction"}
Feb 20 09:51:34 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #29: started
Feb 20 09:51:34 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581094995300, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 29, "file_size": 2342375, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17850, "largest_seqno": 20318, "table_properties": {"data_size": 2333652, "index_size": 5356, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 19191, "raw_average_key_size": 20, "raw_value_size": 2315578, "raw_average_value_size": 2522, "num_data_blocks": 236, "num_entries": 918, "num_filter_entries": 918, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580912, "oldest_key_time": 1771580912, "file_creation_time": 1771581094, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d7091244-ec7b-4a4c-98bf-27480b1bb7f4", "db_session_id": "XSHOJ401GNN3F43CMPC3", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Feb 20 09:51:34 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 14143 microseconds, and 5361 cpu microseconds.
Feb 20 09:51:34 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 20 09:51:34 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:51:34.995371) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #29: 2342375 bytes OK
Feb 20 09:51:34 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:51:34.995406) [db/memtable_list.cc:519] [default] Level-0 commit table #29 started
Feb 20 09:51:34 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:51:34.997294) [db/memtable_list.cc:722] [default] Level-0 commit table #29: memtable #1 done
Feb 20 09:51:34 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:51:34.997320) EVENT_LOG_v1 {"time_micros": 1771581094997313, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 20 09:51:34 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:51:34.997354) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 20 09:51:34 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 3605076, prev total WAL file size 3605076, number of live WAL files 2.
Feb 20 09:51:34 np0005625203.localdomain ceph-mon[296066]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000025.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 09:51:34 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:51:34.998395) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131323935' seq:72057594037927935, type:22 .. '7061786F73003131353437' seq:0, type:0; will stop at (end)
Feb 20 09:51:34 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 20 09:51:34 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [29(2287KB)], [27(17MB)]
Feb 20 09:51:34 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581094998442, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [29], "files_L6": [27], "score": -1, "input_data_size": 20810668, "oldest_snapshot_seqno": -1}
Feb 20 09:51:35 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #30: 12288 keys, 18890234 bytes, temperature: kUnknown
Feb 20 09:51:35 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581095083500, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 30, "file_size": 18890234, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18818214, "index_size": 40239, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30725, "raw_key_size": 327832, "raw_average_key_size": 26, "raw_value_size": 18606994, "raw_average_value_size": 1514, "num_data_blocks": 1543, "num_entries": 12288, "num_filter_entries": 12288, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580698, "oldest_key_time": 0, "file_creation_time": 1771581094, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d7091244-ec7b-4a4c-98bf-27480b1bb7f4", "db_session_id": "XSHOJ401GNN3F43CMPC3", "orig_file_number": 30, "seqno_to_time_mapping": "N/A"}}
Feb 20 09:51:35 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 20 09:51:35 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:51:35.083915) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 18890234 bytes
Feb 20 09:51:35 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:51:35.085607) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 244.3 rd, 221.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 17.6 +0.0 blob) out(18.0 +0.0 blob), read-write-amplify(16.9) write-amplify(8.1) OK, records in: 12818, records dropped: 530 output_compression: NoCompression
Feb 20 09:51:35 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:51:35.085640) EVENT_LOG_v1 {"time_micros": 1771581095085625, "job": 14, "event": "compaction_finished", "compaction_time_micros": 85193, "compaction_time_cpu_micros": 50238, "output_level": 6, "num_output_files": 1, "total_output_size": 18890234, "num_input_records": 12818, "num_output_records": 12288, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 20 09:51:35 np0005625203.localdomain ceph-mon[296066]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 09:51:35 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581095086140, "job": 14, "event": "table_file_deletion", "file_number": 29}
Feb 20 09:51:35 np0005625203.localdomain ceph-mon[296066]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000027.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 09:51:35 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581095088893, "job": 14, "event": "table_file_deletion", "file_number": 27}
Feb 20 09:51:35 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:51:34.998284) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:51:35 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:51:35.088953) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:51:35 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:51:35.088960) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:51:35 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:51:35.088964) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:51:35 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:51:35.088967) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:51:35 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:51:35.088970) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:51:35 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:35.199 279640 DEBUG oslo_concurrency.lockutils [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Acquiring lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 20 09:51:35 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:35.201 279640 DEBUG oslo_concurrency.lockutils [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Acquired lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 20 09:51:35 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:35.211 279640 INFO nova.compute.rpcapi [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Automatically selected compute RPC version 6.2 from minimum service version 66
Feb 20 09:51:35 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:35.218 279640 DEBUG oslo_concurrency.lockutils [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Releasing lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 20 09:51:36 np0005625203.localdomain ceph-mon[296066]: pgmap v110: 177 pgs: 177 active+clean; 318 MiB data, 1000 MiB used, 41 GiB / 42 GiB avail; 20 MiB/s rd, 14 MiB/s wr, 805 op/s
Feb 20 09:51:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:51:37 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 09:51:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:51:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:51:37 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 09:51:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:51:37 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e103 e103: 6 total, 6 up, 6 in
Feb 20 09:51:38 np0005625203.localdomain sshd[307432]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:51:38 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:38.437 279640 DEBUG nova.compute.manager [req-31fe1d02-d409-4b28-b642-41e8a0b53387 req-1c5af389-1cc7-4df7-b4a3-27e006f44e73 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Received event network-vif-unplugged-89472e1e-6ca6-404e-8ec3-7651099fb248 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 20 09:51:38 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:38.437 279640 DEBUG oslo_concurrency.lockutils [req-31fe1d02-d409-4b28-b642-41e8a0b53387 req-1c5af389-1cc7-4df7-b4a3-27e006f44e73 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Acquiring lock "e6ab74b8-b495-4363-8d40-2356596c895c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:51:38 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:38.438 279640 DEBUG oslo_concurrency.lockutils [req-31fe1d02-d409-4b28-b642-41e8a0b53387 req-1c5af389-1cc7-4df7-b4a3-27e006f44e73 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Lock "e6ab74b8-b495-4363-8d40-2356596c895c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:51:38 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:38.438 279640 DEBUG oslo_concurrency.lockutils [req-31fe1d02-d409-4b28-b642-41e8a0b53387 req-1c5af389-1cc7-4df7-b4a3-27e006f44e73 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Lock "e6ab74b8-b495-4363-8d40-2356596c895c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:51:38 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:38.439 279640 DEBUG nova.compute.manager [req-31fe1d02-d409-4b28-b642-41e8a0b53387 req-1c5af389-1cc7-4df7-b4a3-27e006f44e73 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] No waiting events found dispatching network-vif-unplugged-89472e1e-6ca6-404e-8ec3-7651099fb248 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 20 09:51:38 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:38.439 279640 DEBUG nova.compute.manager [req-31fe1d02-d409-4b28-b642-41e8a0b53387 req-1c5af389-1cc7-4df7-b4a3-27e006f44e73 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Received event network-vif-unplugged-89472e1e-6ca6-404e-8ec3-7651099fb248 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 20 09:51:38 np0005625203.localdomain ceph-mon[296066]: pgmap v111: 177 pgs: 177 active+clean; 318 MiB data, 1000 MiB used, 41 GiB / 42 GiB avail; 7.7 MiB/s rd, 2.4 MiB/s wr, 403 op/s
Feb 20 09:51:38 np0005625203.localdomain ceph-mon[296066]: osdmap e103: 6 total, 6 up, 6 in
Feb 20 09:51:38 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:51:38 np0005625203.localdomain sshd[307432]: Invalid user n8n from 185.196.11.208 port 33094
Feb 20 09:51:38 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:38.941 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:51:39 np0005625203.localdomain sshd[307432]: Received disconnect from 185.196.11.208 port 33094:11: Bye Bye [preauth]
Feb 20 09:51:39 np0005625203.localdomain sshd[307432]: Disconnected from invalid user n8n 185.196.11.208 port 33094 [preauth]
Feb 20 09:51:39 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:51:39 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:39.458 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:51:40 np0005625203.localdomain sshd[307434]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:51:40 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:40.488 279640 INFO nova.compute.manager [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Took 5.29 seconds for pre_live_migration on destination host np0005625202.localdomain.
Feb 20 09:51:40 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:40.489 279640 DEBUG nova.compute.manager [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 20 09:51:40 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:40.501 279640 DEBUG nova.compute.manager [req-c4474244-5f3c-4dee-b414-a998785ef269 req-84cac1d0-49fa-4324-a6f0-37e483dc06c5 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Received event network-vif-plugged-89472e1e-6ca6-404e-8ec3-7651099fb248 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 20 09:51:40 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:40.501 279640 DEBUG oslo_concurrency.lockutils [req-c4474244-5f3c-4dee-b414-a998785ef269 req-84cac1d0-49fa-4324-a6f0-37e483dc06c5 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Acquiring lock "e6ab74b8-b495-4363-8d40-2356596c895c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:51:40 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:40.502 279640 DEBUG oslo_concurrency.lockutils [req-c4474244-5f3c-4dee-b414-a998785ef269 req-84cac1d0-49fa-4324-a6f0-37e483dc06c5 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Lock "e6ab74b8-b495-4363-8d40-2356596c895c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:51:40 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:40.502 279640 DEBUG oslo_concurrency.lockutils [req-c4474244-5f3c-4dee-b414-a998785ef269 req-84cac1d0-49fa-4324-a6f0-37e483dc06c5 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Lock "e6ab74b8-b495-4363-8d40-2356596c895c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:51:40 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:40.503 279640 DEBUG nova.compute.manager [req-c4474244-5f3c-4dee-b414-a998785ef269 req-84cac1d0-49fa-4324-a6f0-37e483dc06c5 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] No waiting events found dispatching network-vif-plugged-89472e1e-6ca6-404e-8ec3-7651099fb248 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 20 09:51:40 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:40.503 279640 WARNING nova.compute.manager [req-c4474244-5f3c-4dee-b414-a998785ef269 req-84cac1d0-49fa-4324-a6f0-37e483dc06c5 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Received unexpected event network-vif-plugged-89472e1e-6ca6-404e-8ec3-7651099fb248 for instance with vm_state active and task_state migrating.
Feb 20 09:51:40 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:40.523 279640 DEBUG nova.compute.manager [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[],block_migration=False,disk_available_mb=13312,disk_over_commit=<?>,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpm4ea8dj2',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='e6ab74b8-b495-4363-8d40-2356596c895c',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=False,migration=Migration(723674e5-bdf2-40b8-94ab-403f997348bc),old_vol_attachment_ids={},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939
Feb 20 09:51:40 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:40.528 279640 DEBUG nova.objects.instance [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Lazy-loading 'migration_context' on Instance uuid e6ab74b8-b495-4363-8d40-2356596c895c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 20 09:51:40 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:40.530 279640 DEBUG nova.virt.libvirt.driver [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639
Feb 20 09:51:40 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:40.532 279640 DEBUG nova.virt.libvirt.driver [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440
Feb 20 09:51:40 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:40.532 279640 DEBUG nova.virt.libvirt.driver [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449
Feb 20 09:51:40 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:40.550 279640 DEBUG nova.virt.libvirt.vif [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-20T09:51:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-1557569525',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='np0005625203.localdomain',hostname='tempest-liveautoblockmigrationv225test-server-1557569525',id=7,image_ref='06bd71fd-c415-45d9-b669-46209b7ca2f4',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-20T09:51:31Z,launched_on='np0005625203.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005625203.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a966116e4ddf4bdea0571a1bb751916e',ramdisk_id='',reservation_id='r-ty4xcmp0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='06bd71fd-c415-45d9-b669-46209b7ca2f4',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-425062890',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-425062890-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-20T09:51:31Z,user_data=None,user_id='0db48d5f6f5e44fc93154cf4b34a94e0',uuid=e6ab74b8-b495-4363-8d40-2356596c895c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "89472e1e-6ca6-404e-8ec3-7651099fb248", "address": "fa:16:3e:00:f6:87", "network": {"id": "82c5dcbb-e77d-4af1-bf3e-89ecf6e35192", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-813516866-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "a966116e4ddf4bdea0571a1bb751916e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap89472e1e-6c", "ovs_interfaceid": "89472e1e-6ca6-404e-8ec3-7651099fb248", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Feb 20 09:51:40 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:40.551 279640 DEBUG nova.network.os_vif_util [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Converting VIF {"id": "89472e1e-6ca6-404e-8ec3-7651099fb248", "address": "fa:16:3e:00:f6:87", "network": {"id": "82c5dcbb-e77d-4af1-bf3e-89ecf6e35192", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-813516866-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "a966116e4ddf4bdea0571a1bb751916e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap89472e1e-6c", "ovs_interfaceid": "89472e1e-6ca6-404e-8ec3-7651099fb248", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 20 09:51:40 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:40.552 279640 DEBUG nova.network.os_vif_util [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:f6:87,bridge_name='br-int',has_traffic_filtering=True,id=89472e1e-6ca6-404e-8ec3-7651099fb248,network=Network(82c5dcbb-e77d-4af1-bf3e-89ecf6e35192),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap89472e1e-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 20 09:51:40 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:40.553 279640 DEBUG nova.virt.libvirt.migration [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Updating guest XML with vif config: <interface type="ethernet">
Feb 20 09:51:40 np0005625203.localdomain nova_compute[279636]:   <mac address="fa:16:3e:00:f6:87"/>
Feb 20 09:51:40 np0005625203.localdomain nova_compute[279636]:   <model type="virtio"/>
Feb 20 09:51:40 np0005625203.localdomain nova_compute[279636]:   <driver name="vhost" rx_queue_size="512"/>
Feb 20 09:51:40 np0005625203.localdomain nova_compute[279636]:   <mtu size="1442"/>
Feb 20 09:51:40 np0005625203.localdomain nova_compute[279636]:   <target dev="tap89472e1e-6c"/>
Feb 20 09:51:40 np0005625203.localdomain nova_compute[279636]: </interface>
Feb 20 09:51:40 np0005625203.localdomain nova_compute[279636]:  _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388
Feb 20 09:51:40 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:40.554 279640 DEBUG nova.virt.libvirt.driver [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272
Feb 20 09:51:40 np0005625203.localdomain ceph-mon[296066]: pgmap v113: 177 pgs: 177 active+clean; 318 MiB data, 997 MiB used, 41 GiB / 42 GiB avail; 8.7 MiB/s rd, 2.4 MiB/s wr, 435 op/s
Feb 20 09:51:40 np0005625203.localdomain sshd[307434]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:51:41 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:41.035 279640 DEBUG nova.virt.libvirt.migration [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Feb 20 09:51:41 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:41.036 279640 INFO nova.virt.libvirt.migration [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Increasing downtime to 50 ms after 0 sec elapsed time
Feb 20 09:51:41 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:41.110 279640 INFO nova.virt.libvirt.driver [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).
Feb 20 09:51:41 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:41.614 279640 DEBUG nova.virt.libvirt.migration [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Feb 20 09:51:41 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:41.615 279640 DEBUG nova.virt.libvirt.migration [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Feb 20 09:51:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:42.119 279640 DEBUG nova.virt.libvirt.migration [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Feb 20 09:51:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:42.120 279640 DEBUG nova.virt.libvirt.migration [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Feb 20 09:51:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:42.700 279640 DEBUG nova.virt.libvirt.migration [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Current 50 elapsed 2 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512
Feb 20 09:51:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:42.700 279640 DEBUG nova.virt.libvirt.migration [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525
Feb 20 09:51:42 np0005625203.localdomain ceph-mon[296066]: pgmap v114: 177 pgs: 177 active+clean; 318 MiB data, 997 MiB used, 41 GiB / 42 GiB avail; 7.8 MiB/s rd, 2.1 MiB/s wr, 392 op/s
Feb 20 09:51:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:42.751 279640 DEBUG nova.compute.manager [req-aef9dfa8-f2da-4048-bb50-a0ef4e0e2f51 req-8a692af1-03ac-42ec-90df-c8d239cbdef5 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Received event network-changed-89472e1e-6ca6-404e-8ec3-7651099fb248 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 20 09:51:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:42.751 279640 DEBUG nova.compute.manager [req-aef9dfa8-f2da-4048-bb50-a0ef4e0e2f51 req-8a692af1-03ac-42ec-90df-c8d239cbdef5 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Refreshing instance network info cache due to event network-changed-89472e1e-6ca6-404e-8ec3-7651099fb248. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Feb 20 09:51:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:42.752 279640 DEBUG oslo_concurrency.lockutils [req-aef9dfa8-f2da-4048-bb50-a0ef4e0e2f51 req-8a692af1-03ac-42ec-90df-c8d239cbdef5 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Acquiring lock "refresh_cache-e6ab74b8-b495-4363-8d40-2356596c895c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 20 09:51:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:42.752 279640 DEBUG oslo_concurrency.lockutils [req-aef9dfa8-f2da-4048-bb50-a0ef4e0e2f51 req-8a692af1-03ac-42ec-90df-c8d239cbdef5 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Acquired lock "refresh_cache-e6ab74b8-b495-4363-8d40-2356596c895c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 20 09:51:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:42.753 279640 DEBUG nova.network.neutron [req-aef9dfa8-f2da-4048-bb50-a0ef4e0e2f51 req-8a692af1-03ac-42ec-90df-c8d239cbdef5 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Refreshing network info cache for port 89472e1e-6ca6-404e-8ec3-7651099fb248 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Feb 20 09:51:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:42.892 279640 DEBUG nova.virt.driver [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] Emitting event <LifecycleEvent: 1771581102.891834, e6ab74b8-b495-4363-8d40-2356596c895c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 20 09:51:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:42.892 279640 INFO nova.compute.manager [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] VM Paused (Lifecycle Event)
Feb 20 09:51:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:42.917 279640 DEBUG nova.compute.manager [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 20 09:51:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:42.921 279640 DEBUG nova.compute.manager [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 20 09:51:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:42.942 279640 INFO nova.compute.manager [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] During sync_power_state the instance has a pending task (migrating). Skip.
Feb 20 09:51:43 np0005625203.localdomain kernel: device tap89472e1e-6c left promiscuous mode
Feb 20 09:51:43 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581103.1000] device (tap89472e1e-6c): state change: disconnected -> unmanaged (reason 'unmanaged', sys-iface-state: 'removed')
Feb 20 09:51:43 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:51:43Z|00053|binding|INFO|Releasing lport 89472e1e-6ca6-404e-8ec3-7651099fb248 from this chassis (sb_readonly=0)
Feb 20 09:51:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:43.157 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:51:43 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:51:43Z|00054|binding|INFO|Setting lport 89472e1e-6ca6-404e-8ec3-7651099fb248 down in Southbound
Feb 20 09:51:43 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:51:43Z|00055|binding|INFO|Releasing lport 533acac2-f7ea-4ecb-b927-c6780a91a0a2 from this chassis (sb_readonly=0)
Feb 20 09:51:43 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:51:43Z|00056|binding|INFO|Setting lport 533acac2-f7ea-4ecb-b927-c6780a91a0a2 down in Southbound
Feb 20 09:51:43 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:51:43Z|00057|binding|INFO|Removing iface tap89472e1e-6c ovn-installed in OVS
Feb 20 09:51:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:43.162 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:51:43 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:51:43Z|00058|binding|INFO|Releasing lport 3bb75901-4106-4229-b593-83c4bfd80b13 from this chassis (sb_readonly=0)
Feb 20 09:51:43 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:51:43Z|00059|binding|INFO|Releasing lport b6bbb6c0-ef13-4100-9a72-6d01c8b15be6 from this chassis (sb_readonly=0)
Feb 20 09:51:43 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:43.169 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:f6:87 10.100.0.6'], port_security=['fa:16:3e:00:f6:87 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain,np0005625202.localdomain', 'activation-strategy': 'rarp', 'additional-chassis-activated': '0a83b6be-9fe2-42ef-8768-88847d97b165'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-parent-254587356', 'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'e6ab74b8-b495-4363-8d40-2356596c895c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-parent-254587356', 'neutron:project_id': 'a966116e4ddf4bdea0571a1bb751916e', 'neutron:revision_number': '8', 'neutron:security_group_ids': '07d2fe18-fbbf-4547-931e-bb55f378bade', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625203.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=30955323-f649-483f-8215-a2b2b9707d5e, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=89472e1e-6ca6-404e-8ec3-7651099fb248) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:51:43 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:43.172 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:94:06:ec 19.80.0.250'], port_security=['fa:16:3e:94:06:ec 19.80.0.250'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=['89472e1e-6ca6-404e-8ec3-7651099fb248'], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-subport-1194540045', 'neutron:cidrs': '19.80.0.250/24', 'neutron:device_id': '', 'neutron:device_owner': 'trunk:subport', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5faf2589-b0d7-486e-a56b-df0762273b7b', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-subport-1194540045', 'neutron:project_id': 'a966116e4ddf4bdea0571a1bb751916e', 'neutron:revision_number': '3', 'neutron:security_group_ids': '07d2fe18-fbbf-4547-931e-bb55f378bade', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[42], additional_encap=[], encap=[], mirror_rules=[], datapath=b8083fe1-977d-4fae-94f3-b03c7096c58a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[], logical_port=533acac2-f7ea-4ecb-b927-c6780a91a0a2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:51:43 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:43.173 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 89472e1e-6ca6-404e-8ec3-7651099fb248 in datapath 82c5dcbb-e77d-4af1-bf3e-89ecf6e35192 unbound from our chassis
Feb 20 09:51:43 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:43.178 161112 DEBUG neutron.agent.ovn.metadata.agent [-] Port 7a825d6e-f7b2-47e9-9544-3aa61f8eb23e IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Feb 20 09:51:43 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:43.178 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 82c5dcbb-e77d-4af1-bf3e-89ecf6e35192, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:51:43 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:43.180 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[71e81695-ae7a-4eb6-b413-7173413885d6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:51:43 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:43.181 161112 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192 namespace which is not needed anymore
Feb 20 09:51:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:43.188 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:51:43 np0005625203.localdomain systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000007.scope: Deactivated successfully.
Feb 20 09:51:43 np0005625203.localdomain systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000007.scope: Consumed 12.201s CPU time.
Feb 20 09:51:43 np0005625203.localdomain systemd-machined[204853]: Machine qemu-1-instance-00000007 terminated.
Feb 20 09:51:43 np0005625203.localdomain virtqemud[228198]: Unable to get XATTR trusted.libvirt.security.ref_selinux on vms/e6ab74b8-b495-4363-8d40-2356596c895c_disk: No such file or directory
Feb 20 09:51:43 np0005625203.localdomain virtqemud[228198]: Unable to get XATTR trusted.libvirt.security.ref_dac on vms/e6ab74b8-b495-4363-8d40-2356596c895c_disk: No such file or directory
Feb 20 09:51:43 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581103.2538] manager: (tap89472e1e-6c): new Tun device (/org/freedesktop/NetworkManager/Devices/20)
Feb 20 09:51:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:43.255 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:51:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:43.265 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:51:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:43.270 279640 DEBUG nova.virt.libvirt.guest [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Domain has shutdown/gone away: Requested operation is not valid: domain is not running get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688
Feb 20 09:51:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:43.271 279640 INFO nova.virt.libvirt.driver [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Migration operation has completed
Feb 20 09:51:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:43.271 279640 INFO nova.compute.manager [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] _post_live_migration() is started..
Feb 20 09:51:43 np0005625203.localdomain virtqemud[228198]: End of file while reading data: : Input/output error
Feb 20 09:51:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:43.286 279640 DEBUG nova.virt.libvirt.driver [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279
Feb 20 09:51:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:43.287 279640 DEBUG nova.virt.libvirt.driver [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327
Feb 20 09:51:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:43.287 279640 DEBUG nova.virt.libvirt.driver [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630
Feb 20 09:51:43 np0005625203.localdomain neutron-haproxy-ovnmeta-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192[307221]: [NOTICE]   (307246) : haproxy version is 2.8.14-c23fe91
Feb 20 09:51:43 np0005625203.localdomain neutron-haproxy-ovnmeta-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192[307221]: [NOTICE]   (307246) : path to executable is /usr/sbin/haproxy
Feb 20 09:51:43 np0005625203.localdomain neutron-haproxy-ovnmeta-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192[307221]: [WARNING]  (307246) : Exiting Master process...
Feb 20 09:51:43 np0005625203.localdomain neutron-haproxy-ovnmeta-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192[307221]: [ALERT]    (307246) : Current worker (307248) exited with code 143 (Terminated)
Feb 20 09:51:43 np0005625203.localdomain neutron-haproxy-ovnmeta-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192[307221]: [WARNING]  (307246) : All workers exited. Exiting... (0)
Feb 20 09:51:43 np0005625203.localdomain systemd[1]: libpod-d22f2e886340336d6f83388c0eaf09e697593b901cd5ba6e161238af92cc1c72.scope: Deactivated successfully.
Feb 20 09:51:43 np0005625203.localdomain podman[307471]: 2026-02-20 09:51:43.391433068 +0000 UTC m=+0.076035486 container died d22f2e886340336d6f83388c0eaf09e697593b901cd5ba6e161238af92cc1c72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:51:43 np0005625203.localdomain podman[307471]: 2026-02-20 09:51:43.50661632 +0000 UTC m=+0.191218718 container cleanup d22f2e886340336d6f83388c0eaf09e697593b901cd5ba6e161238af92cc1c72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 20 09:51:43 np0005625203.localdomain podman[307483]: 2026-02-20 09:51:43.520261201 +0000 UTC m=+0.117032860 container cleanup d22f2e886340336d6f83388c0eaf09e697593b901cd5ba6e161238af92cc1c72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:51:43 np0005625203.localdomain systemd[1]: libpod-conmon-d22f2e886340336d6f83388c0eaf09e697593b901cd5ba6e161238af92cc1c72.scope: Deactivated successfully.
Feb 20 09:51:43 np0005625203.localdomain podman[307499]: 2026-02-20 09:51:43.629419808 +0000 UTC m=+0.102795672 container remove d22f2e886340336d6f83388c0eaf09e697593b901cd5ba6e161238af92cc1c72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:51:43 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:43.636 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[8fcb8a62-4bb9-4d95-817b-6dccc614d8a4]: (4, ('Fri Feb 20 09:51:43 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192 (d22f2e886340336d6f83388c0eaf09e697593b901cd5ba6e161238af92cc1c72)\nd22f2e886340336d6f83388c0eaf09e697593b901cd5ba6e161238af92cc1c72\nFri Feb 20 09:51:43 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192 (d22f2e886340336d6f83388c0eaf09e697593b901cd5ba6e161238af92cc1c72)\nd22f2e886340336d6f83388c0eaf09e697593b901cd5ba6e161238af92cc1c72\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:51:43 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:43.639 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[8b3b4b47-43a0-448f-b302-3d0d6525cea2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:51:43 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:43.642 161112 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap82c5dcbb-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 20 09:51:43 np0005625203.localdomain kernel: device tap82c5dcbb-e0 left promiscuous mode
Feb 20 09:51:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:43.646 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:51:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:43.653 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:51:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:43.654 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:51:43 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:43.658 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[466ac742-6414-4924-bb6e-ba473bdc0711]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:51:43 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:43.676 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[ecd75d12-b953-4a26-9262-638cb8da5081]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:51:43 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:43.678 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[1ca78ffb-9fa6-479f-971c-9716bd4e7868]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:51:43 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:43.695 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[be6851a8-a707-41f2-a19e-95b2a4f0cab5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': 
[['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1165077, 'reachable_time': 18272, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 
'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307520, 'error': None, 'target': 'ovnmeta-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:51:43 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:43.704 161363 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 20 09:51:43 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:43.705 161363 DEBUG oslo.privsep.daemon [-] privsep: reply[e08fce60-bfb7-40fc-af18-3f5b332644ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:51:43 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:43.707 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 533acac2-f7ea-4ecb-b927-c6780a91a0a2 in datapath 5faf2589-b0d7-486e-a56b-df0762273b7b unbound from our chassis
Feb 20 09:51:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:43.711 279640 DEBUG nova.network.neutron [req-aef9dfa8-f2da-4048-bb50-a0ef4e0e2f51 req-8a692af1-03ac-42ec-90df-c8d239cbdef5 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Updated VIF entry in instance network info cache for port 89472e1e-6ca6-404e-8ec3-7651099fb248. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Feb 20 09:51:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:43.712 279640 DEBUG nova.network.neutron [req-aef9dfa8-f2da-4048-bb50-a0ef4e0e2f51 req-8a692af1-03ac-42ec-90df-c8d239cbdef5 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Updating instance_info_cache with network_info: [{"id": "89472e1e-6ca6-404e-8ec3-7651099fb248", "address": "fa:16:3e:00:f6:87", "network": {"id": "82c5dcbb-e77d-4af1-bf3e-89ecf6e35192", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-813516866-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "a966116e4ddf4bdea0571a1bb751916e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89472e1e-6c", "ovs_interfaceid": "89472e1e-6ca6-404e-8ec3-7651099fb248", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"migrating_to": "np0005625202.localdomain"}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 20 09:51:43 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:43.713 161112 DEBUG neutron.agent.ovn.metadata.agent [-] Port f483cdd1-50ea-4e33-bf71-c6b2770be63f IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Feb 20 09:51:43 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:43.714 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5faf2589-b0d7-486e-a56b-df0762273b7b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:51:43 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:43.715 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[7b05e620-5935-496e-bd38-8d5aefab8440]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:51:43 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:43.716 161112 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5faf2589-b0d7-486e-a56b-df0762273b7b namespace which is not needed anymore
Feb 20 09:51:43 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:51:43.725 2 INFO neutron.agent.securitygroups_rpc [req-bc88a03e-b48b-4063-bf3f-e91bcc37d72d req-9e63afcc-d40a-4b2c-a3aa-f230d65e4db2 107ab22f7bfc4441a4ab7a417331cdb2 0e7d3e1cfe9f4e4d8451c6f0b8be3a29 - - default default] Security group rule updated ['d4aeef42-5959-493a-9cfc-ec0d9adb0b00']
Feb 20 09:51:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:43.770 279640 DEBUG oslo_concurrency.lockutils [req-aef9dfa8-f2da-4048-bb50-a0ef4e0e2f51 req-8a692af1-03ac-42ec-90df-c8d239cbdef5 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Releasing lock "refresh_cache-e6ab74b8-b495-4363-8d40-2356596c895c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 20 09:51:43 np0005625203.localdomain neutron-haproxy-ovnmeta-5faf2589-b0d7-486e-a56b-df0762273b7b[307399]: [NOTICE]   (307403) : haproxy version is 2.8.14-c23fe91
Feb 20 09:51:43 np0005625203.localdomain neutron-haproxy-ovnmeta-5faf2589-b0d7-486e-a56b-df0762273b7b[307399]: [NOTICE]   (307403) : path to executable is /usr/sbin/haproxy
Feb 20 09:51:43 np0005625203.localdomain neutron-haproxy-ovnmeta-5faf2589-b0d7-486e-a56b-df0762273b7b[307399]: [WARNING]  (307403) : Exiting Master process...
Feb 20 09:51:43 np0005625203.localdomain neutron-haproxy-ovnmeta-5faf2589-b0d7-486e-a56b-df0762273b7b[307399]: [WARNING]  (307403) : Exiting Master process...
Feb 20 09:51:43 np0005625203.localdomain neutron-haproxy-ovnmeta-5faf2589-b0d7-486e-a56b-df0762273b7b[307399]: [ALERT]    (307403) : Current worker (307405) exited with code 143 (Terminated)
Feb 20 09:51:43 np0005625203.localdomain neutron-haproxy-ovnmeta-5faf2589-b0d7-486e-a56b-df0762273b7b[307399]: [WARNING]  (307403) : All workers exited. Exiting... (0)
Feb 20 09:51:43 np0005625203.localdomain systemd[1]: libpod-b100c44f51474fe887f58a656278c1a0b24a655fd716bf19e5637253258e4bf6.scope: Deactivated successfully.
Feb 20 09:51:43 np0005625203.localdomain podman[307539]: 2026-02-20 09:51:43.894497174 +0000 UTC m=+0.072418995 container died b100c44f51474fe887f58a656278c1a0b24a655fd716bf19e5637253258e4bf6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5faf2589-b0d7-486e-a56b-df0762273b7b, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 20 09:51:43 np0005625203.localdomain podman[307539]: 2026-02-20 09:51:43.930850495 +0000 UTC m=+0.108772306 container cleanup b100c44f51474fe887f58a656278c1a0b24a655fd716bf19e5637253258e4bf6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5faf2589-b0d7-486e-a56b-df0762273b7b, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2)
Feb 20 09:51:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:43.944 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:51:43 np0005625203.localdomain podman[307553]: 2026-02-20 09:51:43.960845169 +0000 UTC m=+0.064543511 container cleanup b100c44f51474fe887f58a656278c1a0b24a655fd716bf19e5637253258e4bf6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5faf2589-b0d7-486e-a56b-df0762273b7b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Feb 20 09:51:43 np0005625203.localdomain systemd[1]: libpod-conmon-b100c44f51474fe887f58a656278c1a0b24a655fd716bf19e5637253258e4bf6.scope: Deactivated successfully.
Feb 20 09:51:44 np0005625203.localdomain podman[307567]: 2026-02-20 09:51:44.014813454 +0000 UTC m=+0.065012836 container remove b100c44f51474fe887f58a656278c1a0b24a655fd716bf19e5637253258e4bf6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5faf2589-b0d7-486e-a56b-df0762273b7b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 20 09:51:44 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:44.019 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[f8ec8c3d-2301-4d9e-be81-f8f04fc4b589]: (4, ('Fri Feb 20 09:51:43 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-5faf2589-b0d7-486e-a56b-df0762273b7b (b100c44f51474fe887f58a656278c1a0b24a655fd716bf19e5637253258e4bf6)\nb100c44f51474fe887f58a656278c1a0b24a655fd716bf19e5637253258e4bf6\nFri Feb 20 09:51:43 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-5faf2589-b0d7-486e-a56b-df0762273b7b (b100c44f51474fe887f58a656278c1a0b24a655fd716bf19e5637253258e4bf6)\nb100c44f51474fe887f58a656278c1a0b24a655fd716bf19e5637253258e4bf6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:51:44 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:44.021 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[ac39818b-4a6f-4759-9bb1-3d870c18ee7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:51:44 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:44.023 161112 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5faf2589-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 20 09:51:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:44.025 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:51:44 np0005625203.localdomain kernel: device tap5faf2589-b0 left promiscuous mode
Feb 20 09:51:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:44.037 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:51:44 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:44.041 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[1130b449-99d0-4bc0-9897-a0d372faeeb0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:51:44 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:44.056 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[7d7a506b-8c8d-44bf-8e36-47eb64caa3de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:51:44 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:44.057 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[39fee024-689e-4b0c-9724-3cffc0583c02]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:51:44 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:44.073 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[8f7c907e-d362-4d6d-af1d-151c84ceba76]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_QDISC', 'noqueue'], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': 
[['IFLA_XDP_ATTACHED', None]]}], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 1, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 1165194, 'reachable_time': 24330, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 37, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 
'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}]], 'header': {'length': 1356, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307590, 'error': None, 'target': 'ovnmeta-5faf2589-b0d7-486e-a56b-df0762273b7b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:51:44 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:44.075 161363 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5faf2589-b0d7-486e-a56b-df0762273b7b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Feb 20 09:51:44 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:44.076 161363 DEBUG oslo.privsep.daemon [-] privsep: reply[2c9b1720-8c7a-410d-ab16-23c33410018f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:51:44 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:51:44 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-e58fb35e63672d32b606647541beae7760ee9489eb7796bb9fcd6b658c338b83-merged.mount: Deactivated successfully.
Feb 20 09:51:44 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b100c44f51474fe887f58a656278c1a0b24a655fd716bf19e5637253258e4bf6-userdata-shm.mount: Deactivated successfully.
Feb 20 09:51:44 np0005625203.localdomain systemd[1]: run-netns-ovnmeta\x2d5faf2589\x2db0d7\x2d486e\x2da56b\x2ddf0762273b7b.mount: Deactivated successfully.
Feb 20 09:51:44 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-10f7aee7f279e5a09af144cb9ca9cd6f4ccd347285934878a16ed70fc4dc46a3-merged.mount: Deactivated successfully.
Feb 20 09:51:44 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d22f2e886340336d6f83388c0eaf09e697593b901cd5ba6e161238af92cc1c72-userdata-shm.mount: Deactivated successfully.
Feb 20 09:51:44 np0005625203.localdomain systemd[1]: run-netns-ovnmeta\x2d82c5dcbb\x2de77d\x2d4af1\x2dbf3e\x2d89ecf6e35192.mount: Deactivated successfully.
Feb 20 09:51:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:44.465 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:51:44 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:51:44.702 2 INFO neutron.agent.securitygroups_rpc [req-d0af9ee5-c34c-498a-a79b-d6b681e80e4a req-6395f2ff-4ffd-4bf4-8fd0-e88e3c18ce7a 107ab22f7bfc4441a4ab7a417331cdb2 0e7d3e1cfe9f4e4d8451c6f0b8be3a29 - - default default] Security group rule updated ['f1d2b747-b5b9-4577-9543-577b07c94aaa']
Feb 20 09:51:44 np0005625203.localdomain ceph-mon[296066]: pgmap v115: 177 pgs: 177 active+clean; 318 MiB data, 997 MiB used, 41 GiB / 42 GiB avail; 7.0 MiB/s rd, 1.9 MiB/s wr, 348 op/s
Feb 20 09:51:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:44.867 279640 DEBUG nova.compute.manager [req-f5396600-f3c3-4f6a-b3e1-68d83464c835 req-214608c5-88ff-4c3a-a9f7-23dd846cc6e1 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Received event network-vif-unplugged-89472e1e-6ca6-404e-8ec3-7651099fb248 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 20 09:51:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:44.867 279640 DEBUG oslo_concurrency.lockutils [req-f5396600-f3c3-4f6a-b3e1-68d83464c835 req-214608c5-88ff-4c3a-a9f7-23dd846cc6e1 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Acquiring lock "e6ab74b8-b495-4363-8d40-2356596c895c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:51:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:44.868 279640 DEBUG oslo_concurrency.lockutils [req-f5396600-f3c3-4f6a-b3e1-68d83464c835 req-214608c5-88ff-4c3a-a9f7-23dd846cc6e1 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Lock "e6ab74b8-b495-4363-8d40-2356596c895c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:51:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:44.868 279640 DEBUG oslo_concurrency.lockutils [req-f5396600-f3c3-4f6a-b3e1-68d83464c835 req-214608c5-88ff-4c3a-a9f7-23dd846cc6e1 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Lock "e6ab74b8-b495-4363-8d40-2356596c895c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:51:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:44.868 279640 DEBUG nova.compute.manager [req-f5396600-f3c3-4f6a-b3e1-68d83464c835 req-214608c5-88ff-4c3a-a9f7-23dd846cc6e1 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] No waiting events found dispatching network-vif-unplugged-89472e1e-6ca6-404e-8ec3-7651099fb248 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 20 09:51:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:44.869 279640 DEBUG nova.compute.manager [req-f5396600-f3c3-4f6a-b3e1-68d83464c835 req-214608c5-88ff-4c3a-a9f7-23dd846cc6e1 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Received event network-vif-unplugged-89472e1e-6ca6-404e-8ec3-7651099fb248 for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Feb 20 09:51:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:45.148 279640 DEBUG nova.network.neutron [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Activated binding for port 89472e1e-6ca6-404e-8ec3-7651099fb248 and host np0005625202.localdomain migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181
Feb 20 09:51:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:45.149 279640 DEBUG nova.compute.manager [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "89472e1e-6ca6-404e-8ec3-7651099fb248", "address": "fa:16:3e:00:f6:87", "network": {"id": "82c5dcbb-e77d-4af1-bf3e-89ecf6e35192", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-813516866-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "a966116e4ddf4bdea0571a1bb751916e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89472e1e-6c", "ovs_interfaceid": "89472e1e-6ca6-404e-8ec3-7651099fb248", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326
Feb 20 09:51:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:45.150 279640 DEBUG nova.virt.libvirt.vif [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-02-20T09:51:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-1557569525',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='np0005625203.localdomain',hostname='tempest-liveautoblockmigrationv225test-server-1557569525',id=7,image_ref='06bd71fd-c415-45d9-b669-46209b7ca2f4',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-02-20T09:51:31Z,launched_on='np0005625203.localdomain',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='np0005625203.localdomain',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a966116e4ddf4bdea0571a1bb751916e',ramdisk_id='',reservation_id='r-ty4xcmp0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='06bd71fd-c415-45d9-b669-46209b7ca2f4',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',imag
e_min_ram='0',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-425062890',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-425062890-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-02-20T09:51:33Z,user_data=None,user_id='0db48d5f6f5e44fc93154cf4b34a94e0',uuid=e6ab74b8-b495-4363-8d40-2356596c895c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "89472e1e-6ca6-404e-8ec3-7651099fb248", "address": "fa:16:3e:00:f6:87", "network": {"id": "82c5dcbb-e77d-4af1-bf3e-89ecf6e35192", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-813516866-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "a966116e4ddf4bdea0571a1bb751916e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89472e1e-6c", "ovs_interfaceid": "89472e1e-6ca6-404e-8ec3-7651099fb248", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Feb 20 09:51:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:45.150 279640 DEBUG nova.network.os_vif_util [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Converting VIF {"id": "89472e1e-6ca6-404e-8ec3-7651099fb248", "address": "fa:16:3e:00:f6:87", "network": {"id": "82c5dcbb-e77d-4af1-bf3e-89ecf6e35192", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-813516866-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true, "dhcp_server": "10.100.0.2"}}], "meta": {"injected": false, "tenant_id": "a966116e4ddf4bdea0571a1bb751916e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap89472e1e-6c", "ovs_interfaceid": "89472e1e-6ca6-404e-8ec3-7651099fb248", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Feb 20 09:51:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:45.151 279640 DEBUG nova.network.os_vif_util [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:f6:87,bridge_name='br-int',has_traffic_filtering=True,id=89472e1e-6ca6-404e-8ec3-7651099fb248,network=Network(82c5dcbb-e77d-4af1-bf3e-89ecf6e35192),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap89472e1e-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Feb 20 09:51:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:45.152 279640 DEBUG os_vif [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:f6:87,bridge_name='br-int',has_traffic_filtering=True,id=89472e1e-6ca6-404e-8ec3-7651099fb248,network=Network(82c5dcbb-e77d-4af1-bf3e-89ecf6e35192),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap89472e1e-6c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Feb 20 09:51:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:45.155 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:51:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:45.156 279640 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap89472e1e-6c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 20 09:51:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:45.157 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:51:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:45.161 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 09:51:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:45.164 279640 INFO os_vif [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:f6:87,bridge_name='br-int',has_traffic_filtering=True,id=89472e1e-6ca6-404e-8ec3-7651099fb248,network=Network(82c5dcbb-e77d-4af1-bf3e-89ecf6e35192),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap89472e1e-6c')
Feb 20 09:51:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:45.165 279640 DEBUG oslo_concurrency.lockutils [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:51:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:45.165 279640 DEBUG oslo_concurrency.lockutils [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:51:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:45.166 279640 DEBUG oslo_concurrency.lockutils [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:51:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:45.166 279640 DEBUG nova.compute.manager [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349
Feb 20 09:51:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:45.167 279640 INFO nova.virt.libvirt.driver [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Deleting instance files /var/lib/nova/instances/e6ab74b8-b495-4363-8d40-2356596c895c_del
Feb 20 09:51:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:45.167 279640 INFO nova.virt.libvirt.driver [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Deletion of /var/lib/nova/instances/e6ab74b8-b495-4363-8d40-2356596c895c_del complete
Feb 20 09:51:46 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:51:46.585 2 INFO neutron.agent.securitygroups_rpc [req-d0a4da6f-de87-4709-b506-6d507f2fa68b req-a4293e8b-cdcb-4f0f-b9af-77766c0f126a 107ab22f7bfc4441a4ab7a417331cdb2 0e7d3e1cfe9f4e4d8451c6f0b8be3a29 - - default default] Security group rule updated ['b0641abe-7ec2-4391-9e24-125339c7b7ee']
Feb 20 09:51:46 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:51:46 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:51:46 np0005625203.localdomain podman[307591]: 2026-02-20 09:51:46.775361272 +0000 UTC m=+0.089670536 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 20 09:51:46 np0005625203.localdomain podman[307591]: 2026-02-20 09:51:46.784320418 +0000 UTC m=+0.098629642 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 20 09:51:46 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 09:51:46 np0005625203.localdomain ceph-mon[296066]: pgmap v116: 177 pgs: 177 active+clean; 360 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 1.4 MiB/s rd, 4.8 MiB/s wr, 155 op/s
Feb 20 09:51:46 np0005625203.localdomain podman[307592]: 2026-02-20 09:51:46.881489295 +0000 UTC m=+0.190512686 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 20 09:51:46 np0005625203.localdomain podman[307592]: 2026-02-20 09:51:46.921437487 +0000 UTC m=+0.230460828 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 20 09:51:46 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:51:47 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:51:47.322 2 INFO neutron.agent.securitygroups_rpc [req-bca898f3-80d6-4116-8407-48ccb221c91a req-faa58b52-ff7a-4794-95e2-54e55cdad610 107ab22f7bfc4441a4ab7a417331cdb2 0e7d3e1cfe9f4e4d8451c6f0b8be3a29 - - default default] Security group rule updated ['57ce2b3f-bfcc-424f-be8f-efa4d8d83e67']
Feb 20 09:51:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:47.359 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:51:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:47.886 279640 DEBUG nova.compute.manager [req-9d7f792d-a243-4dec-909b-a52c138e0aa0 req-0f1d2ad0-e550-45cf-86a1-1fbc1a793e53 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Received event network-vif-plugged-89472e1e-6ca6-404e-8ec3-7651099fb248 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Feb 20 09:51:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:47.886 279640 DEBUG oslo_concurrency.lockutils [req-9d7f792d-a243-4dec-909b-a52c138e0aa0 req-0f1d2ad0-e550-45cf-86a1-1fbc1a793e53 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Acquiring lock "e6ab74b8-b495-4363-8d40-2356596c895c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:51:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:47.887 279640 DEBUG oslo_concurrency.lockutils [req-9d7f792d-a243-4dec-909b-a52c138e0aa0 req-0f1d2ad0-e550-45cf-86a1-1fbc1a793e53 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Lock "e6ab74b8-b495-4363-8d40-2356596c895c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:51:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:47.887 279640 DEBUG oslo_concurrency.lockutils [req-9d7f792d-a243-4dec-909b-a52c138e0aa0 req-0f1d2ad0-e550-45cf-86a1-1fbc1a793e53 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] Lock "e6ab74b8-b495-4363-8d40-2356596c895c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:51:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:47.887 279640 DEBUG nova.compute.manager [req-9d7f792d-a243-4dec-909b-a52c138e0aa0 req-0f1d2ad0-e550-45cf-86a1-1fbc1a793e53 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] No waiting events found dispatching network-vif-plugged-89472e1e-6ca6-404e-8ec3-7651099fb248 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Feb 20 09:51:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:47.888 279640 WARNING nova.compute.manager [req-9d7f792d-a243-4dec-909b-a52c138e0aa0 req-0f1d2ad0-e550-45cf-86a1-1fbc1a793e53 d4446b63864e47aebd18955a38393018 f6685cdb0ff24cddaeb987c63c89eafb - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Received unexpected event network-vif-plugged-89472e1e-6ca6-404e-8ec3-7651099fb248 for instance with vm_state active and task_state migrating.
Feb 20 09:51:48 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:51:48.229 2 INFO neutron.agent.securitygroups_rpc [req-9dee97d4-d9a8-4ee1-93af-ecec75edb6d8 req-b55f0bec-8773-478c-b205-7d4f6dd0e50e 107ab22f7bfc4441a4ab7a417331cdb2 0e7d3e1cfe9f4e4d8451c6f0b8be3a29 - - default default] Security group rule updated ['35812fcc-3257-4b87-b011-2e502647727b']
Feb 20 09:51:48 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:51:48.494 2 INFO neutron.agent.securitygroups_rpc [req-3e7d9994-1b77-4e20-ab05-14f19dff3953 req-ef6c6054-1e67-499f-b1bf-b8ae592974a9 107ab22f7bfc4441a4ab7a417331cdb2 0e7d3e1cfe9f4e4d8451c6f0b8be3a29 - - default default] Security group rule updated ['35812fcc-3257-4b87-b011-2e502647727b']
Feb 20 09:51:48 np0005625203.localdomain ceph-mon[296066]: pgmap v117: 177 pgs: 177 active+clean; 360 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 1.4 MiB/s rd, 4.8 MiB/s wr, 155 op/s
Feb 20 09:51:48 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:48.974 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:51:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:49.099 279640 DEBUG oslo_concurrency.lockutils [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Acquiring lock "e6ab74b8-b495-4363-8d40-2356596c895c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:51:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:49.099 279640 DEBUG oslo_concurrency.lockutils [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Lock "e6ab74b8-b495-4363-8d40-2356596c895c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:51:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:49.100 279640 DEBUG oslo_concurrency.lockutils [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Lock "e6ab74b8-b495-4363-8d40-2356596c895c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:51:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:49.119 279640 DEBUG oslo_concurrency.lockutils [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:51:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:49.119 279640 DEBUG oslo_concurrency.lockutils [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:51:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:49.120 279640 DEBUG oslo_concurrency.lockutils [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:51:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:49.120 279640 DEBUG nova.compute.resource_tracker [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Auditing locally available compute resources for np0005625203.localdomain (node: np0005625203.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 20 09:51:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:49.121 279640 DEBUG oslo_concurrency.processutils [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:51:49 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:51:49.170 2 INFO neutron.agent.securitygroups_rpc [req-fdb7e361-ff4f-4f47-a1a0-e5e8ae6f1fbe req-da6cfbf4-9203-474f-ad26-64056962735b 107ab22f7bfc4441a4ab7a417331cdb2 0e7d3e1cfe9f4e4d8451c6f0b8be3a29 - - default default] Security group rule updated ['35812fcc-3257-4b87-b011-2e502647727b']
Feb 20 09:51:49 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:51:49 np0005625203.localdomain snmpd[68076]: empty variable list in _query
Feb 20 09:51:49 np0005625203.localdomain snmpd[68076]: empty variable list in _query
Feb 20 09:51:49 np0005625203.localdomain snmpd[68076]: empty variable list in _query
Feb 20 09:51:49 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 20 09:51:49 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/523973223' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:51:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:49.567 279640 DEBUG oslo_concurrency.processutils [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:51:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:49.787 279640 WARNING nova.virt.libvirt.driver [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 20 09:51:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:49.789 279640 DEBUG nova.compute.resource_tracker [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Hypervisor/Node resource view: name=np0005625203.localdomain free_ram=11764MB free_disk=41.43628692626953GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 20 09:51:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:49.789 279640 DEBUG oslo_concurrency.lockutils [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:51:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:49.790 279640 DEBUG oslo_concurrency.lockutils [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:51:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:49.829 279640 DEBUG nova.compute.resource_tracker [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Migration for instance e6ab74b8-b495-4363-8d40-2356596c895c refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903
Feb 20 09:51:49 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e104 e104: 6 total, 6 up, 6 in
Feb 20 09:51:49 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/523973223' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:51:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:49.876 279640 DEBUG nova.compute.resource_tracker [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491
Feb 20 09:51:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:49.923 279640 DEBUG nova.compute.resource_tracker [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Migration 723674e5-bdf2-40b8-94ab-403f997348bc is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640
Feb 20 09:51:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:49.923 279640 DEBUG nova.compute.resource_tracker [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 20 09:51:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:49.924 279640 DEBUG nova.compute.resource_tracker [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Final resource view: name=np0005625203.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 20 09:51:49 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:51:49.953 2 INFO neutron.agent.securitygroups_rpc [None req-3fe948ea-c8e6-429c-836a-702342b0e4ac 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Security group rule updated ['4439e19b-bf91-4420-aff1-6854f961fef4']
Feb 20 09:51:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:50.000 279640 DEBUG oslo_concurrency.processutils [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:51:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:50.190 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:51:50 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:51:50.275 2 INFO neutron.agent.securitygroups_rpc [None req-e1dc84f3-2fa9-4dac-a092-2cb427ae3321 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Security group rule updated ['4439e19b-bf91-4420-aff1-6854f961fef4']
Feb 20 09:51:50 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 20 09:51:50 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2939242926' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:51:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:50.447 279640 DEBUG oslo_concurrency.processutils [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:51:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:50.456 279640 DEBUG nova.compute.provider_tree [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Updating inventory in ProviderTree for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e with inventory: {'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}, 'DISK_GB': {'total': 41, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 20 09:51:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:50.506 279640 ERROR nova.scheduler.client.report [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] [req-d837a7d3-3646-4284-ae10-925797437bb3] Failed to update inventory to [{'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}, 'DISK_GB': {'total': 41, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 1}}] for resource provider with UUID e5d5157a-2df2-4f51-b5fb-cd2da3a8584e.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-d837a7d3-3646-4284-ae10-925797437bb3"}]}
Feb 20 09:51:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:50.532 279640 DEBUG nova.scheduler.client.report [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Refreshing inventories for resource provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 20 09:51:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:50.562 279640 DEBUG nova.scheduler.client.report [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Updating ProviderTree inventory for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 20 09:51:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:50.562 279640 DEBUG nova.compute.provider_tree [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Updating inventory in ProviderTree for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 0, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 20 09:51:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:50.596 279640 DEBUG nova.scheduler.client.report [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Refreshing aggregate associations for resource provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 20 09:51:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:50.622 279640 DEBUG nova.scheduler.client.report [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Refreshing trait associations for resource provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e, traits: COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_ABM,HW_CPU_X86_SSE42,HW_CPU_X86_MMX,HW_CPU_X86_AMD_SVM,HW_CPU_X86_BMI,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VOLUME_EXTEND,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSSE3,HW_CPU_X86_CLMUL,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE4A,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_F16C,HW_CPU_X86_SSE2,COMPUTE_NODE,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 20 09:51:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:50.669 279640 DEBUG oslo_concurrency.processutils [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:51:50 np0005625203.localdomain ceph-mon[296066]: pgmap v118: 177 pgs: 177 active+clean; 380 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 1.8 MiB/s rd, 4.3 MiB/s wr, 199 op/s
Feb 20 09:51:50 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/552899203' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:51:50 np0005625203.localdomain ceph-mon[296066]: osdmap e104: 6 total, 6 up, 6 in
Feb 20 09:51:50 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/2939242926' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:51:50 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/3199515391' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:51:51 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 20 09:51:51 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/997567679' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:51:51 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:51.139 279640 DEBUG oslo_concurrency.processutils [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:51:51 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:51.148 279640 DEBUG nova.compute.provider_tree [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Updating inventory in ProviderTree for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e with inventory: {'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}, 'DISK_GB': {'total': 41, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 20 09:51:51 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:51.197 279640 DEBUG nova.scheduler.client.report [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Updated inventory for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e with generation 5 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 15738, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}, 'DISK_GB': {'total': 41, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Feb 20 09:51:51 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:51.198 279640 DEBUG nova.compute.provider_tree [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Updating resource provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e generation from 5 to 6 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Feb 20 09:51:51 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:51.199 279640 DEBUG nova.compute.provider_tree [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Updating inventory in ProviderTree for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e with inventory: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 20 09:51:51 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:51.225 279640 DEBUG nova.compute.resource_tracker [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Compute_service record updated for np0005625203.localdomain:np0005625203.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 20 09:51:51 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:51.225 279640 DEBUG oslo_concurrency.lockutils [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.435s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:51:51 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:51.233 279640 INFO nova.compute.manager [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Migrating instance to np0005625202.localdomain finished successfully.
Feb 20 09:51:51 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:51.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:51:51 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:51.383 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:51:51 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:51.384 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:51:51 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:51.384 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:51:51 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:51.385 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Auditing locally available compute resources for np0005625203.localdomain (node: np0005625203.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 20 09:51:51 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:51.385 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:51:51 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:51.406 279640 INFO nova.scheduler.client.report [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] Deleted allocation for migration 723674e5-bdf2-40b8-94ab-403f997348bc
Feb 20 09:51:51 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:51.407 279640 DEBUG nova.virt.libvirt.driver [None req-15f83be2-76c1-4614-88f6-538fe2682745 f01e99be64c241eaae4ace2f74d422df 5605ba7cb0df4223b48ebf8a1894cdf1 - - default default] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662
Feb 20 09:51:51 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e105 e105: 6 total, 6 up, 6 in
Feb 20 09:51:51 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/997567679' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:51:51 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 20 09:51:51 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/804935202' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:51:51 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:51.930 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:51:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:52.125 279640 WARNING nova.virt.libvirt.driver [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 20 09:51:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:52.126 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Hypervisor/Node resource view: name=np0005625203.localdomain free_ram=11762MB free_disk=41.428184509277344GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 20 09:51:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:52.126 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:51:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:52.126 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:51:52 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:51:52.159 262775 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:51:11Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4e63f70>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4e63e80>], id=89472e1e-6ca6-404e-8ec3-7651099fb248, ip_allocation=immediate, mac_address=fa:16:3e:00:f6:87, name=tempest-parent-254587356, network_id=82c5dcbb-e77d-4af1-bf3e-89ecf6e35192, port_security_enabled=True, project_id=a966116e4ddf4bdea0571a1bb751916e, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=14, security_groups=['07d2fe18-fbbf-4547-931e-bb55f378bade'], standard_attr_id=498, status=DOWN, tags=[], tenant_id=a966116e4ddf4bdea0571a1bb751916e, trunk_details=sub_ports=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4e63bb0>], trunk_id=9784f938-17c7-4c74-9956-6e0be6058c3d, updated_at=2026-02-20T09:51:51Z on network 82c5dcbb-e77d-4af1-bf3e-89ecf6e35192
Feb 20 09:51:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:52.195 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 20 09:51:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:52.196 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Final resource view: name=np0005625203.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 20 09:51:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:52.213 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:51:52 np0005625203.localdomain dnsmasq[306019]: read /var/lib/neutron/dhcp/82c5dcbb-e77d-4af1-bf3e-89ecf6e35192/addn_hosts - 2 addresses
Feb 20 09:51:52 np0005625203.localdomain dnsmasq-dhcp[306019]: read /var/lib/neutron/dhcp/82c5dcbb-e77d-4af1-bf3e-89ecf6e35192/host
Feb 20 09:51:52 np0005625203.localdomain dnsmasq-dhcp[306019]: read /var/lib/neutron/dhcp/82c5dcbb-e77d-4af1-bf3e-89ecf6e35192/opts
Feb 20 09:51:52 np0005625203.localdomain systemd[1]: tmp-crun.lZmdEs.mount: Deactivated successfully.
Feb 20 09:51:52 np0005625203.localdomain podman[307743]: 2026-02-20 09:51:52.436364125 +0000 UTC m=+0.090786771 container kill 4e1d46874fb2ac789dfa58942ad624227043ba69613dff4939453948ff9a5d2f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 20 09:51:52 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 20 09:51:52 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2232832379' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:51:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:52.677 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:51:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:52.683 279640 DEBUG nova.compute.provider_tree [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Inventory has not changed in ProviderTree for provider: e5d5157a-2df2-4f51-b5fb-cd2da3a8584e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 20 09:51:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:52.700 279640 DEBUG nova.scheduler.client.report [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Inventory has not changed for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 20 09:51:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:52.703 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Compute_service record updated for np0005625203.localdomain:np0005625203.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 20 09:51:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:52.703 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.577s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:51:52 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:51:52.711 262775 INFO neutron.agent.dhcp.agent [None req-9714a454-d17f-4b42-8e73-360c0e4e1625 - - - - - -] DHCP configuration for ports {'89472e1e-6ca6-404e-8ec3-7651099fb248'} is completed
Feb 20 09:51:52 np0005625203.localdomain ceph-mon[296066]: pgmap v120: 177 pgs: 177 active+clean; 385 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 1.4 MiB/s rd, 5.1 MiB/s wr, 228 op/s
Feb 20 09:51:52 np0005625203.localdomain ceph-mon[296066]: osdmap e105: 6 total, 6 up, 6 in
Feb 20 09:51:52 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/804935202' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:51:52 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/4266218097' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:51:52 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/2232832379' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:51:52 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e106 e106: 6 total, 6 up, 6 in
Feb 20 09:51:53 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:51:53.671 2 INFO neutron.agent.securitygroups_rpc [None req-2028af40-368e-4b25-90de-8401d53be72c 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Security group member updated ['07d2fe18-fbbf-4547-931e-bb55f378bade']
Feb 20 09:51:53 np0005625203.localdomain ceph-mon[296066]: osdmap e106: 6 total, 6 up, 6 in
Feb 20 09:51:53 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/3716024828' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:51:53 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:53.976 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:51:54 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:51:54 np0005625203.localdomain dnsmasq[306596]: read /var/lib/neutron/dhcp/5faf2589-b0d7-486e-a56b-df0762273b7b/addn_hosts - 0 addresses
Feb 20 09:51:54 np0005625203.localdomain dnsmasq-dhcp[306596]: read /var/lib/neutron/dhcp/5faf2589-b0d7-486e-a56b-df0762273b7b/host
Feb 20 09:51:54 np0005625203.localdomain podman[307803]: 2026-02-20 09:51:54.621000452 +0000 UTC m=+0.066928906 container kill ea9344bdba2d4ba253b851a086248ae71526326c594f0cec261b73ff532a4e53 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5faf2589-b0d7-486e-a56b-df0762273b7b, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:51:54 np0005625203.localdomain dnsmasq-dhcp[306596]: read /var/lib/neutron/dhcp/5faf2589-b0d7-486e-a56b-df0762273b7b/opts
Feb 20 09:51:54 np0005625203.localdomain ceph-mon[296066]: pgmap v123: 177 pgs: 177 active+clean; 385 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 1.2 MiB/s rd, 624 KiB/s wr, 163 op/s
Feb 20 09:51:54 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/151132608' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:51:55 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:55.193 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:51:55 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:55.210 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'c2:c2:14', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '46:d0:69:88:97:47'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:51:55 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:55.209 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:51:55 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:55.211 161112 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 20 09:51:55 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:51:55 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:55.704 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:51:55 np0005625203.localdomain systemd[1]: tmp-crun.lYjlva.mount: Deactivated successfully.
Feb 20 09:51:55 np0005625203.localdomain podman[307826]: 2026-02-20 09:51:55.785162005 +0000 UTC m=+0.092370919 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 20 09:51:55 np0005625203.localdomain podman[307826]: 2026-02-20 09:51:55.814675806 +0000 UTC m=+0.121884690 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Feb 20 09:51:55 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:51:55 np0005625203.localdomain dnsmasq[306596]: exiting on receipt of SIGTERM
Feb 20 09:51:55 np0005625203.localdomain podman[307859]: 2026-02-20 09:51:55.904328681 +0000 UTC m=+0.063702066 container kill ea9344bdba2d4ba253b851a086248ae71526326c594f0cec261b73ff532a4e53 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5faf2589-b0d7-486e-a56b-df0762273b7b, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 20 09:51:55 np0005625203.localdomain systemd[1]: libpod-ea9344bdba2d4ba253b851a086248ae71526326c594f0cec261b73ff532a4e53.scope: Deactivated successfully.
Feb 20 09:51:55 np0005625203.localdomain podman[307873]: 2026-02-20 09:51:55.989463226 +0000 UTC m=+0.064859151 container died ea9344bdba2d4ba253b851a086248ae71526326c594f0cec261b73ff532a4e53 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5faf2589-b0d7-486e-a56b-df0762273b7b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Feb 20 09:51:56 np0005625203.localdomain podman[307873]: 2026-02-20 09:51:56.022586498 +0000 UTC m=+0.097982353 container cleanup ea9344bdba2d4ba253b851a086248ae71526326c594f0cec261b73ff532a4e53 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5faf2589-b0d7-486e-a56b-df0762273b7b, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:51:56 np0005625203.localdomain systemd[1]: libpod-conmon-ea9344bdba2d4ba253b851a086248ae71526326c594f0cec261b73ff532a4e53.scope: Deactivated successfully.
Feb 20 09:51:56 np0005625203.localdomain podman[307874]: 2026-02-20 09:51:56.059344392 +0000 UTC m=+0.132168237 container remove ea9344bdba2d4ba253b851a086248ae71526326c594f0cec261b73ff532a4e53 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-5faf2589-b0d7-486e-a56b-df0762273b7b, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 20 09:51:56 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:51:56Z|00060|binding|INFO|Releasing lport 8a6b3cf6-d133-4989-ba72-56ce2d9fea97 from this chassis (sb_readonly=0)
Feb 20 09:51:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:56.070 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:51:56 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:51:56Z|00061|binding|INFO|Setting lport 8a6b3cf6-d133-4989-ba72-56ce2d9fea97 down in Southbound
Feb 20 09:51:56 np0005625203.localdomain kernel: device tap8a6b3cf6-d1 left promiscuous mode
Feb 20 09:51:56 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:56.077 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '19.80.0.2/24', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-5faf2589-b0d7-486e-a56b-df0762273b7b', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5faf2589-b0d7-486e-a56b-df0762273b7b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a966116e4ddf4bdea0571a1bb751916e', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625203.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b8083fe1-977d-4fae-94f3-b03c7096c58a, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=8a6b3cf6-d133-4989-ba72-56ce2d9fea97) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:51:56 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:56.079 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 8a6b3cf6-d133-4989-ba72-56ce2d9fea97 in datapath 5faf2589-b0d7-486e-a56b-df0762273b7b unbound from our chassis
Feb 20 09:51:56 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:56.083 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5faf2589-b0d7-486e-a56b-df0762273b7b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:51:56 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:51:56.085 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[d47d22fa-6f25-47b2-8547-3c820edad1ed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:51:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:56.105 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:51:56 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:51:56.299 262775 INFO neutron.agent.dhcp.agent [None req-757b8eb0-ada4-4162-b414-55bfcbdc194a - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:51:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:56.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:51:56 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:51:56.492 2 INFO neutron.agent.securitygroups_rpc [req-9bfa3730-17a6-4d3e-b4cb-835cdd225f2c req-73160dbe-971e-4219-ac30-c0c28777ca1e 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Security group member updated ['4439e19b-bf91-4420-aff1-6854f961fef4']
Feb 20 09:51:56 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:51:56.572 262775 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:51:56 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-39d7cce47dcf4f69a50c2a5b594635426d74e4e6822197e4d9312e677ef07418-merged.mount: Deactivated successfully.
Feb 20 09:51:56 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ea9344bdba2d4ba253b851a086248ae71526326c594f0cec261b73ff532a4e53-userdata-shm.mount: Deactivated successfully.
Feb 20 09:51:56 np0005625203.localdomain systemd[1]: run-netns-qdhcp\x2d5faf2589\x2db0d7\x2d486e\x2da56b\x2ddf0762273b7b.mount: Deactivated successfully.
Feb 20 09:51:56 np0005625203.localdomain ceph-mon[296066]: pgmap v124: 177 pgs: 177 active+clean; 307 MiB data, 1021 MiB used, 41 GiB / 42 GiB avail; 8.1 MiB/s rd, 7.8 MiB/s wr, 301 op/s
Feb 20 09:51:56 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/3016823328' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:51:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:57.204 279640 DEBUG oslo_concurrency.lockutils [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Acquiring lock "43720f70-168d-461a-8b52-ba71de6033a0" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:51:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:57.205 279640 DEBUG oslo_concurrency.lockutils [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Lock "43720f70-168d-461a-8b52-ba71de6033a0" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:51:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:57.205 279640 INFO nova.compute.manager [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Unshelving
Feb 20 09:51:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:57.290 279640 DEBUG oslo_concurrency.lockutils [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:51:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:57.291 279640 DEBUG oslo_concurrency.lockutils [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:51:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:57.294 279640 DEBUG nova.objects.instance [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Lazy-loading 'pci_requests' on Instance uuid 43720f70-168d-461a-8b52-ba71de6033a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 20 09:51:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:57.310 279640 DEBUG nova.objects.instance [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Lazy-loading 'numa_topology' on Instance uuid 43720f70-168d-461a-8b52-ba71de6033a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 20 09:51:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:57.333 279640 DEBUG nova.virt.hardware [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Feb 20 09:51:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:57.334 279640 INFO nova.compute.claims [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Claim successful on node np0005625203.localdomain
Feb 20 09:51:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:57.340 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:51:57 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:51:57.449 2 INFO neutron.agent.securitygroups_rpc [None req-426d7c59-43bb-4b5f-98f0-2945e94d9430 ba15d0e9919d4594a2e6e9d6b3414a5e e704aae5b1ba49d59262f9aa0c366fb4 - - default default] Security group member updated ['6a912071-fd9c-4d5f-8453-7f993db3506d']
Feb 20 09:51:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:57.493 279640 DEBUG oslo_concurrency.processutils [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:51:57 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:51:57.539 262775 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:51:57 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e107 e107: 6 total, 6 up, 6 in
Feb 20 09:51:57 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 20 09:51:57 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1662759014' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:51:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:58.011 279640 DEBUG oslo_concurrency.processutils [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:51:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:58.017 279640 DEBUG nova.compute.provider_tree [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Inventory has not changed in ProviderTree for provider: e5d5157a-2df2-4f51-b5fb-cd2da3a8584e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 20 09:51:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:58.034 279640 DEBUG nova.scheduler.client.report [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Inventory has not changed for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 20 09:51:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:58.058 279640 DEBUG oslo_concurrency.lockutils [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:51:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:58.164 279640 DEBUG oslo_concurrency.lockutils [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Acquiring lock "refresh_cache-43720f70-168d-461a-8b52-ba71de6033a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 20 09:51:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:58.164 279640 DEBUG oslo_concurrency.lockutils [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Acquired lock "refresh_cache-43720f70-168d-461a-8b52-ba71de6033a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 20 09:51:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:58.165 279640 DEBUG nova.network.neutron [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 20 09:51:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:58.271 279640 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1771581103.2699218, e6ab74b8-b495-4363-8d40-2356596c895c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 20 09:51:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:58.271 279640 INFO nova.compute.manager [-] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] VM Stopped (Lifecycle Event)
Feb 20 09:51:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:58.278 279640 DEBUG nova.network.neutron [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 20 09:51:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:58.304 279640 DEBUG nova.compute.manager [None req-527ea45a-dce4-469c-a872-e9665d078729 - - - - - -] [instance: e6ab74b8-b495-4363-8d40-2356596c895c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 20 09:51:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:58.415 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:51:58 np0005625203.localdomain ceph-mon[296066]: pgmap v125: 177 pgs: 177 active+clean; 307 MiB data, 1021 MiB used, 41 GiB / 42 GiB avail; 6.3 MiB/s rd, 6.2 MiB/s wr, 216 op/s
Feb 20 09:51:58 np0005625203.localdomain ceph-mon[296066]: osdmap e107: 6 total, 6 up, 6 in
Feb 20 09:51:58 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/2462011471' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:51:58 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/1662759014' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:51:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:58.765 279640 DEBUG nova.network.neutron [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 20 09:51:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:58.778 279640 DEBUG oslo_concurrency.lockutils [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Releasing lock "refresh_cache-43720f70-168d-461a-8b52-ba71de6033a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 20 09:51:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:58.780 279640 DEBUG nova.virt.libvirt.driver [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Feb 20 09:51:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:58.781 279640 INFO nova.virt.libvirt.driver [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Creating image(s)
Feb 20 09:51:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:58.818 279640 DEBUG nova.storage.rbd_utils [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] rbd image 43720f70-168d-461a-8b52-ba71de6033a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 20 09:51:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:58.823 279640 DEBUG nova.objects.instance [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Lazy-loading 'trusted_certs' on Instance uuid 43720f70-168d-461a-8b52-ba71de6033a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 20 09:51:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:58.872 279640 DEBUG nova.storage.rbd_utils [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] rbd image 43720f70-168d-461a-8b52-ba71de6033a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 20 09:51:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:58.913 279640 DEBUG nova.storage.rbd_utils [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] rbd image 43720f70-168d-461a-8b52-ba71de6033a0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 20 09:51:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:58.919 279640 DEBUG oslo_concurrency.lockutils [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Acquiring lock "8ae86f07429971ba9e2364d2d9baeb259244c2cd" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:51:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:58.921 279640 DEBUG oslo_concurrency.lockutils [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Lock "8ae86f07429971ba9e2364d2d9baeb259244c2cd" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:51:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:58.955 279640 DEBUG nova.virt.libvirt.imagebackend [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Image locations are: [{'url': 'rbd://a8557ee9-b55d-5519-942c-cf8f6172f1d8/images/2ca20fba-0573-4823-861d-917510483c1a/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://a8557ee9-b55d-5519-942c-cf8f6172f1d8/images/2ca20fba-0573-4823-861d-917510483c1a/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Feb 20 09:51:58 np0005625203.localdomain podman[240359]: time="2026-02-20T09:51:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:51:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:51:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 159542 "" "Go-http-client/1.1"
Feb 20 09:51:59 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:59.043 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:51:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:51:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19243 "" "Go-http-client/1.1"
Feb 20 09:51:59 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:59.058 279640 DEBUG nova.virt.libvirt.imagebackend [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Selected location: {'url': 'rbd://a8557ee9-b55d-5519-942c-cf8f6172f1d8/images/2ca20fba-0573-4823-861d-917510483c1a/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Feb 20 09:51:59 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:59.060 279640 DEBUG nova.storage.rbd_utils [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] cloning images/2ca20fba-0573-4823-861d-917510483c1a@snap to None/43720f70-168d-461a-8b52-ba71de6033a0_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Feb 20 09:51:59 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:51:59.079 2 INFO neutron.agent.securitygroups_rpc [None req-bcd9a2b7-ab94-49ae-b942-9c3b757c3657 0db48d5f6f5e44fc93154cf4b34a94e0 a966116e4ddf4bdea0571a1bb751916e - - default default] Security group member updated ['07d2fe18-fbbf-4547-931e-bb55f378bade']
Feb 20 09:51:59 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:59.251 279640 DEBUG oslo_concurrency.lockutils [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Lock "8ae86f07429971ba9e2364d2d9baeb259244c2cd" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.331s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:51:59 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:51:59 np0005625203.localdomain dnsmasq[306019]: read /var/lib/neutron/dhcp/82c5dcbb-e77d-4af1-bf3e-89ecf6e35192/addn_hosts - 1 addresses
Feb 20 09:51:59 np0005625203.localdomain dnsmasq-dhcp[306019]: read /var/lib/neutron/dhcp/82c5dcbb-e77d-4af1-bf3e-89ecf6e35192/host
Feb 20 09:51:59 np0005625203.localdomain dnsmasq-dhcp[306019]: read /var/lib/neutron/dhcp/82c5dcbb-e77d-4af1-bf3e-89ecf6e35192/opts
Feb 20 09:51:59 np0005625203.localdomain podman[308064]: 2026-02-20 09:51:59.323020597 +0000 UTC m=+0.071474834 container kill 4e1d46874fb2ac789dfa58942ad624227043ba69613dff4939453948ff9a5d2f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:51:59 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:59.402 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:51:59 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:59.403 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:51:59 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:59.404 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 20 09:51:59 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:59.489 279640 DEBUG nova.objects.instance [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Lazy-loading 'migration_context' on Instance uuid 43720f70-168d-461a-8b52-ba71de6033a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 20 09:51:59 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:51:59.521 2 INFO neutron.agent.securitygroups_rpc [None req-da379379-3275-471e-8ade-92d9716364d1 ba15d0e9919d4594a2e6e9d6b3414a5e e704aae5b1ba49d59262f9aa0c366fb4 - - default default] Security group member updated ['6a912071-fd9c-4d5f-8453-7f993db3506d']
Feb 20 09:51:59 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/2189930680' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:51:59 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:51:59.582 279640 DEBUG nova.storage.rbd_utils [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] flattening vms/43720f70-168d-461a-8b52-ba71de6033a0_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Feb 20 09:51:59 np0005625203.localdomain sshd[308174]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:52:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:00.196 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:00 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:52:00.213 161112 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1e4d60e6-0be0-4143-b488-1b391fbc71ef, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 20 09:52:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:00.342 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:52:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:00.343 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 20 09:52:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:00.344 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 20 09:52:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:00.370 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "refresh_cache-43720f70-168d-461a-8b52-ba71de6033a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 20 09:52:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:00.371 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquired lock "refresh_cache-43720f70-168d-461a-8b52-ba71de6033a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 20 09:52:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:00.371 279640 DEBUG nova.network.neutron [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Feb 20 09:52:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:00.371 279640 DEBUG nova.objects.instance [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 43720f70-168d-461a-8b52-ba71de6033a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 20 09:52:00 np0005625203.localdomain ceph-mon[296066]: pgmap v127: 177 pgs: 177 active+clean; 277 MiB data, 970 MiB used, 41 GiB / 42 GiB avail; 6.3 MiB/s rd, 8.4 MiB/s wr, 283 op/s
Feb 20 09:52:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:00.605 279640 DEBUG nova.network.neutron [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 20 09:52:00 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:52:00 np0005625203.localdomain podman[308176]: 2026-02-20 09:52:00.774036108 +0000 UTC m=+0.084195117 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Feb 20 09:52:00 np0005625203.localdomain podman[308176]: 2026-02-20 09:52:00.812726372 +0000 UTC m=+0.122885431 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, managed_by=edpm_ansible)
Feb 20 09:52:00 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:52:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:00.853 279640 DEBUG nova.virt.libvirt.driver [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Image rbd:vms/43720f70-168d-461a-8b52-ba71de6033a0_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Feb 20 09:52:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:00.854 279640 DEBUG nova.virt.libvirt.driver [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Feb 20 09:52:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:00.854 279640 DEBUG nova.virt.libvirt.driver [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Ensure instance console log exists: /var/lib/nova/instances/43720f70-168d-461a-8b52-ba71de6033a0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Feb 20 09:52:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:00.855 279640 DEBUG oslo_concurrency.lockutils [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:52:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:00.855 279640 DEBUG oslo_concurrency.lockutils [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:52:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:00.855 279640 DEBUG oslo_concurrency.lockutils [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:52:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:00.857 279640 DEBUG nova.virt.libvirt.driver [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-02-20T09:51:35Z,direct_url=<?>,disk_format='raw',id=2ca20fba-0573-4823-861d-917510483c1a,min_disk=1,min_ram=0,name='tempest-UnshelveToHostMultiNodesTest-server-1846377785-shelved',owner='ff4cacca21b64031adfd6cb25f7e62fc',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-02-20T09:51:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'size': 0, 'encryption_format': None, 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'image_id': '06bd71fd-c415-45d9-b669-46209b7ca2f4'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Feb 20 09:52:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:00.862 279640 WARNING nova.virt.libvirt.driver [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 20 09:52:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:00.864 279640 DEBUG nova.virt.libvirt.host [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Searching host: 'np0005625203.localdomain' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Feb 20 09:52:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:00.864 279640 DEBUG nova.virt.libvirt.host [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Feb 20 09:52:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:00.866 279640 DEBUG nova.virt.libvirt.host [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Searching host: 'np0005625203.localdomain' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Feb 20 09:52:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:00.866 279640 DEBUG nova.virt.libvirt.host [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Feb 20 09:52:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:00.867 279640 DEBUG nova.virt.libvirt.driver [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Feb 20 09:52:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:00.867 279640 DEBUG nova.virt.hardware [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-02-20T09:49:56Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='40a6f41a-8891-4900-942e-688a656af142',id=5,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-02-20T09:51:35Z,direct_url=<?>,disk_format='raw',id=2ca20fba-0573-4823-861d-917510483c1a,min_disk=1,min_ram=0,name='tempest-UnshelveToHostMultiNodesTest-server-1846377785-shelved',owner='ff4cacca21b64031adfd6cb25f7e62fc',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-02-20T09:51:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Feb 20 09:52:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:00.867 279640 DEBUG nova.virt.hardware [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Feb 20 09:52:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:00.867 279640 DEBUG nova.virt.hardware [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Feb 20 09:52:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:00.868 279640 DEBUG nova.virt.hardware [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Feb 20 09:52:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:00.868 279640 DEBUG nova.virt.hardware [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Feb 20 09:52:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:00.868 279640 DEBUG nova.virt.hardware [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Feb 20 09:52:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:00.868 279640 DEBUG nova.virt.hardware [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Feb 20 09:52:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:00.868 279640 DEBUG nova.virt.hardware [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Feb 20 09:52:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:00.869 279640 DEBUG nova.virt.hardware [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Feb 20 09:52:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:00.869 279640 DEBUG nova.virt.hardware [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Feb 20 09:52:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:00.869 279640 DEBUG nova.virt.hardware [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Feb 20 09:52:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:00.869 279640 DEBUG nova.objects.instance [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Lazy-loading 'vcpu_model' on Instance uuid 43720f70-168d-461a-8b52-ba71de6033a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 20 09:52:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:00.884 279640 DEBUG oslo_concurrency.processutils [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:52:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:00.898 279640 DEBUG nova.network.neutron [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 20 09:52:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:00.918 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Releasing lock "refresh_cache-43720f70-168d-461a-8b52-ba71de6033a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 20 09:52:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:00.918 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Feb 20 09:52:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:00.919 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:52:01 np0005625203.localdomain sshd[308174]: Invalid user nutanix from 152.32.129.236 port 36730
Feb 20 09:52:01 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 09:52:01 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/4022390931' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:01.308 279640 DEBUG oslo_concurrency.processutils [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:52:01 np0005625203.localdomain sshd[308174]: Received disconnect from 152.32.129.236 port 36730:11: Bye Bye [preauth]
Feb 20 09:52:01 np0005625203.localdomain sshd[308174]: Disconnected from invalid user nutanix 152.32.129.236 port 36730 [preauth]
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:01.343 279640 DEBUG nova.storage.rbd_utils [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] rbd image 43720f70-168d-461a-8b52-ba71de6033a0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:01.349 279640 DEBUG oslo_concurrency.processutils [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:52:01 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/4022390931' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:52:01 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 09:52:01 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1831565294' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:01.765 279640 DEBUG oslo_concurrency.processutils [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:01.768 279640 DEBUG nova.objects.instance [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Lazy-loading 'pci_devices' on Instance uuid 43720f70-168d-461a-8b52-ba71de6033a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:01.815 279640 DEBUG nova.virt.libvirt.driver [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] End _get_guest_xml xml=<domain type="kvm">
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:   <uuid>43720f70-168d-461a-8b52-ba71de6033a0</uuid>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:   <name>instance-00000006</name>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:   <memory>131072</memory>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:   <vcpu>1</vcpu>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:   <metadata>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:       <nova:package version="27.5.2-0.20260127144738.eaa65f0.el9"/>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:       <nova:name>tempest-UnshelveToHostMultiNodesTest-server-1846377785</nova:name>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:       <nova:creationTime>2026-02-20 09:52:00</nova:creationTime>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:       <nova:flavor name="m1.nano">
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:         <nova:memory>128</nova:memory>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:         <nova:disk>1</nova:disk>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:         <nova:swap>0</nova:swap>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:         <nova:ephemeral>0</nova:ephemeral>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:         <nova:vcpus>1</nova:vcpus>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:       </nova:flavor>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:       <nova:owner>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:         <nova:user uuid="65489f8d7cbf42a2960f2d764c16b3f2">tempest-UnshelveToHostMultiNodesTest-1217794180-project-member</nova:user>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:         <nova:project uuid="ff4cacca21b64031adfd6cb25f7e62fc">tempest-UnshelveToHostMultiNodesTest-1217794180</nova:project>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:       </nova:owner>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:       <nova:root type="image" uuid="2ca20fba-0573-4823-861d-917510483c1a"/>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:       <nova:ports/>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:     </nova:instance>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:   </metadata>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:   <sysinfo type="smbios">
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:     <system>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:       <entry name="manufacturer">RDO</entry>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:       <entry name="product">OpenStack Compute</entry>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:       <entry name="version">27.5.2-0.20260127144738.eaa65f0.el9</entry>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:       <entry name="serial">43720f70-168d-461a-8b52-ba71de6033a0</entry>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:       <entry name="uuid">43720f70-168d-461a-8b52-ba71de6033a0</entry>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:       <entry name="family">Virtual Machine</entry>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:     </system>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:   </sysinfo>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:   <os>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:     <type arch="x86_64" machine="q35">hvm</type>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:     <boot dev="hd"/>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:     <smbios mode="sysinfo"/>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:   </os>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:   <features>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:     <acpi/>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:     <apic/>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:     <vmcoreinfo/>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:   </features>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:   <clock offset="utc">
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:     <timer name="pit" tickpolicy="delay"/>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:     <timer name="rtc" tickpolicy="catchup"/>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:     <timer name="hpet" present="no"/>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:   </clock>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:   <cpu mode="host-model" match="exact">
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:     <topology sockets="1" cores="1" threads="1"/>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:   </cpu>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:   <devices>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:     <disk type="network" device="disk">
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:       <driver type="raw" cache="none"/>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:       <source protocol="rbd" name="vms/43720f70-168d-461a-8b52-ba71de6033a0_disk">
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:         <host name="172.18.0.103" port="6789"/>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:         <host name="172.18.0.104" port="6789"/>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:         <host name="172.18.0.105" port="6789"/>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:       </source>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:       <auth username="openstack">
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:         <secret type="ceph" uuid="a8557ee9-b55d-5519-942c-cf8f6172f1d8"/>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:       </auth>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:       <target dev="vda" bus="virtio"/>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:     </disk>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:     <disk type="network" device="cdrom">
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:       <driver type="raw" cache="none"/>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:       <source protocol="rbd" name="vms/43720f70-168d-461a-8b52-ba71de6033a0_disk.config">
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:         <host name="172.18.0.103" port="6789"/>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:         <host name="172.18.0.104" port="6789"/>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:         <host name="172.18.0.105" port="6789"/>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:       </source>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:       <auth username="openstack">
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:         <secret type="ceph" uuid="a8557ee9-b55d-5519-942c-cf8f6172f1d8"/>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:       </auth>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:       <target dev="sda" bus="sata"/>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:     </disk>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:     <serial type="pty">
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:       <log file="/var/lib/nova/instances/43720f70-168d-461a-8b52-ba71de6033a0/console.log" append="off"/>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:     </serial>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:     <video>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:       <model type="virtio"/>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:     </video>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:     <input type="tablet" bus="usb"/>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:     <input type="keyboard" bus="usb"/>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:     <rng model="virtio">
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:       <backend model="random">/dev/urandom</backend>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:     </rng>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:     <controller type="pci" model="pcie-root"/>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:     <controller type="pci" model="pcie-root-port"/>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:     <controller type="pci" model="pcie-root-port"/>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:     <controller type="pci" model="pcie-root-port"/>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:     <controller type="pci" model="pcie-root-port"/>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:     <controller type="pci" model="pcie-root-port"/>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:     <controller type="pci" model="pcie-root-port"/>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:     <controller type="pci" model="pcie-root-port"/>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:     <controller type="pci" model="pcie-root-port"/>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:     <controller type="pci" model="pcie-root-port"/>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:     <controller type="pci" model="pcie-root-port"/>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:     <controller type="pci" model="pcie-root-port"/>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:     <controller type="pci" model="pcie-root-port"/>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:     <controller type="pci" model="pcie-root-port"/>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:     <controller type="pci" model="pcie-root-port"/>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:     <controller type="pci" model="pcie-root-port"/>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:     <controller type="pci" model="pcie-root-port"/>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:     <controller type="pci" model="pcie-root-port"/>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:     <controller type="pci" model="pcie-root-port"/>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:     <controller type="pci" model="pcie-root-port"/>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:     <controller type="pci" model="pcie-root-port"/>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:     <controller type="pci" model="pcie-root-port"/>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:     <controller type="pci" model="pcie-root-port"/>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:     <controller type="pci" model="pcie-root-port"/>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:     <controller type="pci" model="pcie-root-port"/>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:     <controller type="usb" index="0"/>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:     <memballoon model="virtio">
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:       <stats period="10"/>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:     </memballoon>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:   </devices>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]: </domain>
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Feb 20 09:52:01 np0005625203.localdomain dnsmasq[306019]: read /var/lib/neutron/dhcp/82c5dcbb-e77d-4af1-bf3e-89ecf6e35192/addn_hosts - 0 addresses
Feb 20 09:52:01 np0005625203.localdomain dnsmasq-dhcp[306019]: read /var/lib/neutron/dhcp/82c5dcbb-e77d-4af1-bf3e-89ecf6e35192/host
Feb 20 09:52:01 np0005625203.localdomain dnsmasq-dhcp[306019]: read /var/lib/neutron/dhcp/82c5dcbb-e77d-4af1-bf3e-89ecf6e35192/opts
Feb 20 09:52:01 np0005625203.localdomain podman[308280]: 2026-02-20 09:52:01.850056104 +0000 UTC m=+0.049564019 container kill 4e1d46874fb2ac789dfa58942ad624227043ba69613dff4939453948ff9a5d2f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:01.886 279640 DEBUG nova.virt.libvirt.driver [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:01.886 279640 DEBUG nova.virt.libvirt.driver [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:01.887 279640 INFO nova.virt.libvirt.driver [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Using config drive
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:01.915 279640 DEBUG nova.storage.rbd_utils [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] rbd image 43720f70-168d-461a-8b52-ba71de6033a0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:01.949 279640 DEBUG nova.objects.instance [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Lazy-loading 'ec2_ids' on Instance uuid 43720f70-168d-461a-8b52-ba71de6033a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 20 09:52:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:01.988 279640 DEBUG nova.objects.instance [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Lazy-loading 'keypairs' on Instance uuid 43720f70-168d-461a-8b52-ba71de6033a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 20 09:52:02 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 20 09:52:02 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3397992773' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:52:02 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 20 09:52:02 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3397992773' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:52:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:02.153 279640 INFO nova.virt.libvirt.driver [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Creating config drive at /var/lib/nova/instances/43720f70-168d-461a-8b52-ba71de6033a0/disk.config
Feb 20 09:52:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:02.158 279640 DEBUG oslo_concurrency.processutils [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/43720f70-168d-461a-8b52-ba71de6033a0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpxhv77w44 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:52:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:02.286 279640 DEBUG oslo_concurrency.processutils [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/43720f70-168d-461a-8b52-ba71de6033a0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260127144738.eaa65f0.el9 -quiet -J -r -V config-2 /tmp/tmpxhv77w44" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:52:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:02.327 279640 DEBUG nova.storage.rbd_utils [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] rbd image 43720f70-168d-461a-8b52-ba71de6033a0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Feb 20 09:52:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:02.333 279640 DEBUG oslo_concurrency.processutils [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/43720f70-168d-461a-8b52-ba71de6033a0/disk.config 43720f70-168d-461a-8b52-ba71de6033a0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:52:02 np0005625203.localdomain kernel: device tap3b176488-ec left promiscuous mode
Feb 20 09:52:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:02.476 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:02 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:52:02Z|00062|binding|INFO|Releasing lport 3b176488-ecb3-4c4f-a254-2be6a57d131c from this chassis (sb_readonly=0)
Feb 20 09:52:02 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:52:02Z|00063|binding|INFO|Setting lport 3b176488-ecb3-4c4f-a254-2be6a57d131c down in Southbound
Feb 20 09:52:02 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:52:02.489 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a966116e4ddf4bdea0571a1bb751916e', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625203.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=30955323-f649-483f-8215-a2b2b9707d5e, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=3b176488-ecb3-4c4f-a254-2be6a57d131c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:52:02 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:52:02.490 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 3b176488-ecb3-4c4f-a254-2be6a57d131c in datapath 82c5dcbb-e77d-4af1-bf3e-89ecf6e35192 unbound from our chassis
Feb 20 09:52:02 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:52:02.495 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 82c5dcbb-e77d-4af1-bf3e-89ecf6e35192, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:52:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:02.495 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:02 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:52:02.497 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[e1a58a85-b943-4ec9-a41d-0251511e1903]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:52:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:02.563 279640 DEBUG oslo_concurrency.processutils [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/43720f70-168d-461a-8b52-ba71de6033a0/disk.config 43720f70-168d-461a-8b52-ba71de6033a0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.231s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:52:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:02.565 279640 INFO nova.virt.libvirt.driver [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Deleting local config drive /var/lib/nova/instances/43720f70-168d-461a-8b52-ba71de6033a0/disk.config because it was imported into RBD.
Feb 20 09:52:02 np0005625203.localdomain ceph-mon[296066]: pgmap v128: 177 pgs: 177 active+clean; 273 MiB data, 960 MiB used, 41 GiB / 42 GiB avail; 6.0 MiB/s rd, 8.0 MiB/s wr, 294 op/s
Feb 20 09:52:02 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/1831565294' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:52:02 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/3397992773' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:52:02 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/3397992773' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:52:02 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/1586058909' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:52:02 np0005625203.localdomain systemd-machined[204853]: New machine qemu-2-instance-00000006.
Feb 20 09:52:02 np0005625203.localdomain systemd[1]: Started Virtual Machine qemu-2-instance-00000006.
Feb 20 09:52:02 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:52:02.694 262775 INFO neutron.agent.linux.ip_lib [None req-d3ad25be-7168-4968-b872-b3983458e2a9 - - - - - -] Device tap624d51a9-fd cannot be used as it has no MAC address
Feb 20 09:52:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:02.720 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:02 np0005625203.localdomain kernel: device tap624d51a9-fd entered promiscuous mode
Feb 20 09:52:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:02.730 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:02 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:52:02Z|00064|binding|INFO|Claiming lport 624d51a9-fd7a-4df3-b4a3-bea42f75f772 for this chassis.
Feb 20 09:52:02 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:52:02Z|00065|binding|INFO|624d51a9-fd7a-4df3-b4a3-bea42f75f772: Claiming unknown
Feb 20 09:52:02 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581122.7363] manager: (tap624d51a9-fd): new Generic device (/org/freedesktop/NetworkManager/Devices/21)
Feb 20 09:52:02 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:52:02.739 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-679f63a5-827b-4ada-9337-37070bdd98e3', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-679f63a5-827b-4ada-9337-37070bdd98e3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '78fdd34f107b4ec7ac81795ecc3f677c', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=101c75d7-e64b-45a2-ad87-6430f31217e2, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=624d51a9-fd7a-4df3-b4a3-bea42f75f772) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:52:02 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:52:02.741 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 624d51a9-fd7a-4df3-b4a3-bea42f75f772 in datapath 679f63a5-827b-4ada-9337-37070bdd98e3 bound to our chassis
Feb 20 09:52:02 np0005625203.localdomain systemd-udevd[308384]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 09:52:02 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:52:02.744 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 679f63a5-827b-4ada-9337-37070bdd98e3 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:52:02 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:52:02Z|00066|binding|INFO|Setting lport 624d51a9-fd7a-4df3-b4a3-bea42f75f772 ovn-installed in OVS
Feb 20 09:52:02 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:52:02Z|00067|binding|INFO|Setting lport 624d51a9-fd7a-4df3-b4a3-bea42f75f772 up in Southbound
Feb 20 09:52:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:02.746 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:02 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:52:02.745 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[62d76ff2-e09c-4426-9439-7405689a952c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:52:02 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap624d51a9-fd: No such device
Feb 20 09:52:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:02.775 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:02 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap624d51a9-fd: No such device
Feb 20 09:52:02 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap624d51a9-fd: No such device
Feb 20 09:52:02 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap624d51a9-fd: No such device
Feb 20 09:52:02 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap624d51a9-fd: No such device
Feb 20 09:52:02 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap624d51a9-fd: No such device
Feb 20 09:52:02 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap624d51a9-fd: No such device
Feb 20 09:52:02 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap624d51a9-fd: No such device
Feb 20 09:52:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:02.829 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:02.918 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:03.032 279640 DEBUG nova.virt.driver [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] Emitting event <LifecycleEvent: 1771581123.0319824, 43720f70-168d-461a-8b52-ba71de6033a0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 20 09:52:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:03.033 279640 INFO nova.compute.manager [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] VM Resumed (Lifecycle Event)
Feb 20 09:52:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:03.037 279640 DEBUG nova.compute.manager [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Feb 20 09:52:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:03.037 279640 DEBUG nova.virt.libvirt.driver [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Feb 20 09:52:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:03.042 279640 INFO nova.virt.libvirt.driver [-] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Instance spawned successfully.
Feb 20 09:52:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:03.055 279640 DEBUG nova.compute.manager [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 20 09:52:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:03.060 279640 DEBUG nova.compute.manager [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 20 09:52:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:03.092 279640 INFO nova.compute.manager [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 20 09:52:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:03.093 279640 DEBUG nova.virt.driver [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] Emitting event <LifecycleEvent: 1771581123.0333483, 43720f70-168d-461a-8b52-ba71de6033a0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 20 09:52:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:03.094 279640 INFO nova.compute.manager [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] VM Started (Lifecycle Event)
Feb 20 09:52:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:03.115 279640 DEBUG nova.compute.manager [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 20 09:52:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:03.120 279640 DEBUG nova.compute.manager [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Feb 20 09:52:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:03.138 279640 INFO nova.compute.manager [None req-a0f2f9b3-3ee1-46fa-9635-b395fd892cf9 - - - - - -] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] During sync_power_state the instance has a pending task (spawning). Skip.
Feb 20 09:52:03 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e108 e108: 6 total, 6 up, 6 in
Feb 20 09:52:03 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/1024349497' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:52:03 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:52:03 np0005625203.localdomain podman[308480]: 2026-02-20 09:52:03.845554959 +0000 UTC m=+0.155913920 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS)
Feb 20 09:52:03 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:52:03 np0005625203.localdomain podman[308508]: 
Feb 20 09:52:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:03.915 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:52:03 np0005625203.localdomain podman[308508]: 2026-02-20 09:52:03.852797891 +0000 UTC m=+0.056330538 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:52:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:03.986 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:04 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:52:04 np0005625203.localdomain podman[308508]: 2026-02-20 09:52:04.707011786 +0000 UTC m=+0.910544403 container create d0598e39a5a3921bf101d353f683d79e3de54381cc76557174903b3ec6951f9e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-679f63a5-827b-4ada-9337-37070bdd98e3, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:52:04 np0005625203.localdomain ceph-mon[296066]: pgmap v129: 177 pgs: 177 active+clean; 273 MiB data, 960 MiB used, 41 GiB / 42 GiB avail; 5.1 MiB/s rd, 6.8 MiB/s wr, 250 op/s
Feb 20 09:52:04 np0005625203.localdomain ceph-mon[296066]: osdmap e108: 6 total, 6 up, 6 in
Feb 20 09:52:04 np0005625203.localdomain systemd[1]: Started libpod-conmon-d0598e39a5a3921bf101d353f683d79e3de54381cc76557174903b3ec6951f9e.scope.
Feb 20 09:52:04 np0005625203.localdomain podman[308526]: 2026-02-20 09:52:04.785962481 +0000 UTC m=+0.920743977 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, io.buildah.version=1.33.7, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9/ubi-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2026-02-05T04:57:10Z, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=openstack_network_exporter, vcs-type=git, release=1770267347, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.7, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.openshift.tags=minimal rhel9)
Feb 20 09:52:04 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:52:04 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58cab64dd61ad310fbfb08ad10e70436ec601d4031342bc2846c5887061da71b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:52:04 np0005625203.localdomain podman[308526]: 2026-02-20 09:52:04.802317666 +0000 UTC m=+0.937099112 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, architecture=x86_64, io.buildah.version=1.33.7, container_name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-05T04:57:10Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=9.7, release=1770267347, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., config_id=openstack_network_exporter, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9/ubi-minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Feb 20 09:52:04 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:52:04 np0005625203.localdomain podman[308480]: 2026-02-20 09:52:04.837075968 +0000 UTC m=+1.147434889 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_compute)
Feb 20 09:52:04 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:52:04 np0005625203.localdomain podman[308508]: 2026-02-20 09:52:04.858960512 +0000 UTC m=+1.062493119 container init d0598e39a5a3921bf101d353f683d79e3de54381cc76557174903b3ec6951f9e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-679f63a5-827b-4ada-9337-37070bdd98e3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 20 09:52:04 np0005625203.localdomain podman[308508]: 2026-02-20 09:52:04.866470494 +0000 UTC m=+1.070003131 container start d0598e39a5a3921bf101d353f683d79e3de54381cc76557174903b3ec6951f9e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-679f63a5-827b-4ada-9337-37070bdd98e3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 20 09:52:04 np0005625203.localdomain dnsmasq[308552]: started, version 2.85 cachesize 150
Feb 20 09:52:04 np0005625203.localdomain dnsmasq[308552]: DNS service limited to local subnets
Feb 20 09:52:04 np0005625203.localdomain dnsmasq[308552]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:52:04 np0005625203.localdomain dnsmasq[308552]: warning: no upstream servers configured
Feb 20 09:52:04 np0005625203.localdomain dnsmasq-dhcp[308552]: DHCP, static leases only on 10.100.0.0, lease time 1d
Feb 20 09:52:04 np0005625203.localdomain dnsmasq[308552]: read /var/lib/neutron/dhcp/679f63a5-827b-4ada-9337-37070bdd98e3/addn_hosts - 0 addresses
Feb 20 09:52:04 np0005625203.localdomain dnsmasq-dhcp[308552]: read /var/lib/neutron/dhcp/679f63a5-827b-4ada-9337-37070bdd98e3/host
Feb 20 09:52:04 np0005625203.localdomain dnsmasq-dhcp[308552]: read /var/lib/neutron/dhcp/679f63a5-827b-4ada-9337-37070bdd98e3/opts
Feb 20 09:52:05 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:05.237 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:05 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:52:05.570 262775 INFO neutron.agent.dhcp.agent [None req-499fd6b5-73e4-486c-b547-71f0706713ce - - - - - -] DHCP configuration for ports {'3587d4a9-98f7-4fcf-b7c3-9a1109640ea2'} is completed
Feb 20 09:52:05 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:05.812 279640 DEBUG nova.compute.manager [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 20 09:52:05 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:05.901 279640 DEBUG oslo_concurrency.lockutils [None req-69bd03de-4cf1-4c91-a3ac-afb49a54a494 a91dc5272d2640148196b60301210361 19914311848542d7bcbd3693656daddb - - default default] Lock "43720f70-168d-461a-8b52-ba71de6033a0" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 8.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:52:06 np0005625203.localdomain sshd[308553]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:52:06 np0005625203.localdomain ceph-mon[296066]: pgmap v131: 177 pgs: 177 active+clean; 354 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 7.2 MiB/s rd, 8.5 MiB/s wr, 282 op/s
Feb 20 09:52:07 np0005625203.localdomain sshd[308553]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:52:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:52:07 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 09:52:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:52:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:52:07 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 09:52:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:52:07 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e109 e109: 6 total, 6 up, 6 in
Feb 20 09:52:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:52:07.668 161112 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:52:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:52:07.669 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:52:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:52:07.669 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:52:07 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:07.687 279640 DEBUG oslo_concurrency.lockutils [None req-e993f2d9-fe89-4340-a934-f56dc555e9ca 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Acquiring lock "43720f70-168d-461a-8b52-ba71de6033a0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:52:07 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:07.688 279640 DEBUG oslo_concurrency.lockutils [None req-e993f2d9-fe89-4340-a934-f56dc555e9ca 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Lock "43720f70-168d-461a-8b52-ba71de6033a0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:52:07 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:07.689 279640 DEBUG oslo_concurrency.lockutils [None req-e993f2d9-fe89-4340-a934-f56dc555e9ca 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Acquiring lock "43720f70-168d-461a-8b52-ba71de6033a0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:52:07 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:07.689 279640 DEBUG oslo_concurrency.lockutils [None req-e993f2d9-fe89-4340-a934-f56dc555e9ca 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Lock "43720f70-168d-461a-8b52-ba71de6033a0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:52:07 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:07.690 279640 DEBUG oslo_concurrency.lockutils [None req-e993f2d9-fe89-4340-a934-f56dc555e9ca 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Lock "43720f70-168d-461a-8b52-ba71de6033a0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:52:07 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:07.692 279640 INFO nova.compute.manager [None req-e993f2d9-fe89-4340-a934-f56dc555e9ca 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Terminating instance
Feb 20 09:52:07 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:07.693 279640 DEBUG oslo_concurrency.lockutils [None req-e993f2d9-fe89-4340-a934-f56dc555e9ca 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Acquiring lock "refresh_cache-43720f70-168d-461a-8b52-ba71de6033a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Feb 20 09:52:07 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:07.694 279640 DEBUG oslo_concurrency.lockutils [None req-e993f2d9-fe89-4340-a934-f56dc555e9ca 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Acquired lock "refresh_cache-43720f70-168d-461a-8b52-ba71de6033a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Feb 20 09:52:07 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:07.695 279640 DEBUG nova.network.neutron [None req-e993f2d9-fe89-4340-a934-f56dc555e9ca 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Feb 20 09:52:07 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:07.809 279640 DEBUG nova.network.neutron [None req-e993f2d9-fe89-4340-a934-f56dc555e9ca 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 20 09:52:08 np0005625203.localdomain ceph-mon[296066]: pgmap v132: 177 pgs: 177 active+clean; 354 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 5.9 MiB/s rd, 6.9 MiB/s wr, 228 op/s
Feb 20 09:52:08 np0005625203.localdomain ceph-mon[296066]: osdmap e109: 6 total, 6 up, 6 in
Feb 20 09:52:08 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:08.657 279640 DEBUG nova.network.neutron [None req-e993f2d9-fe89-4340-a934-f56dc555e9ca 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 20 09:52:08 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:08.679 279640 DEBUG oslo_concurrency.lockutils [None req-e993f2d9-fe89-4340-a934-f56dc555e9ca 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Releasing lock "refresh_cache-43720f70-168d-461a-8b52-ba71de6033a0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Feb 20 09:52:08 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:08.680 279640 DEBUG nova.compute.manager [None req-e993f2d9-fe89-4340-a934-f56dc555e9ca 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Feb 20 09:52:08 np0005625203.localdomain systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000006.scope: Deactivated successfully.
Feb 20 09:52:08 np0005625203.localdomain systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000006.scope: Consumed 6.165s CPU time.
Feb 20 09:52:08 np0005625203.localdomain systemd-machined[204853]: Machine qemu-2-instance-00000006 terminated.
Feb 20 09:52:08 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:08.903 279640 INFO nova.virt.libvirt.driver [-] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Instance destroyed successfully.
Feb 20 09:52:08 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:08.904 279640 DEBUG nova.objects.instance [None req-e993f2d9-fe89-4340-a934-f56dc555e9ca 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Lazy-loading 'resources' on Instance uuid 43720f70-168d-461a-8b52-ba71de6033a0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Feb 20 09:52:08 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:08.984 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:09 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:52:09 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:09.550 279640 INFO nova.virt.libvirt.driver [None req-e993f2d9-fe89-4340-a934-f56dc555e9ca 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Deleting instance files /var/lib/nova/instances/43720f70-168d-461a-8b52-ba71de6033a0_del
Feb 20 09:52:09 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:09.552 279640 INFO nova.virt.libvirt.driver [None req-e993f2d9-fe89-4340-a934-f56dc555e9ca 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Deletion of /var/lib/nova/instances/43720f70-168d-461a-8b52-ba71de6033a0_del complete
Feb 20 09:52:09 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:09.622 279640 DEBUG nova.virt.libvirt.host [None req-e993f2d9-fe89-4340-a934-f56dc555e9ca 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Feb 20 09:52:09 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:09.623 279640 INFO nova.virt.libvirt.host [None req-e993f2d9-fe89-4340-a934-f56dc555e9ca 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] UEFI support detected
Feb 20 09:52:09 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:09.626 279640 INFO nova.compute.manager [None req-e993f2d9-fe89-4340-a934-f56dc555e9ca 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Took 0.95 seconds to destroy the instance on the hypervisor.
Feb 20 09:52:09 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:09.627 279640 DEBUG oslo.service.loopingcall [None req-e993f2d9-fe89-4340-a934-f56dc555e9ca 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Feb 20 09:52:09 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:09.628 279640 DEBUG nova.compute.manager [-] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Feb 20 09:52:09 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:09.628 279640 DEBUG nova.network.neutron [-] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Feb 20 09:52:09 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:09.939 279640 DEBUG nova.network.neutron [-] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Feb 20 09:52:09 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:09.967 279640 DEBUG nova.network.neutron [-] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Feb 20 09:52:09 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:09.986 279640 INFO nova.compute.manager [-] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Took 0.36 seconds to deallocate network for instance.
Feb 20 09:52:10 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:10.040 279640 DEBUG oslo_concurrency.lockutils [None req-e993f2d9-fe89-4340-a934-f56dc555e9ca 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:52:10 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:10.041 279640 DEBUG oslo_concurrency.lockutils [None req-e993f2d9-fe89-4340-a934-f56dc555e9ca 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:52:10 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:10.088 279640 DEBUG oslo_concurrency.processutils [None req-e993f2d9-fe89-4340-a934-f56dc555e9ca 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:52:10 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:10.239 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:10 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 20 09:52:10 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1218142157' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:52:10 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:10.549 279640 DEBUG oslo_concurrency.processutils [None req-e993f2d9-fe89-4340-a934-f56dc555e9ca 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:52:10 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:10.555 279640 DEBUG nova.compute.provider_tree [None req-e993f2d9-fe89-4340-a934-f56dc555e9ca 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Inventory has not changed in ProviderTree for provider: e5d5157a-2df2-4f51-b5fb-cd2da3a8584e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 20 09:52:10 np0005625203.localdomain ceph-mon[296066]: pgmap v134: 177 pgs: 177 active+clean; 282 MiB data, 1017 MiB used, 41 GiB / 42 GiB avail; 8.2 MiB/s rd, 5.9 MiB/s wr, 254 op/s
Feb 20 09:52:10 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/1218142157' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:52:10 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:10.596 279640 DEBUG nova.scheduler.client.report [None req-e993f2d9-fe89-4340-a934-f56dc555e9ca 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Inventory has not changed for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 20 09:52:10 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:10.627 279640 DEBUG oslo_concurrency.lockutils [None req-e993f2d9-fe89-4340-a934-f56dc555e9ca 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:52:10 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:10.659 279640 INFO nova.scheduler.client.report [None req-e993f2d9-fe89-4340-a934-f56dc555e9ca 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Deleted allocations for instance 43720f70-168d-461a-8b52-ba71de6033a0
Feb 20 09:52:10 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:10.733 279640 DEBUG oslo_concurrency.lockutils [None req-e993f2d9-fe89-4340-a934-f56dc555e9ca 65489f8d7cbf42a2960f2d764c16b3f2 ff4cacca21b64031adfd6cb25f7e62fc - - default default] Lock "43720f70-168d-461a-8b52-ba71de6033a0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.045s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:52:11 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:11.738 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:12 np0005625203.localdomain dnsmasq[306019]: exiting on receipt of SIGTERM
Feb 20 09:52:12 np0005625203.localdomain podman[308617]: 2026-02-20 09:52:12.314458568 +0000 UTC m=+0.059764974 container kill 4e1d46874fb2ac789dfa58942ad624227043ba69613dff4939453948ff9a5d2f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:52:12 np0005625203.localdomain systemd[1]: libpod-4e1d46874fb2ac789dfa58942ad624227043ba69613dff4939453948ff9a5d2f.scope: Deactivated successfully.
Feb 20 09:52:12 np0005625203.localdomain podman[308630]: 2026-02-20 09:52:12.385469318 +0000 UTC m=+0.058375111 container died 4e1d46874fb2ac789dfa58942ad624227043ba69613dff4939453948ff9a5d2f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 20 09:52:12 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4e1d46874fb2ac789dfa58942ad624227043ba69613dff4939453948ff9a5d2f-userdata-shm.mount: Deactivated successfully.
Feb 20 09:52:12 np0005625203.localdomain podman[308630]: 2026-02-20 09:52:12.414779943 +0000 UTC m=+0.087685666 container cleanup 4e1d46874fb2ac789dfa58942ad624227043ba69613dff4939453948ff9a5d2f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 20 09:52:12 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-5520f4d6ab6ff12edf32111abd1576ee01576e40db83e45c16f10c5e9befd1f5-merged.mount: Deactivated successfully.
Feb 20 09:52:12 np0005625203.localdomain systemd[1]: libpod-conmon-4e1d46874fb2ac789dfa58942ad624227043ba69613dff4939453948ff9a5d2f.scope: Deactivated successfully.
Feb 20 09:52:12 np0005625203.localdomain podman[308632]: 2026-02-20 09:52:12.462651499 +0000 UTC m=+0.125626426 container remove 4e1d46874fb2ac789dfa58942ad624227043ba69613dff4939453948ff9a5d2f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-82c5dcbb-e77d-4af1-bf3e-89ecf6e35192, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Feb 20 09:52:12 np0005625203.localdomain ceph-mon[296066]: pgmap v135: 177 pgs: 177 active+clean; 192 MiB data, 889 MiB used, 41 GiB / 42 GiB avail; 11 MiB/s rd, 5.9 MiB/s wr, 376 op/s
Feb 20 09:52:12 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:52:12.701 262775 INFO neutron.agent.dhcp.agent [None req-2acc9373-6e92-4004-8934-84819b5e8286 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:52:12 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:52:12.702 262775 INFO neutron.agent.dhcp.agent [None req-2acc9373-6e92-4004-8934-84819b5e8286 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:52:12 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:52:12.856 262775 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:52:12 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:52:12.930 262775 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:52:12Z, description=, device_id=3b959844-90d2-486b-9e34-b9eff25d51c3, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4e73a60>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4f67940>], id=83f4c6e2-c5d4-4b85-9f4b-6a2faab4d83a, ip_allocation=immediate, mac_address=fa:16:3e:4b:02:fb, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:51:59Z, description=, dns_domain=, id=679f63a5-827b-4ada-9337-37070bdd98e3, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPAdminTestJSON-test-network-1928182990, port_security_enabled=True, project_id=78fdd34f107b4ec7ac81795ecc3f677c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=3560, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=730, status=ACTIVE, subnets=['010d5fdd-24c2-48ee-b557-f8879b2d4228'], tags=[], tenant_id=78fdd34f107b4ec7ac81795ecc3f677c, updated_at=2026-02-20T09:52:00Z, vlan_transparent=None, network_id=679f63a5-827b-4ada-9337-37070bdd98e3, port_security_enabled=False, project_id=78fdd34f107b4ec7ac81795ecc3f677c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=769, status=DOWN, tags=[], tenant_id=78fdd34f107b4ec7ac81795ecc3f677c, updated_at=2026-02-20T09:52:12Z on network 679f63a5-827b-4ada-9337-37070bdd98e3
Feb 20 09:52:13 np0005625203.localdomain dnsmasq[308552]: read /var/lib/neutron/dhcp/679f63a5-827b-4ada-9337-37070bdd98e3/addn_hosts - 1 addresses
Feb 20 09:52:13 np0005625203.localdomain dnsmasq-dhcp[308552]: read /var/lib/neutron/dhcp/679f63a5-827b-4ada-9337-37070bdd98e3/host
Feb 20 09:52:13 np0005625203.localdomain podman[308676]: 2026-02-20 09:52:13.154261398 +0000 UTC m=+0.063544610 container kill d0598e39a5a3921bf101d353f683d79e3de54381cc76557174903b3ec6951f9e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-679f63a5-827b-4ada-9337-37070bdd98e3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 20 09:52:13 np0005625203.localdomain dnsmasq-dhcp[308552]: read /var/lib/neutron/dhcp/679f63a5-827b-4ada-9337-37070bdd98e3/opts
Feb 20 09:52:13 np0005625203.localdomain systemd[1]: run-netns-qdhcp\x2d82c5dcbb\x2de77d\x2d4af1\x2dbf3e\x2d89ecf6e35192.mount: Deactivated successfully.
Feb 20 09:52:13 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:52:13.365 262775 INFO neutron.agent.dhcp.agent [None req-49223948-85a1-4784-82dd-7d038e79aba1 - - - - - -] DHCP configuration for ports {'83f4c6e2-c5d4-4b85-9f4b-6a2faab4d83a'} is completed
Feb 20 09:52:14 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:14.012 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:14 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:52:14Z|00068|ovn_bfd|INFO|Disabled BFD on interface ovn-0c414b-0
Feb 20 09:52:14 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:52:14Z|00069|ovn_bfd|INFO|Disabled BFD on interface ovn-2275c3-0
Feb 20 09:52:14 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:52:14Z|00070|ovn_bfd|INFO|Disabled BFD on interface ovn-2df8cc-0
Feb 20 09:52:14 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:14.198 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:14 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:14.201 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:14 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:14.220 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:14 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:52:14 np0005625203.localdomain dnsmasq[306288]: read /var/lib/neutron/dhcp/aeac20da-1ef4-4e07-847a-a0d1f8a80ad9/addn_hosts - 0 addresses
Feb 20 09:52:14 np0005625203.localdomain dnsmasq-dhcp[306288]: read /var/lib/neutron/dhcp/aeac20da-1ef4-4e07-847a-a0d1f8a80ad9/host
Feb 20 09:52:14 np0005625203.localdomain podman[308716]: 2026-02-20 09:52:14.375903986 +0000 UTC m=+0.059604170 container kill d53174bdce0f9eef72ccb1d87f96b7a358c96717cf9a4e0ad5736eaa2a4bf98f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-aeac20da-1ef4-4e07-847a-a0d1f8a80ad9, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:52:14 np0005625203.localdomain dnsmasq-dhcp[306288]: read /var/lib/neutron/dhcp/aeac20da-1ef4-4e07-847a-a0d1f8a80ad9/opts
Feb 20 09:52:14 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:52:14.630 262775 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:52:12Z, description=, device_id=3b959844-90d2-486b-9e34-b9eff25d51c3, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4e49940>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4e496d0>], id=83f4c6e2-c5d4-4b85-9f4b-6a2faab4d83a, ip_allocation=immediate, mac_address=fa:16:3e:4b:02:fb, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:51:59Z, description=, dns_domain=, id=679f63a5-827b-4ada-9337-37070bdd98e3, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPAdminTestJSON-test-network-1928182990, port_security_enabled=True, project_id=78fdd34f107b4ec7ac81795ecc3f677c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=3560, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=730, status=ACTIVE, subnets=['010d5fdd-24c2-48ee-b557-f8879b2d4228'], tags=[], tenant_id=78fdd34f107b4ec7ac81795ecc3f677c, updated_at=2026-02-20T09:52:00Z, vlan_transparent=None, network_id=679f63a5-827b-4ada-9337-37070bdd98e3, port_security_enabled=False, project_id=78fdd34f107b4ec7ac81795ecc3f677c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=769, status=DOWN, tags=[], tenant_id=78fdd34f107b4ec7ac81795ecc3f677c, updated_at=2026-02-20T09:52:12Z on network 679f63a5-827b-4ada-9337-37070bdd98e3
Feb 20 09:52:14 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:52:14Z|00071|binding|INFO|Releasing lport 189f50d0-ec3c-4391-ae4a-06afd6ad7a16 from this chassis (sb_readonly=0)
Feb 20 09:52:14 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:52:14Z|00072|binding|INFO|Setting lport 189f50d0-ec3c-4391-ae4a-06afd6ad7a16 down in Southbound
Feb 20 09:52:14 np0005625203.localdomain kernel: device tap189f50d0-ec left promiscuous mode
Feb 20 09:52:14 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:14.638 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:14 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:52:14.643 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-aeac20da-1ef4-4e07-847a-a0d1f8a80ad9', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aeac20da-1ef4-4e07-847a-a0d1f8a80ad9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '65685fb154414740b6b5e1276111b8bb', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625203.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6a19a018-28c8-4ea6-8726-adf082e39248, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=189f50d0-ec3c-4391-ae4a-06afd6ad7a16) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:52:14 np0005625203.localdomain ceph-mon[296066]: pgmap v136: 177 pgs: 177 active+clean; 192 MiB data, 889 MiB used, 41 GiB / 42 GiB avail; 9.2 MiB/s rd, 4.8 MiB/s wr, 308 op/s
Feb 20 09:52:14 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:52:14.646 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 189f50d0-ec3c-4391-ae4a-06afd6ad7a16 in datapath aeac20da-1ef4-4e07-847a-a0d1f8a80ad9 unbound from our chassis
Feb 20 09:52:14 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:52:14.649 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aeac20da-1ef4-4e07-847a-a0d1f8a80ad9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:52:14 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:52:14.651 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[97c7ab50-a757-4c3b-a3d4-38e4ef1fa737]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:52:14 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:14.658 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:14 np0005625203.localdomain dnsmasq[308552]: read /var/lib/neutron/dhcp/679f63a5-827b-4ada-9337-37070bdd98e3/addn_hosts - 1 addresses
Feb 20 09:52:14 np0005625203.localdomain dnsmasq-dhcp[308552]: read /var/lib/neutron/dhcp/679f63a5-827b-4ada-9337-37070bdd98e3/host
Feb 20 09:52:14 np0005625203.localdomain dnsmasq-dhcp[308552]: read /var/lib/neutron/dhcp/679f63a5-827b-4ada-9337-37070bdd98e3/opts
Feb 20 09:52:14 np0005625203.localdomain podman[308756]: 2026-02-20 09:52:14.819139976 +0000 UTC m=+0.050344494 container kill d0598e39a5a3921bf101d353f683d79e3de54381cc76557174903b3ec6951f9e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-679f63a5-827b-4ada-9337-37070bdd98e3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:52:15 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:52:15.037 262775 INFO neutron.agent.dhcp.agent [None req-2af830a3-ed5b-4c5b-9048-10d12a1c8e68 - - - - - -] DHCP configuration for ports {'83f4c6e2-c5d4-4b85-9f4b-6a2faab4d83a'} is completed
Feb 20 09:52:15 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:15.266 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:15 np0005625203.localdomain sshd[308778]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:52:15 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:52:15.941 2 INFO neutron.agent.securitygroups_rpc [None req-343265ee-aef8-4c0b-8b69-d5c79e80995b ad3bee90b7c843958ab29e9ae5697cd5 78fdd34f107b4ec7ac81795ecc3f677c - - default default] Security group member updated ['7f2f6730-5897-423d-9b80-6a0cf94c3a8f']
Feb 20 09:52:16 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:52:15.999 262775 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:52:15Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4ebaa60>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4ebaaf0>], id=421fd434-ea19-4214-8538-ebea0bc2687e, ip_allocation=immediate, mac_address=fa:16:3e:bb:28:30, name=tempest-FloatingIPAdminTestJSON-198108060, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:51:59Z, description=, dns_domain=, id=679f63a5-827b-4ada-9337-37070bdd98e3, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPAdminTestJSON-test-network-1928182990, port_security_enabled=True, project_id=78fdd34f107b4ec7ac81795ecc3f677c, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=3560, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=730, status=ACTIVE, subnets=['010d5fdd-24c2-48ee-b557-f8879b2d4228'], tags=[], tenant_id=78fdd34f107b4ec7ac81795ecc3f677c, updated_at=2026-02-20T09:52:00Z, vlan_transparent=None, network_id=679f63a5-827b-4ada-9337-37070bdd98e3, port_security_enabled=True, project_id=78fdd34f107b4ec7ac81795ecc3f677c, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['7f2f6730-5897-423d-9b80-6a0cf94c3a8f'], standard_attr_id=778, status=DOWN, tags=[], tenant_id=78fdd34f107b4ec7ac81795ecc3f677c, updated_at=2026-02-20T09:52:15Z on network 679f63a5-827b-4ada-9337-37070bdd98e3
Feb 20 09:52:16 np0005625203.localdomain dnsmasq[308552]: read /var/lib/neutron/dhcp/679f63a5-827b-4ada-9337-37070bdd98e3/addn_hosts - 2 addresses
Feb 20 09:52:16 np0005625203.localdomain podman[308795]: 2026-02-20 09:52:16.244211835 +0000 UTC m=+0.058419092 container kill d0598e39a5a3921bf101d353f683d79e3de54381cc76557174903b3ec6951f9e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-679f63a5-827b-4ada-9337-37070bdd98e3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Feb 20 09:52:16 np0005625203.localdomain dnsmasq-dhcp[308552]: read /var/lib/neutron/dhcp/679f63a5-827b-4ada-9337-37070bdd98e3/host
Feb 20 09:52:16 np0005625203.localdomain dnsmasq-dhcp[308552]: read /var/lib/neutron/dhcp/679f63a5-827b-4ada-9337-37070bdd98e3/opts
Feb 20 09:52:16 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:52:16.498 262775 INFO neutron.agent.dhcp.agent [None req-24ed3f08-0217-4fa7-99b1-8a948b242faa - - - - - -] DHCP configuration for ports {'421fd434-ea19-4214-8538-ebea0bc2687e'} is completed
Feb 20 09:52:16 np0005625203.localdomain sshd[308778]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:52:16 np0005625203.localdomain ceph-mon[296066]: pgmap v137: 177 pgs: 177 active+clean; 192 MiB data, 823 MiB used, 41 GiB / 42 GiB avail; 3.6 MiB/s rd, 19 KiB/s wr, 168 op/s
Feb 20 09:52:17 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:52:17 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:52:17 np0005625203.localdomain systemd[1]: tmp-crun.wDjtcm.mount: Deactivated successfully.
Feb 20 09:52:17 np0005625203.localdomain podman[308816]: 2026-02-20 09:52:17.787324137 +0000 UTC m=+0.097217669 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 20 09:52:17 np0005625203.localdomain podman[308816]: 2026-02-20 09:52:17.801475723 +0000 UTC m=+0.111369305 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 20 09:52:17 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:52:17 np0005625203.localdomain podman[308815]: 2026-02-20 09:52:17.877108606 +0000 UTC m=+0.188431172 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 20 09:52:17 np0005625203.localdomain podman[308815]: 2026-02-20 09:52:17.915400107 +0000 UTC m=+0.226722633 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 09:52:17 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 09:52:18 np0005625203.localdomain dnsmasq[306288]: exiting on receipt of SIGTERM
Feb 20 09:52:18 np0005625203.localdomain podman[308874]: 2026-02-20 09:52:18.059220962 +0000 UTC m=+0.056182303 container kill d53174bdce0f9eef72ccb1d87f96b7a358c96717cf9a4e0ad5736eaa2a4bf98f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-aeac20da-1ef4-4e07-847a-a0d1f8a80ad9, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 20 09:52:18 np0005625203.localdomain systemd[1]: libpod-d53174bdce0f9eef72ccb1d87f96b7a358c96717cf9a4e0ad5736eaa2a4bf98f.scope: Deactivated successfully.
Feb 20 09:52:18 np0005625203.localdomain podman[308886]: 2026-02-20 09:52:18.119976677 +0000 UTC m=+0.042346947 container died d53174bdce0f9eef72ccb1d87f96b7a358c96717cf9a4e0ad5736eaa2a4bf98f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-aeac20da-1ef4-4e07-847a-a0d1f8a80ad9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 20 09:52:18 np0005625203.localdomain podman[308886]: 2026-02-20 09:52:18.146219125 +0000 UTC m=+0.068589375 container cleanup d53174bdce0f9eef72ccb1d87f96b7a358c96717cf9a4e0ad5736eaa2a4bf98f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-aeac20da-1ef4-4e07-847a-a0d1f8a80ad9, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.vendor=CentOS)
Feb 20 09:52:18 np0005625203.localdomain systemd[1]: libpod-conmon-d53174bdce0f9eef72ccb1d87f96b7a358c96717cf9a4e0ad5736eaa2a4bf98f.scope: Deactivated successfully.
Feb 20 09:52:18 np0005625203.localdomain podman[308888]: 2026-02-20 09:52:18.204575215 +0000 UTC m=+0.122874060 container remove d53174bdce0f9eef72ccb1d87f96b7a358c96717cf9a4e0ad5736eaa2a4bf98f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-aeac20da-1ef4-4e07-847a-a0d1f8a80ad9, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127)
Feb 20 09:52:18 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:52:18.424 262775 INFO neutron.agent.dhcp.agent [None req-28e310df-761c-48b2-b3f8-41733d9e6a33 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:52:18 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:52:18.433 262775 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:52:18 np0005625203.localdomain ceph-mon[296066]: pgmap v138: 177 pgs: 177 active+clean; 192 MiB data, 823 MiB used, 41 GiB / 42 GiB avail; 3.6 MiB/s rd, 19 KiB/s wr, 168 op/s
Feb 20 09:52:18 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-4a8043a7ae832302aaf67084eba3cbada4e8d8137db62aea245334f7b2fefefa-merged.mount: Deactivated successfully.
Feb 20 09:52:18 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d53174bdce0f9eef72ccb1d87f96b7a358c96717cf9a4e0ad5736eaa2a4bf98f-userdata-shm.mount: Deactivated successfully.
Feb 20 09:52:18 np0005625203.localdomain systemd[1]: run-netns-qdhcp\x2daeac20da\x2d1ef4\x2d4e07\x2d847a\x2da0d1f8a80ad9.mount: Deactivated successfully.
Feb 20 09:52:19 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:19.059 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:19 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:52:19.062 262775 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:52:19 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:52:19 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:19.414 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:19 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:52:19.525 2 INFO neutron.agent.securitygroups_rpc [None req-e2d6b938-f6f5-4317-a8d2-0776bdf5afe2 ad3bee90b7c843958ab29e9ae5697cd5 78fdd34f107b4ec7ac81795ecc3f677c - - default default] Security group member updated ['7f2f6730-5897-423d-9b80-6a0cf94c3a8f']
Feb 20 09:52:19 np0005625203.localdomain dnsmasq[308552]: read /var/lib/neutron/dhcp/679f63a5-827b-4ada-9337-37070bdd98e3/addn_hosts - 1 addresses
Feb 20 09:52:19 np0005625203.localdomain dnsmasq-dhcp[308552]: read /var/lib/neutron/dhcp/679f63a5-827b-4ada-9337-37070bdd98e3/host
Feb 20 09:52:19 np0005625203.localdomain dnsmasq-dhcp[308552]: read /var/lib/neutron/dhcp/679f63a5-827b-4ada-9337-37070bdd98e3/opts
Feb 20 09:52:19 np0005625203.localdomain podman[308933]: 2026-02-20 09:52:19.770656235 +0000 UTC m=+0.063905741 container kill d0598e39a5a3921bf101d353f683d79e3de54381cc76557174903b3ec6951f9e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-679f63a5-827b-4ada-9337-37070bdd98e3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:52:20 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:20.301 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:20 np0005625203.localdomain sshd[308953]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:52:20 np0005625203.localdomain ceph-mon[296066]: pgmap v139: 177 pgs: 177 active+clean; 220 MiB data, 864 MiB used, 41 GiB / 42 GiB avail; 3.3 MiB/s rd, 1.5 MiB/s wr, 183 op/s
Feb 20 09:52:21 np0005625203.localdomain dnsmasq[308552]: read /var/lib/neutron/dhcp/679f63a5-827b-4ada-9337-37070bdd98e3/addn_hosts - 0 addresses
Feb 20 09:52:21 np0005625203.localdomain dnsmasq-dhcp[308552]: read /var/lib/neutron/dhcp/679f63a5-827b-4ada-9337-37070bdd98e3/host
Feb 20 09:52:21 np0005625203.localdomain podman[308972]: 2026-02-20 09:52:21.24059771 +0000 UTC m=+0.062332953 container kill d0598e39a5a3921bf101d353f683d79e3de54381cc76557174903b3ec6951f9e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-679f63a5-827b-4ada-9337-37070bdd98e3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Feb 20 09:52:21 np0005625203.localdomain dnsmasq-dhcp[308552]: read /var/lib/neutron/dhcp/679f63a5-827b-4ada-9337-37070bdd98e3/opts
Feb 20 09:52:21 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:21.454 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:21 np0005625203.localdomain kernel: device tap624d51a9-fd left promiscuous mode
Feb 20 09:52:21 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:52:21Z|00073|binding|INFO|Releasing lport 624d51a9-fd7a-4df3-b4a3-bea42f75f772 from this chassis (sb_readonly=0)
Feb 20 09:52:21 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:52:21Z|00074|binding|INFO|Setting lport 624d51a9-fd7a-4df3-b4a3-bea42f75f772 down in Southbound
Feb 20 09:52:21 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:52:21.467 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-679f63a5-827b-4ada-9337-37070bdd98e3', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-679f63a5-827b-4ada-9337-37070bdd98e3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '78fdd34f107b4ec7ac81795ecc3f677c', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625203.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=101c75d7-e64b-45a2-ad87-6430f31217e2, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=624d51a9-fd7a-4df3-b4a3-bea42f75f772) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:52:21 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:52:21.469 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 624d51a9-fd7a-4df3-b4a3-bea42f75f772 in datapath 679f63a5-827b-4ada-9337-37070bdd98e3 unbound from our chassis
Feb 20 09:52:21 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:52:21.472 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 679f63a5-827b-4ada-9337-37070bdd98e3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:52:21 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:52:21.473 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[9bd9cb65-129f-4402-a89c-627355b1ae63]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:52:21 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:21.486 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:21 np0005625203.localdomain sshd[308953]: Invalid user luis from 103.48.192.48 port 58737
Feb 20 09:52:22 np0005625203.localdomain sshd[308953]: Received disconnect from 103.48.192.48 port 58737:11: Bye Bye [preauth]
Feb 20 09:52:22 np0005625203.localdomain sshd[308953]: Disconnected from invalid user luis 103.48.192.48 port 58737 [preauth]
Feb 20 09:52:22 np0005625203.localdomain ceph-mon[296066]: pgmap v140: 177 pgs: 177 active+clean; 225 MiB data, 889 MiB used, 41 GiB / 42 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 151 op/s
Feb 20 09:52:23 np0005625203.localdomain dnsmasq[308552]: exiting on receipt of SIGTERM
Feb 20 09:52:23 np0005625203.localdomain podman[309012]: 2026-02-20 09:52:23.374025158 +0000 UTC m=+0.057684260 container kill d0598e39a5a3921bf101d353f683d79e3de54381cc76557174903b3ec6951f9e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-679f63a5-827b-4ada-9337-37070bdd98e3, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:52:23 np0005625203.localdomain systemd[1]: libpod-d0598e39a5a3921bf101d353f683d79e3de54381cc76557174903b3ec6951f9e.scope: Deactivated successfully.
Feb 20 09:52:23 np0005625203.localdomain podman[309024]: 2026-02-20 09:52:23.443421098 +0000 UTC m=+0.055691719 container died d0598e39a5a3921bf101d353f683d79e3de54381cc76557174903b3ec6951f9e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-679f63a5-827b-4ada-9337-37070bdd98e3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:52:23 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d0598e39a5a3921bf101d353f683d79e3de54381cc76557174903b3ec6951f9e-userdata-shm.mount: Deactivated successfully.
Feb 20 09:52:23 np0005625203.localdomain podman[309024]: 2026-02-20 09:52:23.475423365 +0000 UTC m=+0.087693936 container cleanup d0598e39a5a3921bf101d353f683d79e3de54381cc76557174903b3ec6951f9e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-679f63a5-827b-4ada-9337-37070bdd98e3, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 20 09:52:23 np0005625203.localdomain systemd[1]: libpod-conmon-d0598e39a5a3921bf101d353f683d79e3de54381cc76557174903b3ec6951f9e.scope: Deactivated successfully.
Feb 20 09:52:23 np0005625203.localdomain podman[309026]: 2026-02-20 09:52:23.533580659 +0000 UTC m=+0.139297587 container remove d0598e39a5a3921bf101d353f683d79e3de54381cc76557174903b3ec6951f9e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-679f63a5-827b-4ada-9337-37070bdd98e3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true)
Feb 20 09:52:23 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:52:23.558 262775 INFO neutron.agent.dhcp.agent [None req-9de40da9-4d04-4134-9720-daeae49cef15 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:52:23 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:52:23.620 262775 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:52:23 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:23.816 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:23 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:23.902 279640 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1771581128.9012985, 43720f70-168d-461a-8b52-ba71de6033a0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Feb 20 09:52:23 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:23.903 279640 INFO nova.compute.manager [-] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] VM Stopped (Lifecycle Event)
Feb 20 09:52:23 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:23.936 279640 DEBUG nova.compute.manager [None req-0487a863-bdf7-48fa-a7a9-faddb88523c3 - - - - - -] [instance: 43720f70-168d-461a-8b52-ba71de6033a0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Feb 20 09:52:24 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:24.062 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:24 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:52:24 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-58cab64dd61ad310fbfb08ad10e70436ec601d4031342bc2846c5887061da71b-merged.mount: Deactivated successfully.
Feb 20 09:52:24 np0005625203.localdomain systemd[1]: run-netns-qdhcp\x2d679f63a5\x2d827b\x2d4ada\x2d9337\x2d37070bdd98e3.mount: Deactivated successfully.
Feb 20 09:52:24 np0005625203.localdomain ceph-mon[296066]: pgmap v141: 177 pgs: 177 active+clean; 225 MiB data, 889 MiB used, 41 GiB / 42 GiB avail; 373 KiB/s rd, 2.1 MiB/s wr, 70 op/s
Feb 20 09:52:25 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:25.304 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:25 np0005625203.localdomain sshd[309054]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:52:25 np0005625203.localdomain sshd[309054]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:52:25 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:52:26 np0005625203.localdomain podman[309056]: 2026-02-20 09:52:26.028445844 +0000 UTC m=+0.081265728 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 20 09:52:26 np0005625203.localdomain podman[309056]: 2026-02-20 09:52:26.036262315 +0000 UTC m=+0.089082199 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent)
Feb 20 09:52:26 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:52:26 np0005625203.localdomain ceph-mon[296066]: pgmap v142: 177 pgs: 177 active+clean; 225 MiB data, 890 MiB used, 41 GiB / 42 GiB avail; 373 KiB/s rd, 2.1 MiB/s wr, 70 op/s
Feb 20 09:52:28 np0005625203.localdomain ceph-mon[296066]: pgmap v143: 177 pgs: 177 active+clean; 225 MiB data, 890 MiB used, 41 GiB / 42 GiB avail; 369 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Feb 20 09:52:28 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:52:28.828 2 INFO neutron.agent.securitygroups_rpc [req-b5848cd8-466b-4ef0-b46b-f454b04eeec2 req-730cd1ba-0675-45ee-8c23-360f67ec8632 2188e6de9cae445dadfba1541701ebd2 f299da1b635f4dafbe62328983ad1fae - - default default] Security group member updated ['4439e19b-bf91-4420-aff1-6854f961fef4']
Feb 20 09:52:28 np0005625203.localdomain podman[240359]: time="2026-02-20T09:52:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:52:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:52:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 155894 "" "Go-http-client/1.1"
Feb 20 09:52:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:52:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18292 "" "Go-http-client/1.1"
Feb 20 09:52:29 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:29.065 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:29 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:52:29 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/3541994609' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:52:30 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:30.307 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:30 np0005625203.localdomain ceph-mon[296066]: pgmap v144: 177 pgs: 177 active+clean; 176 MiB data, 802 MiB used, 41 GiB / 42 GiB avail; 380 KiB/s rd, 2.1 MiB/s wr, 80 op/s
Feb 20 09:52:31 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:52:31 np0005625203.localdomain systemd[1]: tmp-crun.gtWGF8.mount: Deactivated successfully.
Feb 20 09:52:31 np0005625203.localdomain podman[309074]: 2026-02-20 09:52:31.772123025 +0000 UTC m=+0.090453761 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:52:31 np0005625203.localdomain podman[309074]: 2026-02-20 09:52:31.814810032 +0000 UTC m=+0.133140828 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Feb 20 09:52:31 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:52:32 np0005625203.localdomain ceph-mon[296066]: pgmap v145: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 177 KiB/s rd, 660 KiB/s wr, 52 op/s
Feb 20 09:52:34 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:34.070 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:34 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:52:34 np0005625203.localdomain ceph-mon[296066]: pgmap v146: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Feb 20 09:52:34 np0005625203.localdomain sudo[309100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:52:35 np0005625203.localdomain sudo[309100]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:52:35 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:52:35 np0005625203.localdomain sudo[309100]: pam_unix(sudo:session): session closed for user root
Feb 20 09:52:35 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:52:35 np0005625203.localdomain sudo[309120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 09:52:35 np0005625203.localdomain sudo[309120]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:52:35 np0005625203.localdomain podman[309119]: 2026-02-20 09:52:35.109279478 +0000 UTC m=+0.091433541 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, io.openshift.tags=minimal rhel9, distribution-scope=public, version=9.7, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2026-02-05T04:57:10Z, name=ubi9/ubi-minimal, architecture=x86_64, release=1770267347, config_id=openstack_network_exporter, io.openshift.expose-services=, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, container_name=openstack_network_exporter, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers)
Feb 20 09:52:35 np0005625203.localdomain podman[309117]: 2026-02-20 09:52:35.165809811 +0000 UTC m=+0.148273414 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 20 09:52:35 np0005625203.localdomain podman[309117]: 2026-02-20 09:52:35.180456293 +0000 UTC m=+0.162919886 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:52:35 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:52:35 np0005625203.localdomain podman[309119]: 2026-02-20 09:52:35.234099267 +0000 UTC m=+0.216253320 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, build-date=2026-02-05T04:57:10Z, distribution-scope=public, name=ubi9/ubi-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-05T04:57:10Z, version=9.7, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., maintainer=Red Hat, Inc., release=1770267347, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7)
Feb 20 09:52:35 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:52:35 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:35.353 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:35 np0005625203.localdomain sudo[309120]: pam_unix(sudo:session): session closed for user root
Feb 20 09:52:35 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:52:35 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 09:52:35 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:52:35 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 09:52:36 np0005625203.localdomain sudo[309205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:52:36 np0005625203.localdomain sudo[309205]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:52:36 np0005625203.localdomain sudo[309205]: pam_unix(sudo:session): session closed for user root
Feb 20 09:52:36 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:52:36.829 2 INFO neutron.agent.securitygroups_rpc [None req-d63b0875-b3f1-4849-b165-16313644e666 eab28fccca6a48139a7d8b395d8f0b9a dc182b0a7cbb4e47b6b88befc2c48022 - - default default] Security group member updated ['8d0cb685-1e0f-43aa-973a-a081d9962496']
Feb 20 09:52:36 np0005625203.localdomain ceph-mon[296066]: pgmap v147: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Feb 20 09:52:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:52:37 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 09:52:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:52:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:52:37 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 09:52:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:52:37 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:52:37.390 2 INFO neutron.agent.securitygroups_rpc [None req-51ac2042-ec94-4975-95ca-42a72712c92b eab28fccca6a48139a7d8b395d8f0b9a dc182b0a7cbb4e47b6b88befc2c48022 - - default default] Security group member updated ['8d0cb685-1e0f-43aa-973a-a081d9962496']
Feb 20 09:52:37 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:52:37.421 262775 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:52:38 np0005625203.localdomain ceph-mon[296066]: pgmap v148: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Feb 20 09:52:38 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:52:39 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:39.073 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:39 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:52:39 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:52:39.394 262775 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:52:40 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:52:40.006 262775 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:52:40 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:40.387 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:41 np0005625203.localdomain ceph-mon[296066]: pgmap v149: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Feb 20 09:52:42 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #31. Immutable memtables: 0.
Feb 20 09:52:42 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:52:42.593429) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 20 09:52:42 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 31
Feb 20 09:52:42 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581162593472, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1126, "num_deletes": 253, "total_data_size": 1367012, "memory_usage": 1395816, "flush_reason": "Manual Compaction"}
Feb 20 09:52:42 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #32: started
Feb 20 09:52:42 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581162600140, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 32, "file_size": 646394, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20323, "largest_seqno": 21444, "table_properties": {"data_size": 642460, "index_size": 1597, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10892, "raw_average_key_size": 21, "raw_value_size": 633788, "raw_average_value_size": 1245, "num_data_blocks": 71, "num_entries": 509, "num_filter_entries": 509, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771581095, "oldest_key_time": 1771581095, "file_creation_time": 1771581162, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d7091244-ec7b-4a4c-98bf-27480b1bb7f4", "db_session_id": "XSHOJ401GNN3F43CMPC3", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Feb 20 09:52:42 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 6755 microseconds, and 2919 cpu microseconds.
Feb 20 09:52:42 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 20 09:52:42 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:52:42.600185) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #32: 646394 bytes OK
Feb 20 09:52:42 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:52:42.600210) [db/memtable_list.cc:519] [default] Level-0 commit table #32 started
Feb 20 09:52:42 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:52:42.601863) [db/memtable_list.cc:722] [default] Level-0 commit table #32: memtable #1 done
Feb 20 09:52:42 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:52:42.601897) EVENT_LOG_v1 {"time_micros": 1771581162601872, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 20 09:52:42 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:52:42.601913) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 20 09:52:42 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 1361433, prev total WAL file size 1361757, number of live WAL files 2.
Feb 20 09:52:42 np0005625203.localdomain ceph-mon[296066]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000028.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 09:52:42 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:52:42.602694) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033373536' seq:72057594037927935, type:22 .. '6D6772737461740034303037' seq:0, type:0; will stop at (end)
Feb 20 09:52:42 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 20 09:52:42 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [32(631KB)], [30(18MB)]
Feb 20 09:52:42 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581162602728, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [32], "files_L6": [30], "score": -1, "input_data_size": 19536628, "oldest_snapshot_seqno": -1}
Feb 20 09:52:42 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #33: 12298 keys, 17598389 bytes, temperature: kUnknown
Feb 20 09:52:42 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581162673485, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 33, "file_size": 17598389, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17529966, "index_size": 36578, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30789, "raw_key_size": 328417, "raw_average_key_size": 26, "raw_value_size": 17322283, "raw_average_value_size": 1408, "num_data_blocks": 1390, "num_entries": 12298, "num_filter_entries": 12298, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580698, "oldest_key_time": 0, "file_creation_time": 1771581162, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d7091244-ec7b-4a4c-98bf-27480b1bb7f4", "db_session_id": "XSHOJ401GNN3F43CMPC3", "orig_file_number": 33, "seqno_to_time_mapping": "N/A"}}
Feb 20 09:52:42 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 20 09:52:42 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:52:42.673762) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 17598389 bytes
Feb 20 09:52:42 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:52:42.675588) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 275.8 rd, 248.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 18.0 +0.0 blob) out(16.8 +0.0 blob), read-write-amplify(57.4) write-amplify(27.2) OK, records in: 12797, records dropped: 499 output_compression: NoCompression
Feb 20 09:52:42 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:52:42.675602) EVENT_LOG_v1 {"time_micros": 1771581162675595, "job": 16, "event": "compaction_finished", "compaction_time_micros": 70848, "compaction_time_cpu_micros": 34099, "output_level": 6, "num_output_files": 1, "total_output_size": 17598389, "num_input_records": 12797, "num_output_records": 12298, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 20 09:52:42 np0005625203.localdomain ceph-mon[296066]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 09:52:42 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581162675815, "job": 16, "event": "table_file_deletion", "file_number": 32}
Feb 20 09:52:42 np0005625203.localdomain ceph-mon[296066]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000030.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 09:52:42 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581162677362, "job": 16, "event": "table_file_deletion", "file_number": 30}
Feb 20 09:52:42 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:52:42.602608) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:52:42 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:52:42.677513) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:52:42 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:52:42.677521) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:52:42 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:52:42.677524) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:52:42 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:52:42.677527) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:52:42 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:52:42.677530) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:52:42 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:52:42.835 262775 INFO neutron.agent.linux.ip_lib [None req-d88826bc-ef82-4a06-8d2c-38a8b0618ddc - - - - - -] Device tap64d14e4b-22 cannot be used as it has no MAC address
Feb 20 09:52:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:42.862 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:42 np0005625203.localdomain kernel: device tap64d14e4b-22 entered promiscuous mode
Feb 20 09:52:42 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:52:42Z|00075|binding|INFO|Claiming lport 64d14e4b-224c-4007-8506-1948d7f0895b for this chassis.
Feb 20 09:52:42 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:52:42Z|00076|binding|INFO|64d14e4b-224c-4007-8506-1948d7f0895b: Claiming unknown
Feb 20 09:52:42 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581162.8746] manager: (tap64d14e4b-22): new Generic device (/org/freedesktop/NetworkManager/Devices/22)
Feb 20 09:52:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:42.875 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:42 np0005625203.localdomain systemd-udevd[309233]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 09:52:42 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:52:42.890 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-fc216fd0-deaa-4436-ae16-5542823e1a24', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc216fd0-deaa-4436-ae16-5542823e1a24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1912094b79b7458d948f9c005e08fee7', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a5d46681-a868-422d-bed4-f4e97aa3239e, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=64d14e4b-224c-4007-8506-1948d7f0895b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:52:42 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:52:42.892 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 64d14e4b-224c-4007-8506-1948d7f0895b in datapath fc216fd0-deaa-4436-ae16-5542823e1a24 bound to our chassis
Feb 20 09:52:42 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:52:42.896 161112 DEBUG neutron.agent.ovn.metadata.agent [-] Port 81e55ab7-8c67-4438-9f20-7e89352e6098 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Feb 20 09:52:42 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:52:42.897 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fc216fd0-deaa-4436-ae16-5542823e1a24, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:52:42 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:52:42.897 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[6cf25f57-65f4-4a0b-8cf3-13f1b26cb23c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:52:42 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap64d14e4b-22: No such device
Feb 20 09:52:42 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap64d14e4b-22: No such device
Feb 20 09:52:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:42.911 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:42 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:52:42Z|00077|binding|INFO|Setting lport 64d14e4b-224c-4007-8506-1948d7f0895b ovn-installed in OVS
Feb 20 09:52:42 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:52:42Z|00078|binding|INFO|Setting lport 64d14e4b-224c-4007-8506-1948d7f0895b up in Southbound
Feb 20 09:52:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:42.915 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:42 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap64d14e4b-22: No such device
Feb 20 09:52:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:42.918 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:42 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap64d14e4b-22: No such device
Feb 20 09:52:42 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap64d14e4b-22: No such device
Feb 20 09:52:42 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap64d14e4b-22: No such device
Feb 20 09:52:42 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap64d14e4b-22: No such device
Feb 20 09:52:42 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap64d14e4b-22: No such device
Feb 20 09:52:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:42.949 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:42.979 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:43 np0005625203.localdomain ceph-mon[296066]: pgmap v150: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 8.8 KiB/s rd, 341 B/s wr, 12 op/s
Feb 20 09:52:43 np0005625203.localdomain podman[309305]: 
Feb 20 09:52:43 np0005625203.localdomain podman[309305]: 2026-02-20 09:52:43.877150012 +0000 UTC m=+0.080831013 container create 316d64a306b02f1ee62c09b899cf23f96dd8ae1ae2d0ccce400f6f2d6a450db5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fc216fd0-deaa-4436-ae16-5542823e1a24, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127)
Feb 20 09:52:43 np0005625203.localdomain systemd[1]: Started libpod-conmon-316d64a306b02f1ee62c09b899cf23f96dd8ae1ae2d0ccce400f6f2d6a450db5.scope.
Feb 20 09:52:43 np0005625203.localdomain podman[309305]: 2026-02-20 09:52:43.833417764 +0000 UTC m=+0.037098805 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:52:43 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:52:43 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2552cc1cea6b6d3aa035558a84b4599e3bb9ca427a25b8cabf98721467f44837/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:52:43 np0005625203.localdomain podman[309305]: 2026-02-20 09:52:43.952038592 +0000 UTC m=+0.155719593 container init 316d64a306b02f1ee62c09b899cf23f96dd8ae1ae2d0ccce400f6f2d6a450db5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fc216fd0-deaa-4436-ae16-5542823e1a24, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS)
Feb 20 09:52:43 np0005625203.localdomain podman[309305]: 2026-02-20 09:52:43.961913757 +0000 UTC m=+0.165594778 container start 316d64a306b02f1ee62c09b899cf23f96dd8ae1ae2d0ccce400f6f2d6a450db5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fc216fd0-deaa-4436-ae16-5542823e1a24, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:52:43 np0005625203.localdomain dnsmasq[309323]: started, version 2.85 cachesize 150
Feb 20 09:52:43 np0005625203.localdomain dnsmasq[309323]: DNS service limited to local subnets
Feb 20 09:52:43 np0005625203.localdomain dnsmasq[309323]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:52:43 np0005625203.localdomain dnsmasq[309323]: warning: no upstream servers configured
Feb 20 09:52:43 np0005625203.localdomain dnsmasq-dhcp[309323]: DHCP, static leases only on 10.100.0.0, lease time 1d
Feb 20 09:52:43 np0005625203.localdomain dnsmasq[309323]: read /var/lib/neutron/dhcp/fc216fd0-deaa-4436-ae16-5542823e1a24/addn_hosts - 0 addresses
Feb 20 09:52:43 np0005625203.localdomain dnsmasq-dhcp[309323]: read /var/lib/neutron/dhcp/fc216fd0-deaa-4436-ae16-5542823e1a24/host
Feb 20 09:52:43 np0005625203.localdomain dnsmasq-dhcp[309323]: read /var/lib/neutron/dhcp/fc216fd0-deaa-4436-ae16-5542823e1a24/opts
Feb 20 09:52:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:44.074 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:44 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:52:44.156 262775 INFO neutron.agent.dhcp.agent [None req-042c0b8d-4c54-47ec-9824-df8c19649d8f - - - - - -] DHCP configuration for ports {'c1603454-842a-4c5e-8e5c-379ff0021842'} is completed
Feb 20 09:52:44 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:52:45 np0005625203.localdomain ceph-mon[296066]: pgmap v151: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:52:45 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e110 e110: 6 total, 6 up, 6 in
Feb 20 09:52:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:45.427 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:46 np0005625203.localdomain ceph-mon[296066]: osdmap e110: 6 total, 6 up, 6 in
Feb 20 09:52:47 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e111 e111: 6 total, 6 up, 6 in
Feb 20 09:52:47 np0005625203.localdomain ceph-mon[296066]: pgmap v153: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 3.8 KiB/s rd, 307 B/s wr, 5 op/s
Feb 20 09:52:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:47.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:52:47 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:52:47.678 262775 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:52:47Z, description=, device_id=e9eaf7e2-a744-4430-90a3-738e36e65f14, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4e49c10>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4e498e0>], id=71fa4970-a1e8-4266-a570-045d01602781, ip_allocation=immediate, mac_address=fa:16:3e:a2:b1:22, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:52:40Z, description=, dns_domain=, id=fc216fd0-deaa-4436-ae16-5542823e1a24, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServersTestJSON-670031142-network, port_security_enabled=True, project_id=1912094b79b7458d948f9c005e08fee7, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=21294, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=915, status=ACTIVE, subnets=['2441d8e8-86f1-47c2-8325-9200013561bd'], tags=[], tenant_id=1912094b79b7458d948f9c005e08fee7, updated_at=2026-02-20T09:52:41Z, vlan_transparent=None, network_id=fc216fd0-deaa-4436-ae16-5542823e1a24, port_security_enabled=False, project_id=1912094b79b7458d948f9c005e08fee7, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=964, status=DOWN, tags=[], tenant_id=1912094b79b7458d948f9c005e08fee7, updated_at=2026-02-20T09:52:47Z on network fc216fd0-deaa-4436-ae16-5542823e1a24
Feb 20 09:52:47 np0005625203.localdomain podman[309342]: 2026-02-20 09:52:47.929835802 +0000 UTC m=+0.073457337 container kill 316d64a306b02f1ee62c09b899cf23f96dd8ae1ae2d0ccce400f6f2d6a450db5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fc216fd0-deaa-4436-ae16-5542823e1a24, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:52:47 np0005625203.localdomain dnsmasq[309323]: read /var/lib/neutron/dhcp/fc216fd0-deaa-4436-ae16-5542823e1a24/addn_hosts - 1 addresses
Feb 20 09:52:47 np0005625203.localdomain dnsmasq-dhcp[309323]: read /var/lib/neutron/dhcp/fc216fd0-deaa-4436-ae16-5542823e1a24/host
Feb 20 09:52:47 np0005625203.localdomain dnsmasq-dhcp[309323]: read /var/lib/neutron/dhcp/fc216fd0-deaa-4436-ae16-5542823e1a24/opts
Feb 20 09:52:47 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:52:47 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:52:48 np0005625203.localdomain podman[309356]: 2026-02-20 09:52:48.05752186 +0000 UTC m=+0.096594701 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 20 09:52:48 np0005625203.localdomain podman[309356]: 2026-02-20 09:52:48.094657835 +0000 UTC m=+0.133730656 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 09:52:48 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 09:52:48 np0005625203.localdomain podman[309357]: 2026-02-20 09:52:48.114862238 +0000 UTC m=+0.152656729 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 20 09:52:48 np0005625203.localdomain podman[309357]: 2026-02-20 09:52:48.122651598 +0000 UTC m=+0.160446089 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 20 09:52:48 np0005625203.localdomain ceph-mon[296066]: osdmap e111: 6 total, 6 up, 6 in
Feb 20 09:52:48 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:52:48 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:52:48.241 262775 INFO neutron.agent.dhcp.agent [None req-b02590d9-518b-4630-a1b9-f05c7f2397bd - - - - - -] DHCP configuration for ports {'71fa4970-a1e8-4266-a570-045d01602781'} is completed
Feb 20 09:52:48 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:52:48.767 262775 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:52:47Z, description=, device_id=e9eaf7e2-a744-4430-90a3-738e36e65f14, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4ebac10>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4eba3a0>], id=71fa4970-a1e8-4266-a570-045d01602781, ip_allocation=immediate, mac_address=fa:16:3e:a2:b1:22, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:52:40Z, description=, dns_domain=, id=fc216fd0-deaa-4436-ae16-5542823e1a24, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ServersTestJSON-670031142-network, port_security_enabled=True, project_id=1912094b79b7458d948f9c005e08fee7, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=21294, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=915, status=ACTIVE, subnets=['2441d8e8-86f1-47c2-8325-9200013561bd'], tags=[], tenant_id=1912094b79b7458d948f9c005e08fee7, updated_at=2026-02-20T09:52:41Z, vlan_transparent=None, network_id=fc216fd0-deaa-4436-ae16-5542823e1a24, port_security_enabled=False, project_id=1912094b79b7458d948f9c005e08fee7, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=964, status=DOWN, tags=[], tenant_id=1912094b79b7458d948f9c005e08fee7, updated_at=2026-02-20T09:52:47Z on network fc216fd0-deaa-4436-ae16-5542823e1a24
Feb 20 09:52:48 np0005625203.localdomain podman[309427]: 2026-02-20 09:52:48.998212092 +0000 UTC m=+0.072220139 container kill 316d64a306b02f1ee62c09b899cf23f96dd8ae1ae2d0ccce400f6f2d6a450db5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fc216fd0-deaa-4436-ae16-5542823e1a24, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 20 09:52:48 np0005625203.localdomain dnsmasq[309323]: read /var/lib/neutron/dhcp/fc216fd0-deaa-4436-ae16-5542823e1a24/addn_hosts - 1 addresses
Feb 20 09:52:48 np0005625203.localdomain dnsmasq-dhcp[309323]: read /var/lib/neutron/dhcp/fc216fd0-deaa-4436-ae16-5542823e1a24/host
Feb 20 09:52:48 np0005625203.localdomain dnsmasq-dhcp[309323]: read /var/lib/neutron/dhcp/fc216fd0-deaa-4436-ae16-5542823e1a24/opts
Feb 20 09:52:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:49.077 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:49 np0005625203.localdomain ceph-mon[296066]: pgmap v155: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 4.7 KiB/s rd, 383 B/s wr, 6 op/s
Feb 20 09:52:49 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:52:49.292 262775 INFO neutron.agent.dhcp.agent [None req-eb5f7184-30ad-466d-b347-b0cab50e14b1 - - - - - -] DHCP configuration for ports {'71fa4970-a1e8-4266-a570-045d01602781'} is completed
Feb 20 09:52:49 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:52:49 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:52:49Z|00079|ovn_bfd|INFO|Enabled BFD on interface ovn-0c414b-0
Feb 20 09:52:49 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:52:49Z|00080|ovn_bfd|INFO|Enabled BFD on interface ovn-2275c3-0
Feb 20 09:52:49 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:52:49Z|00081|ovn_bfd|INFO|Enabled BFD on interface ovn-2df8cc-0
Feb 20 09:52:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:49.913 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:49.932 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:49.937 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:49.948 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:49.957 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:50.431 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:50.967 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:50.970 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:51 np0005625203.localdomain ceph-mon[296066]: pgmap v156: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 2.9 KiB/s wr, 20 op/s
Feb 20 09:52:51 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:51.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:52:51 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:51.360 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:52:51 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:51.360 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:52:51 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:51.361 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:52:51 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:51.361 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Auditing locally available compute resources for np0005625203.localdomain (node: np0005625203.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 20 09:52:51 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:51.362 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:52:51 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:51.703 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:51 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 20 09:52:51 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/85473296' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:52:51 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:51.824 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:52:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:52.049 279640 WARNING nova.virt.libvirt.driver [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 20 09:52:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:52.052 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Hypervisor/Node resource view: name=np0005625203.localdomain free_ram=11746MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 20 09:52:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:52.052 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:52:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:52.053 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:52:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:52.159 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 20 09:52:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:52.160 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Final resource view: name=np0005625203.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 20 09:52:52 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/85473296' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:52:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:52.204 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:52:52 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e112 e112: 6 total, 6 up, 6 in
Feb 20 09:52:52 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 20 09:52:52 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3931567334' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:52:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:52.670 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:52:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:52.678 279640 DEBUG nova.compute.provider_tree [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Inventory has not changed in ProviderTree for provider: e5d5157a-2df2-4f51-b5fb-cd2da3a8584e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 20 09:52:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:52.705 279640 DEBUG nova.scheduler.client.report [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Inventory has not changed for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 20 09:52:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:52.746 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Compute_service record updated for np0005625203.localdomain:np0005625203.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 20 09:52:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:52.747 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:52:53 np0005625203.localdomain ceph-mon[296066]: pgmap v157: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 36 KiB/s rd, 4.1 KiB/s wr, 49 op/s
Feb 20 09:52:53 np0005625203.localdomain ceph-mon[296066]: osdmap e112: 6 total, 6 up, 6 in
Feb 20 09:52:53 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/3931567334' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:52:53 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/2540140338' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:52:54 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:54.080 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:54 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/2299012886' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:52:54 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:52:54 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:52:54Z|00082|ovn_bfd|INFO|Disabled BFD on interface ovn-0c414b-0
Feb 20 09:52:54 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:52:54Z|00083|ovn_bfd|INFO|Disabled BFD on interface ovn-2275c3-0
Feb 20 09:52:54 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:52:54Z|00084|ovn_bfd|INFO|Disabled BFD on interface ovn-2df8cc-0
Feb 20 09:52:54 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:54.889 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:54 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:54.890 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:54 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:54.901 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:54 np0005625203.localdomain podman[309512]: 2026-02-20 09:52:54.997702083 +0000 UTC m=+0.043379269 container kill 316d64a306b02f1ee62c09b899cf23f96dd8ae1ae2d0ccce400f6f2d6a450db5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fc216fd0-deaa-4436-ae16-5542823e1a24, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 20 09:52:54 np0005625203.localdomain dnsmasq[309323]: read /var/lib/neutron/dhcp/fc216fd0-deaa-4436-ae16-5542823e1a24/addn_hosts - 0 addresses
Feb 20 09:52:54 np0005625203.localdomain dnsmasq-dhcp[309323]: read /var/lib/neutron/dhcp/fc216fd0-deaa-4436-ae16-5542823e1a24/host
Feb 20 09:52:54 np0005625203.localdomain dnsmasq-dhcp[309323]: read /var/lib/neutron/dhcp/fc216fd0-deaa-4436-ae16-5542823e1a24/opts
Feb 20 09:52:55 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:52:55Z|00085|binding|INFO|Releasing lport 64d14e4b-224c-4007-8506-1948d7f0895b from this chassis (sb_readonly=0)
Feb 20 09:52:55 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:55.179 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:55 np0005625203.localdomain kernel: device tap64d14e4b-22 left promiscuous mode
Feb 20 09:52:55 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:52:55Z|00086|binding|INFO|Setting lport 64d14e4b-224c-4007-8506-1948d7f0895b down in Southbound
Feb 20 09:52:55 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:52:55.192 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-fc216fd0-deaa-4436-ae16-5542823e1a24', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc216fd0-deaa-4436-ae16-5542823e1a24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1912094b79b7458d948f9c005e08fee7', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625203.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a5d46681-a868-422d-bed4-f4e97aa3239e, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=64d14e4b-224c-4007-8506-1948d7f0895b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:52:55 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:52:55.194 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 64d14e4b-224c-4007-8506-1948d7f0895b in datapath fc216fd0-deaa-4436-ae16-5542823e1a24 unbound from our chassis
Feb 20 09:52:55 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:52:55.197 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fc216fd0-deaa-4436-ae16-5542823e1a24, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:52:55 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:52:55.198 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[fa431451-c52e-4f08-a4de-7aeb6c0027ce]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:52:55 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:55.202 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:55 np0005625203.localdomain ceph-mon[296066]: pgmap v159: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 31 KiB/s rd, 3.7 KiB/s wr, 43 op/s
Feb 20 09:52:55 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:55.433 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:56 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:52:56.050 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'c2:c2:14', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '46:d0:69:88:97:47'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:52:56 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:52:56.051 161112 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 20 09:52:56 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:52:56.053 161112 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1e4d60e6-0be0-4143-b488-1b391fbc71ef, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 20 09:52:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:56.096 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:56 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:52:56.311 2 INFO neutron.agent.securitygroups_rpc [None req-35b4089d-d96b-4223-91a7-29363be26031 9b5edcaf5d0f48eea2ef440e3b3c2f79 85741ccf160049968710bbf0d3ed7a21 - - default default] Security group member updated ['1f0747df-ad50-4106-9a56-f1b68b2201c8']
Feb 20 09:52:56 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:52:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:56.747 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:52:56 np0005625203.localdomain podman[309534]: 2026-02-20 09:52:56.761637005 +0000 UTC m=+0.079526864 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb 20 09:52:56 np0005625203.localdomain podman[309534]: 2026-02-20 09:52:56.795291363 +0000 UTC m=+0.113181232 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:52:56 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:52:57 np0005625203.localdomain sshd[309552]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:52:57 np0005625203.localdomain ceph-mon[296066]: pgmap v160: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 30 KiB/s rd, 3.6 KiB/s wr, 41 op/s
Feb 20 09:52:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:57.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:52:57 np0005625203.localdomain sshd[309552]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:52:58 np0005625203.localdomain podman[309571]: 2026-02-20 09:52:58.221978244 +0000 UTC m=+0.044571316 container kill 316d64a306b02f1ee62c09b899cf23f96dd8ae1ae2d0ccce400f6f2d6a450db5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fc216fd0-deaa-4436-ae16-5542823e1a24, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, tcib_managed=true)
Feb 20 09:52:58 np0005625203.localdomain dnsmasq[309323]: exiting on receipt of SIGTERM
Feb 20 09:52:58 np0005625203.localdomain systemd[1]: libpod-316d64a306b02f1ee62c09b899cf23f96dd8ae1ae2d0ccce400f6f2d6a450db5.scope: Deactivated successfully.
Feb 20 09:52:58 np0005625203.localdomain podman[309586]: 2026-02-20 09:52:58.315682444 +0000 UTC m=+0.070481135 container died 316d64a306b02f1ee62c09b899cf23f96dd8ae1ae2d0ccce400f6f2d6a450db5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fc216fd0-deaa-4436-ae16-5542823e1a24, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 20 09:52:58 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-316d64a306b02f1ee62c09b899cf23f96dd8ae1ae2d0ccce400f6f2d6a450db5-userdata-shm.mount: Deactivated successfully.
Feb 20 09:52:58 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-2552cc1cea6b6d3aa035558a84b4599e3bb9ca427a25b8cabf98721467f44837-merged.mount: Deactivated successfully.
Feb 20 09:52:58 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:52:58.359 2 INFO neutron.agent.securitygroups_rpc [None req-3bf8b391-96d6-4728-ae92-83d8f7b4ba3a 9b5edcaf5d0f48eea2ef440e3b3c2f79 85741ccf160049968710bbf0d3ed7a21 - - default default] Security group member updated ['1f0747df-ad50-4106-9a56-f1b68b2201c8']
Feb 20 09:52:58 np0005625203.localdomain podman[309586]: 2026-02-20 09:52:58.362157827 +0000 UTC m=+0.116956478 container remove 316d64a306b02f1ee62c09b899cf23f96dd8ae1ae2d0ccce400f6f2d6a450db5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fc216fd0-deaa-4436-ae16-5542823e1a24, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 20 09:52:58 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:52:58.375 262775 INFO neutron.agent.linux.ip_lib [None req-6fd0b92a-c42a-4acc-935f-8fcd87fbec89 - - - - - -] Device tap9a92a23c-a9 cannot be used as it has no MAC address
Feb 20 09:52:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:58.400 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:58 np0005625203.localdomain kernel: device tap9a92a23c-a9 entered promiscuous mode
Feb 20 09:52:58 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:52:58.407 262775 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:52:58 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:52:58Z|00087|binding|INFO|Claiming lport 9a92a23c-a969-47b1-9de5-0b873c09ca6b for this chassis.
Feb 20 09:52:58 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581178.4129] manager: (tap9a92a23c-a9): new Generic device (/org/freedesktop/NetworkManager/Devices/23)
Feb 20 09:52:58 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:52:58Z|00088|binding|INFO|9a92a23c-a969-47b1-9de5-0b873c09ca6b: Claiming unknown
Feb 20 09:52:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:58.412 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:58 np0005625203.localdomain systemd-udevd[309619]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 09:52:58 np0005625203.localdomain systemd[1]: libpod-conmon-316d64a306b02f1ee62c09b899cf23f96dd8ae1ae2d0ccce400f6f2d6a450db5.scope: Deactivated successfully.
Feb 20 09:52:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:58.419 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:58 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:52:58.426 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-ff12f093-f8bd-48c7-b88e-d26a538371b2', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ff12f093-f8bd-48c7-b88e-d26a538371b2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5ceb06a838504815b25cba2f3a349263', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fe2aba22-2e36-4d47-8aaa-eccfe0c0f092, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=9a92a23c-a969-47b1-9de5-0b873c09ca6b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:52:58 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:52:58.428 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 9a92a23c-a969-47b1-9de5-0b873c09ca6b in datapath ff12f093-f8bd-48c7-b88e-d26a538371b2 bound to our chassis
Feb 20 09:52:58 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:52:58.431 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network ff12f093-f8bd-48c7-b88e-d26a538371b2 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:52:58 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:52:58.432 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[3519b89e-6d5a-46e3-b21d-2aa53b850fa5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:52:58 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap9a92a23c-a9: No such device
Feb 20 09:52:58 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:52:58Z|00089|binding|INFO|Setting lport 9a92a23c-a969-47b1-9de5-0b873c09ca6b ovn-installed in OVS
Feb 20 09:52:58 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:52:58Z|00090|binding|INFO|Setting lport 9a92a23c-a969-47b1-9de5-0b873c09ca6b up in Southbound
Feb 20 09:52:58 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap9a92a23c-a9: No such device
Feb 20 09:52:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:58.444 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:58 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap9a92a23c-a9: No such device
Feb 20 09:52:58 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap9a92a23c-a9: No such device
Feb 20 09:52:58 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap9a92a23c-a9: No such device
Feb 20 09:52:58 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap9a92a23c-a9: No such device
Feb 20 09:52:58 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap9a92a23c-a9: No such device
Feb 20 09:52:58 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap9a92a23c-a9: No such device
Feb 20 09:52:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:58.477 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:58.506 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:58 np0005625203.localdomain podman[240359]: time="2026-02-20T09:52:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:52:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:52:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 155894 "" "Go-http-client/1.1"
Feb 20 09:52:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:52:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18298 "" "Go-http-client/1.1"
Feb 20 09:52:59 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:59.083 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:52:59 np0005625203.localdomain systemd[1]: run-netns-qdhcp\x2dfc216fd0\x2ddeaa\x2d4436\x2dae16\x2d5542823e1a24.mount: Deactivated successfully.
Feb 20 09:52:59 np0005625203.localdomain ceph-mon[296066]: pgmap v161: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 25 KiB/s rd, 3.0 KiB/s wr, 34 op/s
Feb 20 09:52:59 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:52:59 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:59.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:52:59 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:59.342 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:52:59 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:52:59.342 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 20 09:52:59 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:52:59.381 262775 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:52:59 np0005625203.localdomain podman[309690]: 2026-02-20 09:52:59.474561985 +0000 UTC m=+0.086344494 container create b3b294359a7652d04c341ea4700bf9870cf78819c956a94e56eb3cfc5b3c1a6b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ff12f093-f8bd-48c7-b88e-d26a538371b2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 20 09:52:59 np0005625203.localdomain systemd[1]: Started libpod-conmon-b3b294359a7652d04c341ea4700bf9870cf78819c956a94e56eb3cfc5b3c1a6b.scope.
Feb 20 09:52:59 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:52:59 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/746fcd815e6e199bb86766490ceaa14216130e819ce15f64568c61b02240a846/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:52:59 np0005625203.localdomain podman[309690]: 2026-02-20 09:52:59.437497222 +0000 UTC m=+0.049279761 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:52:59 np0005625203.localdomain podman[309690]: 2026-02-20 09:52:59.546268697 +0000 UTC m=+0.158051206 container init b3b294359a7652d04c341ea4700bf9870cf78819c956a94e56eb3cfc5b3c1a6b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ff12f093-f8bd-48c7-b88e-d26a538371b2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true)
Feb 20 09:52:59 np0005625203.localdomain podman[309690]: 2026-02-20 09:52:59.554914704 +0000 UTC m=+0.166697213 container start b3b294359a7652d04c341ea4700bf9870cf78819c956a94e56eb3cfc5b3c1a6b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ff12f093-f8bd-48c7-b88e-d26a538371b2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb 20 09:52:59 np0005625203.localdomain dnsmasq[309709]: started, version 2.85 cachesize 150
Feb 20 09:52:59 np0005625203.localdomain dnsmasq[309709]: DNS service limited to local subnets
Feb 20 09:52:59 np0005625203.localdomain dnsmasq[309709]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:52:59 np0005625203.localdomain dnsmasq[309709]: warning: no upstream servers configured
Feb 20 09:52:59 np0005625203.localdomain dnsmasq-dhcp[309709]: DHCP, static leases only on 10.100.0.0, lease time 1d
Feb 20 09:52:59 np0005625203.localdomain dnsmasq[309709]: read /var/lib/neutron/dhcp/ff12f093-f8bd-48c7-b88e-d26a538371b2/addn_hosts - 0 addresses
Feb 20 09:52:59 np0005625203.localdomain dnsmasq-dhcp[309709]: read /var/lib/neutron/dhcp/ff12f093-f8bd-48c7-b88e-d26a538371b2/host
Feb 20 09:52:59 np0005625203.localdomain dnsmasq-dhcp[309709]: read /var/lib/neutron/dhcp/ff12f093-f8bd-48c7-b88e-d26a538371b2/opts
Feb 20 09:52:59 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:52:59.702 262775 INFO neutron.agent.dhcp.agent [None req-87ee0fa0-44a9-496e-9e2b-558e4ea46085 - - - - - -] DHCP configuration for ports {'ef66d003-d5b0-44e9-a80f-49dd4f6aac3e'} is completed
Feb 20 09:53:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:00.014 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:00 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/4201398583' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:53:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:00.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:53:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:00.342 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 20 09:53:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:00.343 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 20 09:53:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:00.355 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 20 09:53:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:00.356 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:53:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:00.435 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:01 np0005625203.localdomain ceph-mon[296066]: pgmap v162: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 18 KiB/s rd, 1023 B/s wr, 23 op/s
Feb 20 09:53:01 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/3538498167' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:53:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:01.352 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:53:02 np0005625203.localdomain ceph-mon[296066]: pgmap v163: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:53:02 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/4284690376' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:53:02 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/4284690376' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:53:02 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:53:02 np0005625203.localdomain podman[309710]: 2026-02-20 09:53:02.778792473 +0000 UTC m=+0.094972761 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible)
Feb 20 09:53:02 np0005625203.localdomain podman[309710]: 2026-02-20 09:53:02.844490238 +0000 UTC m=+0.160670486 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Feb 20 09:53:02 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:53:03 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:53:03.326 262775 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:53:02Z, description=, device_id=da5bdbc2-1507-4da5-9265-efc184d2b2bc, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4ef2520>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4ef9a00>], id=ac46d125-05cb-40fe-a14a-7dd6c818633f, ip_allocation=immediate, mac_address=fa:16:3e:dc:1a:52, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:52:56Z, description=, dns_domain=, id=ff12f093-f8bd-48c7-b88e-d26a538371b2, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-SecurityGroupsTestJSON-1205263893-network, port_security_enabled=True, project_id=5ceb06a838504815b25cba2f3a349263, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=5108, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1023, status=ACTIVE, subnets=['2ff834e4-0ad7-4a74-b6fb-0c2efc4eca49'], tags=[], tenant_id=5ceb06a838504815b25cba2f3a349263, updated_at=2026-02-20T09:52:57Z, vlan_transparent=None, network_id=ff12f093-f8bd-48c7-b88e-d26a538371b2, port_security_enabled=False, project_id=5ceb06a838504815b25cba2f3a349263, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1064, status=DOWN, tags=[], tenant_id=5ceb06a838504815b25cba2f3a349263, updated_at=2026-02-20T09:53:02Z on network ff12f093-f8bd-48c7-b88e-d26a538371b2
Feb 20 09:53:03 np0005625203.localdomain dnsmasq[309709]: read /var/lib/neutron/dhcp/ff12f093-f8bd-48c7-b88e-d26a538371b2/addn_hosts - 1 addresses
Feb 20 09:53:03 np0005625203.localdomain dnsmasq-dhcp[309709]: read /var/lib/neutron/dhcp/ff12f093-f8bd-48c7-b88e-d26a538371b2/host
Feb 20 09:53:03 np0005625203.localdomain podman[309751]: 2026-02-20 09:53:03.558476409 +0000 UTC m=+0.063353265 container kill b3b294359a7652d04c341ea4700bf9870cf78819c956a94e56eb3cfc5b3c1a6b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ff12f093-f8bd-48c7-b88e-d26a538371b2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 20 09:53:03 np0005625203.localdomain dnsmasq-dhcp[309709]: read /var/lib/neutron/dhcp/ff12f093-f8bd-48c7-b88e-d26a538371b2/opts
Feb 20 09:53:03 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:53:03.864 262775 INFO neutron.agent.dhcp.agent [None req-34ceb966-5167-4b90-8dfb-eef612e9b0c7 - - - - - -] DHCP configuration for ports {'ac46d125-05cb-40fe-a14a-7dd6c818633f'} is completed
Feb 20 09:53:04 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:04.085 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:04 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:53:04 np0005625203.localdomain ceph-mon[296066]: pgmap v164: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:53:05 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:05.470 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:05 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:53:05.614 262775 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:53:02Z, description=, device_id=da5bdbc2-1507-4da5-9265-efc184d2b2bc, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4eacb50>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4eacc70>], id=ac46d125-05cb-40fe-a14a-7dd6c818633f, ip_allocation=immediate, mac_address=fa:16:3e:dc:1a:52, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:52:56Z, description=, dns_domain=, id=ff12f093-f8bd-48c7-b88e-d26a538371b2, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-SecurityGroupsTestJSON-1205263893-network, port_security_enabled=True, project_id=5ceb06a838504815b25cba2f3a349263, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=5108, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1023, status=ACTIVE, subnets=['2ff834e4-0ad7-4a74-b6fb-0c2efc4eca49'], tags=[], tenant_id=5ceb06a838504815b25cba2f3a349263, updated_at=2026-02-20T09:52:57Z, vlan_transparent=None, network_id=ff12f093-f8bd-48c7-b88e-d26a538371b2, port_security_enabled=False, project_id=5ceb06a838504815b25cba2f3a349263, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=1064, status=DOWN, tags=[], tenant_id=5ceb06a838504815b25cba2f3a349263, updated_at=2026-02-20T09:53:02Z on network ff12f093-f8bd-48c7-b88e-d26a538371b2
Feb 20 09:53:05 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:53:05 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:53:05 np0005625203.localdomain systemd[1]: tmp-crun.bokkP0.mount: Deactivated successfully.
Feb 20 09:53:05 np0005625203.localdomain systemd[1]: tmp-crun.VdrYHj.mount: Deactivated successfully.
Feb 20 09:53:05 np0005625203.localdomain dnsmasq[309709]: read /var/lib/neutron/dhcp/ff12f093-f8bd-48c7-b88e-d26a538371b2/addn_hosts - 1 addresses
Feb 20 09:53:05 np0005625203.localdomain dnsmasq-dhcp[309709]: read /var/lib/neutron/dhcp/ff12f093-f8bd-48c7-b88e-d26a538371b2/host
Feb 20 09:53:05 np0005625203.localdomain dnsmasq-dhcp[309709]: read /var/lib/neutron/dhcp/ff12f093-f8bd-48c7-b88e-d26a538371b2/opts
Feb 20 09:53:05 np0005625203.localdomain podman[309810]: 2026-02-20 09:53:05.855122861 +0000 UTC m=+0.076685127 container kill b3b294359a7652d04c341ea4700bf9870cf78819c956a94e56eb3cfc5b3c1a6b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ff12f093-f8bd-48c7-b88e-d26a538371b2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:53:05 np0005625203.localdomain podman[309777]: 2026-02-20 09:53:05.835405183 +0000 UTC m=+0.145369355 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 20 09:53:05 np0005625203.localdomain podman[309778]: 2026-02-20 09:53:05.889375707 +0000 UTC m=+0.196032467 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, build-date=2026-02-05T04:57:10Z, name=ubi9/ubi-minimal, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-02-05T04:57:10Z, release=1770267347, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, managed_by=edpm_ansible, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, distribution-scope=public, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=9.7)
Feb 20 09:53:05 np0005625203.localdomain podman[309777]: 2026-02-20 09:53:05.9183208 +0000 UTC m=+0.228284912 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb 20 09:53:05 np0005625203.localdomain podman[309778]: 2026-02-20 09:53:05.931264899 +0000 UTC m=+0.237921639 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, version=9.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Red Hat, Inc., architecture=x86_64, build-date=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, config_id=openstack_network_exporter, release=1770267347, vcs-type=git, io.buildah.version=1.33.7, managed_by=edpm_ansible, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9/ubi-minimal, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=)
Feb 20 09:53:05 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:53:05 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:53:06 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:53:06.137 262775 INFO neutron.agent.dhcp.agent [None req-261f2c0e-fe66-469b-9b53-f9802c85a031 - - - - - -] DHCP configuration for ports {'ac46d125-05cb-40fe-a14a-7dd6c818633f'} is completed
Feb 20 09:53:06 np0005625203.localdomain ceph-mon[296066]: pgmap v165: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:53:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:53:07 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 09:53:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:53:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:53:07 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 09:53:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:53:07 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:53:07Z|00091|ovn_bfd|INFO|Enabled BFD on interface ovn-0c414b-0
Feb 20 09:53:07 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:53:07Z|00092|ovn_bfd|INFO|Enabled BFD on interface ovn-2275c3-0
Feb 20 09:53:07 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:53:07Z|00093|ovn_bfd|INFO|Enabled BFD on interface ovn-2df8cc-0
Feb 20 09:53:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:53:07.669 161112 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:53:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:53:07.670 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:53:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:53:07.670 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:53:07 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:07.711 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:07 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:07.714 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:07 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:07.717 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:08 np0005625203.localdomain sshd[309851]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:53:08 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:08.570 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:08 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:08.643 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:08 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:08.659 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:08 np0005625203.localdomain ceph-mon[296066]: pgmap v166: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:53:08 np0005625203.localdomain sshd[309851]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:53:09 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:09.088 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:09 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:53:09 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:09.470 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:09 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:09.554 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:09 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:09.579 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:10 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:10.509 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:10 np0005625203.localdomain ceph-mon[296066]: pgmap v167: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:53:12 np0005625203.localdomain ceph-mon[296066]: pgmap v168: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:53:14 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:14.090 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:14 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:53:14 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:53:14.372 2 INFO neutron.agent.securitygroups_rpc [None req-8db2cda0-f70c-405a-8e32-bbd09e8f7101 253c13edd6b940a9b5cd64cc7bfa0ff5 bae77758d77d4d43af7ac10744892742 - - default default] Security group member updated ['f7e09389-cc97-415e-b643-e34abd10c9b7']
Feb 20 09:53:14 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:53:14.450 2 INFO neutron.agent.securitygroups_rpc [None req-8db2cda0-f70c-405a-8e32-bbd09e8f7101 253c13edd6b940a9b5cd64cc7bfa0ff5 bae77758d77d4d43af7ac10744892742 - - default default] Security group member updated ['f7e09389-cc97-415e-b643-e34abd10c9b7']
Feb 20 09:53:14 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:53:14Z|00094|ovn_bfd|INFO|Disabled BFD on interface ovn-0c414b-0
Feb 20 09:53:14 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:53:14Z|00095|ovn_bfd|INFO|Disabled BFD on interface ovn-2275c3-0
Feb 20 09:53:14 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:53:14Z|00096|ovn_bfd|INFO|Disabled BFD on interface ovn-2df8cc-0
Feb 20 09:53:14 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:14.603 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:14 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:14.606 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:14 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:14.626 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:14 np0005625203.localdomain dnsmasq[309709]: read /var/lib/neutron/dhcp/ff12f093-f8bd-48c7-b88e-d26a538371b2/addn_hosts - 0 addresses
Feb 20 09:53:14 np0005625203.localdomain dnsmasq-dhcp[309709]: read /var/lib/neutron/dhcp/ff12f093-f8bd-48c7-b88e-d26a538371b2/host
Feb 20 09:53:14 np0005625203.localdomain dnsmasq-dhcp[309709]: read /var/lib/neutron/dhcp/ff12f093-f8bd-48c7-b88e-d26a538371b2/opts
Feb 20 09:53:14 np0005625203.localdomain podman[309872]: 2026-02-20 09:53:14.720815771 +0000 UTC m=+0.059048632 container kill b3b294359a7652d04c341ea4700bf9870cf78819c956a94e56eb3cfc5b3c1a6b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ff12f093-f8bd-48c7-b88e-d26a538371b2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:53:14 np0005625203.localdomain ceph-mon[296066]: pgmap v169: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:53:14 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:53:14.974 262775 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:53:14 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:14.974 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:14 np0005625203.localdomain kernel: device tap9a92a23c-a9 left promiscuous mode
Feb 20 09:53:14 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:53:14Z|00097|binding|INFO|Releasing lport 9a92a23c-a969-47b1-9de5-0b873c09ca6b from this chassis (sb_readonly=0)
Feb 20 09:53:14 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:53:14Z|00098|binding|INFO|Setting lport 9a92a23c-a969-47b1-9de5-0b873c09ca6b down in Southbound
Feb 20 09:53:14 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:53:14.976 2 INFO neutron.agent.securitygroups_rpc [None req-ea37899b-0895-4039-936c-a92dc4af71cc 253c13edd6b940a9b5cd64cc7bfa0ff5 bae77758d77d4d43af7ac10744892742 - - default default] Security group member updated ['f7e09389-cc97-415e-b643-e34abd10c9b7']
Feb 20 09:53:14 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:53:14.984 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-ff12f093-f8bd-48c7-b88e-d26a538371b2', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ff12f093-f8bd-48c7-b88e-d26a538371b2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5ceb06a838504815b25cba2f3a349263', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625203.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fe2aba22-2e36-4d47-8aaa-eccfe0c0f092, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=9a92a23c-a969-47b1-9de5-0b873c09ca6b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:53:14 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:53:14.987 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 9a92a23c-a969-47b1-9de5-0b873c09ca6b in datapath ff12f093-f8bd-48c7-b88e-d26a538371b2 unbound from our chassis
Feb 20 09:53:14 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:53:14.989 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ff12f093-f8bd-48c7-b88e-d26a538371b2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:53:14 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:53:14.991 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[35afc60e-93fd-458a-8827-6251161e726f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:53:14 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:14.998 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:15 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:53:15.282 2 INFO neutron.agent.securitygroups_rpc [None req-6181ff08-47fa-4ccb-88a6-fd4810762b1a 253c13edd6b940a9b5cd64cc7bfa0ff5 bae77758d77d4d43af7ac10744892742 - - default default] Security group member updated ['f7e09389-cc97-415e-b643-e34abd10c9b7']
Feb 20 09:53:15 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:15.511 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:16 np0005625203.localdomain ceph-mon[296066]: pgmap v170: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:53:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:53:17.194 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:53:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:53:17.195 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:53:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:53:17.195 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:53:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:53:17.195 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:53:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:53:17.195 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:53:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:53:17.195 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:53:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:53:17.195 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:53:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:53:17.196 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:53:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:53:17.196 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:53:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:53:17.196 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:53:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:53:17.196 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:53:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:53:17.196 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:53:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:53:17.196 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:53:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:53:17.196 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:53:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:53:17.196 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:53:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:53:17.197 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:53:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:53:17.197 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:53:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:53:17.197 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:53:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:53:17.197 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:53:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:53:17.197 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:53:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:53:17.197 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:53:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:53:17.197 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:53:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:53:17.197 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:53:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:53:17.198 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:53:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:53:17.198 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:53:17 np0005625203.localdomain dnsmasq[309709]: exiting on receipt of SIGTERM
Feb 20 09:53:17 np0005625203.localdomain systemd[1]: libpod-b3b294359a7652d04c341ea4700bf9870cf78819c956a94e56eb3cfc5b3c1a6b.scope: Deactivated successfully.
Feb 20 09:53:17 np0005625203.localdomain podman[309910]: 2026-02-20 09:53:17.832968082 +0000 UTC m=+0.065649556 container kill b3b294359a7652d04c341ea4700bf9870cf78819c956a94e56eb3cfc5b3c1a6b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ff12f093-f8bd-48c7-b88e-d26a538371b2, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 20 09:53:17 np0005625203.localdomain podman[309924]: 2026-02-20 09:53:17.908286204 +0000 UTC m=+0.062936602 container died b3b294359a7652d04c341ea4700bf9870cf78819c956a94e56eb3cfc5b3c1a6b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ff12f093-f8bd-48c7-b88e-d26a538371b2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 20 09:53:17 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b3b294359a7652d04c341ea4700bf9870cf78819c956a94e56eb3cfc5b3c1a6b-userdata-shm.mount: Deactivated successfully.
Feb 20 09:53:17 np0005625203.localdomain podman[309924]: 2026-02-20 09:53:17.947502143 +0000 UTC m=+0.102152501 container cleanup b3b294359a7652d04c341ea4700bf9870cf78819c956a94e56eb3cfc5b3c1a6b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ff12f093-f8bd-48c7-b88e-d26a538371b2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 20 09:53:17 np0005625203.localdomain systemd[1]: libpod-conmon-b3b294359a7652d04c341ea4700bf9870cf78819c956a94e56eb3cfc5b3c1a6b.scope: Deactivated successfully.
Feb 20 09:53:17 np0005625203.localdomain podman[309926]: 2026-02-20 09:53:17.990308924 +0000 UTC m=+0.135560262 container remove b3b294359a7652d04c341ea4700bf9870cf78819c956a94e56eb3cfc5b3c1a6b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-ff12f093-f8bd-48c7-b88e-d26a538371b2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Feb 20 09:53:18 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:53:18.180 262775 INFO neutron.agent.dhcp.agent [None req-afe4edf9-5a47-4b98-982a-b83bb9090330 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:53:18 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:53:18.182 262775 INFO neutron.agent.dhcp.agent [None req-afe4edf9-5a47-4b98-982a-b83bb9090330 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:53:18 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:53:18.546 262775 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:53:18 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:53:18 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:53:18 np0005625203.localdomain podman[309955]: 2026-02-20 09:53:18.76774001 +0000 UTC m=+0.080555155 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 20 09:53:18 np0005625203.localdomain ceph-mon[296066]: pgmap v171: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:53:18 np0005625203.localdomain podman[309954]: 2026-02-20 09:53:18.820311772 +0000 UTC m=+0.136282385 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 09:53:18 np0005625203.localdomain podman[309954]: 2026-02-20 09:53:18.82609361 +0000 UTC m=+0.142064273 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 20 09:53:18 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-746fcd815e6e199bb86766490ceaa14216130e819ce15f64568c61b02240a846-merged.mount: Deactivated successfully.
Feb 20 09:53:18 np0005625203.localdomain systemd[1]: run-netns-qdhcp\x2dff12f093\x2df8bd\x2d48c7\x2db88e\x2dd26a538371b2.mount: Deactivated successfully.
Feb 20 09:53:18 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 09:53:18 np0005625203.localdomain podman[309955]: 2026-02-20 09:53:18.85237448 +0000 UTC m=+0.165189635 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 20 09:53:18 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:53:18 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:18.902 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:19 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:19.093 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:19 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:53:20 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:20.514 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:20 np0005625203.localdomain ceph-mon[296066]: pgmap v172: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:53:21 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:53:21.737 262775 INFO neutron.agent.linux.ip_lib [None req-a8f5f4c4-106e-475c-8297-b9dfba7300f0 - - - - - -] Device tapc40a1d99-89 cannot be used as it has no MAC address
Feb 20 09:53:21 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:21.768 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:21 np0005625203.localdomain kernel: device tapc40a1d99-89 entered promiscuous mode
Feb 20 09:53:21 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581201.7787] manager: (tapc40a1d99-89): new Generic device (/org/freedesktop/NetworkManager/Devices/24)
Feb 20 09:53:21 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:21.780 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:21 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:53:21Z|00099|binding|INFO|Claiming lport c40a1d99-89cb-432f-968e-ecbcda2aa422 for this chassis.
Feb 20 09:53:21 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:53:21Z|00100|binding|INFO|c40a1d99-89cb-432f-968e-ecbcda2aa422: Claiming unknown
Feb 20 09:53:21 np0005625203.localdomain systemd-udevd[310010]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 09:53:21 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:53:21.790 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-487c0c01-f5a0-46d2-a762-7820c2dbcbc2', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-487c0c01-f5a0-46d2-a762-7820c2dbcbc2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bae77758d77d4d43af7ac10744892742', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b1ea5f5c-587f-48e0-b8af-546f4545fb12, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=c40a1d99-89cb-432f-968e-ecbcda2aa422) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:53:21 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:53:21.792 161112 INFO neutron.agent.ovn.metadata.agent [-] Port c40a1d99-89cb-432f-968e-ecbcda2aa422 in datapath 487c0c01-f5a0-46d2-a762-7820c2dbcbc2 bound to our chassis
Feb 20 09:53:21 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:53:21.794 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 487c0c01-f5a0-46d2-a762-7820c2dbcbc2 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:53:21 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:53:21.795 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[9001afde-c6ad-433b-881c-dd142f245395]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:53:21 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapc40a1d99-89: No such device
Feb 20 09:53:21 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:53:21Z|00101|binding|INFO|Setting lport c40a1d99-89cb-432f-968e-ecbcda2aa422 ovn-installed in OVS
Feb 20 09:53:21 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:53:21Z|00102|binding|INFO|Setting lport c40a1d99-89cb-432f-968e-ecbcda2aa422 up in Southbound
Feb 20 09:53:21 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapc40a1d99-89: No such device
Feb 20 09:53:21 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:21.815 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:21 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapc40a1d99-89: No such device
Feb 20 09:53:21 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapc40a1d99-89: No such device
Feb 20 09:53:21 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapc40a1d99-89: No such device
Feb 20 09:53:21 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapc40a1d99-89: No such device
Feb 20 09:53:21 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapc40a1d99-89: No such device
Feb 20 09:53:21 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapc40a1d99-89: No such device
Feb 20 09:53:21 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:21.860 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:21 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:21.889 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:22 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #34. Immutable memtables: 0.
Feb 20 09:53:22 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:53:22.624946) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 20 09:53:22 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 34
Feb 20 09:53:22 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581202625040, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 752, "num_deletes": 256, "total_data_size": 682253, "memory_usage": 696648, "flush_reason": "Manual Compaction"}
Feb 20 09:53:22 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #35: started
Feb 20 09:53:22 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581202630858, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 35, "file_size": 444548, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21449, "largest_seqno": 22196, "table_properties": {"data_size": 441222, "index_size": 1181, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 7843, "raw_average_key_size": 18, "raw_value_size": 434335, "raw_average_value_size": 1049, "num_data_blocks": 52, "num_entries": 414, "num_filter_entries": 414, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771581162, "oldest_key_time": 1771581162, "file_creation_time": 1771581202, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d7091244-ec7b-4a4c-98bf-27480b1bb7f4", "db_session_id": "XSHOJ401GNN3F43CMPC3", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Feb 20 09:53:22 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 5991 microseconds, and 2438 cpu microseconds.
Feb 20 09:53:22 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 20 09:53:22 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:53:22.630946) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #35: 444548 bytes OK
Feb 20 09:53:22 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:53:22.630968) [db/memtable_list.cc:519] [default] Level-0 commit table #35 started
Feb 20 09:53:22 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:53:22.633158) [db/memtable_list.cc:722] [default] Level-0 commit table #35: memtable #1 done
Feb 20 09:53:22 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:53:22.633176) EVENT_LOG_v1 {"time_micros": 1771581202633170, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 20 09:53:22 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:53:22.633196) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 20 09:53:22 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 678233, prev total WAL file size 678557, number of live WAL files 2.
Feb 20 09:53:22 np0005625203.localdomain ceph-mon[296066]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000031.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 09:53:22 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:53:22.634097) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033373731' seq:72057594037927935, type:22 .. '6C6F676D0034303232' seq:0, type:0; will stop at (end)
Feb 20 09:53:22 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 20 09:53:22 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [35(434KB)], [33(16MB)]
Feb 20 09:53:22 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581202634176, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [35], "files_L6": [33], "score": -1, "input_data_size": 18042937, "oldest_snapshot_seqno": -1}
Feb 20 09:53:22 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #36: 12183 keys, 17901647 bytes, temperature: kUnknown
Feb 20 09:53:22 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581202718460, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 36, "file_size": 17901647, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17832999, "index_size": 37123, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30469, "raw_key_size": 327009, "raw_average_key_size": 26, "raw_value_size": 17626358, "raw_average_value_size": 1446, "num_data_blocks": 1409, "num_entries": 12183, "num_filter_entries": 12183, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580698, "oldest_key_time": 0, "file_creation_time": 1771581202, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d7091244-ec7b-4a4c-98bf-27480b1bb7f4", "db_session_id": "XSHOJ401GNN3F43CMPC3", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Feb 20 09:53:22 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 20 09:53:22 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:53:22.718812) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 17901647 bytes
Feb 20 09:53:22 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:53:22.721001) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 213.8 rd, 212.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 16.8 +0.0 blob) out(17.1 +0.0 blob), read-write-amplify(80.9) write-amplify(40.3) OK, records in: 12712, records dropped: 529 output_compression: NoCompression
Feb 20 09:53:22 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:53:22.721035) EVENT_LOG_v1 {"time_micros": 1771581202721020, "job": 18, "event": "compaction_finished", "compaction_time_micros": 84402, "compaction_time_cpu_micros": 49001, "output_level": 6, "num_output_files": 1, "total_output_size": 17901647, "num_input_records": 12712, "num_output_records": 12183, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 20 09:53:22 np0005625203.localdomain ceph-mon[296066]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 09:53:22 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581202721363, "job": 18, "event": "table_file_deletion", "file_number": 35}
Feb 20 09:53:22 np0005625203.localdomain ceph-mon[296066]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000033.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 09:53:22 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581202724829, "job": 18, "event": "table_file_deletion", "file_number": 33}
Feb 20 09:53:22 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:53:22.633954) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:53:22 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:53:22.724934) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:53:22 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:53:22.724941) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:53:22 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:53:22.724945) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:53:22 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:53:22.724949) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:53:22 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:53:22.724953) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:53:22 np0005625203.localdomain podman[310079]: 
Feb 20 09:53:22 np0005625203.localdomain podman[310079]: 2026-02-20 09:53:22.816920143 +0000 UTC m=+0.100352606 container create 86b33a8d347140993f8262a7d00e984e06565de5bcafcfd4b545e0ae3b3a0c32 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-487c0c01-f5a0-46d2-a762-7820c2dbcbc2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:53:22 np0005625203.localdomain ceph-mon[296066]: pgmap v173: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:53:22 np0005625203.localdomain systemd[1]: Started libpod-conmon-86b33a8d347140993f8262a7d00e984e06565de5bcafcfd4b545e0ae3b3a0c32.scope.
Feb 20 09:53:22 np0005625203.localdomain systemd[1]: tmp-crun.ZQZojk.mount: Deactivated successfully.
Feb 20 09:53:22 np0005625203.localdomain podman[310079]: 2026-02-20 09:53:22.770638175 +0000 UTC m=+0.054070658 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:53:22 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:53:22 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e72df0ac08398acbf0027f3821923aadda93c78f7b30c64958805c9b8b05d7e9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:53:22 np0005625203.localdomain podman[310079]: 2026-02-20 09:53:22.909431136 +0000 UTC m=+0.192863599 container init 86b33a8d347140993f8262a7d00e984e06565de5bcafcfd4b545e0ae3b3a0c32 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-487c0c01-f5a0-46d2-a762-7820c2dbcbc2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127)
Feb 20 09:53:22 np0005625203.localdomain podman[310079]: 2026-02-20 09:53:22.919262889 +0000 UTC m=+0.202695342 container start 86b33a8d347140993f8262a7d00e984e06565de5bcafcfd4b545e0ae3b3a0c32 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-487c0c01-f5a0-46d2-a762-7820c2dbcbc2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 20 09:53:22 np0005625203.localdomain dnsmasq[310098]: started, version 2.85 cachesize 150
Feb 20 09:53:22 np0005625203.localdomain dnsmasq[310098]: DNS service limited to local subnets
Feb 20 09:53:22 np0005625203.localdomain dnsmasq[310098]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:53:22 np0005625203.localdomain dnsmasq[310098]: warning: no upstream servers configured
Feb 20 09:53:22 np0005625203.localdomain dnsmasq-dhcp[310098]: DHCP, static leases only on 10.100.0.16, lease time 1d
Feb 20 09:53:22 np0005625203.localdomain dnsmasq[310098]: read /var/lib/neutron/dhcp/487c0c01-f5a0-46d2-a762-7820c2dbcbc2/addn_hosts - 0 addresses
Feb 20 09:53:22 np0005625203.localdomain dnsmasq-dhcp[310098]: read /var/lib/neutron/dhcp/487c0c01-f5a0-46d2-a762-7820c2dbcbc2/host
Feb 20 09:53:22 np0005625203.localdomain dnsmasq-dhcp[310098]: read /var/lib/neutron/dhcp/487c0c01-f5a0-46d2-a762-7820c2dbcbc2/opts
Feb 20 09:53:23 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:53:23.083 262775 INFO neutron.agent.dhcp.agent [None req-d196e1fa-d76d-455b-8561-081a2a4fb768 - - - - - -] DHCP configuration for ports {'40956bcc-23de-492f-893a-78b829500b94'} is completed
Feb 20 09:53:23 np0005625203.localdomain dnsmasq[310098]: exiting on receipt of SIGTERM
Feb 20 09:53:23 np0005625203.localdomain podman[310116]: 2026-02-20 09:53:23.232115528 +0000 UTC m=+0.062573051 container kill 86b33a8d347140993f8262a7d00e984e06565de5bcafcfd4b545e0ae3b3a0c32 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-487c0c01-f5a0-46d2-a762-7820c2dbcbc2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Feb 20 09:53:23 np0005625203.localdomain systemd[1]: libpod-86b33a8d347140993f8262a7d00e984e06565de5bcafcfd4b545e0ae3b3a0c32.scope: Deactivated successfully.
Feb 20 09:53:23 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:53:23Z|00103|binding|INFO|Removing iface tapc40a1d99-89 ovn-installed in OVS
Feb 20 09:53:23 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:53:23.238 161112 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port f6abcf43-313a-4c7b-bdc4-53edb3f18b15 with type ""
Feb 20 09:53:23 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:53:23Z|00104|binding|INFO|Removing lport c40a1d99-89cb-432f-968e-ecbcda2aa422 ovn-installed in OVS
Feb 20 09:53:23 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:53:23.240 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-487c0c01-f5a0-46d2-a762-7820c2dbcbc2', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-487c0c01-f5a0-46d2-a762-7820c2dbcbc2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bae77758d77d4d43af7ac10744892742', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625203.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b1ea5f5c-587f-48e0-b8af-546f4545fb12, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=c40a1d99-89cb-432f-968e-ecbcda2aa422) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:53:23 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:23.240 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:23 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:53:23.242 161112 INFO neutron.agent.ovn.metadata.agent [-] Port c40a1d99-89cb-432f-968e-ecbcda2aa422 in datapath 487c0c01-f5a0-46d2-a762-7820c2dbcbc2 unbound from our chassis
Feb 20 09:53:23 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:53:23.245 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 487c0c01-f5a0-46d2-a762-7820c2dbcbc2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:53:23 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:53:23.246 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[d25f8607-cebb-4021-87d9-605bf11e0ce2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:53:23 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:23.249 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:23 np0005625203.localdomain podman[310131]: 2026-02-20 09:53:23.317616315 +0000 UTC m=+0.058460134 container died 86b33a8d347140993f8262a7d00e984e06565de5bcafcfd4b545e0ae3b3a0c32 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-487c0c01-f5a0-46d2-a762-7820c2dbcbc2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Feb 20 09:53:23 np0005625203.localdomain podman[310131]: 2026-02-20 09:53:23.363978345 +0000 UTC m=+0.104822124 container remove 86b33a8d347140993f8262a7d00e984e06565de5bcafcfd4b545e0ae3b3a0c32 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-487c0c01-f5a0-46d2-a762-7820c2dbcbc2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:53:23 np0005625203.localdomain systemd[1]: libpod-conmon-86b33a8d347140993f8262a7d00e984e06565de5bcafcfd4b545e0ae3b3a0c32.scope: Deactivated successfully.
Feb 20 09:53:23 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:23.377 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:23 np0005625203.localdomain kernel: device tapc40a1d99-89 left promiscuous mode
Feb 20 09:53:23 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:23.400 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:23 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:53:23.415 262775 INFO neutron.agent.dhcp.agent [None req-4460b7d6-6dc4-497e-a054-1b1519a19577 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:53:23 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:53:23.450 262775 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:53:23 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-e72df0ac08398acbf0027f3821923aadda93c78f7b30c64958805c9b8b05d7e9-merged.mount: Deactivated successfully.
Feb 20 09:53:23 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-86b33a8d347140993f8262a7d00e984e06565de5bcafcfd4b545e0ae3b3a0c32-userdata-shm.mount: Deactivated successfully.
Feb 20 09:53:23 np0005625203.localdomain systemd[1]: run-netns-qdhcp\x2d487c0c01\x2df5a0\x2d46d2\x2da762\x2d7820c2dbcbc2.mount: Deactivated successfully.
Feb 20 09:53:23 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:23.823 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:24 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:24.095 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:24 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:53:24 np0005625203.localdomain ceph-mon[296066]: pgmap v174: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:53:25 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:53:25.080 2 INFO neutron.agent.securitygroups_rpc [None req-9d454723-199e-4c87-997c-435a75780787 945c7903a05c4b26bd0f26bad77773be 75007688d77c439d8ee3fe7c58acf581 - - default default] Security group member updated ['9ad83b0d-3fc1-4df1-be0c-b38363c67626']
Feb 20 09:53:25 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:25.517 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:25 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:53:25.550 2 INFO neutron.agent.securitygroups_rpc [None req-1234b8d9-654d-451e-95cb-316b1fc4ede0 945c7903a05c4b26bd0f26bad77773be 75007688d77c439d8ee3fe7c58acf581 - - default default] Security group member updated ['9ad83b0d-3fc1-4df1-be0c-b38363c67626']
Feb 20 09:53:26 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:53:26.199 262775 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:53:26 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:53:26.320 2 INFO neutron.agent.securitygroups_rpc [None req-2cdd2daf-d30d-4deb-a790-e995ba310f91 945c7903a05c4b26bd0f26bad77773be 75007688d77c439d8ee3fe7c58acf581 - - default default] Security group member updated ['9ad83b0d-3fc1-4df1-be0c-b38363c67626']
Feb 20 09:53:26 np0005625203.localdomain ceph-mon[296066]: pgmap v175: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:53:27 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:53:27.307 2 INFO neutron.agent.securitygroups_rpc [None req-170d638f-d647-4f80-a7a3-f133bd9dbf7c 945c7903a05c4b26bd0f26bad77773be 75007688d77c439d8ee3fe7c58acf581 - - default default] Security group member updated ['9ad83b0d-3fc1-4df1-be0c-b38363c67626']
Feb 20 09:53:27 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:53:27 np0005625203.localdomain podman[310155]: 2026-02-20 09:53:27.767695691 +0000 UTC m=+0.084365353 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Feb 20 09:53:27 np0005625203.localdomain podman[310155]: 2026-02-20 09:53:27.776493362 +0000 UTC m=+0.093163024 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Feb 20 09:53:27 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:53:28 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:53:28.915 2 INFO neutron.agent.securitygroups_rpc [None req-c8a66b7a-94e3-4469-9bdc-a709861d759e 945c7903a05c4b26bd0f26bad77773be 75007688d77c439d8ee3fe7c58acf581 - - default default] Security group member updated ['9ad83b0d-3fc1-4df1-be0c-b38363c67626']
Feb 20 09:53:28 np0005625203.localdomain ceph-mon[296066]: pgmap v176: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:53:28 np0005625203.localdomain podman[240359]: time="2026-02-20T09:53:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:53:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:53:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 155894 "" "Go-http-client/1.1"
Feb 20 09:53:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:53:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18294 "" "Go-http-client/1.1"
Feb 20 09:53:29 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:29.099 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:29 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:53:29 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:53:29.561 2 INFO neutron.agent.securitygroups_rpc [None req-47b66a4e-d860-4539-a526-725ae67efd11 945c7903a05c4b26bd0f26bad77773be 75007688d77c439d8ee3fe7c58acf581 - - default default] Security group member updated ['9ad83b0d-3fc1-4df1-be0c-b38363c67626']
Feb 20 09:53:30 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:53:30.464 2 INFO neutron.agent.securitygroups_rpc [None req-11ba55d2-9392-4bf8-866e-0f6b7421a111 945c7903a05c4b26bd0f26bad77773be 75007688d77c439d8ee3fe7c58acf581 - - default default] Security group member updated ['9ad83b0d-3fc1-4df1-be0c-b38363c67626']
Feb 20 09:53:30 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:30.521 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:30 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:53:30.872 2 INFO neutron.agent.securitygroups_rpc [None req-813a578a-3c42-4faa-a8fa-18fef9f75e4f 945c7903a05c4b26bd0f26bad77773be 75007688d77c439d8ee3fe7c58acf581 - - default default] Security group member updated ['9ad83b0d-3fc1-4df1-be0c-b38363c67626']
Feb 20 09:53:30 np0005625203.localdomain ceph-mon[296066]: pgmap v177: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:53:31 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:53:31.475 2 INFO neutron.agent.securitygroups_rpc [None req-cf8e4aa2-945c-4732-b1d9-06cd52c9d8b9 945c7903a05c4b26bd0f26bad77773be 75007688d77c439d8ee3fe7c58acf581 - - default default] Security group member updated ['9ad83b0d-3fc1-4df1-be0c-b38363c67626']
Feb 20 09:53:32 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:53:32.889 2 INFO neutron.agent.securitygroups_rpc [None req-0ba4bd77-300a-4aef-8bf9-70b27ff0d0d5 eedc91db7da847aab912b3b8401d5b18 8d5c2f81bbf4423c8ccdbeb44081c499 - - default default] Security group member updated ['943c86ba-7264-4974-89ae-938b95d72620']
Feb 20 09:53:32 np0005625203.localdomain ceph-mon[296066]: pgmap v178: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:53:33 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:53:33.264 2 INFO neutron.agent.securitygroups_rpc [None req-e49f1726-acc6-4366-8506-477a12f2a7e4 945c7903a05c4b26bd0f26bad77773be 75007688d77c439d8ee3fe7c58acf581 - - default default] Security group member updated ['9ad83b0d-3fc1-4df1-be0c-b38363c67626']
Feb 20 09:53:33 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:53:33.265 2 INFO neutron.agent.securitygroups_rpc [None req-bbdf9d9d-afcb-4396-b80a-79eb3001d8e5 eedc91db7da847aab912b3b8401d5b18 8d5c2f81bbf4423c8ccdbeb44081c499 - - default default] Security group member updated ['943c86ba-7264-4974-89ae-938b95d72620']
Feb 20 09:53:33 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:53:33.278 262775 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:53:33 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:53:33 np0005625203.localdomain systemd[1]: tmp-crun.DxtcQw.mount: Deactivated successfully.
Feb 20 09:53:33 np0005625203.localdomain podman[310174]: 2026-02-20 09:53:33.764135627 +0000 UTC m=+0.084451235 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb 20 09:53:33 np0005625203.localdomain podman[310174]: 2026-02-20 09:53:33.805280087 +0000 UTC m=+0.125595695 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 20 09:53:33 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:53:33 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:53:33.968 2 INFO neutron.agent.securitygroups_rpc [None req-55d03934-9517-419a-915b-7eb31a90c9a3 945c7903a05c4b26bd0f26bad77773be 75007688d77c439d8ee3fe7c58acf581 - - default default] Security group member updated ['9ad83b0d-3fc1-4df1-be0c-b38363c67626']
Feb 20 09:53:34 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:34.122 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:34 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:53:34 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:53:34.371 2 INFO neutron.agent.securitygroups_rpc [None req-b58a03ae-8801-426a-8262-b9afab11fa37 945c7903a05c4b26bd0f26bad77773be 75007688d77c439d8ee3fe7c58acf581 - - default default] Security group member updated ['9ad83b0d-3fc1-4df1-be0c-b38363c67626']
Feb 20 09:53:34 np0005625203.localdomain ceph-mon[296066]: pgmap v179: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:53:35 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:35.525 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:35 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:53:35.579 262775 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:53:36 np0005625203.localdomain sudo[310201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:53:36 np0005625203.localdomain sudo[310201]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:53:36 np0005625203.localdomain sudo[310201]: pam_unix(sudo:session): session closed for user root
Feb 20 09:53:36 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:53:36 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:53:36 np0005625203.localdomain sudo[310226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 09:53:36 np0005625203.localdomain sudo[310226]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:53:36 np0005625203.localdomain podman[310219]: 2026-02-20 09:53:36.411569417 +0000 UTC m=+0.107300220 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb 20 09:53:36 np0005625203.localdomain podman[310220]: 2026-02-20 09:53:36.457949548 +0000 UTC m=+0.146915122 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, version=9.7, release=1770267347, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.created=2026-02-05T04:57:10Z, name=ubi9/ubi-minimal, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_id=openstack_network_exporter, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64)
Feb 20 09:53:36 np0005625203.localdomain podman[310219]: 2026-02-20 09:53:36.478255824 +0000 UTC m=+0.173986637 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260127, 
tcib_managed=true)
Feb 20 09:53:36 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:53:36 np0005625203.localdomain podman[310220]: 2026-02-20 09:53:36.501377617 +0000 UTC m=+0.190343231 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.7, io.openshift.expose-services=, managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1770267347, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64, build-date=2026-02-05T04:57:10Z, maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, container_name=openstack_network_exporter, vendor=Red Hat, Inc., config_id=openstack_network_exporter)
Feb 20 09:53:36 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:53:36 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:53:36.921 262775 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:53:36 np0005625203.localdomain ceph-mon[296066]: pgmap v180: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:53:37 np0005625203.localdomain sudo[310226]: pam_unix(sudo:session): session closed for user root
Feb 20 09:53:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:53:37 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 09:53:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:53:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:53:37 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 09:53:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:53:37 np0005625203.localdomain sudo[310306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:53:37 np0005625203.localdomain sudo[310306]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:53:37 np0005625203.localdomain sudo[310306]: pam_unix(sudo:session): session closed for user root
Feb 20 09:53:37 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:53:37 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 09:53:37 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:53:37 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 09:53:38 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:53:38.290 262775 INFO neutron.agent.linux.ip_lib [None req-28398989-b72f-43d2-b814-707501cfbbb9 - - - - - -] Device tapeec877ca-de cannot be used as it has no MAC address
Feb 20 09:53:38 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:38.314 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:38 np0005625203.localdomain kernel: device tapeec877ca-de entered promiscuous mode
Feb 20 09:53:38 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:38.323 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:38 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581218.3244] manager: (tapeec877ca-de): new Generic device (/org/freedesktop/NetworkManager/Devices/25)
Feb 20 09:53:38 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:53:38Z|00105|binding|INFO|Claiming lport eec877ca-dee7-4bf3-90ec-2903b183b384 for this chassis.
Feb 20 09:53:38 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:53:38Z|00106|binding|INFO|eec877ca-dee7-4bf3-90ec-2903b183b384: Claiming unknown
Feb 20 09:53:38 np0005625203.localdomain systemd-udevd[310334]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 09:53:38 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:53:38.338 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-a83f0345-a8fd-4c40-975a-3f22acff054f', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a83f0345-a8fd-4c40-975a-3f22acff054f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7ef526d9e1e4325afc5a8eb7c2f52b5', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5bf29d3b-d050-425a-8d24-0d904cb3f51c, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=eec877ca-dee7-4bf3-90ec-2903b183b384) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:53:38 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:53:38.340 161112 INFO neutron.agent.ovn.metadata.agent [-] Port eec877ca-dee7-4bf3-90ec-2903b183b384 in datapath a83f0345-a8fd-4c40-975a-3f22acff054f bound to our chassis
Feb 20 09:53:38 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:53:38.342 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a83f0345-a8fd-4c40-975a-3f22acff054f or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:53:38 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:53:38.343 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[0e616ebf-fd97-45f2-ab8d-d4bad68edfc8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:53:38 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:53:38Z|00107|binding|INFO|Setting lport eec877ca-dee7-4bf3-90ec-2903b183b384 ovn-installed in OVS
Feb 20 09:53:38 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:53:38Z|00108|binding|INFO|Setting lport eec877ca-dee7-4bf3-90ec-2903b183b384 up in Southbound
Feb 20 09:53:38 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:38.357 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:38 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapeec877ca-de: No such device
Feb 20 09:53:38 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapeec877ca-de: No such device
Feb 20 09:53:38 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapeec877ca-de: No such device
Feb 20 09:53:38 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapeec877ca-de: No such device
Feb 20 09:53:38 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapeec877ca-de: No such device
Feb 20 09:53:38 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapeec877ca-de: No such device
Feb 20 09:53:38 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapeec877ca-de: No such device
Feb 20 09:53:38 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapeec877ca-de: No such device
Feb 20 09:53:38 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:38.398 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:38 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:53:38Z|00109|binding|INFO|Removing iface tapeec877ca-de ovn-installed in OVS
Feb 20 09:53:38 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:53:38Z|00110|binding|INFO|Removing lport eec877ca-dee7-4bf3-90ec-2903b183b384 ovn-installed in OVS
Feb 20 09:53:38 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:38.426 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:38 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:53:38.424 161112 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 79ebf4da-7aae-4d50-86c9-4ff2c75feed2 with type ""
Feb 20 09:53:38 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:53:38.425 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-a83f0345-a8fd-4c40-975a-3f22acff054f', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a83f0345-a8fd-4c40-975a-3f22acff054f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7ef526d9e1e4325afc5a8eb7c2f52b5', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5bf29d3b-d050-425a-8d24-0d904cb3f51c, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=eec877ca-dee7-4bf3-90ec-2903b183b384) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:53:38 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:53:38.427 161112 INFO neutron.agent.ovn.metadata.agent [-] Port eec877ca-dee7-4bf3-90ec-2903b183b384 in datapath a83f0345-a8fd-4c40-975a-3f22acff054f unbound from our chassis
Feb 20 09:53:38 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:53:38.429 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a83f0345-a8fd-4c40-975a-3f22acff054f or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:53:38 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:53:38.430 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[e2f4b6ea-3cea-4107-a660-47e6ea979e85]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:53:38 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:38.432 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:38 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:38.434 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:39 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:39.101 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:39 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:39.124 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:39 np0005625203.localdomain ceph-mon[296066]: pgmap v181: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:53:39 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:53:39 np0005625203.localdomain sshd[310416]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:53:39 np0005625203.localdomain podman[310405]: 
Feb 20 09:53:39 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:53:39 np0005625203.localdomain podman[310405]: 2026-02-20 09:53:39.327802007 +0000 UTC m=+0.090370408 container create 871aa04d1212ac202a8def77a2a03f80509a592db06b3fc8832ea3a80450600a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a83f0345-a8fd-4c40-975a-3f22acff054f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:53:39 np0005625203.localdomain systemd[1]: Started libpod-conmon-871aa04d1212ac202a8def77a2a03f80509a592db06b3fc8832ea3a80450600a.scope.
Feb 20 09:53:39 np0005625203.localdomain podman[310405]: 2026-02-20 09:53:39.285126651 +0000 UTC m=+0.047695062 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:53:39 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:53:39 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adc8cb4acb1f5f55220156cf854f795f76c89764763cb867cfdc44649f65ce93/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:53:39 np0005625203.localdomain podman[310405]: 2026-02-20 09:53:39.411411535 +0000 UTC m=+0.173979946 container init 871aa04d1212ac202a8def77a2a03f80509a592db06b3fc8832ea3a80450600a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a83f0345-a8fd-4c40-975a-3f22acff054f, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Feb 20 09:53:39 np0005625203.localdomain podman[310405]: 2026-02-20 09:53:39.421217248 +0000 UTC m=+0.183785649 container start 871aa04d1212ac202a8def77a2a03f80509a592db06b3fc8832ea3a80450600a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a83f0345-a8fd-4c40-975a-3f22acff054f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 20 09:53:39 np0005625203.localdomain dnsmasq[310424]: started, version 2.85 cachesize 150
Feb 20 09:53:39 np0005625203.localdomain dnsmasq[310424]: DNS service limited to local subnets
Feb 20 09:53:39 np0005625203.localdomain dnsmasq[310424]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:53:39 np0005625203.localdomain dnsmasq[310424]: warning: no upstream servers configured
Feb 20 09:53:39 np0005625203.localdomain dnsmasq-dhcp[310424]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Feb 20 09:53:39 np0005625203.localdomain dnsmasq[310424]: read /var/lib/neutron/dhcp/a83f0345-a8fd-4c40-975a-3f22acff054f/addn_hosts - 0 addresses
Feb 20 09:53:39 np0005625203.localdomain dnsmasq-dhcp[310424]: read /var/lib/neutron/dhcp/a83f0345-a8fd-4c40-975a-3f22acff054f/host
Feb 20 09:53:39 np0005625203.localdomain dnsmasq-dhcp[310424]: read /var/lib/neutron/dhcp/a83f0345-a8fd-4c40-975a-3f22acff054f/opts
Feb 20 09:53:39 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:53:39.604 262775 INFO neutron.agent.dhcp.agent [None req-40ca4874-ebea-4958-bf3e-17ec8dd37719 - - - - - -] DHCP configuration for ports {'d4c8b7c9-913a-4e73-ad75-69068ebe174f'} is completed
Feb 20 09:53:39 np0005625203.localdomain dnsmasq[310424]: exiting on receipt of SIGTERM
Feb 20 09:53:39 np0005625203.localdomain systemd[1]: libpod-871aa04d1212ac202a8def77a2a03f80509a592db06b3fc8832ea3a80450600a.scope: Deactivated successfully.
Feb 20 09:53:39 np0005625203.localdomain sshd[310416]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:53:39 np0005625203.localdomain podman[310443]: 2026-02-20 09:53:39.688026227 +0000 UTC m=+0.065291464 container kill 871aa04d1212ac202a8def77a2a03f80509a592db06b3fc8832ea3a80450600a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a83f0345-a8fd-4c40-975a-3f22acff054f, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 20 09:53:39 np0005625203.localdomain podman[310458]: 2026-02-20 09:53:39.761283456 +0000 UTC m=+0.058300089 container died 871aa04d1212ac202a8def77a2a03f80509a592db06b3fc8832ea3a80450600a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a83f0345-a8fd-4c40-975a-3f22acff054f, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Feb 20 09:53:39 np0005625203.localdomain podman[310458]: 2026-02-20 09:53:39.792986424 +0000 UTC m=+0.090003017 container cleanup 871aa04d1212ac202a8def77a2a03f80509a592db06b3fc8832ea3a80450600a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a83f0345-a8fd-4c40-975a-3f22acff054f, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:53:39 np0005625203.localdomain systemd[1]: libpod-conmon-871aa04d1212ac202a8def77a2a03f80509a592db06b3fc8832ea3a80450600a.scope: Deactivated successfully.
Feb 20 09:53:39 np0005625203.localdomain podman[310460]: 2026-02-20 09:53:39.857790683 +0000 UTC m=+0.143680383 container remove 871aa04d1212ac202a8def77a2a03f80509a592db06b3fc8832ea3a80450600a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a83f0345-a8fd-4c40-975a-3f22acff054f, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:53:39 np0005625203.localdomain kernel: device tapeec877ca-de left promiscuous mode
Feb 20 09:53:39 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:39.876 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:39 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:39.896 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:39 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:53:39.920 262775 INFO neutron.agent.dhcp.agent [None req-244a4c61-1c6e-4f3c-b063-fa6720125a36 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:53:39 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:53:39.920 262775 INFO neutron.agent.dhcp.agent [None req-244a4c61-1c6e-4f3c-b063-fa6720125a36 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:53:40 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-adc8cb4acb1f5f55220156cf854f795f76c89764763cb867cfdc44649f65ce93-merged.mount: Deactivated successfully.
Feb 20 09:53:40 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-871aa04d1212ac202a8def77a2a03f80509a592db06b3fc8832ea3a80450600a-userdata-shm.mount: Deactivated successfully.
Feb 20 09:53:40 np0005625203.localdomain systemd[1]: run-netns-qdhcp\x2da83f0345\x2da8fd\x2d4c40\x2d975a\x2d3f22acff054f.mount: Deactivated successfully.
Feb 20 09:53:40 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:40.527 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:41 np0005625203.localdomain ceph-mon[296066]: pgmap v182: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:53:43 np0005625203.localdomain ceph-mon[296066]: pgmap v183: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:53:44 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:53:44.125 262775 INFO neutron.agent.linux.ip_lib [None req-19808d2b-fefd-4f2f-a3b2-74523c2101f1 - - - - - -] Device tap3ec38c71-ae cannot be used as it has no MAC address
Feb 20 09:53:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:44.169 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:44.188 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:44 np0005625203.localdomain kernel: device tap3ec38c71-ae entered promiscuous mode
Feb 20 09:53:44 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581224.1967] manager: (tap3ec38c71-ae): new Generic device (/org/freedesktop/NetworkManager/Devices/26)
Feb 20 09:53:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:44.197 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:44 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:53:44Z|00111|binding|INFO|Claiming lport 3ec38c71-aee0-40e3-81cf-7245cebd355e for this chassis.
Feb 20 09:53:44 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:53:44Z|00112|binding|INFO|3ec38c71-aee0-40e3-81cf-7245cebd355e: Claiming unknown
Feb 20 09:53:44 np0005625203.localdomain systemd-udevd[310499]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 09:53:44 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:53:44.211 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-4eae03f8-9b4a-42c1-a31a-da4ef49ab13c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4eae03f8-9b4a-42c1-a31a-da4ef49ab13c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c91094550bca41b8ac81e1aef3ebc3a4', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d55dc7b1-4234-4dd6-b9e8-035a3129941f, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=3ec38c71-aee0-40e3-81cf-7245cebd355e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:53:44 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:53:44.214 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 3ec38c71-aee0-40e3-81cf-7245cebd355e in datapath 4eae03f8-9b4a-42c1-a31a-da4ef49ab13c bound to our chassis
Feb 20 09:53:44 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:53:44.220 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 4eae03f8-9b4a-42c1-a31a-da4ef49ab13c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:53:44 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:53:44.221 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[ce5f6340-2545-4214-ba29-d1e7553683b0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:53:44 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap3ec38c71-ae: No such device
Feb 20 09:53:44 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap3ec38c71-ae: No such device
Feb 20 09:53:44 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:53:44Z|00113|binding|INFO|Setting lport 3ec38c71-aee0-40e3-81cf-7245cebd355e ovn-installed in OVS
Feb 20 09:53:44 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:53:44Z|00114|binding|INFO|Setting lport 3ec38c71-aee0-40e3-81cf-7245cebd355e up in Southbound
Feb 20 09:53:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:44.239 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:44 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap3ec38c71-ae: No such device
Feb 20 09:53:44 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap3ec38c71-ae: No such device
Feb 20 09:53:44 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap3ec38c71-ae: No such device
Feb 20 09:53:44 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap3ec38c71-ae: No such device
Feb 20 09:53:44 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap3ec38c71-ae: No such device
Feb 20 09:53:44 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap3ec38c71-ae: No such device
Feb 20 09:53:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:44.278 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:44.311 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:44 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:53:45 np0005625203.localdomain podman[310570]: 2026-02-20 09:53:45.184517995 +0000 UTC m=+0.090534772 container create ccce1db0f52d0fdb610536e2867ff81267ee95600573f1141291a946cc7a78ab (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4eae03f8-9b4a-42c1-a31a-da4ef49ab13c, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Feb 20 09:53:45 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:53:45.184 2 INFO neutron.agent.securitygroups_rpc [None req-a4da6bb1-700c-4d71-a646-fe34335ad1c4 1b2d25a511a14396a23c89999045cc74 c91094550bca41b8ac81e1aef3ebc3a4 - - default default] Security group member updated ['075aba65-9a9e-4a7a-9bd7-67cbc5ab58ab']
Feb 20 09:53:45 np0005625203.localdomain ceph-mon[296066]: pgmap v184: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:53:45 np0005625203.localdomain podman[310570]: 2026-02-20 09:53:45.139389604 +0000 UTC m=+0.045406411 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:53:45 np0005625203.localdomain systemd[1]: Started libpod-conmon-ccce1db0f52d0fdb610536e2867ff81267ee95600573f1141291a946cc7a78ab.scope.
Feb 20 09:53:45 np0005625203.localdomain systemd[1]: tmp-crun.aaayTZ.mount: Deactivated successfully.
Feb 20 09:53:45 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:53:45 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf905a3af5063e52856c1c2c372a4361889a4cf36f4ce9e7f0452ae8d3d10bad/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:53:45 np0005625203.localdomain podman[310570]: 2026-02-20 09:53:45.273266223 +0000 UTC m=+0.179283010 container init ccce1db0f52d0fdb610536e2867ff81267ee95600573f1141291a946cc7a78ab (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4eae03f8-9b4a-42c1-a31a-da4ef49ab13c, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 20 09:53:45 np0005625203.localdomain podman[310570]: 2026-02-20 09:53:45.282452186 +0000 UTC m=+0.188468963 container start ccce1db0f52d0fdb610536e2867ff81267ee95600573f1141291a946cc7a78ab (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4eae03f8-9b4a-42c1-a31a-da4ef49ab13c, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 20 09:53:45 np0005625203.localdomain dnsmasq[310589]: started, version 2.85 cachesize 150
Feb 20 09:53:45 np0005625203.localdomain dnsmasq[310589]: DNS service limited to local subnets
Feb 20 09:53:45 np0005625203.localdomain dnsmasq[310589]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:53:45 np0005625203.localdomain dnsmasq[310589]: warning: no upstream servers configured
Feb 20 09:53:45 np0005625203.localdomain dnsmasq-dhcp[310589]: DHCP, static leases only on 10.100.0.0, lease time 1d
Feb 20 09:53:45 np0005625203.localdomain dnsmasq[310589]: read /var/lib/neutron/dhcp/4eae03f8-9b4a-42c1-a31a-da4ef49ab13c/addn_hosts - 0 addresses
Feb 20 09:53:45 np0005625203.localdomain dnsmasq-dhcp[310589]: read /var/lib/neutron/dhcp/4eae03f8-9b4a-42c1-a31a-da4ef49ab13c/host
Feb 20 09:53:45 np0005625203.localdomain dnsmasq-dhcp[310589]: read /var/lib/neutron/dhcp/4eae03f8-9b4a-42c1-a31a-da4ef49ab13c/opts
Feb 20 09:53:45 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:53:45.344 262775 INFO neutron.agent.dhcp.agent [None req-15509389-761c-40aa-9133-00581c60476c - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:53:44Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4ebe070>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4ebe310>], id=7f8d5f99-769b-4a2f-81ae-5506fa321370, ip_allocation=immediate, mac_address=fa:16:3e:27:30:1b, name=tempest-AllowedAddressPairTestJSON-1847138307, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:53:42Z, description=, dns_domain=, id=4eae03f8-9b4a-42c1-a31a-da4ef49ab13c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairTestJSON-test-network-529488877, port_security_enabled=True, project_id=c91094550bca41b8ac81e1aef3ebc3a4, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=25324, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1354, status=ACTIVE, subnets=['b1c095c3-280f-4675-9048-4d707998066a'], tags=[], tenant_id=c91094550bca41b8ac81e1aef3ebc3a4, updated_at=2026-02-20T09:53:43Z, vlan_transparent=None, network_id=4eae03f8-9b4a-42c1-a31a-da4ef49ab13c, port_security_enabled=True, project_id=c91094550bca41b8ac81e1aef3ebc3a4, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['075aba65-9a9e-4a7a-9bd7-67cbc5ab58ab'], standard_attr_id=1382, status=DOWN, tags=[], tenant_id=c91094550bca41b8ac81e1aef3ebc3a4, updated_at=2026-02-20T09:53:45Z on network 4eae03f8-9b4a-42c1-a31a-da4ef49ab13c
Feb 20 09:53:45 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:53:45.471 262775 INFO neutron.agent.dhcp.agent [None req-170168e9-0b73-4c50-9a53-cd73093daf7f - - - - - -] DHCP configuration for ports {'f4a399f1-d048-4a32-8e2a-2152e4d6d3bc'} is completed
Feb 20 09:53:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:45.531 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:45 np0005625203.localdomain dnsmasq[310589]: read /var/lib/neutron/dhcp/4eae03f8-9b4a-42c1-a31a-da4ef49ab13c/addn_hosts - 1 addresses
Feb 20 09:53:45 np0005625203.localdomain dnsmasq-dhcp[310589]: read /var/lib/neutron/dhcp/4eae03f8-9b4a-42c1-a31a-da4ef49ab13c/host
Feb 20 09:53:45 np0005625203.localdomain dnsmasq-dhcp[310589]: read /var/lib/neutron/dhcp/4eae03f8-9b4a-42c1-a31a-da4ef49ab13c/opts
Feb 20 09:53:45 np0005625203.localdomain podman[310607]: 2026-02-20 09:53:45.552284857 +0000 UTC m=+0.058446613 container kill ccce1db0f52d0fdb610536e2867ff81267ee95600573f1141291a946cc7a78ab (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4eae03f8-9b4a-42c1-a31a-da4ef49ab13c, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:53:45 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:53:45.823 262775 INFO neutron.agent.dhcp.agent [None req-8f231a04-d4dc-4d03-9e40-aba4d39ad1d5 - - - - - -] DHCP configuration for ports {'7f8d5f99-769b-4a2f-81ae-5506fa321370'} is completed
Feb 20 09:53:46 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:53:46.466 2 INFO neutron.agent.securitygroups_rpc [None req-82d4853b-8792-42bf-a9bd-621206147606 1b2d25a511a14396a23c89999045cc74 c91094550bca41b8ac81e1aef3ebc3a4 - - default default] Security group member updated ['075aba65-9a9e-4a7a-9bd7-67cbc5ab58ab']
Feb 20 09:53:46 np0005625203.localdomain sshd[310627]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:53:46 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:53:46.564 262775 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da57cd640>], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:53:46Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da57cd520>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4ed1550>], id=09e3ec21-9ddd-4e1d-932f-8f88d838d72c, ip_allocation=immediate, mac_address=fa:16:3e:f3:bc:ec, name=tempest-AllowedAddressPairTestJSON-1322716030, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:53:42Z, description=, dns_domain=, id=4eae03f8-9b4a-42c1-a31a-da4ef49ab13c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairTestJSON-test-network-529488877, port_security_enabled=True, project_id=c91094550bca41b8ac81e1aef3ebc3a4, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=25324, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1354, status=ACTIVE, subnets=['b1c095c3-280f-4675-9048-4d707998066a'], tags=[], tenant_id=c91094550bca41b8ac81e1aef3ebc3a4, updated_at=2026-02-20T09:53:43Z, vlan_transparent=None, network_id=4eae03f8-9b4a-42c1-a31a-da4ef49ab13c, port_security_enabled=True, project_id=c91094550bca41b8ac81e1aef3ebc3a4, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['075aba65-9a9e-4a7a-9bd7-67cbc5ab58ab'], standard_attr_id=1400, status=DOWN, tags=[], tenant_id=c91094550bca41b8ac81e1aef3ebc3a4, updated_at=2026-02-20T09:53:46Z on network 4eae03f8-9b4a-42c1-a31a-da4ef49ab13c
Feb 20 09:53:46 np0005625203.localdomain dnsmasq[310589]: read /var/lib/neutron/dhcp/4eae03f8-9b4a-42c1-a31a-da4ef49ab13c/addn_hosts - 2 addresses
Feb 20 09:53:46 np0005625203.localdomain podman[310646]: 2026-02-20 09:53:46.769096846 +0000 UTC m=+0.057356731 container kill ccce1db0f52d0fdb610536e2867ff81267ee95600573f1141291a946cc7a78ab (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4eae03f8-9b4a-42c1-a31a-da4ef49ab13c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 20 09:53:46 np0005625203.localdomain dnsmasq-dhcp[310589]: read /var/lib/neutron/dhcp/4eae03f8-9b4a-42c1-a31a-da4ef49ab13c/host
Feb 20 09:53:46 np0005625203.localdomain dnsmasq-dhcp[310589]: read /var/lib/neutron/dhcp/4eae03f8-9b4a-42c1-a31a-da4ef49ab13c/opts
Feb 20 09:53:46 np0005625203.localdomain sshd[310627]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:53:46 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:53:46.989 262775 INFO neutron.agent.dhcp.agent [None req-334be9fe-cf81-4e89-bf4b-4b8fd80a00f2 - - - - - -] DHCP configuration for ports {'09e3ec21-9ddd-4e1d-932f-8f88d838d72c'} is completed
Feb 20 09:53:47 np0005625203.localdomain ceph-mon[296066]: pgmap v185: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:53:47 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:53:47.544 2 INFO neutron.agent.securitygroups_rpc [None req-2045a801-a884-4a06-b206-987ac9e8d82c 1b2d25a511a14396a23c89999045cc74 c91094550bca41b8ac81e1aef3ebc3a4 - - default default] Security group member updated ['075aba65-9a9e-4a7a-9bd7-67cbc5ab58ab']
Feb 20 09:53:47 np0005625203.localdomain podman[310684]: 2026-02-20 09:53:47.794632966 +0000 UTC m=+0.060978422 container kill ccce1db0f52d0fdb610536e2867ff81267ee95600573f1141291a946cc7a78ab (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4eae03f8-9b4a-42c1-a31a-da4ef49ab13c, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260127)
Feb 20 09:53:47 np0005625203.localdomain dnsmasq[310589]: read /var/lib/neutron/dhcp/4eae03f8-9b4a-42c1-a31a-da4ef49ab13c/addn_hosts - 1 addresses
Feb 20 09:53:47 np0005625203.localdomain dnsmasq-dhcp[310589]: read /var/lib/neutron/dhcp/4eae03f8-9b4a-42c1-a31a-da4ef49ab13c/host
Feb 20 09:53:47 np0005625203.localdomain dnsmasq-dhcp[310589]: read /var/lib/neutron/dhcp/4eae03f8-9b4a-42c1-a31a-da4ef49ab13c/opts
Feb 20 09:53:48 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e113 e113: 6 total, 6 up, 6 in
Feb 20 09:53:48 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:48.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:53:48 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:53:48.554 2 INFO neutron.agent.securitygroups_rpc [None req-34460107-5767-4790-bb51-43f170627a06 1b2d25a511a14396a23c89999045cc74 c91094550bca41b8ac81e1aef3ebc3a4 - - default default] Security group member updated ['075aba65-9a9e-4a7a-9bd7-67cbc5ab58ab']
Feb 20 09:53:48 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:53:48.604 262775 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:53:48Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4ed91c0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4ed9490>], id=fe364c44-a14f-413d-8b64-5ad8b6fdf7f1, ip_allocation=immediate, mac_address=fa:16:3e:70:b1:41, name=tempest-AllowedAddressPairTestJSON-759964011, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:53:42Z, description=, dns_domain=, id=4eae03f8-9b4a-42c1-a31a-da4ef49ab13c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairTestJSON-test-network-529488877, port_security_enabled=True, project_id=c91094550bca41b8ac81e1aef3ebc3a4, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=25324, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1354, status=ACTIVE, subnets=['b1c095c3-280f-4675-9048-4d707998066a'], tags=[], tenant_id=c91094550bca41b8ac81e1aef3ebc3a4, updated_at=2026-02-20T09:53:43Z, vlan_transparent=None, network_id=4eae03f8-9b4a-42c1-a31a-da4ef49ab13c, port_security_enabled=True, project_id=c91094550bca41b8ac81e1aef3ebc3a4, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['075aba65-9a9e-4a7a-9bd7-67cbc5ab58ab'], standard_attr_id=1413, status=DOWN, tags=[], tenant_id=c91094550bca41b8ac81e1aef3ebc3a4, updated_at=2026-02-20T09:53:48Z on network 4eae03f8-9b4a-42c1-a31a-da4ef49ab13c
Feb 20 09:53:48 np0005625203.localdomain sshd[310706]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:53:48 np0005625203.localdomain podman[310724]: 2026-02-20 09:53:48.805051677 +0000 UTC m=+0.054114639 container kill ccce1db0f52d0fdb610536e2867ff81267ee95600573f1141291a946cc7a78ab (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4eae03f8-9b4a-42c1-a31a-da4ef49ab13c, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Feb 20 09:53:48 np0005625203.localdomain dnsmasq[310589]: read /var/lib/neutron/dhcp/4eae03f8-9b4a-42c1-a31a-da4ef49ab13c/addn_hosts - 2 addresses
Feb 20 09:53:48 np0005625203.localdomain dnsmasq-dhcp[310589]: read /var/lib/neutron/dhcp/4eae03f8-9b4a-42c1-a31a-da4ef49ab13c/host
Feb 20 09:53:48 np0005625203.localdomain systemd[1]: tmp-crun.V7TEoB.mount: Deactivated successfully.
Feb 20 09:53:48 np0005625203.localdomain dnsmasq-dhcp[310589]: read /var/lib/neutron/dhcp/4eae03f8-9b4a-42c1-a31a-da4ef49ab13c/opts
Feb 20 09:53:48 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:53:48 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:53:49 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:53:49.027 262775 INFO neutron.agent.linux.ip_lib [None req-73bcd15e-4983-410c-8b5b-b8bd8e2340d4 - - - - - -] Device tapd355a6a9-0a cannot be used as it has no MAC address
Feb 20 09:53:49 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:53:49.035 262775 INFO neutron.agent.dhcp.agent [None req-433d4f79-2d92-4487-8a79-7419fa53e2db - - - - - -] DHCP configuration for ports {'fe364c44-a14f-413d-8b64-5ad8b6fdf7f1'} is completed
Feb 20 09:53:49 np0005625203.localdomain podman[310747]: 2026-02-20 09:53:49.042014276 +0000 UTC m=+0.086158868 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 20 09:53:49 np0005625203.localdomain podman[310747]: 2026-02-20 09:53:49.049766395 +0000 UTC m=+0.093910967 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 20 09:53:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:49.054 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:49 np0005625203.localdomain kernel: device tapd355a6a9-0a entered promiscuous mode
Feb 20 09:53:49 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581229.0622] manager: (tapd355a6a9-0a): new Generic device (/org/freedesktop/NetworkManager/Devices/27)
Feb 20 09:53:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:49.064 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:49 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:53:49Z|00115|binding|INFO|Claiming lport d355a6a9-0ac0-4a98-ad8e-ad23b05d8d02 for this chassis.
Feb 20 09:53:49 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:53:49Z|00116|binding|INFO|d355a6a9-0ac0-4a98-ad8e-ad23b05d8d02: Claiming unknown
Feb 20 09:53:49 np0005625203.localdomain systemd-udevd[310787]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 09:53:49 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 09:53:49 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:53:49.079 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::3/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-76c853d7-468d-4b4d-bf27-00393af052fc', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-76c853d7-468d-4b4d-bf27-00393af052fc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7ef526d9e1e4325afc5a8eb7c2f52b5', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ba367dce-c320-466d-883f-d78bdcf71827, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=d355a6a9-0ac0-4a98-ad8e-ad23b05d8d02) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:53:49 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:53:49.081 161112 INFO neutron.agent.ovn.metadata.agent [-] Port d355a6a9-0ac0-4a98-ad8e-ad23b05d8d02 in datapath 76c853d7-468d-4b4d-bf27-00393af052fc bound to our chassis
Feb 20 09:53:49 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:53:49.082 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 76c853d7-468d-4b4d-bf27-00393af052fc or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:53:49 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:53:49.083 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[0afa9d7a-c13b-401e-8861-13aadc533ec0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:53:49 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapd355a6a9-0a: No such device
Feb 20 09:53:49 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:53:49Z|00117|binding|INFO|Setting lport d355a6a9-0ac0-4a98-ad8e-ad23b05d8d02 ovn-installed in OVS
Feb 20 09:53:49 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:53:49Z|00118|binding|INFO|Setting lport d355a6a9-0ac0-4a98-ad8e-ad23b05d8d02 up in Southbound
Feb 20 09:53:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:49.097 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:49.098 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:49 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapd355a6a9-0a: No such device
Feb 20 09:53:49 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapd355a6a9-0a: No such device
Feb 20 09:53:49 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapd355a6a9-0a: No such device
Feb 20 09:53:49 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapd355a6a9-0a: No such device
Feb 20 09:53:49 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapd355a6a9-0a: No such device
Feb 20 09:53:49 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapd355a6a9-0a: No such device
Feb 20 09:53:49 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapd355a6a9-0a: No such device
Feb 20 09:53:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:49.134 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:49 np0005625203.localdomain podman[310748]: 2026-02-20 09:53:49.152934287 +0000 UTC m=+0.194569092 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 20 09:53:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:49.164 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:49.170 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:49 np0005625203.localdomain sshd[310706]: Invalid user ts1 from 5.253.59.68 port 56738
Feb 20 09:53:49 np0005625203.localdomain podman[310748]: 2026-02-20 09:53:49.183776068 +0000 UTC m=+0.225410883 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 20 09:53:49 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:53:49 np0005625203.localdomain ceph-mon[296066]: pgmap v186: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:53:49 np0005625203.localdomain ceph-mon[296066]: osdmap e113: 6 total, 6 up, 6 in
Feb 20 09:53:49 np0005625203.localdomain sshd[310706]: Received disconnect from 5.253.59.68 port 56738:11: Bye Bye [preauth]
Feb 20 09:53:49 np0005625203.localdomain sshd[310706]: Disconnected from invalid user ts1 5.253.59.68 port 56738 [preauth]
Feb 20 09:53:49 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:53:49 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:53:49Z|00119|binding|INFO|Removing iface tapd355a6a9-0a ovn-installed in OVS
Feb 20 09:53:49 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:53:49.768 161112 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port b3d61bb7-2929-43a3-b74c-481cca7fc267 with type ""
Feb 20 09:53:49 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:53:49Z|00120|binding|INFO|Removing lport d355a6a9-0ac0-4a98-ad8e-ad23b05d8d02 ovn-installed in OVS
Feb 20 09:53:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:49.770 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:49 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:53:49.772 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-76c853d7-468d-4b4d-bf27-00393af052fc', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-76c853d7-468d-4b4d-bf27-00393af052fc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7ef526d9e1e4325afc5a8eb7c2f52b5', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625203.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ba367dce-c320-466d-883f-d78bdcf71827, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=d355a6a9-0ac0-4a98-ad8e-ad23b05d8d02) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:53:49 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:53:49.775 161112 INFO neutron.agent.ovn.metadata.agent [-] Port d355a6a9-0ac0-4a98-ad8e-ad23b05d8d02 in datapath 76c853d7-468d-4b4d-bf27-00393af052fc unbound from our chassis
Feb 20 09:53:49 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:53:49.777 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 76c853d7-468d-4b4d-bf27-00393af052fc or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:53:49 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:53:49.778 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[ddc85e82-76b2-4852-b1d0-c6fa6ab9d864]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:53:49 np0005625203.localdomain systemd[1]: tmp-crun.LMknhS.mount: Deactivated successfully.
Feb 20 09:53:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:49.945 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:50 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:53:50.001 2 INFO neutron.agent.securitygroups_rpc [None req-f7428e6a-a4fd-4f95-a528-55a12406007a 1b2d25a511a14396a23c89999045cc74 c91094550bca41b8ac81e1aef3ebc3a4 - - default default] Security group member updated ['075aba65-9a9e-4a7a-9bd7-67cbc5ab58ab']
Feb 20 09:53:50 np0005625203.localdomain podman[310869]: 
Feb 20 09:53:50 np0005625203.localdomain podman[310869]: 2026-02-20 09:53:50.019489292 +0000 UTC m=+0.108996652 container create 3aacc1d1438cfa46112b6757c3907ce2ea08b6bf6646431e4f04d3a3b5cadebd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76c853d7-468d-4b4d-bf27-00393af052fc, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Feb 20 09:53:50 np0005625203.localdomain systemd[1]: Started libpod-conmon-3aacc1d1438cfa46112b6757c3907ce2ea08b6bf6646431e4f04d3a3b5cadebd.scope.
Feb 20 09:53:50 np0005625203.localdomain podman[310869]: 2026-02-20 09:53:49.969738948 +0000 UTC m=+0.059246388 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:53:50 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:53:50 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b597187fee33cd673f52912f506791ef9d338368ca3686b24ddaaf1b167c15c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:53:50 np0005625203.localdomain podman[310869]: 2026-02-20 09:53:50.11474246 +0000 UTC m=+0.204249810 container init 3aacc1d1438cfa46112b6757c3907ce2ea08b6bf6646431e4f04d3a3b5cadebd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76c853d7-468d-4b4d-bf27-00393af052fc, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 20 09:53:50 np0005625203.localdomain podman[310869]: 2026-02-20 09:53:50.121453417 +0000 UTC m=+0.210960777 container start 3aacc1d1438cfa46112b6757c3907ce2ea08b6bf6646431e4f04d3a3b5cadebd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76c853d7-468d-4b4d-bf27-00393af052fc, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:53:50 np0005625203.localdomain dnsmasq[310896]: started, version 2.85 cachesize 150
Feb 20 09:53:50 np0005625203.localdomain dnsmasq[310896]: DNS service limited to local subnets
Feb 20 09:53:50 np0005625203.localdomain dnsmasq[310896]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:53:50 np0005625203.localdomain dnsmasq[310896]: warning: no upstream servers configured
Feb 20 09:53:50 np0005625203.localdomain dnsmasq-dhcp[310896]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Feb 20 09:53:50 np0005625203.localdomain dnsmasq[310896]: read /var/lib/neutron/dhcp/76c853d7-468d-4b4d-bf27-00393af052fc/addn_hosts - 0 addresses
Feb 20 09:53:50 np0005625203.localdomain dnsmasq-dhcp[310896]: read /var/lib/neutron/dhcp/76c853d7-468d-4b4d-bf27-00393af052fc/host
Feb 20 09:53:50 np0005625203.localdomain dnsmasq-dhcp[310896]: read /var/lib/neutron/dhcp/76c853d7-468d-4b4d-bf27-00393af052fc/opts
Feb 20 09:53:50 np0005625203.localdomain podman[310906]: 2026-02-20 09:53:50.269632507 +0000 UTC m=+0.065634015 container kill ccce1db0f52d0fdb610536e2867ff81267ee95600573f1141291a946cc7a78ab (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4eae03f8-9b4a-42c1-a31a-da4ef49ab13c, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 20 09:53:50 np0005625203.localdomain dnsmasq[310589]: read /var/lib/neutron/dhcp/4eae03f8-9b4a-42c1-a31a-da4ef49ab13c/addn_hosts - 1 addresses
Feb 20 09:53:50 np0005625203.localdomain dnsmasq-dhcp[310589]: read /var/lib/neutron/dhcp/4eae03f8-9b4a-42c1-a31a-da4ef49ab13c/host
Feb 20 09:53:50 np0005625203.localdomain dnsmasq-dhcp[310589]: read /var/lib/neutron/dhcp/4eae03f8-9b4a-42c1-a31a-da4ef49ab13c/opts
Feb 20 09:53:50 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:53:50.310 262775 INFO neutron.agent.dhcp.agent [None req-e7dc53bd-11be-41c4-9e34-8cf328c4e45f - - - - - -] DHCP configuration for ports {'0c37c4aa-2ca1-4557-a529-cc620391b44c'} is completed
Feb 20 09:53:50 np0005625203.localdomain dnsmasq[310896]: exiting on receipt of SIGTERM
Feb 20 09:53:50 np0005625203.localdomain podman[310936]: 2026-02-20 09:53:50.385335466 +0000 UTC m=+0.072900079 container kill 3aacc1d1438cfa46112b6757c3907ce2ea08b6bf6646431e4f04d3a3b5cadebd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76c853d7-468d-4b4d-bf27-00393af052fc, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Feb 20 09:53:50 np0005625203.localdomain systemd[1]: libpod-3aacc1d1438cfa46112b6757c3907ce2ea08b6bf6646431e4f04d3a3b5cadebd.scope: Deactivated successfully.
Feb 20 09:53:50 np0005625203.localdomain podman[310954]: 2026-02-20 09:53:50.467014795 +0000 UTC m=+0.067987689 container died 3aacc1d1438cfa46112b6757c3907ce2ea08b6bf6646431e4f04d3a3b5cadebd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76c853d7-468d-4b4d-bf27-00393af052fc, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Feb 20 09:53:50 np0005625203.localdomain podman[310954]: 2026-02-20 09:53:50.500593701 +0000 UTC m=+0.101566555 container cleanup 3aacc1d1438cfa46112b6757c3907ce2ea08b6bf6646431e4f04d3a3b5cadebd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76c853d7-468d-4b4d-bf27-00393af052fc, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb 20 09:53:50 np0005625203.localdomain systemd[1]: libpod-conmon-3aacc1d1438cfa46112b6757c3907ce2ea08b6bf6646431e4f04d3a3b5cadebd.scope: Deactivated successfully.
Feb 20 09:53:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:50.534 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:50 np0005625203.localdomain podman[310957]: 2026-02-20 09:53:50.548862089 +0000 UTC m=+0.138777062 container remove 3aacc1d1438cfa46112b6757c3907ce2ea08b6bf6646431e4f04d3a3b5cadebd (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-76c853d7-468d-4b4d-bf27-00393af052fc, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Feb 20 09:53:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:50.560 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:50 np0005625203.localdomain kernel: device tapd355a6a9-0a left promiscuous mode
Feb 20 09:53:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:50.572 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:50 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:53:50.619 262775 INFO neutron.agent.dhcp.agent [None req-b279245a-075c-48aa-be2d-66e21212f17a - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:53:50 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:53:50.620 262775 INFO neutron.agent.dhcp.agent [None req-b279245a-075c-48aa-be2d-66e21212f17a - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:53:50 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-2b597187fee33cd673f52912f506791ef9d338368ca3686b24ddaaf1b167c15c-merged.mount: Deactivated successfully.
Feb 20 09:53:50 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3aacc1d1438cfa46112b6757c3907ce2ea08b6bf6646431e4f04d3a3b5cadebd-userdata-shm.mount: Deactivated successfully.
Feb 20 09:53:50 np0005625203.localdomain systemd[1]: run-netns-qdhcp\x2d76c853d7\x2d468d\x2d4b4d\x2dbf27\x2d00393af052fc.mount: Deactivated successfully.
Feb 20 09:53:50 np0005625203.localdomain sshd[310986]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:53:51 np0005625203.localdomain sshd[310986]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:53:51 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:53:51.248 2 INFO neutron.agent.securitygroups_rpc [None req-e18dac8b-7691-49db-a6f9-a9ef86b93bee 1b2d25a511a14396a23c89999045cc74 c91094550bca41b8ac81e1aef3ebc3a4 - - default default] Security group member updated ['075aba65-9a9e-4a7a-9bd7-67cbc5ab58ab']
Feb 20 09:53:51 np0005625203.localdomain ceph-mon[296066]: pgmap v188: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 1.5 KiB/s wr, 14 op/s
Feb 20 09:53:51 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:53:51.290 262775 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:53:50Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da56f83d0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da574b700>], id=b38539dc-f956-48b6-87f3-659f598370c2, ip_allocation=immediate, mac_address=fa:16:3e:0b:87:0e, name=tempest-AllowedAddressPairTestJSON-224002809, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:53:42Z, description=, dns_domain=, id=4eae03f8-9b4a-42c1-a31a-da4ef49ab13c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairTestJSON-test-network-529488877, port_security_enabled=True, project_id=c91094550bca41b8ac81e1aef3ebc3a4, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=25324, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1354, status=ACTIVE, subnets=['b1c095c3-280f-4675-9048-4d707998066a'], tags=[], tenant_id=c91094550bca41b8ac81e1aef3ebc3a4, updated_at=2026-02-20T09:53:43Z, vlan_transparent=None, network_id=4eae03f8-9b4a-42c1-a31a-da4ef49ab13c, port_security_enabled=True, project_id=c91094550bca41b8ac81e1aef3ebc3a4, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['075aba65-9a9e-4a7a-9bd7-67cbc5ab58ab'], standard_attr_id=1430, status=DOWN, tags=[], tenant_id=c91094550bca41b8ac81e1aef3ebc3a4, updated_at=2026-02-20T09:53:51Z on network 4eae03f8-9b4a-42c1-a31a-da4ef49ab13c
Feb 20 09:53:51 np0005625203.localdomain dnsmasq[310589]: read /var/lib/neutron/dhcp/4eae03f8-9b4a-42c1-a31a-da4ef49ab13c/addn_hosts - 2 addresses
Feb 20 09:53:51 np0005625203.localdomain dnsmasq-dhcp[310589]: read /var/lib/neutron/dhcp/4eae03f8-9b4a-42c1-a31a-da4ef49ab13c/host
Feb 20 09:53:51 np0005625203.localdomain dnsmasq-dhcp[310589]: read /var/lib/neutron/dhcp/4eae03f8-9b4a-42c1-a31a-da4ef49ab13c/opts
Feb 20 09:53:51 np0005625203.localdomain podman[311005]: 2026-02-20 09:53:51.550058647 +0000 UTC m=+0.073095316 container kill ccce1db0f52d0fdb610536e2867ff81267ee95600573f1141291a946cc7a78ab (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4eae03f8-9b4a-42c1-a31a-da4ef49ab13c, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Feb 20 09:53:51 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:53:51.769 262775 INFO neutron.agent.dhcp.agent [None req-6f5f6973-7ddc-4aba-bc5c-e7fb53200b56 - - - - - -] DHCP configuration for ports {'b38539dc-f956-48b6-87f3-659f598370c2'} is completed
Feb 20 09:53:52 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e114 e114: 6 total, 6 up, 6 in
Feb 20 09:53:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:52.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:53:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:52.367 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:53:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:52.367 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:53:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:52.368 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:53:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:52.368 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Auditing locally available compute resources for np0005625203.localdomain (node: np0005625203.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 20 09:53:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:52.369 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:53:52 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:53:52.520 2 INFO neutron.agent.securitygroups_rpc [None req-debccdd3-a1fb-4577-8737-97107297a2b7 1b2d25a511a14396a23c89999045cc74 c91094550bca41b8ac81e1aef3ebc3a4 - - default default] Security group member updated ['075aba65-9a9e-4a7a-9bd7-67cbc5ab58ab']
Feb 20 09:53:52 np0005625203.localdomain podman[311063]: 2026-02-20 09:53:52.810151899 +0000 UTC m=+0.081766392 container kill ccce1db0f52d0fdb610536e2867ff81267ee95600573f1141291a946cc7a78ab (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4eae03f8-9b4a-42c1-a31a-da4ef49ab13c, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 20 09:53:52 np0005625203.localdomain dnsmasq[310589]: read /var/lib/neutron/dhcp/4eae03f8-9b4a-42c1-a31a-da4ef49ab13c/addn_hosts - 1 addresses
Feb 20 09:53:52 np0005625203.localdomain dnsmasq-dhcp[310589]: read /var/lib/neutron/dhcp/4eae03f8-9b4a-42c1-a31a-da4ef49ab13c/host
Feb 20 09:53:52 np0005625203.localdomain dnsmasq-dhcp[310589]: read /var/lib/neutron/dhcp/4eae03f8-9b4a-42c1-a31a-da4ef49ab13c/opts
Feb 20 09:53:52 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 20 09:53:52 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/870495445' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:53:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:52.886 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:53:53 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:53:53.064 2 INFO neutron.agent.securitygroups_rpc [None req-f48de32a-1488-41ca-9ac7-21eba1b907c6 1b2d25a511a14396a23c89999045cc74 c91094550bca41b8ac81e1aef3ebc3a4 - - default default] Security group member updated ['075aba65-9a9e-4a7a-9bd7-67cbc5ab58ab']
Feb 20 09:53:53 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:53:53.109 262775 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:53:52Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da501fc10>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da572c580>], id=2e224c9a-46ca-4071-8636-8c9b33a9edd1, ip_allocation=immediate, mac_address=fa:16:3e:1d:27:0a, name=tempest-AllowedAddressPairTestJSON-1903434999, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:53:42Z, description=, dns_domain=, id=4eae03f8-9b4a-42c1-a31a-da4ef49ab13c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairTestJSON-test-network-529488877, port_security_enabled=True, project_id=c91094550bca41b8ac81e1aef3ebc3a4, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=25324, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1354, status=ACTIVE, subnets=['b1c095c3-280f-4675-9048-4d707998066a'], tags=[], tenant_id=c91094550bca41b8ac81e1aef3ebc3a4, updated_at=2026-02-20T09:53:43Z, vlan_transparent=None, network_id=4eae03f8-9b4a-42c1-a31a-da4ef49ab13c, port_security_enabled=True, project_id=c91094550bca41b8ac81e1aef3ebc3a4, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['075aba65-9a9e-4a7a-9bd7-67cbc5ab58ab'], standard_attr_id=1440, status=DOWN, tags=[], tenant_id=c91094550bca41b8ac81e1aef3ebc3a4, updated_at=2026-02-20T09:53:52Z on network 4eae03f8-9b4a-42c1-a31a-da4ef49ab13c
Feb 20 09:53:53 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:53.135 279640 WARNING nova.virt.libvirt.driver [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 20 09:53:53 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:53.137 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Hypervisor/Node resource view: name=np0005625203.localdomain free_ram=11697MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 20 09:53:53 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:53.138 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:53:53 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:53.138 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:53:53 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:53.202 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 20 09:53:53 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:53.202 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Final resource view: name=np0005625203.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 20 09:53:53 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:53.217 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:53:53 np0005625203.localdomain ceph-mon[296066]: pgmap v189: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Feb 20 09:53:53 np0005625203.localdomain ceph-mon[296066]: osdmap e114: 6 total, 6 up, 6 in
Feb 20 09:53:53 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/870495445' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:53:53 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e115 e115: 6 total, 6 up, 6 in
Feb 20 09:53:53 np0005625203.localdomain dnsmasq[310589]: read /var/lib/neutron/dhcp/4eae03f8-9b4a-42c1-a31a-da4ef49ab13c/addn_hosts - 2 addresses
Feb 20 09:53:53 np0005625203.localdomain dnsmasq-dhcp[310589]: read /var/lib/neutron/dhcp/4eae03f8-9b4a-42c1-a31a-da4ef49ab13c/host
Feb 20 09:53:53 np0005625203.localdomain dnsmasq-dhcp[310589]: read /var/lib/neutron/dhcp/4eae03f8-9b4a-42c1-a31a-da4ef49ab13c/opts
Feb 20 09:53:53 np0005625203.localdomain podman[311105]: 2026-02-20 09:53:53.374084341 +0000 UTC m=+0.092773682 container kill ccce1db0f52d0fdb610536e2867ff81267ee95600573f1141291a946cc7a78ab (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4eae03f8-9b4a-42c1-a31a-da4ef49ab13c, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:53:53 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:53:53.498 2 INFO neutron.agent.securitygroups_rpc [None req-2540619c-1d57-4e86-a386-76258833753f 1b2d25a511a14396a23c89999045cc74 c91094550bca41b8ac81e1aef3ebc3a4 - - default default] Security group member updated ['075aba65-9a9e-4a7a-9bd7-67cbc5ab58ab']
Feb 20 09:53:53 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:53:53.572 262775 INFO neutron.agent.dhcp.agent [None req-30fcc815-e4d8-44d4-b44f-f3793c69d07b - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:53:53Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4e60fa0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da56f4f40>], id=49ecab80-926a-406f-ab89-e08e23dbe3ab, ip_allocation=immediate, mac_address=fa:16:3e:4b:93:43, name=tempest-AllowedAddressPairTestJSON-994353339, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:53:42Z, description=, dns_domain=, id=4eae03f8-9b4a-42c1-a31a-da4ef49ab13c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-AllowedAddressPairTestJSON-test-network-529488877, port_security_enabled=True, project_id=c91094550bca41b8ac81e1aef3ebc3a4, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=25324, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1354, status=ACTIVE, subnets=['b1c095c3-280f-4675-9048-4d707998066a'], tags=[], tenant_id=c91094550bca41b8ac81e1aef3ebc3a4, updated_at=2026-02-20T09:53:43Z, vlan_transparent=None, network_id=4eae03f8-9b4a-42c1-a31a-da4ef49ab13c, port_security_enabled=True, project_id=c91094550bca41b8ac81e1aef3ebc3a4, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['075aba65-9a9e-4a7a-9bd7-67cbc5ab58ab'], standard_attr_id=1442, status=DOWN, tags=[], tenant_id=c91094550bca41b8ac81e1aef3ebc3a4, updated_at=2026-02-20T09:53:53Z on network 4eae03f8-9b4a-42c1-a31a-da4ef49ab13c
Feb 20 09:53:53 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:53:53.634 262775 INFO neutron.agent.dhcp.agent [None req-542f47d3-c403-47ce-b61f-ccbb734ae958 - - - - - -] DHCP configuration for ports {'2e224c9a-46ca-4071-8636-8c9b33a9edd1'} is completed
Feb 20 09:53:53 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 20 09:53:53 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/4028388329' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:53:53 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:53.725 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:53:53 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:53.732 279640 DEBUG nova.compute.provider_tree [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Inventory has not changed in ProviderTree for provider: e5d5157a-2df2-4f51-b5fb-cd2da3a8584e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 20 09:53:53 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:53.749 279640 DEBUG nova.scheduler.client.report [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Inventory has not changed for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 20 09:53:53 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:53.751 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Compute_service record updated for np0005625203.localdomain:np0005625203.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 20 09:53:53 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:53.751 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:53:53 np0005625203.localdomain podman[311164]: 2026-02-20 09:53:53.771925791 +0000 UTC m=+0.056615257 container kill ccce1db0f52d0fdb610536e2867ff81267ee95600573f1141291a946cc7a78ab (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4eae03f8-9b4a-42c1-a31a-da4ef49ab13c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 20 09:53:53 np0005625203.localdomain dnsmasq[310589]: read /var/lib/neutron/dhcp/4eae03f8-9b4a-42c1-a31a-da4ef49ab13c/addn_hosts - 3 addresses
Feb 20 09:53:53 np0005625203.localdomain dnsmasq-dhcp[310589]: read /var/lib/neutron/dhcp/4eae03f8-9b4a-42c1-a31a-da4ef49ab13c/host
Feb 20 09:53:53 np0005625203.localdomain dnsmasq-dhcp[310589]: read /var/lib/neutron/dhcp/4eae03f8-9b4a-42c1-a31a-da4ef49ab13c/opts
Feb 20 09:53:53 np0005625203.localdomain systemd[1]: tmp-crun.G2oGvP.mount: Deactivated successfully.
Feb 20 09:53:54 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:53:54.102 262775 INFO neutron.agent.dhcp.agent [None req-ab9839fe-cea1-45b7-af37-d40b75c00a17 - - - - - -] DHCP configuration for ports {'49ecab80-926a-406f-ab89-e08e23dbe3ab'} is completed
Feb 20 09:53:54 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:54.211 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:54 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:53:54 np0005625203.localdomain ceph-mon[296066]: osdmap e115: 6 total, 6 up, 6 in
Feb 20 09:53:54 np0005625203.localdomain ceph-mon[296066]: pgmap v192: 177 pgs: 177 active+clean; 145 MiB data, 750 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 2.7 KiB/s wr, 24 op/s
Feb 20 09:53:54 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/4028388329' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:53:54 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:53:54.643 2 INFO neutron.agent.securitygroups_rpc [None req-6b4ba7bd-a2d9-4ae3-ba6a-627aca1feb8f 1b2d25a511a14396a23c89999045cc74 c91094550bca41b8ac81e1aef3ebc3a4 - - default default] Security group member updated ['075aba65-9a9e-4a7a-9bd7-67cbc5ab58ab']
Feb 20 09:53:54 np0005625203.localdomain dnsmasq[310589]: read /var/lib/neutron/dhcp/4eae03f8-9b4a-42c1-a31a-da4ef49ab13c/addn_hosts - 2 addresses
Feb 20 09:53:54 np0005625203.localdomain dnsmasq-dhcp[310589]: read /var/lib/neutron/dhcp/4eae03f8-9b4a-42c1-a31a-da4ef49ab13c/host
Feb 20 09:53:54 np0005625203.localdomain podman[311202]: 2026-02-20 09:53:54.918191784 +0000 UTC m=+0.063250212 container kill ccce1db0f52d0fdb610536e2867ff81267ee95600573f1141291a946cc7a78ab (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4eae03f8-9b4a-42c1-a31a-da4ef49ab13c, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Feb 20 09:53:54 np0005625203.localdomain dnsmasq-dhcp[310589]: read /var/lib/neutron/dhcp/4eae03f8-9b4a-42c1-a31a-da4ef49ab13c/opts
Feb 20 09:53:55 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e116 e116: 6 total, 6 up, 6 in
Feb 20 09:53:55 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/2307489764' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:53:55 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:55.537 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:55 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:53:55.616 2 INFO neutron.agent.securitygroups_rpc [None req-11ea8750-a0ca-4484-988a-d6e41cea3e7c 1b2d25a511a14396a23c89999045cc74 c91094550bca41b8ac81e1aef3ebc3a4 - - default default] Security group member updated ['075aba65-9a9e-4a7a-9bd7-67cbc5ab58ab']
Feb 20 09:53:55 np0005625203.localdomain systemd[1]: tmp-crun.chp48h.mount: Deactivated successfully.
Feb 20 09:53:55 np0005625203.localdomain dnsmasq[310589]: read /var/lib/neutron/dhcp/4eae03f8-9b4a-42c1-a31a-da4ef49ab13c/addn_hosts - 1 addresses
Feb 20 09:53:55 np0005625203.localdomain dnsmasq-dhcp[310589]: read /var/lib/neutron/dhcp/4eae03f8-9b4a-42c1-a31a-da4ef49ab13c/host
Feb 20 09:53:55 np0005625203.localdomain dnsmasq-dhcp[310589]: read /var/lib/neutron/dhcp/4eae03f8-9b4a-42c1-a31a-da4ef49ab13c/opts
Feb 20 09:53:55 np0005625203.localdomain podman[311239]: 2026-02-20 09:53:55.906327938 +0000 UTC m=+0.071722372 container kill ccce1db0f52d0fdb610536e2867ff81267ee95600573f1141291a946cc7a78ab (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4eae03f8-9b4a-42c1-a31a-da4ef49ab13c, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:53:56 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e117 e117: 6 total, 6 up, 6 in
Feb 20 09:53:56 np0005625203.localdomain ceph-mon[296066]: osdmap e116: 6 total, 6 up, 6 in
Feb 20 09:53:56 np0005625203.localdomain ceph-mon[296066]: pgmap v194: 177 pgs: 177 active+clean; 257 MiB data, 1014 MiB used, 41 GiB / 42 GiB avail; 32 KiB/s rd, 19 MiB/s wr, 50 op/s
Feb 20 09:53:56 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/2163954098' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:53:56 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:53:56.511 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'c2:c2:14', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '46:d0:69:88:97:47'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:53:56 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:53:56.514 161112 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 20 09:53:56 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:53:56.515 161112 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1e4d60e6-0be0-4143-b488-1b391fbc71ef, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 20 09:53:56 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:53:56.522 2 INFO neutron.agent.securitygroups_rpc [None req-2195fa93-d724-491c-94cb-1a1a7da48d3e 1b2d25a511a14396a23c89999045cc74 c91094550bca41b8ac81e1aef3ebc3a4 - - default default] Security group member updated ['075aba65-9a9e-4a7a-9bd7-67cbc5ab58ab']
Feb 20 09:53:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:56.536 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:56.752 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:53:56 np0005625203.localdomain dnsmasq[310589]: read /var/lib/neutron/dhcp/4eae03f8-9b4a-42c1-a31a-da4ef49ab13c/addn_hosts - 0 addresses
Feb 20 09:53:56 np0005625203.localdomain dnsmasq-dhcp[310589]: read /var/lib/neutron/dhcp/4eae03f8-9b4a-42c1-a31a-da4ef49ab13c/host
Feb 20 09:53:56 np0005625203.localdomain podman[311277]: 2026-02-20 09:53:56.810806643 +0000 UTC m=+0.066696958 container kill ccce1db0f52d0fdb610536e2867ff81267ee95600573f1141291a946cc7a78ab (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4eae03f8-9b4a-42c1-a31a-da4ef49ab13c, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 20 09:53:56 np0005625203.localdomain dnsmasq-dhcp[310589]: read /var/lib/neutron/dhcp/4eae03f8-9b4a-42c1-a31a-da4ef49ab13c/opts
Feb 20 09:53:57 np0005625203.localdomain dnsmasq[310589]: exiting on receipt of SIGTERM
Feb 20 09:53:57 np0005625203.localdomain podman[311316]: 2026-02-20 09:53:57.436291214 +0000 UTC m=+0.065990436 container kill ccce1db0f52d0fdb610536e2867ff81267ee95600573f1141291a946cc7a78ab (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4eae03f8-9b4a-42c1-a31a-da4ef49ab13c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:53:57 np0005625203.localdomain systemd[1]: libpod-ccce1db0f52d0fdb610536e2867ff81267ee95600573f1141291a946cc7a78ab.scope: Deactivated successfully.
Feb 20 09:53:57 np0005625203.localdomain ceph-mon[296066]: osdmap e117: 6 total, 6 up, 6 in
Feb 20 09:53:57 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e118 e118: 6 total, 6 up, 6 in
Feb 20 09:53:57 np0005625203.localdomain podman[311328]: 2026-02-20 09:53:57.512286628 +0000 UTC m=+0.064226162 container died ccce1db0f52d0fdb610536e2867ff81267ee95600573f1141291a946cc7a78ab (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4eae03f8-9b4a-42c1-a31a-da4ef49ab13c, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Feb 20 09:53:57 np0005625203.localdomain podman[311328]: 2026-02-20 09:53:57.548809194 +0000 UTC m=+0.100748688 container cleanup ccce1db0f52d0fdb610536e2867ff81267ee95600573f1141291a946cc7a78ab (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4eae03f8-9b4a-42c1-a31a-da4ef49ab13c, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:53:57 np0005625203.localdomain systemd[1]: libpod-conmon-ccce1db0f52d0fdb610536e2867ff81267ee95600573f1141291a946cc7a78ab.scope: Deactivated successfully.
Feb 20 09:53:57 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:53:57.562 161112 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 263cc90c-9cdd-4d52-a987-5acaffaafb0b with type ""
Feb 20 09:53:57 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:53:57Z|00121|binding|INFO|Removing iface tap3ec38c71-ae ovn-installed in OVS
Feb 20 09:53:57 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:53:57Z|00122|binding|INFO|Removing lport 3ec38c71-aee0-40e3-81cf-7245cebd355e ovn-installed in OVS
Feb 20 09:53:57 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:53:57.565 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-4eae03f8-9b4a-42c1-a31a-da4ef49ab13c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4eae03f8-9b4a-42c1-a31a-da4ef49ab13c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c91094550bca41b8ac81e1aef3ebc3a4', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625203.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d55dc7b1-4234-4dd6-b9e8-035a3129941f, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=3ec38c71-aee0-40e3-81cf-7245cebd355e) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:53:57 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:53:57.568 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 3ec38c71-aee0-40e3-81cf-7245cebd355e in datapath 4eae03f8-9b4a-42c1-a31a-da4ef49ab13c unbound from our chassis
Feb 20 09:53:57 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:53:57.571 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4eae03f8-9b4a-42c1-a31a-da4ef49ab13c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:53:57 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:53:57.572 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[6f544bd9-6295-44d8-b229-63e7611c68f6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:53:57 np0005625203.localdomain podman[311335]: 2026-02-20 09:53:57.612140877 +0000 UTC m=+0.150597815 container remove ccce1db0f52d0fdb610536e2867ff81267ee95600573f1141291a946cc7a78ab (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4eae03f8-9b4a-42c1-a31a-da4ef49ab13c, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 20 09:53:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:57.614 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:57.628 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:57 np0005625203.localdomain kernel: device tap3ec38c71-ae left promiscuous mode
Feb 20 09:53:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:57.646 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:57 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e119 e119: 6 total, 6 up, 6 in
Feb 20 09:53:57 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:53:57.675 262775 INFO neutron.agent.dhcp.agent [None req-8dc1e887-d5dd-43a9-aed6-e50e15fc19a5 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:53:57 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-cf905a3af5063e52856c1c2c372a4361889a4cf36f4ce9e7f0452ae8d3d10bad-merged.mount: Deactivated successfully.
Feb 20 09:53:57 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ccce1db0f52d0fdb610536e2867ff81267ee95600573f1141291a946cc7a78ab-userdata-shm.mount: Deactivated successfully.
Feb 20 09:53:57 np0005625203.localdomain systemd[1]: run-netns-qdhcp\x2d4eae03f8\x2d9b4a\x2d42c1\x2da31a\x2dda4ef49ab13c.mount: Deactivated successfully.
Feb 20 09:53:57 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:53:57 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:53:57.842 262775 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:53:57 np0005625203.localdomain podman[311355]: 2026-02-20 09:53:57.947083548 +0000 UTC m=+0.093525676 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Feb 20 09:53:57 np0005625203.localdomain podman[311355]: 2026-02-20 09:53:57.983444569 +0000 UTC m=+0.129886667 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Feb 20 09:53:57 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:53:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:58.201 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:58 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:53:58.372 262775 INFO neutron.agent.linux.ip_lib [None req-03177fe8-6f29-425a-9fc4-3bc2f29d901b - - - - - -] Device tapa36366bb-cf cannot be used as it has no MAC address
Feb 20 09:53:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:58.394 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:58 np0005625203.localdomain kernel: device tapa36366bb-cf entered promiscuous mode
Feb 20 09:53:58 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581238.4043] manager: (tapa36366bb-cf): new Generic device (/org/freedesktop/NetworkManager/Devices/28)
Feb 20 09:53:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:58.402 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:58 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:53:58Z|00123|binding|INFO|Claiming lport a36366bb-cfdb-4db1-8c1f-4e445eb5415c for this chassis.
Feb 20 09:53:58 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:53:58Z|00124|binding|INFO|a36366bb-cfdb-4db1-8c1f-4e445eb5415c: Claiming unknown
Feb 20 09:53:58 np0005625203.localdomain systemd-udevd[311382]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 09:53:58 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:53:58.415 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-017f5f57-fb36-4de5-8780-62face7472fd', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-017f5f57-fb36-4de5-8780-62face7472fd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7ef526d9e1e4325afc5a8eb7c2f52b5', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=08e2fe8e-20b9-4384-a874-a3bcf59ddd26, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=a36366bb-cfdb-4db1-8c1f-4e445eb5415c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:53:58 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:53:58.417 161112 INFO neutron.agent.ovn.metadata.agent [-] Port a36366bb-cfdb-4db1-8c1f-4e445eb5415c in datapath 017f5f57-fb36-4de5-8780-62face7472fd bound to our chassis
Feb 20 09:53:58 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:53:58.419 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 017f5f57-fb36-4de5-8780-62face7472fd or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:53:58 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:53:58.420 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[256d7c35-471e-4315-a569-2259f2038eeb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:53:58 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapa36366bb-cf: No such device
Feb 20 09:53:58 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:53:58Z|00125|binding|INFO|Setting lport a36366bb-cfdb-4db1-8c1f-4e445eb5415c ovn-installed in OVS
Feb 20 09:53:58 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:53:58Z|00126|binding|INFO|Setting lport a36366bb-cfdb-4db1-8c1f-4e445eb5415c up in Southbound
Feb 20 09:53:58 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapa36366bb-cf: No such device
Feb 20 09:53:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:58.437 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:58 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapa36366bb-cf: No such device
Feb 20 09:53:58 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapa36366bb-cf: No such device
Feb 20 09:53:58 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapa36366bb-cf: No such device
Feb 20 09:53:58 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapa36366bb-cf: No such device
Feb 20 09:53:58 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapa36366bb-cf: No such device
Feb 20 09:53:58 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapa36366bb-cf: No such device
Feb 20 09:53:58 np0005625203.localdomain ceph-mon[296066]: osdmap e118: 6 total, 6 up, 6 in
Feb 20 09:53:58 np0005625203.localdomain ceph-mon[296066]: pgmap v197: 177 pgs: 177 active+clean; 257 MiB data, 1014 MiB used, 41 GiB / 42 GiB avail; 46 KiB/s rd, 27 MiB/s wr, 71 op/s
Feb 20 09:53:58 np0005625203.localdomain ceph-mon[296066]: osdmap e119: 6 total, 6 up, 6 in
Feb 20 09:53:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:58.479 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:58.512 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:58 np0005625203.localdomain podman[240359]: time="2026-02-20T09:53:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:53:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:53:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 155894 "" "Go-http-client/1.1"
Feb 20 09:53:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:53:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18296 "" "Go-http-client/1.1"
Feb 20 09:53:59 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:59.214 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:53:59 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:53:59 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:59.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:53:59 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:53:59.342 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:53:59 np0005625203.localdomain podman[311453]: 
Feb 20 09:53:59 np0005625203.localdomain podman[311453]: 2026-02-20 09:53:59.373470169 +0000 UTC m=+0.089098159 container create 16df9973c30ad08aa516392e8c5d38f387913906537ef3576421ff760190705a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-017f5f57-fb36-4de5-8780-62face7472fd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Feb 20 09:53:59 np0005625203.localdomain systemd[1]: Started libpod-conmon-16df9973c30ad08aa516392e8c5d38f387913906537ef3576421ff760190705a.scope.
Feb 20 09:53:59 np0005625203.localdomain podman[311453]: 2026-02-20 09:53:59.329326907 +0000 UTC m=+0.044954947 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:53:59 np0005625203.localdomain systemd[1]: tmp-crun.4oqDW9.mount: Deactivated successfully.
Feb 20 09:53:59 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:53:59 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6a475714e48e3992169eb68ce33ff1313db8f06c8a82c4c3ed2d11e5b0aafad/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:53:59 np0005625203.localdomain podman[311453]: 2026-02-20 09:53:59.456450928 +0000 UTC m=+0.172078918 container init 16df9973c30ad08aa516392e8c5d38f387913906537ef3576421ff760190705a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-017f5f57-fb36-4de5-8780-62face7472fd, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127)
Feb 20 09:53:59 np0005625203.localdomain podman[311453]: 2026-02-20 09:53:59.464743454 +0000 UTC m=+0.180371454 container start 16df9973c30ad08aa516392e8c5d38f387913906537ef3576421ff760190705a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-017f5f57-fb36-4de5-8780-62face7472fd, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:53:59 np0005625203.localdomain dnsmasq[311471]: started, version 2.85 cachesize 150
Feb 20 09:53:59 np0005625203.localdomain dnsmasq[311471]: DNS service limited to local subnets
Feb 20 09:53:59 np0005625203.localdomain dnsmasq[311471]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:53:59 np0005625203.localdomain dnsmasq[311471]: warning: no upstream servers configured
Feb 20 09:53:59 np0005625203.localdomain dnsmasq-dhcp[311471]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Feb 20 09:53:59 np0005625203.localdomain dnsmasq[311471]: read /var/lib/neutron/dhcp/017f5f57-fb36-4de5-8780-62face7472fd/addn_hosts - 0 addresses
Feb 20 09:53:59 np0005625203.localdomain dnsmasq-dhcp[311471]: read /var/lib/neutron/dhcp/017f5f57-fb36-4de5-8780-62face7472fd/host
Feb 20 09:53:59 np0005625203.localdomain dnsmasq-dhcp[311471]: read /var/lib/neutron/dhcp/017f5f57-fb36-4de5-8780-62face7472fd/opts
Feb 20 09:53:59 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:53:59.655 262775 INFO neutron.agent.dhcp.agent [None req-2aa807e7-883e-4bef-9552-738705ca769e - - - - - -] DHCP configuration for ports {'dcf4edef-c0c3-4f75-a276-022ca9745753'} is completed
Feb 20 09:53:59 np0005625203.localdomain dnsmasq[311471]: exiting on receipt of SIGTERM
Feb 20 09:53:59 np0005625203.localdomain podman[311490]: 2026-02-20 09:53:59.846112686 +0000 UTC m=+0.059401383 container kill 16df9973c30ad08aa516392e8c5d38f387913906537ef3576421ff760190705a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-017f5f57-fb36-4de5-8780-62face7472fd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:53:59 np0005625203.localdomain systemd[1]: libpod-16df9973c30ad08aa516392e8c5d38f387913906537ef3576421ff760190705a.scope: Deactivated successfully.
Feb 20 09:53:59 np0005625203.localdomain podman[311504]: 2026-02-20 09:53:59.922321616 +0000 UTC m=+0.064076627 container died 16df9973c30ad08aa516392e8c5d38f387913906537ef3576421ff760190705a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-017f5f57-fb36-4de5-8780-62face7472fd, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 20 09:53:59 np0005625203.localdomain podman[311504]: 2026-02-20 09:53:59.955760978 +0000 UTC m=+0.097515979 container cleanup 16df9973c30ad08aa516392e8c5d38f387913906537ef3576421ff760190705a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-017f5f57-fb36-4de5-8780-62face7472fd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 20 09:53:59 np0005625203.localdomain systemd[1]: libpod-conmon-16df9973c30ad08aa516392e8c5d38f387913906537ef3576421ff760190705a.scope: Deactivated successfully.
Feb 20 09:54:00 np0005625203.localdomain podman[311511]: 2026-02-20 09:54:00.012038093 +0000 UTC m=+0.138939936 container remove 16df9973c30ad08aa516392e8c5d38f387913906537ef3576421ff760190705a (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-017f5f57-fb36-4de5-8780-62face7472fd, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb 20 09:54:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:00.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:54:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:00.343 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 20 09:54:00 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-b6a475714e48e3992169eb68ce33ff1313db8f06c8a82c4c3ed2d11e5b0aafad-merged.mount: Deactivated successfully.
Feb 20 09:54:00 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-16df9973c30ad08aa516392e8c5d38f387913906537ef3576421ff760190705a-userdata-shm.mount: Deactivated successfully.
Feb 20 09:54:00 np0005625203.localdomain ceph-mon[296066]: pgmap v199: 177 pgs: 177 active+clean; 169 MiB data, 855 MiB used, 41 GiB / 42 GiB avail; 59 KiB/s rd, 7.4 KiB/s wr, 88 op/s
Feb 20 09:54:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:00.572 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:01.345 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:54:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:01.346 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 20 09:54:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:01.346 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 20 09:54:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:01.389 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 20 09:54:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:01.390 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:54:01 np0005625203.localdomain podman[311584]: 
Feb 20 09:54:01 np0005625203.localdomain podman[311584]: 2026-02-20 09:54:01.496598739 +0000 UTC m=+0.091848614 container create f3c6f4429206693e056abc5d4b4b3ab3cc63c2b056fadc62d643577a463417bb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-017f5f57-fb36-4de5-8780-62face7472fd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 20 09:54:01 np0005625203.localdomain systemd[1]: Started libpod-conmon-f3c6f4429206693e056abc5d4b4b3ab3cc63c2b056fadc62d643577a463417bb.scope.
Feb 20 09:54:01 np0005625203.localdomain podman[311584]: 2026-02-20 09:54:01.455295485 +0000 UTC m=+0.050545410 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:54:01 np0005625203.localdomain systemd[1]: tmp-crun.HojIKM.mount: Deactivated successfully.
Feb 20 09:54:01 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:54:01 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/373f3925c85b188e8bf9b872072ad4426b7d2d21083f7da3a22138ee8c7de23c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:54:01 np0005625203.localdomain podman[311584]: 2026-02-20 09:54:01.583336194 +0000 UTC m=+0.178586069 container init f3c6f4429206693e056abc5d4b4b3ab3cc63c2b056fadc62d643577a463417bb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-017f5f57-fb36-4de5-8780-62face7472fd, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Feb 20 09:54:01 np0005625203.localdomain podman[311584]: 2026-02-20 09:54:01.593043433 +0000 UTC m=+0.188293318 container start f3c6f4429206693e056abc5d4b4b3ab3cc63c2b056fadc62d643577a463417bb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-017f5f57-fb36-4de5-8780-62face7472fd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:54:01 np0005625203.localdomain dnsmasq[311602]: started, version 2.85 cachesize 150
Feb 20 09:54:01 np0005625203.localdomain dnsmasq[311602]: DNS service limited to local subnets
Feb 20 09:54:01 np0005625203.localdomain dnsmasq[311602]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:54:01 np0005625203.localdomain dnsmasq[311602]: warning: no upstream servers configured
Feb 20 09:54:01 np0005625203.localdomain dnsmasq-dhcp[311602]: DHCP, static leases only on 10.100.0.0, lease time 1d
Feb 20 09:54:01 np0005625203.localdomain dnsmasq-dhcp[311602]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Feb 20 09:54:01 np0005625203.localdomain dnsmasq[311602]: read /var/lib/neutron/dhcp/017f5f57-fb36-4de5-8780-62face7472fd/addn_hosts - 0 addresses
Feb 20 09:54:01 np0005625203.localdomain dnsmasq-dhcp[311602]: read /var/lib/neutron/dhcp/017f5f57-fb36-4de5-8780-62face7472fd/host
Feb 20 09:54:01 np0005625203.localdomain dnsmasq-dhcp[311602]: read /var/lib/neutron/dhcp/017f5f57-fb36-4de5-8780-62face7472fd/opts
Feb 20 09:54:01 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:54:01.736 262775 INFO neutron.agent.linux.ip_lib [None req-c880d633-be74-4d7f-8d99-edaee6167ff3 - - - - - -] Device tap90f14e8f-54 cannot be used as it has no MAC address
Feb 20 09:54:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:01.799 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:01 np0005625203.localdomain kernel: device tap90f14e8f-54 entered promiscuous mode
Feb 20 09:54:01 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581241.8098] manager: (tap90f14e8f-54): new Generic device (/org/freedesktop/NetworkManager/Devices/29)
Feb 20 09:54:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:01.810 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:01 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:54:01Z|00127|binding|INFO|Claiming lport 90f14e8f-5412-4c80-a451-399a921586db for this chassis.
Feb 20 09:54:01 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:54:01Z|00128|binding|INFO|90f14e8f-5412-4c80-a451-399a921586db: Claiming unknown
Feb 20 09:54:01 np0005625203.localdomain systemd-udevd[311613]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 09:54:01 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:54:01.815 262775 INFO neutron.agent.dhcp.agent [None req-3e2d924b-2063-4153-ac8a-23363d7ab778 - - - - - -] DHCP configuration for ports {'dcf4edef-c0c3-4f75-a276-022ca9745753', 'a36366bb-cfdb-4db1-8c1f-4e445eb5415c'} is completed
Feb 20 09:54:01 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:01.821 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-fe2d1044-29ca-418d-b5c9-3b7a0a23ddcc', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fe2d1044-29ca-418d-b5c9-3b7a0a23ddcc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7ef526d9e1e4325afc5a8eb7c2f52b5', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72a1aebc-35f3-41d6-b9c6-ce0f193aeac3, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=90f14e8f-5412-4c80-a451-399a921586db) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:54:01 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:01.823 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 90f14e8f-5412-4c80-a451-399a921586db in datapath fe2d1044-29ca-418d-b5c9-3b7a0a23ddcc bound to our chassis
Feb 20 09:54:01 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:01.825 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network fe2d1044-29ca-418d-b5c9-3b7a0a23ddcc or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:54:01 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:01.826 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[d77c9e33-acfc-496e-b92f-e7d3ed9a133e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:54:01 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap90f14e8f-54: No such device
Feb 20 09:54:01 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap90f14e8f-54: No such device
Feb 20 09:54:01 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:54:01Z|00129|binding|INFO|Setting lport 90f14e8f-5412-4c80-a451-399a921586db ovn-installed in OVS
Feb 20 09:54:01 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:54:01Z|00130|binding|INFO|Setting lport 90f14e8f-5412-4c80-a451-399a921586db up in Southbound
Feb 20 09:54:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:01.845 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:01 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap90f14e8f-54: No such device
Feb 20 09:54:01 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap90f14e8f-54: No such device
Feb 20 09:54:01 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap90f14e8f-54: No such device
Feb 20 09:54:01 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap90f14e8f-54: No such device
Feb 20 09:54:01 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap90f14e8f-54: No such device
Feb 20 09:54:01 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap90f14e8f-54: No such device
Feb 20 09:54:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:01.892 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:01.922 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:02 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 20 09:54:02 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4199280483' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:54:02 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 20 09:54:02 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4199280483' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:54:02 np0005625203.localdomain systemd[1]: virtsecretd.service: Deactivated successfully.
Feb 20 09:54:02 np0005625203.localdomain podman[311685]: 
Feb 20 09:54:02 np0005625203.localdomain podman[311685]: 2026-02-20 09:54:02.795924482 +0000 UTC m=+0.097889220 container create 169892e05bd18cbd3cd6ab39951b650a28471bd6e595b88834c680a19276bd38 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fe2d1044-29ca-418d-b5c9-3b7a0a23ddcc, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Feb 20 09:54:02 np0005625203.localdomain systemd[1]: Started libpod-conmon-169892e05bd18cbd3cd6ab39951b650a28471bd6e595b88834c680a19276bd38.scope.
Feb 20 09:54:02 np0005625203.localdomain systemd[1]: tmp-crun.pvN9lw.mount: Deactivated successfully.
Feb 20 09:54:02 np0005625203.localdomain podman[311685]: 2026-02-20 09:54:02.753304827 +0000 UTC m=+0.055269595 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:54:02 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:54:02 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/341d5fb0b66c7627c4230b43f3ca257d964ee4f6ee46b45f59983ac6092a9b0a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:54:02 np0005625203.localdomain podman[311685]: 2026-02-20 09:54:02.888492057 +0000 UTC m=+0.190456805 container init 169892e05bd18cbd3cd6ab39951b650a28471bd6e595b88834c680a19276bd38 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fe2d1044-29ca-418d-b5c9-3b7a0a23ddcc, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127)
Feb 20 09:54:02 np0005625203.localdomain podman[311685]: 2026-02-20 09:54:02.89802409 +0000 UTC m=+0.199988828 container start 169892e05bd18cbd3cd6ab39951b650a28471bd6e595b88834c680a19276bd38 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fe2d1044-29ca-418d-b5c9-3b7a0a23ddcc, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Feb 20 09:54:02 np0005625203.localdomain dnsmasq[311704]: started, version 2.85 cachesize 150
Feb 20 09:54:02 np0005625203.localdomain dnsmasq[311704]: DNS service limited to local subnets
Feb 20 09:54:02 np0005625203.localdomain dnsmasq[311704]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:54:02 np0005625203.localdomain dnsmasq[311704]: warning: no upstream servers configured
Feb 20 09:54:02 np0005625203.localdomain dnsmasq-dhcp[311704]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Feb 20 09:54:02 np0005625203.localdomain dnsmasq[311704]: read /var/lib/neutron/dhcp/fe2d1044-29ca-418d-b5c9-3b7a0a23ddcc/addn_hosts - 0 addresses
Feb 20 09:54:02 np0005625203.localdomain dnsmasq-dhcp[311704]: read /var/lib/neutron/dhcp/fe2d1044-29ca-418d-b5c9-3b7a0a23ddcc/host
Feb 20 09:54:02 np0005625203.localdomain dnsmasq-dhcp[311704]: read /var/lib/neutron/dhcp/fe2d1044-29ca-418d-b5c9-3b7a0a23ddcc/opts
Feb 20 09:54:03 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:54:03.068 262775 INFO neutron.agent.dhcp.agent [None req-e4d86da2-a2ed-4926-8b41-c398b049dcce - - - - - -] DHCP configuration for ports {'3579c5c5-ccac-457e-bf89-244c920a6f7e'} is completed
Feb 20 09:54:03 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:54:03Z|00131|binding|INFO|Removing iface tap90f14e8f-54 ovn-installed in OVS
Feb 20 09:54:03 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:03.183 161112 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 827453e8-bc7f-419b-93d7-b9c4a898ada2 with type ""
Feb 20 09:54:03 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:54:03Z|00132|binding|INFO|Removing lport 90f14e8f-5412-4c80-a451-399a921586db ovn-installed in OVS
Feb 20 09:54:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:03.185 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:03 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:03.187 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-fe2d1044-29ca-418d-b5c9-3b7a0a23ddcc', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fe2d1044-29ca-418d-b5c9-3b7a0a23ddcc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7ef526d9e1e4325afc5a8eb7c2f52b5', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625203.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72a1aebc-35f3-41d6-b9c6-ce0f193aeac3, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=90f14e8f-5412-4c80-a451-399a921586db) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:54:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:03.192 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:03 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:03.192 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 90f14e8f-5412-4c80-a451-399a921586db in datapath fe2d1044-29ca-418d-b5c9-3b7a0a23ddcc unbound from our chassis
Feb 20 09:54:03 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:03.195 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fe2d1044-29ca-418d-b5c9-3b7a0a23ddcc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:54:03 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:03.197 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[e237244c-2274-467b-a715-e897b0653c48]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:54:03 np0005625203.localdomain dnsmasq[311704]: read /var/lib/neutron/dhcp/fe2d1044-29ca-418d-b5c9-3b7a0a23ddcc/addn_hosts - 0 addresses
Feb 20 09:54:03 np0005625203.localdomain dnsmasq-dhcp[311704]: read /var/lib/neutron/dhcp/fe2d1044-29ca-418d-b5c9-3b7a0a23ddcc/host
Feb 20 09:54:03 np0005625203.localdomain dnsmasq-dhcp[311704]: read /var/lib/neutron/dhcp/fe2d1044-29ca-418d-b5c9-3b7a0a23ddcc/opts
Feb 20 09:54:03 np0005625203.localdomain podman[311722]: 2026-02-20 09:54:03.243519826 +0000 UTC m=+0.066231414 container kill 169892e05bd18cbd3cd6ab39951b650a28471bd6e595b88834c680a19276bd38 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fe2d1044-29ca-418d-b5c9-3b7a0a23ddcc, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:54:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:03.382 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:54:03 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:54:03.434 262775 INFO neutron.agent.dhcp.agent [None req-6db95a30-c77b-4786-86d4-0126b1be699e - - - - - -] DHCP configuration for ports {'3579c5c5-ccac-457e-bf89-244c920a6f7e', '90f14e8f-5412-4c80-a451-399a921586db'} is completed
Feb 20 09:54:03 np0005625203.localdomain dnsmasq[311704]: exiting on receipt of SIGTERM
Feb 20 09:54:03 np0005625203.localdomain podman[311761]: 2026-02-20 09:54:03.549135682 +0000 UTC m=+0.065083069 container kill 169892e05bd18cbd3cd6ab39951b650a28471bd6e595b88834c680a19276bd38 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fe2d1044-29ca-418d-b5c9-3b7a0a23ddcc, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 20 09:54:03 np0005625203.localdomain systemd[1]: libpod-169892e05bd18cbd3cd6ab39951b650a28471bd6e595b88834c680a19276bd38.scope: Deactivated successfully.
Feb 20 09:54:03 np0005625203.localdomain podman[311773]: 2026-02-20 09:54:03.625232039 +0000 UTC m=+0.063839401 container died 169892e05bd18cbd3cd6ab39951b650a28471bd6e595b88834c680a19276bd38 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fe2d1044-29ca-418d-b5c9-3b7a0a23ddcc, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3)
Feb 20 09:54:03 np0005625203.localdomain podman[311773]: 2026-02-20 09:54:03.659106683 +0000 UTC m=+0.097714005 container cleanup 169892e05bd18cbd3cd6ab39951b650a28471bd6e595b88834c680a19276bd38 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fe2d1044-29ca-418d-b5c9-3b7a0a23ddcc, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:54:03 np0005625203.localdomain systemd[1]: libpod-conmon-169892e05bd18cbd3cd6ab39951b650a28471bd6e595b88834c680a19276bd38.scope: Deactivated successfully.
Feb 20 09:54:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:03.685 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:03 np0005625203.localdomain podman[311775]: 2026-02-20 09:54:03.706305279 +0000 UTC m=+0.132229470 container remove 169892e05bd18cbd3cd6ab39951b650a28471bd6e595b88834c680a19276bd38 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-fe2d1044-29ca-418d-b5c9-3b7a0a23ddcc, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127)
Feb 20 09:54:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:03.720 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:03 np0005625203.localdomain kernel: device tap90f14e8f-54 left promiscuous mode
Feb 20 09:54:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:03.733 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:03 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:54:03.759 262775 INFO neutron.agent.dhcp.agent [-] Synchronizing state
Feb 20 09:54:03 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:54:03.993 262775 INFO neutron.agent.dhcp.agent [None req-5927803d-080b-4b31-8fb9-486293483423 - - - - - -] All active networks have been fetched through RPC.
Feb 20 09:54:04 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:54:04.009 262775 INFO neutron.agent.dhcp.agent [None req-e2daf790-839f-4e71-8238-88b6ee7dd3df - - - - - -] Synchronizing state complete
Feb 20 09:54:04 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:54:04.010 262775 INFO neutron.agent.dhcp.agent [None req-412fb647-237d-4185-a0a6-e52f457e0b46 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:54:04 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/3721321372' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:54:04 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:04.218 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:04 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:54:04 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:54:04 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-341d5fb0b66c7627c4230b43f3ca257d964ee4f6ee46b45f59983ac6092a9b0a-merged.mount: Deactivated successfully.
Feb 20 09:54:04 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-169892e05bd18cbd3cd6ab39951b650a28471bd6e595b88834c680a19276bd38-userdata-shm.mount: Deactivated successfully.
Feb 20 09:54:04 np0005625203.localdomain systemd[1]: run-netns-qdhcp\x2dfe2d1044\x2d29ca\x2d418d\x2db5c9\x2d3b7a0a23ddcc.mount: Deactivated successfully.
Feb 20 09:54:04 np0005625203.localdomain podman[311801]: 2026-02-20 09:54:04.517763625 +0000 UTC m=+0.081701911 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 20 09:54:04 np0005625203.localdomain podman[311801]: 2026-02-20 09:54:04.581363766 +0000 UTC m=+0.145302012 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 20 09:54:04 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:54:05 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:05.621 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:06 np0005625203.localdomain ceph-mon[296066]: pgmap v200: 177 pgs: 177 active+clean; 145 MiB data, 751 MiB used, 41 GiB / 42 GiB avail; 69 KiB/s rd, 7.7 KiB/s wr, 99 op/s
Feb 20 09:54:06 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/4199280483' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:54:06 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/4199280483' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:54:06 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/1336582611' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:54:06 np0005625203.localdomain ceph-mon[296066]: pgmap v201: 177 pgs: 177 active+clean; 145 MiB data, 751 MiB used, 41 GiB / 42 GiB avail; 59 KiB/s rd, 6.5 KiB/s wr, 85 op/s
Feb 20 09:54:06 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:54:06 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:54:06 np0005625203.localdomain systemd[1]: tmp-crun.jPT81h.mount: Deactivated successfully.
Feb 20 09:54:06 np0005625203.localdomain podman[311826]: 2026-02-20 09:54:06.782054669 +0000 UTC m=+0.091604337 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute)
Feb 20 09:54:06 np0005625203.localdomain podman[311826]: 2026-02-20 09:54:06.797221886 +0000 UTC m=+0.106771614 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Feb 20 09:54:06 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:54:06 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 20 09:54:06 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4199115342' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:54:06 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 20 09:54:06 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4199115342' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:54:06 np0005625203.localdomain podman[311827]: 2026-02-20 09:54:06.799125015 +0000 UTC m=+0.104782133 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, name=ubi9/ubi-minimal, architecture=x86_64, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, build-date=2026-02-05T04:57:10Z, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.7, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, config_id=openstack_network_exporter, io.buildah.version=1.33.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-02-05T04:57:10Z)
Feb 20 09:54:06 np0005625203.localdomain podman[311827]: 2026-02-20 09:54:06.884665023 +0000 UTC m=+0.190322131 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, release=1770267347, io.openshift.expose-services=, name=ubi9/ubi-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, vcs-type=git, version=9.7, io.buildah.version=1.33.7, build-date=2026-02-05T04:57:10Z, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, container_name=openstack_network_exporter)
Feb 20 09:54:06 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:54:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:54:07 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 09:54:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:54:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:54:07 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 09:54:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:54:07 np0005625203.localdomain ceph-mon[296066]: pgmap v202: 177 pgs: 177 active+clean; 145 MiB data, 751 MiB used, 41 GiB / 42 GiB avail; 65 KiB/s rd, 6.0 KiB/s wr, 92 op/s
Feb 20 09:54:07 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/1081899576' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:54:07 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/1081899576' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:54:07 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/4199115342' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:54:07 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/4199115342' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:54:07 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e120 e120: 6 total, 6 up, 6 in
Feb 20 09:54:07 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:07.338 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:54:07 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e121 e121: 6 total, 6 up, 6 in
Feb 20 09:54:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:07.669 161112 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:54:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:07.670 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:54:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:07.670 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:54:07 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 20 09:54:07 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2750035915' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:54:07 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 20 09:54:07 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2750035915' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:54:08 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:54:08.137 262775 INFO neutron.agent.linux.ip_lib [None req-5b29f345-790d-44ab-84cc-907a1915cef3 - - - - - -] Device tap32d91190-34 cannot be used as it has no MAC address
Feb 20 09:54:08 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:08.169 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:08 np0005625203.localdomain kernel: device tap32d91190-34 entered promiscuous mode
Feb 20 09:54:08 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581248.1792] manager: (tap32d91190-34): new Generic device (/org/freedesktop/NetworkManager/Devices/30)
Feb 20 09:54:08 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:08.181 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:08 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:54:08Z|00133|binding|INFO|Claiming lport 32d91190-340b-48cf-8b03-b063659fa157 for this chassis.
Feb 20 09:54:08 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:54:08Z|00134|binding|INFO|32d91190-340b-48cf-8b03-b063659fa157: Claiming unknown
Feb 20 09:54:08 np0005625203.localdomain systemd-udevd[311873]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 09:54:08 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:08.202 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-6511024b-c11b-4dfb-a17c-be7cfe6a2ec0', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6511024b-c11b-4dfb-a17c-be7cfe6a2ec0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7ef526d9e1e4325afc5a8eb7c2f52b5', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=24f80e98-c677-4df5-bfd0-9ea4ee2281ce, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=32d91190-340b-48cf-8b03-b063659fa157) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:54:08 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:08.206 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 32d91190-340b-48cf-8b03-b063659fa157 in datapath 6511024b-c11b-4dfb-a17c-be7cfe6a2ec0 bound to our chassis
Feb 20 09:54:08 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:08.207 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 6511024b-c11b-4dfb-a17c-be7cfe6a2ec0 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:54:08 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:08.209 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[aa14bc64-6526-4516-b8e4-af2be1c441b5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:54:08 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap32d91190-34: No such device
Feb 20 09:54:08 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:54:08Z|00135|binding|INFO|Setting lport 32d91190-340b-48cf-8b03-b063659fa157 ovn-installed in OVS
Feb 20 09:54:08 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:54:08Z|00136|binding|INFO|Setting lport 32d91190-340b-48cf-8b03-b063659fa157 up in Southbound
Feb 20 09:54:08 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:08.215 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:08 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap32d91190-34: No such device
Feb 20 09:54:08 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap32d91190-34: No such device
Feb 20 09:54:08 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap32d91190-34: No such device
Feb 20 09:54:08 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap32d91190-34: No such device
Feb 20 09:54:08 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap32d91190-34: No such device
Feb 20 09:54:08 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap32d91190-34: No such device
Feb 20 09:54:08 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap32d91190-34: No such device
Feb 20 09:54:08 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:08.267 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:08 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:08.315 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:08 np0005625203.localdomain ceph-mon[296066]: osdmap e120: 6 total, 6 up, 6 in
Feb 20 09:54:08 np0005625203.localdomain ceph-mon[296066]: pgmap v204: 177 pgs: 177 active+clean; 145 MiB data, 751 MiB used, 41 GiB / 42 GiB avail; 53 KiB/s rd, 4.9 KiB/s wr, 75 op/s
Feb 20 09:54:08 np0005625203.localdomain ceph-mon[296066]: osdmap e121: 6 total, 6 up, 6 in
Feb 20 09:54:08 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/2750035915' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:54:08 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/2750035915' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:54:08 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:54:08.500 2 INFO neutron.agent.securitygroups_rpc [None req-b9c4f92c-e0aa-4ddd-a393-48b8bc5d6b0b 124fec5084164515a4a8079d3fba8fba 0d4c2b96a324436e80a9b37c9d4a15eb - - default default] Security group rule updated ['b7daa996-a450-47f2-a46b-44613b415203']
Feb 20 09:54:08 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:54:08.650 2 INFO neutron.agent.securitygroups_rpc [None req-fa8e1861-ba97-4550-97ae-0d61a37c286d 124fec5084164515a4a8079d3fba8fba 0d4c2b96a324436e80a9b37c9d4a15eb - - default default] Security group rule updated ['b7daa996-a450-47f2-a46b-44613b415203']
Feb 20 09:54:08 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e122 e122: 6 total, 6 up, 6 in
Feb 20 09:54:09 np0005625203.localdomain podman[311944]: 
Feb 20 09:54:09 np0005625203.localdomain podman[311944]: 2026-02-20 09:54:09.132790028 +0000 UTC m=+0.070645520 container create 1d39adff0a2ca25b787f7280ae00a25f046f277ad98ac136dfdfeb38d0f88741 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6511024b-c11b-4dfb-a17c-be7cfe6a2ec0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 20 09:54:09 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:54:09.147 2 INFO neutron.agent.securitygroups_rpc [None req-60615bbe-69b8-4a6d-a777-f3532d76e589 124fec5084164515a4a8079d3fba8fba 0d4c2b96a324436e80a9b37c9d4a15eb - - default default] Security group rule updated ['5237cd22-9777-4b5a-aa35-83bccb0fdfdc']
Feb 20 09:54:09 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:54:09Z|00137|binding|INFO|Removing iface tap32d91190-34 ovn-installed in OVS
Feb 20 09:54:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:09.158 161112 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port c4dd788c-4b2e-4e4a-b839-fb17c5cddd94 with type ""
Feb 20 09:54:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:09.160 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-6511024b-c11b-4dfb-a17c-be7cfe6a2ec0', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6511024b-c11b-4dfb-a17c-be7cfe6a2ec0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7ef526d9e1e4325afc5a8eb7c2f52b5', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=24f80e98-c677-4df5-bfd0-9ea4ee2281ce, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=32d91190-340b-48cf-8b03-b063659fa157) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:54:09 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:54:09Z|00138|binding|INFO|Removing lport 32d91190-340b-48cf-8b03-b063659fa157 ovn-installed in OVS
Feb 20 09:54:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:09.161 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 32d91190-340b-48cf-8b03-b063659fa157 in datapath 6511024b-c11b-4dfb-a17c-be7cfe6a2ec0 unbound from our chassis
Feb 20 09:54:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:09.163 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 6511024b-c11b-4dfb-a17c-be7cfe6a2ec0 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:54:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:09.164 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[1356353f-4f9e-4b41-b5e5-f16e7e58c6f1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:54:09 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:09.197 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:09 np0005625203.localdomain podman[311944]: 2026-02-20 09:54:09.099182441 +0000 UTC m=+0.037037953 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:54:09 np0005625203.localdomain systemd[1]: Started libpod-conmon-1d39adff0a2ca25b787f7280ae00a25f046f277ad98ac136dfdfeb38d0f88741.scope.
Feb 20 09:54:09 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:09.220 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:09 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:54:09 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff5743c9347cec4e6ceabc5dc652b14ba927b64a0ad7c47c39484dc322d1493f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:54:09 np0005625203.localdomain podman[311944]: 2026-02-20 09:54:09.249226718 +0000 UTC m=+0.187082270 container init 1d39adff0a2ca25b787f7280ae00a25f046f277ad98ac136dfdfeb38d0f88741 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6511024b-c11b-4dfb-a17c-be7cfe6a2ec0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 20 09:54:09 np0005625203.localdomain podman[311944]: 2026-02-20 09:54:09.259441404 +0000 UTC m=+0.197296956 container start 1d39adff0a2ca25b787f7280ae00a25f046f277ad98ac136dfdfeb38d0f88741 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6511024b-c11b-4dfb-a17c-be7cfe6a2ec0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 20 09:54:09 np0005625203.localdomain dnsmasq[311963]: started, version 2.85 cachesize 150
Feb 20 09:54:09 np0005625203.localdomain dnsmasq[311963]: DNS service limited to local subnets
Feb 20 09:54:09 np0005625203.localdomain dnsmasq[311963]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:54:09 np0005625203.localdomain dnsmasq[311963]: warning: no upstream servers configured
Feb 20 09:54:09 np0005625203.localdomain dnsmasq-dhcp[311963]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Feb 20 09:54:09 np0005625203.localdomain dnsmasq[311963]: read /var/lib/neutron/dhcp/6511024b-c11b-4dfb-a17c-be7cfe6a2ec0/addn_hosts - 0 addresses
Feb 20 09:54:09 np0005625203.localdomain dnsmasq-dhcp[311963]: read /var/lib/neutron/dhcp/6511024b-c11b-4dfb-a17c-be7cfe6a2ec0/host
Feb 20 09:54:09 np0005625203.localdomain dnsmasq-dhcp[311963]: read /var/lib/neutron/dhcp/6511024b-c11b-4dfb-a17c-be7cfe6a2ec0/opts
Feb 20 09:54:09 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:54:09 np0005625203.localdomain ceph-mon[296066]: osdmap e122: 6 total, 6 up, 6 in
Feb 20 09:54:09 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/523875761' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:54:09 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/523875761' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:54:10 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:54:10.503 262775 INFO neutron.agent.dhcp.agent [None req-030b9e4b-c3d8-425b-b8a5-d3552d42f65f - - - - - -] DHCP configuration for ports {'cfbd6275-0948-422b-9c0c-e936f54c2643'} is completed
Feb 20 09:54:10 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:10.585 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:10 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:10.622 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:10 np0005625203.localdomain systemd[1]: tmp-crun.ZBRQom.mount: Deactivated successfully.
Feb 20 09:54:10 np0005625203.localdomain dnsmasq[311963]: read /var/lib/neutron/dhcp/6511024b-c11b-4dfb-a17c-be7cfe6a2ec0/addn_hosts - 0 addresses
Feb 20 09:54:10 np0005625203.localdomain dnsmasq-dhcp[311963]: read /var/lib/neutron/dhcp/6511024b-c11b-4dfb-a17c-be7cfe6a2ec0/host
Feb 20 09:54:10 np0005625203.localdomain dnsmasq-dhcp[311963]: read /var/lib/neutron/dhcp/6511024b-c11b-4dfb-a17c-be7cfe6a2ec0/opts
Feb 20 09:54:10 np0005625203.localdomain podman[311981]: 2026-02-20 09:54:10.716585334 +0000 UTC m=+0.081105603 container kill 1d39adff0a2ca25b787f7280ae00a25f046f277ad98ac136dfdfeb38d0f88741 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6511024b-c11b-4dfb-a17c-be7cfe6a2ec0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Feb 20 09:54:10 np0005625203.localdomain ceph-mon[296066]: pgmap v207: 177 pgs: 177 active+clean; 145 MiB data, 751 MiB used, 41 GiB / 42 GiB avail; 106 KiB/s rd, 7.2 KiB/s wr, 143 op/s
Feb 20 09:54:10 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:54:10.731 2 INFO neutron.agent.securitygroups_rpc [None req-05c5180a-791e-4a36-b283-1b3700162f32 124fec5084164515a4a8079d3fba8fba 0d4c2b96a324436e80a9b37c9d4a15eb - - default default] Security group rule updated ['5237cd22-9777-4b5a-aa35-83bccb0fdfdc']
Feb 20 09:54:11 np0005625203.localdomain podman[312020]: 2026-02-20 09:54:11.072306555 +0000 UTC m=+0.063225171 container kill 1d39adff0a2ca25b787f7280ae00a25f046f277ad98ac136dfdfeb38d0f88741 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6511024b-c11b-4dfb-a17c-be7cfe6a2ec0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 20 09:54:11 np0005625203.localdomain dnsmasq[311963]: exiting on receipt of SIGTERM
Feb 20 09:54:11 np0005625203.localdomain systemd[1]: tmp-crun.L65DfR.mount: Deactivated successfully.
Feb 20 09:54:11 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:54:11.077 2 INFO neutron.agent.securitygroups_rpc [None req-fda66745-7058-43a6-bd22-7e541948feca 124fec5084164515a4a8079d3fba8fba 0d4c2b96a324436e80a9b37c9d4a15eb - - default default] Security group rule updated ['5237cd22-9777-4b5a-aa35-83bccb0fdfdc']
Feb 20 09:54:11 np0005625203.localdomain systemd[1]: libpod-1d39adff0a2ca25b787f7280ae00a25f046f277ad98ac136dfdfeb38d0f88741.scope: Deactivated successfully.
Feb 20 09:54:11 np0005625203.localdomain podman[312033]: 2026-02-20 09:54:11.157478182 +0000 UTC m=+0.061621792 container died 1d39adff0a2ca25b787f7280ae00a25f046f277ad98ac136dfdfeb38d0f88741 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6511024b-c11b-4dfb-a17c-be7cfe6a2ec0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:54:11 np0005625203.localdomain podman[312033]: 2026-02-20 09:54:11.251856062 +0000 UTC m=+0.155999632 container remove 1d39adff0a2ca25b787f7280ae00a25f046f277ad98ac136dfdfeb38d0f88741 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-6511024b-c11b-4dfb-a17c-be7cfe6a2ec0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:54:11 np0005625203.localdomain systemd[1]: libpod-conmon-1d39adff0a2ca25b787f7280ae00a25f046f277ad98ac136dfdfeb38d0f88741.scope: Deactivated successfully.
Feb 20 09:54:11 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:11.265 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:11 np0005625203.localdomain kernel: device tap32d91190-34 left promiscuous mode
Feb 20 09:54:11 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:11.279 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:11 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:54:11.381 2 INFO neutron.agent.securitygroups_rpc [None req-484ad5d4-98d6-44d1-ade8-e83a00451c3f 124fec5084164515a4a8079d3fba8fba 0d4c2b96a324436e80a9b37c9d4a15eb - - default default] Security group rule updated ['5237cd22-9777-4b5a-aa35-83bccb0fdfdc']
Feb 20 09:54:11 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:54:11.383 262775 INFO neutron.agent.dhcp.agent [None req-94393253-0c4f-45e3-bc33-87df55797ea2 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:54:11 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-ff5743c9347cec4e6ceabc5dc652b14ba927b64a0ad7c47c39484dc322d1493f-merged.mount: Deactivated successfully.
Feb 20 09:54:11 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1d39adff0a2ca25b787f7280ae00a25f046f277ad98ac136dfdfeb38d0f88741-userdata-shm.mount: Deactivated successfully.
Feb 20 09:54:11 np0005625203.localdomain systemd[1]: run-netns-qdhcp\x2d6511024b\x2dc11b\x2d4dfb\x2da17c\x2dbe7cfe6a2ec0.mount: Deactivated successfully.
Feb 20 09:54:11 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:54:11.853 2 INFO neutron.agent.securitygroups_rpc [None req-d28a91c5-19c2-44e5-9f1d-5b67d2d402bb 124fec5084164515a4a8079d3fba8fba 0d4c2b96a324436e80a9b37c9d4a15eb - - default default] Security group rule updated ['5237cd22-9777-4b5a-aa35-83bccb0fdfdc']
Feb 20 09:54:12 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:54:12.053 2 INFO neutron.agent.securitygroups_rpc [None req-afec8fb8-e434-475a-b1f9-4fa41fdb543f 124fec5084164515a4a8079d3fba8fba 0d4c2b96a324436e80a9b37c9d4a15eb - - default default] Security group rule updated ['5237cd22-9777-4b5a-aa35-83bccb0fdfdc']
Feb 20 09:54:12 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:54:12.319 2 INFO neutron.agent.securitygroups_rpc [None req-ce85a640-3696-4e3b-b081-e77c6a0d5165 124fec5084164515a4a8079d3fba8fba 0d4c2b96a324436e80a9b37c9d4a15eb - - default default] Security group rule updated ['5237cd22-9777-4b5a-aa35-83bccb0fdfdc']
Feb 20 09:54:12 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:54:12.480 2 INFO neutron.agent.securitygroups_rpc [None req-2e25a288-cc52-4176-8755-11e2b4f58624 124fec5084164515a4a8079d3fba8fba 0d4c2b96a324436e80a9b37c9d4a15eb - - default default] Security group rule updated ['5237cd22-9777-4b5a-aa35-83bccb0fdfdc']
Feb 20 09:54:12 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e123 e123: 6 total, 6 up, 6 in
Feb 20 09:54:12 np0005625203.localdomain ceph-mon[296066]: pgmap v208: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 110 KiB/s rd, 6.8 KiB/s wr, 151 op/s
Feb 20 09:54:12 np0005625203.localdomain ceph-mon[296066]: osdmap e123: 6 total, 6 up, 6 in
Feb 20 09:54:12 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:54:12.778 2 INFO neutron.agent.securitygroups_rpc [None req-8a952836-35e7-4a3b-8a56-14552dc3796b 124fec5084164515a4a8079d3fba8fba 0d4c2b96a324436e80a9b37c9d4a15eb - - default default] Security group rule updated ['5237cd22-9777-4b5a-aa35-83bccb0fdfdc']
Feb 20 09:54:13 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:54:13.697 2 INFO neutron.agent.securitygroups_rpc [None req-72cb9bc1-a00e-467b-ba2c-70207cf7bcb5 124fec5084164515a4a8079d3fba8fba 0d4c2b96a324436e80a9b37c9d4a15eb - - default default] Security group rule updated ['5237cd22-9777-4b5a-aa35-83bccb0fdfdc']
Feb 20 09:54:14 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:14.254 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:14 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:54:14 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:54:14.484 2 INFO neutron.agent.securitygroups_rpc [None req-0e571d4b-2938-4d22-abb1-e6b913186df6 124fec5084164515a4a8079d3fba8fba 0d4c2b96a324436e80a9b37c9d4a15eb - - default default] Security group rule updated ['6b2659dc-8adf-40b4-b971-7bc179be3dc5']
Feb 20 09:54:14 np0005625203.localdomain ceph-mon[296066]: pgmap v210: 177 pgs: 177 active+clean; 145 MiB data, 755 MiB used, 41 GiB / 42 GiB avail; 110 KiB/s rd, 6.8 KiB/s wr, 151 op/s
Feb 20 09:54:15 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:15.658 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:16 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:54:16.041 2 INFO neutron.agent.securitygroups_rpc [None req-9065899c-9819-48e8-b360-946a69906bd9 124fec5084164515a4a8079d3fba8fba 0d4c2b96a324436e80a9b37c9d4a15eb - - default default] Security group rule updated ['56cc5e29-9f6f-4f35-9ade-42d618bdd35b']
Feb 20 09:54:16 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:54:16.288 2 INFO neutron.agent.securitygroups_rpc [None req-00b37041-3488-4088-b5e4-b424dd1f63aa 124fec5084164515a4a8079d3fba8fba 0d4c2b96a324436e80a9b37c9d4a15eb - - default default] Security group rule updated ['56cc5e29-9f6f-4f35-9ade-42d618bdd35b']
Feb 20 09:54:16 np0005625203.localdomain ceph-mon[296066]: pgmap v211: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 85 KiB/s rd, 5.2 KiB/s wr, 116 op/s
Feb 20 09:54:17 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:54:17.518 2 INFO neutron.agent.securitygroups_rpc [None req-bee57eed-2319-4f09-8acf-ebe401de5df5 124fec5084164515a4a8079d3fba8fba 0d4c2b96a324436e80a9b37c9d4a15eb - - default default] Security group rule updated ['39c9ea95-070c-4bc4-9287-e329c91de991']
Feb 20 09:54:17 np0005625203.localdomain podman[312078]: 2026-02-20 09:54:17.626834176 +0000 UTC m=+0.060673712 container kill f3c6f4429206693e056abc5d4b4b3ab3cc63c2b056fadc62d643577a463417bb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-017f5f57-fb36-4de5-8780-62face7472fd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Feb 20 09:54:17 np0005625203.localdomain dnsmasq[311602]: exiting on receipt of SIGTERM
Feb 20 09:54:17 np0005625203.localdomain systemd[1]: libpod-f3c6f4429206693e056abc5d4b4b3ab3cc63c2b056fadc62d643577a463417bb.scope: Deactivated successfully.
Feb 20 09:54:17 np0005625203.localdomain podman[312090]: 2026-02-20 09:54:17.702935733 +0000 UTC m=+0.060585280 container died f3c6f4429206693e056abc5d4b4b3ab3cc63c2b056fadc62d643577a463417bb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-017f5f57-fb36-4de5-8780-62face7472fd, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:54:17 np0005625203.localdomain systemd[1]: tmp-crun.uaAtmE.mount: Deactivated successfully.
Feb 20 09:54:17 np0005625203.localdomain podman[312090]: 2026-02-20 09:54:17.737023444 +0000 UTC m=+0.094673001 container cleanup f3c6f4429206693e056abc5d4b4b3ab3cc63c2b056fadc62d643577a463417bb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-017f5f57-fb36-4de5-8780-62face7472fd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:54:17 np0005625203.localdomain systemd[1]: libpod-conmon-f3c6f4429206693e056abc5d4b4b3ab3cc63c2b056fadc62d643577a463417bb.scope: Deactivated successfully.
Feb 20 09:54:17 np0005625203.localdomain podman[312092]: 2026-02-20 09:54:17.782576469 +0000 UTC m=+0.131213767 container remove f3c6f4429206693e056abc5d4b4b3ab3cc63c2b056fadc62d643577a463417bb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-017f5f57-fb36-4de5-8780-62face7472fd, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3)
Feb 20 09:54:18 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:54:18.295 2 INFO neutron.agent.securitygroups_rpc [None req-63f4c14d-d8d8-4887-9902-5c1bd910d46b 124fec5084164515a4a8079d3fba8fba 0d4c2b96a324436e80a9b37c9d4a15eb - - default default] Security group rule updated ['39c9ea95-070c-4bc4-9287-e329c91de991']
Feb 20 09:54:18 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:18.579 161112 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 857b82e8-34b2-4609-9106-d229e3f28e93 with type ""
Feb 20 09:54:18 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:54:18Z|00139|binding|INFO|Removing iface tapa36366bb-cf ovn-installed in OVS
Feb 20 09:54:18 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:54:18Z|00140|binding|INFO|Removing lport a36366bb-cfdb-4db1-8c1f-4e445eb5415c ovn-installed in OVS
Feb 20 09:54:18 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:18.582 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:18 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:18.583 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::2/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-017f5f57-fb36-4de5-8780-62face7472fd', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-017f5f57-fb36-4de5-8780-62face7472fd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b7ef526d9e1e4325afc5a8eb7c2f52b5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625203.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=08e2fe8e-20b9-4384-a874-a3bcf59ddd26, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=a36366bb-cfdb-4db1-8c1f-4e445eb5415c) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:54:18 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:18.586 161112 INFO neutron.agent.ovn.metadata.agent [-] Port a36366bb-cfdb-4db1-8c1f-4e445eb5415c in datapath 017f5f57-fb36-4de5-8780-62face7472fd unbound from our chassis
Feb 20 09:54:18 np0005625203.localdomain podman[312167]: 
Feb 20 09:54:18 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:18.590 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:18 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:18.590 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 017f5f57-fb36-4de5-8780-62face7472fd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:54:18 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:18.591 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[542ff409-5ab4-4f71-b245-6f1d40418efc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:54:18 np0005625203.localdomain podman[312167]: 2026-02-20 09:54:18.604721806 +0000 UTC m=+0.097814228 container create 81860bbcbe9fb78d595a14c07005de21d4823256f919be1925ac36c5994b25d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-017f5f57-fb36-4de5-8780-62face7472fd, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 20 09:54:18 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-373f3925c85b188e8bf9b872072ad4426b7d2d21083f7da3a22138ee8c7de23c-merged.mount: Deactivated successfully.
Feb 20 09:54:18 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f3c6f4429206693e056abc5d4b4b3ab3cc63c2b056fadc62d643577a463417bb-userdata-shm.mount: Deactivated successfully.
Feb 20 09:54:18 np0005625203.localdomain systemd[1]: Started libpod-conmon-81860bbcbe9fb78d595a14c07005de21d4823256f919be1925ac36c5994b25d2.scope.
Feb 20 09:54:18 np0005625203.localdomain podman[312167]: 2026-02-20 09:54:18.556613982 +0000 UTC m=+0.049706454 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:54:18 np0005625203.localdomain systemd[1]: tmp-crun.RMs2FL.mount: Deactivated successfully.
Feb 20 09:54:18 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:54:18 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8aaf47212a1daf1fb08be30e34f8904f47427c127dcaf79606e6b6692acfa44/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:54:18 np0005625203.localdomain podman[312167]: 2026-02-20 09:54:18.681785862 +0000 UTC m=+0.174878284 container init 81860bbcbe9fb78d595a14c07005de21d4823256f919be1925ac36c5994b25d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-017f5f57-fb36-4de5-8780-62face7472fd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 20 09:54:18 np0005625203.localdomain podman[312167]: 2026-02-20 09:54:18.691168242 +0000 UTC m=+0.184260704 container start 81860bbcbe9fb78d595a14c07005de21d4823256f919be1925ac36c5994b25d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-017f5f57-fb36-4de5-8780-62face7472fd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 20 09:54:18 np0005625203.localdomain dnsmasq[312185]: started, version 2.85 cachesize 150
Feb 20 09:54:18 np0005625203.localdomain dnsmasq[312185]: DNS service limited to local subnets
Feb 20 09:54:18 np0005625203.localdomain dnsmasq[312185]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:54:18 np0005625203.localdomain dnsmasq[312185]: warning: no upstream servers configured
Feb 20 09:54:18 np0005625203.localdomain dnsmasq-dhcp[312185]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Feb 20 09:54:18 np0005625203.localdomain dnsmasq[312185]: read /var/lib/neutron/dhcp/017f5f57-fb36-4de5-8780-62face7472fd/addn_hosts - 0 addresses
Feb 20 09:54:18 np0005625203.localdomain dnsmasq-dhcp[312185]: read /var/lib/neutron/dhcp/017f5f57-fb36-4de5-8780-62face7472fd/host
Feb 20 09:54:18 np0005625203.localdomain dnsmasq-dhcp[312185]: read /var/lib/neutron/dhcp/017f5f57-fb36-4de5-8780-62face7472fd/opts
Feb 20 09:54:18 np0005625203.localdomain ceph-mon[296066]: pgmap v212: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 75 KiB/s rd, 4.7 KiB/s wr, 103 op/s
Feb 20 09:54:18 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:54:18.868 262775 INFO neutron.agent.dhcp.agent [None req-d6b67d18-1844-45c1-b763-6815b3a0d256 - - - - - -] DHCP configuration for ports {'dcf4edef-c0c3-4f75-a276-022ca9745753', 'a36366bb-cfdb-4db1-8c1f-4e445eb5415c'} is completed
Feb 20 09:54:19 np0005625203.localdomain dnsmasq[312185]: exiting on receipt of SIGTERM
Feb 20 09:54:19 np0005625203.localdomain podman[312202]: 2026-02-20 09:54:19.055499438 +0000 UTC m=+0.060191938 container kill 81860bbcbe9fb78d595a14c07005de21d4823256f919be1925ac36c5994b25d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-017f5f57-fb36-4de5-8780-62face7472fd, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:54:19 np0005625203.localdomain systemd[1]: libpod-81860bbcbe9fb78d595a14c07005de21d4823256f919be1925ac36c5994b25d2.scope: Deactivated successfully.
Feb 20 09:54:19 np0005625203.localdomain podman[312214]: 2026-02-20 09:54:19.130583715 +0000 UTC m=+0.060156458 container died 81860bbcbe9fb78d595a14c07005de21d4823256f919be1925ac36c5994b25d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-017f5f57-fb36-4de5-8780-62face7472fd, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 20 09:54:19 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:54:19 np0005625203.localdomain podman[312214]: 2026-02-20 09:54:19.183192906 +0000 UTC m=+0.112765609 container cleanup 81860bbcbe9fb78d595a14c07005de21d4823256f919be1925ac36c5994b25d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-017f5f57-fb36-4de5-8780-62face7472fd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 20 09:54:19 np0005625203.localdomain systemd[1]: libpod-conmon-81860bbcbe9fb78d595a14c07005de21d4823256f919be1925ac36c5994b25d2.scope: Deactivated successfully.
Feb 20 09:54:19 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:19.260 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:19 np0005625203.localdomain podman[312216]: 2026-02-20 09:54:19.293028335 +0000 UTC m=+0.213593800 container remove 81860bbcbe9fb78d595a14c07005de21d4823256f919be1925ac36c5994b25d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-017f5f57-fb36-4de5-8780-62face7472fd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 20 09:54:19 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:54:19.292 2 INFO neutron.agent.securitygroups_rpc [None req-0eb3d32b-3601-4bfa-a4ec-07263fd65bfe 124fec5084164515a4a8079d3fba8fba 0d4c2b96a324436e80a9b37c9d4a15eb - - default default] Security group rule updated ['92902041-7601-4e4f-984f-f40e84e5d87a']
Feb 20 09:54:19 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:19.303 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:19 np0005625203.localdomain kernel: device tapa36366bb-cf left promiscuous mode
Feb 20 09:54:19 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:19.320 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:19 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:54:19 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:54:19.341 262775 INFO neutron.agent.dhcp.agent [None req-0144cb7f-bf7a-41d8-a320-e45d502d58e1 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:54:19 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:54:19.342 262775 INFO neutron.agent.dhcp.agent [None req-0144cb7f-bf7a-41d8-a320-e45d502d58e1 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:54:19 np0005625203.localdomain podman[312238]: 2026-02-20 09:54:19.35969784 +0000 UTC m=+0.184214882 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 20 09:54:19 np0005625203.localdomain podman[312238]: 2026-02-20 09:54:19.370085941 +0000 UTC m=+0.194602983 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 20 09:54:19 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:54:19 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 09:54:19 np0005625203.localdomain podman[312265]: 2026-02-20 09:54:19.474812091 +0000 UTC m=+0.073359204 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 20 09:54:19 np0005625203.localdomain podman[312265]: 2026-02-20 09:54:19.481040513 +0000 UTC m=+0.079587696 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 20 09:54:19 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:54:19 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-c8aaf47212a1daf1fb08be30e34f8904f47427c127dcaf79606e6b6692acfa44-merged.mount: Deactivated successfully.
Feb 20 09:54:19 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-81860bbcbe9fb78d595a14c07005de21d4823256f919be1925ac36c5994b25d2-userdata-shm.mount: Deactivated successfully.
Feb 20 09:54:19 np0005625203.localdomain systemd[1]: run-netns-qdhcp\x2d017f5f57\x2dfb36\x2d4de5\x2d8780\x2d62face7472fd.mount: Deactivated successfully.
Feb 20 09:54:19 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:54:19.702 2 INFO neutron.agent.securitygroups_rpc [None req-c48149ca-5e2e-4ce1-9578-04651a281b9c 124fec5084164515a4a8079d3fba8fba 0d4c2b96a324436e80a9b37c9d4a15eb - - default default] Security group rule updated ['92902041-7601-4e4f-984f-f40e84e5d87a']
Feb 20 09:54:19 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 20 09:54:19 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2049357571' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:54:19 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 20 09:54:19 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2049357571' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:54:19 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/2049357571' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:54:19 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/2049357571' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:54:20 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:54:20.122 2 INFO neutron.agent.securitygroups_rpc [None req-db7ac1b3-e635-4b44-b582-fc808315e3e2 124fec5084164515a4a8079d3fba8fba 0d4c2b96a324436e80a9b37c9d4a15eb - - default default] Security group rule updated ['92902041-7601-4e4f-984f-f40e84e5d87a']
Feb 20 09:54:20 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:54:20.441 2 INFO neutron.agent.securitygroups_rpc [None req-59de0fbc-5d34-4b72-b4ef-ac2a7c9b1e9d 124fec5084164515a4a8079d3fba8fba 0d4c2b96a324436e80a9b37c9d4a15eb - - default default] Security group rule updated ['92902041-7601-4e4f-984f-f40e84e5d87a']
Feb 20 09:54:20 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:20.701 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:20 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:54:20.703 2 INFO neutron.agent.securitygroups_rpc [None req-cd9b335d-ac52-4aea-badd-19c0c11ff6a7 124fec5084164515a4a8079d3fba8fba 0d4c2b96a324436e80a9b37c9d4a15eb - - default default] Security group rule updated ['92902041-7601-4e4f-984f-f40e84e5d87a']
Feb 20 09:54:20 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:54:20.799 262775 INFO neutron.agent.linux.ip_lib [None req-e8cae02d-5a6b-41f1-a714-7d00e705a457 - - - - - -] Device tap6f2db5b5-cd cannot be used as it has no MAC address
Feb 20 09:54:20 np0005625203.localdomain ceph-mon[296066]: pgmap v213: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 14 KiB/s rd, 409 B/s wr, 20 op/s
Feb 20 09:54:20 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:20.822 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:20 np0005625203.localdomain kernel: device tap6f2db5b5-cd entered promiscuous mode
Feb 20 09:54:20 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:54:20Z|00141|binding|INFO|Claiming lport 6f2db5b5-cd09-4d42-a24a-c9a72f164317 for this chassis.
Feb 20 09:54:20 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:54:20Z|00142|binding|INFO|6f2db5b5-cd09-4d42-a24a-c9a72f164317: Claiming unknown
Feb 20 09:54:20 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581260.8300] manager: (tap6f2db5b5-cd): new Generic device (/org/freedesktop/NetworkManager/Devices/31)
Feb 20 09:54:20 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:20.828 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:20 np0005625203.localdomain systemd-udevd[312298]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 09:54:20 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:20.844 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-dc12f634-62d5-41b5-9719-c2501170ffb0', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dc12f634-62d5-41b5-9719-c2501170ffb0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8affd803090342c7a0b4a8c10fbcda95', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=11a6bd9f-da0c-4244-be95-190fd2154b45, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=6f2db5b5-cd09-4d42-a24a-c9a72f164317) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:54:20 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:20.846 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 6f2db5b5-cd09-4d42-a24a-c9a72f164317 in datapath dc12f634-62d5-41b5-9719-c2501170ffb0 bound to our chassis
Feb 20 09:54:20 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:20.849 161112 DEBUG neutron.agent.ovn.metadata.agent [-] Port 76cda34b-31ae-4314-bbd4-8da15e2c70a2 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Feb 20 09:54:20 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:20.849 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network dc12f634-62d5-41b5-9719-c2501170ffb0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:54:20 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:20.850 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[c08c44ba-8163-4f75-b7b7-9ac4d71f68b7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:54:20 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap6f2db5b5-cd: No such device
Feb 20 09:54:20 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap6f2db5b5-cd: No such device
Feb 20 09:54:20 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:54:20Z|00143|binding|INFO|Setting lport 6f2db5b5-cd09-4d42-a24a-c9a72f164317 ovn-installed in OVS
Feb 20 09:54:20 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:54:20Z|00144|binding|INFO|Setting lport 6f2db5b5-cd09-4d42-a24a-c9a72f164317 up in Southbound
Feb 20 09:54:20 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:20.870 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:20 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap6f2db5b5-cd: No such device
Feb 20 09:54:20 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap6f2db5b5-cd: No such device
Feb 20 09:54:20 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap6f2db5b5-cd: No such device
Feb 20 09:54:20 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap6f2db5b5-cd: No such device
Feb 20 09:54:20 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap6f2db5b5-cd: No such device
Feb 20 09:54:20 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap6f2db5b5-cd: No such device
Feb 20 09:54:20 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:20.906 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:20 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:20.938 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:21 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:54:21.038 2 INFO neutron.agent.securitygroups_rpc [None req-eaa2444c-9904-4bf6-912c-ce544aec0944 a7bf5bd2a08b40838819ba2189f49015 8affd803090342c7a0b4a8c10fbcda95 - - default default] Security group member updated ['863be2ee-f2c8-49ce-9c0f-4d58bca5441c']
Feb 20 09:54:21 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:54:21.197 2 INFO neutron.agent.securitygroups_rpc [None req-eef0b483-52c5-455d-8e50-fdf7323c6cd9 124fec5084164515a4a8079d3fba8fba 0d4c2b96a324436e80a9b37c9d4a15eb - - default default] Security group rule updated ['92902041-7601-4e4f-984f-f40e84e5d87a']
Feb 20 09:54:21 np0005625203.localdomain podman[312369]: 2026-02-20 09:54:21.800798306 +0000 UTC m=+0.084999053 container create 7dc60defa271f02d2a01114208da0adade34268bc655b732477ad6564f55fb2f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dc12f634-62d5-41b5-9719-c2501170ffb0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:54:21 np0005625203.localdomain systemd[1]: Started libpod-conmon-7dc60defa271f02d2a01114208da0adade34268bc655b732477ad6564f55fb2f.scope.
Feb 20 09:54:21 np0005625203.localdomain podman[312369]: 2026-02-20 09:54:21.76039532 +0000 UTC m=+0.044596087 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:54:21 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:54:21 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1c72fa4a96bcb0d61975a0c3510a8220aafda0739a02d1076c3f3354361feaa/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:54:21 np0005625203.localdomain podman[312369]: 2026-02-20 09:54:21.91635372 +0000 UTC m=+0.200554457 container init 7dc60defa271f02d2a01114208da0adade34268bc655b732477ad6564f55fb2f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dc12f634-62d5-41b5-9719-c2501170ffb0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb 20 09:54:21 np0005625203.localdomain podman[312369]: 2026-02-20 09:54:21.92770211 +0000 UTC m=+0.211902847 container start 7dc60defa271f02d2a01114208da0adade34268bc655b732477ad6564f55fb2f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dc12f634-62d5-41b5-9719-c2501170ffb0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127)
Feb 20 09:54:21 np0005625203.localdomain dnsmasq[312387]: started, version 2.85 cachesize 150
Feb 20 09:54:21 np0005625203.localdomain dnsmasq[312387]: DNS service limited to local subnets
Feb 20 09:54:21 np0005625203.localdomain dnsmasq[312387]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:54:21 np0005625203.localdomain dnsmasq[312387]: warning: no upstream servers configured
Feb 20 09:54:21 np0005625203.localdomain dnsmasq-dhcp[312387]: DHCP, static leases only on 10.100.0.0, lease time 1d
Feb 20 09:54:21 np0005625203.localdomain dnsmasq[312387]: read /var/lib/neutron/dhcp/dc12f634-62d5-41b5-9719-c2501170ffb0/addn_hosts - 0 addresses
Feb 20 09:54:21 np0005625203.localdomain dnsmasq-dhcp[312387]: read /var/lib/neutron/dhcp/dc12f634-62d5-41b5-9719-c2501170ffb0/host
Feb 20 09:54:21 np0005625203.localdomain dnsmasq-dhcp[312387]: read /var/lib/neutron/dhcp/dc12f634-62d5-41b5-9719-c2501170ffb0/opts
Feb 20 09:54:21 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:54:21.991 262775 INFO neutron.agent.dhcp.agent [None req-dde2988d-67e0-4179-b9eb-80f08a53366b - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:54:20Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4e78220>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4e78880>], id=f86ad34a-86fe-4891-a5f6-28d9e4ecb394, ip_allocation=immediate, mac_address=fa:16:3e:14:7d:2c, name=tempest-ExtraDHCPOptionsTestJSON-619773436, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:54:17Z, description=, dns_domain=, id=dc12f634-62d5-41b5-9719-c2501170ffb0, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ExtraDHCPOptionsTestJSON-test-network-1629180863, port_security_enabled=True, project_id=8affd803090342c7a0b4a8c10fbcda95, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=58010, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1623, status=ACTIVE, subnets=['ca3cc6da-09d3-45dd-ae48-5596282d4a3c'], tags=[], tenant_id=8affd803090342c7a0b4a8c10fbcda95, updated_at=2026-02-20T09:54:18Z, vlan_transparent=None, network_id=dc12f634-62d5-41b5-9719-c2501170ffb0, port_security_enabled=True, project_id=8affd803090342c7a0b4a8c10fbcda95, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['863be2ee-f2c8-49ce-9c0f-4d58bca5441c'], standard_attr_id=1665, status=DOWN, tags=[], tenant_id=8affd803090342c7a0b4a8c10fbcda95, updated_at=2026-02-20T09:54:20Z on network dc12f634-62d5-41b5-9719-c2501170ffb0
Feb 20 09:54:22 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:54:22.231 262775 INFO neutron.agent.dhcp.agent [None req-a83d4019-997b-4bc9-9450-3c8b30d87bd3 - - - - - -] DHCP configuration for ports {'d06abf67-683d-4c47-839c-e703ada831ad'} is completed
Feb 20 09:54:22 np0005625203.localdomain dnsmasq[312387]: read /var/lib/neutron/dhcp/dc12f634-62d5-41b5-9719-c2501170ffb0/addn_hosts - 1 addresses
Feb 20 09:54:22 np0005625203.localdomain podman[312403]: 2026-02-20 09:54:22.410980625 +0000 UTC m=+0.060740534 container kill 7dc60defa271f02d2a01114208da0adade34268bc655b732477ad6564f55fb2f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dc12f634-62d5-41b5-9719-c2501170ffb0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 20 09:54:22 np0005625203.localdomain dnsmasq-dhcp[312387]: read /var/lib/neutron/dhcp/dc12f634-62d5-41b5-9719-c2501170ffb0/host
Feb 20 09:54:22 np0005625203.localdomain dnsmasq-dhcp[312387]: read /var/lib/neutron/dhcp/dc12f634-62d5-41b5-9719-c2501170ffb0/opts
Feb 20 09:54:22 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:54:22.467 2 INFO neutron.agent.securitygroups_rpc [None req-5a5ae7af-13a6-42c5-a0fc-7ba1957a4294 b9d64681c327441a81dfa771b4b413f6 ce97c44a73f94ada962654654798a4af - - default default] Security group member updated ['203b95e6-8f62-4037-821a-d64a45daeaf8']
Feb 20 09:54:22 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:54:22.724 262775 INFO neutron.agent.dhcp.agent [None req-7d2081c5-b485-4003-a25b-52c97cdb22f1 - - - - - -] DHCP configuration for ports {'f86ad34a-86fe-4891-a5f6-28d9e4ecb394'} is completed
Feb 20 09:54:22 np0005625203.localdomain ceph-mon[296066]: pgmap v214: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 12 KiB/s rd, 716 B/s wr, 16 op/s
Feb 20 09:54:22 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:54:22.966 2 INFO neutron.agent.securitygroups_rpc [None req-ec64d548-962f-41dd-a02f-4a25f3310f4f a7bf5bd2a08b40838819ba2189f49015 8affd803090342c7a0b4a8c10fbcda95 - - default default] Security group member updated ['863be2ee-f2c8-49ce-9c0f-4d58bca5441c']
Feb 20 09:54:23 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:54:23.037 262775 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:54:22Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da5728790>], dns_domain=, dns_name=, extra_dhcp_opts=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da57284f0>, <neutron.agent.linux.dhcp.DictModel object at 0x7f7da5728c70>, <neutron.agent.linux.dhcp.DictModel object at 0x7f7da5728be0>], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4f075e0>], id=18a84cad-3abc-49c2-92ec-196baba905a4, ip_allocation=immediate, mac_address=fa:16:3e:45:30:27, name=tempest-ExtraDHCPOptionsTestJSON-1377675996, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:54:17Z, description=, dns_domain=, id=dc12f634-62d5-41b5-9719-c2501170ffb0, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-ExtraDHCPOptionsTestJSON-test-network-1629180863, port_security_enabled=True, project_id=8affd803090342c7a0b4a8c10fbcda95, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=58010, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1623, status=ACTIVE, subnets=['ca3cc6da-09d3-45dd-ae48-5596282d4a3c'], tags=[], tenant_id=8affd803090342c7a0b4a8c10fbcda95, updated_at=2026-02-20T09:54:18Z, vlan_transparent=None, network_id=dc12f634-62d5-41b5-9719-c2501170ffb0, port_security_enabled=True, project_id=8affd803090342c7a0b4a8c10fbcda95, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['863be2ee-f2c8-49ce-9c0f-4d58bca5441c'], standard_attr_id=1672, status=DOWN, tags=[], tenant_id=8affd803090342c7a0b4a8c10fbcda95, updated_at=2026-02-20T09:54:22Z on network dc12f634-62d5-41b5-9719-c2501170ffb0
Feb 20 09:54:23 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:54:23.161 2 INFO neutron.agent.securitygroups_rpc [None req-a78e376a-5faf-4597-9e34-68a60251f328 124fec5084164515a4a8079d3fba8fba 0d4c2b96a324436e80a9b37c9d4a15eb - - default default] Security group rule updated ['11123030-cb07-4b38-85fd-08bf79b16579']
Feb 20 09:54:23 np0005625203.localdomain dnsmasq[312387]: read /var/lib/neutron/dhcp/dc12f634-62d5-41b5-9719-c2501170ffb0/addn_hosts - 2 addresses
Feb 20 09:54:23 np0005625203.localdomain dnsmasq-dhcp[312387]: read /var/lib/neutron/dhcp/dc12f634-62d5-41b5-9719-c2501170ffb0/host
Feb 20 09:54:23 np0005625203.localdomain dnsmasq-dhcp[312387]: read /var/lib/neutron/dhcp/dc12f634-62d5-41b5-9719-c2501170ffb0/opts
Feb 20 09:54:23 np0005625203.localdomain podman[312440]: 2026-02-20 09:54:23.291970526 +0000 UTC m=+0.064222712 container kill 7dc60defa271f02d2a01114208da0adade34268bc655b732477ad6564f55fb2f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dc12f634-62d5-41b5-9719-c2501170ffb0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 20 09:54:23 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:54:23.564 262775 INFO neutron.agent.dhcp.agent [None req-b72667c2-6b4c-4737-bf05-4788df80859b - - - - - -] DHCP configuration for ports {'18a84cad-3abc-49c2-92ec-196baba905a4'} is completed
Feb 20 09:54:24 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:24.273 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:24 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:54:24 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:54:24.360 2 INFO neutron.agent.securitygroups_rpc [None req-f54c4838-5ce8-45ff-b090-12b4bbbb882f a7bf5bd2a08b40838819ba2189f49015 8affd803090342c7a0b4a8c10fbcda95 - - default default] Security group member updated ['863be2ee-f2c8-49ce-9c0f-4d58bca5441c']
Feb 20 09:54:24 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:54:24.573 2 INFO neutron.agent.securitygroups_rpc [None req-5c523cd2-f346-4abd-990f-316e1d877f9a b9d64681c327441a81dfa771b4b413f6 ce97c44a73f94ada962654654798a4af - - default default] Security group member updated ['203b95e6-8f62-4037-821a-d64a45daeaf8']
Feb 20 09:54:24 np0005625203.localdomain dnsmasq[312387]: read /var/lib/neutron/dhcp/dc12f634-62d5-41b5-9719-c2501170ffb0/addn_hosts - 1 addresses
Feb 20 09:54:24 np0005625203.localdomain dnsmasq-dhcp[312387]: read /var/lib/neutron/dhcp/dc12f634-62d5-41b5-9719-c2501170ffb0/host
Feb 20 09:54:24 np0005625203.localdomain podman[312478]: 2026-02-20 09:54:24.630646042 +0000 UTC m=+0.061811317 container kill 7dc60defa271f02d2a01114208da0adade34268bc655b732477ad6564f55fb2f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dc12f634-62d5-41b5-9719-c2501170ffb0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:54:24 np0005625203.localdomain dnsmasq-dhcp[312387]: read /var/lib/neutron/dhcp/dc12f634-62d5-41b5-9719-c2501170ffb0/opts
Feb 20 09:54:24 np0005625203.localdomain ceph-mon[296066]: pgmap v215: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 663 B/s wr, 15 op/s
Feb 20 09:54:24 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:54:24.881 262775 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:54:20Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4f01130>], dns_domain=, dns_name=, extra_dhcp_opts=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da57b4ac0>, <neutron.agent.linux.dhcp.DictModel object at 0x7f7da4f01880>, <neutron.agent.linux.dhcp.DictModel object at 0x7f7da4f01c70>], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da57b45b0>], id=f86ad34a-86fe-4891-a5f6-28d9e4ecb394, ip_allocation=immediate, mac_address=fa:16:3e:14:7d:2c, name=tempest-new-port-name-1114076250, network_id=dc12f634-62d5-41b5-9719-c2501170ffb0, port_security_enabled=True, project_id=8affd803090342c7a0b4a8c10fbcda95, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['863be2ee-f2c8-49ce-9c0f-4d58bca5441c'], standard_attr_id=1665, status=DOWN, tags=[], tenant_id=8affd803090342c7a0b4a8c10fbcda95, updated_at=2026-02-20T09:54:24Z on network dc12f634-62d5-41b5-9719-c2501170ffb0
Feb 20 09:54:25 np0005625203.localdomain dnsmasq[312387]: read /var/lib/neutron/dhcp/dc12f634-62d5-41b5-9719-c2501170ffb0/addn_hosts - 1 addresses
Feb 20 09:54:25 np0005625203.localdomain dnsmasq-dhcp[312387]: read /var/lib/neutron/dhcp/dc12f634-62d5-41b5-9719-c2501170ffb0/host
Feb 20 09:54:25 np0005625203.localdomain dnsmasq-dhcp[312387]: read /var/lib/neutron/dhcp/dc12f634-62d5-41b5-9719-c2501170ffb0/opts
Feb 20 09:54:25 np0005625203.localdomain podman[312513]: 2026-02-20 09:54:25.128325361 +0000 UTC m=+0.062158958 container kill 7dc60defa271f02d2a01114208da0adade34268bc655b732477ad6564f55fb2f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dc12f634-62d5-41b5-9719-c2501170ffb0, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:54:25 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:54:25.421 262775 INFO neutron.agent.dhcp.agent [None req-8252a896-bcc1-4971-a78c-54d73c1c1de8 - - - - - -] DHCP configuration for ports {'f86ad34a-86fe-4891-a5f6-28d9e4ecb394'} is completed
Feb 20 09:54:25 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:54:25.535 2 INFO neutron.agent.securitygroups_rpc [None req-1b48fcb2-1507-433d-8e69-fc9c0e8a60aa a7bf5bd2a08b40838819ba2189f49015 8affd803090342c7a0b4a8c10fbcda95 - - default default] Security group member updated ['863be2ee-f2c8-49ce-9c0f-4d58bca5441c']
Feb 20 09:54:25 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:25.704 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:25 np0005625203.localdomain dnsmasq[312387]: read /var/lib/neutron/dhcp/dc12f634-62d5-41b5-9719-c2501170ffb0/addn_hosts - 0 addresses
Feb 20 09:54:25 np0005625203.localdomain dnsmasq-dhcp[312387]: read /var/lib/neutron/dhcp/dc12f634-62d5-41b5-9719-c2501170ffb0/host
Feb 20 09:54:25 np0005625203.localdomain podman[312552]: 2026-02-20 09:54:25.797442857 +0000 UTC m=+0.064208290 container kill 7dc60defa271f02d2a01114208da0adade34268bc655b732477ad6564f55fb2f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dc12f634-62d5-41b5-9719-c2501170ffb0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 20 09:54:25 np0005625203.localdomain dnsmasq-dhcp[312387]: read /var/lib/neutron/dhcp/dc12f634-62d5-41b5-9719-c2501170ffb0/opts
Feb 20 09:54:25 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e124 e124: 6 total, 6 up, 6 in
Feb 20 09:54:26 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:54:26Z|00145|binding|INFO|Removing iface tap6f2db5b5-cd ovn-installed in OVS
Feb 20 09:54:26 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:26.186 161112 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 76cda34b-31ae-4314-bbd4-8da15e2c70a2 with type ""
Feb 20 09:54:26 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:54:26Z|00146|binding|INFO|Removing lport 6f2db5b5-cd09-4d42-a24a-c9a72f164317 ovn-installed in OVS
Feb 20 09:54:26 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:26.188 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-dc12f634-62d5-41b5-9719-c2501170ffb0', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dc12f634-62d5-41b5-9719-c2501170ffb0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8affd803090342c7a0b4a8c10fbcda95', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625203.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=11a6bd9f-da0c-4244-be95-190fd2154b45, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=6f2db5b5-cd09-4d42-a24a-c9a72f164317) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:54:26 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:26.189 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:26 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:26.192 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 6f2db5b5-cd09-4d42-a24a-c9a72f164317 in datapath dc12f634-62d5-41b5-9719-c2501170ffb0 unbound from our chassis
Feb 20 09:54:26 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:26.194 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network dc12f634-62d5-41b5-9719-c2501170ffb0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:54:26 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:26.195 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:26 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:26.196 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[b9c3a6c4-6f4b-47f1-a2c8-736d378c2485]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:54:26 np0005625203.localdomain dnsmasq[312387]: exiting on receipt of SIGTERM
Feb 20 09:54:26 np0005625203.localdomain podman[312591]: 2026-02-20 09:54:26.242968938 +0000 UTC m=+0.071669641 container kill 7dc60defa271f02d2a01114208da0adade34268bc655b732477ad6564f55fb2f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dc12f634-62d5-41b5-9719-c2501170ffb0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:54:26 np0005625203.localdomain systemd[1]: libpod-7dc60defa271f02d2a01114208da0adade34268bc655b732477ad6564f55fb2f.scope: Deactivated successfully.
Feb 20 09:54:26 np0005625203.localdomain podman[312607]: 2026-02-20 09:54:26.317572949 +0000 UTC m=+0.051454298 container died 7dc60defa271f02d2a01114208da0adade34268bc655b732477ad6564f55fb2f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dc12f634-62d5-41b5-9719-c2501170ffb0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb 20 09:54:26 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7dc60defa271f02d2a01114208da0adade34268bc655b732477ad6564f55fb2f-userdata-shm.mount: Deactivated successfully.
Feb 20 09:54:26 np0005625203.localdomain podman[312607]: 2026-02-20 09:54:26.361678709 +0000 UTC m=+0.095560028 container remove 7dc60defa271f02d2a01114208da0adade34268bc655b732477ad6564f55fb2f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-dc12f634-62d5-41b5-9719-c2501170ffb0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127)
Feb 20 09:54:26 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:26.375 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:26 np0005625203.localdomain kernel: device tap6f2db5b5-cd left promiscuous mode
Feb 20 09:54:26 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:26.397 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:26 np0005625203.localdomain systemd[1]: libpod-conmon-7dc60defa271f02d2a01114208da0adade34268bc655b732477ad6564f55fb2f.scope: Deactivated successfully.
Feb 20 09:54:26 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:54:26.427 262775 INFO neutron.agent.dhcp.agent [None req-d24de8c6-3852-41de-8aff-68ae4d959e73 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:54:26 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:54:26.428 262775 INFO neutron.agent.dhcp.agent [None req-d24de8c6-3852-41de-8aff-68ae4d959e73 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:54:26 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f1c72fa4a96bcb0d61975a0c3510a8220aafda0739a02d1076c3f3354361feaa-merged.mount: Deactivated successfully.
Feb 20 09:54:26 np0005625203.localdomain systemd[1]: run-netns-qdhcp\x2ddc12f634\x2d62d5\x2d41b5\x2d9719\x2dc2501170ffb0.mount: Deactivated successfully.
Feb 20 09:54:26 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:26.815 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:26 np0005625203.localdomain ceph-mon[296066]: pgmap v216: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 597 B/s wr, 14 op/s
Feb 20 09:54:26 np0005625203.localdomain ceph-mon[296066]: osdmap e124: 6 total, 6 up, 6 in
Feb 20 09:54:26 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e125 e125: 6 total, 6 up, 6 in
Feb 20 09:54:27 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #37. Immutable memtables: 0.
Feb 20 09:54:27 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:54:27.692746) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 20 09:54:27 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 37
Feb 20 09:54:27 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581267692816, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 1164, "num_deletes": 254, "total_data_size": 1413785, "memory_usage": 1439328, "flush_reason": "Manual Compaction"}
Feb 20 09:54:27 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #38: started
Feb 20 09:54:27 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581267702095, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 38, "file_size": 923135, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22201, "largest_seqno": 23360, "table_properties": {"data_size": 918254, "index_size": 2416, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11360, "raw_average_key_size": 20, "raw_value_size": 908237, "raw_average_value_size": 1666, "num_data_blocks": 106, "num_entries": 545, "num_filter_entries": 545, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771581202, "oldest_key_time": 1771581202, "file_creation_time": 1771581267, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d7091244-ec7b-4a4c-98bf-27480b1bb7f4", "db_session_id": "XSHOJ401GNN3F43CMPC3", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Feb 20 09:54:27 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 9403 microseconds, and 4333 cpu microseconds.
Feb 20 09:54:27 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 20 09:54:27 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:54:27.702153) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #38: 923135 bytes OK
Feb 20 09:54:27 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:54:27.702182) [db/memtable_list.cc:519] [default] Level-0 commit table #38 started
Feb 20 09:54:27 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:54:27.704005) [db/memtable_list.cc:722] [default] Level-0 commit table #38: memtable #1 done
Feb 20 09:54:27 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:54:27.704027) EVENT_LOG_v1 {"time_micros": 1771581267704020, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 20 09:54:27 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:54:27.704051) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 20 09:54:27 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 1408047, prev total WAL file size 1408047, number of live WAL files 2.
Feb 20 09:54:27 np0005625203.localdomain ceph-mon[296066]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000034.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 09:54:27 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:54:27.704757) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131353436' seq:72057594037927935, type:22 .. '7061786F73003131373938' seq:0, type:0; will stop at (end)
Feb 20 09:54:27 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 20 09:54:27 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [38(901KB)], [36(17MB)]
Feb 20 09:54:27 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581267704808, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [38], "files_L6": [36], "score": -1, "input_data_size": 18824782, "oldest_snapshot_seqno": -1}
Feb 20 09:54:27 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #39: 12201 keys, 15913761 bytes, temperature: kUnknown
Feb 20 09:54:27 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581267777556, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 39, "file_size": 15913761, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15845738, "index_size": 36434, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30533, "raw_key_size": 328060, "raw_average_key_size": 26, "raw_value_size": 15639402, "raw_average_value_size": 1281, "num_data_blocks": 1377, "num_entries": 12201, "num_filter_entries": 12201, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580698, "oldest_key_time": 0, "file_creation_time": 1771581267, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d7091244-ec7b-4a4c-98bf-27480b1bb7f4", "db_session_id": "XSHOJ401GNN3F43CMPC3", "orig_file_number": 39, "seqno_to_time_mapping": "N/A"}}
Feb 20 09:54:27 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 20 09:54:27 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:54:27.778089) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 15913761 bytes
Feb 20 09:54:27 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:54:27.779940) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 257.8 rd, 217.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 17.1 +0.0 blob) out(15.2 +0.0 blob), read-write-amplify(37.6) write-amplify(17.2) OK, records in: 12728, records dropped: 527 output_compression: NoCompression
Feb 20 09:54:27 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:54:27.779971) EVENT_LOG_v1 {"time_micros": 1771581267779958, "job": 20, "event": "compaction_finished", "compaction_time_micros": 73018, "compaction_time_cpu_micros": 44639, "output_level": 6, "num_output_files": 1, "total_output_size": 15913761, "num_input_records": 12728, "num_output_records": 12201, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 20 09:54:27 np0005625203.localdomain ceph-mon[296066]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 09:54:27 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581267780769, "job": 20, "event": "table_file_deletion", "file_number": 38}
Feb 20 09:54:27 np0005625203.localdomain ceph-mon[296066]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000036.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 09:54:27 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581267783810, "job": 20, "event": "table_file_deletion", "file_number": 36}
Feb 20 09:54:27 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:54:27.704697) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:54:27 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:54:27.783990) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:54:27 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:54:27.783994) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:54:27 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:54:27.783996) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:54:27 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:54:27.783999) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:54:27 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:54:27.784001) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:54:27 np0005625203.localdomain ceph-mon[296066]: osdmap e125: 6 total, 6 up, 6 in
Feb 20 09:54:27 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e126 e126: 6 total, 6 up, 6 in
Feb 20 09:54:28 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:54:28 np0005625203.localdomain podman[312634]: 2026-02-20 09:54:28.779162737 +0000 UTC m=+0.090026077 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 20 09:54:28 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:54:28.785 2 INFO neutron.agent.securitygroups_rpc [None req-3a0ac74b-e97f-4840-a2be-cf3db56b29ba eed45d0e6e9a4013a0e822ffa85bb5cb 13f7a9ed49974d1596cd7746bdf2e7c4 - - default default] Security group rule updated ['92258b95-63d5-4c8a-9734-555bdc627d97']
Feb 20 09:54:28 np0005625203.localdomain podman[312634]: 2026-02-20 09:54:28.813249288 +0000 UTC m=+0.124112578 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Feb 20 09:54:28 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:54:28 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e127 e127: 6 total, 6 up, 6 in
Feb 20 09:54:28 np0005625203.localdomain ceph-mon[296066]: pgmap v219: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 14 KiB/s rd, 383 B/s wr, 18 op/s
Feb 20 09:54:28 np0005625203.localdomain ceph-mon[296066]: osdmap e126: 6 total, 6 up, 6 in
Feb 20 09:54:28 np0005625203.localdomain podman[240359]: time="2026-02-20T09:54:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:54:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:54:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 155894 "" "Go-http-client/1.1"
Feb 20 09:54:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:54:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18300 "" "Go-http-client/1.1"
Feb 20 09:54:29 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:29.308 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:29 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:54:29 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:54:29.431 262775 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:54:29 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:54:29.708 262775 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:54:29 np0005625203.localdomain ceph-mon[296066]: osdmap e127: 6 total, 6 up, 6 in
Feb 20 09:54:29 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e128 e128: 6 total, 6 up, 6 in
Feb 20 09:54:30 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:30.706 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:30 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e129 e129: 6 total, 6 up, 6 in
Feb 20 09:54:31 np0005625203.localdomain ceph-mon[296066]: pgmap v222: 177 pgs: 177 active+clean; 145 MiB data, 759 MiB used, 41 GiB / 42 GiB avail; 48 KiB/s rd, 5.7 KiB/s wr, 63 op/s
Feb 20 09:54:31 np0005625203.localdomain ceph-mon[296066]: osdmap e128: 6 total, 6 up, 6 in
Feb 20 09:54:32 np0005625203.localdomain ceph-mon[296066]: osdmap e129: 6 total, 6 up, 6 in
Feb 20 09:54:32 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e130 e130: 6 total, 6 up, 6 in
Feb 20 09:54:32 np0005625203.localdomain sshd[312652]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:54:33 np0005625203.localdomain ceph-mon[296066]: pgmap v225: 177 pgs: 177 active+clean; 145 MiB data, 760 MiB used, 41 GiB / 42 GiB avail; 121 KiB/s rd, 22 KiB/s wr, 175 op/s
Feb 20 09:54:33 np0005625203.localdomain ceph-mon[296066]: osdmap e130: 6 total, 6 up, 6 in
Feb 20 09:54:33 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e131 e131: 6 total, 6 up, 6 in
Feb 20 09:54:33 np0005625203.localdomain sshd[312652]: Invalid user devuser from 185.196.11.208 port 53980
Feb 20 09:54:33 np0005625203.localdomain sshd[312652]: Received disconnect from 185.196.11.208 port 53980:11: Bye Bye [preauth]
Feb 20 09:54:33 np0005625203.localdomain sshd[312652]: Disconnected from invalid user devuser 185.196.11.208 port 53980 [preauth]
Feb 20 09:54:33 np0005625203.localdomain sshd[312654]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:54:33 np0005625203.localdomain sshd[312654]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:54:34 np0005625203.localdomain ceph-mon[296066]: osdmap e131: 6 total, 6 up, 6 in
Feb 20 09:54:34 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e132 e132: 6 total, 6 up, 6 in
Feb 20 09:54:34 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:34.310 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:34 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:54:34 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:54:34 np0005625203.localdomain podman[312656]: 2026-02-20 09:54:34.778546744 +0000 UTC m=+0.090319527 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 20 09:54:34 np0005625203.localdomain podman[312656]: 2026-02-20 09:54:34.827270457 +0000 UTC m=+0.139043270 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 20 09:54:34 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:54:35 np0005625203.localdomain ceph-mon[296066]: pgmap v228: 177 pgs: 177 active+clean; 145 MiB data, 760 MiB used, 41 GiB / 42 GiB avail; 73 KiB/s rd, 16 KiB/s wr, 111 op/s
Feb 20 09:54:35 np0005625203.localdomain ceph-mon[296066]: osdmap e132: 6 total, 6 up, 6 in
Feb 20 09:54:35 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e133 e133: 6 total, 6 up, 6 in
Feb 20 09:54:35 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:35.709 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:36 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e134 e134: 6 total, 6 up, 6 in
Feb 20 09:54:36 np0005625203.localdomain ceph-mon[296066]: osdmap e133: 6 total, 6 up, 6 in
Feb 20 09:54:36 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:54:36.360 262775 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:54:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:54:37 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 09:54:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:54:37 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 09:54:37 np0005625203.localdomain ceph-mon[296066]: pgmap v231: 177 pgs: 177 active+clean; 145 MiB data, 778 MiB used, 41 GiB / 42 GiB avail; 190 KiB/s rd, 19 KiB/s wr, 262 op/s
Feb 20 09:54:37 np0005625203.localdomain ceph-mon[296066]: osdmap e134: 6 total, 6 up, 6 in
Feb 20 09:54:37 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e135 e135: 6 total, 6 up, 6 in
Feb 20 09:54:37 np0005625203.localdomain sshd[312681]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:54:37 np0005625203.localdomain sudo[312683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:54:37 np0005625203.localdomain sudo[312683]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:54:37 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:54:37 np0005625203.localdomain sudo[312683]: pam_unix(sudo:session): session closed for user root
Feb 20 09:54:37 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:54:37 np0005625203.localdomain sudo[312703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 check-host
Feb 20 09:54:37 np0005625203.localdomain sudo[312703]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:54:37 np0005625203.localdomain podman[312702]: 2026-02-20 09:54:37.649007482 +0000 UTC m=+0.091408590 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9/ubi-minimal, version=9.7, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, org.opencontainers.image.created=2026-02-05T04:57:10Z, build-date=2026-02-05T04:57:10Z, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.expose-services=, config_id=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, managed_by=edpm_ansible, container_name=openstack_network_exporter, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.buildah.version=1.33.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347)
Feb 20 09:54:37 np0005625203.localdomain podman[312702]: 2026-02-20 09:54:37.663421337 +0000 UTC m=+0.105822465 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, architecture=x86_64, version=9.7, io.openshift.tags=minimal rhel9, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.buildah.version=1.33.7, name=ubi9/ubi-minimal, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, distribution-scope=public, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, maintainer=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 
'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2026-02-05T04:57:10Z, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1770267347)
Feb 20 09:54:37 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e136 e136: 6 total, 6 up, 6 in
Feb 20 09:54:37 np0005625203.localdomain systemd[1]: tmp-crun.dbMfM9.mount: Deactivated successfully.
Feb 20 09:54:37 np0005625203.localdomain sshd[312681]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:54:37 np0005625203.localdomain podman[312701]: 2026-02-20 09:54:37.724867702 +0000 UTC m=+0.167661432 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260127, 
org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:54:37 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:54:37 np0005625203.localdomain podman[312701]: 2026-02-20 09:54:37.740458613 +0000 UTC m=+0.183252373 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, 
config_id=ceilometer_agent_compute)
Feb 20 09:54:37 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:54:38 np0005625203.localdomain sudo[312703]: pam_unix(sudo:session): session closed for user root
Feb 20 09:54:38 np0005625203.localdomain ceph-mon[296066]: osdmap e135: 6 total, 6 up, 6 in
Feb 20 09:54:38 np0005625203.localdomain ceph-mon[296066]: osdmap e136: 6 total, 6 up, 6 in
Feb 20 09:54:38 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:54:38 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:54:38 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:54:38 np0005625203.localdomain sudo[312776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:54:38 np0005625203.localdomain sudo[312776]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:54:38 np0005625203.localdomain sudo[312776]: pam_unix(sudo:session): session closed for user root
Feb 20 09:54:38 np0005625203.localdomain sudo[312794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 09:54:38 np0005625203.localdomain sudo[312794]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:54:38 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e137 e137: 6 total, 6 up, 6 in
Feb 20 09:54:39 np0005625203.localdomain sudo[312794]: pam_unix(sudo:session): session closed for user root
Feb 20 09:54:39 np0005625203.localdomain ceph-mon[296066]: pgmap v234: 177 pgs: 177 active+clean; 145 MiB data, 778 MiB used, 41 GiB / 42 GiB avail; 190 KiB/s rd, 19 KiB/s wr, 262 op/s
Feb 20 09:54:39 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:54:39 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:54:39 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:54:39 np0005625203.localdomain ceph-mon[296066]: osdmap e137: 6 total, 6 up, 6 in
Feb 20 09:54:39 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:54:39 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 09:54:39 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:54:39 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 09:54:39 np0005625203.localdomain sudo[312845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:54:39 np0005625203.localdomain sudo[312845]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:54:39 np0005625203.localdomain sudo[312845]: pam_unix(sudo:session): session closed for user root
Feb 20 09:54:39 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:54:39 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:39.343 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:40 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/1736566385' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:54:40 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/1736566385' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:54:40 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:40.751 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:41 np0005625203.localdomain ceph-mon[296066]: pgmap v237: 177 pgs: 177 active+clean; 145 MiB data, 782 MiB used, 41 GiB / 42 GiB avail; 145 KiB/s rd, 8.7 KiB/s wr, 188 op/s
Feb 20 09:54:41 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:54:41.598 2 INFO neutron.agent.securitygroups_rpc [None req-eb743b9b-8319-4b6c-9522-759dec99d8f5 061949b19d2146debcdb4e85c8db9eec b9f8945a2560410b988e395a1db7710f - - default default] Security group member updated ['9c30e397-a710-4013-bf42-b0dd9762b00a']
Feb 20 09:54:42 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:54:42.264 2 INFO neutron.agent.securitygroups_rpc [None req-b2b645bc-d759-4ba7-b30a-1010fa24d49e 061949b19d2146debcdb4e85c8db9eec b9f8945a2560410b988e395a1db7710f - - default default] Security group member updated ['9c30e397-a710-4013-bf42-b0dd9762b00a']
Feb 20 09:54:42 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e138 e138: 6 total, 6 up, 6 in
Feb 20 09:54:43 np0005625203.localdomain ceph-mon[296066]: pgmap v238: 177 pgs: 177 active+clean; 145 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 217 KiB/s rd, 18 KiB/s wr, 300 op/s
Feb 20 09:54:43 np0005625203.localdomain ceph-mon[296066]: osdmap e138: 6 total, 6 up, 6 in
Feb 20 09:54:44 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:54:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:44.346 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:44 np0005625203.localdomain ceph-mon[296066]: pgmap v240: 177 pgs: 177 active+clean; 145 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 194 KiB/s rd, 16 KiB/s wr, 268 op/s
Feb 20 09:54:44 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:54:45 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e139 e139: 6 total, 6 up, 6 in
Feb 20 09:54:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:45.754 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:45 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:54:45.928 262775 INFO neutron.agent.linux.ip_lib [None req-13da616d-9464-4975-bbaa-ed50b7305104 - - - - - -] Device tapde93f400-60 cannot be used as it has no MAC address
Feb 20 09:54:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:45.953 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:45 np0005625203.localdomain kernel: device tapde93f400-60 entered promiscuous mode
Feb 20 09:54:45 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581285.9631] manager: (tapde93f400-60): new Generic device (/org/freedesktop/NetworkManager/Devices/32)
Feb 20 09:54:45 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:54:45Z|00147|binding|INFO|Claiming lport de93f400-6060-43e5-b9cd-5b32bbd8b105 for this chassis.
Feb 20 09:54:45 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:54:45Z|00148|binding|INFO|de93f400-6060-43e5-b9cd-5b32bbd8b105: Claiming unknown
Feb 20 09:54:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:45.964 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:45 np0005625203.localdomain systemd-udevd[312873]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 09:54:45 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:45.976 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=de93f400-6060-43e5-b9cd-5b32bbd8b105) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:54:45 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:45.978 161112 INFO neutron.agent.ovn.metadata.agent [-] Port de93f400-6060-43e5-b9cd-5b32bbd8b105 in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 bound to our chassis
Feb 20 09:54:45 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:45.980 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 811e2462-6872-485d-9c09-d2dd9cb25273 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:54:45 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:45.982 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[d2e2fe5e-504d-4029-b0d6-2be6d5e61651]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:54:45 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:54:45Z|00149|binding|INFO|Setting lport de93f400-6060-43e5-b9cd-5b32bbd8b105 ovn-installed in OVS
Feb 20 09:54:45 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:54:45Z|00150|binding|INFO|Setting lport de93f400-6060-43e5-b9cd-5b32bbd8b105 up in Southbound
Feb 20 09:54:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:45.999 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:46.050 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:46.083 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:46 np0005625203.localdomain ceph-mon[296066]: pgmap v241: 177 pgs: 177 active+clean; 145 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 153 KiB/s rd, 13 KiB/s wr, 213 op/s
Feb 20 09:54:46 np0005625203.localdomain ceph-mon[296066]: osdmap e139: 6 total, 6 up, 6 in
Feb 20 09:54:46 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:54:46.919 2 INFO neutron.agent.securitygroups_rpc [None req-bce28f21-3f23-462f-a383-948742167547 f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']
Feb 20 09:54:46 np0005625203.localdomain podman[312926]: 
Feb 20 09:54:46 np0005625203.localdomain podman[312926]: 2026-02-20 09:54:46.973231901 +0000 UTC m=+0.103957687 container create ec63e1a031ac275dba6ac940f2240a63045292714c9630a1c75c476b4611d2f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb 20 09:54:47 np0005625203.localdomain systemd[1]: Started libpod-conmon-ec63e1a031ac275dba6ac940f2240a63045292714c9630a1c75c476b4611d2f6.scope.
Feb 20 09:54:47 np0005625203.localdomain systemd[1]: tmp-crun.iO4H2T.mount: Deactivated successfully.
Feb 20 09:54:47 np0005625203.localdomain podman[312926]: 2026-02-20 09:54:46.935733731 +0000 UTC m=+0.066459547 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:54:47 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:54:47 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be643ce7057e79ff4fa0e23a6eb015e1ac3f772938035441eb509000919c6c42/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:54:47 np0005625203.localdomain podman[312926]: 2026-02-20 09:54:47.048913492 +0000 UTC m=+0.179639348 container init ec63e1a031ac275dba6ac940f2240a63045292714c9630a1c75c476b4611d2f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0)
Feb 20 09:54:47 np0005625203.localdomain podman[312926]: 2026-02-20 09:54:47.054219306 +0000 UTC m=+0.184945112 container start ec63e1a031ac275dba6ac940f2240a63045292714c9630a1c75c476b4611d2f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Feb 20 09:54:47 np0005625203.localdomain dnsmasq[312944]: started, version 2.85 cachesize 150
Feb 20 09:54:47 np0005625203.localdomain dnsmasq[312944]: DNS service limited to local subnets
Feb 20 09:54:47 np0005625203.localdomain dnsmasq[312944]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:54:47 np0005625203.localdomain dnsmasq[312944]: warning: no upstream servers configured
Feb 20 09:54:47 np0005625203.localdomain dnsmasq-dhcp[312944]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Feb 20 09:54:47 np0005625203.localdomain dnsmasq[312944]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 0 addresses
Feb 20 09:54:47 np0005625203.localdomain dnsmasq-dhcp[312944]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/host
Feb 20 09:54:47 np0005625203.localdomain dnsmasq-dhcp[312944]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/opts
Feb 20 09:54:47 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:54:47.115 262775 INFO neutron.agent.dhcp.agent [None req-13da616d-9464-4975-bbaa-ed50b7305104 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:54:45Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4e3e3a0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4e3e070>], id=02a71128-4640-42b6-8c89-1f96fce8890f, ip_allocation=immediate, mac_address=fa:16:3e:a8:f7:4b, name=tempest-NetworksTestDHCPv6-992701144, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:54:44Z, description=, dns_domain=, id=811e2462-6872-485d-9c09-d2dd9cb25273, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-610089291, port_security_enabled=True, project_id=cb36e48ce4264babb412d413a8bf7b9f, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=47373, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1864, status=ACTIVE, subnets=['5966476e-e3a7-4add-b299-04d799c97e8e'], tags=[], tenant_id=cb36e48ce4264babb412d413a8bf7b9f, updated_at=2026-02-20T09:54:44Z, vlan_transparent=None, network_id=811e2462-6872-485d-9c09-d2dd9cb25273, port_security_enabled=True, project_id=cb36e48ce4264babb412d413a8bf7b9f, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['a85f25c3-88e2-4d71-a8d4-72a266c1246c'], standard_attr_id=1873, status=DOWN, tags=[], tenant_id=cb36e48ce4264babb412d413a8bf7b9f, updated_at=2026-02-20T09:54:46Z on network 811e2462-6872-485d-9c09-d2dd9cb25273
Feb 20 09:54:47 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:54:47.238 262775 INFO neutron.agent.dhcp.agent [None req-57de5b49-f064-4a5b-b7d3-b0c731a02cee - - - - - -] DHCP configuration for ports {'cebd560e-7047-4cc1-9642-f5b7ec377d58'} is completed
Feb 20 09:54:47 np0005625203.localdomain dnsmasq[312944]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 1 addresses
Feb 20 09:54:47 np0005625203.localdomain dnsmasq-dhcp[312944]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/host
Feb 20 09:54:47 np0005625203.localdomain podman[312961]: 2026-02-20 09:54:47.325114038 +0000 UTC m=+0.059396029 container kill ec63e1a031ac275dba6ac940f2240a63045292714c9630a1c75c476b4611d2f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 20 09:54:47 np0005625203.localdomain dnsmasq-dhcp[312944]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/opts
Feb 20 09:54:47 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:54:47.560 2 INFO neutron.agent.securitygroups_rpc [None req-4cd6b0db-c578-42d5-a167-b93bcbfe0117 d89d4d1c90df43a18f75b47e31c78f62 7365eb83b07c4401a5a58afb3f122ce5 - - default default] Security group member updated ['921ea913-835d-427d-a3f2-d35699dcd043']
Feb 20 09:54:47 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:54:47.604 262775 INFO neutron.agent.dhcp.agent [None req-a181bcd0-eb6e-4eff-9c10-5e5987c2e997 - - - - - -] DHCP configuration for ports {'02a71128-4640-42b6-8c89-1f96fce8890f'} is completed
Feb 20 09:54:48 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:54:48.445 2 INFO neutron.agent.securitygroups_rpc [None req-8cc87b89-3410-419f-80d0-39ae3addedda f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']
Feb 20 09:54:48 np0005625203.localdomain dnsmasq[312944]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 0 addresses
Feb 20 09:54:48 np0005625203.localdomain dnsmasq-dhcp[312944]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/host
Feb 20 09:54:48 np0005625203.localdomain dnsmasq-dhcp[312944]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/opts
Feb 20 09:54:48 np0005625203.localdomain podman[312999]: 2026-02-20 09:54:48.666973081 +0000 UTC m=+0.059131100 container kill ec63e1a031ac275dba6ac940f2240a63045292714c9630a1c75c476b4611d2f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb 20 09:54:48 np0005625203.localdomain ceph-mon[296066]: pgmap v243: 177 pgs: 177 active+clean; 145 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 77 KiB/s rd, 8.2 KiB/s wr, 113 op/s
Feb 20 09:54:49 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:54:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:49.349 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:49 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:54:49.526 2 INFO neutron.agent.securitygroups_rpc [None req-fe10c928-22e1-439f-ab59-773382d09580 d89d4d1c90df43a18f75b47e31c78f62 7365eb83b07c4401a5a58afb3f122ce5 - - default default] Security group member updated ['921ea913-835d-427d-a3f2-d35699dcd043']
Feb 20 09:54:49 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:54:49 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:54:49 np0005625203.localdomain podman[313020]: 2026-02-20 09:54:49.761547255 +0000 UTC m=+0.073890478 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 20 09:54:49 np0005625203.localdomain systemd[1]: tmp-crun.FYFEie.mount: Deactivated successfully.
Feb 20 09:54:49 np0005625203.localdomain podman[313019]: 2026-02-20 09:54:49.839347342 +0000 UTC m=+0.152838350 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 20 09:54:49 np0005625203.localdomain podman[313020]: 2026-02-20 09:54:49.851537179 +0000 UTC m=+0.163880372 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 20 09:54:49 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:54:49 np0005625203.localdomain podman[313019]: 2026-02-20 09:54:49.877464121 +0000 UTC m=+0.190955069 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 09:54:49 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 09:54:50 np0005625203.localdomain dnsmasq[312944]: exiting on receipt of SIGTERM
Feb 20 09:54:50 np0005625203.localdomain podman[313083]: 2026-02-20 09:54:50.069668478 +0000 UTC m=+0.065893790 container kill ec63e1a031ac275dba6ac940f2240a63045292714c9630a1c75c476b4611d2f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:54:50 np0005625203.localdomain systemd[1]: libpod-ec63e1a031ac275dba6ac940f2240a63045292714c9630a1c75c476b4611d2f6.scope: Deactivated successfully.
Feb 20 09:54:50 np0005625203.localdomain podman[313096]: 2026-02-20 09:54:50.140417436 +0000 UTC m=+0.056052955 container died ec63e1a031ac275dba6ac940f2240a63045292714c9630a1c75c476b4611d2f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:54:50 np0005625203.localdomain podman[313096]: 2026-02-20 09:54:50.167796663 +0000 UTC m=+0.083432132 container cleanup ec63e1a031ac275dba6ac940f2240a63045292714c9630a1c75c476b4611d2f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:54:50 np0005625203.localdomain systemd[1]: libpod-conmon-ec63e1a031ac275dba6ac940f2240a63045292714c9630a1c75c476b4611d2f6.scope: Deactivated successfully.
Feb 20 09:54:50 np0005625203.localdomain podman[313098]: 2026-02-20 09:54:50.232435053 +0000 UTC m=+0.135248205 container remove ec63e1a031ac275dba6ac940f2240a63045292714c9630a1c75c476b4611d2f6 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Feb 20 09:54:50 np0005625203.localdomain kernel: device tapde93f400-60 left promiscuous mode
Feb 20 09:54:50 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:54:50Z|00151|binding|INFO|Releasing lport de93f400-6060-43e5-b9cd-5b32bbd8b105 from this chassis (sb_readonly=0)
Feb 20 09:54:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:50.282 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:50 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:54:50Z|00152|binding|INFO|Setting lport de93f400-6060-43e5-b9cd-5b32bbd8b105 down in Southbound
Feb 20 09:54:50 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:50.303 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=de93f400-6060-43e5-b9cd-5b32bbd8b105) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:54:50 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:50.305 161112 INFO neutron.agent.ovn.metadata.agent [-] Port de93f400-6060-43e5-b9cd-5b32bbd8b105 in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 unbound from our chassis
Feb 20 09:54:50 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:50.307 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 811e2462-6872-485d-9c09-d2dd9cb25273 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:54:50 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:50.308 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[a89549a0-4e7b-4807-b264-b6c6e259c58d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:54:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:50.311 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:50.340 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:54:50 np0005625203.localdomain ceph-mon[296066]: pgmap v244: 177 pgs: 177 active+clean; 145 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 5.6 KiB/s rd, 1.4 KiB/s wr, 9 op/s
Feb 20 09:54:50 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/3040086044' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:54:50 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/3040086044' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:54:50 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-be643ce7057e79ff4fa0e23a6eb015e1ac3f772938035441eb509000919c6c42-merged.mount: Deactivated successfully.
Feb 20 09:54:50 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ec63e1a031ac275dba6ac940f2240a63045292714c9630a1c75c476b4611d2f6-userdata-shm.mount: Deactivated successfully.
Feb 20 09:54:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:50.757 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:50 np0005625203.localdomain systemd[1]: run-netns-qdhcp\x2d811e2462\x2d6872\x2d485d\x2d9c09\x2dd2dd9cb25273.mount: Deactivated successfully.
Feb 20 09:54:50 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:54:50.915 262775 INFO neutron.agent.dhcp.agent [None req-1a3f54b4-2008-4779-b891-faefd941cd9e - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:54:51 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e140 e140: 6 total, 6 up, 6 in
Feb 20 09:54:52 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:54:52.146 2 INFO neutron.agent.securitygroups_rpc [None req-ef251e71-dc68-4712-81f3-aa7e64c344fa d89d4d1c90df43a18f75b47e31c78f62 7365eb83b07c4401a5a58afb3f122ce5 - - default default] Security group member updated ['921ea913-835d-427d-a3f2-d35699dcd043']
Feb 20 09:54:52 np0005625203.localdomain ceph-mon[296066]: pgmap v245: 177 pgs: 177 active+clean; 145 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 41 KiB/s rd, 2.3 KiB/s wr, 54 op/s
Feb 20 09:54:52 np0005625203.localdomain ceph-mon[296066]: osdmap e140: 6 total, 6 up, 6 in
Feb 20 09:54:52 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:54:52.931 2 INFO neutron.agent.securitygroups_rpc [None req-ffde2a61-a3bf-4d03-bb6c-67c1d37d15bf f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']
Feb 20 09:54:53 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:54:53.186 262775 INFO neutron.agent.linux.ip_lib [None req-2b5fde96-fff6-4858-915b-35db82a93060 - - - - - -] Device tap0cc12155-f5 cannot be used as it has no MAC address
Feb 20 09:54:53 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:53.222 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:53 np0005625203.localdomain kernel: device tap0cc12155-f5 entered promiscuous mode
Feb 20 09:54:53 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581293.2327] manager: (tap0cc12155-f5): new Generic device (/org/freedesktop/NetworkManager/Devices/33)
Feb 20 09:54:53 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:53.236 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:53 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:54:53Z|00153|binding|INFO|Claiming lport 0cc12155-f575-492a-bf06-83d2d63fe3f8 for this chassis.
Feb 20 09:54:53 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:54:53Z|00154|binding|INFO|0cc12155-f575-492a-bf06-83d2d63fe3f8: Claiming unknown
Feb 20 09:54:53 np0005625203.localdomain systemd-udevd[313135]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 09:54:53 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:54:53Z|00155|binding|INFO|Setting lport 0cc12155-f575-492a-bf06-83d2d63fe3f8 ovn-installed in OVS
Feb 20 09:54:53 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:54:53Z|00156|binding|INFO|Setting lport 0cc12155-f575-492a-bf06-83d2d63fe3f8 up in Southbound
Feb 20 09:54:53 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:53.247 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:53 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:53.248 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=0cc12155-f575-492a-bf06-83d2d63fe3f8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:54:53 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:53.252 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 0cc12155-f575-492a-bf06-83d2d63fe3f8 in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 bound to our chassis
Feb 20 09:54:53 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:53.256 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 811e2462-6872-485d-9c09-d2dd9cb25273 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:54:53 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:53.257 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[f5c517da-b26b-479e-b857-bf1fd3322aad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:54:53 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap0cc12155-f5: No such device
Feb 20 09:54:53 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:53.269 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:53 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap0cc12155-f5: No such device
Feb 20 09:54:53 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap0cc12155-f5: No such device
Feb 20 09:54:53 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap0cc12155-f5: No such device
Feb 20 09:54:53 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap0cc12155-f5: No such device
Feb 20 09:54:53 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap0cc12155-f5: No such device
Feb 20 09:54:53 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap0cc12155-f5: No such device
Feb 20 09:54:53 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap0cc12155-f5: No such device
Feb 20 09:54:53 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:53.311 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:53 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:53.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:54:53 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:53.349 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:53 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:53.365 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:54:53 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:53.365 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:54:53 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:53.366 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:54:53 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:53.366 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Auditing locally available compute resources for np0005625203.localdomain (node: np0005625203.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 20 09:54:53 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:53.366 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:54:53 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 09:54:53 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:54:53 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/2344507398' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:54:53 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/2344507398' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:54:53 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:54:53 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 20 09:54:53 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3514695046' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:54:53 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:53.860 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:54:54 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:54.112 279640 WARNING nova.virt.libvirt.driver [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 20 09:54:54 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:54.115 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Hypervisor/Node resource view: name=np0005625203.localdomain free_ram=11722MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 20 09:54:54 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:54.115 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:54:54 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:54.116 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:54:54 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:54.223 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 20 09:54:54 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:54.225 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Final resource view: name=np0005625203.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 20 09:54:54 np0005625203.localdomain podman[313228]: 
Feb 20 09:54:54 np0005625203.localdomain podman[313228]: 2026-02-20 09:54:54.256986373 +0000 UTC m=+0.082170133 container create a71649fc5a127b888da1fc4dc75bceccf49571ddef348b74e9c9c3b046ec5b80 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:54:54 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:54.292 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:54:54 np0005625203.localdomain systemd[1]: Started libpod-conmon-a71649fc5a127b888da1fc4dc75bceccf49571ddef348b74e9c9c3b046ec5b80.scope.
Feb 20 09:54:54 np0005625203.localdomain podman[313228]: 2026-02-20 09:54:54.212048272 +0000 UTC m=+0.037232042 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:54:54 np0005625203.localdomain systemd[1]: tmp-crun.DS96kO.mount: Deactivated successfully.
Feb 20 09:54:54 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:54:54 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:54:54 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cced54910168b79fad1fe28b2839a40c366c2453ffacb74d72dbb3ad6ae4bcea/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:54:54 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:54.353 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:54 np0005625203.localdomain podman[313228]: 2026-02-20 09:54:54.362858038 +0000 UTC m=+0.188041808 container init a71649fc5a127b888da1fc4dc75bceccf49571ddef348b74e9c9c3b046ec5b80 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 20 09:54:54 np0005625203.localdomain podman[313228]: 2026-02-20 09:54:54.370181035 +0000 UTC m=+0.195364805 container start a71649fc5a127b888da1fc4dc75bceccf49571ddef348b74e9c9c3b046ec5b80 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 20 09:54:54 np0005625203.localdomain dnsmasq[313247]: started, version 2.85 cachesize 150
Feb 20 09:54:54 np0005625203.localdomain dnsmasq[313247]: DNS service limited to local subnets
Feb 20 09:54:54 np0005625203.localdomain dnsmasq[313247]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:54:54 np0005625203.localdomain dnsmasq[313247]: warning: no upstream servers configured
Feb 20 09:54:54 np0005625203.localdomain dnsmasq[313247]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 0 addresses
Feb 20 09:54:54 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:54:54.443 262775 INFO neutron.agent.dhcp.agent [None req-2b5fde96-fff6-4858-915b-35db82a93060 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:54:52Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4e918b0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4e91700>], id=0c011321-ef64-4de7-900e-881fdb9a7dfb, ip_allocation=immediate, mac_address=fa:16:3e:ca:bf:e5, name=tempest-NetworksTestDHCPv6-281612709, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:54:44Z, description=, dns_domain=, id=811e2462-6872-485d-9c09-d2dd9cb25273, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-610089291, port_security_enabled=True, project_id=cb36e48ce4264babb412d413a8bf7b9f, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=47373, qos_policy_id=None, revision_number=4, router:external=False, shared=False, standard_attr_id=1864, status=ACTIVE, subnets=['e6a1578c-40a7-4c11-9350-97e457ec0a6c'], tags=[], tenant_id=cb36e48ce4264babb412d413a8bf7b9f, updated_at=2026-02-20T09:54:50Z, vlan_transparent=None, network_id=811e2462-6872-485d-9c09-d2dd9cb25273, port_security_enabled=True, project_id=cb36e48ce4264babb412d413a8bf7b9f, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['a85f25c3-88e2-4d71-a8d4-72a266c1246c'], standard_attr_id=1899, status=DOWN, tags=[], tenant_id=cb36e48ce4264babb412d413a8bf7b9f, updated_at=2026-02-20T09:54:52Z on network 811e2462-6872-485d-9c09-d2dd9cb25273
Feb 20 09:54:54 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:54:54.506 262775 INFO neutron.agent.dhcp.agent [None req-d4b3e7f6-db01-4676-8b2b-181b7c8249f4 - - - - - -] DHCP configuration for ports {'cebd560e-7047-4cc1-9642-f5b7ec377d58'} is completed
Feb 20 09:54:54 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:54:54.602 2 INFO neutron.agent.securitygroups_rpc [None req-3a014f90-3d7f-4958-9a67-4182f974566f 5f5cb06fd4c045b1bfc3f890392c3165 54500b20d6a643669fbf357242dde27f - - default default] Security group member updated ['b8016b60-fce7-435d-b877-28993ceda4d5']
Feb 20 09:54:54 np0005625203.localdomain dnsmasq[313247]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 1 addresses
Feb 20 09:54:54 np0005625203.localdomain podman[313285]: 2026-02-20 09:54:54.659675821 +0000 UTC m=+0.068888882 container kill a71649fc5a127b888da1fc4dc75bceccf49571ddef348b74e9c9c3b046ec5b80 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Feb 20 09:54:54 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "3ad5ffb2-84e1-4f79-a933-80d3c5a63164", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 09:54:54 np0005625203.localdomain ceph-mon[296066]: pgmap v247: 177 pgs: 177 active+clean; 145 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 40 KiB/s rd, 1.6 KiB/s wr, 52 op/s
Feb 20 09:54:54 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "3ad5ffb2-84e1-4f79-a933-80d3c5a63164", "format": "json"}]: dispatch
Feb 20 09:54:54 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/3514695046' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:54:54 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 20 09:54:54 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3800725616' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:54:54 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:54.878 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:54:54 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:54.884 279640 DEBUG nova.compute.provider_tree [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Inventory has not changed in ProviderTree for provider: e5d5157a-2df2-4f51-b5fb-cd2da3a8584e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 20 09:54:54 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:54.901 279640 DEBUG nova.scheduler.client.report [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Inventory has not changed for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 20 09:54:54 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:54.902 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Compute_service record updated for np0005625203.localdomain:np0005625203.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 20 09:54:54 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:54.903 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.787s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:54:54 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:54:54.911 262775 INFO neutron.agent.linux.ip_lib [None req-08aa2c6b-13b8-4ecc-9f22-9a16326ce92e - - - - - -] Device tapeea86f25-7b cannot be used as it has no MAC address
Feb 20 09:54:54 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:54.937 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:54 np0005625203.localdomain kernel: device tapeea86f25-7b entered promiscuous mode
Feb 20 09:54:54 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581294.9445] manager: (tapeea86f25-7b): new Generic device (/org/freedesktop/NetworkManager/Devices/34)
Feb 20 09:54:54 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:54:54Z|00157|binding|INFO|Claiming lport eea86f25-7bff-4e76-abe8-b46b7a1e148d for this chassis.
Feb 20 09:54:54 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:54:54Z|00158|binding|INFO|eea86f25-7bff-4e76-abe8-b46b7a1e148d: Claiming unknown
Feb 20 09:54:54 np0005625203.localdomain systemd-udevd[313137]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 09:54:54 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:54.948 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:54 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:54.960 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe56:b722/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-af29bf87-1daa-44a5-8cd8-ed0a60fa19f1', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-af29bf87-1daa-44a5-8cd8-ed0a60fa19f1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '54500b20d6a643669fbf357242dde27f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=abe27b5f-bbfc-4ae6-bc3f-47bda3bfadb9, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=eea86f25-7bff-4e76-abe8-b46b7a1e148d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:54:54 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:54.962 161112 INFO neutron.agent.ovn.metadata.agent [-] Port eea86f25-7bff-4e76-abe8-b46b7a1e148d in datapath af29bf87-1daa-44a5-8cd8-ed0a60fa19f1 bound to our chassis
Feb 20 09:54:54 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:54.964 161112 DEBUG neutron.agent.ovn.metadata.agent [-] Port 042942a2-c1dd-4383-9e84-cac2bfea1eed IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Feb 20 09:54:54 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:54.964 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network af29bf87-1daa-44a5-8cd8-ed0a60fa19f1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:54:54 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:54.965 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[2aeaf570-2186-4490-830c-c7ebbac8e69b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:54:54 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:54:54.965 2 INFO neutron.agent.securitygroups_rpc [None req-208eb96e-53cb-409e-b54b-a8915a18b91f f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']
Feb 20 09:54:54 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:54:54.977 262775 INFO neutron.agent.dhcp.agent [None req-aa3f24f9-edf4-481f-bb9b-85b2a898f05b - - - - - -] DHCP configuration for ports {'0c011321-ef64-4de7-900e-881fdb9a7dfb'} is completed
Feb 20 09:54:54 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:54.984 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:54 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:54:54Z|00159|binding|INFO|Setting lport eea86f25-7bff-4e76-abe8-b46b7a1e148d ovn-installed in OVS
Feb 20 09:54:54 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:54:54Z|00160|binding|INFO|Setting lport eea86f25-7bff-4e76-abe8-b46b7a1e148d up in Southbound
Feb 20 09:54:54 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:54.988 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:55 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:54:55.022 2 INFO neutron.agent.securitygroups_rpc [None req-d0404fc6-f217-496c-b11d-08ac05ffdfb7 d89d4d1c90df43a18f75b47e31c78f62 7365eb83b07c4401a5a58afb3f122ce5 - - default default] Security group member updated ['921ea913-835d-427d-a3f2-d35699dcd043']
Feb 20 09:54:55 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:55.029 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:55 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:55.100 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:55 np0005625203.localdomain dnsmasq[313247]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 0 addresses
Feb 20 09:54:55 np0005625203.localdomain systemd[1]: tmp-crun.sYCqzO.mount: Deactivated successfully.
Feb 20 09:54:55 np0005625203.localdomain podman[313349]: 2026-02-20 09:54:55.328050479 +0000 UTC m=+0.077318163 container kill a71649fc5a127b888da1fc4dc75bceccf49571ddef348b74e9c9c3b046ec5b80 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:54:55 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:55.761 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:55 np0005625203.localdomain ceph-mon[296066]: mgrmap e47: np0005625202.arwxwo(active, since 6m), standbys: np0005625203.lonygy, np0005625204.exgrzx
Feb 20 09:54:55 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/3800725616' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:54:56 np0005625203.localdomain podman[313409]: 
Feb 20 09:54:56 np0005625203.localdomain podman[313409]: 2026-02-20 09:54:56.062387218 +0000 UTC m=+0.094738742 container create 0d5f088c7d7dbbc4ed8bbc6fa0b3eae71cfb75edf5c775b4379625785b80ef8d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-af29bf87-1daa-44a5-8cd8-ed0a60fa19f1, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:54:56 np0005625203.localdomain systemd[1]: Started libpod-conmon-0d5f088c7d7dbbc4ed8bbc6fa0b3eae71cfb75edf5c775b4379625785b80ef8d.scope.
Feb 20 09:54:56 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:54:56 np0005625203.localdomain podman[313409]: 2026-02-20 09:54:56.020286815 +0000 UTC m=+0.052638359 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:54:56 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b798273eb6eceb2fa4de6fecfb8242c6598b53642e3c1b2ff00c6e5841c14108/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:54:56 np0005625203.localdomain podman[313409]: 2026-02-20 09:54:56.129854565 +0000 UTC m=+0.162206079 container init 0d5f088c7d7dbbc4ed8bbc6fa0b3eae71cfb75edf5c775b4379625785b80ef8d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-af29bf87-1daa-44a5-8cd8-ed0a60fa19f1, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 20 09:54:56 np0005625203.localdomain podman[313409]: 2026-02-20 09:54:56.138869014 +0000 UTC m=+0.171220528 container start 0d5f088c7d7dbbc4ed8bbc6fa0b3eae71cfb75edf5c775b4379625785b80ef8d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-af29bf87-1daa-44a5-8cd8-ed0a60fa19f1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260127)
Feb 20 09:54:56 np0005625203.localdomain dnsmasq[313431]: started, version 2.85 cachesize 150
Feb 20 09:54:56 np0005625203.localdomain dnsmasq[313431]: DNS service limited to local subnets
Feb 20 09:54:56 np0005625203.localdomain dnsmasq[313431]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:54:56 np0005625203.localdomain dnsmasq[313431]: warning: no upstream servers configured
Feb 20 09:54:56 np0005625203.localdomain dnsmasq[313431]: read /var/lib/neutron/dhcp/af29bf87-1daa-44a5-8cd8-ed0a60fa19f1/addn_hosts - 0 addresses
Feb 20 09:54:56 np0005625203.localdomain dnsmasq[313247]: exiting on receipt of SIGTERM
Feb 20 09:54:56 np0005625203.localdomain podman[313444]: 2026-02-20 09:54:56.316144869 +0000 UTC m=+0.077794438 container kill a71649fc5a127b888da1fc4dc75bceccf49571ddef348b74e9c9c3b046ec5b80 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127)
Feb 20 09:54:56 np0005625203.localdomain systemd[1]: libpod-a71649fc5a127b888da1fc4dc75bceccf49571ddef348b74e9c9c3b046ec5b80.scope: Deactivated successfully.
Feb 20 09:54:56 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:54:56.357 262775 INFO neutron.agent.dhcp.agent [None req-15ae5f5e-02e6-47eb-a01f-800e0882780d - - - - - -] DHCP configuration for ports {'93bfadd2-9128-417b-b4c3-39397b7886f7'} is completed
Feb 20 09:54:56 np0005625203.localdomain podman[313456]: 2026-02-20 09:54:56.412780188 +0000 UTC m=+0.080231803 container died a71649fc5a127b888da1fc4dc75bceccf49571ddef348b74e9c9c3b046ec5b80 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb 20 09:54:56 np0005625203.localdomain podman[313456]: 2026-02-20 09:54:56.496637823 +0000 UTC m=+0.164089448 container cleanup a71649fc5a127b888da1fc4dc75bceccf49571ddef348b74e9c9c3b046ec5b80 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:54:56 np0005625203.localdomain systemd[1]: libpod-conmon-a71649fc5a127b888da1fc4dc75bceccf49571ddef348b74e9c9c3b046ec5b80.scope: Deactivated successfully.
Feb 20 09:54:56 np0005625203.localdomain podman[313458]: 2026-02-20 09:54:56.525761403 +0000 UTC m=+0.179542325 container remove a71649fc5a127b888da1fc4dc75bceccf49571ddef348b74e9c9c3b046ec5b80 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:54:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:56.581 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:56 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:54:56Z|00161|binding|INFO|Releasing lport 0cc12155-f575-492a-bf06-83d2d63fe3f8 from this chassis (sb_readonly=0)
Feb 20 09:54:56 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:54:56Z|00162|binding|INFO|Setting lport 0cc12155-f575-492a-bf06-83d2d63fe3f8 down in Southbound
Feb 20 09:54:56 np0005625203.localdomain kernel: device tap0cc12155-f5 left promiscuous mode
Feb 20 09:54:56 np0005625203.localdomain podman[313496]: 2026-02-20 09:54:56.583055286 +0000 UTC m=+0.094838575 container kill 0d5f088c7d7dbbc4ed8bbc6fa0b3eae71cfb75edf5c775b4379625785b80ef8d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-af29bf87-1daa-44a5-8cd8-ed0a60fa19f1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:54:56 np0005625203.localdomain dnsmasq[313431]: exiting on receipt of SIGTERM
Feb 20 09:54:56 np0005625203.localdomain systemd[1]: libpod-0d5f088c7d7dbbc4ed8bbc6fa0b3eae71cfb75edf5c775b4379625785b80ef8d.scope: Deactivated successfully.
Feb 20 09:54:56 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:56.599 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625203.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=0cc12155-f575-492a-bf06-83d2d63fe3f8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:54:56 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:56.602 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 0cc12155-f575-492a-bf06-83d2d63fe3f8 in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 unbound from our chassis
Feb 20 09:54:56 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:56.603 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'c2:c2:14', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '46:d0:69:88:97:47'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:54:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:56.605 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:56 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:56.607 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 811e2462-6872-485d-9c09-d2dd9cb25273 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:54:56 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:56.608 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[a4fa9339-dfc5-41bf-ac34-dc4e6710b0a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:54:56 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:56.610 161112 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 20 09:54:56 np0005625203.localdomain podman[313513]: 2026-02-20 09:54:56.652500705 +0000 UTC m=+0.052464454 container died 0d5f088c7d7dbbc4ed8bbc6fa0b3eae71cfb75edf5c775b4379625785b80ef8d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-af29bf87-1daa-44a5-8cd8-ed0a60fa19f1, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Feb 20 09:54:56 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:54:56.669 2 INFO neutron.agent.securitygroups_rpc [None req-36db6997-ce64-4f72-86bb-117bda3b0094 061949b19d2146debcdb4e85c8db9eec b9f8945a2560410b988e395a1db7710f - - default default] Security group member updated ['9c30e397-a710-4013-bf42-b0dd9762b00a']
Feb 20 09:54:56 np0005625203.localdomain podman[313513]: 2026-02-20 09:54:56.730058094 +0000 UTC m=+0.130021843 container cleanup 0d5f088c7d7dbbc4ed8bbc6fa0b3eae71cfb75edf5c775b4379625785b80ef8d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-af29bf87-1daa-44a5-8cd8-ed0a60fa19f1, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 20 09:54:56 np0005625203.localdomain systemd[1]: libpod-conmon-0d5f088c7d7dbbc4ed8bbc6fa0b3eae71cfb75edf5c775b4379625785b80ef8d.scope: Deactivated successfully.
Feb 20 09:54:56 np0005625203.localdomain podman[313515]: 2026-02-20 09:54:56.758266646 +0000 UTC m=+0.148244677 container remove 0d5f088c7d7dbbc4ed8bbc6fa0b3eae71cfb75edf5c775b4379625785b80ef8d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-af29bf87-1daa-44a5-8cd8-ed0a60fa19f1, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:54:56 np0005625203.localdomain kernel: device tapeea86f25-7b left promiscuous mode
Feb 20 09:54:56 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:54:56Z|00163|binding|INFO|Releasing lport eea86f25-7bff-4e76-abe8-b46b7a1e148d from this chassis (sb_readonly=0)
Feb 20 09:54:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:56.774 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:56 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:54:56Z|00164|binding|INFO|Setting lport eea86f25-7bff-4e76-abe8-b46b7a1e148d down in Southbound
Feb 20 09:54:56 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:56.789 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-af29bf87-1daa-44a5-8cd8-ed0a60fa19f1', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-af29bf87-1daa-44a5-8cd8-ed0a60fa19f1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '54500b20d6a643669fbf357242dde27f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625203.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=abe27b5f-bbfc-4ae6-bc3f-47bda3bfadb9, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=eea86f25-7bff-4e76-abe8-b46b7a1e148d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:54:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:56.790 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:56 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:56.792 161112 INFO neutron.agent.ovn.metadata.agent [-] Port eea86f25-7bff-4e76-abe8-b46b7a1e148d in datapath af29bf87-1daa-44a5-8cd8-ed0a60fa19f1 unbound from our chassis
Feb 20 09:54:56 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:56.795 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network af29bf87-1daa-44a5-8cd8-ed0a60fa19f1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:54:56 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:56.795 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[5e9f8260-47dd-4710-bea5-5d6b322e4b7f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:54:56 np0005625203.localdomain ceph-mon[296066]: pgmap v248: 177 pgs: 177 active+clean; 145 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 54 KiB/s rd, 4.8 KiB/s wr, 71 op/s
Feb 20 09:54:56 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/3685988020' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:54:56 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3ad5ffb2-84e1-4f79-a933-80d3c5a63164", "format": "json"}]: dispatch
Feb 20 09:54:56 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "3ad5ffb2-84e1-4f79-a933-80d3c5a63164", "force": true, "format": "json"}]: dispatch
Feb 20 09:54:57 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-b798273eb6eceb2fa4de6fecfb8242c6598b53642e3c1b2ff00c6e5841c14108-merged.mount: Deactivated successfully.
Feb 20 09:54:57 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0d5f088c7d7dbbc4ed8bbc6fa0b3eae71cfb75edf5c775b4379625785b80ef8d-userdata-shm.mount: Deactivated successfully.
Feb 20 09:54:57 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-cced54910168b79fad1fe28b2839a40c366c2453ffacb74d72dbb3ad6ae4bcea-merged.mount: Deactivated successfully.
Feb 20 09:54:57 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a71649fc5a127b888da1fc4dc75bceccf49571ddef348b74e9c9c3b046ec5b80-userdata-shm.mount: Deactivated successfully.
Feb 20 09:54:57 np0005625203.localdomain systemd[1]: run-netns-qdhcp\x2daf29bf87\x2d1daa\x2d44a5\x2d8cd8\x2ded0a60fa19f1.mount: Deactivated successfully.
Feb 20 09:54:57 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:54:57.370 262775 INFO neutron.agent.dhcp.agent [None req-f4f799e9-4771-4526-aca8-6552559ec47b - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:54:57 np0005625203.localdomain systemd[1]: run-netns-qdhcp\x2d811e2462\x2d6872\x2d485d\x2d9c09\x2dd2dd9cb25273.mount: Deactivated successfully.
Feb 20 09:54:57 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e141 e141: 6 total, 6 up, 6 in
Feb 20 09:54:57 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:54:57.786 2 INFO neutron.agent.securitygroups_rpc [None req-a1f738f8-2f94-4ebf-b02a-59bcf9971aeb 51a4789e7d0b404b9882e0c26f7229be 1c44e13adebb4610b7c0cd2fdc62a5b7 - - default default] Security group member updated ['000c42d1-648a-4f56-b7e6-024a1e270fb9']
Feb 20 09:54:57 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/1996257721' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:54:57 np0005625203.localdomain ceph-mon[296066]: osdmap e141: 6 total, 6 up, 6 in
Feb 20 09:54:58 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:54:58.067 2 INFO neutron.agent.securitygroups_rpc [None req-865b7338-5884-4631-b540-e349cbd12cfd f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']
Feb 20 09:54:58 np0005625203.localdomain sshd[313541]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:54:58 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:54:58.647 262775 INFO neutron.agent.linux.ip_lib [None req-c5999cd4-bb51-42bd-b225-b1f5bd9e5775 - - - - - -] Device tap7b554a99-38 cannot be used as it has no MAC address
Feb 20 09:54:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:58.678 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:58 np0005625203.localdomain kernel: device tap7b554a99-38 entered promiscuous mode
Feb 20 09:54:58 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581298.6896] manager: (tap7b554a99-38): new Generic device (/org/freedesktop/NetworkManager/Devices/35)
Feb 20 09:54:58 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:54:58Z|00165|binding|INFO|Claiming lport 7b554a99-38a5-42de-a450-760cbc915a60 for this chassis.
Feb 20 09:54:58 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:54:58Z|00166|binding|INFO|7b554a99-38a5-42de-a450-760cbc915a60: Claiming unknown
Feb 20 09:54:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:58.691 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:58 np0005625203.localdomain systemd-udevd[313553]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 09:54:58 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:54:58Z|00167|binding|INFO|Setting lport 7b554a99-38a5-42de-a450-760cbc915a60 ovn-installed in OVS
Feb 20 09:54:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:58.700 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:58 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:54:58Z|00168|binding|INFO|Setting lport 7b554a99-38a5-42de-a450-760cbc915a60 up in Southbound
Feb 20 09:54:58 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:58.705 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=7b554a99-38a5-42de-a450-760cbc915a60) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:54:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:58.706 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:58 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:58.710 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 7b554a99-38a5-42de-a450-760cbc915a60 in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 bound to our chassis
Feb 20 09:54:58 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:58.714 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 811e2462-6872-485d-9c09-d2dd9cb25273 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:54:58 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:58.714 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[809070f8-a210-45bc-a4d6-e8df3627e02a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:54:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:58.739 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:58.782 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:58 np0005625203.localdomain sshd[313541]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:54:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:58.818 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:58 np0005625203.localdomain ceph-mon[296066]: pgmap v249: 177 pgs: 177 active+clean; 145 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 53 KiB/s rd, 4.7 KiB/s wr, 70 op/s
Feb 20 09:54:58 np0005625203.localdomain ceph-mon[296066]: mgrmap e48: np0005625202.arwxwo(active, since 6m), standbys: np0005625203.lonygy, np0005625204.exgrzx
Feb 20 09:54:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 20 09:54:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                           ** DB Stats **
                                                           Uptime(secs): 600.0 total, 600.0 interval
                                                           Cumulative writes: 2343 writes, 24K keys, 2343 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.07 MB/s
                                                           Cumulative WAL: 2343 writes, 2343 syncs, 1.00 writes per sync, written: 0.04 GB, 0.07 MB/s
                                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                           Interval writes: 2343 writes, 24K keys, 2343 commit groups, 1.0 writes per commit group, ingest: 42.12 MB, 0.07 MB/s
                                                           Interval WAL: 2343 writes, 2343 syncs, 1.00 writes per sync, written: 0.04 GB, 0.07 MB/s
                                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                           
                                                           ** Compaction Stats [default] **
                                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    173.2      0.17              0.07        10    0.017       0      0       0.0       0.0
                                                             L6      1/0   15.18 MB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   5.2    222.1    202.9      0.75              0.42         9    0.084    109K   4508       0.0       0.0
                                                            Sum      1/0   15.18 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   6.2    181.1    197.5      0.93              0.49        19    0.049    109K   4508       0.0       0.0
                                                            Int      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   6.2    181.6    198.0      0.92              0.49        18    0.051    109K   4508       0.0       0.0
                                                           
                                                           ** Compaction Stats [default] **
                                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.1      0.0       0.0   0.0    222.1    202.9      0.75              0.42         9    0.084    109K   4508       0.0       0.0
                                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    176.1      0.17              0.07         9    0.019       0      0       0.0       0.0
                                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                           
                                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                           
                                                           Uptime(secs): 600.0 total, 600.0 interval
                                                           Flush(GB): cumulative 0.029, interval 0.029
                                                           AddFile(GB): cumulative 0.000, interval 0.000
                                                           AddFile(Total Files): cumulative 0, interval 0
                                                           AddFile(L0 Files): cumulative 0, interval 0
                                                           AddFile(Keys): cumulative 0, interval 0
                                                           Cumulative compaction: 0.18 GB write, 0.30 MB/s write, 0.16 GB read, 0.28 MB/s read, 0.9 seconds
                                                           Interval compaction: 0.18 GB write, 0.30 MB/s write, 0.16 GB read, 0.28 MB/s read, 0.9 seconds
                                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                           Block cache BinnedLRUCache@0x5619bae7b350#2 capacity: 308.00 MB usage: 16.43 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 0.000133 secs_since: 0
                                                           Block cache entry stats(count,size,portion): DataBlock(744,15.69 MB,5.09314%) FilterBlock(19,335.55 KB,0.10639%) IndexBlock(19,428.02 KB,0.135709%) Misc(1,0.00 KB,0%)
                                                           
                                                           ** File Read Latency Histogram By Level [default] **
Feb 20 09:54:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:58.903 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:54:58 np0005625203.localdomain podman[240359]: time="2026-02-20T09:54:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:54:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:54:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 155894 "" "Go-http-client/1.1"
Feb 20 09:54:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:54:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18302 "" "Go-http-client/1.1"
Feb 20 09:54:59 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:54:59 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:54:59 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:59.354 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:59 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:54:59.359 262775 INFO neutron.agent.linux.ip_lib [None req-81ae8a99-9cd4-4226-a568-d534c29c089c - - - - - -] Device tap5ce4eae2-e6 cannot be used as it has no MAC address
Feb 20 09:54:59 np0005625203.localdomain systemd[1]: tmp-crun.uV61er.mount: Deactivated successfully.
Feb 20 09:54:59 np0005625203.localdomain podman[313588]: 2026-02-20 09:54:59.389901745 +0000 UTC m=+0.096512168 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2)
Feb 20 09:54:59 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:59.390 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:59 np0005625203.localdomain kernel: device tap5ce4eae2-e6 entered promiscuous mode
Feb 20 09:54:59 np0005625203.localdomain systemd-udevd[313555]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 09:54:59 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581299.3977] manager: (tap5ce4eae2-e6): new Generic device (/org/freedesktop/NetworkManager/Devices/36)
Feb 20 09:54:59 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:59.398 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:59 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:54:59Z|00169|binding|INFO|Claiming lport 5ce4eae2-e62a-429a-b62a-1dbd88af73dc for this chassis.
Feb 20 09:54:59 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:54:59Z|00170|binding|INFO|5ce4eae2-e62a-429a-b62a-1dbd88af73dc: Claiming unknown
Feb 20 09:54:59 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:59.406 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-12d7921f-2fee-4d50-820c-ba95a837462a', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-12d7921f-2fee-4d50-820c-ba95a837462a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8aa5b5a34cfe458d96fea87261361db1', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ae876130-9243-41a7-8be6-088886ad6058, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=5ce4eae2-e62a-429a-b62a-1dbd88af73dc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:54:59 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:59.409 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 5ce4eae2-e62a-429a-b62a-1dbd88af73dc in datapath 12d7921f-2fee-4d50-820c-ba95a837462a bound to our chassis
Feb 20 09:54:59 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:59.417 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 12d7921f-2fee-4d50-820c-ba95a837462a or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:54:59 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:54:59.418 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[a2674ad9-60a8-4e31-9b4e-cc2d66ab60f0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:54:59 np0005625203.localdomain podman[313588]: 2026-02-20 09:54:59.426841057 +0000 UTC m=+0.133451520 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:54:59 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:54:59Z|00171|binding|INFO|Setting lport 5ce4eae2-e62a-429a-b62a-1dbd88af73dc ovn-installed in OVS
Feb 20 09:54:59 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:54:59Z|00172|binding|INFO|Setting lport 5ce4eae2-e62a-429a-b62a-1dbd88af73dc up in Southbound
Feb 20 09:54:59 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:59.442 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:59 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:54:59 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:59.478 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:59 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:54:59.502 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:54:59 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:54:59.603 2 INFO neutron.agent.securitygroups_rpc [None req-1d54fa0a-8062-4c9c-94e3-875dfdd78a26 f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']
Feb 20 09:54:59 np0005625203.localdomain podman[313642]: 
Feb 20 09:54:59 np0005625203.localdomain podman[313642]: 2026-02-20 09:54:59.682821876 +0000 UTC m=+0.103062738 container create 1b8f71fba998fd9619b239c6361460d3f2cca4d54afa8b6350252c30d9fe80cb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb 20 09:54:59 np0005625203.localdomain podman[313642]: 2026-02-20 09:54:59.642293153 +0000 UTC m=+0.062534065 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:54:59 np0005625203.localdomain systemd[1]: Started libpod-conmon-1b8f71fba998fd9619b239c6361460d3f2cca4d54afa8b6350252c30d9fe80cb.scope.
Feb 20 09:54:59 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:54:59 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5230dad5fd56dca79d2bba38a081fbbbf6000b9c9a42f2f9be37ab44e6fa076/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:54:59 np0005625203.localdomain podman[313642]: 2026-02-20 09:54:59.77763048 +0000 UTC m=+0.197871342 container init 1b8f71fba998fd9619b239c6361460d3f2cca4d54afa8b6350252c30d9fe80cb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Feb 20 09:54:59 np0005625203.localdomain podman[313642]: 2026-02-20 09:54:59.786771392 +0000 UTC m=+0.207012254 container start 1b8f71fba998fd9619b239c6361460d3f2cca4d54afa8b6350252c30d9fe80cb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Feb 20 09:54:59 np0005625203.localdomain dnsmasq[313671]: started, version 2.85 cachesize 150
Feb 20 09:54:59 np0005625203.localdomain dnsmasq[313671]: DNS service limited to local subnets
Feb 20 09:54:59 np0005625203.localdomain dnsmasq[313671]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:54:59 np0005625203.localdomain dnsmasq[313671]: warning: no upstream servers configured
Feb 20 09:54:59 np0005625203.localdomain dnsmasq-dhcp[313671]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Feb 20 09:54:59 np0005625203.localdomain dnsmasq[313671]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 0 addresses
Feb 20 09:54:59 np0005625203.localdomain dnsmasq-dhcp[313671]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/host
Feb 20 09:54:59 np0005625203.localdomain dnsmasq-dhcp[313671]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/opts
Feb 20 09:54:59 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:54:59.848 262775 INFO neutron.agent.dhcp.agent [None req-c5999cd4-bb51-42bd-b225-b1f5bd9e5775 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:54:57Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4e49f10>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4e49cd0>], id=6b13b900-1b7e-4dd0-8fe2-b4070c9691f4, ip_allocation=immediate, mac_address=fa:16:3e:d8:17:8c, name=tempest-NetworksTestDHCPv6-1724539952, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:54:44Z, description=, dns_domain=, id=811e2462-6872-485d-9c09-d2dd9cb25273, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-610089291, port_security_enabled=True, project_id=cb36e48ce4264babb412d413a8bf7b9f, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=47373, qos_policy_id=None, revision_number=6, router:external=False, shared=False, standard_attr_id=1864, status=ACTIVE, subnets=['2cbab3c4-a1a8-41a1-a01b-3a7632447ff0'], tags=[], tenant_id=cb36e48ce4264babb412d413a8bf7b9f, updated_at=2026-02-20T09:54:56Z, vlan_transparent=None, network_id=811e2462-6872-485d-9c09-d2dd9cb25273, port_security_enabled=True, project_id=cb36e48ce4264babb412d413a8bf7b9f, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['a85f25c3-88e2-4d71-a8d4-72a266c1246c'], standard_attr_id=1916, status=DOWN, tags=[], tenant_id=cb36e48ce4264babb412d413a8bf7b9f, updated_at=2026-02-20T09:54:57Z on network 811e2462-6872-485d-9c09-d2dd9cb25273
Feb 20 09:54:59 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:54:59.948 262775 INFO neutron.agent.dhcp.agent [None req-71c19107-5a0f-4c73-96b2-12de63656e19 - - - - - -] DHCP configuration for ports {'cebd560e-7047-4cc1-9642-f5b7ec377d58'} is completed
Feb 20 09:55:00 np0005625203.localdomain dnsmasq[313671]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 1 addresses
Feb 20 09:55:00 np0005625203.localdomain dnsmasq-dhcp[313671]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/host
Feb 20 09:55:00 np0005625203.localdomain dnsmasq-dhcp[313671]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/opts
Feb 20 09:55:00 np0005625203.localdomain podman[313697]: 2026-02-20 09:55:00.028012977 +0000 UTC m=+0.051374232 container kill 1b8f71fba998fd9619b239c6361460d3f2cca4d54afa8b6350252c30d9fe80cb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:55:00 np0005625203.localdomain kernel: device tap7b554a99-38 left promiscuous mode
Feb 20 09:55:00 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:00Z|00173|binding|INFO|Releasing lport 7b554a99-38a5-42de-a450-760cbc915a60 from this chassis (sb_readonly=0)
Feb 20 09:55:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:00.207 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:00 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:00Z|00174|binding|INFO|Setting lport 7b554a99-38a5-42de-a450-760cbc915a60 down in Southbound
Feb 20 09:55:00 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:00.216 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625203.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=7b554a99-38a5-42de-a450-760cbc915a60) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:55:00 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:00.218 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 7b554a99-38a5-42de-a450-760cbc915a60 in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 unbound from our chassis
Feb 20 09:55:00 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:00.220 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 811e2462-6872-485d-9c09-d2dd9cb25273 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:55:00 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:00.221 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[4a798183-5d6c-4881-83f6-b7171ea04f07]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:55:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:00.231 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:00 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:00.285 262775 INFO neutron.agent.dhcp.agent [None req-874e7549-7e49-4d30-af07-7c170d0d7243 - - - - - -] DHCP configuration for ports {'6b13b900-1b7e-4dd0-8fe2-b4070c9691f4'} is completed
Feb 20 09:55:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:00.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:55:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:00.342 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:55:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:00.342 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 20 09:55:00 np0005625203.localdomain podman[313759]: 
Feb 20 09:55:00 np0005625203.localdomain dnsmasq[313671]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 0 addresses
Feb 20 09:55:00 np0005625203.localdomain dnsmasq-dhcp[313671]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/host
Feb 20 09:55:00 np0005625203.localdomain dnsmasq-dhcp[313671]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/opts
Feb 20 09:55:00 np0005625203.localdomain podman[313772]: 2026-02-20 09:55:00.425657829 +0000 UTC m=+0.063396253 container kill 1b8f71fba998fd9619b239c6361460d3f2cca4d54afa8b6350252c30d9fe80cb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:55:00 np0005625203.localdomain podman[313759]: 2026-02-20 09:55:00.436191164 +0000 UTC m=+0.108161687 container create dc84f7baaf005802e8045904778af53b929a473d0a9b188b9e1732c6a0ac2c41 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-12d7921f-2fee-4d50-820c-ba95a837462a, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:55:00 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:55:00.459 2 INFO neutron.agent.securitygroups_rpc [None req-4a29bc3c-e3e4-4d65-b800-f3b558d9c704 5f5cb06fd4c045b1bfc3f890392c3165 54500b20d6a643669fbf357242dde27f - - default default] Security group member updated ['b8016b60-fce7-435d-b877-28993ceda4d5']
Feb 20 09:55:00 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:00.461 262775 ERROR neutron.agent.dhcp.agent [None req-c5999cd4-bb51-42bd-b225-b1f5bd9e5775 - - - - - -] Unable to reload_allocations dhcp for 811e2462-6872-485d-9c09-d2dd9cb25273.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap7b554a99-38 not found in namespace qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273.
Feb 20 09:55:00 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:00.461 262775 ERROR neutron.agent.dhcp.agent Traceback (most recent call last):
Feb 20 09:55:00 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:00.461 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver
Feb 20 09:55:00 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:00.461 262775 ERROR neutron.agent.dhcp.agent     rv = getattr(driver, action)(**action_kwargs)
Feb 20 09:55:00 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:00.461 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations
Feb 20 09:55:00 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:00.461 262775 ERROR neutron.agent.dhcp.agent     self.device_manager.update(self.network, self.interface_name)
Feb 20 09:55:00 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:00.461 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update
Feb 20 09:55:00 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:00.461 262775 ERROR neutron.agent.dhcp.agent     self._set_default_route(network, device_name)
Feb 20 09:55:00 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:00.461 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route
Feb 20 09:55:00 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:00.461 262775 ERROR neutron.agent.dhcp.agent     self._set_default_route_ip_version(network, device_name,
Feb 20 09:55:00 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:00.461 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version
Feb 20 09:55:00 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:00.461 262775 ERROR neutron.agent.dhcp.agent     gateway = device.route.get_gateway(ip_version=ip_version)
Feb 20 09:55:00 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:00.461 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway
Feb 20 09:55:00 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:00.461 262775 ERROR neutron.agent.dhcp.agent     routes = self.list_routes(ip_version, scope=scope, table=table)
Feb 20 09:55:00 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:00.461 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes
Feb 20 09:55:00 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:00.461 262775 ERROR neutron.agent.dhcp.agent     return list_ip_routes(self._parent.namespace, ip_version, scope=scope,
Feb 20 09:55:00 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:00.461 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes
Feb 20 09:55:00 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:00.461 262775 ERROR neutron.agent.dhcp.agent     routes = privileged.list_ip_routes(namespace, ip_version, device=device,
Feb 20 09:55:00 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:00.461 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f
Feb 20 09:55:00 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:00.461 262775 ERROR neutron.agent.dhcp.agent     return self(f, *args, **kw)
Feb 20 09:55:00 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:00.461 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__
Feb 20 09:55:00 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:00.461 262775 ERROR neutron.agent.dhcp.agent     do = self.iter(retry_state=retry_state)
Feb 20 09:55:00 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:00.461 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter
Feb 20 09:55:00 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:00.461 262775 ERROR neutron.agent.dhcp.agent     return fut.result()
Feb 20 09:55:00 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:00.461 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result
Feb 20 09:55:00 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:00.461 262775 ERROR neutron.agent.dhcp.agent     return self.__get_result()
Feb 20 09:55:00 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:00.461 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result
Feb 20 09:55:00 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:00.461 262775 ERROR neutron.agent.dhcp.agent     raise self._exception
Feb 20 09:55:00 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:00.461 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__
Feb 20 09:55:00 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:00.461 262775 ERROR neutron.agent.dhcp.agent     result = fn(*args, **kwargs)
Feb 20 09:55:00 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:00.461 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap
Feb 20 09:55:00 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:00.461 262775 ERROR neutron.agent.dhcp.agent     return self.channel.remote_call(name, args, kwargs,
Feb 20 09:55:00 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:00.461 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call
Feb 20 09:55:00 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:00.461 262775 ERROR neutron.agent.dhcp.agent     raise exc_type(*result[2])
Feb 20 09:55:00 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:00.461 262775 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap7b554a99-38 not found in namespace qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273.
Feb 20 09:55:00 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:00.461 262775 ERROR neutron.agent.dhcp.agent 
Feb 20 09:55:00 np0005625203.localdomain systemd[1]: Started libpod-conmon-dc84f7baaf005802e8045904778af53b929a473d0a9b188b9e1732c6a0ac2c41.scope.
Feb 20 09:55:00 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:55:00 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e4a5f5daf0f0c3535db339370b5cceb0ea2012c655888f910994c9970600955/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:55:00 np0005625203.localdomain podman[313759]: 2026-02-20 09:55:00.395867587 +0000 UTC m=+0.067838130 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:55:00 np0005625203.localdomain podman[313759]: 2026-02-20 09:55:00.500186014 +0000 UTC m=+0.172156537 container init dc84f7baaf005802e8045904778af53b929a473d0a9b188b9e1732c6a0ac2c41 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-12d7921f-2fee-4d50-820c-ba95a837462a, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 20 09:55:00 np0005625203.localdomain podman[313759]: 2026-02-20 09:55:00.509591605 +0000 UTC m=+0.181562128 container start dc84f7baaf005802e8045904778af53b929a473d0a9b188b9e1732c6a0ac2c41 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-12d7921f-2fee-4d50-820c-ba95a837462a, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:55:00 np0005625203.localdomain dnsmasq[313795]: started, version 2.85 cachesize 150
Feb 20 09:55:00 np0005625203.localdomain dnsmasq[313795]: DNS service limited to local subnets
Feb 20 09:55:00 np0005625203.localdomain dnsmasq[313795]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:55:00 np0005625203.localdomain dnsmasq[313795]: warning: no upstream servers configured
Feb 20 09:55:00 np0005625203.localdomain dnsmasq-dhcp[313795]: DHCP, static leases only on 10.100.0.0, lease time 1d
Feb 20 09:55:00 np0005625203.localdomain dnsmasq[313795]: read /var/lib/neutron/dhcp/12d7921f-2fee-4d50-820c-ba95a837462a/addn_hosts - 0 addresses
Feb 20 09:55:00 np0005625203.localdomain dnsmasq-dhcp[313795]: read /var/lib/neutron/dhcp/12d7921f-2fee-4d50-820c-ba95a837462a/host
Feb 20 09:55:00 np0005625203.localdomain dnsmasq-dhcp[313795]: read /var/lib/neutron/dhcp/12d7921f-2fee-4d50-820c-ba95a837462a/opts
Feb 20 09:55:00 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:00.568 262775 INFO neutron.agent.dhcp.agent [None req-e2daf790-839f-4e71-8238-88b6ee7dd3df - - - - - -] Synchronizing state
Feb 20 09:55:00 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:00.689 262775 INFO neutron.agent.dhcp.agent [None req-41fba7b3-5a1d-49ce-8293-9ad9863b3442 - - - - - -] DHCP configuration for ports {'5d6a277a-aeb4-45a2-aec2-0b683d8814ae'} is completed
Feb 20 09:55:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:00.764 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:00 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:00.776 262775 INFO neutron.agent.dhcp.agent [None req-e21296f0-1e65-432d-9931-4f4869144f19 - - - - - -] All active networks have been fetched through RPC.
Feb 20 09:55:00 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:00.777 262775 INFO neutron.agent.dhcp.agent [-] Starting network 545dd7fa-aa2a-4b99-a111-22d2b369ed0a dhcp configuration
Feb 20 09:55:00 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:00.777 262775 INFO neutron.agent.dhcp.agent [-] Finished network 545dd7fa-aa2a-4b99-a111-22d2b369ed0a dhcp configuration
Feb 20 09:55:00 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:00.778 262775 INFO neutron.agent.dhcp.agent [-] Starting network 811e2462-6872-485d-9c09-d2dd9cb25273 dhcp configuration
Feb 20 09:55:00 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:00.778 262775 INFO neutron.agent.dhcp.agent [-] Finished network 811e2462-6872-485d-9c09-d2dd9cb25273 dhcp configuration
Feb 20 09:55:00 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:00.778 262775 INFO neutron.agent.dhcp.agent [-] Starting network af29bf87-1daa-44a5-8cd8-ed0a60fa19f1 dhcp configuration
Feb 20 09:55:00 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:00.778 262775 INFO neutron.agent.dhcp.agent [-] Finished network af29bf87-1daa-44a5-8cd8-ed0a60fa19f1 dhcp configuration
Feb 20 09:55:00 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:00.779 262775 INFO neutron.agent.dhcp.agent [None req-e21296f0-1e65-432d-9931-4f4869144f19 - - - - - -] Synchronizing state complete
Feb 20 09:55:00 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:00.840 262775 INFO neutron.agent.dhcp.agent [None req-2900448d-c0cd-4a9f-9646-d2671aed3c35 - - - - - -] DHCP configuration for ports {'cebd560e-7047-4cc1-9642-f5b7ec377d58'} is completed
Feb 20 09:55:00 np0005625203.localdomain ceph-mon[296066]: pgmap v251: 177 pgs: 177 active+clean; 145 MiB data, 786 MiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 8.5 KiB/s wr, 36 op/s
Feb 20 09:55:01 np0005625203.localdomain dnsmasq[313671]: exiting on receipt of SIGTERM
Feb 20 09:55:01 np0005625203.localdomain podman[313812]: 2026-02-20 09:55:01.024040161 +0000 UTC m=+0.062723792 container kill 1b8f71fba998fd9619b239c6361460d3f2cca4d54afa8b6350252c30d9fe80cb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 20 09:55:01 np0005625203.localdomain systemd[1]: libpod-1b8f71fba998fd9619b239c6361460d3f2cca4d54afa8b6350252c30d9fe80cb.scope: Deactivated successfully.
Feb 20 09:55:01 np0005625203.localdomain podman[313824]: 2026-02-20 09:55:01.094404827 +0000 UTC m=+0.056751366 container died 1b8f71fba998fd9619b239c6361460d3f2cca4d54afa8b6350252c30d9fe80cb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:55:01 np0005625203.localdomain systemd[1]: tmp-crun.kpg5bh.mount: Deactivated successfully.
Feb 20 09:55:01 np0005625203.localdomain podman[313824]: 2026-02-20 09:55:01.135831569 +0000 UTC m=+0.098178058 container cleanup 1b8f71fba998fd9619b239c6361460d3f2cca4d54afa8b6350252c30d9fe80cb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:55:01 np0005625203.localdomain systemd[1]: libpod-conmon-1b8f71fba998fd9619b239c6361460d3f2cca4d54afa8b6350252c30d9fe80cb.scope: Deactivated successfully.
Feb 20 09:55:01 np0005625203.localdomain podman[313831]: 2026-02-20 09:55:01.181509372 +0000 UTC m=+0.131174738 container remove 1b8f71fba998fd9619b239c6361460d3f2cca4d54afa8b6350252c30d9fe80cb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 20 09:55:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:01.342 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:55:01 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:01.485 262775 INFO neutron.agent.dhcp.agent [None req-db106083-583c-4754-8978-c685914530ac - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:55:01 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-e5230dad5fd56dca79d2bba38a081fbbbf6000b9c9a42f2f9be37ab44e6fa076-merged.mount: Deactivated successfully.
Feb 20 09:55:01 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1b8f71fba998fd9619b239c6361460d3f2cca4d54afa8b6350252c30d9fe80cb-userdata-shm.mount: Deactivated successfully.
Feb 20 09:55:01 np0005625203.localdomain systemd[1]: run-netns-qdhcp\x2d811e2462\x2d6872\x2d485d\x2d9c09\x2dd2dd9cb25273.mount: Deactivated successfully.
Feb 20 09:55:02 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 20 09:55:02 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2547929420' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:55:02 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 20 09:55:02 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2547929420' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:55:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:02.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:55:02 np0005625203.localdomain ceph-mon[296066]: pgmap v252: 177 pgs: 177 active+clean; 145 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 22 KiB/s rd, 7.9 KiB/s wr, 32 op/s
Feb 20 09:55:02 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/674329819' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:55:02 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/2547929420' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:55:02 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/2547929420' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:55:02 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/2447234794' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:55:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:03.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:55:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:03.342 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 20 09:55:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:03.342 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 20 09:55:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:03.366 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 20 09:55:03 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:55:03.814 2 INFO neutron.agent.securitygroups_rpc [None req-06243d38-8e6a-45a5-a465-51e5b9c1322f f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']
Feb 20 09:55:03 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:03.906 262775 INFO neutron.agent.linux.ip_lib [None req-8be2c963-e55c-42be-9eb2-d1faf7058c58 - - - - - -] Device tap90d76354-71 cannot be used as it has no MAC address
Feb 20 09:55:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:03.926 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:03 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:55:03.926 2 INFO neutron.agent.securitygroups_rpc [None req-befc33c8-098a-487a-a3bd-757b9f93e81d 3ace3fc0d46241ffa2d6d0b16953a588 8aa5b5a34cfe458d96fea87261361db1 - - default default] Security group member updated ['f947e5dd-708d-45fc-8f3d-3e71e4aec5b2']
Feb 20 09:55:03 np0005625203.localdomain kernel: device tap90d76354-71 entered promiscuous mode
Feb 20 09:55:03 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581303.9318] manager: (tap90d76354-71): new Generic device (/org/freedesktop/NetworkManager/Devices/37)
Feb 20 09:55:03 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:03Z|00175|binding|INFO|Claiming lport 90d76354-7168-4085-a2ad-0e77605ae6e8 for this chassis.
Feb 20 09:55:03 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:03Z|00176|binding|INFO|90d76354-7168-4085-a2ad-0e77605ae6e8: Claiming unknown
Feb 20 09:55:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:03.934 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:03 np0005625203.localdomain systemd-udevd[313859]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 09:55:03 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:03.938 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=90d76354-7168-4085-a2ad-0e77605ae6e8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:55:03 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:03Z|00177|binding|INFO|Setting lport 90d76354-7168-4085-a2ad-0e77605ae6e8 ovn-installed in OVS
Feb 20 09:55:03 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:03Z|00178|binding|INFO|Setting lport 90d76354-7168-4085-a2ad-0e77605ae6e8 up in Southbound
Feb 20 09:55:03 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:03.940 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 90d76354-7168-4085-a2ad-0e77605ae6e8 in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 bound to our chassis
Feb 20 09:55:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:03.942 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:03 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:03.944 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 811e2462-6872-485d-9c09-d2dd9cb25273 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:55:03 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:03.945 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[48a89132-c8ed-4f66-ac69-9cc853119c1f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:55:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:03.945 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:03 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:55:03.953 2 INFO neutron.agent.securitygroups_rpc [None req-d0194b4e-ea8b-4209-af3b-a875851573ce 51a4789e7d0b404b9882e0c26f7229be 1c44e13adebb4610b7c0cd2fdc62a5b7 - - default default] Security group member updated ['000c42d1-648a-4f56-b7e6-024a1e270fb9']
Feb 20 09:55:04 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:04.138 262775 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:55:03Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4e63760>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4e636d0>], id=459e1d17-bcb8-49e1-9e89-b49e4c9e2fc0, ip_allocation=immediate, mac_address=fa:16:3e:3d:fd:3f, name=tempest-RoutersTest-1864635331, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:54:57Z, description=, dns_domain=, id=12d7921f-2fee-4d50-820c-ba95a837462a, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersTest-1855830072, port_security_enabled=True, project_id=8aa5b5a34cfe458d96fea87261361db1, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=55270, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1914, status=ACTIVE, subnets=['88f0c1ab-c6d5-4266-88ae-ba8c36db4747'], tags=[], tenant_id=8aa5b5a34cfe458d96fea87261361db1, updated_at=2026-02-20T09:54:58Z, vlan_transparent=None, network_id=12d7921f-2fee-4d50-820c-ba95a837462a, port_security_enabled=True, project_id=8aa5b5a34cfe458d96fea87261361db1, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['f947e5dd-708d-45fc-8f3d-3e71e4aec5b2'], standard_attr_id=1945, status=DOWN, tags=[], tenant_id=8aa5b5a34cfe458d96fea87261361db1, updated_at=2026-02-20T09:55:03Z on network 12d7921f-2fee-4d50-820c-ba95a837462a
Feb 20 09:55:04 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap90d76354-71: No such device
Feb 20 09:55:04 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap90d76354-71: No such device
Feb 20 09:55:04 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:04.162 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:04 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap90d76354-71: No such device
Feb 20 09:55:04 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap90d76354-71: No such device
Feb 20 09:55:04 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap90d76354-71: No such device
Feb 20 09:55:04 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap90d76354-71: No such device
Feb 20 09:55:04 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap90d76354-71: No such device
Feb 20 09:55:04 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap90d76354-71: No such device
Feb 20 09:55:04 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:04.198 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:04 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:04.230 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:04 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:55:04 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:04.358 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:04 np0005625203.localdomain systemd[1]: tmp-crun.78lqP1.mount: Deactivated successfully.
Feb 20 09:55:04 np0005625203.localdomain dnsmasq[313795]: read /var/lib/neutron/dhcp/12d7921f-2fee-4d50-820c-ba95a837462a/addn_hosts - 1 addresses
Feb 20 09:55:04 np0005625203.localdomain dnsmasq-dhcp[313795]: read /var/lib/neutron/dhcp/12d7921f-2fee-4d50-820c-ba95a837462a/host
Feb 20 09:55:04 np0005625203.localdomain dnsmasq-dhcp[313795]: read /var/lib/neutron/dhcp/12d7921f-2fee-4d50-820c-ba95a837462a/opts
Feb 20 09:55:04 np0005625203.localdomain podman[313906]: 2026-02-20 09:55:04.374023352 +0000 UTC m=+0.056870020 container kill dc84f7baaf005802e8045904778af53b929a473d0a9b188b9e1732c6a0ac2c41 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-12d7921f-2fee-4d50-820c-ba95a837462a, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:55:04 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:04.695 262775 INFO neutron.agent.dhcp.agent [None req-af404db3-696c-48ae-8eae-5ac13b7e3a47 - - - - - -] DHCP configuration for ports {'459e1d17-bcb8-49e1-9e89-b49e4c9e2fc0'} is completed
Feb 20 09:55:04 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:55:04.958 2 INFO neutron.agent.securitygroups_rpc [None req-a41b9e5e-24c7-4e83-9a8a-33aa24dfcce3 061949b19d2146debcdb4e85c8db9eec b9f8945a2560410b988e395a1db7710f - - default default] Security group member updated ['9c30e397-a710-4013-bf42-b0dd9762b00a']
Feb 20 09:55:05 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:05.027 262775 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:55:05 np0005625203.localdomain podman[313968]: 2026-02-20 09:55:05.129235017 +0000 UTC m=+0.082624228 container create bf20b39c1eeb2952c3d34f42dd09bc65693be254c2d560d2bcb1fec5c06cd90e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 20 09:55:05 np0005625203.localdomain ceph-mon[296066]: pgmap v253: 177 pgs: 177 active+clean; 145 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 22 KiB/s rd, 7.7 KiB/s wr, 31 op/s
Feb 20 09:55:05 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:55:05 np0005625203.localdomain systemd[1]: Started libpod-conmon-bf20b39c1eeb2952c3d34f42dd09bc65693be254c2d560d2bcb1fec5c06cd90e.scope.
Feb 20 09:55:05 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e142 e142: 6 total, 6 up, 6 in
Feb 20 09:55:05 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:55:05 np0005625203.localdomain podman[313968]: 2026-02-20 09:55:05.088761974 +0000 UTC m=+0.042151235 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:55:05 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef66b722dc7fd1f63b4277cd218699dd7b82191be3f59f20d0d87367e669b3cc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:55:05 np0005625203.localdomain podman[313982]: 2026-02-20 09:55:05.234868884 +0000 UTC m=+0.069559433 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:55:05 np0005625203.localdomain podman[313968]: 2026-02-20 09:55:05.255782712 +0000 UTC m=+0.209171953 container init bf20b39c1eeb2952c3d34f42dd09bc65693be254c2d560d2bcb1fec5c06cd90e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_managed=true)
Feb 20 09:55:05 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:55:05.262 2 INFO neutron.agent.securitygroups_rpc [None req-563cc02b-9571-4400-9563-57af3068006b f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']
Feb 20 09:55:05 np0005625203.localdomain podman[313968]: 2026-02-20 09:55:05.269546418 +0000 UTC m=+0.222935659 container start bf20b39c1eeb2952c3d34f42dd09bc65693be254c2d560d2bcb1fec5c06cd90e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 20 09:55:05 np0005625203.localdomain dnsmasq[314009]: started, version 2.85 cachesize 150
Feb 20 09:55:05 np0005625203.localdomain dnsmasq[314009]: DNS service limited to local subnets
Feb 20 09:55:05 np0005625203.localdomain dnsmasq[314009]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:55:05 np0005625203.localdomain dnsmasq[314009]: warning: no upstream servers configured
Feb 20 09:55:05 np0005625203.localdomain dnsmasq-dhcp[314009]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Feb 20 09:55:05 np0005625203.localdomain dnsmasq[314009]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 0 addresses
Feb 20 09:55:05 np0005625203.localdomain dnsmasq-dhcp[314009]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/host
Feb 20 09:55:05 np0005625203.localdomain dnsmasq-dhcp[314009]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/opts
Feb 20 09:55:05 np0005625203.localdomain podman[313982]: 2026-02-20 09:55:05.281660872 +0000 UTC m=+0.116351431 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:55:05 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:55:05 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:05.331 262775 INFO neutron.agent.dhcp.agent [None req-8be2c963-e55c-42be-9eb2-d1faf7058c58 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:55:03Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4ebea00>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4ebedc0>], id=486d821e-7726-43c0-ab60-ab766fd047b7, ip_allocation=immediate, mac_address=fa:16:3e:16:f9:3f, name=tempest-NetworksTestDHCPv6-1802925304, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:54:44Z, description=, dns_domain=, id=811e2462-6872-485d-9c09-d2dd9cb25273, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-610089291, port_security_enabled=True, project_id=cb36e48ce4264babb412d413a8bf7b9f, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=47373, qos_policy_id=None, revision_number=8, router:external=False, shared=False, standard_attr_id=1864, status=ACTIVE, subnets=['e4b9a3f8-8ed5-4304-93fe-e89061bd6335'], tags=[], tenant_id=cb36e48ce4264babb412d413a8bf7b9f, updated_at=2026-02-20T09:55:01Z, vlan_transparent=None, network_id=811e2462-6872-485d-9c09-d2dd9cb25273, port_security_enabled=True, project_id=cb36e48ce4264babb412d413a8bf7b9f, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['a85f25c3-88e2-4d71-a8d4-72a266c1246c'], standard_attr_id=1944, status=DOWN, tags=[], tenant_id=cb36e48ce4264babb412d413a8bf7b9f, updated_at=2026-02-20T09:55:03Z on network 811e2462-6872-485d-9c09-d2dd9cb25273
Feb 20 09:55:05 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:05.362 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:55:05 np0005625203.localdomain dnsmasq[314009]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 1 addresses
Feb 20 09:55:05 np0005625203.localdomain dnsmasq-dhcp[314009]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/host
Feb 20 09:55:05 np0005625203.localdomain dnsmasq-dhcp[314009]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/opts
Feb 20 09:55:05 np0005625203.localdomain podman[314029]: 2026-02-20 09:55:05.511212264 +0000 UTC m=+0.052927708 container kill bf20b39c1eeb2952c3d34f42dd09bc65693be254c2d560d2bcb1fec5c06cd90e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 20 09:55:05 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:05.612 161112 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1e4d60e6-0be0-4143-b488-1b391fbc71ef, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 20 09:55:05 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:05.723 262775 INFO neutron.agent.dhcp.agent [None req-97e59258-7023-4155-b4e1-6cfc0b4b491e - - - - - -] DHCP configuration for ports {'cebd560e-7047-4cc1-9642-f5b7ec377d58'} is completed
Feb 20 09:55:05 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:05.766 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:05 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:05.840 262775 INFO neutron.agent.dhcp.agent [None req-eb272933-c304-498e-9917-ca0cff7bec86 - - - - - -] DHCP configuration for ports {'486d821e-7726-43c0-ab60-ab766fd047b7'} is completed
Feb 20 09:55:05 np0005625203.localdomain dnsmasq[314009]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 0 addresses
Feb 20 09:55:05 np0005625203.localdomain dnsmasq-dhcp[314009]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/host
Feb 20 09:55:05 np0005625203.localdomain podman[314067]: 2026-02-20 09:55:05.86760505 +0000 UTC m=+0.063252897 container kill bf20b39c1eeb2952c3d34f42dd09bc65693be254c2d560d2bcb1fec5c06cd90e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb 20 09:55:05 np0005625203.localdomain dnsmasq-dhcp[314009]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/opts
Feb 20 09:55:06 np0005625203.localdomain ceph-mon[296066]: osdmap e142: 6 total, 6 up, 6 in
Feb 20 09:55:06 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e143 e143: 6 total, 6 up, 6 in
Feb 20 09:55:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:55:07 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 09:55:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:55:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:55:07 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 09:55:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:55:07 np0005625203.localdomain ceph-mon[296066]: pgmap v255: 177 pgs: 177 active+clean; 145 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 4.5 KiB/s rd, 7.5 KiB/s wr, 10 op/s
Feb 20 09:55:07 np0005625203.localdomain ceph-mon[296066]: osdmap e143: 6 total, 6 up, 6 in
Feb 20 09:55:07 np0005625203.localdomain dnsmasq[314009]: exiting on receipt of SIGTERM
Feb 20 09:55:07 np0005625203.localdomain podman[314103]: 2026-02-20 09:55:07.268990385 +0000 UTC m=+0.062667039 container kill bf20b39c1eeb2952c3d34f42dd09bc65693be254c2d560d2bcb1fec5c06cd90e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 20 09:55:07 np0005625203.localdomain systemd[1]: libpod-bf20b39c1eeb2952c3d34f42dd09bc65693be254c2d560d2bcb1fec5c06cd90e.scope: Deactivated successfully.
Feb 20 09:55:07 np0005625203.localdomain podman[314117]: 2026-02-20 09:55:07.348243098 +0000 UTC m=+0.061724551 container died bf20b39c1eeb2952c3d34f42dd09bc65693be254c2d560d2bcb1fec5c06cd90e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Feb 20 09:55:07 np0005625203.localdomain systemd[1]: tmp-crun.aK5VlW.mount: Deactivated successfully.
Feb 20 09:55:07 np0005625203.localdomain podman[314117]: 2026-02-20 09:55:07.387159932 +0000 UTC m=+0.100641345 container cleanup bf20b39c1eeb2952c3d34f42dd09bc65693be254c2d560d2bcb1fec5c06cd90e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 20 09:55:07 np0005625203.localdomain systemd[1]: libpod-conmon-bf20b39c1eeb2952c3d34f42dd09bc65693be254c2d560d2bcb1fec5c06cd90e.scope: Deactivated successfully.
Feb 20 09:55:07 np0005625203.localdomain podman[314119]: 2026-02-20 09:55:07.426065955 +0000 UTC m=+0.133944684 container remove bf20b39c1eeb2952c3d34f42dd09bc65693be254c2d560d2bcb1fec5c06cd90e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb 20 09:55:07 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:07.445 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:07 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:07Z|00179|binding|INFO|Releasing lport 90d76354-7168-4085-a2ad-0e77605ae6e8 from this chassis (sb_readonly=0)
Feb 20 09:55:07 np0005625203.localdomain kernel: device tap90d76354-71 left promiscuous mode
Feb 20 09:55:07 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:07Z|00180|binding|INFO|Setting lport 90d76354-7168-4085-a2ad-0e77605ae6e8 down in Southbound
Feb 20 09:55:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:07.465 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=90d76354-7168-4085-a2ad-0e77605ae6e8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:55:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:07.466 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 90d76354-7168-4085-a2ad-0e77605ae6e8 in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 unbound from our chassis
Feb 20 09:55:07 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:55:07.467 2 INFO neutron.agent.securitygroups_rpc [None req-63a5189c-53e8-40d9-9654-04d02c3d7a9f 809f6e2027f9442d8bd2b94b11475b17 3a09b3407c3143c8ae0948ccb18c1e61 - - default default] Security group member updated ['b2e5856c-f1df-4bbc-8f9c-41698aa249c6']
Feb 20 09:55:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:07.468 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 811e2462-6872-485d-9c09-d2dd9cb25273 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:55:07 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:07.469 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:07.469 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[bd6a5ffc-72bd-4e8b-985c-98bfe56d3144]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:55:07 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:07.636 262775 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:55:03Z, description=, device_id=5b20e754-b267-4ac9-a030-663c9129808f, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4ebe760>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4e499a0>], id=459e1d17-bcb8-49e1-9e89-b49e4c9e2fc0, ip_allocation=immediate, mac_address=fa:16:3e:3d:fd:3f, name=tempest-RoutersTest-1864635331, network_id=12d7921f-2fee-4d50-820c-ba95a837462a, port_security_enabled=True, project_id=8aa5b5a34cfe458d96fea87261361db1, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=3, security_groups=['f947e5dd-708d-45fc-8f3d-3e71e4aec5b2'], standard_attr_id=1945, status=ACTIVE, tags=[], tenant_id=8aa5b5a34cfe458d96fea87261361db1, updated_at=2026-02-20T09:55:05Z on network 12d7921f-2fee-4d50-820c-ba95a837462a
Feb 20 09:55:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:07.669 161112 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:55:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:07.670 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:55:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:07.670 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:55:07 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:07.836 262775 INFO neutron.agent.dhcp.agent [None req-480aef71-c774-48b5-bac3-3a9fc5d603e6 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:55:07 np0005625203.localdomain dnsmasq[313795]: read /var/lib/neutron/dhcp/12d7921f-2fee-4d50-820c-ba95a837462a/addn_hosts - 1 addresses
Feb 20 09:55:07 np0005625203.localdomain dnsmasq-dhcp[313795]: read /var/lib/neutron/dhcp/12d7921f-2fee-4d50-820c-ba95a837462a/host
Feb 20 09:55:07 np0005625203.localdomain podman[314165]: 2026-02-20 09:55:07.882471685 +0000 UTC m=+0.062236746 container kill dc84f7baaf005802e8045904778af53b929a473d0a9b188b9e1732c6a0ac2c41 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-12d7921f-2fee-4d50-820c-ba95a837462a, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127)
Feb 20 09:55:07 np0005625203.localdomain dnsmasq-dhcp[313795]: read /var/lib/neutron/dhcp/12d7921f-2fee-4d50-820c-ba95a837462a/opts
Feb 20 09:55:08 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:08.125 262775 INFO neutron.agent.dhcp.agent [None req-8db48650-4df2-43e8-bb79-a6ffee1fc678 - - - - - -] DHCP configuration for ports {'459e1d17-bcb8-49e1-9e89-b49e4c9e2fc0'} is completed
Feb 20 09:55:08 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:55:08 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:55:08 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e144 e144: 6 total, 6 up, 6 in
Feb 20 09:55:08 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-ef66b722dc7fd1f63b4277cd218699dd7b82191be3f59f20d0d87367e669b3cc-merged.mount: Deactivated successfully.
Feb 20 09:55:08 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bf20b39c1eeb2952c3d34f42dd09bc65693be254c2d560d2bcb1fec5c06cd90e-userdata-shm.mount: Deactivated successfully.
Feb 20 09:55:08 np0005625203.localdomain systemd[1]: run-netns-qdhcp\x2d811e2462\x2d6872\x2d485d\x2d9c09\x2dd2dd9cb25273.mount: Deactivated successfully.
Feb 20 09:55:08 np0005625203.localdomain systemd[1]: tmp-crun.pWWjGL.mount: Deactivated successfully.
Feb 20 09:55:08 np0005625203.localdomain podman[314188]: 2026-02-20 09:55:08.333108327 +0000 UTC m=+0.139098693 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1770267347, com.redhat.component=ubi9-minimal-container, vcs-type=git, io.buildah.version=1.33.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9/ubi-minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, maintainer=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-05T04:57:10Z, version=9.7, vendor=Red Hat, Inc., build-date=2026-02-05T04:57:10Z, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_id=openstack_network_exporter, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Feb 20 09:55:08 np0005625203.localdomain podman[314188]: 2026-02-20 09:55:08.349444033 +0000 UTC m=+0.155434409 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vendor=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64, vcs-type=git, name=ubi9/ubi-minimal, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., build-date=2026-02-05T04:57:10Z)
Feb 20 09:55:08 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:55:08 np0005625203.localdomain podman[314187]: 2026-02-20 09:55:08.302326665 +0000 UTC m=+0.111664266 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260127, container_name=ceilometer_agent_compute, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 20 09:55:08 np0005625203.localdomain podman[314187]: 2026-02-20 09:55:08.433371309 +0000 UTC m=+0.242708930 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute)
Feb 20 09:55:08 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:55:08 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:55:08.629 2 INFO neutron.agent.securitygroups_rpc [None req-08efe737-22a8-48a9-b7b8-e1929d8369c8 061949b19d2146debcdb4e85c8db9eec b9f8945a2560410b988e395a1db7710f - - default default] Security group member updated ['9c30e397-a710-4013-bf42-b0dd9762b00a']
Feb 20 09:55:08 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:08.718 262775 INFO neutron.agent.linux.ip_lib [None req-dae3c804-2371-4fd5-97b2-8ffaf56d9bf9 - - - - - -] Device tap64d2d024-e9 cannot be used as it has no MAC address
Feb 20 09:55:08 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:08.744 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:08 np0005625203.localdomain kernel: device tap64d2d024-e9 entered promiscuous mode
Feb 20 09:55:08 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:08Z|00181|binding|INFO|Claiming lport 64d2d024-e964-43c3-b44a-e6f95ac77447 for this chassis.
Feb 20 09:55:08 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:08Z|00182|binding|INFO|64d2d024-e964-43c3-b44a-e6f95ac77447: Claiming unknown
Feb 20 09:55:08 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581308.7541] manager: (tap64d2d024-e9): new Generic device (/org/freedesktop/NetworkManager/Devices/38)
Feb 20 09:55:08 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:08.753 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:08 np0005625203.localdomain systemd-udevd[314235]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 09:55:08 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:08Z|00183|binding|INFO|Setting lport 64d2d024-e964-43c3-b44a-e6f95ac77447 ovn-installed in OVS
Feb 20 09:55:08 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:08Z|00184|binding|INFO|Setting lport 64d2d024-e964-43c3-b44a-e6f95ac77447 up in Southbound
Feb 20 09:55:08 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:08.763 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=64d2d024-e964-43c3-b44a-e6f95ac77447) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:55:08 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:08.765 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:08 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:08.766 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 64d2d024-e964-43c3-b44a-e6f95ac77447 in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 bound to our chassis
Feb 20 09:55:08 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:08.768 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 811e2462-6872-485d-9c09-d2dd9cb25273 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:55:08 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:08.769 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[b1a9e350-385b-476c-affe-d2ee2fb5b87e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:55:08 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:08.771 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:08 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap64d2d024-e9: No such device
Feb 20 09:55:08 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap64d2d024-e9: No such device
Feb 20 09:55:08 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:08.792 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:08 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap64d2d024-e9: No such device
Feb 20 09:55:08 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap64d2d024-e9: No such device
Feb 20 09:55:08 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap64d2d024-e9: No such device
Feb 20 09:55:08 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap64d2d024-e9: No such device
Feb 20 09:55:08 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap64d2d024-e9: No such device
Feb 20 09:55:08 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap64d2d024-e9: No such device
Feb 20 09:55:08 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:08.837 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:08 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:08.865 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:08 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:55:08.956 2 INFO neutron.agent.securitygroups_rpc [None req-3b1327f4-a56e-425d-b764-df7dfb03463f f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']
Feb 20 09:55:09 np0005625203.localdomain ceph-mon[296066]: pgmap v257: 177 pgs: 177 active+clean; 145 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 4.2 KiB/s rd, 3.2 KiB/s wr, 8 op/s
Feb 20 09:55:09 np0005625203.localdomain ceph-mon[296066]: osdmap e144: 6 total, 6 up, 6 in
Feb 20 09:55:09 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e145 e145: 6 total, 6 up, 6 in
Feb 20 09:55:09 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:55:09 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:09.360 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:09 np0005625203.localdomain podman[314306]: 
Feb 20 09:55:09 np0005625203.localdomain podman[314306]: 2026-02-20 09:55:09.753776039 +0000 UTC m=+0.105105072 container create 272e6e7f26ead26632c39d28bd86eb0345453cebcdeb7e36cbada3ec06d008b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 20 09:55:09 np0005625203.localdomain systemd[1]: Started libpod-conmon-272e6e7f26ead26632c39d28bd86eb0345453cebcdeb7e36cbada3ec06d008b4.scope.
Feb 20 09:55:09 np0005625203.localdomain podman[314306]: 2026-02-20 09:55:09.703832715 +0000 UTC m=+0.055161798 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:55:09 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:55:09 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ec516c44835982fb8eb383b1f5867089cdd6deb0f9cdd1c545ccbaf8e2cc5b7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:55:09 np0005625203.localdomain podman[314306]: 2026-02-20 09:55:09.842083182 +0000 UTC m=+0.193412225 container init 272e6e7f26ead26632c39d28bd86eb0345453cebcdeb7e36cbada3ec06d008b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 20 09:55:09 np0005625203.localdomain podman[314306]: 2026-02-20 09:55:09.857412036 +0000 UTC m=+0.208741079 container start 272e6e7f26ead26632c39d28bd86eb0345453cebcdeb7e36cbada3ec06d008b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 20 09:55:09 np0005625203.localdomain dnsmasq[314325]: started, version 2.85 cachesize 150
Feb 20 09:55:09 np0005625203.localdomain dnsmasq[314325]: DNS service limited to local subnets
Feb 20 09:55:09 np0005625203.localdomain dnsmasq[314325]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:55:09 np0005625203.localdomain dnsmasq[314325]: warning: no upstream servers configured
Feb 20 09:55:09 np0005625203.localdomain dnsmasq[314325]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 0 addresses
Feb 20 09:55:09 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:09.918 262775 INFO neutron.agent.dhcp.agent [None req-dae3c804-2371-4fd5-97b2-8ffaf56d9bf9 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:55:08Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4e63e20>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4e63d30>], id=27af4ed9-9d76-4e5b-81fe-3c02d5f2892f, ip_allocation=immediate, mac_address=fa:16:3e:a1:63:26, name=tempest-NetworksTestDHCPv6-1562325897, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:54:44Z, description=, dns_domain=, id=811e2462-6872-485d-9c09-d2dd9cb25273, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-610089291, port_security_enabled=True, project_id=cb36e48ce4264babb412d413a8bf7b9f, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=47373, qos_policy_id=None, revision_number=10, router:external=False, shared=False, standard_attr_id=1864, status=ACTIVE, subnets=['387892b9-6acd-4989-a616-56b41d0d85fe'], tags=[], tenant_id=cb36e48ce4264babb412d413a8bf7b9f, updated_at=2026-02-20T09:55:07Z, vlan_transparent=None, network_id=811e2462-6872-485d-9c09-d2dd9cb25273, port_security_enabled=True, project_id=cb36e48ce4264babb412d413a8bf7b9f, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['a85f25c3-88e2-4d71-a8d4-72a266c1246c'], standard_attr_id=1972, status=DOWN, tags=[], tenant_id=cb36e48ce4264babb412d413a8bf7b9f, updated_at=2026-02-20T09:55:08Z on network 811e2462-6872-485d-9c09-d2dd9cb25273
Feb 20 09:55:09 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:09.979 262775 INFO neutron.agent.dhcp.agent [None req-2c7ecc45-4641-454b-8931-fe3843d4744f - - - - - -] DHCP configuration for ports {'cebd560e-7047-4cc1-9642-f5b7ec377d58'} is completed
Feb 20 09:55:10 np0005625203.localdomain sshd[314348]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:55:10 np0005625203.localdomain podman[314342]: 2026-02-20 09:55:10.119140144 +0000 UTC m=+0.059072229 container kill 272e6e7f26ead26632c39d28bd86eb0345453cebcdeb7e36cbada3ec06d008b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:55:10 np0005625203.localdomain dnsmasq[314325]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 1 addresses
Feb 20 09:55:10 np0005625203.localdomain ceph-mon[296066]: osdmap e145: 6 total, 6 up, 6 in
Feb 20 09:55:10 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:55:10.377 2 INFO neutron.agent.securitygroups_rpc [None req-9a8b2a58-7fed-4400-9808-58a86984ca8a 3ace3fc0d46241ffa2d6d0b16953a588 8aa5b5a34cfe458d96fea87261361db1 - - default default] Security group member updated ['f947e5dd-708d-45fc-8f3d-3e71e4aec5b2']
Feb 20 09:55:10 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:10.474 262775 INFO neutron.agent.dhcp.agent [None req-fe23002e-071c-482f-bcd3-3de15ef3d997 - - - - - -] DHCP configuration for ports {'27af4ed9-9d76-4e5b-81fe-3c02d5f2892f'} is completed
Feb 20 09:55:10 np0005625203.localdomain dnsmasq[313795]: read /var/lib/neutron/dhcp/12d7921f-2fee-4d50-820c-ba95a837462a/addn_hosts - 0 addresses
Feb 20 09:55:10 np0005625203.localdomain dnsmasq-dhcp[313795]: read /var/lib/neutron/dhcp/12d7921f-2fee-4d50-820c-ba95a837462a/host
Feb 20 09:55:10 np0005625203.localdomain dnsmasq-dhcp[313795]: read /var/lib/neutron/dhcp/12d7921f-2fee-4d50-820c-ba95a837462a/opts
Feb 20 09:55:10 np0005625203.localdomain podman[314379]: 2026-02-20 09:55:10.672981287 +0000 UTC m=+0.053922189 container kill dc84f7baaf005802e8045904778af53b929a473d0a9b188b9e1732c6a0ac2c41 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-12d7921f-2fee-4d50-820c-ba95a837462a, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 20 09:55:10 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:10.803 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:11 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:11Z|00185|binding|INFO|Releasing lport 5ce4eae2-e62a-429a-b62a-1dbd88af73dc from this chassis (sb_readonly=0)
Feb 20 09:55:11 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:11Z|00186|binding|INFO|Setting lport 5ce4eae2-e62a-429a-b62a-1dbd88af73dc down in Southbound
Feb 20 09:55:11 np0005625203.localdomain kernel: device tap5ce4eae2-e6 left promiscuous mode
Feb 20 09:55:11 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:11.155 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:11 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:11.163 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-12d7921f-2fee-4d50-820c-ba95a837462a', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-12d7921f-2fee-4d50-820c-ba95a837462a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8aa5b5a34cfe458d96fea87261361db1', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625203.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ae876130-9243-41a7-8be6-088886ad6058, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=5ce4eae2-e62a-429a-b62a-1dbd88af73dc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:55:11 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:11.165 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 5ce4eae2-e62a-429a-b62a-1dbd88af73dc in datapath 12d7921f-2fee-4d50-820c-ba95a837462a unbound from our chassis
Feb 20 09:55:11 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:11.168 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 12d7921f-2fee-4d50-820c-ba95a837462a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:55:11 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:11.169 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[b71acf19-24a5-4dcc-81c0-ac24d4cc7207]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:55:11 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:11.176 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:11 np0005625203.localdomain ceph-mon[296066]: pgmap v260: 177 pgs: 177 active+clean; 145 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 3.2 KiB/s rd, 236 B/s wr, 7 op/s
Feb 20 09:55:11 np0005625203.localdomain sshd[314348]: Invalid user claude from 34.131.211.42 port 52624
Feb 20 09:55:11 np0005625203.localdomain sshd[314348]: Received disconnect from 34.131.211.42 port 52624:11: Bye Bye [preauth]
Feb 20 09:55:11 np0005625203.localdomain sshd[314348]: Disconnected from invalid user claude 34.131.211.42 port 52624 [preauth]
Feb 20 09:55:11 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:55:11.788 2 INFO neutron.agent.securitygroups_rpc [None req-9a050612-ac54-4498-bab7-ac1559fb18bf 809f6e2027f9442d8bd2b94b11475b17 3a09b3407c3143c8ae0948ccb18c1e61 - - default default] Security group member updated ['b2e5856c-f1df-4bbc-8f9c-41698aa249c6']
Feb 20 09:55:11 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:55:11.908 2 INFO neutron.agent.securitygroups_rpc [None req-36106926-50cd-429d-aa41-30ad5718ad39 f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']
Feb 20 09:55:12 np0005625203.localdomain dnsmasq[314325]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 0 addresses
Feb 20 09:55:12 np0005625203.localdomain podman[314419]: 2026-02-20 09:55:12.132993337 +0000 UTC m=+0.052223927 container kill 272e6e7f26ead26632c39d28bd86eb0345453cebcdeb7e36cbada3ec06d008b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Feb 20 09:55:12 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 20 09:55:12 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3509655015' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:55:12 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 20 09:55:12 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3509655015' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:55:12 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:55:12.344 2 INFO neutron.agent.securitygroups_rpc [None req-9a050612-ac54-4498-bab7-ac1559fb18bf 809f6e2027f9442d8bd2b94b11475b17 3a09b3407c3143c8ae0948ccb18c1e61 - - default default] Security group member updated ['b2e5856c-f1df-4bbc-8f9c-41698aa249c6']
Feb 20 09:55:12 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e146 e146: 6 total, 6 up, 6 in
Feb 20 09:55:13 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:55:13.070 2 INFO neutron.agent.securitygroups_rpc [None req-5b112f63-69ca-4cb8-af5b-6c4c1770d4b6 809f6e2027f9442d8bd2b94b11475b17 3a09b3407c3143c8ae0948ccb18c1e61 - - default default] Security group member updated ['b2e5856c-f1df-4bbc-8f9c-41698aa249c6']
Feb 20 09:55:13 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:13.135 262775 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:55:13 np0005625203.localdomain podman[314455]: 2026-02-20 09:55:13.262990066 +0000 UTC m=+0.062863985 container kill 272e6e7f26ead26632c39d28bd86eb0345453cebcdeb7e36cbada3ec06d008b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb 20 09:55:13 np0005625203.localdomain dnsmasq[314325]: exiting on receipt of SIGTERM
Feb 20 09:55:13 np0005625203.localdomain systemd[1]: libpod-272e6e7f26ead26632c39d28bd86eb0345453cebcdeb7e36cbada3ec06d008b4.scope: Deactivated successfully.
Feb 20 09:55:13 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:55:13.285 2 INFO neutron.agent.securitygroups_rpc [None req-624534fd-50bc-4fd0-a351-6a4578734382 061949b19d2146debcdb4e85c8db9eec b9f8945a2560410b988e395a1db7710f - - default default] Security group member updated ['9c30e397-a710-4013-bf42-b0dd9762b00a']
Feb 20 09:55:13 np0005625203.localdomain ceph-mon[296066]: pgmap v261: 177 pgs: 177 active+clean; 145 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 42 KiB/s rd, 3.3 KiB/s wr, 57 op/s
Feb 20 09:55:13 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/3509655015' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:55:13 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/3509655015' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:55:13 np0005625203.localdomain ceph-mon[296066]: osdmap e146: 6 total, 6 up, 6 in
Feb 20 09:55:13 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:13.304 262775 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:55:13 np0005625203.localdomain podman[314469]: 2026-02-20 09:55:13.339869264 +0000 UTC m=+0.059591074 container died 272e6e7f26ead26632c39d28bd86eb0345453cebcdeb7e36cbada3ec06d008b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:55:13 np0005625203.localdomain podman[314469]: 2026-02-20 09:55:13.373171525 +0000 UTC m=+0.092893315 container cleanup 272e6e7f26ead26632c39d28bd86eb0345453cebcdeb7e36cbada3ec06d008b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:55:13 np0005625203.localdomain systemd[1]: libpod-conmon-272e6e7f26ead26632c39d28bd86eb0345453cebcdeb7e36cbada3ec06d008b4.scope: Deactivated successfully.
Feb 20 09:55:13 np0005625203.localdomain podman[314476]: 2026-02-20 09:55:13.460713054 +0000 UTC m=+0.165601625 container remove 272e6e7f26ead26632c39d28bd86eb0345453cebcdeb7e36cbada3ec06d008b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS)
Feb 20 09:55:13 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:13.475 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:13 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:13Z|00187|binding|INFO|Releasing lport 64d2d024-e964-43c3-b44a-e6f95ac77447 from this chassis (sb_readonly=0)
Feb 20 09:55:13 np0005625203.localdomain kernel: device tap64d2d024-e9 left promiscuous mode
Feb 20 09:55:13 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:13Z|00188|binding|INFO|Setting lport 64d2d024-e964-43c3-b44a-e6f95ac77447 down in Southbound
Feb 20 09:55:13 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:13.483 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625203.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=64d2d024-e964-43c3-b44a-e6f95ac77447) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:55:13 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:13.484 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 64d2d024-e964-43c3-b44a-e6f95ac77447 in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 unbound from our chassis
Feb 20 09:55:13 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:13.486 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 811e2462-6872-485d-9c09-d2dd9cb25273 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:55:13 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:13.487 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[7521db4f-9e0d-4946-8cd4-d9408b560fa1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:55:13 np0005625203.localdomain dnsmasq[313795]: exiting on receipt of SIGTERM
Feb 20 09:55:13 np0005625203.localdomain podman[314513]: 2026-02-20 09:55:13.496611214 +0000 UTC m=+0.060772071 container kill dc84f7baaf005802e8045904778af53b929a473d0a9b188b9e1732c6a0ac2c41 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-12d7921f-2fee-4d50-820c-ba95a837462a, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 20 09:55:13 np0005625203.localdomain systemd[1]: libpod-dc84f7baaf005802e8045904778af53b929a473d0a9b188b9e1732c6a0ac2c41.scope: Deactivated successfully.
Feb 20 09:55:13 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:13.502 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:13 np0005625203.localdomain podman[314530]: 2026-02-20 09:55:13.575233036 +0000 UTC m=+0.056098977 container died dc84f7baaf005802e8045904778af53b929a473d0a9b188b9e1732c6a0ac2c41 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-12d7921f-2fee-4d50-820c-ba95a837462a, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:55:13 np0005625203.localdomain podman[314530]: 2026-02-20 09:55:13.617142723 +0000 UTC m=+0.098008614 container remove dc84f7baaf005802e8045904778af53b929a473d0a9b188b9e1732c6a0ac2c41 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-12d7921f-2fee-4d50-820c-ba95a837462a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:55:13 np0005625203.localdomain systemd[1]: libpod-conmon-dc84f7baaf005802e8045904778af53b929a473d0a9b188b9e1732c6a0ac2c41.scope: Deactivated successfully.
Feb 20 09:55:13 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:13.746 262775 INFO neutron.agent.dhcp.agent [None req-624f139f-cf7d-47ce-a5ea-dc0e2205053b - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:55:13 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:13.955 262775 INFO neutron.agent.dhcp.agent [None req-2d6c2023-af1a-4473-ab3c-58c5323262ec - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:55:14 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:14.163 262775 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:55:14 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-3ec516c44835982fb8eb383b1f5867089cdd6deb0f9cdd1c545ccbaf8e2cc5b7-merged.mount: Deactivated successfully.
Feb 20 09:55:14 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-272e6e7f26ead26632c39d28bd86eb0345453cebcdeb7e36cbada3ec06d008b4-userdata-shm.mount: Deactivated successfully.
Feb 20 09:55:14 np0005625203.localdomain systemd[1]: run-netns-qdhcp\x2d811e2462\x2d6872\x2d485d\x2d9c09\x2dd2dd9cb25273.mount: Deactivated successfully.
Feb 20 09:55:14 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-9e4a5f5daf0f0c3535db339370b5cceb0ea2012c655888f910994c9970600955-merged.mount: Deactivated successfully.
Feb 20 09:55:14 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dc84f7baaf005802e8045904778af53b929a473d0a9b188b9e1732c6a0ac2c41-userdata-shm.mount: Deactivated successfully.
Feb 20 09:55:14 np0005625203.localdomain systemd[1]: run-netns-qdhcp\x2d12d7921f\x2d2fee\x2d4d50\x2d820c\x2dba95a837462a.mount: Deactivated successfully.
Feb 20 09:55:14 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:55:14 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:14.391 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:14 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:55:14.511 2 INFO neutron.agent.securitygroups_rpc [None req-cfd67c24-e1f2-4ca4-a53e-99c158a0738a 809f6e2027f9442d8bd2b94b11475b17 3a09b3407c3143c8ae0948ccb18c1e61 - - default default] Security group member updated ['b2e5856c-f1df-4bbc-8f9c-41698aa249c6']
Feb 20 09:55:14 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:14.843 262775 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:55:14 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:14.856 262775 INFO neutron.agent.linux.ip_lib [None req-43d1edee-a51b-4c0f-bcc6-3ee527fed664 - - - - - -] Device tapd86bb777-99 cannot be used as it has no MAC address
Feb 20 09:55:14 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:14.886 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:14 np0005625203.localdomain kernel: device tapd86bb777-99 entered promiscuous mode
Feb 20 09:55:14 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:14Z|00189|binding|INFO|Claiming lport d86bb777-99f8-4145-8876-1383c9d3a807 for this chassis.
Feb 20 09:55:14 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:14Z|00190|binding|INFO|d86bb777-99f8-4145-8876-1383c9d3a807: Claiming unknown
Feb 20 09:55:14 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:14.896 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:14 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581314.9000] manager: (tapd86bb777-99): new Generic device (/org/freedesktop/NetworkManager/Devices/39)
Feb 20 09:55:14 np0005625203.localdomain systemd-udevd[314566]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 09:55:14 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:14.903 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=d86bb777-99f8-4145-8876-1383c9d3a807) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:55:14 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:14Z|00191|binding|INFO|Setting lport d86bb777-99f8-4145-8876-1383c9d3a807 ovn-installed in OVS
Feb 20 09:55:14 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:14Z|00192|binding|INFO|Setting lport d86bb777-99f8-4145-8876-1383c9d3a807 up in Southbound
Feb 20 09:55:14 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:14.904 161112 INFO neutron.agent.ovn.metadata.agent [-] Port d86bb777-99f8-4145-8876-1383c9d3a807 in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 bound to our chassis
Feb 20 09:55:14 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:14.906 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 811e2462-6872-485d-9c09-d2dd9cb25273 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:55:14 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:14.906 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:14 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:14.907 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[098297dd-82ff-4591-b352-b60dd556cde0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:55:14 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:14.911 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:14 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapd86bb777-99: No such device
Feb 20 09:55:14 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:14.933 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:14 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapd86bb777-99: No such device
Feb 20 09:55:14 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapd86bb777-99: No such device
Feb 20 09:55:14 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapd86bb777-99: No such device
Feb 20 09:55:14 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapd86bb777-99: No such device
Feb 20 09:55:14 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapd86bb777-99: No such device
Feb 20 09:55:14 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapd86bb777-99: No such device
Feb 20 09:55:14 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapd86bb777-99: No such device
Feb 20 09:55:14 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:14.976 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:15 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:15.010 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:15 np0005625203.localdomain ceph-mon[296066]: pgmap v263: 177 pgs: 177 active+clean; 145 MiB data, 787 MiB used, 41 GiB / 42 GiB avail; 42 KiB/s rd, 3.3 KiB/s wr, 57 op/s
Feb 20 09:55:15 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:15.451 262775 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:55:15 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:15.808 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:15 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:15.820 262775 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:55:16 np0005625203.localdomain podman[314638]: 
Feb 20 09:55:16 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:55:16.014 2 INFO neutron.agent.securitygroups_rpc [None req-7271509a-2575-4000-bc96-a2fd7601216d f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']
Feb 20 09:55:16 np0005625203.localdomain podman[314638]: 2026-02-20 09:55:16.018059632 +0000 UTC m=+0.092944436 container create ac677a93e53068e19331d5d4d68806d047c98ba9c8654a41b66fcb2d71706830 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 20 09:55:16 np0005625203.localdomain systemd[1]: Started libpod-conmon-ac677a93e53068e19331d5d4d68806d047c98ba9c8654a41b66fcb2d71706830.scope.
Feb 20 09:55:16 np0005625203.localdomain podman[314638]: 2026-02-20 09:55:15.974283438 +0000 UTC m=+0.049168302 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:55:16 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:55:16 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af0b8bae1dbae70ccb2d552b846d7be5da91e059caf3bf1acb7aa455a0d963d2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:55:16 np0005625203.localdomain podman[314638]: 2026-02-20 09:55:16.098447069 +0000 UTC m=+0.173331883 container init ac677a93e53068e19331d5d4d68806d047c98ba9c8654a41b66fcb2d71706830 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:55:16 np0005625203.localdomain podman[314638]: 2026-02-20 09:55:16.106642353 +0000 UTC m=+0.181527167 container start ac677a93e53068e19331d5d4d68806d047c98ba9c8654a41b66fcb2d71706830 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:55:16 np0005625203.localdomain dnsmasq[314656]: started, version 2.85 cachesize 150
Feb 20 09:55:16 np0005625203.localdomain dnsmasq[314656]: DNS service limited to local subnets
Feb 20 09:55:16 np0005625203.localdomain dnsmasq[314656]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:55:16 np0005625203.localdomain dnsmasq[314656]: warning: no upstream servers configured
Feb 20 09:55:16 np0005625203.localdomain dnsmasq-dhcp[314656]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Feb 20 09:55:16 np0005625203.localdomain dnsmasq[314656]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 0 addresses
Feb 20 09:55:16 np0005625203.localdomain dnsmasq-dhcp[314656]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/host
Feb 20 09:55:16 np0005625203.localdomain dnsmasq-dhcp[314656]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/opts
Feb 20 09:55:16 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:16.176 262775 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:55:15Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4ef9250>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4ef9070>], id=4a89f273-3b96-4bec-a8e7-17dab03f7f12, ip_allocation=immediate, mac_address=fa:16:3e:fd:c2:59, name=tempest-NetworksTestDHCPv6-23789222, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:54:44Z, description=, dns_domain=, id=811e2462-6872-485d-9c09-d2dd9cb25273, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-610089291, port_security_enabled=True, project_id=cb36e48ce4264babb412d413a8bf7b9f, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=47373, qos_policy_id=None, revision_number=12, router:external=False, shared=False, standard_attr_id=1864, status=ACTIVE, subnets=['02d0d905-aa5f-4407-a2be-c955ab8f453d'], tags=[], tenant_id=cb36e48ce4264babb412d413a8bf7b9f, updated_at=2026-02-20T09:55:13Z, vlan_transparent=None, network_id=811e2462-6872-485d-9c09-d2dd9cb25273, port_security_enabled=True, project_id=cb36e48ce4264babb412d413a8bf7b9f, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['a85f25c3-88e2-4d71-a8d4-72a266c1246c'], standard_attr_id=1995, status=DOWN, tags=[], tenant_id=cb36e48ce4264babb412d413a8bf7b9f, updated_at=2026-02-20T09:55:15Z on network 811e2462-6872-485d-9c09-d2dd9cb25273
Feb 20 09:55:16 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:16.227 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:16 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:16.308 262775 INFO neutron.agent.dhcp.agent [None req-d1fc6039-e7fd-488d-a50b-0afe14b62040 - - - - - -] DHCP configuration for ports {'cebd560e-7047-4cc1-9642-f5b7ec377d58'} is completed
Feb 20 09:55:16 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e147 e147: 6 total, 6 up, 6 in
Feb 20 09:55:16 np0005625203.localdomain dnsmasq[314656]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 1 addresses
Feb 20 09:55:16 np0005625203.localdomain dnsmasq-dhcp[314656]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/host
Feb 20 09:55:16 np0005625203.localdomain dnsmasq-dhcp[314656]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/opts
Feb 20 09:55:16 np0005625203.localdomain podman[314675]: 2026-02-20 09:55:16.37998325 +0000 UTC m=+0.065503708 container kill ac677a93e53068e19331d5d4d68806d047c98ba9c8654a41b66fcb2d71706830 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Feb 20 09:55:16 np0005625203.localdomain sshd[314695]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:55:16 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:16.647 262775 INFO neutron.agent.dhcp.agent [None req-a0ac04d0-5c17-4073-b3b3-8507e838f55a - - - - - -] DHCP configuration for ports {'4a89f273-3b96-4bec-a8e7-17dab03f7f12'} is completed
Feb 20 09:55:17 np0005625203.localdomain sshd[314695]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:55:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:55:17.196 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:55:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:55:17.196 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:55:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:55:17.197 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:55:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:55:17.197 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:55:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:55:17.197 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:55:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:55:17.197 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:55:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:55:17.197 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:55:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:55:17.197 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:55:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:55:17.197 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:55:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:55:17.197 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:55:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:55:17.198 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:55:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:55:17.198 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:55:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:55:17.198 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:55:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:55:17.198 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:55:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:55:17.198 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:55:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:55:17.198 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:55:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:55:17.198 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:55:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:55:17.199 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:55:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:55:17.199 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:55:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:55:17.199 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:55:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:55:17.199 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:55:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:55:17.199 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:55:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:55:17.199 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:55:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:55:17.199 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:55:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:55:17.199 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:55:17 np0005625203.localdomain ceph-mon[296066]: pgmap v264: 177 pgs: 177 active+clean; 145 MiB data, 791 MiB used, 41 GiB / 42 GiB avail; 69 KiB/s rd, 4.0 KiB/s wr, 91 op/s
Feb 20 09:55:17 np0005625203.localdomain ceph-mon[296066]: osdmap e147: 6 total, 6 up, 6 in
Feb 20 09:55:17 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:55:17.983 2 INFO neutron.agent.securitygroups_rpc [None req-99b1b964-36a8-407d-a34f-bd7246c382f8 809f6e2027f9442d8bd2b94b11475b17 3a09b3407c3143c8ae0948ccb18c1e61 - - default default] Security group member updated ['b2e5856c-f1df-4bbc-8f9c-41698aa249c6']
Feb 20 09:55:17 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:55:17.986 2 INFO neutron.agent.securitygroups_rpc [None req-9d512101-812d-472a-bd29-050847053b0a f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']
Feb 20 09:55:18 np0005625203.localdomain dnsmasq[314656]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 0 addresses
Feb 20 09:55:18 np0005625203.localdomain dnsmasq-dhcp[314656]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/host
Feb 20 09:55:18 np0005625203.localdomain podman[314714]: 2026-02-20 09:55:18.214425863 +0000 UTC m=+0.069359856 container kill ac677a93e53068e19331d5d4d68806d047c98ba9c8654a41b66fcb2d71706830 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 20 09:55:18 np0005625203.localdomain dnsmasq-dhcp[314656]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/opts
Feb 20 09:55:18 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e148 e148: 6 total, 6 up, 6 in
Feb 20 09:55:18 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:55:18.376 2 INFO neutron.agent.securitygroups_rpc [None req-6afebda2-88bb-41f1-8c70-a0608f1757d1 809f6e2027f9442d8bd2b94b11475b17 3a09b3407c3143c8ae0948ccb18c1e61 - - default default] Security group member updated ['b2e5856c-f1df-4bbc-8f9c-41698aa249c6']
Feb 20 09:55:18 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:18.405 262775 INFO neutron.agent.linux.ip_lib [None req-68659a91-a8f9-4f24-aa95-91a2fd56380b - - - - - -] Device tap4fb0d92e-0b cannot be used as it has no MAC address
Feb 20 09:55:18 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:18.444 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:18 np0005625203.localdomain kernel: device tap4fb0d92e-0b entered promiscuous mode
Feb 20 09:55:18 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581318.4540] manager: (tap4fb0d92e-0b): new Generic device (/org/freedesktop/NetworkManager/Devices/40)
Feb 20 09:55:18 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:18Z|00193|binding|INFO|Claiming lport 4fb0d92e-0bac-458c-a858-cf1eeeb4b267 for this chassis.
Feb 20 09:55:18 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:18Z|00194|binding|INFO|4fb0d92e-0bac-458c-a858-cf1eeeb4b267: Claiming unknown
Feb 20 09:55:18 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:18.455 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:18 np0005625203.localdomain systemd-udevd[314746]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 09:55:18 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:18.470 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-39e7e275-a138-4d29-8800-638c001ed9bc', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-39e7e275-a138-4d29-8800-638c001ed9bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a09b3407c3143c8ae0948ccb18c1e61', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4321da81-4107-4cae-a537-a2eca62d1151, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=4fb0d92e-0bac-458c-a858-cf1eeeb4b267) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:55:18 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:18.473 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 4fb0d92e-0bac-458c-a858-cf1eeeb4b267 in datapath 39e7e275-a138-4d29-8800-638c001ed9bc bound to our chassis
Feb 20 09:55:18 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:18.475 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 39e7e275-a138-4d29-8800-638c001ed9bc or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:55:18 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:18.478 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[bc41dd79-924e-4249-b079-e8e7a74acfcc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:55:18 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:18Z|00195|binding|INFO|Setting lport 4fb0d92e-0bac-458c-a858-cf1eeeb4b267 ovn-installed in OVS
Feb 20 09:55:18 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:18Z|00196|binding|INFO|Setting lport 4fb0d92e-0bac-458c-a858-cf1eeeb4b267 up in Southbound
Feb 20 09:55:18 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:18.489 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:18 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:18.528 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:18 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:18.558 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:18 np0005625203.localdomain ceph-mon[296066]: pgmap v266: 177 pgs: 177 active+clean; 145 MiB data, 791 MiB used, 41 GiB / 42 GiB avail; 61 KiB/s rd, 3.5 KiB/s wr, 79 op/s
Feb 20 09:55:18 np0005625203.localdomain ceph-mon[296066]: osdmap e148: 6 total, 6 up, 6 in
Feb 20 09:55:19 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:19Z|00197|binding|INFO|Removing iface tap4fb0d92e-0b ovn-installed in OVS
Feb 20 09:55:19 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:19.223 161112 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 5e5a7a95-98f6-4e9f-981a-8529c2f9f3de with type ""
Feb 20 09:55:19 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:19.225 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-39e7e275-a138-4d29-8800-638c001ed9bc', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-39e7e275-a138-4d29-8800-638c001ed9bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a09b3407c3143c8ae0948ccb18c1e61', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4321da81-4107-4cae-a537-a2eca62d1151, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=4fb0d92e-0bac-458c-a858-cf1eeeb4b267) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:55:19 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:19.228 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 4fb0d92e-0bac-458c-a858-cf1eeeb4b267 in datapath 39e7e275-a138-4d29-8800-638c001ed9bc unbound from our chassis
Feb 20 09:55:19 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:19.231 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 39e7e275-a138-4d29-8800-638c001ed9bc or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:55:19 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:19Z|00198|binding|INFO|Removing lport 4fb0d92e-0bac-458c-a858-cf1eeeb4b267 ovn-installed in OVS
Feb 20 09:55:19 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:19.232 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[22d06eb2-a63a-4a41-adf9-7a7ea04a8516]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:55:19 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:19.234 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:19 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:19.235 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:19 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:55:19 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:19.418 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:19 np0005625203.localdomain podman[314798]: 2026-02-20 09:55:19.575483822 +0000 UTC m=+0.115006889 container create 7f0f1444bc38e3ca51d003f1c48fc100e7ae3bd81f471b02ae19941ec661d44e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-39e7e275-a138-4d29-8800-638c001ed9bc, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb 20 09:55:19 np0005625203.localdomain systemd[1]: Started libpod-conmon-7f0f1444bc38e3ca51d003f1c48fc100e7ae3bd81f471b02ae19941ec661d44e.scope.
Feb 20 09:55:19 np0005625203.localdomain podman[314798]: 2026-02-20 09:55:19.524553776 +0000 UTC m=+0.064076893 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:55:19 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:55:19 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/524871cc9f2f9e7bcbbf654da44bcaf70ea2b5cc3bda2861221cfbd6c8a1b4b4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:55:19 np0005625203.localdomain podman[314798]: 2026-02-20 09:55:19.649951526 +0000 UTC m=+0.189474593 container init 7f0f1444bc38e3ca51d003f1c48fc100e7ae3bd81f471b02ae19941ec661d44e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-39e7e275-a138-4d29-8800-638c001ed9bc, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 20 09:55:19 np0005625203.localdomain podman[314798]: 2026-02-20 09:55:19.658046106 +0000 UTC m=+0.197569183 container start 7f0f1444bc38e3ca51d003f1c48fc100e7ae3bd81f471b02ae19941ec661d44e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-39e7e275-a138-4d29-8800-638c001ed9bc, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 20 09:55:19 np0005625203.localdomain dnsmasq[314843]: started, version 2.85 cachesize 150
Feb 20 09:55:19 np0005625203.localdomain dnsmasq[314843]: DNS service limited to local subnets
Feb 20 09:55:19 np0005625203.localdomain dnsmasq[314843]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:55:19 np0005625203.localdomain dnsmasq[314843]: warning: no upstream servers configured
Feb 20 09:55:19 np0005625203.localdomain dnsmasq-dhcp[314843]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Feb 20 09:55:19 np0005625203.localdomain dnsmasq[314843]: read /var/lib/neutron/dhcp/39e7e275-a138-4d29-8800-638c001ed9bc/addn_hosts - 0 addresses
Feb 20 09:55:19 np0005625203.localdomain dnsmasq-dhcp[314843]: read /var/lib/neutron/dhcp/39e7e275-a138-4d29-8800-638c001ed9bc/host
Feb 20 09:55:19 np0005625203.localdomain dnsmasq-dhcp[314843]: read /var/lib/neutron/dhcp/39e7e275-a138-4d29-8800-638c001ed9bc/opts
Feb 20 09:55:19 np0005625203.localdomain dnsmasq[314656]: exiting on receipt of SIGTERM
Feb 20 09:55:19 np0005625203.localdomain podman[314831]: 2026-02-20 09:55:19.688847929 +0000 UTC m=+0.052035161 container kill ac677a93e53068e19331d5d4d68806d047c98ba9c8654a41b66fcb2d71706830 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3)
Feb 20 09:55:19 np0005625203.localdomain systemd[1]: libpod-ac677a93e53068e19331d5d4d68806d047c98ba9c8654a41b66fcb2d71706830.scope: Deactivated successfully.
Feb 20 09:55:19 np0005625203.localdomain podman[314845]: 2026-02-20 09:55:19.763101906 +0000 UTC m=+0.058900853 container died ac677a93e53068e19331d5d4d68806d047c98ba9c8654a41b66fcb2d71706830 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Feb 20 09:55:19 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/1519354984' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:55:19 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/1519354984' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:55:19 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:19.789 262775 INFO neutron.agent.dhcp.agent [None req-7ace0652-5648-4887-9e5f-d7be3efa2666 - - - - - -] DHCP configuration for ports {'fdead686-b7bc-41e1-bd1f-218f9ec0352e'} is completed
Feb 20 09:55:19 np0005625203.localdomain podman[314845]: 2026-02-20 09:55:19.795464978 +0000 UTC m=+0.091263885 container cleanup ac677a93e53068e19331d5d4d68806d047c98ba9c8654a41b66fcb2d71706830 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 20 09:55:19 np0005625203.localdomain systemd[1]: libpod-conmon-ac677a93e53068e19331d5d4d68806d047c98ba9c8654a41b66fcb2d71706830.scope: Deactivated successfully.
Feb 20 09:55:19 np0005625203.localdomain podman[314847]: 2026-02-20 09:55:19.845376602 +0000 UTC m=+0.125030549 container remove ac677a93e53068e19331d5d4d68806d047c98ba9c8654a41b66fcb2d71706830 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:55:19 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:19.870 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:19 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:19Z|00199|binding|INFO|Releasing lport d86bb777-99f8-4145-8876-1383c9d3a807 from this chassis (sb_readonly=0)
Feb 20 09:55:19 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:19Z|00200|binding|INFO|Setting lport d86bb777-99f8-4145-8876-1383c9d3a807 down in Southbound
Feb 20 09:55:19 np0005625203.localdomain kernel: device tapd86bb777-99 left promiscuous mode
Feb 20 09:55:19 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:19.882 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625203.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=d86bb777-99f8-4145-8876-1383c9d3a807) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:55:19 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:19.884 161112 INFO neutron.agent.ovn.metadata.agent [-] Port d86bb777-99f8-4145-8876-1383c9d3a807 in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 unbound from our chassis
Feb 20 09:55:19 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:19.886 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 811e2462-6872-485d-9c09-d2dd9cb25273 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:55:19 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:19.886 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[1eebf93f-10d9-4a0b-92ba-3e5a8a5a5286]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:55:19 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:19.896 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:19 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:19.903 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:19 np0005625203.localdomain dnsmasq[314843]: exiting on receipt of SIGTERM
Feb 20 09:55:19 np0005625203.localdomain podman[314888]: 2026-02-20 09:55:19.975977162 +0000 UTC m=+0.058891973 container kill 7f0f1444bc38e3ca51d003f1c48fc100e7ae3bd81f471b02ae19941ec661d44e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-39e7e275-a138-4d29-8800-638c001ed9bc, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:55:19 np0005625203.localdomain systemd[1]: libpod-7f0f1444bc38e3ca51d003f1c48fc100e7ae3bd81f471b02ae19941ec661d44e.scope: Deactivated successfully.
Feb 20 09:55:19 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:55:20 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:55:20 np0005625203.localdomain podman[314907]: 2026-02-20 09:55:20.055032859 +0000 UTC m=+0.056667235 container died 7f0f1444bc38e3ca51d003f1c48fc100e7ae3bd81f471b02ae19941ec661d44e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-39e7e275-a138-4d29-8800-638c001ed9bc, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 20 09:55:20 np0005625203.localdomain podman[314914]: 2026-02-20 09:55:20.100561607 +0000 UTC m=+0.084653190 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 20 09:55:20 np0005625203.localdomain podman[314914]: 2026-02-20 09:55:20.109097671 +0000 UTC m=+0.093189274 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 20 09:55:20 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:55:20 np0005625203.localdomain podman[314908]: 2026-02-20 09:55:20.183069769 +0000 UTC m=+0.172958822 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 20 09:55:20 np0005625203.localdomain podman[314908]: 2026-02-20 09:55:20.189837889 +0000 UTC m=+0.179726982 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 20 09:55:20 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 09:55:20 np0005625203.localdomain podman[314907]: 2026-02-20 09:55:20.249478074 +0000 UTC m=+0.251112470 container remove 7f0f1444bc38e3ca51d003f1c48fc100e7ae3bd81f471b02ae19941ec661d44e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-39e7e275-a138-4d29-8800-638c001ed9bc, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127)
Feb 20 09:55:20 np0005625203.localdomain systemd[1]: libpod-conmon-7f0f1444bc38e3ca51d003f1c48fc100e7ae3bd81f471b02ae19941ec661d44e.scope: Deactivated successfully.
Feb 20 09:55:20 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:20.260 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:20 np0005625203.localdomain kernel: device tap4fb0d92e-0b left promiscuous mode
Feb 20 09:55:20 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:20.281 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:20 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:20.309 262775 INFO neutron.agent.dhcp.agent [None req-24a21fed-ab78-4ba3-922a-df64c0d63cb5 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:55:20 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:20.310 262775 INFO neutron.agent.dhcp.agent [None req-24a21fed-ab78-4ba3-922a-df64c0d63cb5 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:55:20 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:20.310 262775 INFO neutron.agent.dhcp.agent [None req-24a21fed-ab78-4ba3-922a-df64c0d63cb5 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:55:20 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-524871cc9f2f9e7bcbbf654da44bcaf70ea2b5cc3bda2861221cfbd6c8a1b4b4-merged.mount: Deactivated successfully.
Feb 20 09:55:20 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7f0f1444bc38e3ca51d003f1c48fc100e7ae3bd81f471b02ae19941ec661d44e-userdata-shm.mount: Deactivated successfully.
Feb 20 09:55:20 np0005625203.localdomain systemd[1]: run-netns-qdhcp\x2d39e7e275\x2da138\x2d4d29\x2d8800\x2d638c001ed9bc.mount: Deactivated successfully.
Feb 20 09:55:20 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-af0b8bae1dbae70ccb2d552b846d7be5da91e059caf3bf1acb7aa455a0d963d2-merged.mount: Deactivated successfully.
Feb 20 09:55:20 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ac677a93e53068e19331d5d4d68806d047c98ba9c8654a41b66fcb2d71706830-userdata-shm.mount: Deactivated successfully.
Feb 20 09:55:20 np0005625203.localdomain systemd[1]: run-netns-qdhcp\x2d811e2462\x2d6872\x2d485d\x2d9c09\x2dd2dd9cb25273.mount: Deactivated successfully.
Feb 20 09:55:20 np0005625203.localdomain ceph-mon[296066]: pgmap v268: 177 pgs: 177 active+clean; 145 MiB data, 791 MiB used, 41 GiB / 42 GiB avail; 40 KiB/s rd, 1.9 KiB/s wr, 54 op/s
Feb 20 09:55:20 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:20.810 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:21 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:21.022 262775 INFO neutron.agent.linux.ip_lib [None req-302a5f4e-2b26-4b24-b6e0-b41260918419 - - - - - -] Device tapc71491e8-1f cannot be used as it has no MAC address
Feb 20 09:55:21 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:21.051 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:21 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:55:21.055 2 INFO neutron.agent.securitygroups_rpc [None req-0c0c6410-fbc5-4b85-ab45-3c003033a966 f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']
Feb 20 09:55:21 np0005625203.localdomain kernel: device tapc71491e8-1f entered promiscuous mode
Feb 20 09:55:21 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:21.059 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:21 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:21Z|00201|binding|INFO|Claiming lport c71491e8-1f45-4d4c-89fe-850ea1c81708 for this chassis.
Feb 20 09:55:21 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:21Z|00202|binding|INFO|c71491e8-1f45-4d4c-89fe-850ea1c81708: Claiming unknown
Feb 20 09:55:21 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581321.0607] manager: (tapc71491e8-1f): new Generic device (/org/freedesktop/NetworkManager/Devices/41)
Feb 20 09:55:21 np0005625203.localdomain systemd-udevd[314748]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 09:55:21 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:21.070 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:21 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:21.072 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=c71491e8-1f45-4d4c-89fe-850ea1c81708) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:55:21 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:21.072 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:21 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:21Z|00203|binding|INFO|Setting lport c71491e8-1f45-4d4c-89fe-850ea1c81708 ovn-installed in OVS
Feb 20 09:55:21 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:21Z|00204|binding|INFO|Setting lport c71491e8-1f45-4d4c-89fe-850ea1c81708 up in Southbound
Feb 20 09:55:21 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:21.075 161112 INFO neutron.agent.ovn.metadata.agent [-] Port c71491e8-1f45-4d4c-89fe-850ea1c81708 in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 bound to our chassis
Feb 20 09:55:21 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:21.078 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 811e2462-6872-485d-9c09-d2dd9cb25273 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:55:21 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:21.080 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[cd2fe9fe-df1a-4c07-818c-78e26bf80921]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:55:21 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:21.085 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:21 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapc71491e8-1f: No such device
Feb 20 09:55:21 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapc71491e8-1f: No such device
Feb 20 09:55:21 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:21.103 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:21 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapc71491e8-1f: No such device
Feb 20 09:55:21 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:21.107 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:21 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapc71491e8-1f: No such device
Feb 20 09:55:21 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapc71491e8-1f: No such device
Feb 20 09:55:21 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapc71491e8-1f: No such device
Feb 20 09:55:21 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapc71491e8-1f: No such device
Feb 20 09:55:21 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapc71491e8-1f: No such device
Feb 20 09:55:21 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:21.143 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:21 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:21.172 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:22 np0005625203.localdomain podman[315050]: 2026-02-20 09:55:22.059823781 +0000 UTC m=+0.082089470 container create 756fb1a255e092b25d1da3431fc35332b830d3802d7cde80df4e9cc68627e8a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 20 09:55:22 np0005625203.localdomain systemd[1]: Started libpod-conmon-756fb1a255e092b25d1da3431fc35332b830d3802d7cde80df4e9cc68627e8a8.scope.
Feb 20 09:55:22 np0005625203.localdomain podman[315050]: 2026-02-20 09:55:22.017948886 +0000 UTC m=+0.040214605 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:55:22 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:55:22 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/847c2e201036209f738881c57b253c983097b3dc573ddcad1d29bd8336f271fa/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:55:22 np0005625203.localdomain podman[315050]: 2026-02-20 09:55:22.143839101 +0000 UTC m=+0.166104790 container init 756fb1a255e092b25d1da3431fc35332b830d3802d7cde80df4e9cc68627e8a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Feb 20 09:55:22 np0005625203.localdomain podman[315050]: 2026-02-20 09:55:22.150151016 +0000 UTC m=+0.172416665 container start 756fb1a255e092b25d1da3431fc35332b830d3802d7cde80df4e9cc68627e8a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:55:22 np0005625203.localdomain dnsmasq[315068]: started, version 2.85 cachesize 150
Feb 20 09:55:22 np0005625203.localdomain dnsmasq[315068]: DNS service limited to local subnets
Feb 20 09:55:22 np0005625203.localdomain dnsmasq[315068]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:55:22 np0005625203.localdomain dnsmasq[315068]: warning: no upstream servers configured
Feb 20 09:55:22 np0005625203.localdomain dnsmasq-dhcp[315068]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Feb 20 09:55:22 np0005625203.localdomain dnsmasq[315068]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 0 addresses
Feb 20 09:55:22 np0005625203.localdomain dnsmasq-dhcp[315068]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/host
Feb 20 09:55:22 np0005625203.localdomain dnsmasq-dhcp[315068]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/opts
Feb 20 09:55:22 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:22.200 262775 INFO neutron.agent.dhcp.agent [None req-302a5f4e-2b26-4b24-b6e0-b41260918419 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:55:20Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4ddb370>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4d26460>], id=650d380d-eb2d-4466-8ca0-2798ab039247, ip_allocation=immediate, mac_address=fa:16:3e:46:6f:07, name=tempest-NetworksTestDHCPv6-1372574796, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:54:44Z, description=, dns_domain=, id=811e2462-6872-485d-9c09-d2dd9cb25273, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-610089291, port_security_enabled=True, project_id=cb36e48ce4264babb412d413a8bf7b9f, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=47373, qos_policy_id=None, revision_number=14, router:external=False, shared=False, standard_attr_id=1864, status=ACTIVE, subnets=['b1e5dbe1-484b-4a05-bf8a-b7e408305315'], tags=[], tenant_id=cb36e48ce4264babb412d413a8bf7b9f, updated_at=2026-02-20T09:55:19Z, vlan_transparent=None, network_id=811e2462-6872-485d-9c09-d2dd9cb25273, port_security_enabled=True, project_id=cb36e48ce4264babb412d413a8bf7b9f, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['a85f25c3-88e2-4d71-a8d4-72a266c1246c'], standard_attr_id=2025, status=DOWN, tags=[], tenant_id=cb36e48ce4264babb412d413a8bf7b9f, updated_at=2026-02-20T09:55:20Z on network 811e2462-6872-485d-9c09-d2dd9cb25273
Feb 20 09:55:22 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:22.283 262775 INFO neutron.agent.dhcp.agent [None req-85d0789f-d794-4f7f-85df-0fb79eba5966 - - - - - -] DHCP configuration for ports {'cebd560e-7047-4cc1-9642-f5b7ec377d58'} is completed
Feb 20 09:55:22 np0005625203.localdomain dnsmasq[315068]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 1 addresses
Feb 20 09:55:22 np0005625203.localdomain dnsmasq-dhcp[315068]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/host
Feb 20 09:55:22 np0005625203.localdomain podman[315088]: 2026-02-20 09:55:22.400037547 +0000 UTC m=+0.059411020 container kill 756fb1a255e092b25d1da3431fc35332b830d3802d7cde80df4e9cc68627e8a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb 20 09:55:22 np0005625203.localdomain dnsmasq-dhcp[315068]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/opts
Feb 20 09:55:22 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:22.601 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:22 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:22Z|00205|binding|INFO|Releasing lport c71491e8-1f45-4d4c-89fe-850ea1c81708 from this chassis (sb_readonly=0)
Feb 20 09:55:22 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:22Z|00206|binding|INFO|Setting lport c71491e8-1f45-4d4c-89fe-850ea1c81708 down in Southbound
Feb 20 09:55:22 np0005625203.localdomain kernel: device tapc71491e8-1f left promiscuous mode
Feb 20 09:55:22 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:22.615 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625203.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=c71491e8-1f45-4d4c-89fe-850ea1c81708) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:55:22 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:22.618 161112 INFO neutron.agent.ovn.metadata.agent [-] Port c71491e8-1f45-4d4c-89fe-850ea1c81708 in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 unbound from our chassis
Feb 20 09:55:22 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:22.620 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 811e2462-6872-485d-9c09-d2dd9cb25273 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:55:22 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:22.621 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[0f17b2a4-037b-4b3b-b519-140430bfb4f2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:55:22 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:22.622 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:22 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:22.650 262775 INFO neutron.agent.dhcp.agent [None req-f5182fd4-a7aa-423d-9b8d-e86e09918f02 - - - - - -] DHCP configuration for ports {'650d380d-eb2d-4466-8ca0-2798ab039247'} is completed
Feb 20 09:55:22 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e149 e149: 6 total, 6 up, 6 in
Feb 20 09:55:22 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:22.773 262775 INFO neutron.agent.linux.ip_lib [None req-1c928c26-dad8-445e-86e3-ca2510699c0a - - - - - -] Device tape5b13e16-bb cannot be used as it has no MAC address
Feb 20 09:55:22 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:22.803 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:22 np0005625203.localdomain kernel: device tape5b13e16-bb entered promiscuous mode
Feb 20 09:55:22 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:22.809 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:22 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581322.8104] manager: (tape5b13e16-bb): new Generic device (/org/freedesktop/NetworkManager/Devices/42)
Feb 20 09:55:22 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:22Z|00207|binding|INFO|Claiming lport e5b13e16-bb35-4bf3-848e-d5bbaae4a145 for this chassis.
Feb 20 09:55:22 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:22Z|00208|binding|INFO|e5b13e16-bb35-4bf3-848e-d5bbaae4a145: Claiming unknown
Feb 20 09:55:22 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:22.818 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-a94e9f91-f18c-4d38-8cbb-1b1392f5f97a', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a94e9f91-f18c-4d38-8cbb-1b1392f5f97a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a09b3407c3143c8ae0948ccb18c1e61', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=785a165f-0093-4e72-83d4-4746d45c71e6, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=e5b13e16-bb35-4bf3-848e-d5bbaae4a145) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:55:22 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:22.820 161112 INFO neutron.agent.ovn.metadata.agent [-] Port e5b13e16-bb35-4bf3-848e-d5bbaae4a145 in datapath a94e9f91-f18c-4d38-8cbb-1b1392f5f97a bound to our chassis
Feb 20 09:55:22 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:22.822 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a94e9f91-f18c-4d38-8cbb-1b1392f5f97a or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:55:22 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:22.823 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[5ff780d6-5d3f-43a1-be69-bf355dcfea07]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:55:22 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tape5b13e16-bb: No such device
Feb 20 09:55:22 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:22Z|00209|binding|INFO|Setting lport e5b13e16-bb35-4bf3-848e-d5bbaae4a145 ovn-installed in OVS
Feb 20 09:55:22 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:22Z|00210|binding|INFO|Setting lport e5b13e16-bb35-4bf3-848e-d5bbaae4a145 up in Southbound
Feb 20 09:55:22 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tape5b13e16-bb: No such device
Feb 20 09:55:22 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:22.846 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:22 np0005625203.localdomain ceph-mon[296066]: pgmap v269: 177 pgs: 177 active+clean; 145 MiB data, 791 MiB used, 41 GiB / 42 GiB avail; 64 KiB/s rd, 3.2 KiB/s wr, 84 op/s
Feb 20 09:55:22 np0005625203.localdomain ceph-mon[296066]: osdmap e149: 6 total, 6 up, 6 in
Feb 20 09:55:22 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tape5b13e16-bb: No such device
Feb 20 09:55:22 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tape5b13e16-bb: No such device
Feb 20 09:55:22 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tape5b13e16-bb: No such device
Feb 20 09:55:22 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tape5b13e16-bb: No such device
Feb 20 09:55:22 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tape5b13e16-bb: No such device
Feb 20 09:55:22 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tape5b13e16-bb: No such device
Feb 20 09:55:22 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:22.934 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:22 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:22.958 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:23 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:55:23.362 2 INFO neutron.agent.securitygroups_rpc [None req-6678ed04-c4d3-4555-9813-927645c955fd 5f5cb06fd4c045b1bfc3f890392c3165 54500b20d6a643669fbf357242dde27f - - default default] Security group member updated ['b8016b60-fce7-435d-b877-28993ceda4d5']
Feb 20 09:55:23 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:55:23.544 2 INFO neutron.agent.securitygroups_rpc [None req-08ae8aba-2609-40c5-899b-086a86995061 f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']
Feb 20 09:55:23 np0005625203.localdomain dnsmasq[315068]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 0 addresses
Feb 20 09:55:23 np0005625203.localdomain dnsmasq-dhcp[315068]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/host
Feb 20 09:55:23 np0005625203.localdomain podman[315194]: 2026-02-20 09:55:23.856043272 +0000 UTC m=+0.119025373 container kill 756fb1a255e092b25d1da3431fc35332b830d3802d7cde80df4e9cc68627e8a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, io.buildah.version=1.41.3)
Feb 20 09:55:23 np0005625203.localdomain dnsmasq-dhcp[315068]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/opts
Feb 20 09:55:23 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:23.885 262775 ERROR neutron.agent.dhcp.agent [-] Unable to reload_allocations dhcp for 811e2462-6872-485d-9c09-d2dd9cb25273.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tapc71491e8-1f not found in namespace qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273.
Feb 20 09:55:23 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:23.885 262775 ERROR neutron.agent.dhcp.agent Traceback (most recent call last):
Feb 20 09:55:23 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:23.885 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver
Feb 20 09:55:23 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:23.885 262775 ERROR neutron.agent.dhcp.agent     rv = getattr(driver, action)(**action_kwargs)
Feb 20 09:55:23 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:23.885 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations
Feb 20 09:55:23 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:23.885 262775 ERROR neutron.agent.dhcp.agent     self.device_manager.update(self.network, self.interface_name)
Feb 20 09:55:23 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:23.885 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update
Feb 20 09:55:23 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:23.885 262775 ERROR neutron.agent.dhcp.agent     self._set_default_route(network, device_name)
Feb 20 09:55:23 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:23.885 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route
Feb 20 09:55:23 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:23.885 262775 ERROR neutron.agent.dhcp.agent     self._set_default_route_ip_version(network, device_name,
Feb 20 09:55:23 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:23.885 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version
Feb 20 09:55:23 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:23.885 262775 ERROR neutron.agent.dhcp.agent     gateway = device.route.get_gateway(ip_version=ip_version)
Feb 20 09:55:23 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:23.885 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway
Feb 20 09:55:23 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:23.885 262775 ERROR neutron.agent.dhcp.agent     routes = self.list_routes(ip_version, scope=scope, table=table)
Feb 20 09:55:23 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:23.885 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes
Feb 20 09:55:23 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:23.885 262775 ERROR neutron.agent.dhcp.agent     return list_ip_routes(self._parent.namespace, ip_version, scope=scope,
Feb 20 09:55:23 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:23.885 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes
Feb 20 09:55:23 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:23.885 262775 ERROR neutron.agent.dhcp.agent     routes = privileged.list_ip_routes(namespace, ip_version, device=device,
Feb 20 09:55:23 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:23.885 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f
Feb 20 09:55:23 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:23.885 262775 ERROR neutron.agent.dhcp.agent     return self(f, *args, **kw)
Feb 20 09:55:23 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:23.885 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__
Feb 20 09:55:23 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:23.885 262775 ERROR neutron.agent.dhcp.agent     do = self.iter(retry_state=retry_state)
Feb 20 09:55:23 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:23.885 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter
Feb 20 09:55:23 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:23.885 262775 ERROR neutron.agent.dhcp.agent     return fut.result()
Feb 20 09:55:23 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:23.885 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result
Feb 20 09:55:23 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:23.885 262775 ERROR neutron.agent.dhcp.agent     return self.__get_result()
Feb 20 09:55:23 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:23.885 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result
Feb 20 09:55:23 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:23.885 262775 ERROR neutron.agent.dhcp.agent     raise self._exception
Feb 20 09:55:23 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:23.885 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__
Feb 20 09:55:23 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:23.885 262775 ERROR neutron.agent.dhcp.agent     result = fn(*args, **kwargs)
Feb 20 09:55:23 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:23.885 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap
Feb 20 09:55:23 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:23.885 262775 ERROR neutron.agent.dhcp.agent     return self.channel.remote_call(name, args, kwargs,
Feb 20 09:55:23 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:23.885 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call
Feb 20 09:55:23 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:23.885 262775 ERROR neutron.agent.dhcp.agent     raise exc_type(*result[2])
Feb 20 09:55:23 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:23.885 262775 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tapc71491e8-1f not found in namespace qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273.
Feb 20 09:55:23 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:23.885 262775 ERROR neutron.agent.dhcp.agent 
Feb 20 09:55:23 np0005625203.localdomain podman[315218]: 
Feb 20 09:55:23 np0005625203.localdomain podman[315218]: 2026-02-20 09:55:23.924233193 +0000 UTC m=+0.104470744 container create 7f14b9c9dc80a2e23c64340b0a8d6d8f22b3da36011312ac3732d7cbdd046dce (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a94e9f91-f18c-4d38-8cbb-1b1392f5f97a, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260127, io.buildah.version=1.41.3)
Feb 20 09:55:23 np0005625203.localdomain systemd[1]: Started libpod-conmon-7f14b9c9dc80a2e23c64340b0a8d6d8f22b3da36011312ac3732d7cbdd046dce.scope.
Feb 20 09:55:23 np0005625203.localdomain podman[315218]: 2026-02-20 09:55:23.879964763 +0000 UTC m=+0.060202334 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:55:23 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:55:23 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1ebf6f16b2f5a5fadeebc83231a32048271b1ed9a6e753c99bcb8ff9f275887/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:55:24 np0005625203.localdomain podman[315218]: 2026-02-20 09:55:24.000039498 +0000 UTC m=+0.180277069 container init 7f14b9c9dc80a2e23c64340b0a8d6d8f22b3da36011312ac3732d7cbdd046dce (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a94e9f91-f18c-4d38-8cbb-1b1392f5f97a, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 20 09:55:24 np0005625203.localdomain podman[315218]: 2026-02-20 09:55:24.010045297 +0000 UTC m=+0.190282868 container start 7f14b9c9dc80a2e23c64340b0a8d6d8f22b3da36011312ac3732d7cbdd046dce (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a94e9f91-f18c-4d38-8cbb-1b1392f5f97a, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Feb 20 09:55:24 np0005625203.localdomain dnsmasq[315239]: started, version 2.85 cachesize 150
Feb 20 09:55:24 np0005625203.localdomain dnsmasq[315239]: DNS service limited to local subnets
Feb 20 09:55:24 np0005625203.localdomain dnsmasq[315239]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:55:24 np0005625203.localdomain dnsmasq[315239]: warning: no upstream servers configured
Feb 20 09:55:24 np0005625203.localdomain dnsmasq-dhcp[315239]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Feb 20 09:55:24 np0005625203.localdomain dnsmasq[315239]: read /var/lib/neutron/dhcp/a94e9f91-f18c-4d38-8cbb-1b1392f5f97a/addn_hosts - 0 addresses
Feb 20 09:55:24 np0005625203.localdomain dnsmasq-dhcp[315239]: read /var/lib/neutron/dhcp/a94e9f91-f18c-4d38-8cbb-1b1392f5f97a/host
Feb 20 09:55:24 np0005625203.localdomain dnsmasq-dhcp[315239]: read /var/lib/neutron/dhcp/a94e9f91-f18c-4d38-8cbb-1b1392f5f97a/opts
Feb 20 09:55:24 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:24.072 262775 INFO neutron.agent.dhcp.agent [None req-e21296f0-1e65-432d-9931-4f4869144f19 - - - - - -] Synchronizing state
Feb 20 09:55:24 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:24.233 262775 INFO neutron.agent.dhcp.agent [None req-fa8269fd-b256-4b28-983c-a8abccef6328 - - - - - -] DHCP configuration for ports {'ceff477d-90a1-43a0-b0c7-4569cc891d39'} is completed
Feb 20 09:55:24 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:55:24 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:24.342 262775 INFO neutron.agent.dhcp.agent [None req-d64b46c8-fd92-4e74-ae79-8904232194cb - - - - - -] All active networks have been fetched through RPC.
Feb 20 09:55:24 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:24.343 262775 INFO neutron.agent.dhcp.agent [-] Starting network 519ed234-0c28-4a63-b6ed-1122a8d9dfc9 dhcp configuration
Feb 20 09:55:24 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:24.343 262775 INFO neutron.agent.dhcp.agent [-] Finished network 519ed234-0c28-4a63-b6ed-1122a8d9dfc9 dhcp configuration
Feb 20 09:55:24 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:24.344 262775 INFO neutron.agent.dhcp.agent [-] Starting network 545dd7fa-aa2a-4b99-a111-22d2b369ed0a dhcp configuration
Feb 20 09:55:24 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:24.344 262775 INFO neutron.agent.dhcp.agent [-] Finished network 545dd7fa-aa2a-4b99-a111-22d2b369ed0a dhcp configuration
Feb 20 09:55:24 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:24.344 262775 INFO neutron.agent.dhcp.agent [-] Starting network 811e2462-6872-485d-9c09-d2dd9cb25273 dhcp configuration
Feb 20 09:55:24 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:24.347 262775 INFO neutron.agent.dhcp.agent [-] Starting network af29bf87-1daa-44a5-8cd8-ed0a60fa19f1 dhcp configuration
Feb 20 09:55:24 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:24.348 262775 INFO neutron.agent.dhcp.agent [-] Finished network af29bf87-1daa-44a5-8cd8-ed0a60fa19f1 dhcp configuration
Feb 20 09:55:24 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:24.420 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:24 np0005625203.localdomain dnsmasq[315068]: exiting on receipt of SIGTERM
Feb 20 09:55:24 np0005625203.localdomain podman[315255]: 2026-02-20 09:55:24.512537943 +0000 UTC m=+0.051112563 container kill 756fb1a255e092b25d1da3431fc35332b830d3802d7cde80df4e9cc68627e8a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0)
Feb 20 09:55:24 np0005625203.localdomain systemd[1]: libpod-756fb1a255e092b25d1da3431fc35332b830d3802d7cde80df4e9cc68627e8a8.scope: Deactivated successfully.
Feb 20 09:55:24 np0005625203.localdomain podman[315267]: 2026-02-20 09:55:24.591561437 +0000 UTC m=+0.062269707 container died 756fb1a255e092b25d1da3431fc35332b830d3802d7cde80df4e9cc68627e8a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Feb 20 09:55:24 np0005625203.localdomain podman[315267]: 2026-02-20 09:55:24.629393407 +0000 UTC m=+0.100101637 container cleanup 756fb1a255e092b25d1da3431fc35332b830d3802d7cde80df4e9cc68627e8a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127)
Feb 20 09:55:24 np0005625203.localdomain systemd[1]: libpod-conmon-756fb1a255e092b25d1da3431fc35332b830d3802d7cde80df4e9cc68627e8a8.scope: Deactivated successfully.
Feb 20 09:55:24 np0005625203.localdomain podman[315269]: 2026-02-20 09:55:24.674004707 +0000 UTC m=+0.136444161 container remove 756fb1a255e092b25d1da3431fc35332b830d3802d7cde80df4e9cc68627e8a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 20 09:55:24 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:24.739 262775 INFO neutron.agent.linux.ip_lib [-] Device tapc71491e8-1f cannot be used as it has no MAC address
Feb 20 09:55:24 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:24.758 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:24 np0005625203.localdomain kernel: device tapc71491e8-1f entered promiscuous mode
Feb 20 09:55:24 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:24Z|00211|binding|INFO|Claiming lport c71491e8-1f45-4d4c-89fe-850ea1c81708 for this chassis.
Feb 20 09:55:24 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:24Z|00212|binding|INFO|c71491e8-1f45-4d4c-89fe-850ea1c81708: Claiming unknown
Feb 20 09:55:24 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581324.7670] manager: (tapc71491e8-1f): new Generic device (/org/freedesktop/NetworkManager/Devices/43)
Feb 20 09:55:24 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:24.768 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:24 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:24.776 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=c71491e8-1f45-4d4c-89fe-850ea1c81708) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:55:24 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:24Z|00213|binding|INFO|Setting lport c71491e8-1f45-4d4c-89fe-850ea1c81708 up in Southbound
Feb 20 09:55:24 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:24.778 161112 INFO neutron.agent.ovn.metadata.agent [-] Port c71491e8-1f45-4d4c-89fe-850ea1c81708 in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 bound to our chassis
Feb 20 09:55:24 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:24Z|00214|binding|INFO|Setting lport c71491e8-1f45-4d4c-89fe-850ea1c81708 ovn-installed in OVS
Feb 20 09:55:24 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:24.779 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:24 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:24.784 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 811e2462-6872-485d-9c09-d2dd9cb25273 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:55:24 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:24.784 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:24 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:24.785 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[81777dd3-00c9-4459-a39c-354b28cdac75]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:55:24 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:24.808 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:24 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:24.858 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:24 np0005625203.localdomain ceph-mon[296066]: pgmap v271: 177 pgs: 177 active+clean; 145 MiB data, 791 MiB used, 41 GiB / 42 GiB avail; 36 KiB/s rd, 2.4 KiB/s wr, 49 op/s
Feb 20 09:55:24 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:24.890 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:25 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-847c2e201036209f738881c57b253c983097b3dc573ddcad1d29bd8336f271fa-merged.mount: Deactivated successfully.
Feb 20 09:55:25 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-756fb1a255e092b25d1da3431fc35332b830d3802d7cde80df4e9cc68627e8a8-userdata-shm.mount: Deactivated successfully.
Feb 20 09:55:25 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:55:25.594 2 INFO neutron.agent.securitygroups_rpc [None req-65f746b2-e25c-42a6-a0af-3bc4d3abfc01 5f5cb06fd4c045b1bfc3f890392c3165 54500b20d6a643669fbf357242dde27f - - default default] Security group member updated ['b8016b60-fce7-435d-b877-28993ceda4d5']
Feb 20 09:55:25 np0005625203.localdomain podman[315352]: 
Feb 20 09:55:25 np0005625203.localdomain podman[315352]: 2026-02-20 09:55:25.733429043 +0000 UTC m=+0.093872115 container create bc21eee9f6416ea1e3a61702dbc6b25f46a78953151b8c055f124f98cc3d24e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 20 09:55:25 np0005625203.localdomain systemd[1]: Started libpod-conmon-bc21eee9f6416ea1e3a61702dbc6b25f46a78953151b8c055f124f98cc3d24e5.scope.
Feb 20 09:55:25 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:55:25 np0005625203.localdomain podman[315352]: 2026-02-20 09:55:25.692854928 +0000 UTC m=+0.053298030 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:55:25 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dcc1a221dff644dac5bd8716ff6539ad13703bf4e6e63404e7bd7d36ff4ffe0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:55:25 np0005625203.localdomain podman[315352]: 2026-02-20 09:55:25.803821051 +0000 UTC m=+0.164264113 container init bc21eee9f6416ea1e3a61702dbc6b25f46a78953151b8c055f124f98cc3d24e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:55:25 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:25.813 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:25 np0005625203.localdomain podman[315352]: 2026-02-20 09:55:25.814386437 +0000 UTC m=+0.174829499 container start bc21eee9f6416ea1e3a61702dbc6b25f46a78953151b8c055f124f98cc3d24e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Feb 20 09:55:25 np0005625203.localdomain dnsmasq[315370]: started, version 2.85 cachesize 150
Feb 20 09:55:25 np0005625203.localdomain dnsmasq[315370]: DNS service limited to local subnets
Feb 20 09:55:25 np0005625203.localdomain dnsmasq[315370]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:55:25 np0005625203.localdomain dnsmasq[315370]: warning: no upstream servers configured
Feb 20 09:55:25 np0005625203.localdomain dnsmasq-dhcp[315370]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Feb 20 09:55:25 np0005625203.localdomain dnsmasq[315370]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 0 addresses
Feb 20 09:55:25 np0005625203.localdomain dnsmasq-dhcp[315370]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/host
Feb 20 09:55:25 np0005625203.localdomain dnsmasq-dhcp[315370]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/opts
Feb 20 09:55:25 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:25.876 262775 INFO neutron.agent.dhcp.agent [-] Finished network 811e2462-6872-485d-9c09-d2dd9cb25273 dhcp configuration
Feb 20 09:55:25 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:25.877 262775 INFO neutron.agent.dhcp.agent [None req-d64b46c8-fd92-4e74-ae79-8904232194cb - - - - - -] Synchronizing state complete
Feb 20 09:55:25 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:25.879 262775 INFO neutron.agent.dhcp.agent [None req-1c928c26-dad8-445e-86e3-ca2510699c0a - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:55:22Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4d26970>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4e60610>], id=57a646f3-af44-40b9-a99a-038dfab4f976, ip_allocation=immediate, mac_address=fa:16:3e:51:f4:5c, name=tempest-PortsIpV6TestJSON-484561481, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:55:19Z, description=, dns_domain=, id=a94e9f91-f18c-4d38-8cbb-1b1392f5f97a, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsIpV6TestJSON-829002462, port_security_enabled=True, project_id=3a09b3407c3143c8ae0948ccb18c1e61, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=51740, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2021, status=ACTIVE, subnets=['40d482eb-d73f-44b9-aa95-62ed98b2ef21'], tags=[], tenant_id=3a09b3407c3143c8ae0948ccb18c1e61, updated_at=2026-02-20T09:55:21Z, vlan_transparent=None, network_id=a94e9f91-f18c-4d38-8cbb-1b1392f5f97a, port_security_enabled=True, project_id=3a09b3407c3143c8ae0948ccb18c1e61, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2043, status=DOWN, tags=[], tenant_id=3a09b3407c3143c8ae0948ccb18c1e61, updated_at=2026-02-20T09:55:22Z on network a94e9f91-f18c-4d38-8cbb-1b1392f5f97a
Feb 20 09:55:25 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:25Z|00215|binding|INFO|Removing iface tape5b13e16-bb ovn-installed in OVS
Feb 20 09:55:25 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:25Z|00216|binding|INFO|Removing lport e5b13e16-bb35-4bf3-848e-d5bbaae4a145 ovn-installed in OVS
Feb 20 09:55:25 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:25.890 161112 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port a8ea10ea-c1c3-4f60-b3db-020f8f040eb9 with type ""
Feb 20 09:55:25 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:25.891 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:25 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:25.892 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-a94e9f91-f18c-4d38-8cbb-1b1392f5f97a', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a94e9f91-f18c-4d38-8cbb-1b1392f5f97a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a09b3407c3143c8ae0948ccb18c1e61', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625203.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=785a165f-0093-4e72-83d4-4746d45c71e6, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=e5b13e16-bb35-4bf3-848e-d5bbaae4a145) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:55:25 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:25.895 161112 INFO neutron.agent.ovn.metadata.agent [-] Port e5b13e16-bb35-4bf3-848e-d5bbaae4a145 in datapath a94e9f91-f18c-4d38-8cbb-1b1392f5f97a unbound from our chassis
Feb 20 09:55:25 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:25.898 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:25 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:25.898 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a94e9f91-f18c-4d38-8cbb-1b1392f5f97a or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:55:25 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:25.899 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[490eb484-9728-440f-8e20-102d259a2728]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:55:25 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:25.904 262775 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:55:26 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:26.037 262775 INFO neutron.agent.dhcp.agent [None req-e796472a-6978-42e0-b1a6-0d91936b6e50 - - - - - -] DHCP configuration for ports {'cebd560e-7047-4cc1-9642-f5b7ec377d58', 'c71491e8-1f45-4d4c-89fe-850ea1c81708'} is completed
Feb 20 09:55:26 np0005625203.localdomain dnsmasq[315239]: read /var/lib/neutron/dhcp/a94e9f91-f18c-4d38-8cbb-1b1392f5f97a/addn_hosts - 1 addresses
Feb 20 09:55:26 np0005625203.localdomain podman[315388]: 2026-02-20 09:55:26.083171063 +0000 UTC m=+0.063203096 container kill 7f14b9c9dc80a2e23c64340b0a8d6d8f22b3da36011312ac3732d7cbdd046dce (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a94e9f91-f18c-4d38-8cbb-1b1392f5f97a, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:55:26 np0005625203.localdomain dnsmasq-dhcp[315239]: read /var/lib/neutron/dhcp/a94e9f91-f18c-4d38-8cbb-1b1392f5f97a/host
Feb 20 09:55:26 np0005625203.localdomain dnsmasq-dhcp[315239]: read /var/lib/neutron/dhcp/a94e9f91-f18c-4d38-8cbb-1b1392f5f97a/opts
Feb 20 09:55:26 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:26.206 262775 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:55:26 np0005625203.localdomain dnsmasq[315370]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 0 addresses
Feb 20 09:55:26 np0005625203.localdomain dnsmasq-dhcp[315370]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/host
Feb 20 09:55:26 np0005625203.localdomain dnsmasq-dhcp[315370]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/opts
Feb 20 09:55:26 np0005625203.localdomain podman[315421]: 2026-02-20 09:55:26.224474455 +0000 UTC m=+0.060428931 container kill bc21eee9f6416ea1e3a61702dbc6b25f46a78953151b8c055f124f98cc3d24e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 20 09:55:26 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:26.350 262775 INFO neutron.agent.dhcp.agent [None req-8884b261-426f-47c1-abc9-c31cc50fb153 - - - - - -] DHCP configuration for ports {'57a646f3-af44-40b9-a99a-038dfab4f976'} is completed
Feb 20 09:55:26 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 09:55:26 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:55:26 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:26.513 262775 INFO neutron.agent.dhcp.agent [None req-7298b3ac-f98e-4414-b03f-54069a13732e - - - - - -] DHCP configuration for ports {'cebd560e-7047-4cc1-9642-f5b7ec377d58', 'c71491e8-1f45-4d4c-89fe-850ea1c81708'} is completed
Feb 20 09:55:26 np0005625203.localdomain dnsmasq[315239]: exiting on receipt of SIGTERM
Feb 20 09:55:26 np0005625203.localdomain podman[315461]: 2026-02-20 09:55:26.531819034 +0000 UTC m=+0.068681447 container kill 7f14b9c9dc80a2e23c64340b0a8d6d8f22b3da36011312ac3732d7cbdd046dce (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a94e9f91-f18c-4d38-8cbb-1b1392f5f97a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20260127)
Feb 20 09:55:26 np0005625203.localdomain systemd[1]: libpod-7f14b9c9dc80a2e23c64340b0a8d6d8f22b3da36011312ac3732d7cbdd046dce.scope: Deactivated successfully.
Feb 20 09:55:26 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:26.552 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:26 np0005625203.localdomain podman[315476]: 2026-02-20 09:55:26.60799245 +0000 UTC m=+0.058999096 container died 7f14b9c9dc80a2e23c64340b0a8d6d8f22b3da36011312ac3732d7cbdd046dce (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a94e9f91-f18c-4d38-8cbb-1b1392f5f97a, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 20 09:55:26 np0005625203.localdomain podman[315476]: 2026-02-20 09:55:26.637867424 +0000 UTC m=+0.088874030 container cleanup 7f14b9c9dc80a2e23c64340b0a8d6d8f22b3da36011312ac3732d7cbdd046dce (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a94e9f91-f18c-4d38-8cbb-1b1392f5f97a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:55:26 np0005625203.localdomain systemd[1]: libpod-conmon-7f14b9c9dc80a2e23c64340b0a8d6d8f22b3da36011312ac3732d7cbdd046dce.scope: Deactivated successfully.
Feb 20 09:55:26 np0005625203.localdomain podman[315479]: 2026-02-20 09:55:26.683927029 +0000 UTC m=+0.127538367 container remove 7f14b9c9dc80a2e23c64340b0a8d6d8f22b3da36011312ac3732d7cbdd046dce (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a94e9f91-f18c-4d38-8cbb-1b1392f5f97a, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Feb 20 09:55:26 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:26.725 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:26 np0005625203.localdomain kernel: device tape5b13e16-bb left promiscuous mode
Feb 20 09:55:26 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:26.749 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:26 np0005625203.localdomain dnsmasq[315370]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 0 addresses
Feb 20 09:55:26 np0005625203.localdomain dnsmasq-dhcp[315370]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/host
Feb 20 09:55:26 np0005625203.localdomain podman[315522]: 2026-02-20 09:55:26.798341469 +0000 UTC m=+0.062048201 container kill bc21eee9f6416ea1e3a61702dbc6b25f46a78953151b8c055f124f98cc3d24e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 20 09:55:26 np0005625203.localdomain dnsmasq-dhcp[315370]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/opts
Feb 20 09:55:26 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:26.888 262775 INFO neutron.agent.dhcp.agent [None req-07ed637e-c543-4efc-9c23-588e875c586d - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:55:26 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:26.889 262775 INFO neutron.agent.dhcp.agent [None req-07ed637e-c543-4efc-9c23-588e875c586d - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:55:26 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:26.889 262775 INFO neutron.agent.dhcp.agent [None req-07ed637e-c543-4efc-9c23-588e875c586d - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:55:27 np0005625203.localdomain systemd[1]: tmp-crun.0RqCQG.mount: Deactivated successfully.
Feb 20 09:55:27 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-a1ebf6f16b2f5a5fadeebc83231a32048271b1ed9a6e753c99bcb8ff9f275887-merged.mount: Deactivated successfully.
Feb 20 09:55:27 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7f14b9c9dc80a2e23c64340b0a8d6d8f22b3da36011312ac3732d7cbdd046dce-userdata-shm.mount: Deactivated successfully.
Feb 20 09:55:27 np0005625203.localdomain systemd[1]: run-netns-qdhcp\x2da94e9f91\x2df18c\x2d4d38\x2d8cbb\x2d1b1392f5f97a.mount: Deactivated successfully.
Feb 20 09:55:27 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:27.081 262775 INFO neutron.agent.dhcp.agent [None req-9a710972-280b-4e0c-acd7-ee1e1d9aab5b - - - - - -] DHCP configuration for ports {'cebd560e-7047-4cc1-9642-f5b7ec377d58', 'c71491e8-1f45-4d4c-89fe-850ea1c81708'} is completed
Feb 20 09:55:27 np0005625203.localdomain ceph-mon[296066]: pgmap v272: 177 pgs: 177 active+clean; 145 MiB data, 791 MiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 2.1 KiB/s wr, 45 op/s
Feb 20 09:55:27 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "1ece01ec-80d8-4513-b3ec-6bcff358bcd5", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 09:55:27 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:55:27 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:55:27.329 2 INFO neutron.agent.securitygroups_rpc [None req-09c13c9e-9ba3-4904-bcac-09b2f5e1651f 809f6e2027f9442d8bd2b94b11475b17 3a09b3407c3143c8ae0948ccb18c1e61 - - default default] Security group member updated ['b2e5856c-f1df-4bbc-8f9c-41698aa249c6']
Feb 20 09:55:27 np0005625203.localdomain dnsmasq[315370]: exiting on receipt of SIGTERM
Feb 20 09:55:27 np0005625203.localdomain podman[315560]: 2026-02-20 09:55:27.379999364 +0000 UTC m=+0.070164012 container kill bc21eee9f6416ea1e3a61702dbc6b25f46a78953151b8c055f124f98cc3d24e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 20 09:55:27 np0005625203.localdomain systemd[1]: libpod-bc21eee9f6416ea1e3a61702dbc6b25f46a78953151b8c055f124f98cc3d24e5.scope: Deactivated successfully.
Feb 20 09:55:27 np0005625203.localdomain podman[315574]: 2026-02-20 09:55:27.448951007 +0000 UTC m=+0.055571320 container died bc21eee9f6416ea1e3a61702dbc6b25f46a78953151b8c055f124f98cc3d24e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 20 09:55:27 np0005625203.localdomain podman[315574]: 2026-02-20 09:55:27.479944706 +0000 UTC m=+0.086564979 container cleanup bc21eee9f6416ea1e3a61702dbc6b25f46a78953151b8c055f124f98cc3d24e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:55:27 np0005625203.localdomain systemd[1]: libpod-conmon-bc21eee9f6416ea1e3a61702dbc6b25f46a78953151b8c055f124f98cc3d24e5.scope: Deactivated successfully.
Feb 20 09:55:27 np0005625203.localdomain podman[315576]: 2026-02-20 09:55:27.525396582 +0000 UTC m=+0.122764689 container remove bc21eee9f6416ea1e3a61702dbc6b25f46a78953151b8c055f124f98cc3d24e5 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb 20 09:55:27 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:27.538 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:27 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:27Z|00217|binding|INFO|Releasing lport c71491e8-1f45-4d4c-89fe-850ea1c81708 from this chassis (sb_readonly=0)
Feb 20 09:55:27 np0005625203.localdomain kernel: device tapc71491e8-1f left promiscuous mode
Feb 20 09:55:27 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:27Z|00218|binding|INFO|Setting lport c71491e8-1f45-4d4c-89fe-850ea1c81708 down in Southbound
Feb 20 09:55:27 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:27.546 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=c71491e8-1f45-4d4c-89fe-850ea1c81708) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:55:27 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:27.548 161112 INFO neutron.agent.ovn.metadata.agent [-] Port c71491e8-1f45-4d4c-89fe-850ea1c81708 in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 unbound from our chassis
Feb 20 09:55:27 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:27.550 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 811e2462-6872-485d-9c09-d2dd9cb25273 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:55:27 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:27.551 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[7bdf61b7-f270-4faf-85aa-fa15dc87856d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:55:27 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:27.561 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:28 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-1dcc1a221dff644dac5bd8716ff6539ad13703bf4e6e63404e7bd7d36ff4ffe0-merged.mount: Deactivated successfully.
Feb 20 09:55:28 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bc21eee9f6416ea1e3a61702dbc6b25f46a78953151b8c055f124f98cc3d24e5-userdata-shm.mount: Deactivated successfully.
Feb 20 09:55:28 np0005625203.localdomain systemd[1]: run-netns-qdhcp\x2d811e2462\x2d6872\x2d485d\x2d9c09\x2dd2dd9cb25273.mount: Deactivated successfully.
Feb 20 09:55:28 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:55:28.219 2 INFO neutron.agent.securitygroups_rpc [None req-df2a63c6-7355-4f99-a5c3-49ea7a77359b 809f6e2027f9442d8bd2b94b11475b17 3a09b3407c3143c8ae0948ccb18c1e61 - - default default] Security group member updated ['b2e5856c-f1df-4bbc-8f9c-41698aa249c6']
Feb 20 09:55:28 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "1ece01ec-80d8-4513-b3ec-6bcff358bcd5", "format": "json"}]: dispatch
Feb 20 09:55:28 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:28.287 262775 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:55:28 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:28.439 262775 INFO neutron.agent.linux.ip_lib [None req-8c3b7ea9-f312-460b-a640-2c9f954070fe - - - - - -] Device tap9776095e-8e cannot be used as it has no MAC address
Feb 20 09:55:28 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:28.463 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:28 np0005625203.localdomain kernel: device tap9776095e-8e entered promiscuous mode
Feb 20 09:55:28 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581328.4713] manager: (tap9776095e-8e): new Generic device (/org/freedesktop/NetworkManager/Devices/44)
Feb 20 09:55:28 np0005625203.localdomain systemd-udevd[315614]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 09:55:28 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:28Z|00219|binding|INFO|Claiming lport 9776095e-8ec9-449a-aac3-94da58b5e4c3 for this chassis.
Feb 20 09:55:28 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:28.473 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:28 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:28Z|00220|binding|INFO|9776095e-8ec9-449a-aac3-94da58b5e4c3: Claiming unknown
Feb 20 09:55:28 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:28Z|00221|binding|INFO|Setting lport 9776095e-8ec9-449a-aac3-94da58b5e4c3 ovn-installed in OVS
Feb 20 09:55:28 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:28.481 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:28 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:28.484 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:28 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:28.506 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:28 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:28.550 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:28 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:28.574 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:28 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:28Z|00222|binding|INFO|Setting lport 9776095e-8ec9-449a-aac3-94da58b5e4c3 up in Southbound
Feb 20 09:55:28 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:28.624 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=9776095e-8ec9-449a-aac3-94da58b5e4c3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:55:28 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:28.626 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 9776095e-8ec9-449a-aac3-94da58b5e4c3 in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 bound to our chassis
Feb 20 09:55:28 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:28.628 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 811e2462-6872-485d-9c09-d2dd9cb25273 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:55:28 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:28.629 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[e90505aa-c521-4761-9804-d9d3fba52b47]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:55:28 np0005625203.localdomain sshd[315628]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:55:28 np0005625203.localdomain podman[240359]: time="2026-02-20T09:55:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:55:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:55:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 155894 "" "Go-http-client/1.1"
Feb 20 09:55:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:55:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18300 "" "Go-http-client/1.1"
Feb 20 09:55:29 np0005625203.localdomain sshd[315628]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:55:29 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:55:29.151 2 INFO neutron.agent.securitygroups_rpc [None req-abf59bbc-b23d-40a5-812c-25dd2e2a84ba 809f6e2027f9442d8bd2b94b11475b17 3a09b3407c3143c8ae0948ccb18c1e61 - - default default] Security group member updated ['b2e5856c-f1df-4bbc-8f9c-41698aa249c6']
Feb 20 09:55:29 np0005625203.localdomain ceph-mon[296066]: pgmap v273: 177 pgs: 177 active+clean; 145 MiB data, 791 MiB used, 41 GiB / 42 GiB avail; 29 KiB/s rd, 1.9 KiB/s wr, 39 op/s
Feb 20 09:55:29 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:55:29 np0005625203.localdomain podman[315671]: 
Feb 20 09:55:29 np0005625203.localdomain podman[315671]: 2026-02-20 09:55:29.382625801 +0000 UTC m=+0.094569947 container create 5f8f84b8c459b1a84ad71bafc5c52d73bf7ab7f02d28826420cb9c3a02b0f841 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Feb 20 09:55:29 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:29.423 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:29 np0005625203.localdomain systemd[1]: Started libpod-conmon-5f8f84b8c459b1a84ad71bafc5c52d73bf7ab7f02d28826420cb9c3a02b0f841.scope.
Feb 20 09:55:29 np0005625203.localdomain podman[315671]: 2026-02-20 09:55:29.336260227 +0000 UTC m=+0.048204403 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:55:29 np0005625203.localdomain systemd[1]: tmp-crun.GH5mHG.mount: Deactivated successfully.
Feb 20 09:55:29 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:55:29 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:55:29 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80c8cedf4a74ed3201dd45c078907a31d6047dc7e32d349bd8ad17ed71735edb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:55:29 np0005625203.localdomain podman[315671]: 2026-02-20 09:55:29.464330088 +0000 UTC m=+0.176274234 container init 5f8f84b8c459b1a84ad71bafc5c52d73bf7ab7f02d28826420cb9c3a02b0f841 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:55:29 np0005625203.localdomain podman[315671]: 2026-02-20 09:55:29.474612277 +0000 UTC m=+0.186556433 container start 5f8f84b8c459b1a84ad71bafc5c52d73bf7ab7f02d28826420cb9c3a02b0f841 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0)
Feb 20 09:55:29 np0005625203.localdomain dnsmasq[315695]: started, version 2.85 cachesize 150
Feb 20 09:55:29 np0005625203.localdomain dnsmasq[315695]: DNS service limited to local subnets
Feb 20 09:55:29 np0005625203.localdomain dnsmasq[315695]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:55:29 np0005625203.localdomain dnsmasq[315695]: warning: no upstream servers configured
Feb 20 09:55:29 np0005625203.localdomain dnsmasq-dhcp[315695]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Feb 20 09:55:29 np0005625203.localdomain dnsmasq[315695]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 0 addresses
Feb 20 09:55:29 np0005625203.localdomain dnsmasq-dhcp[315695]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/host
Feb 20 09:55:29 np0005625203.localdomain dnsmasq-dhcp[315695]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/opts
Feb 20 09:55:29 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:29.539 262775 INFO neutron.agent.dhcp.agent [None req-8c3b7ea9-f312-460b-a640-2c9f954070fe - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:55:28Z, description=, device_id=3d85a756-66e6-4ab2-822c-117b1630c4ea, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4dda2e0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4de0ee0>], id=091f73f2-6037-4bc2-b656-e321a76f6335, ip_allocation=immediate, mac_address=fa:16:3e:a3:86:74, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:54:44Z, description=, dns_domain=, id=811e2462-6872-485d-9c09-d2dd9cb25273, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-610089291, port_security_enabled=True, project_id=cb36e48ce4264babb412d413a8bf7b9f, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=47373, qos_policy_id=None, revision_number=18, router:external=False, shared=False, standard_attr_id=1864, status=ACTIVE, subnets=['6e91d66b-ec55-4da8-a1d7-ac0722e4188f'], tags=[], tenant_id=cb36e48ce4264babb412d413a8bf7b9f, updated_at=2026-02-20T09:55:27Z, vlan_transparent=None, network_id=811e2462-6872-485d-9c09-d2dd9cb25273, port_security_enabled=False, project_id=cb36e48ce4264babb412d413a8bf7b9f, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2065, status=DOWN, tags=[], tenant_id=cb36e48ce4264babb412d413a8bf7b9f, updated_at=2026-02-20T09:55:28Z on network 811e2462-6872-485d-9c09-d2dd9cb25273
Feb 20 09:55:29 np0005625203.localdomain podman[315688]: 2026-02-20 09:55:29.543494898 +0000 UTC m=+0.081286626 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 20 09:55:29 np0005625203.localdomain podman[315688]: 2026-02-20 09:55:29.572586878 +0000 UTC m=+0.110378546 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent)
Feb 20 09:55:29 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:55:29 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:29.632 262775 INFO neutron.agent.dhcp.agent [None req-c8dc394a-86c9-47a2-b482-2a2c81189379 - - - - - -] DHCP configuration for ports {'cebd560e-7047-4cc1-9642-f5b7ec377d58'} is completed
Feb 20 09:55:29 np0005625203.localdomain dnsmasq[315695]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 1 addresses
Feb 20 09:55:29 np0005625203.localdomain dnsmasq-dhcp[315695]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/host
Feb 20 09:55:29 np0005625203.localdomain dnsmasq-dhcp[315695]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/opts
Feb 20 09:55:29 np0005625203.localdomain podman[315726]: 2026-02-20 09:55:29.720410731 +0000 UTC m=+0.059106149 container kill 5f8f84b8c459b1a84ad71bafc5c52d73bf7ab7f02d28826420cb9c3a02b0f841 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:55:29 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:29.960 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:29 np0005625203.localdomain kernel: device tap9776095e-8e left promiscuous mode
Feb 20 09:55:29 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:29Z|00223|binding|INFO|Releasing lport 9776095e-8ec9-449a-aac3-94da58b5e4c3 from this chassis (sb_readonly=0)
Feb 20 09:55:29 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:29Z|00224|binding|INFO|Setting lport 9776095e-8ec9-449a-aac3-94da58b5e4c3 down in Southbound
Feb 20 09:55:29 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:29.969 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=9776095e-8ec9-449a-aac3-94da58b5e4c3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:55:29 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:29.971 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 9776095e-8ec9-449a-aac3-94da58b5e4c3 in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 unbound from our chassis
Feb 20 09:55:29 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:29.973 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 811e2462-6872-485d-9c09-d2dd9cb25273 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:55:29 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:29.973 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[b034a323-f073-4ca9-8723-4c7992f260fc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:55:29 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:29.987 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:29 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:29.993 262775 INFO neutron.agent.dhcp.agent [None req-cb95a30a-8c34-4cb7-9442-7a4be2c4dfd0 - - - - - -] DHCP configuration for ports {'091f73f2-6037-4bc2-b656-e321a76f6335'} is completed
Feb 20 09:55:30 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:55:30.328 2 INFO neutron.agent.securitygroups_rpc [None req-c4977344-b1fb-45a9-a725-767a5df232d2 809f6e2027f9442d8bd2b94b11475b17 3a09b3407c3143c8ae0948ccb18c1e61 - - default default] Security group member updated ['b2e5856c-f1df-4bbc-8f9c-41698aa249c6']
Feb 20 09:55:30 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:30.369 262775 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:55:30 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:30.815 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:31 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:55:31.054 2 INFO neutron.agent.securitygroups_rpc [None req-3c26b833-1dd4-4db0-a9c6-673823ae81db 809f6e2027f9442d8bd2b94b11475b17 3a09b3407c3143c8ae0948ccb18c1e61 - - default default] Security group member updated ['b2e5856c-f1df-4bbc-8f9c-41698aa249c6']
Feb 20 09:55:31 np0005625203.localdomain ceph-mon[296066]: pgmap v274: 177 pgs: 177 active+clean; 145 MiB data, 791 MiB used, 41 GiB / 42 GiB avail; 25 KiB/s rd, 4.0 KiB/s wr, 33 op/s
Feb 20 09:55:31 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "1ece01ec-80d8-4513-b3ec-6bcff358bcd5", "snap_name": "76272f77-26a4-41f3-97ff-7d9d42de32a4", "format": "json"}]: dispatch
Feb 20 09:55:31 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:31.561 262775 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:55:28Z, description=, device_id=3d85a756-66e6-4ab2-822c-117b1630c4ea, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4de93d0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4de92e0>], id=091f73f2-6037-4bc2-b656-e321a76f6335, ip_allocation=immediate, mac_address=fa:16:3e:a3:86:74, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:54:44Z, description=, dns_domain=, id=811e2462-6872-485d-9c09-d2dd9cb25273, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-610089291, port_security_enabled=True, project_id=cb36e48ce4264babb412d413a8bf7b9f, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=47373, qos_policy_id=None, revision_number=18, router:external=False, shared=False, standard_attr_id=1864, status=ACTIVE, subnets=['6e91d66b-ec55-4da8-a1d7-ac0722e4188f'], tags=[], tenant_id=cb36e48ce4264babb412d413a8bf7b9f, updated_at=2026-02-20T09:55:27Z, vlan_transparent=None, network_id=811e2462-6872-485d-9c09-d2dd9cb25273, port_security_enabled=False, project_id=cb36e48ce4264babb412d413a8bf7b9f, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2065, status=DOWN, tags=[], tenant_id=cb36e48ce4264babb412d413a8bf7b9f, updated_at=2026-02-20T09:55:28Z on network 811e2462-6872-485d-9c09-d2dd9cb25273
Feb 20 09:55:31 np0005625203.localdomain dnsmasq[315695]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 1 addresses
Feb 20 09:55:31 np0005625203.localdomain dnsmasq-dhcp[315695]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/host
Feb 20 09:55:31 np0005625203.localdomain podman[315766]: 2026-02-20 09:55:31.767308847 +0000 UTC m=+0.065499717 container kill 5f8f84b8c459b1a84ad71bafc5c52d73bf7ab7f02d28826420cb9c3a02b0f841 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Feb 20 09:55:31 np0005625203.localdomain dnsmasq-dhcp[315695]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/opts
Feb 20 09:55:31 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:31.799 262775 ERROR neutron.agent.dhcp.agent [-] Unable to reload_allocations dhcp for 811e2462-6872-485d-9c09-d2dd9cb25273.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap9776095e-8e not found in namespace qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273.
Feb 20 09:55:31 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:31.799 262775 ERROR neutron.agent.dhcp.agent Traceback (most recent call last):
Feb 20 09:55:31 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:31.799 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver
Feb 20 09:55:31 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:31.799 262775 ERROR neutron.agent.dhcp.agent     rv = getattr(driver, action)(**action_kwargs)
Feb 20 09:55:31 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:31.799 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations
Feb 20 09:55:31 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:31.799 262775 ERROR neutron.agent.dhcp.agent     self.device_manager.update(self.network, self.interface_name)
Feb 20 09:55:31 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:31.799 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update
Feb 20 09:55:31 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:31.799 262775 ERROR neutron.agent.dhcp.agent     self._set_default_route(network, device_name)
Feb 20 09:55:31 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:31.799 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route
Feb 20 09:55:31 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:31.799 262775 ERROR neutron.agent.dhcp.agent     self._set_default_route_ip_version(network, device_name,
Feb 20 09:55:31 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:31.799 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version
Feb 20 09:55:31 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:31.799 262775 ERROR neutron.agent.dhcp.agent     gateway = device.route.get_gateway(ip_version=ip_version)
Feb 20 09:55:31 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:31.799 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway
Feb 20 09:55:31 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:31.799 262775 ERROR neutron.agent.dhcp.agent     routes = self.list_routes(ip_version, scope=scope, table=table)
Feb 20 09:55:31 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:31.799 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes
Feb 20 09:55:31 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:31.799 262775 ERROR neutron.agent.dhcp.agent     return list_ip_routes(self._parent.namespace, ip_version, scope=scope,
Feb 20 09:55:31 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:31.799 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes
Feb 20 09:55:31 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:31.799 262775 ERROR neutron.agent.dhcp.agent     routes = privileged.list_ip_routes(namespace, ip_version, device=device,
Feb 20 09:55:31 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:31.799 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f
Feb 20 09:55:31 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:31.799 262775 ERROR neutron.agent.dhcp.agent     return self(f, *args, **kw)
Feb 20 09:55:31 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:31.799 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__
Feb 20 09:55:31 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:31.799 262775 ERROR neutron.agent.dhcp.agent     do = self.iter(retry_state=retry_state)
Feb 20 09:55:31 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:31.799 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter
Feb 20 09:55:31 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:31.799 262775 ERROR neutron.agent.dhcp.agent     return fut.result()
Feb 20 09:55:31 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:31.799 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result
Feb 20 09:55:31 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:31.799 262775 ERROR neutron.agent.dhcp.agent     return self.__get_result()
Feb 20 09:55:31 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:31.799 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result
Feb 20 09:55:31 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:31.799 262775 ERROR neutron.agent.dhcp.agent     raise self._exception
Feb 20 09:55:31 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:31.799 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__
Feb 20 09:55:31 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:31.799 262775 ERROR neutron.agent.dhcp.agent     result = fn(*args, **kwargs)
Feb 20 09:55:31 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:31.799 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap
Feb 20 09:55:31 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:31.799 262775 ERROR neutron.agent.dhcp.agent     return self.channel.remote_call(name, args, kwargs,
Feb 20 09:55:31 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:31.799 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call
Feb 20 09:55:31 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:31.799 262775 ERROR neutron.agent.dhcp.agent     raise exc_type(*result[2])
Feb 20 09:55:31 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:31.799 262775 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap9776095e-8e not found in namespace qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273.
Feb 20 09:55:31 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:31.799 262775 ERROR neutron.agent.dhcp.agent 
Feb 20 09:55:31 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:31.802 262775 INFO neutron.agent.dhcp.agent [None req-d64b46c8-fd92-4e74-ae79-8904232194cb - - - - - -] Synchronizing state
Feb 20 09:55:32 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:32.022 262775 INFO neutron.agent.dhcp.agent [None req-3236c745-fb55-4940-9236-a32777702c6b - - - - - -] DHCP configuration for ports {'091f73f2-6037-4bc2-b656-e321a76f6335'} is completed
Feb 20 09:55:32 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:32.166 262775 INFO neutron.agent.dhcp.agent [None req-d82885bb-6c20-4bd9-a5a2-f0dbdc21210c - - - - - -] All active networks have been fetched through RPC.
Feb 20 09:55:32 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:32.167 262775 INFO neutron.agent.dhcp.agent [-] Starting network 519ed234-0c28-4a63-b6ed-1122a8d9dfc9 dhcp configuration
Feb 20 09:55:32 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:32.168 262775 INFO neutron.agent.dhcp.agent [-] Finished network 519ed234-0c28-4a63-b6ed-1122a8d9dfc9 dhcp configuration
Feb 20 09:55:32 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:32.168 262775 INFO neutron.agent.dhcp.agent [-] Starting network 545dd7fa-aa2a-4b99-a111-22d2b369ed0a dhcp configuration
Feb 20 09:55:32 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:32.168 262775 INFO neutron.agent.dhcp.agent [-] Finished network 545dd7fa-aa2a-4b99-a111-22d2b369ed0a dhcp configuration
Feb 20 09:55:32 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:32.168 262775 INFO neutron.agent.dhcp.agent [-] Starting network 811e2462-6872-485d-9c09-d2dd9cb25273 dhcp configuration
Feb 20 09:55:32 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:32.315 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:32 np0005625203.localdomain dnsmasq[315695]: exiting on receipt of SIGTERM
Feb 20 09:55:32 np0005625203.localdomain podman[315798]: 2026-02-20 09:55:32.348846709 +0000 UTC m=+0.056284832 container kill 5f8f84b8c459b1a84ad71bafc5c52d73bf7ab7f02d28826420cb9c3a02b0f841 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:55:32 np0005625203.localdomain systemd[1]: libpod-5f8f84b8c459b1a84ad71bafc5c52d73bf7ab7f02d28826420cb9c3a02b0f841.scope: Deactivated successfully.
Feb 20 09:55:32 np0005625203.localdomain podman[315813]: 2026-02-20 09:55:32.420688602 +0000 UTC m=+0.053402443 container died 5f8f84b8c459b1a84ad71bafc5c52d73bf7ab7f02d28826420cb9c3a02b0f841 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 20 09:55:32 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5f8f84b8c459b1a84ad71bafc5c52d73bf7ab7f02d28826420cb9c3a02b0f841-userdata-shm.mount: Deactivated successfully.
Feb 20 09:55:32 np0005625203.localdomain podman[315813]: 2026-02-20 09:55:32.470807382 +0000 UTC m=+0.103521153 container remove 5f8f84b8c459b1a84ad71bafc5c52d73bf7ab7f02d28826420cb9c3a02b0f841 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:55:32 np0005625203.localdomain systemd[1]: libpod-conmon-5f8f84b8c459b1a84ad71bafc5c52d73bf7ab7f02d28826420cb9c3a02b0f841.scope: Deactivated successfully.
Feb 20 09:55:32 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:32.551 262775 INFO neutron.agent.linux.ip_lib [-] Device tap9776095e-8e cannot be used as it has no MAC address
Feb 20 09:55:32 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:32.577 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:32 np0005625203.localdomain kernel: device tap9776095e-8e entered promiscuous mode
Feb 20 09:55:32 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581332.5865] manager: (tap9776095e-8e): new Generic device (/org/freedesktop/NetworkManager/Devices/45)
Feb 20 09:55:32 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:32Z|00225|binding|INFO|Claiming lport 9776095e-8ec9-449a-aac3-94da58b5e4c3 for this chassis.
Feb 20 09:55:32 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:32Z|00226|binding|INFO|9776095e-8ec9-449a-aac3-94da58b5e4c3: Claiming unknown
Feb 20 09:55:32 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:32.591 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:32 np0005625203.localdomain systemd-udevd[315844]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 09:55:32 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:55:32.591 2 INFO neutron.agent.securitygroups_rpc [None req-2e3d6688-1793-41b9-a2f2-f815ee6132fe 061949b19d2146debcdb4e85c8db9eec b9f8945a2560410b988e395a1db7710f - - default default] Security group member updated ['9c30e397-a710-4013-bf42-b0dd9762b00a']
Feb 20 09:55:32 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:32.597 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=9776095e-8ec9-449a-aac3-94da58b5e4c3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:55:32 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:32.599 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 9776095e-8ec9-449a-aac3-94da58b5e4c3 in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 bound to our chassis
Feb 20 09:55:32 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:32.600 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 811e2462-6872-485d-9c09-d2dd9cb25273 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:55:32 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:32Z|00227|binding|INFO|Setting lport 9776095e-8ec9-449a-aac3-94da58b5e4c3 ovn-installed in OVS
Feb 20 09:55:32 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:32Z|00228|binding|INFO|Setting lport 9776095e-8ec9-449a-aac3-94da58b5e4c3 up in Southbound
Feb 20 09:55:32 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:32.602 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[bf5c8321-9e48-4a93-966d-43b035a35107]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:55:32 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:32.602 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:32 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:32.606 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:32 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:32.643 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:32 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:32.691 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:32 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:32.747 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:32 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-80c8cedf4a74ed3201dd45c078907a31d6047dc7e32d349bd8ad17ed71735edb-merged.mount: Deactivated successfully.
Feb 20 09:55:32 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:55:32.968 2 INFO neutron.agent.securitygroups_rpc [None req-d888e083-a1eb-4717-8e36-0999aadb5157 809f6e2027f9442d8bd2b94b11475b17 3a09b3407c3143c8ae0948ccb18c1e61 - - default default] Security group member updated ['b2e5856c-f1df-4bbc-8f9c-41698aa249c6']
Feb 20 09:55:33 np0005625203.localdomain ceph-mon[296066]: pgmap v275: 177 pgs: 177 active+clean; 145 MiB data, 791 MiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 2.7 KiB/s wr, 1 op/s
Feb 20 09:55:33 np0005625203.localdomain podman[315898]: 
Feb 20 09:55:33 np0005625203.localdomain podman[315898]: 2026-02-20 09:55:33.62145259 +0000 UTC m=+0.090316695 container create 9498efc5e7b344fbfab3b4f65183a6094244470d1c70e62907db1691ebd7e816 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Feb 20 09:55:33 np0005625203.localdomain systemd[1]: Started libpod-conmon-9498efc5e7b344fbfab3b4f65183a6094244470d1c70e62907db1691ebd7e816.scope.
Feb 20 09:55:33 np0005625203.localdomain podman[315898]: 2026-02-20 09:55:33.57878474 +0000 UTC m=+0.047648865 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:55:33 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:55:33 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f03cd85b2dc814712a1dbc1e1fd4e48e053e95262452cbf9a0dd38cf6f52973b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:55:33 np0005625203.localdomain podman[315898]: 2026-02-20 09:55:33.698589287 +0000 UTC m=+0.167453382 container init 9498efc5e7b344fbfab3b4f65183a6094244470d1c70e62907db1691ebd7e816 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 20 09:55:33 np0005625203.localdomain podman[315898]: 2026-02-20 09:55:33.707794161 +0000 UTC m=+0.176658256 container start 9498efc5e7b344fbfab3b4f65183a6094244470d1c70e62907db1691ebd7e816 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:55:33 np0005625203.localdomain dnsmasq[315916]: started, version 2.85 cachesize 150
Feb 20 09:55:33 np0005625203.localdomain dnsmasq[315916]: DNS service limited to local subnets
Feb 20 09:55:33 np0005625203.localdomain dnsmasq[315916]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:55:33 np0005625203.localdomain dnsmasq[315916]: warning: no upstream servers configured
Feb 20 09:55:33 np0005625203.localdomain dnsmasq-dhcp[315916]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Feb 20 09:55:33 np0005625203.localdomain dnsmasq[315916]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 1 addresses
Feb 20 09:55:33 np0005625203.localdomain dnsmasq-dhcp[315916]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/host
Feb 20 09:55:33 np0005625203.localdomain dnsmasq-dhcp[315916]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/opts
Feb 20 09:55:33 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:33.776 262775 INFO neutron.agent.dhcp.agent [-] Finished network 811e2462-6872-485d-9c09-d2dd9cb25273 dhcp configuration
Feb 20 09:55:33 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:33.776 262775 INFO neutron.agent.dhcp.agent [None req-d82885bb-6c20-4bd9-a5a2-f0dbdc21210c - - - - - -] Synchronizing state complete
Feb 20 09:55:33 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:33.777 262775 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:55:33 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:33.927 262775 INFO neutron.agent.dhcp.agent [None req-ed7cc9a1-6869-4094-ae35-78eba78ab537 - - - - - -] DHCP configuration for ports {'9776095e-8ec9-449a-aac3-94da58b5e4c3', '091f73f2-6037-4bc2-b656-e321a76f6335', 'cebd560e-7047-4cc1-9642-f5b7ec377d58'} is completed
Feb 20 09:55:34 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "1ece01ec-80d8-4513-b3ec-6bcff358bcd5", "snap_name": "94e02352-6a28-4288-9072-c7133a6151bf", "format": "json"}]: dispatch
Feb 20 09:55:34 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:55:34 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:34.425 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:34 np0005625203.localdomain dnsmasq[315916]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 0 addresses
Feb 20 09:55:34 np0005625203.localdomain dnsmasq-dhcp[315916]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/host
Feb 20 09:55:34 np0005625203.localdomain dnsmasq-dhcp[315916]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/opts
Feb 20 09:55:34 np0005625203.localdomain podman[315933]: 2026-02-20 09:55:34.464973486 +0000 UTC m=+0.064332901 container kill 9498efc5e7b344fbfab3b4f65183a6094244470d1c70e62907db1691ebd7e816 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 20 09:55:34 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:55:34.956 2 INFO neutron.agent.securitygroups_rpc [None req-3b974f9c-5163-47e8-89d2-00acf380ad82 061949b19d2146debcdb4e85c8db9eec b9f8945a2560410b988e395a1db7710f - - default default] Security group member updated ['9c30e397-a710-4013-bf42-b0dd9762b00a']
Feb 20 09:55:35 np0005625203.localdomain ceph-mon[296066]: pgmap v276: 177 pgs: 177 active+clean; 145 MiB data, 791 MiB used, 41 GiB / 42 GiB avail; 477 B/s rd, 2.5 KiB/s wr, 1 op/s
Feb 20 09:55:35 np0005625203.localdomain dnsmasq[315916]: exiting on receipt of SIGTERM
Feb 20 09:55:35 np0005625203.localdomain podman[315972]: 2026-02-20 09:55:35.482987512 +0000 UTC m=+0.060606146 container kill 9498efc5e7b344fbfab3b4f65183a6094244470d1c70e62907db1691ebd7e816 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 20 09:55:35 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:55:35 np0005625203.localdomain systemd[1]: libpod-9498efc5e7b344fbfab3b4f65183a6094244470d1c70e62907db1691ebd7e816.scope: Deactivated successfully.
Feb 20 09:55:35 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:35.520 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:35 np0005625203.localdomain podman[315986]: 2026-02-20 09:55:35.594732799 +0000 UTC m=+0.085959800 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 20 09:55:35 np0005625203.localdomain podman[315984]: 2026-02-20 09:55:35.622349634 +0000 UTC m=+0.118799747 container died 9498efc5e7b344fbfab3b4f65183a6094244470d1c70e62907db1691ebd7e816 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:55:35 np0005625203.localdomain systemd[1]: tmp-crun.RmbiaA.mount: Deactivated successfully.
Feb 20 09:55:35 np0005625203.localdomain podman[315984]: 2026-02-20 09:55:35.666451578 +0000 UTC m=+0.162901661 container cleanup 9498efc5e7b344fbfab3b4f65183a6094244470d1c70e62907db1691ebd7e816 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:55:35 np0005625203.localdomain systemd[1]: libpod-conmon-9498efc5e7b344fbfab3b4f65183a6094244470d1c70e62907db1691ebd7e816.scope: Deactivated successfully.
Feb 20 09:55:35 np0005625203.localdomain podman[315987]: 2026-02-20 09:55:35.699922593 +0000 UTC m=+0.191004900 container remove 9498efc5e7b344fbfab3b4f65183a6094244470d1c70e62907db1691ebd7e816 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 20 09:55:35 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:35.712 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:35 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:35Z|00229|binding|INFO|Releasing lport 9776095e-8ec9-449a-aac3-94da58b5e4c3 from this chassis (sb_readonly=0)
Feb 20 09:55:35 np0005625203.localdomain kernel: device tap9776095e-8e left promiscuous mode
Feb 20 09:55:35 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:35Z|00230|binding|INFO|Setting lport 9776095e-8ec9-449a-aac3-94da58b5e4c3 down in Southbound
Feb 20 09:55:35 np0005625203.localdomain podman[315986]: 2026-02-20 09:55:35.719130488 +0000 UTC m=+0.210357509 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Feb 20 09:55:35 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:35.727 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=9776095e-8ec9-449a-aac3-94da58b5e4c3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:55:35 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:35.729 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 9776095e-8ec9-449a-aac3-94da58b5e4c3 in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 unbound from our chassis
Feb 20 09:55:35 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:35.731 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 811e2462-6872-485d-9c09-d2dd9cb25273 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:55:35 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:35.732 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[96dac24e-7445-4237-8eae-15f00a903e18]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:55:35 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:35.735 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:35 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:55:35 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:35.817 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:36 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:36.324 262775 INFO neutron.agent.linux.ip_lib [None req-551c0468-9d39-47b8-94fa-d952926879d4 - - - - - -] Device tap7658129d-69 cannot be used as it has no MAC address
Feb 20 09:55:36 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:36.383 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:36 np0005625203.localdomain kernel: device tap7658129d-69 entered promiscuous mode
Feb 20 09:55:36 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581336.3913] manager: (tap7658129d-69): new Generic device (/org/freedesktop/NetworkManager/Devices/46)
Feb 20 09:55:36 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:36Z|00231|binding|INFO|Claiming lport 7658129d-69b4-41d7-9baa-50264b0c3e63 for this chassis.
Feb 20 09:55:36 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:36Z|00232|binding|INFO|7658129d-69b4-41d7-9baa-50264b0c3e63: Claiming unknown
Feb 20 09:55:36 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:36.392 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:36 np0005625203.localdomain systemd-udevd[316049]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 09:55:36 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:36.398 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-89f3f5d9-5c8c-4a82-abd1-61dbaaa7f37a', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-89f3f5d9-5c8c-4a82-abd1-61dbaaa7f37a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a09b3407c3143c8ae0948ccb18c1e61', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f9c56fb6-e554-4e38-9c79-0d9a4b411b62, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=7658129d-69b4-41d7-9baa-50264b0c3e63) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:55:36 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:36.400 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 7658129d-69b4-41d7-9baa-50264b0c3e63 in datapath 89f3f5d9-5c8c-4a82-abd1-61dbaaa7f37a bound to our chassis
Feb 20 09:55:36 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:36.402 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 89f3f5d9-5c8c-4a82-abd1-61dbaaa7f37a or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:55:36 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:36.403 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[29711899-7723-4014-aa38-f074c9171645]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:55:36 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap7658129d-69: No such device
Feb 20 09:55:36 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:36Z|00233|binding|INFO|Setting lport 7658129d-69b4-41d7-9baa-50264b0c3e63 ovn-installed in OVS
Feb 20 09:55:36 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:36Z|00234|binding|INFO|Setting lport 7658129d-69b4-41d7-9baa-50264b0c3e63 up in Southbound
Feb 20 09:55:36 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap7658129d-69: No such device
Feb 20 09:55:36 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:36.432 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:36 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap7658129d-69: No such device
Feb 20 09:55:36 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap7658129d-69: No such device
Feb 20 09:55:36 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap7658129d-69: No such device
Feb 20 09:55:36 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap7658129d-69: No such device
Feb 20 09:55:36 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap7658129d-69: No such device
Feb 20 09:55:36 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap7658129d-69: No such device
Feb 20 09:55:36 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:36.470 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:36 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f03cd85b2dc814712a1dbc1e1fd4e48e053e95262452cbf9a0dd38cf6f52973b-merged.mount: Deactivated successfully.
Feb 20 09:55:36 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9498efc5e7b344fbfab3b4f65183a6094244470d1c70e62907db1691ebd7e816-userdata-shm.mount: Deactivated successfully.
Feb 20 09:55:36 np0005625203.localdomain systemd[1]: run-netns-qdhcp\x2d811e2462\x2d6872\x2d485d\x2d9c09\x2dd2dd9cb25273.mount: Deactivated successfully.
Feb 20 09:55:36 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:36.508 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:36 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:36.800 262775 INFO neutron.agent.linux.ip_lib [None req-d11b4e75-8a7e-4bf8-8dae-fd2c78c05480 - - - - - -] Device tap2e9dfde9-1a cannot be used as it has no MAC address
Feb 20 09:55:36 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:36.835 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:36 np0005625203.localdomain kernel: device tap2e9dfde9-1a entered promiscuous mode
Feb 20 09:55:36 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:36Z|00235|binding|INFO|Claiming lport 2e9dfde9-1aab-46ff-b627-acda4eba4ec8 for this chassis.
Feb 20 09:55:36 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:36Z|00236|binding|INFO|2e9dfde9-1aab-46ff-b627-acda4eba4ec8: Claiming unknown
Feb 20 09:55:36 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581336.8407] manager: (tap2e9dfde9-1a): new Generic device (/org/freedesktop/NetworkManager/Devices/47)
Feb 20 09:55:36 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:36.841 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:36 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:36Z|00237|binding|INFO|Setting lport 2e9dfde9-1aab-46ff-b627-acda4eba4ec8 ovn-installed in OVS
Feb 20 09:55:36 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:36Z|00238|binding|INFO|Setting lport 2e9dfde9-1aab-46ff-b627-acda4eba4ec8 up in Southbound
Feb 20 09:55:36 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:36.848 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:36 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:36.851 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=2e9dfde9-1aab-46ff-b627-acda4eba4ec8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:55:36 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:36.856 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 2e9dfde9-1aab-46ff-b627-acda4eba4ec8 in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 bound to our chassis
Feb 20 09:55:36 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:36.859 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 811e2462-6872-485d-9c09-d2dd9cb25273 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:55:36 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:36.860 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[479534cf-e492-456e-88d6-8648189090af]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:55:36 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:36.885 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:36 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:36.933 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:36 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:36.970 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:55:37 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 09:55:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:55:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:55:37 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 09:55:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:55:37 np0005625203.localdomain ceph-mon[296066]: pgmap v277: 177 pgs: 177 active+clean; 145 MiB data, 791 MiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 4.4 KiB/s wr, 2 op/s
Feb 20 09:55:37 np0005625203.localdomain podman[316156]: 
Feb 20 09:55:37 np0005625203.localdomain podman[316156]: 2026-02-20 09:55:37.48502482 +0000 UTC m=+0.083760492 container create 3300f52d85744df0e603b97d26193b884bc44407fae28d55e756d987413cb02b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-89f3f5d9-5c8c-4a82-abd1-61dbaaa7f37a, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 20 09:55:37 np0005625203.localdomain systemd[1]: Started libpod-conmon-3300f52d85744df0e603b97d26193b884bc44407fae28d55e756d987413cb02b.scope.
Feb 20 09:55:37 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:55:37.533 2 INFO neutron.agent.securitygroups_rpc [None req-a4ab168f-7329-4461-8236-8f00fb1e3c92 809f6e2027f9442d8bd2b94b11475b17 3a09b3407c3143c8ae0948ccb18c1e61 - - default default] Security group member updated ['b2e5856c-f1df-4bbc-8f9c-41698aa249c6']
Feb 20 09:55:37 np0005625203.localdomain systemd[1]: tmp-crun.giFKfE.mount: Deactivated successfully.
Feb 20 09:55:37 np0005625203.localdomain podman[316156]: 2026-02-20 09:55:37.445219729 +0000 UTC m=+0.043955411 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:55:37 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:55:37 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7eed0a8c85d2962a4d6bf4f5d8f63bc67a61930356be654a28cc6c0990f4d76a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:55:37 np0005625203.localdomain podman[316156]: 2026-02-20 09:55:37.575042906 +0000 UTC m=+0.173778568 container init 3300f52d85744df0e603b97d26193b884bc44407fae28d55e756d987413cb02b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-89f3f5d9-5c8c-4a82-abd1-61dbaaa7f37a, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127)
Feb 20 09:55:37 np0005625203.localdomain podman[316156]: 2026-02-20 09:55:37.583681443 +0000 UTC m=+0.182417105 container start 3300f52d85744df0e603b97d26193b884bc44407fae28d55e756d987413cb02b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-89f3f5d9-5c8c-4a82-abd1-61dbaaa7f37a, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Feb 20 09:55:37 np0005625203.localdomain dnsmasq[316178]: started, version 2.85 cachesize 150
Feb 20 09:55:37 np0005625203.localdomain dnsmasq[316178]: DNS service limited to local subnets
Feb 20 09:55:37 np0005625203.localdomain dnsmasq[316178]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:55:37 np0005625203.localdomain dnsmasq[316178]: warning: no upstream servers configured
Feb 20 09:55:37 np0005625203.localdomain dnsmasq-dhcp[316178]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Feb 20 09:55:37 np0005625203.localdomain dnsmasq[316178]: read /var/lib/neutron/dhcp/89f3f5d9-5c8c-4a82-abd1-61dbaaa7f37a/addn_hosts - 0 addresses
Feb 20 09:55:37 np0005625203.localdomain dnsmasq-dhcp[316178]: read /var/lib/neutron/dhcp/89f3f5d9-5c8c-4a82-abd1-61dbaaa7f37a/host
Feb 20 09:55:37 np0005625203.localdomain dnsmasq-dhcp[316178]: read /var/lib/neutron/dhcp/89f3f5d9-5c8c-4a82-abd1-61dbaaa7f37a/opts
Feb 20 09:55:37 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:37.860 262775 INFO neutron.agent.dhcp.agent [None req-4d49c028-51b8-4899-956a-37009ce45a04 - - - - - -] DHCP configuration for ports {'1a523e93-af7f-43c2-83e8-65f63fcde633'} is completed
Feb 20 09:55:37 np0005625203.localdomain podman[316198]: 
Feb 20 09:55:37 np0005625203.localdomain podman[316198]: 2026-02-20 09:55:37.884003314 +0000 UTC m=+0.089823180 container create 42d26d4fd5f9790f0153fffcee04cd00800727716474e06736c19eea17b467a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127)
Feb 20 09:55:37 np0005625203.localdomain systemd[1]: Started libpod-conmon-42d26d4fd5f9790f0153fffcee04cd00800727716474e06736c19eea17b467a4.scope.
Feb 20 09:55:37 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:55:37 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f40945e24ab58962b2ddb02dd2a68f836e76e216d863163d62a323750ddd8f83/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:55:37 np0005625203.localdomain podman[316198]: 2026-02-20 09:55:37.84316198 +0000 UTC m=+0.048981916 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:55:37 np0005625203.localdomain podman[316198]: 2026-02-20 09:55:37.952172363 +0000 UTC m=+0.157992229 container init 42d26d4fd5f9790f0153fffcee04cd00800727716474e06736c19eea17b467a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 20 09:55:37 np0005625203.localdomain podman[316198]: 2026-02-20 09:55:37.967369953 +0000 UTC m=+0.173189819 container start 42d26d4fd5f9790f0153fffcee04cd00800727716474e06736c19eea17b467a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:55:37 np0005625203.localdomain dnsmasq[316243]: started, version 2.85 cachesize 150
Feb 20 09:55:37 np0005625203.localdomain dnsmasq[316243]: DNS service limited to local subnets
Feb 20 09:55:37 np0005625203.localdomain dnsmasq[316243]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:55:37 np0005625203.localdomain dnsmasq[316243]: warning: no upstream servers configured
Feb 20 09:55:37 np0005625203.localdomain dnsmasq[316243]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 0 addresses
Feb 20 09:55:38 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:38.019 262775 INFO neutron.agent.dhcp.agent [None req-d11b4e75-8a7e-4bf8-8dae-fd2c78c05480 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:55:36Z, description=, device_id=b80522af-26e8-4f33-91e0-7c78e4153dc6, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4ce44f0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4ce4820>], id=675efe2d-4d01-4ad9-8e60-a6c96bf39150, ip_allocation=immediate, mac_address=fa:16:3e:d7:e5:0b, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:54:44Z, description=, dns_domain=, id=811e2462-6872-485d-9c09-d2dd9cb25273, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-610089291, port_security_enabled=True, project_id=cb36e48ce4264babb412d413a8bf7b9f, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=47373, qos_policy_id=None, revision_number=20, router:external=False, shared=False, standard_attr_id=1864, status=ACTIVE, subnets=['63dadf02-6071-4544-a515-56758e32ea63'], tags=[], tenant_id=cb36e48ce4264babb412d413a8bf7b9f, updated_at=2026-02-20T09:55:35Z, vlan_transparent=None, network_id=811e2462-6872-485d-9c09-d2dd9cb25273, port_security_enabled=False, project_id=cb36e48ce4264babb412d413a8bf7b9f, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2104, status=DOWN, tags=[], tenant_id=cb36e48ce4264babb412d413a8bf7b9f, updated_at=2026-02-20T09:55:37Z on network 811e2462-6872-485d-9c09-d2dd9cb25273
Feb 20 09:55:38 np0005625203.localdomain dnsmasq[316178]: exiting on receipt of SIGTERM
Feb 20 09:55:38 np0005625203.localdomain podman[316236]: 2026-02-20 09:55:38.027609897 +0000 UTC m=+0.061987799 container kill 3300f52d85744df0e603b97d26193b884bc44407fae28d55e756d987413cb02b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-89f3f5d9-5c8c-4a82-abd1-61dbaaa7f37a, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb 20 09:55:38 np0005625203.localdomain systemd[1]: libpod-3300f52d85744df0e603b97d26193b884bc44407fae28d55e756d987413cb02b.scope: Deactivated successfully.
Feb 20 09:55:38 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:38.038 262775 INFO neutron.agent.linux.ip_lib [None req-5b60a3f0-40c4-4943-a36a-4a637259c1f3 - - - - - -] Device tap7babcff6-d7 cannot be used as it has no MAC address
Feb 20 09:55:38 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:38.076 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:38 np0005625203.localdomain kernel: device tap7babcff6-d7 entered promiscuous mode
Feb 20 09:55:38 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:38.085 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:38 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581338.0864] manager: (tap7babcff6-d7): new Generic device (/org/freedesktop/NetworkManager/Devices/48)
Feb 20 09:55:38 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:38Z|00239|binding|INFO|Claiming lport 7babcff6-d71d-4d28-bebf-6a33d9fdbcc9 for this chassis.
Feb 20 09:55:38 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:38Z|00240|binding|INFO|7babcff6-d71d-4d28-bebf-6a33d9fdbcc9: Claiming unknown
Feb 20 09:55:38 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:38.102 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-03f6e0ef-938e-46ab-a75d-e6de6dfcc7e6', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-03f6e0ef-938e-46ab-a75d-e6de6dfcc7e6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c44e13adebb4610b7c0cd2fdc62a5b7', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ad5a72c2-1fc9-4380-90dc-c87395348424, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=7babcff6-d71d-4d28-bebf-6a33d9fdbcc9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:55:38 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:38.103 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 7babcff6-d71d-4d28-bebf-6a33d9fdbcc9 in datapath 03f6e0ef-938e-46ab-a75d-e6de6dfcc7e6 bound to our chassis
Feb 20 09:55:38 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:38.105 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 03f6e0ef-938e-46ab-a75d-e6de6dfcc7e6 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:55:38 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:38.105 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[69c09b2f-11be-443f-bc09-1ff4e85b63bd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:55:38 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:38Z|00241|binding|INFO|Setting lport 7babcff6-d71d-4d28-bebf-6a33d9fdbcc9 ovn-installed in OVS
Feb 20 09:55:38 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:38Z|00242|binding|INFO|Setting lport 7babcff6-d71d-4d28-bebf-6a33d9fdbcc9 up in Southbound
Feb 20 09:55:38 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:38.129 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:38 np0005625203.localdomain podman[316253]: 2026-02-20 09:55:38.130955994 +0000 UTC m=+0.088442678 container died 3300f52d85744df0e603b97d26193b884bc44407fae28d55e756d987413cb02b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-89f3f5d9-5c8c-4a82-abd1-61dbaaa7f37a, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127)
Feb 20 09:55:38 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:38.135 262775 INFO neutron.agent.dhcp.agent [None req-47a258eb-666e-4711-a725-d2b0c381b4a2 - - - - - -] DHCP configuration for ports {'cebd560e-7047-4cc1-9642-f5b7ec377d58'} is completed
Feb 20 09:55:38 np0005625203.localdomain podman[316253]: 2026-02-20 09:55:38.164338597 +0000 UTC m=+0.121825231 container cleanup 3300f52d85744df0e603b97d26193b884bc44407fae28d55e756d987413cb02b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-89f3f5d9-5c8c-4a82-abd1-61dbaaa7f37a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127)
Feb 20 09:55:38 np0005625203.localdomain systemd[1]: libpod-conmon-3300f52d85744df0e603b97d26193b884bc44407fae28d55e756d987413cb02b.scope: Deactivated successfully.
Feb 20 09:55:38 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:38.173 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:38 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:38.207 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:38 np0005625203.localdomain podman[316255]: 2026-02-20 09:55:38.217391098 +0000 UTC m=+0.160459585 container remove 3300f52d85744df0e603b97d26193b884bc44407fae28d55e756d987413cb02b (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-89f3f5d9-5c8c-4a82-abd1-61dbaaa7f37a, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:55:38 np0005625203.localdomain podman[316312]: 2026-02-20 09:55:38.349347621 +0000 UTC m=+0.062873076 container kill 42d26d4fd5f9790f0153fffcee04cd00800727716474e06736c19eea17b467a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:55:38 np0005625203.localdomain dnsmasq[316243]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 1 addresses
Feb 20 09:55:38 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "1ece01ec-80d8-4513-b3ec-6bcff358bcd5", "snap_name": "94e02352-6a28-4288-9072-c7133a6151bf_2956b5fe-4a3f-4e13-8c50-0bbaa50928f8", "force": true, "format": "json"}]: dispatch
Feb 20 09:55:38 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "1ece01ec-80d8-4513-b3ec-6bcff358bcd5", "snap_name": "94e02352-6a28-4288-9072-c7133a6151bf", "force": true, "format": "json"}]: dispatch
Feb 20 09:55:38 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/870555994' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:55:38 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/870555994' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:55:38 np0005625203.localdomain ceph-mon[296066]: pgmap v278: 177 pgs: 177 active+clean; 145 MiB data, 791 MiB used, 41 GiB / 42 GiB avail; 4.4 KiB/s wr, 1 op/s
Feb 20 09:55:38 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:55:38 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-7eed0a8c85d2962a4d6bf4f5d8f63bc67a61930356be654a28cc6c0990f4d76a-merged.mount: Deactivated successfully.
Feb 20 09:55:38 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3300f52d85744df0e603b97d26193b884bc44407fae28d55e756d987413cb02b-userdata-shm.mount: Deactivated successfully.
Feb 20 09:55:38 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:55:38 np0005625203.localdomain podman[316337]: 2026-02-20 09:55:38.567735007 +0000 UTC m=+0.123282185 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, version=9.7, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, build-date=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, distribution-scope=public, name=ubi9/ubi-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, org.opencontainers.image.created=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.33.7, release=1770267347, config_id=openstack_network_exporter, io.openshift.expose-services=)
Feb 20 09:55:38 np0005625203.localdomain podman[316337]: 2026-02-20 09:55:38.587811108 +0000 UTC m=+0.143358266 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, version=9.7, config_id=openstack_network_exporter, vendor=Red Hat, Inc., release=1770267347, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, build-date=2026-02-05T04:57:10Z, org.opencontainers.image.created=2026-02-05T04:57:10Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Feb 20 09:55:38 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:55:38 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:38.676 262775 INFO neutron.agent.dhcp.agent [None req-189ed474-77b9-44d6-ae32-f33252ec033a - - - - - -] DHCP configuration for ports {'675efe2d-4d01-4ad9-8e60-a6c96bf39150'} is completed
Feb 20 09:55:38 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:38Z|00243|binding|INFO|Releasing lport 2e9dfde9-1aab-46ff-b627-acda4eba4ec8 from this chassis (sb_readonly=0)
Feb 20 09:55:38 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:38.686 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:38 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:38Z|00244|binding|INFO|Setting lport 2e9dfde9-1aab-46ff-b627-acda4eba4ec8 down in Southbound
Feb 20 09:55:38 np0005625203.localdomain kernel: device tap2e9dfde9-1a left promiscuous mode
Feb 20 09:55:38 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:38.693 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625203.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=2e9dfde9-1aab-46ff-b627-acda4eba4ec8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:55:38 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:38.695 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 2e9dfde9-1aab-46ff-b627-acda4eba4ec8 in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 unbound from our chassis
Feb 20 09:55:38 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:38.697 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 811e2462-6872-485d-9c09-d2dd9cb25273 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:55:38 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:38.699 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[56704c3d-dc85-49d2-a269-18e2a6b1bc1f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:55:38 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:38.702 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:38 np0005625203.localdomain systemd[1]: tmp-crun.8a9oAm.mount: Deactivated successfully.
Feb 20 09:55:38 np0005625203.localdomain podman[316352]: 2026-02-20 09:55:38.741034239 +0000 UTC m=+0.205634894 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, managed_by=edpm_ansible)
Feb 20 09:55:38 np0005625203.localdomain podman[316352]: 2026-02-20 09:55:38.758350544 +0000 UTC m=+0.222951219 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute)
Feb 20 09:55:38 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:55:38 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:55:38.911 2 INFO neutron.agent.securitygroups_rpc [None req-eb4533df-2883-48ce-b913-efeaaf3f9e10 809f6e2027f9442d8bd2b94b11475b17 3a09b3407c3143c8ae0948ccb18c1e61 - - default default] Security group member updated ['b2e5856c-f1df-4bbc-8f9c-41698aa249c6']
Feb 20 09:55:38 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:38.968 262775 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:55:36Z, description=, device_id=b80522af-26e8-4f33-91e0-7c78e4153dc6, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4d23820>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4d23d00>], id=675efe2d-4d01-4ad9-8e60-a6c96bf39150, ip_allocation=immediate, mac_address=fa:16:3e:d7:e5:0b, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:54:44Z, description=, dns_domain=, id=811e2462-6872-485d-9c09-d2dd9cb25273, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-610089291, port_security_enabled=True, project_id=cb36e48ce4264babb412d413a8bf7b9f, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=47373, qos_policy_id=None, revision_number=20, router:external=False, shared=False, standard_attr_id=1864, status=ACTIVE, subnets=['63dadf02-6071-4544-a515-56758e32ea63'], tags=[], tenant_id=cb36e48ce4264babb412d413a8bf7b9f, updated_at=2026-02-20T09:55:35Z, vlan_transparent=None, network_id=811e2462-6872-485d-9c09-d2dd9cb25273, port_security_enabled=False, project_id=cb36e48ce4264babb412d413a8bf7b9f, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2104, status=DOWN, tags=[], tenant_id=cb36e48ce4264babb412d413a8bf7b9f, updated_at=2026-02-20T09:55:37Z on network 811e2462-6872-485d-9c09-d2dd9cb25273
Feb 20 09:55:39 np0005625203.localdomain dnsmasq[316243]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 1 addresses
Feb 20 09:55:39 np0005625203.localdomain podman[316466]: 2026-02-20 09:55:39.240823911 +0000 UTC m=+0.060739671 container kill 42d26d4fd5f9790f0153fffcee04cd00800727716474e06736c19eea17b467a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Feb 20 09:55:39 np0005625203.localdomain podman[316450]: 
Feb 20 09:55:39 np0005625203.localdomain podman[316450]: 2026-02-20 09:55:39.263073969 +0000 UTC m=+0.166297985 container create 9c660ba0e49500da38d3194af2e80e02c22b44047ca0daffe22a4a9e22f37f64 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-03f6e0ef-938e-46ab-a75d-e6de6dfcc7e6, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:55:39 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:39.289 262775 ERROR neutron.agent.dhcp.agent [-] Unable to reload_allocations dhcp for 811e2462-6872-485d-9c09-d2dd9cb25273.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap2e9dfde9-1a not found in namespace qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273.
Feb 20 09:55:39 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:39.289 262775 ERROR neutron.agent.dhcp.agent Traceback (most recent call last):
Feb 20 09:55:39 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:39.289 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver
Feb 20 09:55:39 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:39.289 262775 ERROR neutron.agent.dhcp.agent     rv = getattr(driver, action)(**action_kwargs)
Feb 20 09:55:39 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:39.289 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations
Feb 20 09:55:39 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:39.289 262775 ERROR neutron.agent.dhcp.agent     self.device_manager.update(self.network, self.interface_name)
Feb 20 09:55:39 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:39.289 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update
Feb 20 09:55:39 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:39.289 262775 ERROR neutron.agent.dhcp.agent     self._set_default_route(network, device_name)
Feb 20 09:55:39 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:39.289 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route
Feb 20 09:55:39 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:39.289 262775 ERROR neutron.agent.dhcp.agent     self._set_default_route_ip_version(network, device_name,
Feb 20 09:55:39 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:39.289 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version
Feb 20 09:55:39 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:39.289 262775 ERROR neutron.agent.dhcp.agent     gateway = device.route.get_gateway(ip_version=ip_version)
Feb 20 09:55:39 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:39.289 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway
Feb 20 09:55:39 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:39.289 262775 ERROR neutron.agent.dhcp.agent     routes = self.list_routes(ip_version, scope=scope, table=table)
Feb 20 09:55:39 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:39.289 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes
Feb 20 09:55:39 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:39.289 262775 ERROR neutron.agent.dhcp.agent     return list_ip_routes(self._parent.namespace, ip_version, scope=scope,
Feb 20 09:55:39 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:39.289 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes
Feb 20 09:55:39 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:39.289 262775 ERROR neutron.agent.dhcp.agent     routes = privileged.list_ip_routes(namespace, ip_version, device=device,
Feb 20 09:55:39 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:39.289 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f
Feb 20 09:55:39 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:39.289 262775 ERROR neutron.agent.dhcp.agent     return self(f, *args, **kw)
Feb 20 09:55:39 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:39.289 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__
Feb 20 09:55:39 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:39.289 262775 ERROR neutron.agent.dhcp.agent     do = self.iter(retry_state=retry_state)
Feb 20 09:55:39 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:39.289 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter
Feb 20 09:55:39 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:39.289 262775 ERROR neutron.agent.dhcp.agent     return fut.result()
Feb 20 09:55:39 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:39.289 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result
Feb 20 09:55:39 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:39.289 262775 ERROR neutron.agent.dhcp.agent     return self.__get_result()
Feb 20 09:55:39 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:39.289 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result
Feb 20 09:55:39 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:39.289 262775 ERROR neutron.agent.dhcp.agent     raise self._exception
Feb 20 09:55:39 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:39.289 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__
Feb 20 09:55:39 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:39.289 262775 ERROR neutron.agent.dhcp.agent     result = fn(*args, **kwargs)
Feb 20 09:55:39 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:39.289 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap
Feb 20 09:55:39 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:39.289 262775 ERROR neutron.agent.dhcp.agent     return self.channel.remote_call(name, args, kwargs,
Feb 20 09:55:39 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:39.289 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call
Feb 20 09:55:39 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:39.289 262775 ERROR neutron.agent.dhcp.agent     raise exc_type(*result[2])
Feb 20 09:55:39 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:39.289 262775 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap2e9dfde9-1a not found in namespace qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273.
Feb 20 09:55:39 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:39.289 262775 ERROR neutron.agent.dhcp.agent 
Feb 20 09:55:39 np0005625203.localdomain systemd[1]: Started libpod-conmon-9c660ba0e49500da38d3194af2e80e02c22b44047ca0daffe22a4a9e22f37f64.scope.
Feb 20 09:55:39 np0005625203.localdomain podman[316450]: 2026-02-20 09:55:39.206791248 +0000 UTC m=+0.110015294 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:55:39 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:55:39 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f6b1d487e944fb77880f4c9e4816e3aa82cfa8f817bb7e1f56ee3b41f22a107/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:55:39 np0005625203.localdomain podman[316450]: 2026-02-20 09:55:39.332681783 +0000 UTC m=+0.235905779 container init 9c660ba0e49500da38d3194af2e80e02c22b44047ca0daffe22a4a9e22f37f64 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-03f6e0ef-938e-46ab-a75d-e6de6dfcc7e6, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:55:39 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:55:39 np0005625203.localdomain podman[316450]: 2026-02-20 09:55:39.344436836 +0000 UTC m=+0.247660852 container start 9c660ba0e49500da38d3194af2e80e02c22b44047ca0daffe22a4a9e22f37f64 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-03f6e0ef-938e-46ab-a75d-e6de6dfcc7e6, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 20 09:55:39 np0005625203.localdomain dnsmasq[316498]: started, version 2.85 cachesize 150
Feb 20 09:55:39 np0005625203.localdomain dnsmasq[316498]: DNS service limited to local subnets
Feb 20 09:55:39 np0005625203.localdomain dnsmasq[316498]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:55:39 np0005625203.localdomain dnsmasq[316498]: warning: no upstream servers configured
Feb 20 09:55:39 np0005625203.localdomain dnsmasq-dhcp[316498]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Feb 20 09:55:39 np0005625203.localdomain dnsmasq[316498]: read /var/lib/neutron/dhcp/03f6e0ef-938e-46ab-a75d-e6de6dfcc7e6/addn_hosts - 0 addresses
Feb 20 09:55:39 np0005625203.localdomain dnsmasq-dhcp[316498]: read /var/lib/neutron/dhcp/03f6e0ef-938e-46ab-a75d-e6de6dfcc7e6/host
Feb 20 09:55:39 np0005625203.localdomain dnsmasq-dhcp[316498]: read /var/lib/neutron/dhcp/03f6e0ef-938e-46ab-a75d-e6de6dfcc7e6/opts
Feb 20 09:55:39 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:39.428 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:39 np0005625203.localdomain sudo[316499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:55:39 np0005625203.localdomain sudo[316499]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:55:39 np0005625203.localdomain sudo[316499]: pam_unix(sudo:session): session closed for user root
Feb 20 09:55:39 np0005625203.localdomain sudo[316517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 09:55:39 np0005625203.localdomain sudo[316517]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:55:39 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:39.573 262775 INFO neutron.agent.dhcp.agent [None req-fa1b362f-8cc0-4d5d-a4ec-1d6984a13b00 - - - - - -] DHCP configuration for ports {'675efe2d-4d01-4ad9-8e60-a6c96bf39150'} is completed
Feb 20 09:55:39 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:39.717 262775 INFO neutron.agent.dhcp.agent [None req-faa90680-4dc7-4543-94f5-cb9080fe8b49 - - - - - -] DHCP configuration for ports {'d42508ee-8180-4afe-8039-3ee68404dbf4'} is completed
Feb 20 09:55:39 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:39.756 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:39 np0005625203.localdomain podman[316557]: 
Feb 20 09:55:39 np0005625203.localdomain podman[316557]: 2026-02-20 09:55:39.797611447 +0000 UTC m=+0.138511147 container create ee78091695d392bda6dda02aeeba63fc9099a630240c37ab8e1b3f878575cbc3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-89f3f5d9-5c8c-4a82-abd1-61dbaaa7f37a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127)
Feb 20 09:55:39 np0005625203.localdomain podman[316557]: 2026-02-20 09:55:39.714825205 +0000 UTC m=+0.055724935 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:55:39 np0005625203.localdomain systemd[1]: Started libpod-conmon-ee78091695d392bda6dda02aeeba63fc9099a630240c37ab8e1b3f878575cbc3.scope.
Feb 20 09:55:39 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:55:39 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1110774cfa80d2e4f72240e3cd82f44fc1b3811d8bccf35a16b8828103d8d39/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:55:39 np0005625203.localdomain podman[316557]: 2026-02-20 09:55:39.869970926 +0000 UTC m=+0.210870596 container init ee78091695d392bda6dda02aeeba63fc9099a630240c37ab8e1b3f878575cbc3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-89f3f5d9-5c8c-4a82-abd1-61dbaaa7f37a, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:55:39 np0005625203.localdomain podman[316557]: 2026-02-20 09:55:39.878815199 +0000 UTC m=+0.219714869 container start ee78091695d392bda6dda02aeeba63fc9099a630240c37ab8e1b3f878575cbc3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-89f3f5d9-5c8c-4a82-abd1-61dbaaa7f37a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 20 09:55:39 np0005625203.localdomain dnsmasq[316589]: started, version 2.85 cachesize 150
Feb 20 09:55:39 np0005625203.localdomain dnsmasq[316589]: DNS service limited to local subnets
Feb 20 09:55:39 np0005625203.localdomain dnsmasq[316589]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:55:39 np0005625203.localdomain dnsmasq[316589]: warning: no upstream servers configured
Feb 20 09:55:39 np0005625203.localdomain dnsmasq-dhcp[316589]: DHCPv6, static leases only on 2001:db8:0:1::, lease time 1d
Feb 20 09:55:39 np0005625203.localdomain dnsmasq-dhcp[316589]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Feb 20 09:55:39 np0005625203.localdomain dnsmasq[316589]: read /var/lib/neutron/dhcp/89f3f5d9-5c8c-4a82-abd1-61dbaaa7f37a/addn_hosts - 2 addresses
Feb 20 09:55:39 np0005625203.localdomain dnsmasq-dhcp[316589]: read /var/lib/neutron/dhcp/89f3f5d9-5c8c-4a82-abd1-61dbaaa7f37a/host
Feb 20 09:55:39 np0005625203.localdomain dnsmasq-dhcp[316589]: read /var/lib/neutron/dhcp/89f3f5d9-5c8c-4a82-abd1-61dbaaa7f37a/opts
Feb 20 09:55:39 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:39.921 262775 INFO neutron.agent.dhcp.agent [None req-d82885bb-6c20-4bd9-a5a2-f0dbdc21210c - - - - - -] Synchronizing state
Feb 20 09:55:39 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:39.968 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:40 np0005625203.localdomain sudo[316517]: pam_unix(sudo:session): session closed for user root
Feb 20 09:55:40 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:40.228 262775 INFO neutron.agent.dhcp.agent [None req-c1b5e4a0-7fa7-4a63-869b-563b7322a229 - - - - - -] All active networks have been fetched through RPC.
Feb 20 09:55:40 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:40.230 262775 INFO neutron.agent.dhcp.agent [-] Starting network 519ed234-0c28-4a63-b6ed-1122a8d9dfc9 dhcp configuration
Feb 20 09:55:40 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:40.231 262775 INFO neutron.agent.dhcp.agent [-] Finished network 519ed234-0c28-4a63-b6ed-1122a8d9dfc9 dhcp configuration
Feb 20 09:55:40 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:40.231 262775 INFO neutron.agent.dhcp.agent [-] Starting network 545dd7fa-aa2a-4b99-a111-22d2b369ed0a dhcp configuration
Feb 20 09:55:40 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:40.232 262775 INFO neutron.agent.dhcp.agent [-] Finished network 545dd7fa-aa2a-4b99-a111-22d2b369ed0a dhcp configuration
Feb 20 09:55:40 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:40.232 262775 INFO neutron.agent.dhcp.agent [-] Starting network 811e2462-6872-485d-9c09-d2dd9cb25273 dhcp configuration
Feb 20 09:55:40 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:40.236 262775 INFO neutron.agent.dhcp.agent [None req-98611436-403f-4e48-a183-bad076036e7b - - - - - -] DHCP configuration for ports {'1a523e93-af7f-43c2-83e8-65f63fcde633', '7658129d-69b4-41d7-9baa-50264b0c3e63', '88b44592-97a8-465d-85d4-cd8939a54331'} is completed
Feb 20 09:55:40 np0005625203.localdomain dnsmasq[316243]: exiting on receipt of SIGTERM
Feb 20 09:55:40 np0005625203.localdomain podman[316622]: 2026-02-20 09:55:40.403996447 +0000 UTC m=+0.055846729 container kill 42d26d4fd5f9790f0153fffcee04cd00800727716474e06736c19eea17b467a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 20 09:55:40 np0005625203.localdomain systemd[1]: libpod-42d26d4fd5f9790f0153fffcee04cd00800727716474e06736c19eea17b467a4.scope: Deactivated successfully.
Feb 20 09:55:40 np0005625203.localdomain sudo[316633]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:55:40 np0005625203.localdomain sudo[316633]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:55:40 np0005625203.localdomain sudo[316633]: pam_unix(sudo:session): session closed for user root
Feb 20 09:55:40 np0005625203.localdomain podman[316645]: 2026-02-20 09:55:40.483089694 +0000 UTC m=+0.067411886 container died 42d26d4fd5f9790f0153fffcee04cd00800727716474e06736c19eea17b467a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3)
Feb 20 09:55:40 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-42d26d4fd5f9790f0153fffcee04cd00800727716474e06736c19eea17b467a4-userdata-shm.mount: Deactivated successfully.
Feb 20 09:55:40 np0005625203.localdomain podman[316645]: 2026-02-20 09:55:40.520433419 +0000 UTC m=+0.104755571 container cleanup 42d26d4fd5f9790f0153fffcee04cd00800727716474e06736c19eea17b467a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:55:40 np0005625203.localdomain systemd[1]: libpod-conmon-42d26d4fd5f9790f0153fffcee04cd00800727716474e06736c19eea17b467a4.scope: Deactivated successfully.
Feb 20 09:55:40 np0005625203.localdomain podman[316651]: 2026-02-20 09:55:40.55923763 +0000 UTC m=+0.136212156 container remove 42d26d4fd5f9790f0153fffcee04cd00800727716474e06736c19eea17b467a4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Feb 20 09:55:40 np0005625203.localdomain ceph-mon[296066]: pgmap v279: 177 pgs: 177 active+clean; 146 MiB data, 791 MiB used, 41 GiB / 42 GiB avail; 682 B/s rd, 8.7 KiB/s wr, 3 op/s
Feb 20 09:55:40 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "1ece01ec-80d8-4513-b3ec-6bcff358bcd5", "snap_name": "76272f77-26a4-41f3-97ff-7d9d42de32a4_f845adef-9d7e-4723-a4a0-91acd19cabbe", "force": true, "format": "json"}]: dispatch
Feb 20 09:55:40 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "1ece01ec-80d8-4513-b3ec-6bcff358bcd5", "snap_name": "76272f77-26a4-41f3-97ff-7d9d42de32a4", "force": true, "format": "json"}]: dispatch
Feb 20 09:55:40 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:55:40 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 09:55:40 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:55:40 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 09:55:40 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:40.613 262775 INFO neutron.agent.linux.ip_lib [-] Device tap2e9dfde9-1a cannot be used as it has no MAC address
Feb 20 09:55:40 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:55:40.628 2 INFO neutron.agent.securitygroups_rpc [None req-7a51814a-d3bb-4d9f-a2cf-a8e1904feac9 809f6e2027f9442d8bd2b94b11475b17 3a09b3407c3143c8ae0948ccb18c1e61 - - default default] Security group member updated ['b2e5856c-f1df-4bbc-8f9c-41698aa249c6']
Feb 20 09:55:40 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:40.637 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:40 np0005625203.localdomain kernel: device tap2e9dfde9-1a entered promiscuous mode
Feb 20 09:55:40 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:40Z|00245|binding|INFO|Claiming lport 2e9dfde9-1aab-46ff-b627-acda4eba4ec8 for this chassis.
Feb 20 09:55:40 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581340.6467] manager: (tap2e9dfde9-1a): new Generic device (/org/freedesktop/NetworkManager/Devices/49)
Feb 20 09:55:40 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:40.646 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:40 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:40Z|00246|binding|INFO|2e9dfde9-1aab-46ff-b627-acda4eba4ec8: Claiming unknown
Feb 20 09:55:40 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:40Z|00247|binding|INFO|Setting lport 2e9dfde9-1aab-46ff-b627-acda4eba4ec8 ovn-installed in OVS
Feb 20 09:55:40 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:40Z|00248|binding|INFO|Setting lport 2e9dfde9-1aab-46ff-b627-acda4eba4ec8 up in Southbound
Feb 20 09:55:40 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:40.658 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:40 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:40.659 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:40 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:40.656 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625203.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=2e9dfde9-1aab-46ff-b627-acda4eba4ec8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:55:40 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:40.659 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 2e9dfde9-1aab-46ff-b627-acda4eba4ec8 in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 bound to our chassis
Feb 20 09:55:40 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:40.661 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 811e2462-6872-485d-9c09-d2dd9cb25273 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:55:40 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:40.662 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[805f535a-f4aa-4ce7-939d-993f822d4f4e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:55:40 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:40.693 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:40 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:40.734 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:40 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:40.768 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:40 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:40.848 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:41 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-f40945e24ab58962b2ddb02dd2a68f836e76e216d863163d62a323750ddd8f83-merged.mount: Deactivated successfully.
Feb 20 09:55:41 np0005625203.localdomain podman[316739]: 
Feb 20 09:55:41 np0005625203.localdomain podman[316739]: 2026-02-20 09:55:41.526576547 +0000 UTC m=+0.088862561 container create 011114b252d5e72753b9e31677d967bd131d79fa8ee25d2aa0bccad6a04f273f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:55:41 np0005625203.localdomain systemd[1]: Started libpod-conmon-011114b252d5e72753b9e31677d967bd131d79fa8ee25d2aa0bccad6a04f273f.scope.
Feb 20 09:55:41 np0005625203.localdomain podman[316739]: 2026-02-20 09:55:41.484346031 +0000 UTC m=+0.046632085 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:55:41 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:55:41 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a98e7a219227f937fb4a5b75e28dd4e4cf2b295babab2a91d43c2b0dd3efdf4e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:55:41 np0005625203.localdomain podman[316739]: 2026-02-20 09:55:41.602764564 +0000 UTC m=+0.165050588 container init 011114b252d5e72753b9e31677d967bd131d79fa8ee25d2aa0bccad6a04f273f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 20 09:55:41 np0005625203.localdomain podman[316739]: 2026-02-20 09:55:41.610820083 +0000 UTC m=+0.173106097 container start 011114b252d5e72753b9e31677d967bd131d79fa8ee25d2aa0bccad6a04f273f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:55:41 np0005625203.localdomain dnsmasq[316757]: started, version 2.85 cachesize 150
Feb 20 09:55:41 np0005625203.localdomain dnsmasq[316757]: DNS service limited to local subnets
Feb 20 09:55:41 np0005625203.localdomain dnsmasq[316757]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:55:41 np0005625203.localdomain dnsmasq[316757]: warning: no upstream servers configured
Feb 20 09:55:41 np0005625203.localdomain dnsmasq[316757]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 1 addresses
Feb 20 09:55:41 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:41.672 262775 INFO neutron.agent.dhcp.agent [-] Finished network 811e2462-6872-485d-9c09-d2dd9cb25273 dhcp configuration
Feb 20 09:55:41 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:41.672 262775 INFO neutron.agent.dhcp.agent [None req-c1b5e4a0-7fa7-4a63-869b-563b7322a229 - - - - - -] Synchronizing state complete
Feb 20 09:55:41 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:41.673 262775 INFO neutron.agent.dhcp.agent [None req-4392ce00-bb74-4e0c-80d3-dc719f6b3a17 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:55:37Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4e60640>, <neutron.agent.linux.dhcp.DictModel object at 0x7f7da4f075e0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4e60c70>, <neutron.agent.linux.dhcp.DictModel object at 0x7f7da4e60430>], id=88b44592-97a8-465d-85d4-cd8939a54331, ip_allocation=immediate, mac_address=fa:16:3e:72:8a:3d, name=tempest-PortsIpV6TestJSON-1437249682, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:55:33Z, description=, dns_domain=, id=89f3f5d9-5c8c-4a82-abd1-61dbaaa7f37a, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsIpV6TestJSON-1332153286, port_security_enabled=True, project_id=3a09b3407c3143c8ae0948ccb18c1e61, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=48866, qos_policy_id=None, revision_number=3, router:external=False, shared=False, standard_attr_id=2083, status=ACTIVE, subnets=['48baea01-5f0c-4a66-aa12-20eb9f1af76b', 'ce9b8c8b-d841-4eed-94c5-4f69ffac8fbf'], tags=[], tenant_id=3a09b3407c3143c8ae0948ccb18c1e61, updated_at=2026-02-20T09:55:36Z, vlan_transparent=None, network_id=89f3f5d9-5c8c-4a82-abd1-61dbaaa7f37a, port_security_enabled=True, project_id=3a09b3407c3143c8ae0948ccb18c1e61, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['b2e5856c-f1df-4bbc-8f9c-41698aa249c6'], standard_attr_id=2106, 
status=DOWN, tags=[], tenant_id=3a09b3407c3143c8ae0948ccb18c1e61, updated_at=2026-02-20T09:55:37Z on network 89f3f5d9-5c8c-4a82-abd1-61dbaaa7f37a
Feb 20 09:55:41 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:55:41.677 2 INFO neutron.agent.securitygroups_rpc [None req-45da2e31-24b0-4a9e-8fbd-b3074d839731 809f6e2027f9442d8bd2b94b11475b17 3a09b3407c3143c8ae0948ccb18c1e61 - - default default] Security group member updated ['b2e5856c-f1df-4bbc-8f9c-41698aa249c6']
Feb 20 09:55:41 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:41.686 262775 INFO neutron.agent.dhcp.agent [None req-5b60a3f0-40c4-4943-a36a-4a637259c1f3 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:55:37Z, description=, device_id=7f90544d-5ad8-4f1b-8f43-7851902677f5, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4e499a0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4e492b0>], id=4e39985d-bc48-451a-85c4-380bc1e4f336, ip_allocation=immediate, mac_address=fa:16:3e:a6:49:2a, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:55:35Z, description=, dns_domain=, id=03f6e0ef-938e-46ab-a75d-e6de6dfcc7e6, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-1984303079, port_security_enabled=True, project_id=1c44e13adebb4610b7c0cd2fdc62a5b7, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=48623, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2094, status=ACTIVE, subnets=['890499c7-c9a7-4dac-a574-54808f5fa538'], tags=[], tenant_id=1c44e13adebb4610b7c0cd2fdc62a5b7, updated_at=2026-02-20T09:55:36Z, vlan_transparent=None, network_id=03f6e0ef-938e-46ab-a75d-e6de6dfcc7e6, port_security_enabled=False, project_id=1c44e13adebb4610b7c0cd2fdc62a5b7, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2108, status=DOWN, tags=[], tenant_id=1c44e13adebb4610b7c0cd2fdc62a5b7, updated_at=2026-02-20T09:55:37Z on network 03f6e0ef-938e-46ab-a75d-e6de6dfcc7e6
Feb 20 09:55:41 np0005625203.localdomain dnsmasq[316589]: read /var/lib/neutron/dhcp/89f3f5d9-5c8c-4a82-abd1-61dbaaa7f37a/addn_hosts - 2 addresses
Feb 20 09:55:41 np0005625203.localdomain dnsmasq-dhcp[316589]: read /var/lib/neutron/dhcp/89f3f5d9-5c8c-4a82-abd1-61dbaaa7f37a/host
Feb 20 09:55:41 np0005625203.localdomain dnsmasq-dhcp[316589]: read /var/lib/neutron/dhcp/89f3f5d9-5c8c-4a82-abd1-61dbaaa7f37a/opts
Feb 20 09:55:41 np0005625203.localdomain podman[316792]: 2026-02-20 09:55:41.893795878 +0000 UTC m=+0.062677920 container kill ee78091695d392bda6dda02aeeba63fc9099a630240c37ab8e1b3f878575cbc3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-89f3f5d9-5c8c-4a82-abd1-61dbaaa7f37a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Feb 20 09:55:41 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:41.913 262775 INFO neutron.agent.dhcp.agent [None req-3459f857-0138-4311-87f8-406ffdd5d846 - - - - - -] DHCP configuration for ports {'675efe2d-4d01-4ad9-8e60-a6c96bf39150', '2e9dfde9-1aab-46ff-b627-acda4eba4ec8', 'cebd560e-7047-4cc1-9642-f5b7ec377d58'} is completed
Feb 20 09:55:41 np0005625203.localdomain dnsmasq[316498]: read /var/lib/neutron/dhcp/03f6e0ef-938e-46ab-a75d-e6de6dfcc7e6/addn_hosts - 1 addresses
Feb 20 09:55:41 np0005625203.localdomain podman[316803]: 2026-02-20 09:55:41.942485655 +0000 UTC m=+0.058853342 container kill 9c660ba0e49500da38d3194af2e80e02c22b44047ca0daffe22a4a9e22f37f64 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-03f6e0ef-938e-46ab-a75d-e6de6dfcc7e6, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb 20 09:55:41 np0005625203.localdomain dnsmasq-dhcp[316498]: read /var/lib/neutron/dhcp/03f6e0ef-938e-46ab-a75d-e6de6dfcc7e6/host
Feb 20 09:55:41 np0005625203.localdomain dnsmasq-dhcp[316498]: read /var/lib/neutron/dhcp/03f6e0ef-938e-46ab-a75d-e6de6dfcc7e6/opts
Feb 20 09:55:42 np0005625203.localdomain dnsmasq[316757]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 0 addresses
Feb 20 09:55:42 np0005625203.localdomain podman[316844]: 2026-02-20 09:55:42.080321199 +0000 UTC m=+0.051753103 container kill 011114b252d5e72753b9e31677d967bd131d79fa8ee25d2aa0bccad6a04f273f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:55:42 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:42.160 262775 INFO neutron.agent.dhcp.agent [None req-5b60a3f0-40c4-4943-a36a-4a637259c1f3 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:55:37Z, description=, device_id=7f90544d-5ad8-4f1b-8f43-7851902677f5, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4c857c0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4c85670>], id=4e39985d-bc48-451a-85c4-380bc1e4f336, ip_allocation=immediate, mac_address=fa:16:3e:a6:49:2a, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:55:35Z, description=, dns_domain=, id=03f6e0ef-938e-46ab-a75d-e6de6dfcc7e6, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-1984303079, port_security_enabled=True, project_id=1c44e13adebb4610b7c0cd2fdc62a5b7, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=48623, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2094, status=ACTIVE, subnets=['890499c7-c9a7-4dac-a574-54808f5fa538'], tags=[], tenant_id=1c44e13adebb4610b7c0cd2fdc62a5b7, updated_at=2026-02-20T09:55:36Z, vlan_transparent=None, network_id=03f6e0ef-938e-46ab-a75d-e6de6dfcc7e6, port_security_enabled=False, project_id=1c44e13adebb4610b7c0cd2fdc62a5b7, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2108, status=DOWN, tags=[], tenant_id=1c44e13adebb4610b7c0cd2fdc62a5b7, updated_at=2026-02-20T09:55:37Z on network 03f6e0ef-938e-46ab-a75d-e6de6dfcc7e6
Feb 20 09:55:42 np0005625203.localdomain dnsmasq[316498]: read /var/lib/neutron/dhcp/03f6e0ef-938e-46ab-a75d-e6de6dfcc7e6/addn_hosts - 1 addresses
Feb 20 09:55:42 np0005625203.localdomain dnsmasq-dhcp[316498]: read /var/lib/neutron/dhcp/03f6e0ef-938e-46ab-a75d-e6de6dfcc7e6/host
Feb 20 09:55:42 np0005625203.localdomain dnsmasq-dhcp[316498]: read /var/lib/neutron/dhcp/03f6e0ef-938e-46ab-a75d-e6de6dfcc7e6/opts
Feb 20 09:55:42 np0005625203.localdomain podman[316889]: 2026-02-20 09:55:42.335579486 +0000 UTC m=+0.060216185 container kill 9c660ba0e49500da38d3194af2e80e02c22b44047ca0daffe22a4a9e22f37f64 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-03f6e0ef-938e-46ab-a75d-e6de6dfcc7e6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Feb 20 09:55:42 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:42.335 262775 INFO neutron.agent.dhcp.agent [None req-72b3d222-8c32-41ed-ba64-5ef01b2c4d50 - - - - - -] DHCP configuration for ports {'88b44592-97a8-465d-85d4-cd8939a54331', '4e39985d-bc48-451a-85c4-380bc1e4f336'} is completed
Feb 20 09:55:42 np0005625203.localdomain systemd[1]: tmp-crun.37bnLG.mount: Deactivated successfully.
Feb 20 09:55:42 np0005625203.localdomain podman[316920]: 2026-02-20 09:55:42.516004617 +0000 UTC m=+0.073209635 container kill ee78091695d392bda6dda02aeeba63fc9099a630240c37ab8e1b3f878575cbc3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-89f3f5d9-5c8c-4a82-abd1-61dbaaa7f37a, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:55:42 np0005625203.localdomain dnsmasq[316589]: exiting on receipt of SIGTERM
Feb 20 09:55:42 np0005625203.localdomain systemd[1]: libpod-ee78091695d392bda6dda02aeeba63fc9099a630240c37ab8e1b3f878575cbc3.scope: Deactivated successfully.
Feb 20 09:55:42 np0005625203.localdomain podman[316938]: 2026-02-20 09:55:42.591661508 +0000 UTC m=+0.060886324 container died ee78091695d392bda6dda02aeeba63fc9099a630240c37ab8e1b3f878575cbc3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-89f3f5d9-5c8c-4a82-abd1-61dbaaa7f37a, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, io.buildah.version=1.41.3)
Feb 20 09:55:42 np0005625203.localdomain ceph-mon[296066]: pgmap v280: 177 pgs: 177 active+clean; 146 MiB data, 791 MiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 7.1 KiB/s wr, 16 op/s
Feb 20 09:55:42 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:42.599 262775 INFO neutron.agent.dhcp.agent [None req-30311dfd-314a-48c5-b297-0c95815e57e4 - - - - - -] DHCP configuration for ports {'4e39985d-bc48-451a-85c4-380bc1e4f336'} is completed
Feb 20 09:55:42 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e150 e150: 6 total, 6 up, 6 in
Feb 20 09:55:42 np0005625203.localdomain podman[316938]: 2026-02-20 09:55:42.624189775 +0000 UTC m=+0.093414551 container cleanup ee78091695d392bda6dda02aeeba63fc9099a630240c37ab8e1b3f878575cbc3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-89f3f5d9-5c8c-4a82-abd1-61dbaaa7f37a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 20 09:55:42 np0005625203.localdomain systemd[1]: libpod-conmon-ee78091695d392bda6dda02aeeba63fc9099a630240c37ab8e1b3f878575cbc3.scope: Deactivated successfully.
Feb 20 09:55:42 np0005625203.localdomain podman[316940]: 2026-02-20 09:55:42.680657242 +0000 UTC m=+0.138421703 container remove ee78091695d392bda6dda02aeeba63fc9099a630240c37ab8e1b3f878575cbc3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-89f3f5d9-5c8c-4a82-abd1-61dbaaa7f37a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 20 09:55:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:43.195 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:43 np0005625203.localdomain dnsmasq[316757]: exiting on receipt of SIGTERM
Feb 20 09:55:43 np0005625203.localdomain podman[317004]: 2026-02-20 09:55:43.200471674 +0000 UTC m=+0.070480082 container kill 011114b252d5e72753b9e31677d967bd131d79fa8ee25d2aa0bccad6a04f273f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:55:43 np0005625203.localdomain systemd[1]: libpod-011114b252d5e72753b9e31677d967bd131d79fa8ee25d2aa0bccad6a04f273f.scope: Deactivated successfully.
Feb 20 09:55:43 np0005625203.localdomain podman[317026]: 2026-02-20 09:55:43.283366328 +0000 UTC m=+0.051130843 container died 011114b252d5e72753b9e31677d967bd131d79fa8ee25d2aa0bccad6a04f273f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Feb 20 09:55:43 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:43Z|00249|binding|INFO|Removing iface tap7658129d-69 ovn-installed in OVS
Feb 20 09:55:43 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:43Z|00250|binding|INFO|Removing lport 7658129d-69b4-41d7-9baa-50264b0c3e63 ovn-installed in OVS
Feb 20 09:55:43 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:43.295 161112 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port f21bb12f-f5e8-47f8-8432-0644c4356e89 with type ""
Feb 20 09:55:43 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:43.297 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1::2/64 2001:db8::2/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-89f3f5d9-5c8c-4a82-abd1-61dbaaa7f37a', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-89f3f5d9-5c8c-4a82-abd1-61dbaaa7f37a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a09b3407c3143c8ae0948ccb18c1e61', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625203.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f9c56fb6-e554-4e38-9c79-0d9a4b411b62, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=7658129d-69b4-41d7-9baa-50264b0c3e63) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:55:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:43.299 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:43 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:43.301 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 7658129d-69b4-41d7-9baa-50264b0c3e63 in datapath 89f3f5d9-5c8c-4a82-abd1-61dbaaa7f37a unbound from our chassis
Feb 20 09:55:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:43.303 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:43 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:43.303 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 89f3f5d9-5c8c-4a82-abd1-61dbaaa7f37a or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:55:43 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:43.305 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[34bf0869-511a-4254-a406-a85071d4545d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:55:43 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:55:43.318 2 INFO neutron.agent.securitygroups_rpc [None req-b9b9f9d2-32dc-4ef0-9ebd-12b2d2ae4ee8 061949b19d2146debcdb4e85c8db9eec b9f8945a2560410b988e395a1db7710f - - default default] Security group member updated ['9c30e397-a710-4013-bf42-b0dd9762b00a']
Feb 20 09:55:43 np0005625203.localdomain podman[317026]: 2026-02-20 09:55:43.320819057 +0000 UTC m=+0.088583552 container remove 011114b252d5e72753b9e31677d967bd131d79fa8ee25d2aa0bccad6a04f273f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127)
Feb 20 09:55:43 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:43Z|00251|binding|INFO|Releasing lport 2e9dfde9-1aab-46ff-b627-acda4eba4ec8 from this chassis (sb_readonly=0)
Feb 20 09:55:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:43.329 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:43 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:43Z|00252|binding|INFO|Setting lport 2e9dfde9-1aab-46ff-b627-acda4eba4ec8 down in Southbound
Feb 20 09:55:43 np0005625203.localdomain kernel: device tap2e9dfde9-1a left promiscuous mode
Feb 20 09:55:43 np0005625203.localdomain systemd[1]: libpod-conmon-011114b252d5e72753b9e31677d967bd131d79fa8ee25d2aa0bccad6a04f273f.scope: Deactivated successfully.
Feb 20 09:55:43 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:43.343 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625203.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=2e9dfde9-1aab-46ff-b627-acda4eba4ec8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:55:43 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:43.345 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 2e9dfde9-1aab-46ff-b627-acda4eba4ec8 in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 unbound from our chassis
Feb 20 09:55:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:43.347 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:43 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:43.348 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 811e2462-6872-485d-9c09-d2dd9cb25273 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:55:43 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:43.349 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[7e384e82-c788-4fa1-b221-6224e688f5d0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:55:43 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:43.351 262775 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:55:43 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-a98e7a219227f937fb4a5b75e28dd4e4cf2b295babab2a91d43c2b0dd3efdf4e-merged.mount: Deactivated successfully.
Feb 20 09:55:43 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-011114b252d5e72753b9e31677d967bd131d79fa8ee25d2aa0bccad6a04f273f-userdata-shm.mount: Deactivated successfully.
Feb 20 09:55:43 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-d1110774cfa80d2e4f72240e3cd82f44fc1b3811d8bccf35a16b8828103d8d39-merged.mount: Deactivated successfully.
Feb 20 09:55:43 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ee78091695d392bda6dda02aeeba63fc9099a630240c37ab8e1b3f878575cbc3-userdata-shm.mount: Deactivated successfully.
Feb 20 09:55:43 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:43.578 262775 INFO neutron.agent.dhcp.agent [None req-56bfe218-3a34-4a4c-bbc9-c7346cd7bb7a - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:55:43 np0005625203.localdomain systemd[1]: run-netns-qdhcp\x2d811e2462\x2d6872\x2d485d\x2d9c09\x2dd2dd9cb25273.mount: Deactivated successfully.
Feb 20 09:55:43 np0005625203.localdomain podman[317071]: 
Feb 20 09:55:43 np0005625203.localdomain ceph-mon[296066]: osdmap e150: 6 total, 6 up, 6 in
Feb 20 09:55:43 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "1ece01ec-80d8-4513-b3ec-6bcff358bcd5", "format": "json"}]: dispatch
Feb 20 09:55:43 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "1ece01ec-80d8-4513-b3ec-6bcff358bcd5", "force": true, "format": "json"}]: dispatch
Feb 20 09:55:43 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:55:43 np0005625203.localdomain podman[317071]: 2026-02-20 09:55:43.635512213 +0000 UTC m=+0.089779829 container create ce739f7404f54819c45833ae556cd06cb5dd197dfc15d7bb44dd66c907d15ce9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-89f3f5d9-5c8c-4a82-abd1-61dbaaa7f37a, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.vendor=CentOS)
Feb 20 09:55:43 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e151 e151: 6 total, 6 up, 6 in
Feb 20 09:55:43 np0005625203.localdomain systemd[1]: Started libpod-conmon-ce739f7404f54819c45833ae556cd06cb5dd197dfc15d7bb44dd66c907d15ce9.scope.
Feb 20 09:55:43 np0005625203.localdomain podman[317071]: 2026-02-20 09:55:43.594149763 +0000 UTC m=+0.048417409 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:55:43 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:55:43 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/716141c660a28427e1f304157d8496984c639fd14619c98b75027032b545dbe1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:55:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:43.702 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:43 np0005625203.localdomain podman[317071]: 2026-02-20 09:55:43.712260887 +0000 UTC m=+0.166528503 container init ce739f7404f54819c45833ae556cd06cb5dd197dfc15d7bb44dd66c907d15ce9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-89f3f5d9-5c8c-4a82-abd1-61dbaaa7f37a, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 20 09:55:43 np0005625203.localdomain podman[317071]: 2026-02-20 09:55:43.718826991 +0000 UTC m=+0.173094607 container start ce739f7404f54819c45833ae556cd06cb5dd197dfc15d7bb44dd66c907d15ce9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-89f3f5d9-5c8c-4a82-abd1-61dbaaa7f37a, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.schema-version=1.0)
Feb 20 09:55:43 np0005625203.localdomain dnsmasq[317090]: started, version 2.85 cachesize 150
Feb 20 09:55:43 np0005625203.localdomain dnsmasq[317090]: DNS service limited to local subnets
Feb 20 09:55:43 np0005625203.localdomain dnsmasq[317090]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:55:43 np0005625203.localdomain dnsmasq[317090]: warning: no upstream servers configured
Feb 20 09:55:43 np0005625203.localdomain dnsmasq-dhcp[317090]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Feb 20 09:55:43 np0005625203.localdomain dnsmasq[317090]: read /var/lib/neutron/dhcp/89f3f5d9-5c8c-4a82-abd1-61dbaaa7f37a/addn_hosts - 0 addresses
Feb 20 09:55:43 np0005625203.localdomain dnsmasq-dhcp[317090]: read /var/lib/neutron/dhcp/89f3f5d9-5c8c-4a82-abd1-61dbaaa7f37a/host
Feb 20 09:55:43 np0005625203.localdomain dnsmasq-dhcp[317090]: read /var/lib/neutron/dhcp/89f3f5d9-5c8c-4a82-abd1-61dbaaa7f37a/opts
Feb 20 09:55:43 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:43.874 262775 INFO neutron.agent.dhcp.agent [None req-64cc8c28-11e1-4e31-81dc-43f4b339894a - - - - - -] DHCP configuration for ports {'7658129d-69b4-41d7-9baa-50264b0c3e63', '1a523e93-af7f-43c2-83e8-65f63fcde633'} is completed
Feb 20 09:55:43 np0005625203.localdomain dnsmasq[317090]: exiting on receipt of SIGTERM
Feb 20 09:55:43 np0005625203.localdomain podman[317108]: 2026-02-20 09:55:43.963984435 +0000 UTC m=+0.056327444 container kill ce739f7404f54819c45833ae556cd06cb5dd197dfc15d7bb44dd66c907d15ce9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-89f3f5d9-5c8c-4a82-abd1-61dbaaa7f37a, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:55:43 np0005625203.localdomain systemd[1]: libpod-ce739f7404f54819c45833ae556cd06cb5dd197dfc15d7bb44dd66c907d15ce9.scope: Deactivated successfully.
Feb 20 09:55:44 np0005625203.localdomain podman[317121]: 2026-02-20 09:55:44.03073099 +0000 UTC m=+0.055677674 container died ce739f7404f54819c45833ae556cd06cb5dd197dfc15d7bb44dd66c907d15ce9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-89f3f5d9-5c8c-4a82-abd1-61dbaaa7f37a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:55:44 np0005625203.localdomain podman[317121]: 2026-02-20 09:55:44.061216903 +0000 UTC m=+0.086163547 container cleanup ce739f7404f54819c45833ae556cd06cb5dd197dfc15d7bb44dd66c907d15ce9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-89f3f5d9-5c8c-4a82-abd1-61dbaaa7f37a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:55:44 np0005625203.localdomain systemd[1]: libpod-conmon-ce739f7404f54819c45833ae556cd06cb5dd197dfc15d7bb44dd66c907d15ce9.scope: Deactivated successfully.
Feb 20 09:55:44 np0005625203.localdomain podman[317128]: 2026-02-20 09:55:44.086733473 +0000 UTC m=+0.097294132 container remove ce739f7404f54819c45833ae556cd06cb5dd197dfc15d7bb44dd66c907d15ce9 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-89f3f5d9-5c8c-4a82-abd1-61dbaaa7f37a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 20 09:55:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:44.098 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:44 np0005625203.localdomain kernel: device tap7658129d-69 left promiscuous mode
Feb 20 09:55:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:44.114 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:44 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:44.128 262775 INFO neutron.agent.dhcp.agent [None req-d368a74c-fae9-42f9-910b-ecc88abc761b - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:55:44 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:44.129 262775 INFO neutron.agent.dhcp.agent [None req-d368a74c-fae9-42f9-910b-ecc88abc761b - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:55:44 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:44.130 262775 INFO neutron.agent.dhcp.agent [None req-d368a74c-fae9-42f9-910b-ecc88abc761b - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:55:44 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:55:44.163 2 INFO neutron.agent.securitygroups_rpc [None req-6065317f-9707-4b49-a2f9-16f062a24577 061949b19d2146debcdb4e85c8db9eec b9f8945a2560410b988e395a1db7710f - - default default] Security group member updated ['9c30e397-a710-4013-bf42-b0dd9762b00a']
Feb 20 09:55:44 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:44.191 262775 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:55:44 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:55:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:44.430 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:44 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-716141c660a28427e1f304157d8496984c639fd14619c98b75027032b545dbe1-merged.mount: Deactivated successfully.
Feb 20 09:55:44 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ce739f7404f54819c45833ae556cd06cb5dd197dfc15d7bb44dd66c907d15ce9-userdata-shm.mount: Deactivated successfully.
Feb 20 09:55:44 np0005625203.localdomain systemd[1]: run-netns-qdhcp\x2d89f3f5d9\x2d5c8c\x2d4a82\x2dabd1\x2d61dbaaa7f37a.mount: Deactivated successfully.
Feb 20 09:55:44 np0005625203.localdomain ceph-mon[296066]: pgmap v282: 177 pgs: 177 active+clean; 146 MiB data, 791 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 8.5 KiB/s wr, 20 op/s
Feb 20 09:55:44 np0005625203.localdomain ceph-mon[296066]: osdmap e151: 6 total, 6 up, 6 in
Feb 20 09:55:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:44.818 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:45.851 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:46 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:46.079 262775 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:55:46 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:46.189 262775 INFO neutron.agent.linux.ip_lib [None req-7b855fae-6988-44f3-a9c3-00f344fb5c6f - - - - - -] Device tap0b226212-d5 cannot be used as it has no MAC address
Feb 20 09:55:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:46.215 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:46 np0005625203.localdomain kernel: device tap0b226212-d5 entered promiscuous mode
Feb 20 09:55:46 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581346.2234] manager: (tap0b226212-d5): new Generic device (/org/freedesktop/NetworkManager/Devices/50)
Feb 20 09:55:46 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:46Z|00253|binding|INFO|Claiming lport 0b226212-d5ce-417b-aa07-43cbbaff0908 for this chassis.
Feb 20 09:55:46 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:46Z|00254|binding|INFO|0b226212-d5ce-417b-aa07-43cbbaff0908: Claiming unknown
Feb 20 09:55:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:46.227 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:46 np0005625203.localdomain systemd-udevd[317160]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 09:55:46 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:46.233 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe5d:82e2/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=0b226212-d5ce-417b-aa07-43cbbaff0908) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:55:46 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:46.235 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 0b226212-d5ce-417b-aa07-43cbbaff0908 in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 bound to our chassis
Feb 20 09:55:46 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:46.238 161112 DEBUG neutron.agent.ovn.metadata.agent [-] Port 2ba4eb49-b641-404d-86dd-cd6748e3e215 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Feb 20 09:55:46 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:46.238 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 811e2462-6872-485d-9c09-d2dd9cb25273, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:55:46 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:46.239 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[2e7c6e20-778b-4b73-a70b-b4a16d8e5118]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:55:46 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:46Z|00255|binding|INFO|Setting lport 0b226212-d5ce-417b-aa07-43cbbaff0908 ovn-installed in OVS
Feb 20 09:55:46 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:46Z|00256|binding|INFO|Setting lport 0b226212-d5ce-417b-aa07-43cbbaff0908 up in Southbound
Feb 20 09:55:46 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap0b226212-d5: No such device
Feb 20 09:55:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:46.261 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:46 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap0b226212-d5: No such device
Feb 20 09:55:46 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap0b226212-d5: No such device
Feb 20 09:55:46 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap0b226212-d5: No such device
Feb 20 09:55:46 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap0b226212-d5: No such device
Feb 20 09:55:46 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap0b226212-d5: No such device
Feb 20 09:55:46 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap0b226212-d5: No such device
Feb 20 09:55:46 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap0b226212-d5: No such device
Feb 20 09:55:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:46.300 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:46.328 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:46 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:46.496 262775 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:55:46 np0005625203.localdomain ceph-mon[296066]: pgmap v284: 177 pgs: 177 active+clean; 146 MiB data, 792 MiB used, 41 GiB / 42 GiB avail; 16 KiB/s rd, 23 KiB/s wr, 27 op/s
Feb 20 09:55:47 np0005625203.localdomain podman[317231]: 
Feb 20 09:55:47 np0005625203.localdomain podman[317231]: 2026-02-20 09:55:47.145262217 +0000 UTC m=+0.093330609 container create 7e74dc6209e0d4421906e09812c433536cd91526441b9fe9d6ac8e74c4eeeeb7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:55:47 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:47.158 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:4e:b0 10.100.0.2 2001:db8::f816:3eff:fef0:4eb0'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fef0:4eb0/64', 'neutron:device_id': 'ovnmeta-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=cebd560e-7047-4cc1-9642-f5b7ec377d58) old=Port_Binding(mac=['fa:16:3e:f0:4e:b0 2001:db8::f816:3eff:fef0:4eb0'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fef0:4eb0/64', 'neutron:device_id': 'ovnmeta-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:55:47 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:47.161 161112 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port cebd560e-7047-4cc1-9642-f5b7ec377d58 in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 updated
Feb 20 09:55:47 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:47.165 161112 DEBUG neutron.agent.ovn.metadata.agent [-] Port 2ba4eb49-b641-404d-86dd-cd6748e3e215 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Feb 20 09:55:47 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:47.165 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 811e2462-6872-485d-9c09-d2dd9cb25273, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:55:47 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:47.166 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[c8ed71c7-eb5c-44f9-a9c1-bd4073182497]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:55:47 np0005625203.localdomain systemd[1]: Started libpod-conmon-7e74dc6209e0d4421906e09812c433536cd91526441b9fe9d6ac8e74c4eeeeb7.scope.
Feb 20 09:55:47 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:55:47 np0005625203.localdomain podman[317231]: 2026-02-20 09:55:47.100620236 +0000 UTC m=+0.048688658 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:55:47 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2598839b64078fb65cb4b12b324cd6c716ad8d3ed383d369b9c1735cd0ed1005/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:55:47 np0005625203.localdomain podman[317231]: 2026-02-20 09:55:47.212142226 +0000 UTC m=+0.160210628 container init 7e74dc6209e0d4421906e09812c433536cd91526441b9fe9d6ac8e74c4eeeeb7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true)
Feb 20 09:55:47 np0005625203.localdomain podman[317231]: 2026-02-20 09:55:47.217404589 +0000 UTC m=+0.165472971 container start 7e74dc6209e0d4421906e09812c433536cd91526441b9fe9d6ac8e74c4eeeeb7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0)
Feb 20 09:55:47 np0005625203.localdomain dnsmasq[317249]: started, version 2.85 cachesize 150
Feb 20 09:55:47 np0005625203.localdomain dnsmasq[317249]: DNS service limited to local subnets
Feb 20 09:55:47 np0005625203.localdomain dnsmasq[317249]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:55:47 np0005625203.localdomain dnsmasq[317249]: warning: no upstream servers configured
Feb 20 09:55:47 np0005625203.localdomain dnsmasq[317249]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 0 addresses
Feb 20 09:55:47 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:47.362 262775 INFO neutron.agent.dhcp.agent [None req-b92686d7-cfa4-4380-80a4-5e0a9be3f292 - - - - - -] DHCP configuration for ports {'cebd560e-7047-4cc1-9642-f5b7ec377d58'} is completed
Feb 20 09:55:47 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:55:47.492 2 INFO neutron.agent.securitygroups_rpc [None req-4aebfeae-a91f-4757-b68b-fe43601c173b 809f6e2027f9442d8bd2b94b11475b17 3a09b3407c3143c8ae0948ccb18c1e61 - - default default] Security group member updated ['b2e5856c-f1df-4bbc-8f9c-41698aa249c6']
Feb 20 09:55:47 np0005625203.localdomain dnsmasq[317249]: exiting on receipt of SIGTERM
Feb 20 09:55:47 np0005625203.localdomain podman[317268]: 2026-02-20 09:55:47.578668275 +0000 UTC m=+0.068236242 container kill 7e74dc6209e0d4421906e09812c433536cd91526441b9fe9d6ac8e74c4eeeeb7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:55:47 np0005625203.localdomain systemd[1]: libpod-7e74dc6209e0d4421906e09812c433536cd91526441b9fe9d6ac8e74c4eeeeb7.scope: Deactivated successfully.
Feb 20 09:55:47 np0005625203.localdomain podman[317282]: 2026-02-20 09:55:47.656644627 +0000 UTC m=+0.059525492 container died 7e74dc6209e0d4421906e09812c433536cd91526441b9fe9d6ac8e74c4eeeeb7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:55:47 np0005625203.localdomain podman[317282]: 2026-02-20 09:55:47.692212128 +0000 UTC m=+0.095092953 container cleanup 7e74dc6209e0d4421906e09812c433536cd91526441b9fe9d6ac8e74c4eeeeb7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 20 09:55:47 np0005625203.localdomain systemd[1]: libpod-conmon-7e74dc6209e0d4421906e09812c433536cd91526441b9fe9d6ac8e74c4eeeeb7.scope: Deactivated successfully.
Feb 20 09:55:47 np0005625203.localdomain podman[317285]: 2026-02-20 09:55:47.739486921 +0000 UTC m=+0.130267542 container remove 7e74dc6209e0d4421906e09812c433536cd91526441b9fe9d6ac8e74c4eeeeb7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:55:47 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e152 e152: 6 total, 6 up, 6 in
Feb 20 09:55:48 np0005625203.localdomain systemd[1]: tmp-crun.Xtz1U8.mount: Deactivated successfully.
Feb 20 09:55:48 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-2598839b64078fb65cb4b12b324cd6c716ad8d3ed383d369b9c1735cd0ed1005-merged.mount: Deactivated successfully.
Feb 20 09:55:48 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7e74dc6209e0d4421906e09812c433536cd91526441b9fe9d6ac8e74c4eeeeb7-userdata-shm.mount: Deactivated successfully.
Feb 20 09:55:48 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:55:48.620 2 INFO neutron.agent.securitygroups_rpc [None req-f40f35c0-4148-4e05-a7c2-3455638d7684 f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']
Feb 20 09:55:48 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:48.745 262775 INFO neutron.agent.linux.ip_lib [None req-c7cb4257-b4c4-423e-9206-26b11f6b80b0 - - - - - -] Device tap3b5cf5d8-58 cannot be used as it has no MAC address
Feb 20 09:55:48 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:48.772 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:48 np0005625203.localdomain kernel: device tap3b5cf5d8-58 entered promiscuous mode
Feb 20 09:55:48 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:48Z|00257|binding|INFO|Claiming lport 3b5cf5d8-5897-42f6-b615-9991a21bf32c for this chassis.
Feb 20 09:55:48 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:48.780 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:48 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581348.7809] manager: (tap3b5cf5d8-58): new Generic device (/org/freedesktop/NetworkManager/Devices/51)
Feb 20 09:55:48 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:48Z|00258|binding|INFO|3b5cf5d8-5897-42f6-b615-9991a21bf32c: Claiming unknown
Feb 20 09:55:48 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:48.792 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:2::2/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-18837b45-5571-4a62-8087-50344388e0e4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18837b45-5571-4a62-8087-50344388e0e4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c44e13adebb4610b7c0cd2fdc62a5b7', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd48c3a5-5b40-4f7a-8b37-6049cc6b1ddd, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=3b5cf5d8-5897-42f6-b615-9991a21bf32c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:55:48 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:48.794 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 3b5cf5d8-5897-42f6-b615-9991a21bf32c in datapath 18837b45-5571-4a62-8087-50344388e0e4 bound to our chassis
Feb 20 09:55:48 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:48.795 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 18837b45-5571-4a62-8087-50344388e0e4 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:55:48 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:48.796 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[54074f4e-3975-4587-b736-5340d8c8cecc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:55:48 np0005625203.localdomain ceph-mon[296066]: pgmap v285: 177 pgs: 177 active+clean; 146 MiB data, 792 MiB used, 41 GiB / 42 GiB avail; 15 KiB/s rd, 16 KiB/s wr, 24 op/s
Feb 20 09:55:48 np0005625203.localdomain ceph-mon[296066]: osdmap e152: 6 total, 6 up, 6 in
Feb 20 09:55:48 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:48Z|00259|binding|INFO|Setting lport 3b5cf5d8-5897-42f6-b615-9991a21bf32c ovn-installed in OVS
Feb 20 09:55:48 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:48Z|00260|binding|INFO|Setting lport 3b5cf5d8-5897-42f6-b615-9991a21bf32c up in Southbound
Feb 20 09:55:48 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:48.819 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:48 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:48.867 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:48 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:48.911 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:48 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:55:48.997 2 INFO neutron.agent.securitygroups_rpc [None req-b8e91bcb-8145-4128-b349-c2aa5e79d87e 809f6e2027f9442d8bd2b94b11475b17 3a09b3407c3143c8ae0948ccb18c1e61 - - default default] Security group member updated ['b2e5856c-f1df-4bbc-8f9c-41698aa249c6']
Feb 20 09:55:49 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:55:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:49.458 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:49 np0005625203.localdomain podman[317419]: 
Feb 20 09:55:49 np0005625203.localdomain podman[317419]: 2026-02-20 09:55:49.874307417 +0000 UTC m=+0.093613917 container create 7cbc1db28c7207c0a7c50287198f63ae70605a27eaf211f2c020e3c2b839fc24 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-18837b45-5571-4a62-8087-50344388e0e4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb 20 09:55:49 np0005625203.localdomain systemd[1]: Started libpod-conmon-7cbc1db28c7207c0a7c50287198f63ae70605a27eaf211f2c020e3c2b839fc24.scope.
Feb 20 09:55:49 np0005625203.localdomain systemd[1]: tmp-crun.JZaMxR.mount: Deactivated successfully.
Feb 20 09:55:49 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:55:49 np0005625203.localdomain podman[317419]: 2026-02-20 09:55:49.832816173 +0000 UTC m=+0.052122703 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:55:49 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11997954db4388642733363e5051d573279f7d63d7b1c3903d92fd5f670c1eb6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:55:49 np0005625203.localdomain podman[317419]: 2026-02-20 09:55:49.942383973 +0000 UTC m=+0.161690453 container init 7cbc1db28c7207c0a7c50287198f63ae70605a27eaf211f2c020e3c2b839fc24 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-18837b45-5571-4a62-8087-50344388e0e4, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 20 09:55:49 np0005625203.localdomain podman[317419]: 2026-02-20 09:55:49.951096252 +0000 UTC m=+0.170402732 container start 7cbc1db28c7207c0a7c50287198f63ae70605a27eaf211f2c020e3c2b839fc24 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-18837b45-5571-4a62-8087-50344388e0e4, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Feb 20 09:55:49 np0005625203.localdomain dnsmasq[317453]: started, version 2.85 cachesize 150
Feb 20 09:55:49 np0005625203.localdomain dnsmasq[317453]: DNS service limited to local subnets
Feb 20 09:55:49 np0005625203.localdomain dnsmasq[317453]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:55:49 np0005625203.localdomain dnsmasq[317453]: warning: no upstream servers configured
Feb 20 09:55:49 np0005625203.localdomain dnsmasq-dhcp[317453]: DHCPv6, static leases only on 2001:db8:2::, lease time 1d
Feb 20 09:55:49 np0005625203.localdomain dnsmasq[317453]: read /var/lib/neutron/dhcp/18837b45-5571-4a62-8087-50344388e0e4/addn_hosts - 0 addresses
Feb 20 09:55:49 np0005625203.localdomain dnsmasq-dhcp[317453]: read /var/lib/neutron/dhcp/18837b45-5571-4a62-8087-50344388e0e4/host
Feb 20 09:55:49 np0005625203.localdomain dnsmasq-dhcp[317453]: read /var/lib/neutron/dhcp/18837b45-5571-4a62-8087-50344388e0e4/opts
Feb 20 09:55:49 np0005625203.localdomain podman[317436]: 
Feb 20 09:55:49 np0005625203.localdomain podman[317436]: 2026-02-20 09:55:49.978131159 +0000 UTC m=+0.135853464 container create ed24e76bfada47c9e101e5ef3fe05cd069909b48c0cac0459e47078d979ada85 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 20 09:55:49 np0005625203.localdomain systemd[1]: Started libpod-conmon-ed24e76bfada47c9e101e5ef3fe05cd069909b48c0cac0459e47078d979ada85.scope.
Feb 20 09:55:50 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:55:50 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/896d8842d34af147072164064c097e3f1dc97993e1b0f4109ce5c01aff794dd1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:55:50 np0005625203.localdomain podman[317436]: 2026-02-20 09:55:50.025271098 +0000 UTC m=+0.182993413 container init ed24e76bfada47c9e101e5ef3fe05cd069909b48c0cac0459e47078d979ada85 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 20 09:55:50 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:50.025 262775 INFO neutron.agent.dhcp.agent [None req-c7cb4257-b4c4-423e-9206-26b11f6b80b0 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:55:48Z, description=, device_id=7f90544d-5ad8-4f1b-8f43-7851902677f5, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4cd42e0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4cd4940>], id=e73b4566-51f3-4d39-a46a-dc3689d34a8b, ip_allocation=immediate, mac_address=fa:16:3e:ac:10:b0, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:55:45Z, description=, dns_domain=, id=18837b45-5571-4a62-8087-50344388e0e4, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-1640945430, port_security_enabled=True, project_id=1c44e13adebb4610b7c0cd2fdc62a5b7, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=36039, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2141, status=ACTIVE, subnets=['2a7363b1-c535-4a39-adf7-97bf71263713'], tags=[], tenant_id=1c44e13adebb4610b7c0cd2fdc62a5b7, updated_at=2026-02-20T09:55:47Z, vlan_transparent=None, network_id=18837b45-5571-4a62-8087-50344388e0e4, port_security_enabled=False, project_id=1c44e13adebb4610b7c0cd2fdc62a5b7, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2170, status=DOWN, tags=[], tenant_id=1c44e13adebb4610b7c0cd2fdc62a5b7, updated_at=2026-02-20T09:55:49Z on network 18837b45-5571-4a62-8087-50344388e0e4
Feb 20 09:55:50 np0005625203.localdomain podman[317436]: 2026-02-20 09:55:49.934195499 +0000 UTC m=+0.091917864 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:55:50 np0005625203.localdomain podman[317436]: 2026-02-20 09:55:50.0353735 +0000 UTC m=+0.193095815 container start ed24e76bfada47c9e101e5ef3fe05cd069909b48c0cac0459e47078d979ada85 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 20 09:55:50 np0005625203.localdomain dnsmasq[317460]: started, version 2.85 cachesize 150
Feb 20 09:55:50 np0005625203.localdomain dnsmasq[317460]: DNS service limited to local subnets
Feb 20 09:55:50 np0005625203.localdomain dnsmasq[317460]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:55:50 np0005625203.localdomain dnsmasq[317460]: warning: no upstream servers configured
Feb 20 09:55:50 np0005625203.localdomain dnsmasq-dhcp[317460]: DHCP, static leases only on 10.100.0.0, lease time 1d
Feb 20 09:55:50 np0005625203.localdomain dnsmasq[317460]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 0 addresses
Feb 20 09:55:50 np0005625203.localdomain dnsmasq-dhcp[317460]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/host
Feb 20 09:55:50 np0005625203.localdomain dnsmasq-dhcp[317460]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/opts
Feb 20 09:55:50 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:55:50.055 2 INFO neutron.agent.securitygroups_rpc [None req-005508de-f69c-4da2-9936-4351a4d76fde 809f6e2027f9442d8bd2b94b11475b17 3a09b3407c3143c8ae0948ccb18c1e61 - - default default] Security group member updated ['b2e5856c-f1df-4bbc-8f9c-41698aa249c6']
Feb 20 09:55:50 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:50.093 262775 INFO neutron.agent.dhcp.agent [None req-6102a43f-5d3d-44e9-adf9-0b7b892d4bf7 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:55:48Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4c8b2e0>, <neutron.agent.linux.dhcp.DictModel object at 0x7f7da4c8b0d0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4c8bd30>, <neutron.agent.linux.dhcp.DictModel object at 0x7f7da4c8b5e0>], id=b8eee3aa-b05a-441f-a007-2867c2248e4d, ip_allocation=immediate, mac_address=fa:16:3e:cb:a5:fe, name=tempest-NetworksTestDHCPv6-55150918, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:54:44Z, description=, dns_domain=, id=811e2462-6872-485d-9c09-d2dd9cb25273, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-610089291, port_security_enabled=True, project_id=cb36e48ce4264babb412d413a8bf7b9f, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=47373, qos_policy_id=None, revision_number=23, router:external=False, shared=False, standard_attr_id=1864, status=ACTIVE, subnets=['2f57ef4b-75aa-4d6f-b38d-263c7d06b45b', 'ced89249-f5e0-4e2a-93a8-e8aae046bb3f'], tags=[], tenant_id=cb36e48ce4264babb412d413a8bf7b9f, updated_at=2026-02-20T09:55:45Z, vlan_transparent=None, network_id=811e2462-6872-485d-9c09-d2dd9cb25273, port_security_enabled=True, project_id=cb36e48ce4264babb412d413a8bf7b9f, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['a85f25c3-88e2-4d71-a8d4-72a266c1246c'], standard_attr_id=2167, status=DOWN, tags=[], tenant_id=cb36e48ce4264babb412d413a8bf7b9f, updated_at=2026-02-20T09:55:48Z on network 811e2462-6872-485d-9c09-d2dd9cb25273
Feb 20 09:55:50 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:50.145 262775 INFO neutron.agent.dhcp.agent [None req-62993fc5-b5c9-4d5c-8ac5-c2c31669472a - - - - - -] DHCP configuration for ports {'aa231f4a-40a5-422d-8f36-db8c3eac43a2'} is completed
Feb 20 09:55:50 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:50.162 262775 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:55:50 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:55:50.180 2 INFO neutron.agent.securitygroups_rpc [None req-8c2a2fb8-3a21-4d90-b744-6c75dba74fae f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']
Feb 20 09:55:50 np0005625203.localdomain dnsmasq[317453]: read /var/lib/neutron/dhcp/18837b45-5571-4a62-8087-50344388e0e4/addn_hosts - 1 addresses
Feb 20 09:55:50 np0005625203.localdomain dnsmasq-dhcp[317453]: read /var/lib/neutron/dhcp/18837b45-5571-4a62-8087-50344388e0e4/host
Feb 20 09:55:50 np0005625203.localdomain podman[317479]: 2026-02-20 09:55:50.205967887 +0000 UTC m=+0.064025332 container kill 7cbc1db28c7207c0a7c50287198f63ae70605a27eaf211f2c020e3c2b839fc24 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-18837b45-5571-4a62-8087-50344388e0e4, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:55:50 np0005625203.localdomain dnsmasq-dhcp[317453]: read /var/lib/neutron/dhcp/18837b45-5571-4a62-8087-50344388e0e4/opts
Feb 20 09:55:50 np0005625203.localdomain dnsmasq[317460]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 2 addresses
Feb 20 09:55:50 np0005625203.localdomain podman[317513]: 2026-02-20 09:55:50.342822552 +0000 UTC m=+0.059692198 container kill ed24e76bfada47c9e101e5ef3fe05cd069909b48c0cac0459e47078d979ada85 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 20 09:55:50 np0005625203.localdomain dnsmasq-dhcp[317460]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/host
Feb 20 09:55:50 np0005625203.localdomain dnsmasq-dhcp[317460]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/opts
Feb 20 09:55:50 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:50.350 262775 INFO neutron.agent.dhcp.agent [None req-49edfaa9-bca9-4adf-b047-0d02b6fe2ae8 - - - - - -] DHCP configuration for ports {'0b226212-d5ce-417b-aa07-43cbbaff0908', 'cebd560e-7047-4cc1-9642-f5b7ec377d58'} is completed
Feb 20 09:55:50 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:50.494 262775 INFO neutron.agent.dhcp.agent [None req-ad56ca15-54f3-4927-8040-f50207aa8970 - - - - - -] DHCP configuration for ports {'e73b4566-51f3-4d39-a46a-dc3689d34a8b'} is completed
Feb 20 09:55:50 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:50.619 262775 INFO neutron.agent.dhcp.agent [None req-3575f22b-c39d-48b5-9ca5-053dbae30efe - - - - - -] DHCP configuration for ports {'b8eee3aa-b05a-441f-a007-2867c2248e4d'} is completed
Feb 20 09:55:50 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:55:50 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:55:50 np0005625203.localdomain dnsmasq[317460]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 0 addresses
Feb 20 09:55:50 np0005625203.localdomain podman[317578]: 2026-02-20 09:55:50.797391255 +0000 UTC m=+0.052876146 container kill ed24e76bfada47c9e101e5ef3fe05cd069909b48c0cac0459e47078d979ada85 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:55:50 np0005625203.localdomain dnsmasq-dhcp[317460]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/host
Feb 20 09:55:50 np0005625203.localdomain dnsmasq-dhcp[317460]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/opts
Feb 20 09:55:50 np0005625203.localdomain ceph-mon[296066]: pgmap v287: 177 pgs: 177 active+clean; 146 MiB data, 792 MiB used, 41 GiB / 42 GiB avail; 1.2 MiB/s rd, 19 KiB/s wr, 6 op/s
Feb 20 09:55:50 np0005625203.localdomain podman[317552]: 2026-02-20 09:55:50.785200518 +0000 UTC m=+0.090220202 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 20 09:55:50 np0005625203.localdomain podman[317552]: 2026-02-20 09:55:50.870478917 +0000 UTC m=+0.175498601 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 20 09:55:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:50.904 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:50 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:55:50 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:55:50.916 2 INFO neutron.agent.securitygroups_rpc [None req-8cf476ac-f2bb-4715-a401-741101924898 809f6e2027f9442d8bd2b94b11475b17 3a09b3407c3143c8ae0948ccb18c1e61 - - default default] Security group member updated ['b2e5856c-f1df-4bbc-8f9c-41698aa249c6']
Feb 20 09:55:50 np0005625203.localdomain podman[317548]: 2026-02-20 09:55:50.837562178 +0000 UTC m=+0.142513990 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 09:55:50 np0005625203.localdomain podman[317548]: 2026-02-20 09:55:50.972353988 +0000 UTC m=+0.277305740 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 09:55:50 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 09:55:51 np0005625203.localdomain dnsmasq[317460]: exiting on receipt of SIGTERM
Feb 20 09:55:51 np0005625203.localdomain podman[317643]: 2026-02-20 09:55:51.287073255 +0000 UTC m=+0.061058201 container kill ed24e76bfada47c9e101e5ef3fe05cd069909b48c0cac0459e47078d979ada85 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Feb 20 09:55:51 np0005625203.localdomain systemd[1]: libpod-ed24e76bfada47c9e101e5ef3fe05cd069909b48c0cac0459e47078d979ada85.scope: Deactivated successfully.
Feb 20 09:55:51 np0005625203.localdomain podman[317656]: 2026-02-20 09:55:51.365179061 +0000 UTC m=+0.059113940 container died ed24e76bfada47c9e101e5ef3fe05cd069909b48c0cac0459e47078d979ada85 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127)
Feb 20 09:55:51 np0005625203.localdomain podman[317656]: 2026-02-20 09:55:51.397073848 +0000 UTC m=+0.091008657 container cleanup ed24e76bfada47c9e101e5ef3fe05cd069909b48c0cac0459e47078d979ada85 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127)
Feb 20 09:55:51 np0005625203.localdomain systemd[1]: libpod-conmon-ed24e76bfada47c9e101e5ef3fe05cd069909b48c0cac0459e47078d979ada85.scope: Deactivated successfully.
Feb 20 09:55:51 np0005625203.localdomain podman[317658]: 2026-02-20 09:55:51.440728458 +0000 UTC m=+0.128598829 container remove ed24e76bfada47c9e101e5ef3fe05cd069909b48c0cac0459e47078d979ada85 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Feb 20 09:55:51 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:51.454 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:51 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:51Z|00261|binding|INFO|Releasing lport 0b226212-d5ce-417b-aa07-43cbbaff0908 from this chassis (sb_readonly=0)
Feb 20 09:55:51 np0005625203.localdomain kernel: device tap0b226212-d5 left promiscuous mode
Feb 20 09:55:51 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:51Z|00262|binding|INFO|Setting lport 0b226212-d5ce-417b-aa07-43cbbaff0908 down in Southbound
Feb 20 09:55:51 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:51.464 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8::f816:3eff:fe5d:82e2/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625203.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=0b226212-d5ce-417b-aa07-43cbbaff0908) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:55:51 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:51.466 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 0b226212-d5ce-417b-aa07-43cbbaff0908 in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 unbound from our chassis
Feb 20 09:55:51 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:51.469 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 811e2462-6872-485d-9c09-d2dd9cb25273, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:55:51 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:51.471 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[d71e8532-d1e3-4ca5-8775-bb313b7dda52]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:55:51 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:51.478 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:51 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:51.685 262775 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:55:48Z, description=, device_id=7f90544d-5ad8-4f1b-8f43-7851902677f5, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4dd5ac0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da56d4d00>], id=e73b4566-51f3-4d39-a46a-dc3689d34a8b, ip_allocation=immediate, mac_address=fa:16:3e:ac:10:b0, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:55:45Z, description=, dns_domain=, id=18837b45-5571-4a62-8087-50344388e0e4, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-1640945430, port_security_enabled=True, project_id=1c44e13adebb4610b7c0cd2fdc62a5b7, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=36039, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2141, status=ACTIVE, subnets=['2a7363b1-c535-4a39-adf7-97bf71263713'], tags=[], tenant_id=1c44e13adebb4610b7c0cd2fdc62a5b7, updated_at=2026-02-20T09:55:47Z, vlan_transparent=None, network_id=18837b45-5571-4a62-8087-50344388e0e4, port_security_enabled=False, project_id=1c44e13adebb4610b7c0cd2fdc62a5b7, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2170, status=DOWN, tags=[], tenant_id=1c44e13adebb4610b7c0cd2fdc62a5b7, updated_at=2026-02-20T09:55:49Z on network 18837b45-5571-4a62-8087-50344388e0e4
Feb 20 09:55:51 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 09:55:51 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:55:51 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:51.762 262775 INFO neutron.agent.dhcp.agent [None req-00fe30e7-6437-4854-b364-5407eba786dc - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:55:51 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:55:51 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-896d8842d34af147072164064c097e3f1dc97993e1b0f4109ce5c01aff794dd1-merged.mount: Deactivated successfully.
Feb 20 09:55:51 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ed24e76bfada47c9e101e5ef3fe05cd069909b48c0cac0459e47078d979ada85-userdata-shm.mount: Deactivated successfully.
Feb 20 09:55:51 np0005625203.localdomain systemd[1]: run-netns-qdhcp\x2d811e2462\x2d6872\x2d485d\x2d9c09\x2dd2dd9cb25273.mount: Deactivated successfully.
Feb 20 09:55:51 np0005625203.localdomain dnsmasq[317453]: read /var/lib/neutron/dhcp/18837b45-5571-4a62-8087-50344388e0e4/addn_hosts - 1 addresses
Feb 20 09:55:51 np0005625203.localdomain dnsmasq-dhcp[317453]: read /var/lib/neutron/dhcp/18837b45-5571-4a62-8087-50344388e0e4/host
Feb 20 09:55:51 np0005625203.localdomain dnsmasq-dhcp[317453]: read /var/lib/neutron/dhcp/18837b45-5571-4a62-8087-50344388e0e4/opts
Feb 20 09:55:51 np0005625203.localdomain podman[317704]: 2026-02-20 09:55:51.890238106 +0000 UTC m=+0.058440519 container kill 7cbc1db28c7207c0a7c50287198f63ae70605a27eaf211f2c020e3c2b839fc24 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-18837b45-5571-4a62-8087-50344388e0e4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 20 09:55:52 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:52.182 262775 INFO neutron.agent.dhcp.agent [None req-a7504d83-e4ef-4af7-bcd4-30dcb47a1549 - - - - - -] DHCP configuration for ports {'e73b4566-51f3-4d39-a46a-dc3689d34a8b'} is completed
Feb 20 09:55:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:52.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:55:52 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:52.621 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:4e:b0 2001:db8::f816:3eff:fef0:4eb0'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fef0:4eb0/64', 'neutron:device_id': 'ovnmeta-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=cebd560e-7047-4cc1-9642-f5b7ec377d58) old=Port_Binding(mac=['fa:16:3e:f0:4e:b0 10.100.0.2 2001:db8::f816:3eff:fef0:4eb0'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fef0:4eb0/64', 'neutron:device_id': 'ovnmeta-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:55:52 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:52.623 161112 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port cebd560e-7047-4cc1-9642-f5b7ec377d58 in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 updated
Feb 20 09:55:52 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:52.628 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 811e2462-6872-485d-9c09-d2dd9cb25273, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:55:52 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:52.629 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[bfaaef53-2a9f-43a7-9bb0-bee3ddd9b9fd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:55:52 np0005625203.localdomain ceph-mon[296066]: pgmap v288: 177 pgs: 177 active+clean; 146 MiB data, 792 MiB used, 41 GiB / 42 GiB avail; 2.6 MiB/s rd, 16 KiB/s wr, 14 op/s
Feb 20 09:55:52 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5d6cbe8b-e61d-44af-be13-78bc30a91a96", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 09:55:52 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5d6cbe8b-e61d-44af-be13-78bc30a91a96", "format": "json"}]: dispatch
Feb 20 09:55:53 np0005625203.localdomain sshd[317725]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:55:53 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:53.552 262775 INFO neutron.agent.linux.ip_lib [None req-ae5c5567-c58a-4640-8a90-62767c3d1987 - - - - - -] Device tapdfde524f-61 cannot be used as it has no MAC address
Feb 20 09:55:53 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:53.572 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:53 np0005625203.localdomain kernel: device tapdfde524f-61 entered promiscuous mode
Feb 20 09:55:53 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581353.5808] manager: (tapdfde524f-61): new Generic device (/org/freedesktop/NetworkManager/Devices/52)
Feb 20 09:55:53 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:53.581 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:53 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:53Z|00263|binding|INFO|Claiming lport dfde524f-617a-4f02-912a-fdbeb9d1d61e for this chassis.
Feb 20 09:55:53 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:53Z|00264|binding|INFO|dfde524f-617a-4f02-912a-fdbeb9d1d61e: Claiming unknown
Feb 20 09:55:53 np0005625203.localdomain systemd-udevd[317737]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 09:55:53 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:53.588 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feff:f8fb/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=dfde524f-617a-4f02-912a-fdbeb9d1d61e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:55:53 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:53.590 161112 INFO neutron.agent.ovn.metadata.agent [-] Port dfde524f-617a-4f02-912a-fdbeb9d1d61e in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 bound to our chassis
Feb 20 09:55:53 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:53Z|00265|binding|INFO|Setting lport dfde524f-617a-4f02-912a-fdbeb9d1d61e ovn-installed in OVS
Feb 20 09:55:53 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:53Z|00266|binding|INFO|Setting lport dfde524f-617a-4f02-912a-fdbeb9d1d61e up in Southbound
Feb 20 09:55:53 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:53.592 161112 DEBUG neutron.agent.ovn.metadata.agent [-] Port cc30f213-7bf8-4439-aa79-6da7bcfae648 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Feb 20 09:55:53 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:53.592 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 811e2462-6872-485d-9c09-d2dd9cb25273, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:55:53 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:53.592 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[6e8955dd-d309-4489-888d-9ddf2fa6139a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:55:53 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:53.593 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:53 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:53.595 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:53 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapdfde524f-61: No such device
Feb 20 09:55:53 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:53.621 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:53 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapdfde524f-61: No such device
Feb 20 09:55:53 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapdfde524f-61: No such device
Feb 20 09:55:53 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapdfde524f-61: No such device
Feb 20 09:55:53 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapdfde524f-61: No such device
Feb 20 09:55:53 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapdfde524f-61: No such device
Feb 20 09:55:53 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapdfde524f-61: No such device
Feb 20 09:55:53 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tapdfde524f-61: No such device
Feb 20 09:55:53 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:53.672 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:53 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:53.707 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:54 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:54.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:55:54 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:55:54 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:54.371 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:55:54 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:54.371 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:55:54 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:54.372 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:55:54 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:54.372 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Auditing locally available compute resources for np0005625203.localdomain (node: np0005625203.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 20 09:55:54 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:54.372 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:55:54 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:54.461 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:54 np0005625203.localdomain podman[317809]: 2026-02-20 09:55:54.565308186 +0000 UTC m=+0.096388583 container create 0b512801eedbb9c8faa22d00ddefb3ec31a34fcc7f0bf11840e66165594e3306 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 20 09:55:54 np0005625203.localdomain systemd[1]: Started libpod-conmon-0b512801eedbb9c8faa22d00ddefb3ec31a34fcc7f0bf11840e66165594e3306.scope.
Feb 20 09:55:54 np0005625203.localdomain podman[317809]: 2026-02-20 09:55:54.519963613 +0000 UTC m=+0.051044040 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:55:54 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:55:54 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a82877c7e7e974d548a2e06a268e43b6e4108064ac88caec2836fea0fb6d6279/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:55:54 np0005625203.localdomain podman[317809]: 2026-02-20 09:55:54.648955424 +0000 UTC m=+0.180035821 container init 0b512801eedbb9c8faa22d00ddefb3ec31a34fcc7f0bf11840e66165594e3306 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0)
Feb 20 09:55:54 np0005625203.localdomain podman[317809]: 2026-02-20 09:55:54.662533574 +0000 UTC m=+0.193613961 container start 0b512801eedbb9c8faa22d00ddefb3ec31a34fcc7f0bf11840e66165594e3306 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.vendor=CentOS)
Feb 20 09:55:54 np0005625203.localdomain dnsmasq[317845]: started, version 2.85 cachesize 150
Feb 20 09:55:54 np0005625203.localdomain dnsmasq[317845]: DNS service limited to local subnets
Feb 20 09:55:54 np0005625203.localdomain dnsmasq[317845]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:55:54 np0005625203.localdomain dnsmasq[317845]: warning: no upstream servers configured
Feb 20 09:55:54 np0005625203.localdomain dnsmasq-dhcp[317845]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Feb 20 09:55:54 np0005625203.localdomain dnsmasq[317845]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 0 addresses
Feb 20 09:55:54 np0005625203.localdomain dnsmasq-dhcp[317845]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/host
Feb 20 09:55:54 np0005625203.localdomain dnsmasq-dhcp[317845]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/opts
Feb 20 09:55:54 np0005625203.localdomain sshd[317725]: Invalid user oracle from 103.48.192.48 port 32980
Feb 20 09:55:54 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 20 09:55:54 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/223948785' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:55:54 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:54.846 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:55:54 np0005625203.localdomain ceph-mon[296066]: pgmap v289: 177 pgs: 177 active+clean; 146 MiB data, 792 MiB used, 41 GiB / 42 GiB avail; 2.1 MiB/s rd, 13 KiB/s wr, 11 op/s
Feb 20 09:55:54 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/223948785' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:55:54 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:54.867 262775 INFO neutron.agent.dhcp.agent [None req-4e9080a2-43e8-4cf2-b940-6e0b02664f0c - - - - - -] DHCP configuration for ports {'cebd560e-7047-4cc1-9642-f5b7ec377d58'} is completed
Feb 20 09:55:54 np0005625203.localdomain sshd[317725]: Received disconnect from 103.48.192.48 port 32980:11: Bye Bye [preauth]
Feb 20 09:55:54 np0005625203.localdomain sshd[317725]: Disconnected from invalid user oracle 103.48.192.48 port 32980 [preauth]
Feb 20 09:55:55 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:55.051 279640 WARNING nova.virt.libvirt.driver [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 20 09:55:55 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:55.053 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Hypervisor/Node resource view: name=np0005625203.localdomain free_ram=11701MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 20 09:55:55 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:55.053 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:55:55 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:55.053 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:55:55 np0005625203.localdomain dnsmasq[317845]: exiting on receipt of SIGTERM
Feb 20 09:55:55 np0005625203.localdomain podman[317866]: 2026-02-20 09:55:55.077859393 +0000 UTC m=+0.053416193 container kill 0b512801eedbb9c8faa22d00ddefb3ec31a34fcc7f0bf11840e66165594e3306 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 20 09:55:55 np0005625203.localdomain systemd[1]: libpod-0b512801eedbb9c8faa22d00ddefb3ec31a34fcc7f0bf11840e66165594e3306.scope: Deactivated successfully.
Feb 20 09:55:55 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:55.133 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 20 09:55:55 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:55.133 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Final resource view: name=np0005625203.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 20 09:55:55 np0005625203.localdomain podman[317879]: 2026-02-20 09:55:55.144803805 +0000 UTC m=+0.054588740 container died 0b512801eedbb9c8faa22d00ddefb3ec31a34fcc7f0bf11840e66165594e3306 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:55:55 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:55.177 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:55:55 np0005625203.localdomain podman[317879]: 2026-02-20 09:55:55.18279694 +0000 UTC m=+0.092581825 container cleanup 0b512801eedbb9c8faa22d00ddefb3ec31a34fcc7f0bf11840e66165594e3306 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:55:55 np0005625203.localdomain systemd[1]: libpod-conmon-0b512801eedbb9c8faa22d00ddefb3ec31a34fcc7f0bf11840e66165594e3306.scope: Deactivated successfully.
Feb 20 09:55:55 np0005625203.localdomain podman[317881]: 2026-02-20 09:55:55.213347606 +0000 UTC m=+0.114995159 container remove 0b512801eedbb9c8faa22d00ddefb3ec31a34fcc7f0bf11840e66165594e3306 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:55:55 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 20 09:55:55 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/469548279' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:55:55 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 20 09:55:55 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/469548279' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:55:55 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-a82877c7e7e974d548a2e06a268e43b6e4108064ac88caec2836fea0fb6d6279-merged.mount: Deactivated successfully.
Feb 20 09:55:55 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0b512801eedbb9c8faa22d00ddefb3ec31a34fcc7f0bf11840e66165594e3306-userdata-shm.mount: Deactivated successfully.
Feb 20 09:55:55 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 20 09:55:55 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2497222268' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:55:55 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:55.625 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:55:55 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:55.631 279640 DEBUG nova.compute.provider_tree [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Inventory has not changed in ProviderTree for provider: e5d5157a-2df2-4f51-b5fb-cd2da3a8584e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 20 09:55:55 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:55.672 279640 DEBUG nova.scheduler.client.report [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Inventory has not changed for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e based on inventory data: {'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 20 09:55:55 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:55.674 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Compute_service record updated for np0005625203.localdomain:np0005625203.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 20 09:55:55 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:55.675 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:55:55 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:55.675 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:55:55 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:55.676 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 20 09:55:55 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:55.695 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 20 09:55:55 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "5d6cbe8b-e61d-44af-be13-78bc30a91a96", "snap_name": "da20d4b5-2abc-49bc-a78c-39ce3cdadf16", "format": "json"}]: dispatch
Feb 20 09:55:55 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/469548279' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:55:55 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/469548279' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:55:55 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/2497222268' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:55:55 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:55.908 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:56 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:55:56.010 2 INFO neutron.agent.securitygroups_rpc [None req-29e4bc85-2be1-46ee-a8e4-a169ea695f47 469c956856df4f7fa0303f662df3cdef 62dd1d7e0cf547678612304aba2895e2 - - default default] Security group member updated ['72ed92b6-af24-4274-854b-a52220405faf']
Feb 20 09:55:56 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:56.030 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:4e:b0 10.100.0.2 2001:db8::f816:3eff:fef0:4eb0'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fef0:4eb0/64', 'neutron:device_id': 'ovnmeta-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=cebd560e-7047-4cc1-9642-f5b7ec377d58) old=Port_Binding(mac=['fa:16:3e:f0:4e:b0 2001:db8::f816:3eff:fef0:4eb0'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fef0:4eb0/64', 'neutron:device_id': 'ovnmeta-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:55:56 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:56.032 161112 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port cebd560e-7047-4cc1-9642-f5b7ec377d58 in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 updated
Feb 20 09:55:56 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:56.036 161112 DEBUG neutron.agent.ovn.metadata.agent [-] Port cc30f213-7bf8-4439-aa79-6da7bcfae648 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Feb 20 09:55:56 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:56.036 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 811e2462-6872-485d-9c09-d2dd9cb25273, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:55:56 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:56.038 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[833f2509-db45-4aef-8052-7b9879d0d9a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:55:56 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:56.448 262775 INFO neutron.agent.linux.ip_lib [None req-cf18d18b-1016-470a-ac1a-b72c1d11e2a0 - - - - - -] Device tap11b269d4-75 cannot be used as it has no MAC address
Feb 20 09:55:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:56.480 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:56 np0005625203.localdomain kernel: device tap11b269d4-75 entered promiscuous mode
Feb 20 09:55:56 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581356.4897] manager: (tap11b269d4-75): new Generic device (/org/freedesktop/NetworkManager/Devices/53)
Feb 20 09:55:56 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:56Z|00267|binding|INFO|Claiming lport 11b269d4-7522-4683-9ab1-80891c8abc8d for this chassis.
Feb 20 09:55:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:56.491 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:56 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:56Z|00268|binding|INFO|11b269d4-7522-4683-9ab1-80891c8abc8d: Claiming unknown
Feb 20 09:55:56 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:56.501 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:3::2/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-bf4780b3-f804-48fa-9310-009c8fa52c1e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bf4780b3-f804-48fa-9310-009c8fa52c1e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c44e13adebb4610b7c0cd2fdc62a5b7', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b7e4e0ac-f792-4e51-9629-bc5f266735a6, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=11b269d4-7522-4683-9ab1-80891c8abc8d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:55:56 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:56.503 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 11b269d4-7522-4683-9ab1-80891c8abc8d in datapath bf4780b3-f804-48fa-9310-009c8fa52c1e bound to our chassis
Feb 20 09:55:56 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:56.506 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network bf4780b3-f804-48fa-9310-009c8fa52c1e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:55:56 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:56.507 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[5b17a585-121a-4093-b3ce-67e08bb76735]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:55:56 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:56Z|00269|binding|INFO|Setting lport 11b269d4-7522-4683-9ab1-80891c8abc8d ovn-installed in OVS
Feb 20 09:55:56 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:55:56Z|00270|binding|INFO|Setting lport 11b269d4-7522-4683-9ab1-80891c8abc8d up in Southbound
Feb 20 09:55:56 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:55:56.539 2 INFO neutron.agent.securitygroups_rpc [None req-35ffc450-9844-472b-bd23-e1de49029696 809f6e2027f9442d8bd2b94b11475b17 3a09b3407c3143c8ae0948ccb18c1e61 - - default default] Security group member updated ['b2e5856c-f1df-4bbc-8f9c-41698aa249c6']
Feb 20 09:55:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:56.539 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:56.595 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:56.629 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:56 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:56.749 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'c2:c2:14', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '46:d0:69:88:97:47'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:55:56 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:55:56.750 161112 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 20 09:55:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:56.753 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:56 np0005625203.localdomain ceph-mon[296066]: pgmap v290: 177 pgs: 177 active+clean; 192 MiB data, 856 MiB used, 41 GiB / 42 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 52 op/s
Feb 20 09:55:56 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/1770512086' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:55:56 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 09:55:56 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:55:56 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:55:56.992 2 INFO neutron.agent.securitygroups_rpc [None req-add9b10e-24c8-47e6-9727-38256205ffd5 f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']
Feb 20 09:55:57 np0005625203.localdomain podman[318020]: 
Feb 20 09:55:57 np0005625203.localdomain podman[318020]: 2026-02-20 09:55:57.490839638 +0000 UTC m=+0.104705031 container create b806bd85a31a708f3787f558aaba0d3beb31679d5a2d4bd4707371d459a50521 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:55:57 np0005625203.localdomain systemd[1]: Started libpod-conmon-b806bd85a31a708f3787f558aaba0d3beb31679d5a2d4bd4707371d459a50521.scope.
Feb 20 09:55:57 np0005625203.localdomain podman[318020]: 2026-02-20 09:55:57.446047702 +0000 UTC m=+0.059913165 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:55:57 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:55:57 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99add1eece590115829b18f5b1150c9938019d6d754fb5fe4b83cc96e92c525c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:55:57 np0005625203.localdomain podman[318020]: 2026-02-20 09:55:57.576679393 +0000 UTC m=+0.190544776 container init b806bd85a31a708f3787f558aaba0d3beb31679d5a2d4bd4707371d459a50521 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:55:57 np0005625203.localdomain podman[318020]: 2026-02-20 09:55:57.584594128 +0000 UTC m=+0.198459521 container start b806bd85a31a708f3787f558aaba0d3beb31679d5a2d4bd4707371d459a50521 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Feb 20 09:55:57 np0005625203.localdomain dnsmasq[318070]: started, version 2.85 cachesize 150
Feb 20 09:55:57 np0005625203.localdomain dnsmasq[318070]: DNS service limited to local subnets
Feb 20 09:55:57 np0005625203.localdomain dnsmasq[318070]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:55:57 np0005625203.localdomain dnsmasq[318070]: warning: no upstream servers configured
Feb 20 09:55:57 np0005625203.localdomain dnsmasq-dhcp[318070]: DHCP, static leases only on 10.100.0.0, lease time 1d
Feb 20 09:55:57 np0005625203.localdomain dnsmasq-dhcp[318070]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Feb 20 09:55:57 np0005625203.localdomain dnsmasq[318070]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 0 addresses
Feb 20 09:55:57 np0005625203.localdomain dnsmasq-dhcp[318070]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/host
Feb 20 09:55:57 np0005625203.localdomain dnsmasq-dhcp[318070]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/opts
Feb 20 09:55:57 np0005625203.localdomain podman[318054]: 
Feb 20 09:55:57 np0005625203.localdomain podman[318054]: 2026-02-20 09:55:57.622971626 +0000 UTC m=+0.113857644 container create 2da2134a70b73955a1c3d97c4391ba667d14263e44172ec658cef6a8d908a4d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bf4780b3-f804-48fa-9310-009c8fa52c1e, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:55:57 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:57.642 262775 INFO neutron.agent.dhcp.agent [None req-06ebd3c9-a64e-4a6b-9e09-a9088a37233b - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:55:56Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4c85a30>, <neutron.agent.linux.dhcp.DictModel object at 0x7f7da4c85040>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4c85a00>, <neutron.agent.linux.dhcp.DictModel object at 0x7f7da4c85520>], id=7384dba4-3b04-4f3f-833f-22f7ec4a916f, ip_allocation=immediate, mac_address=fa:16:3e:bc:98:08, name=tempest-NetworksTestDHCPv6-1413748907, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:54:44Z, description=, dns_domain=, id=811e2462-6872-485d-9c09-d2dd9cb25273, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-610089291, port_security_enabled=True, project_id=cb36e48ce4264babb412d413a8bf7b9f, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=47373, qos_policy_id=None, revision_number=27, router:external=False, shared=False, standard_attr_id=1864, status=ACTIVE, subnets=['43356d50-c0e0-4502-9b66-c436681e0cb9', 'db01e8be-e939-4d4e-803b-22bf82ae36b4'], tags=[], tenant_id=cb36e48ce4264babb412d413a8bf7b9f, updated_at=2026-02-20T09:55:53Z, vlan_transparent=None, network_id=811e2462-6872-485d-9c09-d2dd9cb25273, port_security_enabled=True, project_id=cb36e48ce4264babb412d413a8bf7b9f, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['a85f25c3-88e2-4d71-a8d4-72a266c1246c'], 
standard_attr_id=2221, status=DOWN, tags=[], tenant_id=cb36e48ce4264babb412d413a8bf7b9f, updated_at=2026-02-20T09:55:56Z on network 811e2462-6872-485d-9c09-d2dd9cb25273
Feb 20 09:55:57 np0005625203.localdomain systemd[1]: Started libpod-conmon-2da2134a70b73955a1c3d97c4391ba667d14263e44172ec658cef6a8d908a4d3.scope.
Feb 20 09:55:57 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:55:57 np0005625203.localdomain podman[318054]: 2026-02-20 09:55:57.573389931 +0000 UTC m=+0.064275989 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:55:57 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6c086024eef014804d116e25640b88f867a9d1a6ac7654915f84b2720bf66d3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:55:57 np0005625203.localdomain podman[318054]: 2026-02-20 09:55:57.683745956 +0000 UTC m=+0.174631964 container init 2da2134a70b73955a1c3d97c4391ba667d14263e44172ec658cef6a8d908a4d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bf4780b3-f804-48fa-9310-009c8fa52c1e, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb 20 09:55:57 np0005625203.localdomain podman[318054]: 2026-02-20 09:55:57.693837888 +0000 UTC m=+0.184723906 container start 2da2134a70b73955a1c3d97c4391ba667d14263e44172ec658cef6a8d908a4d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bf4780b3-f804-48fa-9310-009c8fa52c1e, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:55:57 np0005625203.localdomain dnsmasq[318079]: started, version 2.85 cachesize 150
Feb 20 09:55:57 np0005625203.localdomain dnsmasq[318079]: DNS service limited to local subnets
Feb 20 09:55:57 np0005625203.localdomain dnsmasq[318079]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:55:57 np0005625203.localdomain dnsmasq[318079]: warning: no upstream servers configured
Feb 20 09:55:57 np0005625203.localdomain dnsmasq-dhcp[318079]: DHCPv6, static leases only on 2001:db8:3::, lease time 1d
Feb 20 09:55:57 np0005625203.localdomain dnsmasq[318079]: read /var/lib/neutron/dhcp/bf4780b3-f804-48fa-9310-009c8fa52c1e/addn_hosts - 0 addresses
Feb 20 09:55:57 np0005625203.localdomain dnsmasq-dhcp[318079]: read /var/lib/neutron/dhcp/bf4780b3-f804-48fa-9310-009c8fa52c1e/host
Feb 20 09:55:57 np0005625203.localdomain dnsmasq-dhcp[318079]: read /var/lib/neutron/dhcp/bf4780b3-f804-48fa-9310-009c8fa52c1e/opts
Feb 20 09:55:57 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:57.720 262775 INFO neutron.agent.dhcp.agent [None req-6ac8e99a-1779-419e-9eca-b0b4e2a512c9 - - - - - -] DHCP configuration for ports {'dfde524f-617a-4f02-912a-fdbeb9d1d61e', 'cebd560e-7047-4cc1-9642-f5b7ec377d58'} is completed
Feb 20 09:55:57 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:57.768 262775 INFO neutron.agent.dhcp.agent [None req-cf18d18b-1016-470a-ac1a-b72c1d11e2a0 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:55:55Z, description=, device_id=7f90544d-5ad8-4f1b-8f43-7851902677f5, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4f640a0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4ed1af0>], id=9aefe73b-82f5-4c10-a448-4878a55ca0bb, ip_allocation=immediate, mac_address=fa:16:3e:8c:d5:16, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:55:52Z, description=, dns_domain=, id=bf4780b3-f804-48fa-9310-009c8fa52c1e, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-416237820, port_security_enabled=True, project_id=1c44e13adebb4610b7c0cd2fdc62a5b7, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=58920, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2178, status=ACTIVE, subnets=['43cb6b99-a821-4352-b742-1b146fc4ff9e'], tags=[], tenant_id=1c44e13adebb4610b7c0cd2fdc62a5b7, updated_at=2026-02-20T09:55:54Z, vlan_transparent=None, network_id=bf4780b3-f804-48fa-9310-009c8fa52c1e, port_security_enabled=False, project_id=1c44e13adebb4610b7c0cd2fdc62a5b7, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2212, status=DOWN, tags=[], tenant_id=1c44e13adebb4610b7c0cd2fdc62a5b7, updated_at=2026-02-20T09:55:56Z on network bf4780b3-f804-48fa-9310-009c8fa52c1e
Feb 20 09:55:57 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:57.853 262775 INFO neutron.agent.dhcp.agent [None req-626d952b-77aa-465e-aa33-30dd66b49bd3 - - - - - -] DHCP configuration for ports {'235459d1-0522-4741-8439-8af0e6ad0365'} is completed
Feb 20 09:55:57 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9754040c-efe1-4b31-bb60-2e1a242177cd", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 09:55:57 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9754040c-efe1-4b31-bb60-2e1a242177cd", "format": "json"}]: dispatch
Feb 20 09:55:57 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:55:57 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/438322960' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:55:57 np0005625203.localdomain dnsmasq[318070]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 2 addresses
Feb 20 09:55:57 np0005625203.localdomain podman[318101]: 2026-02-20 09:55:57.936482155 +0000 UTC m=+0.118131655 container kill b806bd85a31a708f3787f558aaba0d3beb31679d5a2d4bd4707371d459a50521 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 20 09:55:57 np0005625203.localdomain dnsmasq-dhcp[318070]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/host
Feb 20 09:55:57 np0005625203.localdomain dnsmasq-dhcp[318070]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/opts
Feb 20 09:55:57 np0005625203.localdomain dnsmasq[318079]: read /var/lib/neutron/dhcp/bf4780b3-f804-48fa-9310-009c8fa52c1e/addn_hosts - 1 addresses
Feb 20 09:55:57 np0005625203.localdomain dnsmasq-dhcp[318079]: read /var/lib/neutron/dhcp/bf4780b3-f804-48fa-9310-009c8fa52c1e/host
Feb 20 09:55:57 np0005625203.localdomain dnsmasq-dhcp[318079]: read /var/lib/neutron/dhcp/bf4780b3-f804-48fa-9310-009c8fa52c1e/opts
Feb 20 09:55:57 np0005625203.localdomain podman[318126]: 2026-02-20 09:55:57.983473999 +0000 UTC m=+0.071776782 container kill 2da2134a70b73955a1c3d97c4391ba667d14263e44172ec658cef6a8d908a4d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bf4780b3-f804-48fa-9310-009c8fa52c1e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Feb 20 09:55:58 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:58.358 262775 INFO neutron.agent.dhcp.agent [None req-027540d9-74e6-42a2-9c9f-724f4204847b - - - - - -] DHCP configuration for ports {'9aefe73b-82f5-4c10-a448-4878a55ca0bb', '7384dba4-3b04-4f3f-833f-22f7ec4a916f'} is completed
Feb 20 09:55:58 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:55:58.487 2 INFO neutron.agent.securitygroups_rpc [None req-a5703a13-6375-4e7f-aba2-f531a9b12f0a f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']
Feb 20 09:55:58 np0005625203.localdomain dnsmasq[318070]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 0 addresses
Feb 20 09:55:58 np0005625203.localdomain dnsmasq-dhcp[318070]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/host
Feb 20 09:55:58 np0005625203.localdomain dnsmasq-dhcp[318070]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/opts
Feb 20 09:55:58 np0005625203.localdomain podman[318175]: 2026-02-20 09:55:58.754986998 +0000 UTC m=+0.062866176 container kill b806bd85a31a708f3787f558aaba0d3beb31679d5a2d4bd4707371d459a50521 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb 20 09:55:58 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:58.822 262775 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:55:55Z, description=, device_id=7f90544d-5ad8-4f1b-8f43-7851902677f5, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4e1a490>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4e1a220>], id=9aefe73b-82f5-4c10-a448-4878a55ca0bb, ip_allocation=immediate, mac_address=fa:16:3e:8c:d5:16, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:55:52Z, description=, dns_domain=, id=bf4780b3-f804-48fa-9310-009c8fa52c1e, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-RoutersIpV6Test-416237820, port_security_enabled=True, project_id=1c44e13adebb4610b7c0cd2fdc62a5b7, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=58920, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2178, status=ACTIVE, subnets=['43cb6b99-a821-4352-b742-1b146fc4ff9e'], tags=[], tenant_id=1c44e13adebb4610b7c0cd2fdc62a5b7, updated_at=2026-02-20T09:55:54Z, vlan_transparent=None, network_id=bf4780b3-f804-48fa-9310-009c8fa52c1e, port_security_enabled=False, project_id=1c44e13adebb4610b7c0cd2fdc62a5b7, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2212, status=DOWN, tags=[], tenant_id=1c44e13adebb4610b7c0cd2fdc62a5b7, updated_at=2026-02-20T09:55:56Z on network bf4780b3-f804-48fa-9310-009c8fa52c1e
Feb 20 09:55:58 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:55:58.868 2 INFO neutron.agent.securitygroups_rpc [None req-ca73f77f-8256-4f4e-b317-bc2e72fd527f 469c956856df4f7fa0303f662df3cdef 62dd1d7e0cf547678612304aba2895e2 - - default default] Security group member updated ['72ed92b6-af24-4274-854b-a52220405faf']
Feb 20 09:55:58 np0005625203.localdomain ceph-mon[296066]: pgmap v291: 177 pgs: 177 active+clean; 192 MiB data, 856 MiB used, 41 GiB / 42 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 52 op/s
Feb 20 09:55:58 np0005625203.localdomain podman[240359]: time="2026-02-20T09:55:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:55:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:55:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 163270 "" "Go-http-client/1.1"
Feb 20 09:55:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:55:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 20197 "" "Go-http-client/1.1"
Feb 20 09:55:59 np0005625203.localdomain podman[318216]: 2026-02-20 09:55:59.074356768 +0000 UTC m=+0.109739096 container kill 2da2134a70b73955a1c3d97c4391ba667d14263e44172ec658cef6a8d908a4d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bf4780b3-f804-48fa-9310-009c8fa52c1e, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:55:59 np0005625203.localdomain dnsmasq[318079]: read /var/lib/neutron/dhcp/bf4780b3-f804-48fa-9310-009c8fa52c1e/addn_hosts - 1 addresses
Feb 20 09:55:59 np0005625203.localdomain dnsmasq-dhcp[318079]: read /var/lib/neutron/dhcp/bf4780b3-f804-48fa-9310-009c8fa52c1e/host
Feb 20 09:55:59 np0005625203.localdomain dnsmasq-dhcp[318079]: read /var/lib/neutron/dhcp/bf4780b3-f804-48fa-9310-009c8fa52c1e/opts
Feb 20 09:55:59 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:55:59.304 262775 INFO neutron.agent.dhcp.agent [None req-c6abd989-03bb-44ca-82fe-a33063f99481 - - - - - -] DHCP configuration for ports {'9aefe73b-82f5-4c10-a448-4878a55ca0bb'} is completed
Feb 20 09:55:59 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:55:59 np0005625203.localdomain sshd[318253]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:55:59 np0005625203.localdomain dnsmasq[318070]: exiting on receipt of SIGTERM
Feb 20 09:55:59 np0005625203.localdomain podman[318255]: 2026-02-20 09:55:59.445817151 +0000 UTC m=+0.062025790 container kill b806bd85a31a708f3787f558aaba0d3beb31679d5a2d4bd4707371d459a50521 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Feb 20 09:55:59 np0005625203.localdomain systemd[1]: libpod-b806bd85a31a708f3787f558aaba0d3beb31679d5a2d4bd4707371d459a50521.scope: Deactivated successfully.
Feb 20 09:55:59 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:55:59.500 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:55:59 np0005625203.localdomain podman[318267]: 2026-02-20 09:55:59.520120269 +0000 UTC m=+0.061558516 container died b806bd85a31a708f3787f558aaba0d3beb31679d5a2d4bd4707371d459a50521 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Feb 20 09:55:59 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b806bd85a31a708f3787f558aaba0d3beb31679d5a2d4bd4707371d459a50521-userdata-shm.mount: Deactivated successfully.
Feb 20 09:55:59 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-99add1eece590115829b18f5b1150c9938019d6d754fb5fe4b83cc96e92c525c-merged.mount: Deactivated successfully.
Feb 20 09:55:59 np0005625203.localdomain podman[318267]: 2026-02-20 09:55:59.553313926 +0000 UTC m=+0.094752143 container cleanup b806bd85a31a708f3787f558aaba0d3beb31679d5a2d4bd4707371d459a50521 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2)
Feb 20 09:55:59 np0005625203.localdomain systemd[1]: libpod-conmon-b806bd85a31a708f3787f558aaba0d3beb31679d5a2d4bd4707371d459a50521.scope: Deactivated successfully.
Feb 20 09:55:59 np0005625203.localdomain sshd[318253]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:55:59 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:55:59 np0005625203.localdomain podman[318273]: 2026-02-20 09:55:59.609192975 +0000 UTC m=+0.135703629 container remove b806bd85a31a708f3787f558aaba0d3beb31679d5a2d4bd4707371d459a50521 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Feb 20 09:55:59 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:55:59.674 2 INFO neutron.agent.securitygroups_rpc [None req-ca73f77f-8256-4f4e-b317-bc2e72fd527f 469c956856df4f7fa0303f662df3cdef 62dd1d7e0cf547678612304aba2895e2 - - default default] Security group member updated ['72ed92b6-af24-4274-854b-a52220405faf']
Feb 20 09:55:59 np0005625203.localdomain podman[318295]: 2026-02-20 09:55:59.691505411 +0000 UTC m=+0.086646862 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Feb 20 09:55:59 np0005625203.localdomain podman[318295]: 2026-02-20 09:55:59.722590203 +0000 UTC m=+0.117731644 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Feb 20 09:55:59 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:55:59 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "5d6cbe8b-e61d-44af-be13-78bc30a91a96", "snap_name": "da20d4b5-2abc-49bc-a78c-39ce3cdadf16", "target_sub_name": "f9ac42b7-680c-41fc-8784-6176baa738f7", "format": "json"}]: dispatch
Feb 20 09:55:59 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f9ac42b7-680c-41fc-8784-6176baa738f7", "format": "json"}]: dispatch
Feb 20 09:56:00 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:56:00.498 2 INFO neutron.agent.securitygroups_rpc [None req-78688fc9-1f65-4ea8-8870-27e8d247cb32 469c956856df4f7fa0303f662df3cdef 62dd1d7e0cf547678612304aba2895e2 - - default default] Security group member updated ['72ed92b6-af24-4274-854b-a52220405faf']
Feb 20 09:56:00 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:00.543 262775 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:56:00 np0005625203.localdomain podman[318363]: 
Feb 20 09:56:00 np0005625203.localdomain podman[318363]: 2026-02-20 09:56:00.593555998 +0000 UTC m=+0.101397298 container create c1c61d51368f6f394268ceadc1190c8d3499f74f1884e36fabab7ca0f5eb2272 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:56:00 np0005625203.localdomain systemd[1]: Started libpod-conmon-c1c61d51368f6f394268ceadc1190c8d3499f74f1884e36fabab7ca0f5eb2272.scope.
Feb 20 09:56:00 np0005625203.localdomain podman[318363]: 2026-02-20 09:56:00.546990568 +0000 UTC m=+0.054831888 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:56:00 np0005625203.localdomain systemd[1]: tmp-crun.cpMDJp.mount: Deactivated successfully.
Feb 20 09:56:00 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:56:00 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a8a6860df01a6944303fa9a78bfa3aaeecb1b3ae2933f6203914bc25a3b6ba9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:56:00 np0005625203.localdomain podman[318363]: 2026-02-20 09:56:00.672726268 +0000 UTC m=+0.180567558 container init c1c61d51368f6f394268ceadc1190c8d3499f74f1884e36fabab7ca0f5eb2272 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 20 09:56:00 np0005625203.localdomain podman[318363]: 2026-02-20 09:56:00.68573922 +0000 UTC m=+0.193580520 container start c1c61d51368f6f394268ceadc1190c8d3499f74f1884e36fabab7ca0f5eb2272 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:56:00 np0005625203.localdomain dnsmasq[318381]: started, version 2.85 cachesize 150
Feb 20 09:56:00 np0005625203.localdomain dnsmasq[318381]: DNS service limited to local subnets
Feb 20 09:56:00 np0005625203.localdomain dnsmasq[318381]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:56:00 np0005625203.localdomain dnsmasq[318381]: warning: no upstream servers configured
Feb 20 09:56:00 np0005625203.localdomain dnsmasq-dhcp[318381]: DHCP, static leases only on 10.100.0.0, lease time 1d
Feb 20 09:56:00 np0005625203.localdomain dnsmasq[318381]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 0 addresses
Feb 20 09:56:00 np0005625203.localdomain dnsmasq-dhcp[318381]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/host
Feb 20 09:56:00 np0005625203.localdomain dnsmasq-dhcp[318381]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/opts
Feb 20 09:56:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:00.698 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:56:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:00.912 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:00 np0005625203.localdomain ceph-mon[296066]: pgmap v292: 177 pgs: 177 active+clean; 177 MiB data, 855 MiB used, 41 GiB / 42 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 58 op/s
Feb 20 09:56:00 np0005625203.localdomain ceph-mon[296066]: mgrmap e49: np0005625202.arwxwo(active, since 7m), standbys: np0005625203.lonygy, np0005625204.exgrzx
Feb 20 09:56:00 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e1c014f1-0af0-4079-b9b5-c123fb6102a7", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 09:56:01 np0005625203.localdomain dnsmasq[318381]: exiting on receipt of SIGTERM
Feb 20 09:56:01 np0005625203.localdomain podman[318399]: 2026-02-20 09:56:01.06390503 +0000 UTC m=+0.045310323 container kill c1c61d51368f6f394268ceadc1190c8d3499f74f1884e36fabab7ca0f5eb2272 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:56:01 np0005625203.localdomain systemd[1]: libpod-c1c61d51368f6f394268ceadc1190c8d3499f74f1884e36fabab7ca0f5eb2272.scope: Deactivated successfully.
Feb 20 09:56:01 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:01.075 262775 INFO neutron.agent.dhcp.agent [None req-fad90d44-821b-45af-88dc-ed2242440496 - - - - - -] DHCP configuration for ports {'dfde524f-617a-4f02-912a-fdbeb9d1d61e', 'cebd560e-7047-4cc1-9642-f5b7ec377d58'} is completed
Feb 20 09:56:01 np0005625203.localdomain podman[318413]: 2026-02-20 09:56:01.122759431 +0000 UTC m=+0.041085072 container died c1c61d51368f6f394268ceadc1190c8d3499f74f1884e36fabab7ca0f5eb2272 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:56:01 np0005625203.localdomain podman[318413]: 2026-02-20 09:56:01.167149034 +0000 UTC m=+0.085474635 container remove c1c61d51368f6f394268ceadc1190c8d3499f74f1884e36fabab7ca0f5eb2272 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:56:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:01.181 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:01 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:01Z|00271|binding|INFO|Releasing lport dfde524f-617a-4f02-912a-fdbeb9d1d61e from this chassis (sb_readonly=0)
Feb 20 09:56:01 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:01Z|00272|binding|INFO|Setting lport dfde524f-617a-4f02-912a-fdbeb9d1d61e down in Southbound
Feb 20 09:56:01 np0005625203.localdomain kernel: device tapdfde524f-61 left promiscuous mode
Feb 20 09:56:01 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:01.189 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8::f816:3eff:feff:f8fb/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=dfde524f-617a-4f02-912a-fdbeb9d1d61e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:56:01 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:01.191 161112 INFO neutron.agent.ovn.metadata.agent [-] Port dfde524f-617a-4f02-912a-fdbeb9d1d61e in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 unbound from our chassis
Feb 20 09:56:01 np0005625203.localdomain auditd[725]: Audit daemon rotating log files
Feb 20 09:56:01 np0005625203.localdomain systemd[1]: libpod-conmon-c1c61d51368f6f394268ceadc1190c8d3499f74f1884e36fabab7ca0f5eb2272.scope: Deactivated successfully.
Feb 20 09:56:01 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:01.195 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 811e2462-6872-485d-9c09-d2dd9cb25273, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:56:01 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:01.196 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[e1113410-7d76-41b9-a1b1-6c117847b775]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:56:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:01.200 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:01.342 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:56:01 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-2a8a6860df01a6944303fa9a78bfa3aaeecb1b3ae2933f6203914bc25a3b6ba9-merged.mount: Deactivated successfully.
Feb 20 09:56:01 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c1c61d51368f6f394268ceadc1190c8d3499f74f1884e36fabab7ca0f5eb2272-userdata-shm.mount: Deactivated successfully.
Feb 20 09:56:01 np0005625203.localdomain systemd[1]: run-netns-qdhcp\x2d811e2462\x2d6872\x2d485d\x2d9c09\x2dd2dd9cb25273.mount: Deactivated successfully.
Feb 20 09:56:01 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:56:01.579 2 INFO neutron.agent.securitygroups_rpc [None req-a4e037d0-314c-4de3-aad9-537a96cc703d 469c956856df4f7fa0303f662df3cdef 62dd1d7e0cf547678612304aba2895e2 - - default default] Security group member updated ['72ed92b6-af24-4274-854b-a52220405faf']
Feb 20 09:56:02 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 09:56:02 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:56:02 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:02.124 262775 INFO neutron.agent.linux.ip_lib [None req-e9d3e2da-9bb1-4f60-babd-3f5539e18fb7 - - - - - -] Device tap1aa91995-3c cannot be used as it has no MAC address
Feb 20 09:56:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:02.157 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:02 np0005625203.localdomain kernel: device tap1aa91995-3c entered promiscuous mode
Feb 20 09:56:02 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581362.1692] manager: (tap1aa91995-3c): new Generic device (/org/freedesktop/NetworkManager/Devices/54)
Feb 20 09:56:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:02.171 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:02 np0005625203.localdomain systemd-udevd[318446]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 09:56:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:02.180 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:02 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap1aa91995-3c: No such device
Feb 20 09:56:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:02.204 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:02 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap1aa91995-3c: No such device
Feb 20 09:56:02 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap1aa91995-3c: No such device
Feb 20 09:56:02 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap1aa91995-3c: No such device
Feb 20 09:56:02 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap1aa91995-3c: No such device
Feb 20 09:56:02 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap1aa91995-3c: No such device
Feb 20 09:56:02 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap1aa91995-3c: No such device
Feb 20 09:56:02 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap1aa91995-3c: No such device
Feb 20 09:56:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:02.253 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:02.288 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:02.340 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:56:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:02.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:56:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:02.341 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 20 09:56:02 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:02.879 262775 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:56:02 np0005625203.localdomain ceph-mon[296066]: pgmap v293: 177 pgs: 177 active+clean; 146 MiB data, 806 MiB used, 41 GiB / 42 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 60 op/s
Feb 20 09:56:02 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e1c014f1-0af0-4079-b9b5-c123fb6102a7", "format": "json"}]: dispatch
Feb 20 09:56:02 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/1014543227' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:56:02 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/1014543227' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:56:02 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:56:03 np0005625203.localdomain podman[318517]: 
Feb 20 09:56:03 np0005625203.localdomain podman[318517]: 2026-02-20 09:56:03.202395131 +0000 UTC m=+0.085299111 container create 94971a8ad1c283cfca4952197816b6215838c13463abfaa96633df1a099b7719 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Feb 20 09:56:03 np0005625203.localdomain systemd[1]: Started libpod-conmon-94971a8ad1c283cfca4952197816b6215838c13463abfaa96633df1a099b7719.scope.
Feb 20 09:56:03 np0005625203.localdomain podman[318517]: 2026-02-20 09:56:03.162484835 +0000 UTC m=+0.045388815 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:56:03 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:56:03 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abc929f29fcca66f59cba1b95824675c392046872e0aff46ce449b4629684ac5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:56:03 np0005625203.localdomain podman[318517]: 2026-02-20 09:56:03.282735476 +0000 UTC m=+0.165639446 container init 94971a8ad1c283cfca4952197816b6215838c13463abfaa96633df1a099b7719 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:56:03 np0005625203.localdomain podman[318517]: 2026-02-20 09:56:03.291672432 +0000 UTC m=+0.174576402 container start 94971a8ad1c283cfca4952197816b6215838c13463abfaa96633df1a099b7719 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Feb 20 09:56:03 np0005625203.localdomain dnsmasq[318535]: started, version 2.85 cachesize 150
Feb 20 09:56:03 np0005625203.localdomain dnsmasq[318535]: DNS service limited to local subnets
Feb 20 09:56:03 np0005625203.localdomain dnsmasq[318535]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:56:03 np0005625203.localdomain dnsmasq[318535]: warning: no upstream servers configured
Feb 20 09:56:03 np0005625203.localdomain dnsmasq-dhcp[318535]: DHCP, static leases only on 10.100.0.0, lease time 1d
Feb 20 09:56:03 np0005625203.localdomain dnsmasq[318535]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 0 addresses
Feb 20 09:56:03 np0005625203.localdomain dnsmasq-dhcp[318535]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/host
Feb 20 09:56:03 np0005625203.localdomain dnsmasq-dhcp[318535]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/opts
Feb 20 09:56:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:03.342 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:56:03 np0005625203.localdomain kernel: device tap1aa91995-3c left promiscuous mode
Feb 20 09:56:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:03.435 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:03.449 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:03 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:03.483 262775 INFO neutron.agent.dhcp.agent [None req-04ed0f53-134f-409c-9833-2ceb0c530bd5 - - - - - -] DHCP configuration for ports {'cebd560e-7047-4cc1-9642-f5b7ec377d58'} is completed
Feb 20 09:56:03 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:03.753 161112 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1e4d60e6-0be0-4143-b488-1b391fbc71ef, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 20 09:56:03 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:56:03.901 2 INFO neutron.agent.securitygroups_rpc [None req-7d0bf5e0-9e1d-414c-8190-249e450828ca 809f6e2027f9442d8bd2b94b11475b17 3a09b3407c3143c8ae0948ccb18c1e61 - - default default] Security group member updated ['b2e5856c-f1df-4bbc-8f9c-41698aa249c6']
Feb 20 09:56:03 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "e1c014f1-0af0-4079-b9b5-c123fb6102a7", "new_size": 2147483648, "format": "json"}]: dispatch
Feb 20 09:56:04 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:04.198 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:4e:b0 10.100.0.3'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ovnmeta-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=cebd560e-7047-4cc1-9642-f5b7ec377d58) old=Port_Binding(mac=['fa:16:3e:f0:4e:b0 10.100.0.2 2001:db8::f816:3eff:fef0:4eb0'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fef0:4eb0/64', 'neutron:device_id': 'ovnmeta-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:56:04 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:04.200 161112 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port cebd560e-7047-4cc1-9642-f5b7ec377d58 in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 updated
Feb 20 09:56:04 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:04.204 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 811e2462-6872-485d-9c09-d2dd9cb25273, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:56:04 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:04.205 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[f844b048-17e1-498f-a3cc-7b723df53225]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:56:04 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:56:04 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:04.545 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:05 np0005625203.localdomain ceph-mon[296066]: pgmap v294: 177 pgs: 177 active+clean; 146 MiB data, 806 MiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Feb 20 09:56:05 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/1963234622' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:56:05 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/462116900' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:56:05 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/462116900' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:56:05 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:05.337 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:56:05 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:05.340 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:56:05 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:05.341 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 20 09:56:05 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:05.341 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 20 09:56:05 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:05.365 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 20 09:56:05 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:05.366 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:56:05 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:05.915 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:06 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/542996593' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:56:06 np0005625203.localdomain dnsmasq[318079]: read /var/lib/neutron/dhcp/bf4780b3-f804-48fa-9310-009c8fa52c1e/addn_hosts - 0 addresses
Feb 20 09:56:06 np0005625203.localdomain dnsmasq-dhcp[318079]: read /var/lib/neutron/dhcp/bf4780b3-f804-48fa-9310-009c8fa52c1e/host
Feb 20 09:56:06 np0005625203.localdomain systemd[1]: tmp-crun.67n0w5.mount: Deactivated successfully.
Feb 20 09:56:06 np0005625203.localdomain podman[318554]: 2026-02-20 09:56:06.309378325 +0000 UTC m=+0.061668679 container kill 2da2134a70b73955a1c3d97c4391ba667d14263e44172ec658cef6a8d908a4d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bf4780b3-f804-48fa-9310-009c8fa52c1e, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:56:06 np0005625203.localdomain dnsmasq-dhcp[318079]: read /var/lib/neutron/dhcp/bf4780b3-f804-48fa-9310-009c8fa52c1e/opts
Feb 20 09:56:06 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:56:06 np0005625203.localdomain podman[318568]: 2026-02-20 09:56:06.423801365 +0000 UTC m=+0.084586749 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb 20 09:56:06 np0005625203.localdomain podman[318568]: 2026-02-20 09:56:06.481778918 +0000 UTC m=+0.142564252 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 20 09:56:06 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:06Z|00273|binding|INFO|Releasing lport 11b269d4-7522-4683-9ab1-80891c8abc8d from this chassis (sb_readonly=0)
Feb 20 09:56:06 np0005625203.localdomain kernel: device tap11b269d4-75 left promiscuous mode
Feb 20 09:56:06 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:06Z|00274|binding|INFO|Setting lport 11b269d4-7522-4683-9ab1-80891c8abc8d down in Southbound
Feb 20 09:56:06 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:06.482 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:06 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:56:06 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:06.497 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:06 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:06.501 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:3::2/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-bf4780b3-f804-48fa-9310-009c8fa52c1e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bf4780b3-f804-48fa-9310-009c8fa52c1e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c44e13adebb4610b7c0cd2fdc62a5b7', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625203.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b7e4e0ac-f792-4e51-9629-bc5f266735a6, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=11b269d4-7522-4683-9ab1-80891c8abc8d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:56:06 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:06.503 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 11b269d4-7522-4683-9ab1-80891c8abc8d in datapath bf4780b3-f804-48fa-9310-009c8fa52c1e unbound from our chassis
Feb 20 09:56:06 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:06.505 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network bf4780b3-f804-48fa-9310-009c8fa52c1e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:56:06 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:06.506 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[462ad1da-d5fa-4fd1-87b9-a867a4bbd1bb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:56:07 np0005625203.localdomain ceph-mon[296066]: pgmap v295: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 43 KiB/s rd, 1.8 MiB/s wr, 70 op/s
Feb 20 09:56:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:56:07 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 09:56:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:56:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:56:07 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 09:56:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:56:07 np0005625203.localdomain systemd[1]: tmp-crun.kfXxkI.mount: Deactivated successfully.
Feb 20 09:56:07 np0005625203.localdomain podman[318619]: 2026-02-20 09:56:07.506632025 +0000 UTC m=+0.063535177 container kill 2da2134a70b73955a1c3d97c4391ba667d14263e44172ec658cef6a8d908a4d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bf4780b3-f804-48fa-9310-009c8fa52c1e, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:56:07 np0005625203.localdomain dnsmasq[318079]: exiting on receipt of SIGTERM
Feb 20 09:56:07 np0005625203.localdomain systemd[1]: libpod-2da2134a70b73955a1c3d97c4391ba667d14263e44172ec658cef6a8d908a4d3.scope: Deactivated successfully.
Feb 20 09:56:07 np0005625203.localdomain podman[318632]: 2026-02-20 09:56:07.585789644 +0000 UTC m=+0.060165492 container died 2da2134a70b73955a1c3d97c4391ba667d14263e44172ec658cef6a8d908a4d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bf4780b3-f804-48fa-9310-009c8fa52c1e, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Feb 20 09:56:07 np0005625203.localdomain podman[318632]: 2026-02-20 09:56:07.620193329 +0000 UTC m=+0.094569147 container cleanup 2da2134a70b73955a1c3d97c4391ba667d14263e44172ec658cef6a8d908a4d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bf4780b3-f804-48fa-9310-009c8fa52c1e, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:56:07 np0005625203.localdomain systemd[1]: libpod-conmon-2da2134a70b73955a1c3d97c4391ba667d14263e44172ec658cef6a8d908a4d3.scope: Deactivated successfully.
Feb 20 09:56:07 np0005625203.localdomain podman[318633]: 2026-02-20 09:56:07.663914881 +0000 UTC m=+0.134375729 container remove 2da2134a70b73955a1c3d97c4391ba667d14263e44172ec658cef6a8d908a4d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-bf4780b3-f804-48fa-9310-009c8fa52c1e, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 20 09:56:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:07.670 161112 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:56:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:07.670 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:56:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:07.671 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:56:07 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:07.876 262775 INFO neutron.agent.dhcp.agent [None req-694787b9-5cf8-45a3-9eee-d87b029546b1 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:56:07 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:56:07.894 2 INFO neutron.agent.securitygroups_rpc [None req-8965acb2-2b16-4d89-a227-154eee5fe38f 469c956856df4f7fa0303f662df3cdef 62dd1d7e0cf547678612304aba2895e2 - - default default] Security group member updated ['72ed92b6-af24-4274-854b-a52220405faf']
Feb 20 09:56:08 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e1c014f1-0af0-4079-b9b5-c123fb6102a7", "format": "json"}]: dispatch
Feb 20 09:56:08 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e1c014f1-0af0-4079-b9b5-c123fb6102a7", "force": true, "format": "json"}]: dispatch
Feb 20 09:56:08 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/766486190' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:56:08 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/766486190' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:56:08 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:08.141 262775 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:56:08 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:08.377 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:4e:b0 10.100.0.3 2001:db8::f816:3eff:fef0:4eb0'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8::f816:3eff:fef0:4eb0/64', 'neutron:device_id': 'ovnmeta-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=cebd560e-7047-4cc1-9642-f5b7ec377d58) old=Port_Binding(mac=['fa:16:3e:f0:4e:b0 10.100.0.3'], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ovnmeta-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:56:08 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:08.379 161112 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port cebd560e-7047-4cc1-9642-f5b7ec377d58 in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 updated
Feb 20 09:56:08 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:08.383 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 811e2462-6872-485d-9c09-d2dd9cb25273, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:56:08 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:08.383 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[4ee9b731-cf3b-4744-b9ec-4ab6554ca4bb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:56:08 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-e6c086024eef014804d116e25640b88f867a9d1a6ac7654915f84b2720bf66d3-merged.mount: Deactivated successfully.
Feb 20 09:56:08 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2da2134a70b73955a1c3d97c4391ba667d14263e44172ec658cef6a8d908a4d3-userdata-shm.mount: Deactivated successfully.
Feb 20 09:56:08 np0005625203.localdomain systemd[1]: run-netns-qdhcp\x2dbf4780b3\x2df804\x2d48fa\x2d9310\x2d009c8fa52c1e.mount: Deactivated successfully.
Feb 20 09:56:08 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:08.593 262775 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:56:08 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:56:08 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:56:08.683 2 INFO neutron.agent.securitygroups_rpc [None req-d51c801f-c66b-4697-8723-78081587d201 469c956856df4f7fa0303f662df3cdef 62dd1d7e0cf547678612304aba2895e2 - - default default] Security group member updated ['72ed92b6-af24-4274-854b-a52220405faf']
Feb 20 09:56:08 np0005625203.localdomain systemd[1]: tmp-crun.EwwaD5.mount: Deactivated successfully.
Feb 20 09:56:08 np0005625203.localdomain podman[318659]: 2026-02-20 09:56:08.765849243 +0000 UTC m=+0.087264162 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, name=ubi9/ubi-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-05T04:57:10Z, release=1770267347, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, config_id=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, build-date=2026-02-05T04:57:10Z, architecture=x86_64, managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., version=9.7)
Feb 20 09:56:08 np0005625203.localdomain podman[318659]: 2026-02-20 09:56:08.784400976 +0000 UTC m=+0.105815875 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, version=9.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, io.openshift.tags=minimal rhel9, build-date=2026-02-05T04:57:10Z, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_id=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, release=1770267347, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible)
Feb 20 09:56:08 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:56:08 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:56:08 np0005625203.localdomain dnsmasq[318535]: exiting on receipt of SIGTERM
Feb 20 09:56:08 np0005625203.localdomain podman[318695]: 2026-02-20 09:56:08.848762897 +0000 UTC m=+0.066241869 container kill 94971a8ad1c283cfca4952197816b6215838c13463abfaa96633df1a099b7719 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 20 09:56:08 np0005625203.localdomain systemd[1]: libpod-94971a8ad1c283cfca4952197816b6215838c13463abfaa96633df1a099b7719.scope: Deactivated successfully.
Feb 20 09:56:08 np0005625203.localdomain podman[318703]: 2026-02-20 09:56:08.90410636 +0000 UTC m=+0.092073000 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2)
Feb 20 09:56:08 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:08.917 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:56:08 np0005625203.localdomain podman[318720]: 2026-02-20 09:56:08.937104241 +0000 UTC m=+0.066920842 container died 94971a8ad1c283cfca4952197816b6215838c13463abfaa96633df1a099b7719 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 20 09:56:08 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:08.950 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:08 np0005625203.localdomain podman[318720]: 2026-02-20 09:56:08.986779317 +0000 UTC m=+0.116595898 container remove 94971a8ad1c283cfca4952197816b6215838c13463abfaa96633df1a099b7719 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3)
Feb 20 09:56:08 np0005625203.localdomain podman[318703]: 2026-02-20 09:56:08.991140352 +0000 UTC m=+0.179107072 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Feb 20 09:56:09 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:56:09 np0005625203.localdomain systemd[1]: libpod-conmon-94971a8ad1c283cfca4952197816b6215838c13463abfaa96633df1a099b7719.scope: Deactivated successfully.
Feb 20 09:56:09 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:09.068 262775 INFO neutron.agent.linux.ip_lib [None req-482cc349-d187-47bf-ba79-72c4fe781e76 - - - - - -] Device tap1aa91995-3c cannot be used as it has no MAC address
Feb 20 09:56:09 np0005625203.localdomain ceph-mon[296066]: pgmap v296: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 25 KiB/s wr, 34 op/s
Feb 20 09:56:09 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:09.090 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:09 np0005625203.localdomain kernel: device tap1aa91995-3c entered promiscuous mode
Feb 20 09:56:09 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:09Z|00275|binding|INFO|Claiming lport 1aa91995-3c2d-4cd2-9686-c296dd8b2b89 for this chassis.
Feb 20 09:56:09 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:09.095 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:09 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:09Z|00276|binding|INFO|1aa91995-3c2d-4cd2-9686-c296dd8b2b89: Claiming unknown
Feb 20 09:56:09 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581369.0974] manager: (tap1aa91995-3c): new Generic device (/org/freedesktop/NetworkManager/Devices/55)
Feb 20 09:56:09 np0005625203.localdomain systemd-udevd[318758]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 09:56:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:09.105 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fec0:1aa2/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': ''}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=1aa91995-3c2d-4cd2-9686-c296dd8b2b89) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:56:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:09.106 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 1aa91995-3c2d-4cd2-9686-c296dd8b2b89 in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 bound to our chassis
Feb 20 09:56:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:09.109 161112 DEBUG neutron.agent.ovn.metadata.agent [-] Port 062439ca-c35a-47ea-84a7-b29d0e3b4dbc IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Feb 20 09:56:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:09.109 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 811e2462-6872-485d-9c09-d2dd9cb25273, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:56:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:09.111 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[fedd7335-83bc-4d26-a41e-c16d2b799e20]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:56:09 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap1aa91995-3c: No such device
Feb 20 09:56:09 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:09Z|00277|binding|INFO|Setting lport 1aa91995-3c2d-4cd2-9686-c296dd8b2b89 ovn-installed in OVS
Feb 20 09:56:09 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:09Z|00278|binding|INFO|Setting lport 1aa91995-3c2d-4cd2-9686-c296dd8b2b89 up in Southbound
Feb 20 09:56:09 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap1aa91995-3c: No such device
Feb 20 09:56:09 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:09.132 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:09 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap1aa91995-3c: No such device
Feb 20 09:56:09 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap1aa91995-3c: No such device
Feb 20 09:56:09 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap1aa91995-3c: No such device
Feb 20 09:56:09 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap1aa91995-3c: No such device
Feb 20 09:56:09 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap1aa91995-3c: No such device
Feb 20 09:56:09 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap1aa91995-3c: No such device
Feb 20 09:56:09 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:09.171 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:09 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:09.193 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:09 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:09.291 262775 INFO neutron.agent.linux.ip_lib [None req-9accc529-9fac-4c91-b14b-520c5d2f2746 - - - - - -] Device tapce08eb39-e0 cannot be used as it has no MAC address
Feb 20 09:56:09 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:09.323 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:09 np0005625203.localdomain kernel: device tapce08eb39-e0 entered promiscuous mode
Feb 20 09:56:09 np0005625203.localdomain systemd-udevd[318760]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 09:56:09 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581369.3287] manager: (tapce08eb39-e0): new Generic device (/org/freedesktop/NetworkManager/Devices/56)
Feb 20 09:56:09 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:09.331 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:09 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:09Z|00279|binding|INFO|Claiming lport ce08eb39-e0b4-4e5d-a2ce-d53a15770744 for this chassis.
Feb 20 09:56:09 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:09Z|00280|binding|INFO|ce08eb39-e0b4-4e5d-a2ce-d53a15770744: Claiming unknown
Feb 20 09:56:09 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:56:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:09.347 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-519ed234-0c28-4a63-b6ed-1122a8d9dfc9', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-519ed234-0c28-4a63-b6ed-1122a8d9dfc9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a09b3407c3143c8ae0948ccb18c1e61', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d5ca6d31-d3f7-4561-ade0-690f710982c4, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=ce08eb39-e0b4-4e5d-a2ce-d53a15770744) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:56:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:09.349 161112 INFO neutron.agent.ovn.metadata.agent [-] Port ce08eb39-e0b4-4e5d-a2ce-d53a15770744 in datapath 519ed234-0c28-4a63-b6ed-1122a8d9dfc9 bound to our chassis
Feb 20 09:56:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:09.351 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 519ed234-0c28-4a63-b6ed-1122a8d9dfc9 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:56:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:09.352 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[63929da7-0738-449c-9ccf-32eed571a6e2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:56:09 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:09.378 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:09 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:09Z|00281|binding|INFO|Setting lport ce08eb39-e0b4-4e5d-a2ce-d53a15770744 ovn-installed in OVS
Feb 20 09:56:09 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:09Z|00282|binding|INFO|Setting lport ce08eb39-e0b4-4e5d-a2ce-d53a15770744 up in Southbound
Feb 20 09:56:09 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:09.383 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:09 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:09.473 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:09 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:56:09.497 2 INFO neutron.agent.securitygroups_rpc [None req-166aeacb-5366-40db-a13d-35c7cc5a7a14 f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']
Feb 20 09:56:09 np0005625203.localdomain systemd[1]: tmp-crun.3tuHQZ.mount: Deactivated successfully.
Feb 20 09:56:09 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-abc929f29fcca66f59cba1b95824675c392046872e0aff46ce449b4629684ac5-merged.mount: Deactivated successfully.
Feb 20 09:56:09 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-94971a8ad1c283cfca4952197816b6215838c13463abfaa96633df1a099b7719-userdata-shm.mount: Deactivated successfully.
Feb 20 09:56:09 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:09.506 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:09 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:09.546 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:09 np0005625203.localdomain dnsmasq[317453]: read /var/lib/neutron/dhcp/18837b45-5571-4a62-8087-50344388e0e4/addn_hosts - 0 addresses
Feb 20 09:56:09 np0005625203.localdomain podman[318846]: 2026-02-20 09:56:09.803978099 +0000 UTC m=+0.059397628 container kill 7cbc1db28c7207c0a7c50287198f63ae70605a27eaf211f2c020e3c2b839fc24 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-18837b45-5571-4a62-8087-50344388e0e4, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Feb 20 09:56:09 np0005625203.localdomain dnsmasq-dhcp[317453]: read /var/lib/neutron/dhcp/18837b45-5571-4a62-8087-50344388e0e4/host
Feb 20 09:56:09 np0005625203.localdomain dnsmasq-dhcp[317453]: read /var/lib/neutron/dhcp/18837b45-5571-4a62-8087-50344388e0e4/opts
Feb 20 09:56:10 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 09:56:10 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:56:10 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:10.049 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:10 np0005625203.localdomain kernel: device tap3b5cf5d8-58 left promiscuous mode
Feb 20 09:56:10 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:10Z|00283|binding|INFO|Releasing lport 3b5cf5d8-5897-42f6-b615-9991a21bf32c from this chassis (sb_readonly=0)
Feb 20 09:56:10 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:10Z|00284|binding|INFO|Setting lport 3b5cf5d8-5897-42f6-b615-9991a21bf32c down in Southbound
Feb 20 09:56:10 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:10.070 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:10 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:56:10 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:10.133 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:2::2/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-18837b45-5571-4a62-8087-50344388e0e4', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18837b45-5571-4a62-8087-50344388e0e4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c44e13adebb4610b7c0cd2fdc62a5b7', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625203.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd48c3a5-5b40-4f7a-8b37-6049cc6b1ddd, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=3b5cf5d8-5897-42f6-b615-9991a21bf32c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:56:10 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:10.134 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 3b5cf5d8-5897-42f6-b615-9991a21bf32c in datapath 18837b45-5571-4a62-8087-50344388e0e4 unbound from our chassis
Feb 20 09:56:10 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:10.135 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 18837b45-5571-4a62-8087-50344388e0e4 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:56:10 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:10.136 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[a6dbc1a4-1134-4662-bdf7-136019962941]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:56:10 np0005625203.localdomain podman[318910]: 
Feb 20 09:56:10 np0005625203.localdomain podman[318910]: 2026-02-20 09:56:10.231655481 +0000 UTC m=+0.079966755 container create b044ad2a24bafa75368bde8b1d2b20cee2ee4dd2dc9f035aec5a0b9cf5efc680 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:56:10 np0005625203.localdomain systemd[1]: Started libpod-conmon-b044ad2a24bafa75368bde8b1d2b20cee2ee4dd2dc9f035aec5a0b9cf5efc680.scope.
Feb 20 09:56:10 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:56:10 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d8b6bd929da966a51a1b906de735834dadc018de3ef44a203f4b4cbc7ac43b4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:56:10 np0005625203.localdomain podman[318910]: 2026-02-20 09:56:10.290783441 +0000 UTC m=+0.139094715 container init b044ad2a24bafa75368bde8b1d2b20cee2ee4dd2dc9f035aec5a0b9cf5efc680 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 20 09:56:10 np0005625203.localdomain podman[318910]: 2026-02-20 09:56:10.195363838 +0000 UTC m=+0.043675102 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:56:10 np0005625203.localdomain podman[318910]: 2026-02-20 09:56:10.299998835 +0000 UTC m=+0.148310109 container start b044ad2a24bafa75368bde8b1d2b20cee2ee4dd2dc9f035aec5a0b9cf5efc680 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:56:10 np0005625203.localdomain dnsmasq[318933]: started, version 2.85 cachesize 150
Feb 20 09:56:10 np0005625203.localdomain dnsmasq[318933]: DNS service limited to local subnets
Feb 20 09:56:10 np0005625203.localdomain dnsmasq[318933]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:56:10 np0005625203.localdomain dnsmasq[318933]: warning: no upstream servers configured
Feb 20 09:56:10 np0005625203.localdomain dnsmasq-dhcp[318933]: DHCP, static leases only on 10.100.0.0, lease time 1d
Feb 20 09:56:10 np0005625203.localdomain dnsmasq[318933]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 0 addresses
Feb 20 09:56:10 np0005625203.localdomain dnsmasq-dhcp[318933]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/host
Feb 20 09:56:10 np0005625203.localdomain dnsmasq-dhcp[318933]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/opts
Feb 20 09:56:10 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:10.346 262775 INFO neutron.agent.dhcp.agent [None req-cad95119-f5e2-48cf-834a-edfce8abd2af - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:56:09Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4c99910>, <neutron.agent.linux.dhcp.DictModel object at 0x7f7da4c99610>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4c998b0>, <neutron.agent.linux.dhcp.DictModel object at 0x7f7da4c997f0>], id=0b805dca-9889-44d4-8581-9b6f45ffa646, ip_allocation=immediate, mac_address=fa:16:3e:0f:9f:b8, name=tempest-NetworksTestDHCPv6-2086721368, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:54:44Z, description=, dns_domain=, id=811e2462-6872-485d-9c09-d2dd9cb25273, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-610089291, port_security_enabled=True, project_id=cb36e48ce4264babb412d413a8bf7b9f, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=47373, qos_policy_id=None, revision_number=31, router:external=False, shared=False, standard_attr_id=1864, status=ACTIVE, subnets=['5b3ed220-8f5e-488f-9b81-0d64ca505279', '9bcd7367-7a41-4a12-ba7b-dcf4b5b9b014'], tags=[], tenant_id=cb36e48ce4264babb412d413a8bf7b9f, updated_at=2026-02-20T09:56:05Z, vlan_transparent=None, network_id=811e2462-6872-485d-9c09-d2dd9cb25273, port_security_enabled=True, project_id=cb36e48ce4264babb412d413a8bf7b9f, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['a85f25c3-88e2-4d71-a8d4-72a266c1246c'], standard_attr_id=2252, status=DOWN, tags=[], tenant_id=cb36e48ce4264babb412d413a8bf7b9f, updated_at=2026-02-20T09:56:09Z on network 811e2462-6872-485d-9c09-d2dd9cb25273
Feb 20 09:56:10 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:10.354 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:56:10 np0005625203.localdomain podman[318951]: 
Feb 20 09:56:10 np0005625203.localdomain podman[318951]: 2026-02-20 09:56:10.481173181 +0000 UTC m=+0.082836844 container create 0635da9aa2a57c7b9a458bedc07bd83fef14c3dc8e9205224adcb592a3d87af8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-519ed234-0c28-4a63-b6ed-1122a8d9dfc9, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127)
Feb 20 09:56:10 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:10.485 262775 INFO neutron.agent.dhcp.agent [None req-8a2c23c1-317a-4a5a-989a-91e8a22e9ae7 - - - - - -] DHCP configuration for ports {'1aa91995-3c2d-4cd2-9686-c296dd8b2b89', 'cebd560e-7047-4cc1-9642-f5b7ec377d58'} is completed
Feb 20 09:56:10 np0005625203.localdomain systemd[1]: Started libpod-conmon-0635da9aa2a57c7b9a458bedc07bd83fef14c3dc8e9205224adcb592a3d87af8.scope.
Feb 20 09:56:10 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:56:10 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/473a081cd6523a73e47d3c5f73a0b8216a22047c30af45a9732405f72eb66fe1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:56:10 np0005625203.localdomain podman[318951]: 2026-02-20 09:56:10.443436333 +0000 UTC m=+0.045100056 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:56:10 np0005625203.localdomain podman[318951]: 2026-02-20 09:56:10.547229224 +0000 UTC m=+0.148892887 container init 0635da9aa2a57c7b9a458bedc07bd83fef14c3dc8e9205224adcb592a3d87af8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-519ed234-0c28-4a63-b6ed-1122a8d9dfc9, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127)
Feb 20 09:56:10 np0005625203.localdomain podman[318951]: 2026-02-20 09:56:10.555349705 +0000 UTC m=+0.157013368 container start 0635da9aa2a57c7b9a458bedc07bd83fef14c3dc8e9205224adcb592a3d87af8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-519ed234-0c28-4a63-b6ed-1122a8d9dfc9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, io.buildah.version=1.41.3)
Feb 20 09:56:10 np0005625203.localdomain dnsmasq[318989]: started, version 2.85 cachesize 150
Feb 20 09:56:10 np0005625203.localdomain dnsmasq[318989]: DNS service limited to local subnets
Feb 20 09:56:10 np0005625203.localdomain dnsmasq[318989]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:56:10 np0005625203.localdomain dnsmasq[318989]: warning: no upstream servers configured
Feb 20 09:56:10 np0005625203.localdomain dnsmasq-dhcp[318989]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Feb 20 09:56:10 np0005625203.localdomain dnsmasq[318989]: read /var/lib/neutron/dhcp/519ed234-0c28-4a63-b6ed-1122a8d9dfc9/addn_hosts - 0 addresses
Feb 20 09:56:10 np0005625203.localdomain dnsmasq-dhcp[318989]: read /var/lib/neutron/dhcp/519ed234-0c28-4a63-b6ed-1122a8d9dfc9/host
Feb 20 09:56:10 np0005625203.localdomain dnsmasq-dhcp[318989]: read /var/lib/neutron/dhcp/519ed234-0c28-4a63-b6ed-1122a8d9dfc9/opts
Feb 20 09:56:10 np0005625203.localdomain podman[318981]: 2026-02-20 09:56:10.606666373 +0000 UTC m=+0.065497868 container kill b044ad2a24bafa75368bde8b1d2b20cee2ee4dd2dc9f035aec5a0b9cf5efc680 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.schema-version=1.0)
Feb 20 09:56:10 np0005625203.localdomain dnsmasq[318933]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 2 addresses
Feb 20 09:56:10 np0005625203.localdomain dnsmasq-dhcp[318933]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/host
Feb 20 09:56:10 np0005625203.localdomain dnsmasq-dhcp[318933]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/opts
Feb 20 09:56:10 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:10.772 262775 INFO neutron.agent.dhcp.agent [None req-7ab60d9f-21d1-40aa-b37d-1cf6bb676845 - - - - - -] DHCP configuration for ports {'8b912c3c-4bb6-4ee1-afd5-eacd5c98ea0f', '4ae9bb75-db03-4abb-b744-8dfda71e8f04'} is completed
Feb 20 09:56:10 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:10.905 262775 INFO neutron.agent.dhcp.agent [None req-8f829f25-d074-4695-a277-17f5f27152ce - - - - - -] DHCP configuration for ports {'0b805dca-9889-44d4-8581-9b6f45ffa646'} is completed
Feb 20 09:56:10 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:10.957 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:11 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:56:11.022 2 INFO neutron.agent.securitygroups_rpc [None req-7e54c9a7-f5f5-46c1-ae1b-688f8acab697 809f6e2027f9442d8bd2b94b11475b17 3a09b3407c3143c8ae0948ccb18c1e61 - - default default] Security group member updated ['46f15231-c0dd-46d4-9abc-adba5985e75b']
Feb 20 09:56:11 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:11.099 262775 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:56:10Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4c50310>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4c50160>], id=9e6826ae-e6f2-4808-bb4f-cc4bd2ae5a89, ip_allocation=immediate, mac_address=fa:16:3e:5c:a5:f6, name=tempest-PortsIpV6TestJSON-1030810777, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:55:04Z, description=, dns_domain=, id=519ed234-0c28-4a63-b6ed-1122a8d9dfc9, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsIpV6TestJSON-test-network-1956785713, port_security_enabled=True, project_id=3a09b3407c3143c8ae0948ccb18c1e61, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=21486, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=1951, status=ACTIVE, subnets=['547d5635-bd3a-4793-a787-df34969103b5'], tags=[], tenant_id=3a09b3407c3143c8ae0948ccb18c1e61, updated_at=2026-02-20T09:56:08Z, vlan_transparent=None, network_id=519ed234-0c28-4a63-b6ed-1122a8d9dfc9, port_security_enabled=True, project_id=3a09b3407c3143c8ae0948ccb18c1e61, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['46f15231-c0dd-46d4-9abc-adba5985e75b'], standard_attr_id=2263, status=DOWN, tags=[], tenant_id=3a09b3407c3143c8ae0948ccb18c1e61, updated_at=2026-02-20T09:56:10Z on network 519ed234-0c28-4a63-b6ed-1122a8d9dfc9
Feb 20 09:56:11 np0005625203.localdomain ceph-mon[296066]: pgmap v297: 177 pgs: 177 active+clean; 146 MiB data, 797 MiB used, 41 GiB / 42 GiB avail; 31 KiB/s rd, 31 KiB/s wr, 47 op/s
Feb 20 09:56:11 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ec3b06e0-45bf-4e34-99f4-1c4bd5601e66", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 09:56:11 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ec3b06e0-45bf-4e34-99f4-1c4bd5601e66", "format": "json"}]: dispatch
Feb 20 09:56:11 np0005625203.localdomain dnsmasq[318989]: read /var/lib/neutron/dhcp/519ed234-0c28-4a63-b6ed-1122a8d9dfc9/addn_hosts - 1 addresses
Feb 20 09:56:11 np0005625203.localdomain dnsmasq-dhcp[318989]: read /var/lib/neutron/dhcp/519ed234-0c28-4a63-b6ed-1122a8d9dfc9/host
Feb 20 09:56:11 np0005625203.localdomain dnsmasq-dhcp[318989]: read /var/lib/neutron/dhcp/519ed234-0c28-4a63-b6ed-1122a8d9dfc9/opts
Feb 20 09:56:11 np0005625203.localdomain podman[319038]: 2026-02-20 09:56:11.288835668 +0000 UTC m=+0.059238484 container kill 0635da9aa2a57c7b9a458bedc07bd83fef14c3dc8e9205224adcb592a3d87af8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-519ed234-0c28-4a63-b6ed-1122a8d9dfc9, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 20 09:56:11 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:56:11.325 2 INFO neutron.agent.securitygroups_rpc [None req-4b1a20ca-0949-416f-91ae-525739a1e77a f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']
Feb 20 09:56:11 np0005625203.localdomain dnsmasq[317453]: exiting on receipt of SIGTERM
Feb 20 09:56:11 np0005625203.localdomain podman[319047]: 2026-02-20 09:56:11.332942663 +0000 UTC m=+0.061443102 container kill 7cbc1db28c7207c0a7c50287198f63ae70605a27eaf211f2c020e3c2b839fc24 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-18837b45-5571-4a62-8087-50344388e0e4, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:56:11 np0005625203.localdomain systemd[1]: libpod-7cbc1db28c7207c0a7c50287198f63ae70605a27eaf211f2c020e3c2b839fc24.scope: Deactivated successfully.
Feb 20 09:56:11 np0005625203.localdomain podman[319067]: 2026-02-20 09:56:11.409364227 +0000 UTC m=+0.061186964 container died 7cbc1db28c7207c0a7c50287198f63ae70605a27eaf211f2c020e3c2b839fc24 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-18837b45-5571-4a62-8087-50344388e0e4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:56:11 np0005625203.localdomain podman[319067]: 2026-02-20 09:56:11.442327267 +0000 UTC m=+0.094149974 container cleanup 7cbc1db28c7207c0a7c50287198f63ae70605a27eaf211f2c020e3c2b839fc24 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-18837b45-5571-4a62-8087-50344388e0e4, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, io.buildah.version=1.41.3)
Feb 20 09:56:11 np0005625203.localdomain systemd[1]: libpod-conmon-7cbc1db28c7207c0a7c50287198f63ae70605a27eaf211f2c020e3c2b839fc24.scope: Deactivated successfully.
Feb 20 09:56:11 np0005625203.localdomain podman[319075]: 2026-02-20 09:56:11.489204577 +0000 UTC m=+0.123109879 container remove 7cbc1db28c7207c0a7c50287198f63ae70605a27eaf211f2c020e3c2b839fc24 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-18837b45-5571-4a62-8087-50344388e0e4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 20 09:56:11 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-11997954db4388642733363e5051d573279f7d63d7b1c3903d92fd5f670c1eb6-merged.mount: Deactivated successfully.
Feb 20 09:56:11 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7cbc1db28c7207c0a7c50287198f63ae70605a27eaf211f2c020e3c2b839fc24-userdata-shm.mount: Deactivated successfully.
Feb 20 09:56:11 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:11.563 262775 INFO neutron.agent.dhcp.agent [None req-0f39faf3-0ebc-4d79-83c2-cb5d912428d9 - - - - - -] DHCP configuration for ports {'9e6826ae-e6f2-4808-bb4f-cc4bd2ae5a89'} is completed
Feb 20 09:56:11 np0005625203.localdomain dnsmasq[318933]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 0 addresses
Feb 20 09:56:11 np0005625203.localdomain dnsmasq-dhcp[318933]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/host
Feb 20 09:56:11 np0005625203.localdomain dnsmasq-dhcp[318933]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/opts
Feb 20 09:56:11 np0005625203.localdomain podman[319119]: 2026-02-20 09:56:11.60825986 +0000 UTC m=+0.061570606 container kill b044ad2a24bafa75368bde8b1d2b20cee2ee4dd2dc9f035aec5a0b9cf5efc680 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:56:11 np0005625203.localdomain systemd[1]: run-netns-qdhcp\x2d18837b45\x2d5571\x2d4a62\x2d8087\x2d50344388e0e4.mount: Deactivated successfully.
Feb 20 09:56:11 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:11.732 262775 INFO neutron.agent.dhcp.agent [None req-d51eddf1-4251-4023-8bc8-ec73ad37afc3 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:56:11 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:11.948 262775 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:56:12 np0005625203.localdomain dnsmasq[318933]: exiting on receipt of SIGTERM
Feb 20 09:56:12 np0005625203.localdomain podman[319160]: 2026-02-20 09:56:12.325108118 +0000 UTC m=+0.059674487 container kill b044ad2a24bafa75368bde8b1d2b20cee2ee4dd2dc9f035aec5a0b9cf5efc680 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 20 09:56:12 np0005625203.localdomain systemd[1]: libpod-b044ad2a24bafa75368bde8b1d2b20cee2ee4dd2dc9f035aec5a0b9cf5efc680.scope: Deactivated successfully.
Feb 20 09:56:12 np0005625203.localdomain podman[319172]: 2026-02-20 09:56:12.401512751 +0000 UTC m=+0.062197045 container died b044ad2a24bafa75368bde8b1d2b20cee2ee4dd2dc9f035aec5a0b9cf5efc680 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true)
Feb 20 09:56:12 np0005625203.localdomain podman[319172]: 2026-02-20 09:56:12.425412561 +0000 UTC m=+0.086096825 container cleanup b044ad2a24bafa75368bde8b1d2b20cee2ee4dd2dc9f035aec5a0b9cf5efc680 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 20 09:56:12 np0005625203.localdomain systemd[1]: libpod-conmon-b044ad2a24bafa75368bde8b1d2b20cee2ee4dd2dc9f035aec5a0b9cf5efc680.scope: Deactivated successfully.
Feb 20 09:56:12 np0005625203.localdomain podman[319174]: 2026-02-20 09:56:12.477247404 +0000 UTC m=+0.132400197 container remove b044ad2a24bafa75368bde8b1d2b20cee2ee4dd2dc9f035aec5a0b9cf5efc680 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 20 09:56:12 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-5d8b6bd929da966a51a1b906de735834dadc018de3ef44a203f4b4cbc7ac43b4-merged.mount: Deactivated successfully.
Feb 20 09:56:12 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b044ad2a24bafa75368bde8b1d2b20cee2ee4dd2dc9f035aec5a0b9cf5efc680-userdata-shm.mount: Deactivated successfully.
Feb 20 09:56:12 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:12.882 262775 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:56:13 np0005625203.localdomain ceph-mon[296066]: pgmap v298: 177 pgs: 177 active+clean; 146 MiB data, 801 MiB used, 41 GiB / 42 GiB avail; 24 KiB/s rd, 20 KiB/s wr, 38 op/s
Feb 20 09:56:13 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:13.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:56:13 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:13.342 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 20 09:56:13 np0005625203.localdomain podman[319265]: 
Feb 20 09:56:13 np0005625203.localdomain podman[319265]: 2026-02-20 09:56:13.365538206 +0000 UTC m=+0.088553170 container create 067811e4b05141e5e95ece26d39c60ffe61e4c8c608caa58c5d90fb93c64cf1c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:56:13 np0005625203.localdomain systemd[1]: Started libpod-conmon-067811e4b05141e5e95ece26d39c60ffe61e4c8c608caa58c5d90fb93c64cf1c.scope.
Feb 20 09:56:13 np0005625203.localdomain podman[319265]: 2026-02-20 09:56:13.318684377 +0000 UTC m=+0.041699361 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:56:13 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:56:13 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25ef9a0337096fb462cabaad0885cd5761b8bfd6a8f5a3b6be05ddb6e10483a8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:56:13 np0005625203.localdomain dnsmasq[318989]: exiting on receipt of SIGTERM
Feb 20 09:56:13 np0005625203.localdomain podman[319278]: 2026-02-20 09:56:13.441960051 +0000 UTC m=+0.123186132 container kill 0635da9aa2a57c7b9a458bedc07bd83fef14c3dc8e9205224adcb592a3d87af8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-519ed234-0c28-4a63-b6ed-1122a8d9dfc9, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:56:13 np0005625203.localdomain systemd[1]: libpod-0635da9aa2a57c7b9a458bedc07bd83fef14c3dc8e9205224adcb592a3d87af8.scope: Deactivated successfully.
Feb 20 09:56:13 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:13.473 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:13 np0005625203.localdomain podman[319265]: 2026-02-20 09:56:13.488667425 +0000 UTC m=+0.211682399 container init 067811e4b05141e5e95ece26d39c60ffe61e4c8c608caa58c5d90fb93c64cf1c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 20 09:56:13 np0005625203.localdomain podman[319265]: 2026-02-20 09:56:13.502786002 +0000 UTC m=+0.225800966 container start 067811e4b05141e5e95ece26d39c60ffe61e4c8c608caa58c5d90fb93c64cf1c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:56:13 np0005625203.localdomain dnsmasq[319318]: started, version 2.85 cachesize 150
Feb 20 09:56:13 np0005625203.localdomain dnsmasq[319318]: DNS service limited to local subnets
Feb 20 09:56:13 np0005625203.localdomain dnsmasq[319318]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:56:13 np0005625203.localdomain dnsmasq[319318]: warning: no upstream servers configured
Feb 20 09:56:13 np0005625203.localdomain dnsmasq[319318]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 0 addresses
Feb 20 09:56:13 np0005625203.localdomain podman[319297]: 2026-02-20 09:56:13.532322986 +0000 UTC m=+0.069112849 container died 0635da9aa2a57c7b9a458bedc07bd83fef14c3dc8e9205224adcb592a3d87af8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-519ed234-0c28-4a63-b6ed-1122a8d9dfc9, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Feb 20 09:56:13 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0635da9aa2a57c7b9a458bedc07bd83fef14c3dc8e9205224adcb592a3d87af8-userdata-shm.mount: Deactivated successfully.
Feb 20 09:56:13 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-473a081cd6523a73e47d3c5f73a0b8216a22047c30af45a9732405f72eb66fe1-merged.mount: Deactivated successfully.
Feb 20 09:56:13 np0005625203.localdomain podman[319297]: 2026-02-20 09:56:13.578725412 +0000 UTC m=+0.115515295 container remove 0635da9aa2a57c7b9a458bedc07bd83fef14c3dc8e9205224adcb592a3d87af8 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-519ed234-0c28-4a63-b6ed-1122a8d9dfc9, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 20 09:56:13 np0005625203.localdomain systemd[1]: libpod-conmon-0635da9aa2a57c7b9a458bedc07bd83fef14c3dc8e9205224adcb592a3d87af8.scope: Deactivated successfully.
Feb 20 09:56:13 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:13.711 262775 INFO neutron.agent.dhcp.agent [None req-731226c5-9d01-4973-816b-23ed30c3323c - - - - - -] DHCP configuration for ports {'1aa91995-3c2d-4cd2-9686-c296dd8b2b89', 'cebd560e-7047-4cc1-9642-f5b7ec377d58'} is completed
Feb 20 09:56:13 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:56:13.742 2 INFO neutron.agent.securitygroups_rpc [None req-93b1773d-c2eb-4652-8e8d-0c460cd5364e 809f6e2027f9442d8bd2b94b11475b17 3a09b3407c3143c8ae0948ccb18c1e61 - - default default] Security group member updated ['46f15231-c0dd-46d4-9abc-adba5985e75b', '446482cb-8c18-450e-acf7-2fbe583929b8']
Feb 20 09:56:13 np0005625203.localdomain dnsmasq[319318]: exiting on receipt of SIGTERM
Feb 20 09:56:13 np0005625203.localdomain podman[319344]: 2026-02-20 09:56:13.888994231 +0000 UTC m=+0.062823415 container kill 067811e4b05141e5e95ece26d39c60ffe61e4c8c608caa58c5d90fb93c64cf1c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 20 09:56:13 np0005625203.localdomain systemd[1]: libpod-067811e4b05141e5e95ece26d39c60ffe61e4c8c608caa58c5d90fb93c64cf1c.scope: Deactivated successfully.
Feb 20 09:56:13 np0005625203.localdomain podman[319358]: 2026-02-20 09:56:13.962067341 +0000 UTC m=+0.057367135 container died 067811e4b05141e5e95ece26d39c60ffe61e4c8c608caa58c5d90fb93c64cf1c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:56:13 np0005625203.localdomain podman[319358]: 2026-02-20 09:56:13.99206588 +0000 UTC m=+0.087365634 container cleanup 067811e4b05141e5e95ece26d39c60ffe61e4c8c608caa58c5d90fb93c64cf1c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS)
Feb 20 09:56:13 np0005625203.localdomain systemd[1]: libpod-conmon-067811e4b05141e5e95ece26d39c60ffe61e4c8c608caa58c5d90fb93c64cf1c.scope: Deactivated successfully.
Feb 20 09:56:14 np0005625203.localdomain podman[319360]: 2026-02-20 09:56:14.053393077 +0000 UTC m=+0.139379653 container remove 067811e4b05141e5e95ece26d39c60ffe61e4c8c608caa58c5d90fb93c64cf1c (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true)
Feb 20 09:56:14 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:14Z|00285|binding|INFO|Releasing lport 1aa91995-3c2d-4cd2-9686-c296dd8b2b89 from this chassis (sb_readonly=0)
Feb 20 09:56:14 np0005625203.localdomain kernel: device tap1aa91995-3c left promiscuous mode
Feb 20 09:56:14 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:14.066 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:14 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:14Z|00286|binding|INFO|Setting lport 1aa91995-3c2d-4cd2-9686-c296dd8b2b89 down in Southbound
Feb 20 09:56:14 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:14.076 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fec0:1aa2/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': ''}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=1aa91995-3c2d-4cd2-9686-c296dd8b2b89) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:56:14 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:14.079 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 1aa91995-3c2d-4cd2-9686-c296dd8b2b89 in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 unbound from our chassis
Feb 20 09:56:14 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:14.083 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 811e2462-6872-485d-9c09-d2dd9cb25273, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:56:14 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:14.084 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[e0fb8d0d-e7ff-450b-a05e-1ff016223f97]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:56:14 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:14.088 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:14 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "ec3b06e0-45bf-4e34-99f4-1c4bd5601e66", "new_size": 2147483648, "format": "json"}]: dispatch
Feb 20 09:56:14 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:56:14 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:56:14.364 2 INFO neutron.agent.securitygroups_rpc [None req-6fbfa532-f4c6-42e9-b707-63e0a42ce0d3 809f6e2027f9442d8bd2b94b11475b17 3a09b3407c3143c8ae0948ccb18c1e61 - - default default] Security group member updated ['446482cb-8c18-450e-acf7-2fbe583929b8']
Feb 20 09:56:14 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-25ef9a0337096fb462cabaad0885cd5761b8bfd6a8f5a3b6be05ddb6e10483a8-merged.mount: Deactivated successfully.
Feb 20 09:56:14 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-067811e4b05141e5e95ece26d39c60ffe61e4c8c608caa58c5d90fb93c64cf1c-userdata-shm.mount: Deactivated successfully.
Feb 20 09:56:14 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:14.552 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:14 np0005625203.localdomain systemd[1]: run-netns-qdhcp\x2d811e2462\x2d6872\x2d485d\x2d9c09\x2dd2dd9cb25273.mount: Deactivated successfully.
Feb 20 09:56:15 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:15.063 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:4e:b0 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=cebd560e-7047-4cc1-9642-f5b7ec377d58) old=Port_Binding(mac=['fa:16:3e:f0:4e:b0 10.100.0.3 2001:db8::f816:3eff:fef0:4eb0'], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8::f816:3eff:fef0:4eb0/64', 'neutron:device_id': 'ovnmeta-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:56:15 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:15.065 161112 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port cebd560e-7047-4cc1-9642-f5b7ec377d58 in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 updated
Feb 20 09:56:15 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:15.069 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 811e2462-6872-485d-9c09-d2dd9cb25273, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:56:15 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:15.070 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[669feec6-7143-4f21-8511-739f54bba0a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:56:15 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:15.107 262775 INFO neutron.agent.linux.ip_lib [None req-d6c70f6d-80b3-4949-ae5e-360b3e2711a7 - - - - - -] Device tap29ffb8f5-0d cannot be used as it has no MAC address
Feb 20 09:56:15 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:15.130 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:15 np0005625203.localdomain kernel: device tap29ffb8f5-0d entered promiscuous mode
Feb 20 09:56:15 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581375.1356] manager: (tap29ffb8f5-0d): new Generic device (/org/freedesktop/NetworkManager/Devices/57)
Feb 20 09:56:15 np0005625203.localdomain systemd-udevd[319424]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 09:56:15 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:15.149 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:15 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:15.175 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:15 np0005625203.localdomain ceph-mon[296066]: pgmap v299: 177 pgs: 177 active+clean; 146 MiB data, 801 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 19 KiB/s wr, 33 op/s
Feb 20 09:56:15 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:15.203 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:15 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:15.222 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:15 np0005625203.localdomain podman[319462]: 
Feb 20 09:56:15 np0005625203.localdomain podman[319462]: 2026-02-20 09:56:15.49763254 +0000 UTC m=+0.096892209 container create c9a384fb47328b8a169187e40d378c1d34c76e21502c1c025e476407768d2f20 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-519ed234-0c28-4a63-b6ed-1122a8d9dfc9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 20 09:56:15 np0005625203.localdomain systemd[1]: Started libpod-conmon-c9a384fb47328b8a169187e40d378c1d34c76e21502c1c025e476407768d2f20.scope.
Feb 20 09:56:15 np0005625203.localdomain podman[319462]: 2026-02-20 09:56:15.449955345 +0000 UTC m=+0.049215034 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:56:15 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:56:15 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d3a6dec96972202a10f29d32ce897df4a1756a91c8c179ea0dcc481e40d82f9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:56:15 np0005625203.localdomain podman[319462]: 2026-02-20 09:56:15.578184191 +0000 UTC m=+0.177443850 container init c9a384fb47328b8a169187e40d378c1d34c76e21502c1c025e476407768d2f20 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-519ed234-0c28-4a63-b6ed-1122a8d9dfc9, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Feb 20 09:56:15 np0005625203.localdomain podman[319462]: 2026-02-20 09:56:15.588103738 +0000 UTC m=+0.187363457 container start c9a384fb47328b8a169187e40d378c1d34c76e21502c1c025e476407768d2f20 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-519ed234-0c28-4a63-b6ed-1122a8d9dfc9, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 20 09:56:15 np0005625203.localdomain dnsmasq[319488]: started, version 2.85 cachesize 150
Feb 20 09:56:15 np0005625203.localdomain dnsmasq[319488]: DNS service limited to local subnets
Feb 20 09:56:15 np0005625203.localdomain dnsmasq[319488]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:56:15 np0005625203.localdomain dnsmasq[319488]: warning: no upstream servers configured
Feb 20 09:56:15 np0005625203.localdomain dnsmasq-dhcp[319488]: DHCPv6, static leases only on 2001:db8:0:1::, lease time 1d
Feb 20 09:56:15 np0005625203.localdomain dnsmasq-dhcp[319488]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Feb 20 09:56:15 np0005625203.localdomain dnsmasq[319488]: read /var/lib/neutron/dhcp/519ed234-0c28-4a63-b6ed-1122a8d9dfc9/addn_hosts - 1 addresses
Feb 20 09:56:15 np0005625203.localdomain dnsmasq-dhcp[319488]: read /var/lib/neutron/dhcp/519ed234-0c28-4a63-b6ed-1122a8d9dfc9/host
Feb 20 09:56:15 np0005625203.localdomain dnsmasq-dhcp[319488]: read /var/lib/neutron/dhcp/519ed234-0c28-4a63-b6ed-1122a8d9dfc9/opts
Feb 20 09:56:15 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:15.691 262775 INFO neutron.agent.dhcp.agent [None req-d6a25e16-e537-4e37-8a31-2d378987d7bb - - - - - -] Trigger reload_allocations for port admin_state_up=False, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:56:10Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4c8edf0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4c8e6d0>], id=9e6826ae-e6f2-4808-bb4f-cc4bd2ae5a89, ip_allocation=immediate, mac_address=fa:16:3e:5c:a5:f6, name=tempest-PortsIpV6TestJSON-1357984751, network_id=519ed234-0c28-4a63-b6ed-1122a8d9dfc9, port_security_enabled=True, project_id=3a09b3407c3143c8ae0948ccb18c1e61, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['446482cb-8c18-450e-acf7-2fbe583929b8'], standard_attr_id=2263, status=DOWN, tags=[], tenant_id=3a09b3407c3143c8ae0948ccb18c1e61, updated_at=2026-02-20T09:56:13Z on network 519ed234-0c28-4a63-b6ed-1122a8d9dfc9
Feb 20 09:56:15 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:15.860 262775 INFO neutron.agent.dhcp.agent [None req-688c336b-717e-448b-b42b-877a38b626f4 - - - - - -] DHCP configuration for ports {'8b912c3c-4bb6-4ee1-afd5-eacd5c98ea0f', '9e6826ae-e6f2-4808-bb4f-cc4bd2ae5a89', '4ae9bb75-db03-4abb-b744-8dfda71e8f04', 'ce08eb39-e0b4-4e5d-a2ce-d53a15770744'} is completed
Feb 20 09:56:15 np0005625203.localdomain dnsmasq[319488]: read /var/lib/neutron/dhcp/519ed234-0c28-4a63-b6ed-1122a8d9dfc9/addn_hosts - 1 addresses
Feb 20 09:56:15 np0005625203.localdomain dnsmasq-dhcp[319488]: read /var/lib/neutron/dhcp/519ed234-0c28-4a63-b6ed-1122a8d9dfc9/host
Feb 20 09:56:15 np0005625203.localdomain dnsmasq-dhcp[319488]: read /var/lib/neutron/dhcp/519ed234-0c28-4a63-b6ed-1122a8d9dfc9/opts
Feb 20 09:56:15 np0005625203.localdomain podman[319514]: 2026-02-20 09:56:15.880267957 +0000 UTC m=+0.053317551 container kill c9a384fb47328b8a169187e40d378c1d34c76e21502c1c025e476407768d2f20 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-519ed234-0c28-4a63-b6ed-1122a8d9dfc9, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:56:15 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:15.995 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:16 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:16.227 262775 INFO neutron.agent.dhcp.agent [None req-613e999c-cc93-41d5-9bf6-8b491c81ace3 - - - - - -] DHCP configuration for ports {'9e6826ae-e6f2-4808-bb4f-cc4bd2ae5a89'} is completed
Feb 20 09:56:16 np0005625203.localdomain dnsmasq[319488]: read /var/lib/neutron/dhcp/519ed234-0c28-4a63-b6ed-1122a8d9dfc9/addn_hosts - 0 addresses
Feb 20 09:56:16 np0005625203.localdomain dnsmasq-dhcp[319488]: read /var/lib/neutron/dhcp/519ed234-0c28-4a63-b6ed-1122a8d9dfc9/host
Feb 20 09:56:16 np0005625203.localdomain dnsmasq-dhcp[319488]: read /var/lib/neutron/dhcp/519ed234-0c28-4a63-b6ed-1122a8d9dfc9/opts
Feb 20 09:56:16 np0005625203.localdomain podman[319567]: 2026-02-20 09:56:16.297271128 +0000 UTC m=+0.105486285 container kill c9a384fb47328b8a169187e40d378c1d34c76e21502c1c025e476407768d2f20 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-519ed234-0c28-4a63-b6ed-1122a8d9dfc9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127)
Feb 20 09:56:16 np0005625203.localdomain podman[319585]: 
Feb 20 09:56:16 np0005625203.localdomain podman[319585]: 2026-02-20 09:56:16.362084283 +0000 UTC m=+0.105009229 container create 45b9774ba5e527c1e23f5da932880193d3da759e894933b3b7f4739ab31276e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 20 09:56:16 np0005625203.localdomain systemd[1]: Started libpod-conmon-45b9774ba5e527c1e23f5da932880193d3da759e894933b3b7f4739ab31276e7.scope.
Feb 20 09:56:16 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:56:16 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b80eceee2c0edb9cc4052cabca54974af1df48606792bc625b33eb1b441080ef/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:56:16 np0005625203.localdomain podman[319585]: 2026-02-20 09:56:16.416498457 +0000 UTC m=+0.159423403 container init 45b9774ba5e527c1e23f5da932880193d3da759e894933b3b7f4739ab31276e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:56:16 np0005625203.localdomain podman[319585]: 2026-02-20 09:56:16.323275073 +0000 UTC m=+0.066200089 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:56:16 np0005625203.localdomain podman[319585]: 2026-02-20 09:56:16.425912368 +0000 UTC m=+0.168837334 container start 45b9774ba5e527c1e23f5da932880193d3da759e894933b3b7f4739ab31276e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 20 09:56:16 np0005625203.localdomain dnsmasq[319610]: started, version 2.85 cachesize 150
Feb 20 09:56:16 np0005625203.localdomain dnsmasq[319610]: DNS service limited to local subnets
Feb 20 09:56:16 np0005625203.localdomain dnsmasq[319610]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:56:16 np0005625203.localdomain dnsmasq[319610]: warning: no upstream servers configured
Feb 20 09:56:16 np0005625203.localdomain dnsmasq-dhcp[319610]: DHCP, static leases only on 10.100.0.0, lease time 1d
Feb 20 09:56:16 np0005625203.localdomain dnsmasq[319610]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 0 addresses
Feb 20 09:56:16 np0005625203.localdomain dnsmasq-dhcp[319610]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/host
Feb 20 09:56:16 np0005625203.localdomain dnsmasq-dhcp[319610]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/opts
Feb 20 09:56:16 np0005625203.localdomain sshd[319612]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:56:16 np0005625203.localdomain kernel: device tap29ffb8f5-0d left promiscuous mode
Feb 20 09:56:16 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:16.523 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:16 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:16.539 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:16 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:16.577 262775 INFO neutron.agent.dhcp.agent [None req-8ddc263c-3604-4ded-a3de-f8d931c6fd2d - - - - - -] DHCP configuration for ports {'cebd560e-7047-4cc1-9642-f5b7ec377d58'} is completed
Feb 20 09:56:16 np0005625203.localdomain dnsmasq[319488]: exiting on receipt of SIGTERM
Feb 20 09:56:16 np0005625203.localdomain podman[319633]: 2026-02-20 09:56:16.716988724 +0000 UTC m=+0.053577949 container kill c9a384fb47328b8a169187e40d378c1d34c76e21502c1c025e476407768d2f20 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-519ed234-0c28-4a63-b6ed-1122a8d9dfc9, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 20 09:56:16 np0005625203.localdomain systemd[1]: libpod-c9a384fb47328b8a169187e40d378c1d34c76e21502c1c025e476407768d2f20.scope: Deactivated successfully.
Feb 20 09:56:16 np0005625203.localdomain podman[319647]: 2026-02-20 09:56:16.769625522 +0000 UTC m=+0.042959720 container died c9a384fb47328b8a169187e40d378c1d34c76e21502c1c025e476407768d2f20 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-519ed234-0c28-4a63-b6ed-1122a8d9dfc9, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 20 09:56:16 np0005625203.localdomain systemd[1]: tmp-crun.YZWck9.mount: Deactivated successfully.
Feb 20 09:56:16 np0005625203.localdomain podman[319647]: 2026-02-20 09:56:16.870370859 +0000 UTC m=+0.143705017 container cleanup c9a384fb47328b8a169187e40d378c1d34c76e21502c1c025e476407768d2f20 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-519ed234-0c28-4a63-b6ed-1122a8d9dfc9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:56:16 np0005625203.localdomain systemd[1]: libpod-conmon-c9a384fb47328b8a169187e40d378c1d34c76e21502c1c025e476407768d2f20.scope: Deactivated successfully.
Feb 20 09:56:16 np0005625203.localdomain podman[319652]: 2026-02-20 09:56:16.894797074 +0000 UTC m=+0.156473112 container remove c9a384fb47328b8a169187e40d378c1d34c76e21502c1c025e476407768d2f20 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-519ed234-0c28-4a63-b6ed-1122a8d9dfc9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 20 09:56:17 np0005625203.localdomain ceph-mon[296066]: pgmap v300: 177 pgs: 177 active+clean; 146 MiB data, 801 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 26 KiB/s wr, 35 op/s
Feb 20 09:56:17 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-4d3a6dec96972202a10f29d32ce897df4a1756a91c8c179ea0dcc481e40d82f9-merged.mount: Deactivated successfully.
Feb 20 09:56:17 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c9a384fb47328b8a169187e40d378c1d34c76e21502c1c025e476407768d2f20-userdata-shm.mount: Deactivated successfully.
Feb 20 09:56:17 np0005625203.localdomain sshd[319612]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:56:17 np0005625203.localdomain podman[319729]: 
Feb 20 09:56:17 np0005625203.localdomain podman[319729]: 2026-02-20 09:56:17.862584276 +0000 UTC m=+0.073885548 container create 6336e9beb186b16fe29402116db1d31a4643aaa6db6fea151a2cc8c528e2ddcb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-519ed234-0c28-4a63-b6ed-1122a8d9dfc9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:56:17 np0005625203.localdomain systemd[1]: Started libpod-conmon-6336e9beb186b16fe29402116db1d31a4643aaa6db6fea151a2cc8c528e2ddcb.scope.
Feb 20 09:56:17 np0005625203.localdomain podman[319729]: 2026-02-20 09:56:17.82068756 +0000 UTC m=+0.031988862 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:56:17 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:56:17 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1161f239c97a2365fa357f550e4e49ada96713aa4d1d09af81e74bfb83c408d3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:56:17 np0005625203.localdomain podman[319729]: 2026-02-20 09:56:17.950235447 +0000 UTC m=+0.161536719 container init 6336e9beb186b16fe29402116db1d31a4643aaa6db6fea151a2cc8c528e2ddcb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-519ed234-0c28-4a63-b6ed-1122a8d9dfc9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:56:17 np0005625203.localdomain podman[319729]: 2026-02-20 09:56:17.957341778 +0000 UTC m=+0.168643060 container start 6336e9beb186b16fe29402116db1d31a4643aaa6db6fea151a2cc8c528e2ddcb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-519ed234-0c28-4a63-b6ed-1122a8d9dfc9, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS)
Feb 20 09:56:17 np0005625203.localdomain dnsmasq[319747]: started, version 2.85 cachesize 150
Feb 20 09:56:17 np0005625203.localdomain dnsmasq[319747]: DNS service limited to local subnets
Feb 20 09:56:17 np0005625203.localdomain dnsmasq[319747]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:56:17 np0005625203.localdomain dnsmasq[319747]: warning: no upstream servers configured
Feb 20 09:56:17 np0005625203.localdomain dnsmasq-dhcp[319747]: DHCPv6, static leases only on 2001:db8:0:1::, lease time 1d
Feb 20 09:56:17 np0005625203.localdomain dnsmasq[319747]: read /var/lib/neutron/dhcp/519ed234-0c28-4a63-b6ed-1122a8d9dfc9/addn_hosts - 0 addresses
Feb 20 09:56:17 np0005625203.localdomain dnsmasq-dhcp[319747]: read /var/lib/neutron/dhcp/519ed234-0c28-4a63-b6ed-1122a8d9dfc9/host
Feb 20 09:56:17 np0005625203.localdomain dnsmasq-dhcp[319747]: read /var/lib/neutron/dhcp/519ed234-0c28-4a63-b6ed-1122a8d9dfc9/opts
Feb 20 09:56:18 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ec3b06e0-45bf-4e34-99f4-1c4bd5601e66", "format": "json"}]: dispatch
Feb 20 09:56:18 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ec3b06e0-45bf-4e34-99f4-1c4bd5601e66", "force": true, "format": "json"}]: dispatch
Feb 20 09:56:18 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/2119990363' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:56:18 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/2119990363' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:56:18 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:18.255 262775 INFO neutron.agent.dhcp.agent [None req-e3e23a79-1da0-4e2d-9054-119b21e33001 - - - - - -] DHCP configuration for ports {'8b912c3c-4bb6-4ee1-afd5-eacd5c98ea0f', '4ae9bb75-db03-4abb-b744-8dfda71e8f04', 'ce08eb39-e0b4-4e5d-a2ce-d53a15770744'} is completed
Feb 20 09:56:18 np0005625203.localdomain dnsmasq[319747]: exiting on receipt of SIGTERM
Feb 20 09:56:18 np0005625203.localdomain podman[319765]: 2026-02-20 09:56:18.427575785 +0000 UTC m=+0.062795294 container kill 6336e9beb186b16fe29402116db1d31a4643aaa6db6fea151a2cc8c528e2ddcb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-519ed234-0c28-4a63-b6ed-1122a8d9dfc9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 20 09:56:18 np0005625203.localdomain systemd[1]: libpod-6336e9beb186b16fe29402116db1d31a4643aaa6db6fea151a2cc8c528e2ddcb.scope: Deactivated successfully.
Feb 20 09:56:18 np0005625203.localdomain podman[319779]: 2026-02-20 09:56:18.499849381 +0000 UTC m=+0.060472201 container died 6336e9beb186b16fe29402116db1d31a4643aaa6db6fea151a2cc8c528e2ddcb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-519ed234-0c28-4a63-b6ed-1122a8d9dfc9, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127)
Feb 20 09:56:18 np0005625203.localdomain systemd[1]: tmp-crun.1H3oZl.mount: Deactivated successfully.
Feb 20 09:56:18 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6336e9beb186b16fe29402116db1d31a4643aaa6db6fea151a2cc8c528e2ddcb-userdata-shm.mount: Deactivated successfully.
Feb 20 09:56:18 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-1161f239c97a2365fa357f550e4e49ada96713aa4d1d09af81e74bfb83c408d3-merged.mount: Deactivated successfully.
Feb 20 09:56:18 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:18.543 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:18 np0005625203.localdomain podman[319779]: 2026-02-20 09:56:18.546169154 +0000 UTC m=+0.106791944 container cleanup 6336e9beb186b16fe29402116db1d31a4643aaa6db6fea151a2cc8c528e2ddcb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-519ed234-0c28-4a63-b6ed-1122a8d9dfc9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:56:18 np0005625203.localdomain systemd[1]: libpod-conmon-6336e9beb186b16fe29402116db1d31a4643aaa6db6fea151a2cc8c528e2ddcb.scope: Deactivated successfully.
Feb 20 09:56:18 np0005625203.localdomain podman[319786]: 2026-02-20 09:56:18.629294505 +0000 UTC m=+0.177558013 container remove 6336e9beb186b16fe29402116db1d31a4643aaa6db6fea151a2cc8c528e2ddcb (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-519ed234-0c28-4a63-b6ed-1122a8d9dfc9, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Feb 20 09:56:19 np0005625203.localdomain ceph-mon[296066]: pgmap v301: 177 pgs: 177 active+clean; 146 MiB data, 801 MiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 13 KiB/s wr, 18 op/s
Feb 20 09:56:19 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:19.246 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:4e:b0 10.100.0.2 2001:db8::f816:3eff:fef0:4eb0'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fef0:4eb0/64', 'neutron:device_id': 'ovnmeta-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=cebd560e-7047-4cc1-9642-f5b7ec377d58) old=Port_Binding(mac=['fa:16:3e:f0:4e:b0 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:56:19 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:19.248 161112 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port cebd560e-7047-4cc1-9642-f5b7ec377d58 in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 updated
Feb 20 09:56:19 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:19.251 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 811e2462-6872-485d-9c09-d2dd9cb25273, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:56:19 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:19.252 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[afc84218-c2d0-47ce-86fe-fc8bf02d31b0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:56:19 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:56:19 np0005625203.localdomain sshd[319832]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:56:19 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:19.603 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:19 np0005625203.localdomain sshd[319832]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:56:19 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 09:56:19 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:56:19 np0005625203.localdomain dnsmasq[319610]: exiting on receipt of SIGTERM
Feb 20 09:56:19 np0005625203.localdomain podman[319865]: 2026-02-20 09:56:19.880019541 +0000 UTC m=+0.115789173 container kill 45b9774ba5e527c1e23f5da932880193d3da759e894933b3b7f4739ab31276e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb 20 09:56:19 np0005625203.localdomain systemd[1]: libpod-45b9774ba5e527c1e23f5da932880193d3da759e894933b3b7f4739ab31276e7.scope: Deactivated successfully.
Feb 20 09:56:19 np0005625203.localdomain podman[319886]: 
Feb 20 09:56:19 np0005625203.localdomain podman[319886]: 2026-02-20 09:56:19.92298488 +0000 UTC m=+0.082486913 container create c878b84f8dc82d0c1c53797e4d2c04a3404292a25731a8598bc917cd01a04aa7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-519ed234-0c28-4a63-b6ed-1122a8d9dfc9, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:56:19 np0005625203.localdomain podman[319898]: 2026-02-20 09:56:19.952291867 +0000 UTC m=+0.060736940 container died 45b9774ba5e527c1e23f5da932880193d3da759e894933b3b7f4739ab31276e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 20 09:56:19 np0005625203.localdomain systemd[1]: Started libpod-conmon-c878b84f8dc82d0c1c53797e4d2c04a3404292a25731a8598bc917cd01a04aa7.scope.
Feb 20 09:56:19 np0005625203.localdomain podman[319898]: 2026-02-20 09:56:19.984939386 +0000 UTC m=+0.093384379 container cleanup 45b9774ba5e527c1e23f5da932880193d3da759e894933b3b7f4739ab31276e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:56:19 np0005625203.localdomain podman[319886]: 2026-02-20 09:56:19.889502254 +0000 UTC m=+0.049004317 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:56:19 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:56:19 np0005625203.localdomain systemd[1]: libpod-conmon-45b9774ba5e527c1e23f5da932880193d3da759e894933b3b7f4739ab31276e7.scope: Deactivated successfully.
Feb 20 09:56:20 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b51fe5866a5112512317433462a7de08cf24fd10565fe93f4077c66acced9569/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:56:20 np0005625203.localdomain podman[319900]: 2026-02-20 09:56:20.009675172 +0000 UTC m=+0.102862043 container remove 45b9774ba5e527c1e23f5da932880193d3da759e894933b3b7f4739ab31276e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Feb 20 09:56:20 np0005625203.localdomain podman[319886]: 2026-02-20 09:56:20.01443983 +0000 UTC m=+0.173941933 container init c878b84f8dc82d0c1c53797e4d2c04a3404292a25731a8598bc917cd01a04aa7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-519ed234-0c28-4a63-b6ed-1122a8d9dfc9, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:56:20 np0005625203.localdomain podman[319886]: 2026-02-20 09:56:20.025013747 +0000 UTC m=+0.184515810 container start c878b84f8dc82d0c1c53797e4d2c04a3404292a25731a8598bc917cd01a04aa7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-519ed234-0c28-4a63-b6ed-1122a8d9dfc9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:56:20 np0005625203.localdomain dnsmasq[319935]: started, version 2.85 cachesize 150
Feb 20 09:56:20 np0005625203.localdomain dnsmasq[319935]: DNS service limited to local subnets
Feb 20 09:56:20 np0005625203.localdomain dnsmasq[319935]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:56:20 np0005625203.localdomain dnsmasq[319935]: warning: no upstream servers configured
Feb 20 09:56:20 np0005625203.localdomain dnsmasq-dhcp[319935]: DHCPv6, static leases only on 2001:db8:0:1::, lease time 1d
Feb 20 09:56:20 np0005625203.localdomain dnsmasq-dhcp[319935]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Feb 20 09:56:20 np0005625203.localdomain dnsmasq[319935]: read /var/lib/neutron/dhcp/519ed234-0c28-4a63-b6ed-1122a8d9dfc9/addn_hosts - 0 addresses
Feb 20 09:56:20 np0005625203.localdomain dnsmasq-dhcp[319935]: read /var/lib/neutron/dhcp/519ed234-0c28-4a63-b6ed-1122a8d9dfc9/host
Feb 20 09:56:20 np0005625203.localdomain dnsmasq-dhcp[319935]: read /var/lib/neutron/dhcp/519ed234-0c28-4a63-b6ed-1122a8d9dfc9/opts
Feb 20 09:56:20 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:20.082 262775 INFO neutron.agent.linux.ip_lib [None req-4827f097-225f-40bf-af7c-2c2e26264df4 - - - - - -] Device tap29ffb8f5-0d cannot be used as it has no MAC address
Feb 20 09:56:20 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:20.112 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:20 np0005625203.localdomain kernel: device tap29ffb8f5-0d entered promiscuous mode
Feb 20 09:56:20 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581380.1219] manager: (tap29ffb8f5-0d): new Generic device (/org/freedesktop/NetworkManager/Devices/58)
Feb 20 09:56:20 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:20Z|00287|binding|INFO|Claiming lport 29ffb8f5-0d7b-4528-acdb-ba2f31b7d0f9 for this chassis.
Feb 20 09:56:20 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:20Z|00288|binding|INFO|29ffb8f5-0d7b-4528-acdb-ba2f31b7d0f9: Claiming unknown
Feb 20 09:56:20 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:20.124 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:20 np0005625203.localdomain systemd-udevd[319942]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 09:56:20 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:20.136 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8::f816:3eff:feb0:ac/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': ''}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=29ffb8f5-0d7b-4528-acdb-ba2f31b7d0f9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:56:20 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:20.137 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 29ffb8f5-0d7b-4528-acdb-ba2f31b7d0f9 in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 bound to our chassis
Feb 20 09:56:20 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:20.139 161112 DEBUG neutron.agent.ovn.metadata.agent [-] Port 5be732b6-f7ce-4d5b-b6eb-afe2af740ca7 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Feb 20 09:56:20 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:20.139 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 811e2462-6872-485d-9c09-d2dd9cb25273, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:56:20 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:20.140 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[865b4941-b41e-433c-8381-0fd05b088cd0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:56:20 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:20Z|00289|binding|INFO|Setting lport 29ffb8f5-0d7b-4528-acdb-ba2f31b7d0f9 ovn-installed in OVS
Feb 20 09:56:20 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:20Z|00290|binding|INFO|Setting lport 29ffb8f5-0d7b-4528-acdb-ba2f31b7d0f9 up in Southbound
Feb 20 09:56:20 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:20.153 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:20 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:20.154 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:20 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:20.187 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:20 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:20.209 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:20 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:56:20 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:20.264 262775 INFO neutron.agent.dhcp.agent [None req-21dd5018-eb18-4640-b571-3e4982c68a9c - - - - - -] DHCP configuration for ports {'8b912c3c-4bb6-4ee1-afd5-eacd5c98ea0f', '4ae9bb75-db03-4abb-b744-8dfda71e8f04', 'ce08eb39-e0b4-4e5d-a2ce-d53a15770744'} is completed
Feb 20 09:56:20 np0005625203.localdomain dnsmasq[316498]: read /var/lib/neutron/dhcp/03f6e0ef-938e-46ab-a75d-e6de6dfcc7e6/addn_hosts - 0 addresses
Feb 20 09:56:20 np0005625203.localdomain dnsmasq-dhcp[316498]: read /var/lib/neutron/dhcp/03f6e0ef-938e-46ab-a75d-e6de6dfcc7e6/host
Feb 20 09:56:20 np0005625203.localdomain dnsmasq-dhcp[316498]: read /var/lib/neutron/dhcp/03f6e0ef-938e-46ab-a75d-e6de6dfcc7e6/opts
Feb 20 09:56:20 np0005625203.localdomain podman[319971]: 2026-02-20 09:56:20.346753911 +0000 UTC m=+0.064911890 container kill 9c660ba0e49500da38d3194af2e80e02c22b44047ca0daffe22a4a9e22f37f64 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-03f6e0ef-938e-46ab-a75d-e6de6dfcc7e6, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Feb 20 09:56:20 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:56:20.350 2 INFO neutron.agent.securitygroups_rpc [None req-27e863e6-abb7-4d79-8929-35ee419d3ab5 f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']
Feb 20 09:56:20 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:20.593 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:20 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:20Z|00291|binding|INFO|Releasing lport 7babcff6-d71d-4d28-bebf-6a33d9fdbcc9 from this chassis (sb_readonly=0)
Feb 20 09:56:20 np0005625203.localdomain kernel: device tap7babcff6-d7 left promiscuous mode
Feb 20 09:56:20 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:20Z|00292|binding|INFO|Setting lport 7babcff6-d71d-4d28-bebf-6a33d9fdbcc9 down in Southbound
Feb 20 09:56:20 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:20.607 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-03f6e0ef-938e-46ab-a75d-e6de6dfcc7e6', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-03f6e0ef-938e-46ab-a75d-e6de6dfcc7e6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1c44e13adebb4610b7c0cd2fdc62a5b7', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625203.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ad5a72c2-1fc9-4380-90dc-c87395348424, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=7babcff6-d71d-4d28-bebf-6a33d9fdbcc9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:56:20 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:20.608 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 7babcff6-d71d-4d28-bebf-6a33d9fdbcc9 in datapath 03f6e0ef-938e-46ab-a75d-e6de6dfcc7e6 unbound from our chassis
Feb 20 09:56:20 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:20.610 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 03f6e0ef-938e-46ab-a75d-e6de6dfcc7e6 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:56:20 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:20.610 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[415eabb3-7794-40e5-a7ad-2809a93692f0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:56:20 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:20.616 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:20 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:56:20.845 2 INFO neutron.agent.securitygroups_rpc [None req-5bc16860-c455-4be4-9017-f7ba050a5b1d f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']
Feb 20 09:56:20 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-b80eceee2c0edb9cc4052cabca54974af1df48606792bc625b33eb1b441080ef-merged.mount: Deactivated successfully.
Feb 20 09:56:20 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-45b9774ba5e527c1e23f5da932880193d3da759e894933b3b7f4739ab31276e7-userdata-shm.mount: Deactivated successfully.
Feb 20 09:56:20 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:56:20.947 2 INFO neutron.agent.securitygroups_rpc [None req-3d50957a-c50d-404e-a697-bd588426aa5b 809f6e2027f9442d8bd2b94b11475b17 3a09b3407c3143c8ae0948ccb18c1e61 - - default default] Security group member updated ['9c894fef-e625-4d2d-ad79-9f0215b19661']
Feb 20 09:56:20 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:20.997 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:21 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:21.034 262775 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:56:20Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4c8b910>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4ed1af0>], id=ed62ab7d-6845-4612-9bd7-27b11c74b983, ip_allocation=immediate, mac_address=fa:16:3e:8d:1d:1c, name=tempest-PortsIpV6TestJSON-858965588, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:55:04Z, description=, dns_domain=, id=519ed234-0c28-4a63-b6ed-1122a8d9dfc9, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-PortsIpV6TestJSON-test-network-1956785713, port_security_enabled=True, project_id=3a09b3407c3143c8ae0948ccb18c1e61, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=21486, qos_policy_id=None, revision_number=5, router:external=False, shared=False, standard_attr_id=1951, status=ACTIVE, subnets=['0a1a3512-4558-4da5-9f9e-6ef8d84eede5', 'fcfda9fa-fb7f-460d-ba45-4c2fef5ec704'], tags=[], tenant_id=3a09b3407c3143c8ae0948ccb18c1e61, updated_at=2026-02-20T09:56:16Z, vlan_transparent=None, network_id=519ed234-0c28-4a63-b6ed-1122a8d9dfc9, port_security_enabled=True, project_id=3a09b3407c3143c8ae0948ccb18c1e61, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['9c894fef-e625-4d2d-ad79-9f0215b19661'], standard_attr_id=2295, status=DOWN, tags=[], tenant_id=3a09b3407c3143c8ae0948ccb18c1e61, updated_at=2026-02-20T09:56:20Z on network 519ed234-0c28-4a63-b6ed-1122a8d9dfc9
Feb 20 09:56:21 np0005625203.localdomain podman[320037]: 
Feb 20 09:56:21 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:56:21.133 2 INFO neutron.agent.securitygroups_rpc [None req-ab2c767f-db90-4059-9416-3c9c50626a18 469c956856df4f7fa0303f662df3cdef 62dd1d7e0cf547678612304aba2895e2 - - default default] Security group member updated ['72ed92b6-af24-4274-854b-a52220405faf']
Feb 20 09:56:21 np0005625203.localdomain podman[320037]: 2026-02-20 09:56:21.13766463 +0000 UTC m=+0.095386282 container create 5d14f1c1bb2132242ccbfbc89fb43f1a9990e7540cc712199c0ea635581841b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:56:21 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:56:21 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:56:21 np0005625203.localdomain systemd[1]: Started libpod-conmon-5d14f1c1bb2132242ccbfbc89fb43f1a9990e7540cc712199c0ea635581841b3.scope.
Feb 20 09:56:21 np0005625203.localdomain podman[320037]: 2026-02-20 09:56:21.094899847 +0000 UTC m=+0.052621529 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:56:21 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:56:21 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97604ea027f9bd7ab703ac04b8dd3cad55ebff4df44b7fba07ebe41b5f34168d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:56:21 np0005625203.localdomain podman[320037]: 2026-02-20 09:56:21.208100709 +0000 UTC m=+0.165822361 container init 5d14f1c1bb2132242ccbfbc89fb43f1a9990e7540cc712199c0ea635581841b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:56:21 np0005625203.localdomain podman[320037]: 2026-02-20 09:56:21.219386048 +0000 UTC m=+0.177107670 container start 5d14f1c1bb2132242ccbfbc89fb43f1a9990e7540cc712199c0ea635581841b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 20 09:56:21 np0005625203.localdomain dnsmasq[320101]: started, version 2.85 cachesize 150
Feb 20 09:56:21 np0005625203.localdomain dnsmasq[320101]: DNS service limited to local subnets
Feb 20 09:56:21 np0005625203.localdomain dnsmasq[320101]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:56:21 np0005625203.localdomain dnsmasq[320101]: warning: no upstream servers configured
Feb 20 09:56:21 np0005625203.localdomain dnsmasq-dhcp[320101]: DHCP, static leases only on 10.100.0.0, lease time 1d
Feb 20 09:56:21 np0005625203.localdomain dnsmasq-dhcp[320101]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Feb 20 09:56:21 np0005625203.localdomain dnsmasq[320101]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 0 addresses
Feb 20 09:56:21 np0005625203.localdomain dnsmasq-dhcp[320101]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/host
Feb 20 09:56:21 np0005625203.localdomain dnsmasq-dhcp[320101]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/opts
Feb 20 09:56:21 np0005625203.localdomain ceph-mon[296066]: pgmap v302: 177 pgs: 177 active+clean; 146 MiB data, 801 MiB used, 41 GiB / 42 GiB avail; 12 KiB/s rd, 18 KiB/s wr, 20 op/s
Feb 20 09:56:21 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "15ed10cc-a478-4127-a35f-dcce0e8ec529", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 09:56:21 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "15ed10cc-a478-4127-a35f-dcce0e8ec529", "format": "json"}]: dispatch
Feb 20 09:56:21 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/772936112' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:56:21 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/772936112' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:56:21 np0005625203.localdomain dnsmasq[319935]: read /var/lib/neutron/dhcp/519ed234-0c28-4a63-b6ed-1122a8d9dfc9/addn_hosts - 1 addresses
Feb 20 09:56:21 np0005625203.localdomain dnsmasq-dhcp[319935]: read /var/lib/neutron/dhcp/519ed234-0c28-4a63-b6ed-1122a8d9dfc9/host
Feb 20 09:56:21 np0005625203.localdomain podman[320100]: 2026-02-20 09:56:21.284309966 +0000 UTC m=+0.056013264 container kill c878b84f8dc82d0c1c53797e4d2c04a3404292a25731a8598bc917cd01a04aa7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-519ed234-0c28-4a63-b6ed-1122a8d9dfc9, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0)
Feb 20 09:56:21 np0005625203.localdomain dnsmasq-dhcp[319935]: read /var/lib/neutron/dhcp/519ed234-0c28-4a63-b6ed-1122a8d9dfc9/opts
Feb 20 09:56:21 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:21.285 262775 INFO neutron.agent.dhcp.agent [None req-5ef4360f-ffe5-458e-9ef5-15292387503c - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:56:20Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4ddb340>, <neutron.agent.linux.dhcp.DictModel object at 0x7f7da4eac250>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4ddbeb0>, <neutron.agent.linux.dhcp.DictModel object at 0x7f7da4ddb3d0>], id=d49779d2-4e4e-4b33-aed7-a56d6ba97876, ip_allocation=immediate, mac_address=fa:16:3e:e8:e0:fa, name=tempest-NetworksTestDHCPv6-2047695306, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:54:44Z, description=, dns_domain=, id=811e2462-6872-485d-9c09-d2dd9cb25273, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-610089291, port_security_enabled=True, project_id=cb36e48ce4264babb412d413a8bf7b9f, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=47373, qos_policy_id=None, revision_number=35, router:external=False, shared=False, standard_attr_id=1864, status=ACTIVE, subnets=['59f3ec03-44ef-4b61-9f78-7fe126967c3c', 'd73b4e1c-ee6c-4b09-adfd-6f7981e06de2'], tags=[], tenant_id=cb36e48ce4264babb412d413a8bf7b9f, updated_at=2026-02-20T09:56:16Z, vlan_transparent=None, network_id=811e2462-6872-485d-9c09-d2dd9cb25273, port_security_enabled=True, project_id=cb36e48ce4264babb412d413a8bf7b9f, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['a85f25c3-88e2-4d71-a8d4-72a266c1246c'], standard_attr_id=2294, status=DOWN, tags=[], tenant_id=cb36e48ce4264babb412d413a8bf7b9f, updated_at=2026-02-20T09:56:20Z on network 811e2462-6872-485d-9c09-d2dd9cb25273
Feb 20 09:56:21 np0005625203.localdomain podman[320067]: 2026-02-20 09:56:21.278967981 +0000 UTC m=+0.097571940 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 20 09:56:21 np0005625203.localdomain podman[320068]: 2026-02-20 09:56:21.329831974 +0000 UTC m=+0.136532044 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 20 09:56:21 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:21.408 262775 INFO neutron.agent.dhcp.agent [None req-f5438b67-5749-4285-a266-f2d34640af1e - - - - - -] DHCP configuration for ports {'29ffb8f5-0d7b-4528-acdb-ba2f31b7d0f9', 'cebd560e-7047-4cc1-9642-f5b7ec377d58'} is completed
Feb 20 09:56:21 np0005625203.localdomain podman[320067]: 2026-02-20 09:56:21.412641807 +0000 UTC m=+0.231245806 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 20 09:56:21 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 09:56:21 np0005625203.localdomain dnsmasq[316498]: exiting on receipt of SIGTERM
Feb 20 09:56:21 np0005625203.localdomain podman[320068]: 2026-02-20 09:56:21.466218264 +0000 UTC m=+0.272918314 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 20 09:56:21 np0005625203.localdomain systemd[1]: libpod-9c660ba0e49500da38d3194af2e80e02c22b44047ca0daffe22a4a9e22f37f64.scope: Deactivated successfully.
Feb 20 09:56:21 np0005625203.localdomain podman[320159]: 2026-02-20 09:56:21.46801765 +0000 UTC m=+0.068796800 container kill 9c660ba0e49500da38d3194af2e80e02c22b44047ca0daffe22a4a9e22f37f64 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-03f6e0ef-938e-46ab-a75d-e6de6dfcc7e6, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:56:21 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:56:21 np0005625203.localdomain dnsmasq[320101]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 2 addresses
Feb 20 09:56:21 np0005625203.localdomain dnsmasq-dhcp[320101]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/host
Feb 20 09:56:21 np0005625203.localdomain podman[320174]: 2026-02-20 09:56:21.489858056 +0000 UTC m=+0.052751373 container kill 5d14f1c1bb2132242ccbfbc89fb43f1a9990e7540cc712199c0ea635581841b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:56:21 np0005625203.localdomain dnsmasq-dhcp[320101]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/opts
Feb 20 09:56:21 np0005625203.localdomain podman[320188]: 2026-02-20 09:56:21.528931395 +0000 UTC m=+0.048312727 container died 9c660ba0e49500da38d3194af2e80e02c22b44047ca0daffe22a4a9e22f37f64 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-03f6e0ef-938e-46ab-a75d-e6de6dfcc7e6, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:56:21 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:21.545 262775 INFO neutron.agent.dhcp.agent [None req-50f748ef-22b4-482c-9af0-ac35c74b6694 - - - - - -] DHCP configuration for ports {'ed62ab7d-6845-4612-9bd7-27b11c74b983'} is completed
Feb 20 09:56:21 np0005625203.localdomain podman[320188]: 2026-02-20 09:56:21.572432 +0000 UTC m=+0.091813312 container cleanup 9c660ba0e49500da38d3194af2e80e02c22b44047ca0daffe22a4a9e22f37f64 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-03f6e0ef-938e-46ab-a75d-e6de6dfcc7e6, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb 20 09:56:21 np0005625203.localdomain systemd[1]: libpod-conmon-9c660ba0e49500da38d3194af2e80e02c22b44047ca0daffe22a4a9e22f37f64.scope: Deactivated successfully.
Feb 20 09:56:21 np0005625203.localdomain podman[320192]: 2026-02-20 09:56:21.604892275 +0000 UTC m=+0.116797025 container remove 9c660ba0e49500da38d3194af2e80e02c22b44047ca0daffe22a4a9e22f37f64 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-03f6e0ef-938e-46ab-a75d-e6de6dfcc7e6, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:56:21 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:21.640 262775 INFO neutron.agent.dhcp.agent [None req-d7257218-3f64-4ac6-8e79-4e6540291892 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:56:21 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:21.691 262775 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:56:21 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:21.772 262775 INFO neutron.agent.dhcp.agent [None req-36364557-2221-4941-8075-5d2c803f8f93 - - - - - -] DHCP configuration for ports {'d49779d2-4e4e-4b33-aed7-a56d6ba97876'} is completed
Feb 20 09:56:21 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:56:21.823 2 INFO neutron.agent.securitygroups_rpc [None req-325d197d-f2bb-472d-a6df-be02729b4a1c 469c956856df4f7fa0303f662df3cdef 62dd1d7e0cf547678612304aba2895e2 - - default default] Security group member updated ['72ed92b6-af24-4274-854b-a52220405faf']
Feb 20 09:56:21 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-5f6b1d487e944fb77880f4c9e4816e3aa82cfa8f817bb7e1f56ee3b41f22a107-merged.mount: Deactivated successfully.
Feb 20 09:56:21 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9c660ba0e49500da38d3194af2e80e02c22b44047ca0daffe22a4a9e22f37f64-userdata-shm.mount: Deactivated successfully.
Feb 20 09:56:21 np0005625203.localdomain systemd[1]: run-netns-qdhcp\x2d03f6e0ef\x2d938e\x2d46ab\x2da75d\x2de6de6dfcc7e6.mount: Deactivated successfully.
Feb 20 09:56:21 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:21.945 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:21 np0005625203.localdomain dnsmasq[320101]: exiting on receipt of SIGTERM
Feb 20 09:56:21 np0005625203.localdomain podman[320246]: 2026-02-20 09:56:21.970043901 +0000 UTC m=+0.074199866 container kill 5d14f1c1bb2132242ccbfbc89fb43f1a9990e7540cc712199c0ea635581841b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2)
Feb 20 09:56:21 np0005625203.localdomain systemd[1]: libpod-5d14f1c1bb2132242ccbfbc89fb43f1a9990e7540cc712199c0ea635581841b3.scope: Deactivated successfully.
Feb 20 09:56:22 np0005625203.localdomain podman[320261]: 2026-02-20 09:56:22.042298037 +0000 UTC m=+0.054981462 container died 5d14f1c1bb2132242ccbfbc89fb43f1a9990e7540cc712199c0ea635581841b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Feb 20 09:56:22 np0005625203.localdomain podman[320261]: 2026-02-20 09:56:22.069265151 +0000 UTC m=+0.081948526 container cleanup 5d14f1c1bb2132242ccbfbc89fb43f1a9990e7540cc712199c0ea635581841b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Feb 20 09:56:22 np0005625203.localdomain systemd[1]: libpod-conmon-5d14f1c1bb2132242ccbfbc89fb43f1a9990e7540cc712199c0ea635581841b3.scope: Deactivated successfully.
Feb 20 09:56:22 np0005625203.localdomain podman[320262]: 2026-02-20 09:56:22.122775277 +0000 UTC m=+0.131338984 container remove 5d14f1c1bb2132242ccbfbc89fb43f1a9990e7540cc712199c0ea635581841b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Feb 20 09:56:22 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:56:22.607 2 INFO neutron.agent.securitygroups_rpc [None req-521cfad5-05c2-4b59-9313-296ec36811c0 469c956856df4f7fa0303f662df3cdef 62dd1d7e0cf547678612304aba2895e2 - - default default] Security group member updated ['72ed92b6-af24-4274-854b-a52220405faf']
Feb 20 09:56:22 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-97604ea027f9bd7ab703ac04b8dd3cad55ebff4df44b7fba07ebe41b5f34168d-merged.mount: Deactivated successfully.
Feb 20 09:56:22 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5d14f1c1bb2132242ccbfbc89fb43f1a9990e7540cc712199c0ea635581841b3-userdata-shm.mount: Deactivated successfully.
Feb 20 09:56:22 np0005625203.localdomain podman[320339]: 
Feb 20 09:56:22 np0005625203.localdomain podman[320339]: 2026-02-20 09:56:22.977754098 +0000 UTC m=+0.088028585 container create 952e8f49360a774ed0324b05539c1b18d9538335347a0c0c43261617fee5215d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:56:23 np0005625203.localdomain systemd[1]: Started libpod-conmon-952e8f49360a774ed0324b05539c1b18d9538335347a0c0c43261617fee5215d.scope.
Feb 20 09:56:23 np0005625203.localdomain systemd[1]: tmp-crun.uDtFC1.mount: Deactivated successfully.
Feb 20 09:56:23 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:56:23 np0005625203.localdomain podman[320339]: 2026-02-20 09:56:22.938276087 +0000 UTC m=+0.048550604 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:56:23 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ac75d5534093d813f5e2666a1aa2703d638b40b64cf723e72785eacc2225dd3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:56:23 np0005625203.localdomain podman[320339]: 2026-02-20 09:56:23.047484645 +0000 UTC m=+0.157759132 container init 952e8f49360a774ed0324b05539c1b18d9538335347a0c0c43261617fee5215d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, io.buildah.version=1.41.3)
Feb 20 09:56:23 np0005625203.localdomain podman[320339]: 2026-02-20 09:56:23.056231836 +0000 UTC m=+0.166506323 container start 952e8f49360a774ed0324b05539c1b18d9538335347a0c0c43261617fee5215d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 20 09:56:23 np0005625203.localdomain dnsmasq[320357]: started, version 2.85 cachesize 150
Feb 20 09:56:23 np0005625203.localdomain dnsmasq[320357]: DNS service limited to local subnets
Feb 20 09:56:23 np0005625203.localdomain dnsmasq[320357]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:56:23 np0005625203.localdomain dnsmasq[320357]: warning: no upstream servers configured
Feb 20 09:56:23 np0005625203.localdomain dnsmasq-dhcp[320357]: DHCP, static leases only on 10.100.0.0, lease time 1d
Feb 20 09:56:23 np0005625203.localdomain dnsmasq[320357]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 0 addresses
Feb 20 09:56:23 np0005625203.localdomain dnsmasq-dhcp[320357]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/host
Feb 20 09:56:23 np0005625203.localdomain dnsmasq-dhcp[320357]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/opts
Feb 20 09:56:23 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:56:23.116 2 INFO neutron.agent.securitygroups_rpc [None req-264095f7-8549-4a1d-9c14-cf140323ad0c 469c956856df4f7fa0303f662df3cdef 62dd1d7e0cf547678612304aba2895e2 - - default default] Security group member updated ['72ed92b6-af24-4274-854b-a52220405faf']
Feb 20 09:56:23 np0005625203.localdomain ceph-mon[296066]: pgmap v303: 177 pgs: 177 active+clean; 146 MiB data, 802 MiB used, 41 GiB / 42 GiB avail; 13 KiB/s rd, 12 KiB/s wr, 21 op/s
Feb 20 09:56:23 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:23.305 262775 INFO neutron.agent.dhcp.agent [None req-579e0ba8-4ab9-4149-b1e6-920f1bdee33d - - - - - -] DHCP configuration for ports {'29ffb8f5-0d7b-4528-acdb-ba2f31b7d0f9', 'cebd560e-7047-4cc1-9642-f5b7ec377d58'} is completed
Feb 20 09:56:23 np0005625203.localdomain dnsmasq[320357]: exiting on receipt of SIGTERM
Feb 20 09:56:23 np0005625203.localdomain podman[320375]: 2026-02-20 09:56:23.405374417 +0000 UTC m=+0.061995319 container kill 952e8f49360a774ed0324b05539c1b18d9538335347a0c0c43261617fee5215d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:56:23 np0005625203.localdomain systemd[1]: libpod-952e8f49360a774ed0324b05539c1b18d9538335347a0c0c43261617fee5215d.scope: Deactivated successfully.
Feb 20 09:56:23 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:56:23.458 2 INFO neutron.agent.securitygroups_rpc [None req-8e5e38ae-f36c-4a7a-929b-4d665cde8908 469c956856df4f7fa0303f662df3cdef 62dd1d7e0cf547678612304aba2895e2 - - default default] Security group member updated ['72ed92b6-af24-4274-854b-a52220405faf']
Feb 20 09:56:23 np0005625203.localdomain podman[320389]: 2026-02-20 09:56:23.491744259 +0000 UTC m=+0.070782070 container died 952e8f49360a774ed0324b05539c1b18d9538335347a0c0c43261617fee5215d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 20 09:56:23 np0005625203.localdomain podman[320389]: 2026-02-20 09:56:23.520450007 +0000 UTC m=+0.099487788 container cleanup 952e8f49360a774ed0324b05539c1b18d9538335347a0c0c43261617fee5215d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:56:23 np0005625203.localdomain systemd[1]: libpod-conmon-952e8f49360a774ed0324b05539c1b18d9538335347a0c0c43261617fee5215d.scope: Deactivated successfully.
Feb 20 09:56:23 np0005625203.localdomain podman[320390]: 2026-02-20 09:56:23.558221546 +0000 UTC m=+0.128421784 container remove 952e8f49360a774ed0324b05539c1b18d9538335347a0c0c43261617fee5215d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb 20 09:56:23 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:23.582 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:23 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:23Z|00293|binding|INFO|Releasing lport 29ffb8f5-0d7b-4528-acdb-ba2f31b7d0f9 from this chassis (sb_readonly=0)
Feb 20 09:56:23 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:23Z|00294|binding|INFO|Setting lport 29ffb8f5-0d7b-4528-acdb-ba2f31b7d0f9 down in Southbound
Feb 20 09:56:23 np0005625203.localdomain kernel: device tap29ffb8f5-0d left promiscuous mode
Feb 20 09:56:23 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:23.591 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8::f816:3eff:feb0:ac/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': ''}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=29ffb8f5-0d7b-4528-acdb-ba2f31b7d0f9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:56:23 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:23.593 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 29ffb8f5-0d7b-4528-acdb-ba2f31b7d0f9 in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 unbound from our chassis
Feb 20 09:56:23 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:23.596 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 811e2462-6872-485d-9c09-d2dd9cb25273, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:56:23 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:23.598 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[1dd90de6-78e7-4da6-943d-a9448bd7ac6d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:56:23 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:23.600 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:23 np0005625203.localdomain dnsmasq[319935]: exiting on receipt of SIGTERM
Feb 20 09:56:23 np0005625203.localdomain podman[320437]: 2026-02-20 09:56:23.753279391 +0000 UTC m=+0.079519002 container kill c878b84f8dc82d0c1c53797e4d2c04a3404292a25731a8598bc917cd01a04aa7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-519ed234-0c28-4a63-b6ed-1122a8d9dfc9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 20 09:56:23 np0005625203.localdomain systemd[1]: libpod-c878b84f8dc82d0c1c53797e4d2c04a3404292a25731a8598bc917cd01a04aa7.scope: Deactivated successfully.
Feb 20 09:56:23 np0005625203.localdomain podman[320451]: 2026-02-20 09:56:23.825157804 +0000 UTC m=+0.055313582 container died c878b84f8dc82d0c1c53797e4d2c04a3404292a25731a8598bc917cd01a04aa7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-519ed234-0c28-4a63-b6ed-1122a8d9dfc9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Feb 20 09:56:23 np0005625203.localdomain podman[320451]: 2026-02-20 09:56:23.867933717 +0000 UTC m=+0.098089445 container remove c878b84f8dc82d0c1c53797e4d2c04a3404292a25731a8598bc917cd01a04aa7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-519ed234-0c28-4a63-b6ed-1122a8d9dfc9, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2)
Feb 20 09:56:23 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-9ac75d5534093d813f5e2666a1aa2703d638b40b64cf723e72785eacc2225dd3-merged.mount: Deactivated successfully.
Feb 20 09:56:23 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-952e8f49360a774ed0324b05539c1b18d9538335347a0c0c43261617fee5215d-userdata-shm.mount: Deactivated successfully.
Feb 20 09:56:23 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-b51fe5866a5112512317433462a7de08cf24fd10565fe93f4077c66acced9569-merged.mount: Deactivated successfully.
Feb 20 09:56:23 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c878b84f8dc82d0c1c53797e4d2c04a3404292a25731a8598bc917cd01a04aa7-userdata-shm.mount: Deactivated successfully.
Feb 20 09:56:23 np0005625203.localdomain systemd[1]: libpod-conmon-c878b84f8dc82d0c1c53797e4d2c04a3404292a25731a8598bc917cd01a04aa7.scope: Deactivated successfully.
Feb 20 09:56:23 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:23.938 262775 INFO neutron.agent.dhcp.agent [None req-a1e8e9e2-86f3-4c17-8961-c023b758dcfd - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:56:23 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:23.939 262775 INFO neutron.agent.dhcp.agent [None req-a1e8e9e2-86f3-4c17-8961-c023b758dcfd - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:56:23 np0005625203.localdomain systemd[1]: run-netns-qdhcp\x2d811e2462\x2d6872\x2d485d\x2d9c09\x2dd2dd9cb25273.mount: Deactivated successfully.
Feb 20 09:56:24 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "15ed10cc-a478-4127-a35f-dcce0e8ec529", "format": "json"}]: dispatch
Feb 20 09:56:24 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "15ed10cc-a478-4127-a35f-dcce0e8ec529", "force": true, "format": "json"}]: dispatch
Feb 20 09:56:24 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:56:24 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:24.605 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:24 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:56:24.671 2 INFO neutron.agent.securitygroups_rpc [None req-7c70538f-1d84-485c-beb6-53999b2ce1d2 809f6e2027f9442d8bd2b94b11475b17 3a09b3407c3143c8ae0948ccb18c1e61 - - default default] Security group member updated ['9c894fef-e625-4d2d-ad79-9f0215b19661', '6e36724b-9ab8-4bfe-9f74-069d82055697', '5fe0aa03-55bd-43ef-a38b-499c4a5e8b30']
Feb 20 09:56:25 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:56:25.115 2 INFO neutron.agent.securitygroups_rpc [None req-22f50294-5f51-4ab3-8b7c-31c2f02c0d3d 469c956856df4f7fa0303f662df3cdef 62dd1d7e0cf547678612304aba2895e2 - - default default] Security group member updated ['72ed92b6-af24-4274-854b-a52220405faf']
Feb 20 09:56:25 np0005625203.localdomain podman[320526]: 
Feb 20 09:56:25 np0005625203.localdomain podman[320526]: 2026-02-20 09:56:25.261382458 +0000 UTC m=+0.088827799 container create d06e9195aff711c363c2af0d5e4dc91de7224b4ced7eb5c5535934dbf76a3cb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-519ed234-0c28-4a63-b6ed-1122a8d9dfc9, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:56:25 np0005625203.localdomain systemd[1]: Started libpod-conmon-d06e9195aff711c363c2af0d5e4dc91de7224b4ced7eb5c5535934dbf76a3cb0.scope.
Feb 20 09:56:25 np0005625203.localdomain systemd[1]: tmp-crun.lltbSL.mount: Deactivated successfully.
Feb 20 09:56:25 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:56:25 np0005625203.localdomain ceph-mon[296066]: pgmap v304: 177 pgs: 177 active+clean; 146 MiB data, 802 MiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 12 KiB/s wr, 17 op/s
Feb 20 09:56:25 np0005625203.localdomain podman[320526]: 2026-02-20 09:56:25.21877584 +0000 UTC m=+0.046221221 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:56:25 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29953ee1a408656efc79dbfa72883785d27e5e226656571aa6c3b1f8ccc31ded/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:56:25 np0005625203.localdomain podman[320526]: 2026-02-20 09:56:25.328759503 +0000 UTC m=+0.156204814 container init d06e9195aff711c363c2af0d5e4dc91de7224b4ced7eb5c5535934dbf76a3cb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-519ed234-0c28-4a63-b6ed-1122a8d9dfc9, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Feb 20 09:56:25 np0005625203.localdomain podman[320526]: 2026-02-20 09:56:25.336994058 +0000 UTC m=+0.164439369 container start d06e9195aff711c363c2af0d5e4dc91de7224b4ced7eb5c5535934dbf76a3cb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-519ed234-0c28-4a63-b6ed-1122a8d9dfc9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Feb 20 09:56:25 np0005625203.localdomain dnsmasq[320543]: started, version 2.85 cachesize 150
Feb 20 09:56:25 np0005625203.localdomain dnsmasq[320543]: DNS service limited to local subnets
Feb 20 09:56:25 np0005625203.localdomain dnsmasq[320543]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:56:25 np0005625203.localdomain dnsmasq[320543]: warning: no upstream servers configured
Feb 20 09:56:25 np0005625203.localdomain dnsmasq-dhcp[320543]: DHCPv6, static leases only on 2001:db8:0:1::, lease time 1d
Feb 20 09:56:25 np0005625203.localdomain dnsmasq-dhcp[320543]: DHCPv6, static leases only on 2001:db8:0:2::, lease time 1d
Feb 20 09:56:25 np0005625203.localdomain dnsmasq-dhcp[320543]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Feb 20 09:56:25 np0005625203.localdomain dnsmasq[320543]: read /var/lib/neutron/dhcp/519ed234-0c28-4a63-b6ed-1122a8d9dfc9/addn_hosts - 1 addresses
Feb 20 09:56:25 np0005625203.localdomain dnsmasq-dhcp[320543]: read /var/lib/neutron/dhcp/519ed234-0c28-4a63-b6ed-1122a8d9dfc9/host
Feb 20 09:56:25 np0005625203.localdomain dnsmasq-dhcp[320543]: read /var/lib/neutron/dhcp/519ed234-0c28-4a63-b6ed-1122a8d9dfc9/opts
Feb 20 09:56:25 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:25.417 262775 INFO neutron.agent.dhcp.agent [None req-17dc2da5-3487-4d6d-aef0-38ae5c0c8549 - - - - - -] Trigger reload_allocations for port admin_state_up=False, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:56:20Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4c87970>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4c877f0>], id=ed62ab7d-6845-4612-9bd7-27b11c74b983, ip_allocation=immediate, mac_address=fa:16:3e:8d:1d:1c, name=tempest-PortsIpV6TestJSON-1869455769, network_id=519ed234-0c28-4a63-b6ed-1122a8d9dfc9, port_security_enabled=True, project_id=3a09b3407c3143c8ae0948ccb18c1e61, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['5fe0aa03-55bd-43ef-a38b-499c4a5e8b30', '6e36724b-9ab8-4bfe-9f74-069d82055697'], standard_attr_id=2295, status=DOWN, tags=[], tenant_id=3a09b3407c3143c8ae0948ccb18c1e61, updated_at=2026-02-20T09:56:24Z on network 519ed234-0c28-4a63-b6ed-1122a8d9dfc9
Feb 20 09:56:25 np0005625203.localdomain dnsmasq[320543]: read /var/lib/neutron/dhcp/519ed234-0c28-4a63-b6ed-1122a8d9dfc9/addn_hosts - 1 addresses
Feb 20 09:56:25 np0005625203.localdomain dnsmasq-dhcp[320543]: read /var/lib/neutron/dhcp/519ed234-0c28-4a63-b6ed-1122a8d9dfc9/host
Feb 20 09:56:25 np0005625203.localdomain dnsmasq-dhcp[320543]: read /var/lib/neutron/dhcp/519ed234-0c28-4a63-b6ed-1122a8d9dfc9/opts
Feb 20 09:56:25 np0005625203.localdomain podman[320560]: 2026-02-20 09:56:25.607032192 +0000 UTC m=+0.060790201 container kill d06e9195aff711c363c2af0d5e4dc91de7224b4ced7eb5c5535934dbf76a3cb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-519ed234-0c28-4a63-b6ed-1122a8d9dfc9, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 20 09:56:25 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:25.641 262775 INFO neutron.agent.dhcp.agent [None req-38c9723c-dbec-4ccc-b3b4-411b3a25fcf6 - - - - - -] DHCP configuration for ports {'8b912c3c-4bb6-4ee1-afd5-eacd5c98ea0f', 'ed62ab7d-6845-4612-9bd7-27b11c74b983', '4ae9bb75-db03-4abb-b744-8dfda71e8f04', 'ce08eb39-e0b4-4e5d-a2ce-d53a15770744'} is completed
Feb 20 09:56:25 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:56:25.788 2 INFO neutron.agent.securitygroups_rpc [None req-f3d891d9-b12f-41a6-9c43-2a59a14444d4 809f6e2027f9442d8bd2b94b11475b17 3a09b3407c3143c8ae0948ccb18c1e61 - - default default] Security group member updated ['6e36724b-9ab8-4bfe-9f74-069d82055697', '5fe0aa03-55bd-43ef-a38b-499c4a5e8b30']
Feb 20 09:56:25 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:25.860 262775 INFO neutron.agent.dhcp.agent [None req-2e264db5-df9a-43b0-9f2d-a233bb07279c - - - - - -] DHCP configuration for ports {'ed62ab7d-6845-4612-9bd7-27b11c74b983'} is completed
Feb 20 09:56:26 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:26.001 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:26 np0005625203.localdomain dnsmasq[320543]: read /var/lib/neutron/dhcp/519ed234-0c28-4a63-b6ed-1122a8d9dfc9/addn_hosts - 0 addresses
Feb 20 09:56:26 np0005625203.localdomain dnsmasq-dhcp[320543]: read /var/lib/neutron/dhcp/519ed234-0c28-4a63-b6ed-1122a8d9dfc9/host
Feb 20 09:56:26 np0005625203.localdomain dnsmasq-dhcp[320543]: read /var/lib/neutron/dhcp/519ed234-0c28-4a63-b6ed-1122a8d9dfc9/opts
Feb 20 09:56:26 np0005625203.localdomain podman[320597]: 2026-02-20 09:56:26.012207587 +0000 UTC m=+0.060059679 container kill d06e9195aff711c363c2af0d5e4dc91de7224b4ced7eb5c5535934dbf76a3cb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-519ed234-0c28-4a63-b6ed-1122a8d9dfc9, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127)
Feb 20 09:56:26 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 09:56:26 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:56:26 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:26.625 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:4e:b0 2001:db8::f816:3eff:fef0:4eb0'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fef0:4eb0/64', 'neutron:device_id': 'ovnmeta-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '18', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=cebd560e-7047-4cc1-9642-f5b7ec377d58) old=Port_Binding(mac=['fa:16:3e:f0:4e:b0 10.100.0.2 2001:db8::f816:3eff:fef0:4eb0'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fef0:4eb0/64', 'neutron:device_id': 'ovnmeta-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:56:26 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:26.629 161112 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port cebd560e-7047-4cc1-9642-f5b7ec377d58 in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 updated
Feb 20 09:56:26 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:26.632 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 811e2462-6872-485d-9c09-d2dd9cb25273, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:56:26 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:26.633 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[63907ca3-bc2e-451f-9697-13dbfffff483]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:56:27 np0005625203.localdomain ceph-mon[296066]: pgmap v305: 177 pgs: 177 active+clean; 146 MiB data, 802 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 21 KiB/s wr, 32 op/s
Feb 20 09:56:27 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "da4b4a91-1d53-4396-80dd-4a8521885337", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 09:56:27 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "da4b4a91-1d53-4396-80dd-4a8521885337", "format": "json"}]: dispatch
Feb 20 09:56:27 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:56:27 np0005625203.localdomain dnsmasq[320543]: exiting on receipt of SIGTERM
Feb 20 09:56:27 np0005625203.localdomain podman[320635]: 2026-02-20 09:56:27.446403539 +0000 UTC m=+0.060087401 container kill d06e9195aff711c363c2af0d5e4dc91de7224b4ced7eb5c5535934dbf76a3cb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-519ed234-0c28-4a63-b6ed-1122a8d9dfc9, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 20 09:56:27 np0005625203.localdomain systemd[1]: libpod-d06e9195aff711c363c2af0d5e4dc91de7224b4ced7eb5c5535934dbf76a3cb0.scope: Deactivated successfully.
Feb 20 09:56:27 np0005625203.localdomain podman[320650]: 2026-02-20 09:56:27.518064765 +0000 UTC m=+0.055835187 container died d06e9195aff711c363c2af0d5e4dc91de7224b4ced7eb5c5535934dbf76a3cb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-519ed234-0c28-4a63-b6ed-1122a8d9dfc9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:56:27 np0005625203.localdomain podman[320650]: 2026-02-20 09:56:27.548333152 +0000 UTC m=+0.086103524 container cleanup d06e9195aff711c363c2af0d5e4dc91de7224b4ced7eb5c5535934dbf76a3cb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-519ed234-0c28-4a63-b6ed-1122a8d9dfc9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Feb 20 09:56:27 np0005625203.localdomain systemd[1]: libpod-conmon-d06e9195aff711c363c2af0d5e4dc91de7224b4ced7eb5c5535934dbf76a3cb0.scope: Deactivated successfully.
Feb 20 09:56:27 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:56:27.558 2 INFO neutron.agent.securitygroups_rpc [None req-a6f56626-9080-4b48-8909-d5cbdaffd977 f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']
Feb 20 09:56:27 np0005625203.localdomain podman[320651]: 2026-02-20 09:56:27.591096306 +0000 UTC m=+0.123375789 container remove d06e9195aff711c363c2af0d5e4dc91de7224b4ced7eb5c5535934dbf76a3cb0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-519ed234-0c28-4a63-b6ed-1122a8d9dfc9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb 20 09:56:27 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:27.801 262775 INFO neutron.agent.linux.ip_lib [None req-48a61a76-e0cb-4b99-95d6-dd1b9da7b210 - - - - - -] Device tapd723685c-b1 cannot be used as it has no MAC address
Feb 20 09:56:27 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:27.842 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:27 np0005625203.localdomain kernel: device tapd723685c-b1 entered promiscuous mode
Feb 20 09:56:27 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581387.8529] manager: (tapd723685c-b1): new Generic device (/org/freedesktop/NetworkManager/Devices/59)
Feb 20 09:56:27 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:27Z|00295|binding|INFO|Claiming lport d723685c-b133-4caa-a02b-6a566284e980 for this chassis.
Feb 20 09:56:27 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:27Z|00296|binding|INFO|d723685c-b133-4caa-a02b-6a566284e980: Claiming unknown
Feb 20 09:56:27 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:27.853 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:27 np0005625203.localdomain systemd-udevd[320704]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 09:56:27 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:27.863 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe3f:99c1/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=d723685c-b133-4caa-a02b-6a566284e980) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:56:27 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:27Z|00297|binding|INFO|Setting lport d723685c-b133-4caa-a02b-6a566284e980 up in Southbound
Feb 20 09:56:27 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:27.864 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:27 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:27Z|00298|binding|INFO|Setting lport d723685c-b133-4caa-a02b-6a566284e980 ovn-installed in OVS
Feb 20 09:56:27 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:27.866 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:27 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:27.867 161112 INFO neutron.agent.ovn.metadata.agent [-] Port d723685c-b133-4caa-a02b-6a566284e980 in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 bound to our chassis
Feb 20 09:56:27 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:27.870 161112 DEBUG neutron.agent.ovn.metadata.agent [-] Port 059df2e6-b9ad-4499-848a-ccb5e56bef8b IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Feb 20 09:56:27 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:27.871 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 811e2462-6872-485d-9c09-d2dd9cb25273, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:56:27 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:27.871 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:27 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:27.872 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[12f9dc0e-edaa-440c-ae8c-8053b7cb2263]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:56:27 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:27.909 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:27 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:27.948 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:28 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:28.029 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:28 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-29953ee1a408656efc79dbfa72883785d27e5e226656571aa6c3b1f8ccc31ded-merged.mount: Deactivated successfully.
Feb 20 09:56:28 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d06e9195aff711c363c2af0d5e4dc91de7224b4ced7eb5c5535934dbf76a3cb0-userdata-shm.mount: Deactivated successfully.
Feb 20 09:56:28 np0005625203.localdomain podman[320769]: 
Feb 20 09:56:28 np0005625203.localdomain podman[320769]: 2026-02-20 09:56:28.582207858 +0000 UTC m=+0.092848454 container create 361501599a162147fa8d1368b21bd4a705d996972399b87c4dae31331f16aa78 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-519ed234-0c28-4a63-b6ed-1122a8d9dfc9, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Feb 20 09:56:28 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:56:28.591 2 INFO neutron.agent.securitygroups_rpc [None req-6a8272e4-f5a1-42d2-a801-cea63c76a8af f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']
Feb 20 09:56:28 np0005625203.localdomain systemd[1]: Started libpod-conmon-361501599a162147fa8d1368b21bd4a705d996972399b87c4dae31331f16aa78.scope.
Feb 20 09:56:28 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:56:28 np0005625203.localdomain podman[320769]: 2026-02-20 09:56:28.543943564 +0000 UTC m=+0.054584190 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:56:28 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d41e57b21901058a8b39fb256d8c5a07775321e5e922505e34509dc5af15a5a4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:56:28 np0005625203.localdomain podman[320769]: 2026-02-20 09:56:28.657534008 +0000 UTC m=+0.168174594 container init 361501599a162147fa8d1368b21bd4a705d996972399b87c4dae31331f16aa78 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-519ed234-0c28-4a63-b6ed-1122a8d9dfc9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Feb 20 09:56:28 np0005625203.localdomain podman[320769]: 2026-02-20 09:56:28.666179966 +0000 UTC m=+0.176820562 container start 361501599a162147fa8d1368b21bd4a705d996972399b87c4dae31331f16aa78 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-519ed234-0c28-4a63-b6ed-1122a8d9dfc9, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb 20 09:56:28 np0005625203.localdomain dnsmasq[320791]: started, version 2.85 cachesize 150
Feb 20 09:56:28 np0005625203.localdomain dnsmasq[320791]: DNS service limited to local subnets
Feb 20 09:56:28 np0005625203.localdomain dnsmasq[320791]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:56:28 np0005625203.localdomain dnsmasq[320791]: warning: no upstream servers configured
Feb 20 09:56:28 np0005625203.localdomain dnsmasq-dhcp[320791]: DHCPv6, static leases only on 2001:db8:0:1::, lease time 1d
Feb 20 09:56:28 np0005625203.localdomain dnsmasq-dhcp[320791]: DHCPv6, static leases only on 2001:db8:0:2::, lease time 1d
Feb 20 09:56:28 np0005625203.localdomain dnsmasq[320791]: read /var/lib/neutron/dhcp/519ed234-0c28-4a63-b6ed-1122a8d9dfc9/addn_hosts - 0 addresses
Feb 20 09:56:28 np0005625203.localdomain dnsmasq-dhcp[320791]: read /var/lib/neutron/dhcp/519ed234-0c28-4a63-b6ed-1122a8d9dfc9/host
Feb 20 09:56:28 np0005625203.localdomain dnsmasq-dhcp[320791]: read /var/lib/neutron/dhcp/519ed234-0c28-4a63-b6ed-1122a8d9dfc9/opts
Feb 20 09:56:28 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:56:28.895 2 INFO neutron.agent.securitygroups_rpc [None req-222fdb52-3334-45ab-8f45-945b32b8d031 809f6e2027f9442d8bd2b94b11475b17 3a09b3407c3143c8ae0948ccb18c1e61 - - default default] Security group member updated ['b2e5856c-f1df-4bbc-8f9c-41698aa249c6']
Feb 20 09:56:28 np0005625203.localdomain podman[320814]: 
Feb 20 09:56:28 np0005625203.localdomain podman[320814]: 2026-02-20 09:56:28.950780551 +0000 UTC m=+0.084109893 container create a2653c0466942bb4038389eeaca1b25661310fc3eb4cdd397d859eb718eee9c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 20 09:56:28 np0005625203.localdomain systemd[1]: Started libpod-conmon-a2653c0466942bb4038389eeaca1b25661310fc3eb4cdd397d859eb718eee9c7.scope.
Feb 20 09:56:28 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:28.981 262775 INFO neutron.agent.dhcp.agent [None req-cb2847d3-c8b4-4202-8807-315ed75fd3aa - - - - - -] DHCP configuration for ports {'8b912c3c-4bb6-4ee1-afd5-eacd5c98ea0f', '4ae9bb75-db03-4abb-b744-8dfda71e8f04', 'ce08eb39-e0b4-4e5d-a2ce-d53a15770744'} is completed
Feb 20 09:56:28 np0005625203.localdomain podman[240359]: time="2026-02-20T09:56:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:56:28 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:56:29 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c3b7a939a98d944bc9501cb770e2848186132627d4da61be676d3d958a9e989/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:56:29 np0005625203.localdomain podman[320814]: 2026-02-20 09:56:28.908205374 +0000 UTC m=+0.041534766 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:56:29 np0005625203.localdomain podman[320814]: 2026-02-20 09:56:29.115803836 +0000 UTC m=+0.249133208 container init a2653c0466942bb4038389eeaca1b25661310fc3eb4cdd397d859eb718eee9c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Feb 20 09:56:29 np0005625203.localdomain dnsmasq[320861]: started, version 2.85 cachesize 150
Feb 20 09:56:29 np0005625203.localdomain dnsmasq[320861]: DNS service limited to local subnets
Feb 20 09:56:29 np0005625203.localdomain dnsmasq[320861]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:56:29 np0005625203.localdomain dnsmasq[320861]: warning: no upstream servers configured
Feb 20 09:56:29 np0005625203.localdomain dnsmasq[320861]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 0 addresses
Feb 20 09:56:29 np0005625203.localdomain dnsmasq[320791]: exiting on receipt of SIGTERM
Feb 20 09:56:29 np0005625203.localdomain podman[320848]: 2026-02-20 09:56:29.137994093 +0000 UTC m=+0.051798994 container kill 361501599a162147fa8d1368b21bd4a705d996972399b87c4dae31331f16aa78 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-519ed234-0c28-4a63-b6ed-1122a8d9dfc9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true)
Feb 20 09:56:29 np0005625203.localdomain systemd[1]: libpod-361501599a162147fa8d1368b21bd4a705d996972399b87c4dae31331f16aa78.scope: Deactivated successfully.
Feb 20 09:56:29 np0005625203.localdomain podman[320814]: 2026-02-20 09:56:29.176094221 +0000 UTC m=+0.309423563 container start a2653c0466942bb4038389eeaca1b25661310fc3eb4cdd397d859eb718eee9c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:56:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:56:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 159526 "" "Go-http-client/1.1"
Feb 20 09:56:29 np0005625203.localdomain podman[240359]: 2026-02-20 09:56:29.205663736 +0000 UTC m=+1829.428335907 container died 361501599a162147fa8d1368b21bd4a705d996972399b87c4dae31331f16aa78 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-519ed234-0c28-4a63-b6ed-1122a8d9dfc9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb 20 09:56:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:56:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19125 "" "Go-http-client/1.1"
Feb 20 09:56:29 np0005625203.localdomain podman[320865]: 2026-02-20 09:56:29.315196895 +0000 UTC m=+0.156865064 container remove 361501599a162147fa8d1368b21bd4a705d996972399b87c4dae31331f16aa78 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-519ed234-0c28-4a63-b6ed-1122a8d9dfc9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 20 09:56:29 np0005625203.localdomain systemd[1]: libpod-conmon-361501599a162147fa8d1368b21bd4a705d996972399b87c4dae31331f16aa78.scope: Deactivated successfully.
Feb 20 09:56:29 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:29.329 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:29 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:29Z|00299|binding|INFO|Releasing lport ce08eb39-e0b4-4e5d-a2ce-d53a15770744 from this chassis (sb_readonly=0)
Feb 20 09:56:29 np0005625203.localdomain kernel: device tapce08eb39-e0 left promiscuous mode
Feb 20 09:56:29 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:29Z|00300|binding|INFO|Setting lport ce08eb39-e0b4-4e5d-a2ce-d53a15770744 down in Southbound
Feb 20 09:56:29 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:29.332 262775 INFO neutron.agent.dhcp.agent [None req-b5105f70-cce5-483b-9228-de87a73e5d9f - - - - - -] DHCP configuration for ports {'cebd560e-7047-4cc1-9642-f5b7ec377d58'} is completed
Feb 20 09:56:29 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:29.338 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1::2/64 2001:db8:0:2::2/64 2001:db8::2/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-519ed234-0c28-4a63-b6ed-1122a8d9dfc9', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-519ed234-0c28-4a63-b6ed-1122a8d9dfc9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3a09b3407c3143c8ae0948ccb18c1e61', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625203.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d5ca6d31-d3f7-4561-ade0-690f710982c4, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=ce08eb39-e0b4-4e5d-a2ce-d53a15770744) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:56:29 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:29.340 161112 INFO neutron.agent.ovn.metadata.agent [-] Port ce08eb39-e0b4-4e5d-a2ce-d53a15770744 in datapath 519ed234-0c28-4a63-b6ed-1122a8d9dfc9 unbound from our chassis
Feb 20 09:56:29 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:29.342 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 519ed234-0c28-4a63-b6ed-1122a8d9dfc9 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:56:29 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:29.343 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[429a365d-36aa-4661-933d-f346b0be52b4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:56:29 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:56:29 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:29.353 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:29 np0005625203.localdomain systemd[1]: tmp-crun.wL93Gv.mount: Deactivated successfully.
Feb 20 09:56:29 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-d41e57b21901058a8b39fb256d8c5a07775321e5e922505e34509dc5af15a5a4-merged.mount: Deactivated successfully.
Feb 20 09:56:29 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-361501599a162147fa8d1368b21bd4a705d996972399b87c4dae31331f16aa78-userdata-shm.mount: Deactivated successfully.
Feb 20 09:56:29 np0005625203.localdomain ceph-mon[296066]: pgmap v306: 177 pgs: 177 active+clean; 146 MiB data, 802 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 15 KiB/s wr, 31 op/s
Feb 20 09:56:29 np0005625203.localdomain podman[320909]: 2026-02-20 09:56:29.465080872 +0000 UTC m=+0.056856530 container kill a2653c0466942bb4038389eeaca1b25661310fc3eb4cdd397d859eb718eee9c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127)
Feb 20 09:56:29 np0005625203.localdomain dnsmasq[320861]: exiting on receipt of SIGTERM
Feb 20 09:56:29 np0005625203.localdomain systemd[1]: libpod-a2653c0466942bb4038389eeaca1b25661310fc3eb4cdd397d859eb718eee9c7.scope: Deactivated successfully.
Feb 20 09:56:29 np0005625203.localdomain systemd[1]: run-netns-qdhcp\x2d519ed234\x2d0c28\x2d4a63\x2db6ed\x2d1122a8d9dfc9.mount: Deactivated successfully.
Feb 20 09:56:29 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:29.532 262775 INFO neutron.agent.dhcp.agent [None req-a4660573-1b8c-40ef-aa70-ab2a69e19c7f - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:56:29 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:29.532 262775 INFO neutron.agent.dhcp.agent [None req-a4660573-1b8c-40ef-aa70-ab2a69e19c7f - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:56:29 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:29.535 262775 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:56:29 np0005625203.localdomain podman[320921]: 2026-02-20 09:56:29.535827621 +0000 UTC m=+0.055702304 container died a2653c0466942bb4038389eeaca1b25661310fc3eb4cdd397d859eb718eee9c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:56:29 np0005625203.localdomain podman[320921]: 2026-02-20 09:56:29.566951364 +0000 UTC m=+0.086826007 container cleanup a2653c0466942bb4038389eeaca1b25661310fc3eb4cdd397d859eb718eee9c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:56:29 np0005625203.localdomain systemd[1]: libpod-conmon-a2653c0466942bb4038389eeaca1b25661310fc3eb4cdd397d859eb718eee9c7.scope: Deactivated successfully.
Feb 20 09:56:29 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:29.607 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:29 np0005625203.localdomain podman[320923]: 2026-02-20 09:56:29.629388605 +0000 UTC m=+0.143357016 container remove a2653c0466942bb4038389eeaca1b25661310fc3eb4cdd397d859eb718eee9c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:56:29 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:29.642 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:29 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:29Z|00301|binding|INFO|Releasing lport d723685c-b133-4caa-a02b-6a566284e980 from this chassis (sb_readonly=0)
Feb 20 09:56:29 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:29Z|00302|binding|INFO|Setting lport d723685c-b133-4caa-a02b-6a566284e980 down in Southbound
Feb 20 09:56:29 np0005625203.localdomain kernel: device tapd723685c-b1 left promiscuous mode
Feb 20 09:56:29 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:29.653 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe3f:99c1/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625203.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=d723685c-b133-4caa-a02b-6a566284e980) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:56:29 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:29.655 161112 INFO neutron.agent.ovn.metadata.agent [-] Port d723685c-b133-4caa-a02b-6a566284e980 in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 unbound from our chassis
Feb 20 09:56:29 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:29.658 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 811e2462-6872-485d-9c09-d2dd9cb25273, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:56:29 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:29.659 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[54775b76-c936-4e4d-b05b-b5ca5cabef1a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:56:29 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:29.666 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:29 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:29.907 262775 INFO neutron.agent.dhcp.agent [None req-cd1a905f-dce3-4fe4-a14b-0978048d6f97 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:56:29 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:29.908 262775 INFO neutron.agent.dhcp.agent [None req-cd1a905f-dce3-4fe4-a14b-0978048d6f97 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:56:29 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:29.947 262775 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:56:30 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:30.337 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:30 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:56:30 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-3c3b7a939a98d944bc9501cb770e2848186132627d4da61be676d3d958a9e989-merged.mount: Deactivated successfully.
Feb 20 09:56:30 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a2653c0466942bb4038389eeaca1b25661310fc3eb4cdd397d859eb718eee9c7-userdata-shm.mount: Deactivated successfully.
Feb 20 09:56:30 np0005625203.localdomain systemd[1]: run-netns-qdhcp\x2d811e2462\x2d6872\x2d485d\x2d9c09\x2dd2dd9cb25273.mount: Deactivated successfully.
Feb 20 09:56:30 np0005625203.localdomain systemd[1]: tmp-crun.9piC8h.mount: Deactivated successfully.
Feb 20 09:56:30 np0005625203.localdomain podman[320951]: 2026-02-20 09:56:30.519147032 +0000 UTC m=+0.086071364 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Feb 20 09:56:30 np0005625203.localdomain podman[320951]: 2026-02-20 09:56:30.552270057 +0000 UTC m=+0.119194339 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:56:30 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:56:31 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:31.004 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:31 np0005625203.localdomain ceph-mon[296066]: pgmap v307: 177 pgs: 177 active+clean; 146 MiB data, 802 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 19 KiB/s wr, 32 op/s
Feb 20 09:56:31 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "da4b4a91-1d53-4396-80dd-4a8521885337", "format": "json"}]: dispatch
Feb 20 09:56:31 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "da4b4a91-1d53-4396-80dd-4a8521885337", "force": true, "format": "json"}]: dispatch
Feb 20 09:56:31 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:31.629 262775 INFO neutron.agent.linux.ip_lib [None req-4356702b-0197-441c-85b0-74e51b8e7fe3 - - - - - -] Device tapc617abb1-d9 cannot be used as it has no MAC address
Feb 20 09:56:31 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:31.680 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ce:83:b0 10.100.0.18 10.100.0.3'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.3/28', 'neutron:device_id': 'ovnmeta-34dc61c2-2cd5-48a1-a54d-350e15f73770', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34dc61c2-2cd5-48a1-a54d-350e15f73770', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '62dd1d7e0cf547678612304aba2895e2', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1b4d5592-ecf2-48cc-b3b1-c6ba46f9e5e6, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=dee4bf28-462f-4e5a-bb37-08fba06228d7) old=Port_Binding(mac=['fa:16:3e:ce:83:b0 10.100.0.3'], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ovnmeta-34dc61c2-2cd5-48a1-a54d-350e15f73770', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-34dc61c2-2cd5-48a1-a54d-350e15f73770', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '62dd1d7e0cf547678612304aba2895e2', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:56:31 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:31.681 161112 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port dee4bf28-462f-4e5a-bb37-08fba06228d7 in datapath 34dc61c2-2cd5-48a1-a54d-350e15f73770 updated
Feb 20 09:56:31 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:31.683 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 34dc61c2-2cd5-48a1-a54d-350e15f73770, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:56:31 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:31.696 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[901eadfd-1307-44b2-9251-d4577a23d415]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:56:31 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:31.700 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:31 np0005625203.localdomain kernel: device tapc617abb1-d9 entered promiscuous mode
Feb 20 09:56:31 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581391.7064] manager: (tapc617abb1-d9): new Generic device (/org/freedesktop/NetworkManager/Devices/60)
Feb 20 09:56:31 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:31Z|00303|binding|INFO|Claiming lport c617abb1-d92f-42cf-bb62-df84bc39db2d for this chassis.
Feb 20 09:56:31 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:31Z|00304|binding|INFO|c617abb1-d92f-42cf-bb62-df84bc39db2d: Claiming unknown
Feb 20 09:56:31 np0005625203.localdomain systemd-udevd[320981]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 09:56:31 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:31.710 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:31 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:31.715 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fea5:9c69/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=c617abb1-d92f-42cf-bb62-df84bc39db2d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:56:31 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:31.716 161112 INFO neutron.agent.ovn.metadata.agent [-] Port c617abb1-d92f-42cf-bb62-df84bc39db2d in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 bound to our chassis
Feb 20 09:56:31 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:31.718 161112 DEBUG neutron.agent.ovn.metadata.agent [-] Port 8113be45-2bc7-41b6-81c3-13790d75db64 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Feb 20 09:56:31 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:31.718 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 811e2462-6872-485d-9c09-d2dd9cb25273, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:56:31 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:31.718 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[ac889f15-ff9d-4e63-8036-34972eed72d0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:56:31 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:31Z|00305|binding|INFO|Setting lport c617abb1-d92f-42cf-bb62-df84bc39db2d ovn-installed in OVS
Feb 20 09:56:31 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:31Z|00306|binding|INFO|Setting lport c617abb1-d92f-42cf-bb62-df84bc39db2d up in Southbound
Feb 20 09:56:31 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:31.744 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:31 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:31.784 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:31 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:56:31.795 2 INFO neutron.agent.securitygroups_rpc [None req-c16a47d9-8c3c-4273-8f35-4d2edcf8a46b f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']
Feb 20 09:56:31 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:31.813 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:32 np0005625203.localdomain podman[321036]: 
Feb 20 09:56:32 np0005625203.localdomain podman[321036]: 2026-02-20 09:56:32.647301512 +0000 UTC m=+0.089920233 container create 4cfdad6d362afb63859f895d6e88d6e1bb02cc759bac2b39dd5cd1c04729e0bc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:56:32 np0005625203.localdomain systemd[1]: Started libpod-conmon-4cfdad6d362afb63859f895d6e88d6e1bb02cc759bac2b39dd5cd1c04729e0bc.scope.
Feb 20 09:56:32 np0005625203.localdomain systemd[1]: tmp-crun.zty5k7.mount: Deactivated successfully.
Feb 20 09:56:32 np0005625203.localdomain podman[321036]: 2026-02-20 09:56:32.604621232 +0000 UTC m=+0.047239993 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:56:32 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:56:32 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 09:56:32 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3147010759' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:56:32 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbcd0e0212077765175598cd0e24e6af040e85c218e2ee1a1206aeaffe959e4f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:56:32 np0005625203.localdomain podman[321036]: 2026-02-20 09:56:32.722020064 +0000 UTC m=+0.164638775 container init 4cfdad6d362afb63859f895d6e88d6e1bb02cc759bac2b39dd5cd1c04729e0bc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:56:32 np0005625203.localdomain podman[321036]: 2026-02-20 09:56:32.734533031 +0000 UTC m=+0.177151742 container start 4cfdad6d362afb63859f895d6e88d6e1bb02cc759bac2b39dd5cd1c04729e0bc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 20 09:56:32 np0005625203.localdomain ceph-mon[296066]: pgmap v308: 177 pgs: 177 active+clean; 146 MiB data, 802 MiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 14 KiB/s wr, 30 op/s
Feb 20 09:56:32 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/3147010759' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:56:32 np0005625203.localdomain dnsmasq[321054]: started, version 2.85 cachesize 150
Feb 20 09:56:32 np0005625203.localdomain dnsmasq[321054]: DNS service limited to local subnets
Feb 20 09:56:32 np0005625203.localdomain dnsmasq[321054]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:56:32 np0005625203.localdomain dnsmasq[321054]: warning: no upstream servers configured
Feb 20 09:56:32 np0005625203.localdomain dnsmasq-dhcp[321054]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Feb 20 09:56:32 np0005625203.localdomain dnsmasq[321054]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 0 addresses
Feb 20 09:56:32 np0005625203.localdomain dnsmasq-dhcp[321054]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/host
Feb 20 09:56:32 np0005625203.localdomain dnsmasq-dhcp[321054]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/opts
Feb 20 09:56:32 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:32.794 262775 INFO neutron.agent.dhcp.agent [None req-4356702b-0197-441c-85b0-74e51b8e7fe3 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:56:31Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4cc0280>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4e9c8b0>], id=36f7772e-8fbe-4e2d-9f3e-669f7cba33ef, ip_allocation=immediate, mac_address=fa:16:3e:3f:dd:81, name=tempest-NetworksTestDHCPv6-580251622, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:54:44Z, description=, dns_domain=, id=811e2462-6872-485d-9c09-d2dd9cb25273, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-610089291, port_security_enabled=True, project_id=cb36e48ce4264babb412d413a8bf7b9f, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=47373, qos_policy_id=None, revision_number=40, router:external=False, shared=False, standard_attr_id=1864, status=ACTIVE, subnets=['a94c992f-52a8-42d3-91ba-79b3b351c78a'], tags=[], tenant_id=cb36e48ce4264babb412d413a8bf7b9f, updated_at=2026-02-20T09:56:29Z, vlan_transparent=None, network_id=811e2462-6872-485d-9c09-d2dd9cb25273, port_security_enabled=True, project_id=cb36e48ce4264babb412d413a8bf7b9f, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['a85f25c3-88e2-4d71-a8d4-72a266c1246c'], standard_attr_id=2332, status=DOWN, tags=[], tenant_id=cb36e48ce4264babb412d413a8bf7b9f, updated_at=2026-02-20T09:56:31Z on network 811e2462-6872-485d-9c09-d2dd9cb25273
Feb 20 09:56:32 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:32.905 262775 INFO neutron.agent.dhcp.agent [None req-57e9d34e-3681-4756-8c56-a9b583a1ac6c - - - - - -] DHCP configuration for ports {'cebd560e-7047-4cc1-9642-f5b7ec377d58'} is completed
Feb 20 09:56:32 np0005625203.localdomain dnsmasq[321054]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 1 addresses
Feb 20 09:56:32 np0005625203.localdomain dnsmasq-dhcp[321054]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/host
Feb 20 09:56:32 np0005625203.localdomain dnsmasq-dhcp[321054]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/opts
Feb 20 09:56:32 np0005625203.localdomain podman[321072]: 2026-02-20 09:56:32.954403813 +0000 UTC m=+0.048498971 container kill 4cfdad6d362afb63859f895d6e88d6e1bb02cc759bac2b39dd5cd1c04729e0bc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, io.buildah.version=1.41.3)
Feb 20 09:56:33 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:56:33.110 2 INFO neutron.agent.securitygroups_rpc [None req-b75820d6-6baf-4494-b7b1-8acd63dcbbd9 f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']
Feb 20 09:56:33 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:56:33.144 2 INFO neutron.agent.securitygroups_rpc [None req-8fddf0ed-4d67-47fd-a98a-ec6a15c12895 469c956856df4f7fa0303f662df3cdef 62dd1d7e0cf547678612304aba2895e2 - - default default] Security group member updated ['72ed92b6-af24-4274-854b-a52220405faf']
Feb 20 09:56:33 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:33.225 262775 INFO neutron.agent.dhcp.agent [None req-a84cff0c-9236-4b48-877a-dcdaf84b7d0c - - - - - -] DHCP configuration for ports {'36f7772e-8fbe-4e2d-9f3e-669f7cba33ef'} is completed
Feb 20 09:56:33 np0005625203.localdomain dnsmasq[321054]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 0 addresses
Feb 20 09:56:33 np0005625203.localdomain dnsmasq-dhcp[321054]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/host
Feb 20 09:56:33 np0005625203.localdomain podman[321109]: 2026-02-20 09:56:33.330943032 +0000 UTC m=+0.062440403 container kill 4cfdad6d362afb63859f895d6e88d6e1bb02cc759bac2b39dd5cd1c04729e0bc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127)
Feb 20 09:56:33 np0005625203.localdomain dnsmasq-dhcp[321054]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/opts
Feb 20 09:56:33 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e153 e153: 6 total, 6 up, 6 in
Feb 20 09:56:33 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "9754040c-efe1-4b31-bb60-2e1a242177cd", "snap_name": "81bbf3cf-dd05-49ca-a680-c731aa18f72b", "format": "json"}]: dispatch
Feb 20 09:56:34 np0005625203.localdomain dnsmasq[321054]: exiting on receipt of SIGTERM
Feb 20 09:56:34 np0005625203.localdomain systemd[1]: libpod-4cfdad6d362afb63859f895d6e88d6e1bb02cc759bac2b39dd5cd1c04729e0bc.scope: Deactivated successfully.
Feb 20 09:56:34 np0005625203.localdomain podman[321148]: 2026-02-20 09:56:34.066984394 +0000 UTC m=+0.069286505 container kill 4cfdad6d362afb63859f895d6e88d6e1bb02cc759bac2b39dd5cd1c04729e0bc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:56:34 np0005625203.localdomain podman[321161]: 2026-02-20 09:56:34.132557493 +0000 UTC m=+0.054358533 container died 4cfdad6d362afb63859f895d6e88d6e1bb02cc759bac2b39dd5cd1c04729e0bc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 20 09:56:34 np0005625203.localdomain systemd[1]: tmp-crun.4vxy9Z.mount: Deactivated successfully.
Feb 20 09:56:34 np0005625203.localdomain podman[321161]: 2026-02-20 09:56:34.17127816 +0000 UTC m=+0.093079150 container cleanup 4cfdad6d362afb63859f895d6e88d6e1bb02cc759bac2b39dd5cd1c04729e0bc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 20 09:56:34 np0005625203.localdomain systemd[1]: libpod-conmon-4cfdad6d362afb63859f895d6e88d6e1bb02cc759bac2b39dd5cd1c04729e0bc.scope: Deactivated successfully.
Feb 20 09:56:34 np0005625203.localdomain podman[321169]: 2026-02-20 09:56:34.222791264 +0000 UTC m=+0.129351012 container remove 4cfdad6d362afb63859f895d6e88d6e1bb02cc759bac2b39dd5cd1c04729e0bc (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:56:34 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:34.252 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:34 np0005625203.localdomain kernel: device tapc617abb1-d9 left promiscuous mode
Feb 20 09:56:34 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:34Z|00307|binding|INFO|Releasing lport c617abb1-d92f-42cf-bb62-df84bc39db2d from this chassis (sb_readonly=0)
Feb 20 09:56:34 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:34Z|00308|binding|INFO|Setting lport c617abb1-d92f-42cf-bb62-df84bc39db2d down in Southbound
Feb 20 09:56:34 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:34.261 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fea5:9c69/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625203.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=c617abb1-d92f-42cf-bb62-df84bc39db2d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:56:34 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:34.263 161112 INFO neutron.agent.ovn.metadata.agent [-] Port c617abb1-d92f-42cf-bb62-df84bc39db2d in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 unbound from our chassis
Feb 20 09:56:34 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:34.265 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 811e2462-6872-485d-9c09-d2dd9cb25273, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:56:34 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:34.266 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[4d6219eb-377c-4760-8ed0-3ab8411660de]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:56:34 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:34.275 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:34 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:56:34 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:56:34.444 2 INFO neutron.agent.securitygroups_rpc [None req-ceb6fbf0-e236-46d5-ab31-4b9208acd398 469c956856df4f7fa0303f662df3cdef 62dd1d7e0cf547678612304aba2895e2 - - default default] Security group member updated ['72ed92b6-af24-4274-854b-a52220405faf']
Feb 20 09:56:34 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:34.484 262775 INFO neutron.agent.dhcp.agent [None req-93b24e44-228a-4da5-8ee2-b4f5217b8693 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:56:34 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:34.611 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:34 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-fbcd0e0212077765175598cd0e24e6af040e85c218e2ee1a1206aeaffe959e4f-merged.mount: Deactivated successfully.
Feb 20 09:56:34 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4cfdad6d362afb63859f895d6e88d6e1bb02cc759bac2b39dd5cd1c04729e0bc-userdata-shm.mount: Deactivated successfully.
Feb 20 09:56:34 np0005625203.localdomain systemd[1]: run-netns-qdhcp\x2d811e2462\x2d6872\x2d485d\x2d9c09\x2dd2dd9cb25273.mount: Deactivated successfully.
Feb 20 09:56:34 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e154 e154: 6 total, 6 up, 6 in
Feb 20 09:56:34 np0005625203.localdomain ceph-mon[296066]: pgmap v309: 177 pgs: 177 active+clean; 146 MiB data, 802 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 14 KiB/s wr, 16 op/s
Feb 20 09:56:34 np0005625203.localdomain ceph-mon[296066]: osdmap e153: 6 total, 6 up, 6 in
Feb 20 09:56:35 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:56:35.742 2 INFO neutron.agent.securitygroups_rpc [None req-21b5cfcb-ef7f-4dc6-82f5-46fe7ab7fc9a 469c956856df4f7fa0303f662df3cdef 62dd1d7e0cf547678612304aba2895e2 - - default default] Security group member updated ['72ed92b6-af24-4274-854b-a52220405faf']
Feb 20 09:56:35 np0005625203.localdomain ceph-mon[296066]: osdmap e154: 6 total, 6 up, 6 in
Feb 20 09:56:35 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e155 e155: 6 total, 6 up, 6 in
Feb 20 09:56:36 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:36.007 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:36 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:56:36 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:56:36.675 2 INFO neutron.agent.securitygroups_rpc [None req-5f75f844-b31b-4010-9a93-efcf0b2c4eb8 469c956856df4f7fa0303f662df3cdef 62dd1d7e0cf547678612304aba2895e2 - - default default] Security group member updated ['72ed92b6-af24-4274-854b-a52220405faf']
Feb 20 09:56:36 np0005625203.localdomain podman[321192]: 2026-02-20 09:56:36.769649439 +0000 UTC m=+0.084005980 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true)
Feb 20 09:56:36 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:36.808 262775 INFO neutron.agent.linux.ip_lib [None req-14c7b02c-459d-4c70-af5b-3f0fba264970 - - - - - -] Device tap8a95f8e8-aa cannot be used as it has no MAC address
Feb 20 09:56:36 np0005625203.localdomain podman[321192]: 2026-02-20 09:56:36.849387616 +0000 UTC m=+0.163744157 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_id=ovn_controller)
Feb 20 09:56:36 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:36.859 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:36 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:56:36 np0005625203.localdomain kernel: device tap8a95f8e8-aa entered promiscuous mode
Feb 20 09:56:36 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581396.8689] manager: (tap8a95f8e8-aa): new Generic device (/org/freedesktop/NetworkManager/Devices/61)
Feb 20 09:56:36 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:36.870 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:36 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:36Z|00309|binding|INFO|Claiming lport 8a95f8e8-aaa7-42b4-90e6-7ae43289d6aa for this chassis.
Feb 20 09:56:36 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:36Z|00310|binding|INFO|8a95f8e8-aaa7-42b4-90e6-7ae43289d6aa: Claiming unknown
Feb 20 09:56:36 np0005625203.localdomain systemd-udevd[321227]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 09:56:36 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:36Z|00311|binding|INFO|Setting lport 8a95f8e8-aaa7-42b4-90e6-7ae43289d6aa ovn-installed in OVS
Feb 20 09:56:36 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:36Z|00312|binding|INFO|Setting lport 8a95f8e8-aaa7-42b4-90e6-7ae43289d6aa up in Southbound
Feb 20 09:56:36 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:36.879 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe81:7e1c/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=8a95f8e8-aaa7-42b4-90e6-7ae43289d6aa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:56:36 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:36.880 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:36 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:36.881 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 8a95f8e8-aaa7-42b4-90e6-7ae43289d6aa in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 bound to our chassis
Feb 20 09:56:36 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:36.884 161112 DEBUG neutron.agent.ovn.metadata.agent [-] Port 9cf25ab8-565c-4b8d-bbf5-f4d91b919eef IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Feb 20 09:56:36 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:36.884 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 811e2462-6872-485d-9c09-d2dd9cb25273, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:56:36 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:36.885 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:36 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:36.885 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[6b48b727-fb86-4524-ad76-da12aae5fc08]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:56:36 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:56:36.899 2 INFO neutron.agent.securitygroups_rpc [None req-a5e98a26-c124-4fdc-9abc-b12558eae8ef f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']
Feb 20 09:56:36 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:36.910 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:36 np0005625203.localdomain ceph-mon[296066]: pgmap v312: 177 pgs: 177 active+clean; 146 MiB data, 802 MiB used, 41 GiB / 42 GiB avail; 4.6 KiB/s rd, 18 KiB/s wr, 11 op/s
Feb 20 09:56:36 np0005625203.localdomain ceph-mon[296066]: osdmap e155: 6 total, 6 up, 6 in
Feb 20 09:56:36 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e156 e156: 6 total, 6 up, 6 in
Feb 20 09:56:36 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:36.957 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:36 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:36.991 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:56:37 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 09:56:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:56:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:56:37 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 09:56:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:56:37 np0005625203.localdomain podman[321282]: 
Feb 20 09:56:37 np0005625203.localdomain podman[321282]: 2026-02-20 09:56:37.833917195 +0000 UTC m=+0.088402456 container create 0884119295e40478ed363076751aebe53e32a85496fb348d0023039d6483e75f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 20 09:56:37 np0005625203.localdomain systemd[1]: Started libpod-conmon-0884119295e40478ed363076751aebe53e32a85496fb348d0023039d6483e75f.scope.
Feb 20 09:56:37 np0005625203.localdomain podman[321282]: 2026-02-20 09:56:37.791916046 +0000 UTC m=+0.046401357 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:56:37 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:56:37 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6af144045be0048ba6688455d6b7f68079e4560438aa124ea22d3ba91e2deb5a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:56:37 np0005625203.localdomain podman[321282]: 2026-02-20 09:56:37.905252002 +0000 UTC m=+0.159737263 container init 0884119295e40478ed363076751aebe53e32a85496fb348d0023039d6483e75f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Feb 20 09:56:37 np0005625203.localdomain podman[321282]: 2026-02-20 09:56:37.917691117 +0000 UTC m=+0.172176378 container start 0884119295e40478ed363076751aebe53e32a85496fb348d0023039d6483e75f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 20 09:56:37 np0005625203.localdomain dnsmasq[321300]: started, version 2.85 cachesize 150
Feb 20 09:56:37 np0005625203.localdomain dnsmasq[321300]: DNS service limited to local subnets
Feb 20 09:56:37 np0005625203.localdomain dnsmasq[321300]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:56:37 np0005625203.localdomain dnsmasq[321300]: warning: no upstream servers configured
Feb 20 09:56:37 np0005625203.localdomain dnsmasq[321300]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 0 addresses
Feb 20 09:56:37 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "9754040c-efe1-4b31-bb60-2e1a242177cd", "snap_name": "81bbf3cf-dd05-49ca-a680-c731aa18f72b_083f2f39-54db-4760-baba-9aefd6c5b6fc", "force": true, "format": "json"}]: dispatch
Feb 20 09:56:37 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "9754040c-efe1-4b31-bb60-2e1a242177cd", "snap_name": "81bbf3cf-dd05-49ca-a680-c731aa18f72b", "force": true, "format": "json"}]: dispatch
Feb 20 09:56:37 np0005625203.localdomain ceph-mon[296066]: osdmap e156: 6 total, 6 up, 6 in
Feb 20 09:56:37 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:37.976 262775 INFO neutron.agent.dhcp.agent [None req-14c7b02c-459d-4c70-af5b-3f0fba264970 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:56:36Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4c85b20>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4c854c0>], id=e835b8fc-84c6-4414-b63a-da6484eca50b, ip_allocation=immediate, mac_address=fa:16:3e:b7:3a:65, name=tempest-NetworksTestDHCPv6-1303858437, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:54:44Z, description=, dns_domain=, id=811e2462-6872-485d-9c09-d2dd9cb25273, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-610089291, port_security_enabled=True, project_id=cb36e48ce4264babb412d413a8bf7b9f, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=47373, qos_policy_id=None, revision_number=42, router:external=False, shared=False, standard_attr_id=1864, status=ACTIVE, subnets=['67168df8-9d01-4b5e-af47-d1eaff0ec9dd'], tags=[], tenant_id=cb36e48ce4264babb412d413a8bf7b9f, updated_at=2026-02-20T09:56:34Z, vlan_transparent=None, network_id=811e2462-6872-485d-9c09-d2dd9cb25273, port_security_enabled=True, project_id=cb36e48ce4264babb412d413a8bf7b9f, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['a85f25c3-88e2-4d71-a8d4-72a266c1246c'], standard_attr_id=2347, status=DOWN, tags=[], tenant_id=cb36e48ce4264babb412d413a8bf7b9f, updated_at=2026-02-20T09:56:36Z on network 811e2462-6872-485d-9c09-d2dd9cb25273
Feb 20 09:56:38 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:56:38.026 2 INFO neutron.agent.securitygroups_rpc [None req-a827ab5a-214a-4a1d-a84d-cac050b991d6 f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']
Feb 20 09:56:38 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:38.098 262775 INFO neutron.agent.dhcp.agent [None req-660701d0-f6b9-40d6-a2b9-79433d816a21 - - - - - -] DHCP configuration for ports {'cebd560e-7047-4cc1-9642-f5b7ec377d58'} is completed
Feb 20 09:56:38 np0005625203.localdomain dnsmasq[321300]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 1 addresses
Feb 20 09:56:38 np0005625203.localdomain podman[321319]: 2026-02-20 09:56:38.18379996 +0000 UTC m=+0.063494186 container kill 0884119295e40478ed363076751aebe53e32a85496fb348d0023039d6483e75f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 20 09:56:38 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:38.415 262775 INFO neutron.agent.dhcp.agent [None req-3f15aaf1-cc8a-4a10-99ed-e998b17d2a1e - - - - - -] DHCP configuration for ports {'e835b8fc-84c6-4414-b63a-da6484eca50b'} is completed
Feb 20 09:56:38 np0005625203.localdomain dnsmasq[321300]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 0 addresses
Feb 20 09:56:38 np0005625203.localdomain podman[321356]: 2026-02-20 09:56:38.537397999 +0000 UTC m=+0.061432981 container kill 0884119295e40478ed363076751aebe53e32a85496fb348d0023039d6483e75f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 20 09:56:38 np0005625203.localdomain systemd[1]: tmp-crun.dcGwDk.mount: Deactivated successfully.
Feb 20 09:56:38 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:56:38 np0005625203.localdomain systemd[1]: tmp-crun.zR3cVB.mount: Deactivated successfully.
Feb 20 09:56:38 np0005625203.localdomain ceph-mon[296066]: pgmap v315: 177 pgs: 177 active+clean; 146 MiB data, 802 MiB used, 41 GiB / 42 GiB avail; 8.7 KiB/s rd, 23 KiB/s wr, 18 op/s
Feb 20 09:56:38 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f9ac42b7-680c-41fc-8784-6176baa738f7", "format": "json"}]: dispatch
Feb 20 09:56:38 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/2694845083' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:56:38 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/2694845083' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:56:38 np0005625203.localdomain podman[321388]: 2026-02-20 09:56:38.964025808 +0000 UTC m=+0.097941961 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, release=1770267347, distribution-scope=public, vendor=Red Hat, Inc., build-date=2026-02-05T04:57:10Z, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, vcs-type=git, name=ubi9/ubi-minimal, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, version=9.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.created=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter)
Feb 20 09:56:38 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e157 e157: 6 total, 6 up, 6 in
Feb 20 09:56:38 np0005625203.localdomain podman[321388]: 2026-02-20 09:56:38.983702827 +0000 UTC m=+0.117618990 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.7, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1770267347, vcs-type=git, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.created=2026-02-05T04:57:10Z, architecture=x86_64, build-date=2026-02-05T04:57:10Z, maintainer=Red Hat, Inc., distribution-scope=public, name=ubi9/ubi-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible)
Feb 20 09:56:39 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:56:39 np0005625203.localdomain dnsmasq[321300]: exiting on receipt of SIGTERM
Feb 20 09:56:39 np0005625203.localdomain podman[321406]: 2026-02-20 09:56:39.052155575 +0000 UTC m=+0.121969965 container kill 0884119295e40478ed363076751aebe53e32a85496fb348d0023039d6483e75f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:56:39 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:56:39 np0005625203.localdomain systemd[1]: libpod-0884119295e40478ed363076751aebe53e32a85496fb348d0023039d6483e75f.scope: Deactivated successfully.
Feb 20 09:56:39 np0005625203.localdomain podman[321425]: 2026-02-20 09:56:39.101843841 +0000 UTC m=+0.037173650 container died 0884119295e40478ed363076751aebe53e32a85496fb348d0023039d6483e75f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127)
Feb 20 09:56:39 np0005625203.localdomain podman[321428]: 2026-02-20 09:56:39.159269919 +0000 UTC m=+0.080858653 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Feb 20 09:56:39 np0005625203.localdomain podman[321428]: 2026-02-20 09:56:39.195491849 +0000 UTC m=+0.117080563 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, config_id=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:56:39 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:56:39 np0005625203.localdomain podman[321425]: 2026-02-20 09:56:39.237582831 +0000 UTC m=+0.172912640 container cleanup 0884119295e40478ed363076751aebe53e32a85496fb348d0023039d6483e75f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Feb 20 09:56:39 np0005625203.localdomain systemd[1]: libpod-conmon-0884119295e40478ed363076751aebe53e32a85496fb348d0023039d6483e75f.scope: Deactivated successfully.
Feb 20 09:56:39 np0005625203.localdomain podman[321427]: 2026-02-20 09:56:39.263862114 +0000 UTC m=+0.188468781 container remove 0884119295e40478ed363076751aebe53e32a85496fb348d0023039d6483e75f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Feb 20 09:56:39 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:39.277 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:39 np0005625203.localdomain kernel: device tap8a95f8e8-aa left promiscuous mode
Feb 20 09:56:39 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:39Z|00313|binding|INFO|Releasing lport 8a95f8e8-aaa7-42b4-90e6-7ae43289d6aa from this chassis (sb_readonly=0)
Feb 20 09:56:39 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:39Z|00314|binding|INFO|Setting lport 8a95f8e8-aaa7-42b4-90e6-7ae43289d6aa down in Southbound
Feb 20 09:56:39 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:39.288 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe81:7e1c/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625203.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=8a95f8e8-aaa7-42b4-90e6-7ae43289d6aa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:56:39 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:39.290 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 8a95f8e8-aaa7-42b4-90e6-7ae43289d6aa in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 unbound from our chassis
Feb 20 09:56:39 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:39.293 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 811e2462-6872-485d-9c09-d2dd9cb25273, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:56:39 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:39.294 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[22ed4c41-0df5-477a-a41b-b35b1e192157]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:56:39 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:39.305 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:39 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:56:39 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:39.606 262775 INFO neutron.agent.dhcp.agent [None req-e6fb0829-ff60-4818-a763-319212c944e1 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:56:39 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:39.637 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:39 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-6af144045be0048ba6688455d6b7f68079e4560438aa124ea22d3ba91e2deb5a-merged.mount: Deactivated successfully.
Feb 20 09:56:39 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0884119295e40478ed363076751aebe53e32a85496fb348d0023039d6483e75f-userdata-shm.mount: Deactivated successfully.
Feb 20 09:56:39 np0005625203.localdomain systemd[1]: run-netns-qdhcp\x2d811e2462\x2d6872\x2d485d\x2d9c09\x2dd2dd9cb25273.mount: Deactivated successfully.
Feb 20 09:56:39 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e158 e158: 6 total, 6 up, 6 in
Feb 20 09:56:40 np0005625203.localdomain ceph-mon[296066]: osdmap e157: 6 total, 6 up, 6 in
Feb 20 09:56:40 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 20 09:56:40 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3055324932' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:56:40 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 20 09:56:40 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3055324932' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:56:40 np0005625203.localdomain sudo[321475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:56:40 np0005625203.localdomain sudo[321475]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:56:40 np0005625203.localdomain sudo[321475]: pam_unix(sudo:session): session closed for user root
Feb 20 09:56:40 np0005625203.localdomain sudo[321493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 09:56:40 np0005625203.localdomain sudo[321493]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:56:41 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:41.009 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:41 np0005625203.localdomain ceph-mon[296066]: pgmap v317: 177 pgs: 177 active+clean; 146 MiB data, 802 MiB used, 41 GiB / 42 GiB avail; 45 KiB/s rd, 16 KiB/s wr, 65 op/s
Feb 20 09:56:41 np0005625203.localdomain ceph-mon[296066]: osdmap e158: 6 total, 6 up, 6 in
Feb 20 09:56:41 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/3055324932' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:56:41 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/3055324932' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:56:41 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:41.043 262775 INFO neutron.agent.linux.ip_lib [None req-8fa19d55-a7a5-4927-9f7c-5c0bd05aef13 - - - - - -] Device tap49355dc0-f6 cannot be used as it has no MAC address
Feb 20 09:56:41 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:41.074 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:41 np0005625203.localdomain kernel: device tap49355dc0-f6 entered promiscuous mode
Feb 20 09:56:41 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581401.0844] manager: (tap49355dc0-f6): new Generic device (/org/freedesktop/NetworkManager/Devices/62)
Feb 20 09:56:41 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:41Z|00315|binding|INFO|Claiming lport 49355dc0-f6bd-454d-a2e6-9ac69d15327f for this chassis.
Feb 20 09:56:41 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:41Z|00316|binding|INFO|49355dc0-f6bd-454d-a2e6-9ac69d15327f: Claiming unknown
Feb 20 09:56:41 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:41.084 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:41 np0005625203.localdomain systemd-udevd[321523]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 09:56:41 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:41.098 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-a7a7943f-debf-416d-a866-6eb948d4f229', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a7a7943f-debf-416d-a866-6eb948d4f229', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '62f842a102bd4d84b1f4d275ec6dbea2', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58cb90d5-6c68-4a6f-a44d-1123464d8b4b, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=49355dc0-f6bd-454d-a2e6-9ac69d15327f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:56:41 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:41.099 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 49355dc0-f6bd-454d-a2e6-9ac69d15327f in datapath a7a7943f-debf-416d-a866-6eb948d4f229 bound to our chassis
Feb 20 09:56:41 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:41.101 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a7a7943f-debf-416d-a866-6eb948d4f229 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:56:41 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:41.102 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[2676fb36-2906-4ce2-83c9-217b70751323]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:56:41 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:41Z|00317|binding|INFO|Setting lport 49355dc0-f6bd-454d-a2e6-9ac69d15327f ovn-installed in OVS
Feb 20 09:56:41 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:41Z|00318|binding|INFO|Setting lport 49355dc0-f6bd-454d-a2e6-9ac69d15327f up in Southbound
Feb 20 09:56:41 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:41.134 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:41 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:41.181 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:41 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:41.214 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:41 np0005625203.localdomain sudo[321493]: pam_unix(sudo:session): session closed for user root
Feb 20 09:56:41 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:41.642 262775 INFO neutron.agent.linux.ip_lib [None req-05c41aae-dc3c-453a-88d5-79622399597d - - - - - -] Device tapbc0f3430-1d cannot be used as it has no MAC address
Feb 20 09:56:41 np0005625203.localdomain sudo[321583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:56:41 np0005625203.localdomain sudo[321583]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:56:41 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:41.680 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:41 np0005625203.localdomain sudo[321583]: pam_unix(sudo:session): session closed for user root
Feb 20 09:56:41 np0005625203.localdomain kernel: device tapbc0f3430-1d entered promiscuous mode
Feb 20 09:56:41 np0005625203.localdomain systemd-udevd[321530]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 09:56:41 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581401.6917] manager: (tapbc0f3430-1d): new Generic device (/org/freedesktop/NetworkManager/Devices/63)
Feb 20 09:56:41 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:41.716 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:41 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:41Z|00319|binding|INFO|Claiming lport bc0f3430-1d19-4550-8f85-9f3db8b9a823 for this chassis.
Feb 20 09:56:41 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:41Z|00320|binding|INFO|bc0f3430-1d19-4550-8f85-9f3db8b9a823: Claiming unknown
Feb 20 09:56:41 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:56:41.722 2 INFO neutron.agent.securitygroups_rpc [None req-13f83c28-0ec5-483d-8133-f11a853f0aba f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']
Feb 20 09:56:41 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:41Z|00321|binding|INFO|Setting lport bc0f3430-1d19-4550-8f85-9f3db8b9a823 ovn-installed in OVS
Feb 20 09:56:41 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:41Z|00322|binding|INFO|Setting lport bc0f3430-1d19-4550-8f85-9f3db8b9a823 up in Southbound
Feb 20 09:56:41 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:41.726 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe8f:c28a/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=bc0f3430-1d19-4550-8f85-9f3db8b9a823) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:56:41 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:41.728 161112 INFO neutron.agent.ovn.metadata.agent [-] Port bc0f3430-1d19-4550-8f85-9f3db8b9a823 in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 bound to our chassis
Feb 20 09:56:41 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:41.728 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:41 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:41.730 161112 DEBUG neutron.agent.ovn.metadata.agent [-] Port fcb78b50-f970-4c1a-8354-2a091e141816 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Feb 20 09:56:41 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:41.730 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 811e2462-6872-485d-9c09-d2dd9cb25273, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:56:41 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:41.731 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[1e2d1660-eb9d-4dcf-b0ee-f7d3b565a9be]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:56:41 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:41.733 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:41 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:41.763 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:41 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:41.817 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:41 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:41.846 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:42 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:56:42 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 09:56:42 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:56:42 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 09:56:42 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 09:56:42 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:56:42 np0005625203.localdomain podman[321656]: 
Feb 20 09:56:42 np0005625203.localdomain podman[321656]: 2026-02-20 09:56:42.263646431 +0000 UTC m=+0.084127024 container create c2626ef125cb78b4b04bfb7a75f85a462cab4374f04acb4accbfdbc1e9e5ec34 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a7a7943f-debf-416d-a866-6eb948d4f229, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:56:42 np0005625203.localdomain systemd[1]: Started libpod-conmon-c2626ef125cb78b4b04bfb7a75f85a462cab4374f04acb4accbfdbc1e9e5ec34.scope.
Feb 20 09:56:42 np0005625203.localdomain podman[321656]: 2026-02-20 09:56:42.218574196 +0000 UTC m=+0.039054799 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:56:42 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:56:42 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/440af7b1eae268cd10cc14626d115762c1d457f06318c86a7b1ee50825ad6eb5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:56:42 np0005625203.localdomain podman[321656]: 2026-02-20 09:56:42.345623646 +0000 UTC m=+0.166104179 container init c2626ef125cb78b4b04bfb7a75f85a462cab4374f04acb4accbfdbc1e9e5ec34 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a7a7943f-debf-416d-a866-6eb948d4f229, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Feb 20 09:56:42 np0005625203.localdomain podman[321656]: 2026-02-20 09:56:42.355016748 +0000 UTC m=+0.175497291 container start c2626ef125cb78b4b04bfb7a75f85a462cab4374f04acb4accbfdbc1e9e5ec34 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a7a7943f-debf-416d-a866-6eb948d4f229, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 20 09:56:42 np0005625203.localdomain dnsmasq[321682]: started, version 2.85 cachesize 150
Feb 20 09:56:42 np0005625203.localdomain dnsmasq[321682]: DNS service limited to local subnets
Feb 20 09:56:42 np0005625203.localdomain dnsmasq[321682]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:56:42 np0005625203.localdomain dnsmasq[321682]: warning: no upstream servers configured
Feb 20 09:56:42 np0005625203.localdomain dnsmasq-dhcp[321682]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Feb 20 09:56:42 np0005625203.localdomain dnsmasq[321682]: read /var/lib/neutron/dhcp/a7a7943f-debf-416d-a866-6eb948d4f229/addn_hosts - 0 addresses
Feb 20 09:56:42 np0005625203.localdomain dnsmasq-dhcp[321682]: read /var/lib/neutron/dhcp/a7a7943f-debf-416d-a866-6eb948d4f229/host
Feb 20 09:56:42 np0005625203.localdomain dnsmasq-dhcp[321682]: read /var/lib/neutron/dhcp/a7a7943f-debf-416d-a866-6eb948d4f229/opts
Feb 20 09:56:42 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:42.410 262775 INFO neutron.agent.dhcp.agent [None req-8fa19d55-a7a5-4927-9f7c-5c0bd05aef13 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:56:41Z, description=, device_id=1d3f0830-6050-4ebf-baaf-b9d8b4a1ed67, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4c56b80>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4c56af0>], id=a346e9a6-e99b-4f92-ac78-52291a53d08d, ip_allocation=immediate, mac_address=fa:16:3e:20:ec:7b, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:56:36Z, description=, dns_domain=, id=a7a7943f-debf-416d-a866-6eb948d4f229, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-router-network01--1934753555, port_security_enabled=True, project_id=62f842a102bd4d84b1f4d275ec6dbea2, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=50561, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2349, status=ACTIVE, subnets=['68cfc6ff-490a-484f-a48d-10774d18859c'], tags=[], tenant_id=62f842a102bd4d84b1f4d275ec6dbea2, updated_at=2026-02-20T09:56:39Z, vlan_transparent=None, network_id=a7a7943f-debf-416d-a866-6eb948d4f229, port_security_enabled=False, project_id=62f842a102bd4d84b1f4d275ec6dbea2, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2375, status=DOWN, tags=[], tenant_id=62f842a102bd4d84b1f4d275ec6dbea2, updated_at=2026-02-20T09:56:41Z on network a7a7943f-debf-416d-a866-6eb948d4f229
Feb 20 09:56:42 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:42.575 262775 INFO neutron.agent.dhcp.agent [None req-dca66ccf-3951-4d35-8e6d-5368db1e3bda - - - - - -] DHCP configuration for ports {'88e203ea-b4ab-4dd6-97b5-3fb95dca4844'} is completed
Feb 20 09:56:42 np0005625203.localdomain dnsmasq[321682]: read /var/lib/neutron/dhcp/a7a7943f-debf-416d-a866-6eb948d4f229/addn_hosts - 1 addresses
Feb 20 09:56:42 np0005625203.localdomain dnsmasq-dhcp[321682]: read /var/lib/neutron/dhcp/a7a7943f-debf-416d-a866-6eb948d4f229/host
Feb 20 09:56:42 np0005625203.localdomain podman[321702]: 2026-02-20 09:56:42.606006782 +0000 UTC m=+0.055709374 container kill c2626ef125cb78b4b04bfb7a75f85a462cab4374f04acb4accbfdbc1e9e5ec34 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a7a7943f-debf-416d-a866-6eb948d4f229, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0)
Feb 20 09:56:42 np0005625203.localdomain dnsmasq-dhcp[321682]: read /var/lib/neutron/dhcp/a7a7943f-debf-416d-a866-6eb948d4f229/opts
Feb 20 09:56:42 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:56:42.617 2 INFO neutron.agent.securitygroups_rpc [None req-92a7b9d6-6b07-465f-9755-118a416fc381 f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']
Feb 20 09:56:42 np0005625203.localdomain podman[321739]: 
Feb 20 09:56:42 np0005625203.localdomain podman[321739]: 2026-02-20 09:56:42.774611109 +0000 UTC m=+0.094973459 container create e7546aeaa026f42a6df3cae9f0c62786d771869e05272e13d81bb96f1eda3514 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127)
Feb 20 09:56:42 np0005625203.localdomain systemd[1]: Started libpod-conmon-e7546aeaa026f42a6df3cae9f0c62786d771869e05272e13d81bb96f1eda3514.scope.
Feb 20 09:56:42 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:42.823 262775 INFO neutron.agent.dhcp.agent [None req-eda78b36-254a-4928-9671-05d17c9399b3 - - - - - -] DHCP configuration for ports {'a346e9a6-e99b-4f92-ac78-52291a53d08d'} is completed
Feb 20 09:56:42 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:56:42 np0005625203.localdomain podman[321739]: 2026-02-20 09:56:42.726952654 +0000 UTC m=+0.047315004 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:56:42 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb0e4ea9318548fabfad20db7f60f05d77ce778c3f47fffd5d9b0df665d4d9c4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:56:42 np0005625203.localdomain podman[321739]: 2026-02-20 09:56:42.840226949 +0000 UTC m=+0.160589299 container init e7546aeaa026f42a6df3cae9f0c62786d771869e05272e13d81bb96f1eda3514 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 20 09:56:42 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e159 e159: 6 total, 6 up, 6 in
Feb 20 09:56:42 np0005625203.localdomain podman[321739]: 2026-02-20 09:56:42.849930429 +0000 UTC m=+0.170292779 container start e7546aeaa026f42a6df3cae9f0c62786d771869e05272e13d81bb96f1eda3514 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Feb 20 09:56:42 np0005625203.localdomain dnsmasq[321763]: started, version 2.85 cachesize 150
Feb 20 09:56:42 np0005625203.localdomain dnsmasq[321763]: DNS service limited to local subnets
Feb 20 09:56:42 np0005625203.localdomain dnsmasq[321763]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:56:42 np0005625203.localdomain dnsmasq[321763]: warning: no upstream servers configured
Feb 20 09:56:42 np0005625203.localdomain dnsmasq-dhcp[321763]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Feb 20 09:56:42 np0005625203.localdomain dnsmasq[321763]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 0 addresses
Feb 20 09:56:42 np0005625203.localdomain dnsmasq-dhcp[321763]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/host
Feb 20 09:56:42 np0005625203.localdomain dnsmasq-dhcp[321763]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/opts
Feb 20 09:56:42 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:42.911 262775 INFO neutron.agent.dhcp.agent [None req-05c41aae-dc3c-453a-88d5-79622399597d - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:56:41Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4cb35b0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4e606a0>], id=e0165930-534d-448c-aae9-84f1abff93f9, ip_allocation=immediate, mac_address=fa:16:3e:26:3f:19, name=tempest-NetworksTestDHCPv6-1598402651, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:54:44Z, description=, dns_domain=, id=811e2462-6872-485d-9c09-d2dd9cb25273, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-610089291, port_security_enabled=True, project_id=cb36e48ce4264babb412d413a8bf7b9f, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=47373, qos_policy_id=None, revision_number=44, router:external=False, shared=False, standard_attr_id=1864, status=ACTIVE, subnets=['8f3b5049-91ca-4715-83fe-e5a7b82b8517'], tags=[], tenant_id=cb36e48ce4264babb412d413a8bf7b9f, updated_at=2026-02-20T09:56:39Z, vlan_transparent=None, network_id=811e2462-6872-485d-9c09-d2dd9cb25273, port_security_enabled=True, project_id=cb36e48ce4264babb412d413a8bf7b9f, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['a85f25c3-88e2-4d71-a8d4-72a266c1246c'], standard_attr_id=2368, status=DOWN, tags=[], tenant_id=cb36e48ce4264babb412d413a8bf7b9f, updated_at=2026-02-20T09:56:41Z on network 811e2462-6872-485d-9c09-d2dd9cb25273
Feb 20 09:56:42 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:42.981 262775 INFO neutron.agent.dhcp.agent [None req-f2c6cf26-9ca1-4993-bdbd-cb93f2701b1d - - - - - -] DHCP configuration for ports {'cebd560e-7047-4cc1-9642-f5b7ec377d58'} is completed
Feb 20 09:56:43 np0005625203.localdomain ceph-mon[296066]: pgmap v319: 177 pgs: 177 active+clean; 146 MiB data, 820 MiB used, 41 GiB / 42 GiB avail; 94 KiB/s rd, 16 KiB/s wr, 132 op/s
Feb 20 09:56:43 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f9ac42b7-680c-41fc-8784-6176baa738f7", "format": "json"}]: dispatch
Feb 20 09:56:43 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:56:43 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "9754040c-efe1-4b31-bb60-2e1a242177cd", "snap_name": "8b7ef8b3-43af-41f4-8a24-ef7ee4fe0934", "format": "json"}]: dispatch
Feb 20 09:56:43 np0005625203.localdomain ceph-mon[296066]: osdmap e159: 6 total, 6 up, 6 in
Feb 20 09:56:43 np0005625203.localdomain dnsmasq[321763]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 1 addresses
Feb 20 09:56:43 np0005625203.localdomain dnsmasq-dhcp[321763]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/host
Feb 20 09:56:43 np0005625203.localdomain dnsmasq-dhcp[321763]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/opts
Feb 20 09:56:43 np0005625203.localdomain podman[321781]: 2026-02-20 09:56:43.11434462 +0000 UTC m=+0.060728731 container kill e7546aeaa026f42a6df3cae9f0c62786d771869e05272e13d81bb96f1eda3514 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 20 09:56:43 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:43.396 262775 INFO neutron.agent.dhcp.agent [None req-482feadf-913b-4c5f-a518-8e13bfaf40cc - - - - - -] DHCP configuration for ports {'e0165930-534d-448c-aae9-84f1abff93f9'} is completed
Feb 20 09:56:43 np0005625203.localdomain dnsmasq[321763]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 0 addresses
Feb 20 09:56:43 np0005625203.localdomain dnsmasq-dhcp[321763]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/host
Feb 20 09:56:43 np0005625203.localdomain podman[321820]: 2026-02-20 09:56:43.486044308 +0000 UTC m=+0.057264542 container kill e7546aeaa026f42a6df3cae9f0c62786d771869e05272e13d81bb96f1eda3514 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:56:43 np0005625203.localdomain dnsmasq-dhcp[321763]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/opts
Feb 20 09:56:43 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:43.496 262775 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:56:41Z, description=, device_id=1d3f0830-6050-4ebf-baaf-b9d8b4a1ed67, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4c79070>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4c79610>], id=a346e9a6-e99b-4f92-ac78-52291a53d08d, ip_allocation=immediate, mac_address=fa:16:3e:20:ec:7b, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:56:36Z, description=, dns_domain=, id=a7a7943f-debf-416d-a866-6eb948d4f229, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-router-network01--1934753555, port_security_enabled=True, project_id=62f842a102bd4d84b1f4d275ec6dbea2, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=50561, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2349, status=ACTIVE, subnets=['68cfc6ff-490a-484f-a48d-10774d18859c'], tags=[], tenant_id=62f842a102bd4d84b1f4d275ec6dbea2, updated_at=2026-02-20T09:56:39Z, vlan_transparent=None, network_id=a7a7943f-debf-416d-a866-6eb948d4f229, port_security_enabled=False, project_id=62f842a102bd4d84b1f4d275ec6dbea2, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2375, status=DOWN, tags=[], tenant_id=62f842a102bd4d84b1f4d275ec6dbea2, updated_at=2026-02-20T09:56:41Z on network a7a7943f-debf-416d-a866-6eb948d4f229
Feb 20 09:56:43 np0005625203.localdomain sshd[321866]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:56:43 np0005625203.localdomain dnsmasq[321682]: read /var/lib/neutron/dhcp/a7a7943f-debf-416d-a866-6eb948d4f229/addn_hosts - 1 addresses
Feb 20 09:56:43 np0005625203.localdomain dnsmasq-dhcp[321682]: read /var/lib/neutron/dhcp/a7a7943f-debf-416d-a866-6eb948d4f229/host
Feb 20 09:56:43 np0005625203.localdomain podman[321855]: 2026-02-20 09:56:43.70975177 +0000 UTC m=+0.074958500 container kill c2626ef125cb78b4b04bfb7a75f85a462cab4374f04acb4accbfdbc1e9e5ec34 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a7a7943f-debf-416d-a866-6eb948d4f229, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 20 09:56:43 np0005625203.localdomain dnsmasq-dhcp[321682]: read /var/lib/neutron/dhcp/a7a7943f-debf-416d-a866-6eb948d4f229/opts
Feb 20 09:56:43 np0005625203.localdomain systemd[1]: tmp-crun.KBKSSX.mount: Deactivated successfully.
Feb 20 09:56:43 np0005625203.localdomain sshd[321866]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:56:43 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:43.959 262775 INFO neutron.agent.dhcp.agent [None req-4838f249-f068-46a1-9007-5a209d7a5b5f - - - - - -] DHCP configuration for ports {'a346e9a6-e99b-4f92-ac78-52291a53d08d'} is completed
Feb 20 09:56:43 np0005625203.localdomain dnsmasq[321763]: exiting on receipt of SIGTERM
Feb 20 09:56:43 np0005625203.localdomain podman[321899]: 2026-02-20 09:56:43.987944057 +0000 UTC m=+0.062855956 container kill e7546aeaa026f42a6df3cae9f0c62786d771869e05272e13d81bb96f1eda3514 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 20 09:56:43 np0005625203.localdomain systemd[1]: libpod-e7546aeaa026f42a6df3cae9f0c62786d771869e05272e13d81bb96f1eda3514.scope: Deactivated successfully.
Feb 20 09:56:44 np0005625203.localdomain podman[321913]: 2026-02-20 09:56:44.05983504 +0000 UTC m=+0.057408596 container died e7546aeaa026f42a6df3cae9f0c62786d771869e05272e13d81bb96f1eda3514 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 20 09:56:44 np0005625203.localdomain podman[321913]: 2026-02-20 09:56:44.093315416 +0000 UTC m=+0.090888932 container cleanup e7546aeaa026f42a6df3cae9f0c62786d771869e05272e13d81bb96f1eda3514 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 20 09:56:44 np0005625203.localdomain systemd[1]: libpod-conmon-e7546aeaa026f42a6df3cae9f0c62786d771869e05272e13d81bb96f1eda3514.scope: Deactivated successfully.
Feb 20 09:56:44 np0005625203.localdomain podman[321915]: 2026-02-20 09:56:44.141212148 +0000 UTC m=+0.130571500 container remove e7546aeaa026f42a6df3cae9f0c62786d771869e05272e13d81bb96f1eda3514 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:56:44 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:56:44.143 2 INFO neutron.agent.securitygroups_rpc [None req-ec1ba1f0-724c-41a9-85b6-8188470faaf7 469c956856df4f7fa0303f662df3cdef 62dd1d7e0cf547678612304aba2895e2 - - default default] Security group member updated ['72ed92b6-af24-4274-854b-a52220405faf']
Feb 20 09:56:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:44.153 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:44 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:44Z|00323|binding|INFO|Releasing lport bc0f3430-1d19-4550-8f85-9f3db8b9a823 from this chassis (sb_readonly=0)
Feb 20 09:56:44 np0005625203.localdomain kernel: device tapbc0f3430-1d left promiscuous mode
Feb 20 09:56:44 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:44Z|00324|binding|INFO|Setting lport bc0f3430-1d19-4550-8f85-9f3db8b9a823 down in Southbound
Feb 20 09:56:44 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:44.161 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe8f:c28a/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625203.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=bc0f3430-1d19-4550-8f85-9f3db8b9a823) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:56:44 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:44.163 161112 INFO neutron.agent.ovn.metadata.agent [-] Port bc0f3430-1d19-4550-8f85-9f3db8b9a823 in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 unbound from our chassis
Feb 20 09:56:44 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:44.166 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 811e2462-6872-485d-9c09-d2dd9cb25273, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:56:44 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:44.167 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[f04f77bb-61be-4e53-a18a-d2666eaed586]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:56:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:44.179 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:44 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-cb0e4ea9318548fabfad20db7f60f05d77ce778c3f47fffd5d9b0df665d4d9c4-merged.mount: Deactivated successfully.
Feb 20 09:56:44 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e7546aeaa026f42a6df3cae9f0c62786d771869e05272e13d81bb96f1eda3514-userdata-shm.mount: Deactivated successfully.
Feb 20 09:56:44 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:56:44 np0005625203.localdomain systemd[1]: run-netns-qdhcp\x2d811e2462\x2d6872\x2d485d\x2d9c09\x2dd2dd9cb25273.mount: Deactivated successfully.
Feb 20 09:56:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:44.638 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:44 np0005625203.localdomain ceph-mon[296066]: pgmap v321: 177 pgs: 177 active+clean; 146 MiB data, 820 MiB used, 41 GiB / 42 GiB avail; 88 KiB/s rd, 15 KiB/s wr, 123 op/s
Feb 20 09:56:44 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:56:44 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:56:44.888 2 INFO neutron.agent.securitygroups_rpc [None req-868e4387-1930-45d7-9199-5bcd1f2558e0 f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']
Feb 20 09:56:45 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:45.122 262775 INFO neutron.agent.linux.ip_lib [None req-e1f4aff5-f05a-4bdd-a622-005f4816d278 - - - - - -] Device tap4ff45158-66 cannot be used as it has no MAC address
Feb 20 09:56:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:45.155 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:45 np0005625203.localdomain kernel: device tap4ff45158-66 entered promiscuous mode
Feb 20 09:56:45 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581405.1644] manager: (tap4ff45158-66): new Generic device (/org/freedesktop/NetworkManager/Devices/64)
Feb 20 09:56:45 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:45Z|00325|binding|INFO|Claiming lport 4ff45158-660e-469b-af31-cdc5c9bd48a1 for this chassis.
Feb 20 09:56:45 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:45Z|00326|binding|INFO|4ff45158-660e-469b-af31-cdc5c9bd48a1: Claiming unknown
Feb 20 09:56:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:45.163 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:45 np0005625203.localdomain systemd-udevd[321968]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 09:56:45 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:45.173 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=4ff45158-660e-469b-af31-cdc5c9bd48a1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:56:45 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:45.175 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 4ff45158-660e-469b-af31-cdc5c9bd48a1 in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 bound to our chassis
Feb 20 09:56:45 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:56:45.175 2 INFO neutron.agent.securitygroups_rpc [None req-d92a2777-d32e-4211-954b-8d8918f6f596 469c956856df4f7fa0303f662df3cdef 62dd1d7e0cf547678612304aba2895e2 - - default default] Security group member updated ['72ed92b6-af24-4274-854b-a52220405faf']
Feb 20 09:56:45 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:45Z|00327|binding|INFO|Setting lport 4ff45158-660e-469b-af31-cdc5c9bd48a1 ovn-installed in OVS
Feb 20 09:56:45 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:45Z|00328|binding|INFO|Setting lport 4ff45158-660e-469b-af31-cdc5c9bd48a1 up in Southbound
Feb 20 09:56:45 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:45.178 161112 DEBUG neutron.agent.ovn.metadata.agent [-] Port bc2262b5-3dc0-4492-a25e-6416c7e3d3ef IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Feb 20 09:56:45 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:45.179 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 811e2462-6872-485d-9c09-d2dd9cb25273, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:56:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:45.180 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:45 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:45.180 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[b92070b8-4654-4b32-8441-ad9dd109c62b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:56:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:45.182 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:45 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap4ff45158-66: No such device
Feb 20 09:56:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:45.206 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:45 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap4ff45158-66: No such device
Feb 20 09:56:45 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap4ff45158-66: No such device
Feb 20 09:56:45 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap4ff45158-66: No such device
Feb 20 09:56:45 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap4ff45158-66: No such device
Feb 20 09:56:45 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap4ff45158-66: No such device
Feb 20 09:56:45 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap4ff45158-66: No such device
Feb 20 09:56:45 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap4ff45158-66: No such device
Feb 20 09:56:45 np0005625203.localdomain podman[321971]: 2026-02-20 09:56:45.254261414 +0000 UTC m=+0.063779005 container kill c2626ef125cb78b4b04bfb7a75f85a462cab4374f04acb4accbfdbc1e9e5ec34 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a7a7943f-debf-416d-a866-6eb948d4f229, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:56:45 np0005625203.localdomain dnsmasq[321682]: read /var/lib/neutron/dhcp/a7a7943f-debf-416d-a866-6eb948d4f229/addn_hosts - 0 addresses
Feb 20 09:56:45 np0005625203.localdomain dnsmasq-dhcp[321682]: read /var/lib/neutron/dhcp/a7a7943f-debf-416d-a866-6eb948d4f229/host
Feb 20 09:56:45 np0005625203.localdomain systemd[1]: tmp-crun.aQlETn.mount: Deactivated successfully.
Feb 20 09:56:45 np0005625203.localdomain dnsmasq-dhcp[321682]: read /var/lib/neutron/dhcp/a7a7943f-debf-416d-a866-6eb948d4f229/opts
Feb 20 09:56:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:45.263 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:45.295 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:45 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:45Z|00329|binding|INFO|Releasing lport 49355dc0-f6bd-454d-a2e6-9ac69d15327f from this chassis (sb_readonly=0)
Feb 20 09:56:45 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:45Z|00330|binding|INFO|Setting lport 49355dc0-f6bd-454d-a2e6-9ac69d15327f down in Southbound
Feb 20 09:56:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:45.548 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:45 np0005625203.localdomain kernel: device tap49355dc0-f6 left promiscuous mode
Feb 20 09:56:45 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:45.560 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-a7a7943f-debf-416d-a866-6eb948d4f229', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a7a7943f-debf-416d-a866-6eb948d4f229', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '62f842a102bd4d84b1f4d275ec6dbea2', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625203.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=58cb90d5-6c68-4a6f-a44d-1123464d8b4b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=49355dc0-f6bd-454d-a2e6-9ac69d15327f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:56:45 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:45.561 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 49355dc0-f6bd-454d-a2e6-9ac69d15327f in datapath a7a7943f-debf-416d-a866-6eb948d4f229 unbound from our chassis
Feb 20 09:56:45 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:45.563 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network a7a7943f-debf-416d-a866-6eb948d4f229 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:56:45 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:45.564 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[96ab0b19-7a7d-4ed2-8598-b58c32ad2145]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:56:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:45.575 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:46.013 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:46 np0005625203.localdomain podman[322061]: 
Feb 20 09:56:46 np0005625203.localdomain podman[322061]: 2026-02-20 09:56:46.279258845 +0000 UTC m=+0.084313380 container create f6a48930abddc8c6a76c0da4310f5b7fcefe864f693d11a8059649fe05e5d4a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:56:46 np0005625203.localdomain systemd[1]: Started libpod-conmon-f6a48930abddc8c6a76c0da4310f5b7fcefe864f693d11a8059649fe05e5d4a7.scope.
Feb 20 09:56:46 np0005625203.localdomain podman[322061]: 2026-02-20 09:56:46.244318424 +0000 UTC m=+0.049373049 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:56:46 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:56:46 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0dfb26e7cf431d8d4ee9164a1bb294135c244d2a836890d984d432c246c66f2f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:56:46 np0005625203.localdomain podman[322061]: 2026-02-20 09:56:46.367448853 +0000 UTC m=+0.172503408 container init f6a48930abddc8c6a76c0da4310f5b7fcefe864f693d11a8059649fe05e5d4a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:56:46 np0005625203.localdomain podman[322061]: 2026-02-20 09:56:46.376116151 +0000 UTC m=+0.181170706 container start f6a48930abddc8c6a76c0da4310f5b7fcefe864f693d11a8059649fe05e5d4a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:56:46 np0005625203.localdomain dnsmasq[322080]: started, version 2.85 cachesize 150
Feb 20 09:56:46 np0005625203.localdomain dnsmasq[322080]: DNS service limited to local subnets
Feb 20 09:56:46 np0005625203.localdomain dnsmasq[322080]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:56:46 np0005625203.localdomain dnsmasq[322080]: warning: no upstream servers configured
Feb 20 09:56:46 np0005625203.localdomain dnsmasq-dhcp[322080]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Feb 20 09:56:46 np0005625203.localdomain dnsmasq[322080]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 0 addresses
Feb 20 09:56:46 np0005625203.localdomain dnsmasq-dhcp[322080]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/host
Feb 20 09:56:46 np0005625203.localdomain dnsmasq-dhcp[322080]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/opts
Feb 20 09:56:46 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:46.439 262775 INFO neutron.agent.dhcp.agent [None req-e1f4aff5-f05a-4bdd-a622-005f4816d278 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:56:44Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4e49700>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4e491f0>], id=5a7c5289-8c7a-4af9-abee-48ed35e7fd5d, ip_allocation=immediate, mac_address=fa:16:3e:58:f7:22, name=tempest-NetworksTestDHCPv6-1637986499, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:54:44Z, description=, dns_domain=, id=811e2462-6872-485d-9c09-d2dd9cb25273, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-610089291, port_security_enabled=True, project_id=cb36e48ce4264babb412d413a8bf7b9f, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=47373, qos_policy_id=None, revision_number=46, router:external=False, shared=False, standard_attr_id=1864, status=ACTIVE, subnets=['7da0835f-1175-4048-8c12-8cec7ebcace0'], tags=[], tenant_id=cb36e48ce4264babb412d413a8bf7b9f, updated_at=2026-02-20T09:56:44Z, vlan_transparent=None, network_id=811e2462-6872-485d-9c09-d2dd9cb25273, port_security_enabled=True, project_id=cb36e48ce4264babb412d413a8bf7b9f, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['a85f25c3-88e2-4d71-a8d4-72a266c1246c'], standard_attr_id=2381, status=DOWN, tags=[], tenant_id=cb36e48ce4264babb412d413a8bf7b9f, updated_at=2026-02-20T09:56:44Z on network 811e2462-6872-485d-9c09-d2dd9cb25273
Feb 20 09:56:46 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:56:46.486 2 INFO neutron.agent.securitygroups_rpc [None req-d647a860-3cfb-47b1-bd0e-3817969b125e f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']
Feb 20 09:56:46 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:46.533 262775 INFO neutron.agent.dhcp.agent [None req-134706df-fbbd-4698-b38d-7eeb765e0a33 - - - - - -] DHCP configuration for ports {'cebd560e-7047-4cc1-9642-f5b7ec377d58'} is completed
Feb 20 09:56:46 np0005625203.localdomain ceph-mon[296066]: pgmap v322: 177 pgs: 177 active+clean; 146 MiB data, 821 MiB used, 41 GiB / 42 GiB avail; 1.3 MiB/s rd, 26 KiB/s wr, 158 op/s
Feb 20 09:56:46 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "9754040c-efe1-4b31-bb60-2e1a242177cd", "snap_name": "877bc86e-3ae3-4788-a4cd-ecc7ec5a94e2", "format": "json"}]: dispatch
Feb 20 09:56:46 np0005625203.localdomain dnsmasq[322080]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 1 addresses
Feb 20 09:56:46 np0005625203.localdomain dnsmasq-dhcp[322080]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/host
Feb 20 09:56:46 np0005625203.localdomain dnsmasq-dhcp[322080]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/opts
Feb 20 09:56:46 np0005625203.localdomain podman[322099]: 2026-02-20 09:56:46.667105544 +0000 UTC m=+0.063538257 container kill f6a48930abddc8c6a76c0da4310f5b7fcefe864f693d11a8059649fe05e5d4a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 20 09:56:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:46.888 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:46 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:46Z|00331|binding|INFO|Releasing lport 4ff45158-660e-469b-af31-cdc5c9bd48a1 from this chassis (sb_readonly=0)
Feb 20 09:56:46 np0005625203.localdomain kernel: device tap4ff45158-66 left promiscuous mode
Feb 20 09:56:46 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:46Z|00332|binding|INFO|Setting lport 4ff45158-660e-469b-af31-cdc5c9bd48a1 down in Southbound
Feb 20 09:56:46 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:46.901 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625203.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=4ff45158-660e-469b-af31-cdc5c9bd48a1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:56:46 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:46.903 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 4ff45158-660e-469b-af31-cdc5c9bd48a1 in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 unbound from our chassis
Feb 20 09:56:46 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:46.906 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 811e2462-6872-485d-9c09-d2dd9cb25273, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:56:46 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:46.907 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[ab36b5c7-b9c5-48b0-b662-81ed691b8855]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:56:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:46.916 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:46 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:46.956 262775 INFO neutron.agent.dhcp.agent [None req-6a2f01f9-9e67-43a8-b965-ddf075311aa3 - - - - - -] DHCP configuration for ports {'5a7c5289-8c7a-4af9-abee-48ed35e7fd5d'} is completed
Feb 20 09:56:47 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:56:47.004 2 INFO neutron.agent.securitygroups_rpc [None req-5e0be3b9-12ef-421f-8325-abea826190b6 469c956856df4f7fa0303f662df3cdef 62dd1d7e0cf547678612304aba2895e2 - - default default] Security group member updated ['72ed92b6-af24-4274-854b-a52220405faf']
Feb 20 09:56:47 np0005625203.localdomain dnsmasq[322080]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 0 addresses
Feb 20 09:56:47 np0005625203.localdomain dnsmasq-dhcp[322080]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/host
Feb 20 09:56:47 np0005625203.localdomain podman[322141]: 2026-02-20 09:56:47.117669753 +0000 UTC m=+0.068608704 container kill f6a48930abddc8c6a76c0da4310f5b7fcefe864f693d11a8059649fe05e5d4a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:56:47 np0005625203.localdomain dnsmasq-dhcp[322080]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/opts
Feb 20 09:56:47 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:47.147 262775 ERROR neutron.agent.dhcp.agent [None req-e1f4aff5-f05a-4bdd-a622-005f4816d278 - - - - - -] Unable to reload_allocations dhcp for 811e2462-6872-485d-9c09-d2dd9cb25273.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap4ff45158-66 not found in namespace qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273.
Feb 20 09:56:47 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:47.147 262775 ERROR neutron.agent.dhcp.agent Traceback (most recent call last):
Feb 20 09:56:47 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:47.147 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver
Feb 20 09:56:47 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:47.147 262775 ERROR neutron.agent.dhcp.agent     rv = getattr(driver, action)(**action_kwargs)
Feb 20 09:56:47 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:47.147 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations
Feb 20 09:56:47 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:47.147 262775 ERROR neutron.agent.dhcp.agent     self.device_manager.update(self.network, self.interface_name)
Feb 20 09:56:47 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:47.147 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update
Feb 20 09:56:47 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:47.147 262775 ERROR neutron.agent.dhcp.agent     self._set_default_route(network, device_name)
Feb 20 09:56:47 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:47.147 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route
Feb 20 09:56:47 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:47.147 262775 ERROR neutron.agent.dhcp.agent     self._set_default_route_ip_version(network, device_name,
Feb 20 09:56:47 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:47.147 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version
Feb 20 09:56:47 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:47.147 262775 ERROR neutron.agent.dhcp.agent     gateway = device.route.get_gateway(ip_version=ip_version)
Feb 20 09:56:47 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:47.147 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway
Feb 20 09:56:47 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:47.147 262775 ERROR neutron.agent.dhcp.agent     routes = self.list_routes(ip_version, scope=scope, table=table)
Feb 20 09:56:47 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:47.147 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes
Feb 20 09:56:47 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:47.147 262775 ERROR neutron.agent.dhcp.agent     return list_ip_routes(self._parent.namespace, ip_version, scope=scope,
Feb 20 09:56:47 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:47.147 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes
Feb 20 09:56:47 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:47.147 262775 ERROR neutron.agent.dhcp.agent     routes = privileged.list_ip_routes(namespace, ip_version, device=device,
Feb 20 09:56:47 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:47.147 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f
Feb 20 09:56:47 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:47.147 262775 ERROR neutron.agent.dhcp.agent     return self(f, *args, **kw)
Feb 20 09:56:47 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:47.147 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__
Feb 20 09:56:47 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:47.147 262775 ERROR neutron.agent.dhcp.agent     do = self.iter(retry_state=retry_state)
Feb 20 09:56:47 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:47.147 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter
Feb 20 09:56:47 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:47.147 262775 ERROR neutron.agent.dhcp.agent     return fut.result()
Feb 20 09:56:47 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:47.147 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result
Feb 20 09:56:47 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:47.147 262775 ERROR neutron.agent.dhcp.agent     return self.__get_result()
Feb 20 09:56:47 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:47.147 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result
Feb 20 09:56:47 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:47.147 262775 ERROR neutron.agent.dhcp.agent     raise self._exception
Feb 20 09:56:47 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:47.147 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__
Feb 20 09:56:47 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:47.147 262775 ERROR neutron.agent.dhcp.agent     result = fn(*args, **kwargs)
Feb 20 09:56:47 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:47.147 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap
Feb 20 09:56:47 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:47.147 262775 ERROR neutron.agent.dhcp.agent     return self.channel.remote_call(name, args, kwargs,
Feb 20 09:56:47 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:47.147 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call
Feb 20 09:56:47 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:47.147 262775 ERROR neutron.agent.dhcp.agent     raise exc_type(*result[2])
Feb 20 09:56:47 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:47.147 262775 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap4ff45158-66 not found in namespace qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273.
Feb 20 09:56:47 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:47.147 262775 ERROR neutron.agent.dhcp.agent 
Feb 20 09:56:47 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:47.153 262775 INFO neutron.agent.dhcp.agent [None req-c1b5e4a0-7fa7-4a63-869b-563b7322a229 - - - - - -] Synchronizing state
Feb 20 09:56:47 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:47.406 262775 INFO neutron.agent.dhcp.agent [None req-cac4eebb-3bb1-4db7-8118-9ccc92c6f503 - - - - - -] All active networks have been fetched through RPC.
Feb 20 09:56:47 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:47.407 262775 INFO neutron.agent.dhcp.agent [-] Starting network 71cd1515-da32-498e-ba64-00f812b850e2 dhcp configuration
Feb 20 09:56:47 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:47.408 262775 INFO neutron.agent.dhcp.agent [-] Finished network 71cd1515-da32-498e-ba64-00f812b850e2 dhcp configuration
Feb 20 09:56:47 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:47.409 262775 INFO neutron.agent.dhcp.agent [-] Starting network 811e2462-6872-485d-9c09-d2dd9cb25273 dhcp configuration
Feb 20 09:56:47 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:47.409 262775 INFO neutron.agent.dhcp.agent [-] Finished network 811e2462-6872-485d-9c09-d2dd9cb25273 dhcp configuration
Feb 20 09:56:47 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:47.410 262775 INFO neutron.agent.dhcp.agent [None req-cac4eebb-3bb1-4db7-8118-9ccc92c6f503 - - - - - -] Synchronizing state complete
Feb 20 09:56:47 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:47.531 262775 INFO neutron.agent.dhcp.agent [None req-6c4b1aee-c750-4187-90b5-117b50bbb84b - - - - - -] DHCP configuration for ports {'cebd560e-7047-4cc1-9642-f5b7ec377d58'} is completed
Feb 20 09:56:47 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:56:47.578 2 INFO neutron.agent.securitygroups_rpc [None req-c2e08264-5759-4dbe-9f11-1020e63a5df8 469c956856df4f7fa0303f662df3cdef 62dd1d7e0cf547678612304aba2895e2 - - default default] Security group member updated ['72ed92b6-af24-4274-854b-a52220405faf']
Feb 20 09:56:47 np0005625203.localdomain dnsmasq[322080]: exiting on receipt of SIGTERM
Feb 20 09:56:47 np0005625203.localdomain podman[322173]: 2026-02-20 09:56:47.684987574 +0000 UTC m=+0.062985759 container kill f6a48930abddc8c6a76c0da4310f5b7fcefe864f693d11a8059649fe05e5d4a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:56:47 np0005625203.localdomain systemd[1]: libpod-f6a48930abddc8c6a76c0da4310f5b7fcefe864f693d11a8059649fe05e5d4a7.scope: Deactivated successfully.
Feb 20 09:56:47 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f9ac42b7-680c-41fc-8784-6176baa738f7", "format": "json"}]: dispatch
Feb 20 09:56:47 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f9ac42b7-680c-41fc-8784-6176baa738f7", "force": true, "format": "json"}]: dispatch
Feb 20 09:56:47 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/4040843744' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:56:47 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/4040843744' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:56:47 np0005625203.localdomain podman[322185]: 2026-02-20 09:56:47.737814649 +0000 UTC m=+0.038062248 container died f6a48930abddc8c6a76c0da4310f5b7fcefe864f693d11a8059649fe05e5d4a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127)
Feb 20 09:56:47 np0005625203.localdomain podman[322185]: 2026-02-20 09:56:47.769263431 +0000 UTC m=+0.069511020 container cleanup f6a48930abddc8c6a76c0da4310f5b7fcefe864f693d11a8059649fe05e5d4a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 20 09:56:47 np0005625203.localdomain systemd[1]: libpod-conmon-f6a48930abddc8c6a76c0da4310f5b7fcefe864f693d11a8059649fe05e5d4a7.scope: Deactivated successfully.
Feb 20 09:56:47 np0005625203.localdomain podman[322187]: 2026-02-20 09:56:47.845935284 +0000 UTC m=+0.136290378 container remove f6a48930abddc8c6a76c0da4310f5b7fcefe864f693d11a8059649fe05e5d4a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 20 09:56:47 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e160 e160: 6 total, 6 up, 6 in
Feb 20 09:56:48 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-0dfb26e7cf431d8d4ee9164a1bb294135c244d2a836890d984d432c246c66f2f-merged.mount: Deactivated successfully.
Feb 20 09:56:48 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f6a48930abddc8c6a76c0da4310f5b7fcefe864f693d11a8059649fe05e5d4a7-userdata-shm.mount: Deactivated successfully.
Feb 20 09:56:48 np0005625203.localdomain systemd[1]: run-netns-qdhcp\x2d811e2462\x2d6872\x2d485d\x2d9c09\x2dd2dd9cb25273.mount: Deactivated successfully.
Feb 20 09:56:48 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:48.491 262775 INFO neutron.agent.dhcp.agent [None req-8acdbaa7-8b58-47db-aa46-6af2eb319fc7 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:56:48 np0005625203.localdomain ceph-mon[296066]: pgmap v323: 177 pgs: 177 active+clean; 146 MiB data, 821 MiB used, 41 GiB / 42 GiB avail; 1.1 MiB/s rd, 12 KiB/s wr, 91 op/s
Feb 20 09:56:48 np0005625203.localdomain ceph-mon[296066]: osdmap e160: 6 total, 6 up, 6 in
Feb 20 09:56:48 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/10291474' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:56:48 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/10291474' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:56:48 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:48.888 262775 INFO neutron.agent.linux.ip_lib [None req-36d12053-a987-4a50-9291-5132e673c2b9 - - - - - -] Device tap4892d08c-bf cannot be used as it has no MAC address
Feb 20 09:56:48 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:48.956 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:48 np0005625203.localdomain kernel: device tap4892d08c-bf entered promiscuous mode
Feb 20 09:56:48 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:48Z|00333|binding|INFO|Claiming lport 4892d08c-bf99-4061-a176-87f9d1ab059a for this chassis.
Feb 20 09:56:48 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:48Z|00334|binding|INFO|4892d08c-bf99-4061-a176-87f9d1ab059a: Claiming unknown
Feb 20 09:56:48 np0005625203.localdomain systemd-udevd[322223]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 09:56:48 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:48.971 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:48 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581408.9752] manager: (tap4892d08c-bf): new Generic device (/org/freedesktop/NetworkManager/Devices/65)
Feb 20 09:56:48 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:48.979 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:1::2/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-71cd1515-da32-498e-ba64-00f812b850e2', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-71cd1515-da32-498e-ba64-00f812b850e2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '62f842a102bd4d84b1f4d275ec6dbea2', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ba019825-4fb3-4b05-b9d8-6facfda87191, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=4892d08c-bf99-4061-a176-87f9d1ab059a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:56:48 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:48.981 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 4892d08c-bf99-4061-a176-87f9d1ab059a in datapath 71cd1515-da32-498e-ba64-00f812b850e2 bound to our chassis
Feb 20 09:56:48 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:48.983 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 71cd1515-da32-498e-ba64-00f812b850e2 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:56:48 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:48.984 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[3da5a378-a565-486c-9ba6-9f96dfd687d1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:56:49 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap4892d08c-bf: No such device
Feb 20 09:56:49 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:49Z|00335|binding|INFO|Setting lport 4892d08c-bf99-4061-a176-87f9d1ab059a ovn-installed in OVS
Feb 20 09:56:49 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:49Z|00336|binding|INFO|Setting lport 4892d08c-bf99-4061-a176-87f9d1ab059a up in Southbound
Feb 20 09:56:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:49.013 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:49 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap4892d08c-bf: No such device
Feb 20 09:56:49 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap4892d08c-bf: No such device
Feb 20 09:56:49 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap4892d08c-bf: No such device
Feb 20 09:56:49 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap4892d08c-bf: No such device
Feb 20 09:56:49 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap4892d08c-bf: No such device
Feb 20 09:56:49 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap4892d08c-bf: No such device
Feb 20 09:56:49 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap4892d08c-bf: No such device
Feb 20 09:56:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:49.064 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:49.100 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:49 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:56:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:49.641 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:49 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:49.703 262775 INFO neutron.agent.linux.ip_lib [None req-9caafb90-3eed-4817-81e0-366305eff443 - - - - - -] Device tapade2f4d8-f3 cannot be used as it has no MAC address
Feb 20 09:56:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:49.737 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:49 np0005625203.localdomain kernel: device tapade2f4d8-f3 entered promiscuous mode
Feb 20 09:56:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:49.743 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:49 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581409.7442] manager: (tapade2f4d8-f3): new Generic device (/org/freedesktop/NetworkManager/Devices/66)
Feb 20 09:56:49 np0005625203.localdomain systemd-udevd[322225]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 09:56:49 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:49Z|00337|binding|INFO|Claiming lport ade2f4d8-f328-43df-991c-effca3df13ad for this chassis.
Feb 20 09:56:49 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:49Z|00338|binding|INFO|ade2f4d8-f328-43df-991c-effca3df13ad: Claiming unknown
Feb 20 09:56:49 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:49Z|00339|binding|INFO|Setting lport ade2f4d8-f328-43df-991c-effca3df13ad ovn-installed in OVS
Feb 20 09:56:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:49.756 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:49 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:49Z|00340|binding|INFO|Setting lport ade2f4d8-f328-43df-991c-effca3df13ad up in Southbound
Feb 20 09:56:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:49.761 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:49 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:49.766 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fecd:5e5d/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=ade2f4d8-f328-43df-991c-effca3df13ad) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:56:49 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:49.769 161112 INFO neutron.agent.ovn.metadata.agent [-] Port ade2f4d8-f328-43df-991c-effca3df13ad in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 bound to our chassis
Feb 20 09:56:49 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:49.776 161112 DEBUG neutron.agent.ovn.metadata.agent [-] Port 0580c8b1-60c2-485b-a1eb-266a7047f9f4 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Feb 20 09:56:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:49.777 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:49 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:49.777 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 811e2462-6872-485d-9c09-d2dd9cb25273, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:56:49 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:49.778 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[df62d152-3368-48a4-bdf0-e3e6f1368fe5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:56:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:49.834 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:49.876 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:50 np0005625203.localdomain podman[322313]: 
Feb 20 09:56:50 np0005625203.localdomain podman[322313]: 2026-02-20 09:56:50.059975862 +0000 UTC m=+0.103466462 container create ba8e33b33c69a5462c68402d5fbf697d3b03176ea2ce7b62d7b400ec7fa58d24 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-71cd1515-da32-498e-ba64-00f812b850e2, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:56:50 np0005625203.localdomain systemd[1]: Started libpod-conmon-ba8e33b33c69a5462c68402d5fbf697d3b03176ea2ce7b62d7b400ec7fa58d24.scope.
Feb 20 09:56:50 np0005625203.localdomain podman[322313]: 2026-02-20 09:56:50.006627411 +0000 UTC m=+0.050118061 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:56:50 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:56:50 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4aadfb25eeaf10993e7e6a2a3cebd2a3683cf6a92ef13669d11bf7d0721b6fcf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:56:50 np0005625203.localdomain podman[322313]: 2026-02-20 09:56:50.131629849 +0000 UTC m=+0.175120439 container init ba8e33b33c69a5462c68402d5fbf697d3b03176ea2ce7b62d7b400ec7fa58d24 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-71cd1515-da32-498e-ba64-00f812b850e2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 20 09:56:50 np0005625203.localdomain podman[322313]: 2026-02-20 09:56:50.141928657 +0000 UTC m=+0.185419247 container start ba8e33b33c69a5462c68402d5fbf697d3b03176ea2ce7b62d7b400ec7fa58d24 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-71cd1515-da32-498e-ba64-00f812b850e2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127)
Feb 20 09:56:50 np0005625203.localdomain dnsmasq[322338]: started, version 2.85 cachesize 150
Feb 20 09:56:50 np0005625203.localdomain dnsmasq[322338]: DNS service limited to local subnets
Feb 20 09:56:50 np0005625203.localdomain dnsmasq[322338]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:56:50 np0005625203.localdomain dnsmasq[322338]: warning: no upstream servers configured
Feb 20 09:56:50 np0005625203.localdomain dnsmasq-dhcp[322338]: DHCPv6, static leases only on 2001:db8:1::, lease time 1d
Feb 20 09:56:50 np0005625203.localdomain dnsmasq[322338]: read /var/lib/neutron/dhcp/71cd1515-da32-498e-ba64-00f812b850e2/addn_hosts - 0 addresses
Feb 20 09:56:50 np0005625203.localdomain dnsmasq-dhcp[322338]: read /var/lib/neutron/dhcp/71cd1515-da32-498e-ba64-00f812b850e2/host
Feb 20 09:56:50 np0005625203.localdomain dnsmasq-dhcp[322338]: read /var/lib/neutron/dhcp/71cd1515-da32-498e-ba64-00f812b850e2/opts
Feb 20 09:56:50 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:50.283 262775 INFO neutron.agent.dhcp.agent [None req-8d99a049-b892-4c62-9d52-87f188125e61 - - - - - -] DHCP configuration for ports {'33f589f3-de24-4888-b7ca-976f9b18801e'} is completed
Feb 20 09:56:50 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:56:50.727 2 INFO neutron.agent.securitygroups_rpc [None req-6b7b9439-974f-45e5-a614-5a8be0850c72 f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']
Feb 20 09:56:50 np0005625203.localdomain podman[322375]: 
Feb 20 09:56:50 np0005625203.localdomain podman[322375]: 2026-02-20 09:56:50.851938394 +0000 UTC m=+0.088347945 container create 010a07409a3c44064d192f798811204beedd8221cccd443f44f2b0965670907e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:56:50 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "9754040c-efe1-4b31-bb60-2e1a242177cd", "snap_name": "f96ea30b-5993-4393-8f64-efd08707fd5f", "format": "json"}]: dispatch
Feb 20 09:56:50 np0005625203.localdomain ceph-mon[296066]: pgmap v325: 177 pgs: 177 active+clean; 167 MiB data, 825 MiB used, 41 GiB / 42 GiB avail; 3.1 MiB/s rd, 892 KiB/s wr, 95 op/s
Feb 20 09:56:50 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "5d6cbe8b-e61d-44af-be13-78bc30a91a96", "snap_name": "da20d4b5-2abc-49bc-a78c-39ce3cdadf16_977615dd-c728-40c5-bc37-c455d4274398", "force": true, "format": "json"}]: dispatch
Feb 20 09:56:50 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "5d6cbe8b-e61d-44af-be13-78bc30a91a96", "snap_name": "da20d4b5-2abc-49bc-a78c-39ce3cdadf16", "force": true, "format": "json"}]: dispatch
Feb 20 09:56:50 np0005625203.localdomain systemd[1]: Started libpod-conmon-010a07409a3c44064d192f798811204beedd8221cccd443f44f2b0965670907e.scope.
Feb 20 09:56:50 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:56:50 np0005625203.localdomain podman[322375]: 2026-02-20 09:56:50.813515805 +0000 UTC m=+0.049925566 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:56:50 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eab061035edda88b827045819c2445dbe1ddf8a941231f8b80ddb55a397afae9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:56:50 np0005625203.localdomain podman[322375]: 2026-02-20 09:56:50.926924164 +0000 UTC m=+0.163333715 container init 010a07409a3c44064d192f798811204beedd8221cccd443f44f2b0965670907e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 20 09:56:50 np0005625203.localdomain podman[322375]: 2026-02-20 09:56:50.936084797 +0000 UTC m=+0.172494348 container start 010a07409a3c44064d192f798811204beedd8221cccd443f44f2b0965670907e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127)
Feb 20 09:56:50 np0005625203.localdomain dnsmasq[322393]: started, version 2.85 cachesize 150
Feb 20 09:56:50 np0005625203.localdomain dnsmasq[322393]: DNS service limited to local subnets
Feb 20 09:56:50 np0005625203.localdomain dnsmasq[322393]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:56:50 np0005625203.localdomain dnsmasq[322393]: warning: no upstream servers configured
Feb 20 09:56:50 np0005625203.localdomain dnsmasq[322393]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 0 addresses
Feb 20 09:56:51 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:51.015 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:51 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:51.181 262775 INFO neutron.agent.dhcp.agent [None req-275dd5b6-decd-4ddc-aeaf-6483b34e0e72 - - - - - -] DHCP configuration for ports {'cebd560e-7047-4cc1-9642-f5b7ec377d58'} is completed
Feb 20 09:56:51 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:56:51.312 2 INFO neutron.agent.securitygroups_rpc [None req-780282fe-fede-41a8-980f-26511e126244 f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']
Feb 20 09:56:51 np0005625203.localdomain dnsmasq[322393]: exiting on receipt of SIGTERM
Feb 20 09:56:51 np0005625203.localdomain podman[322410]: 2026-02-20 09:56:51.357325109 +0000 UTC m=+0.067957523 container kill 010a07409a3c44064d192f798811204beedd8221cccd443f44f2b0965670907e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:56:51 np0005625203.localdomain systemd[1]: libpod-010a07409a3c44064d192f798811204beedd8221cccd443f44f2b0965670907e.scope: Deactivated successfully.
Feb 20 09:56:51 np0005625203.localdomain podman[322424]: 2026-02-20 09:56:51.430699479 +0000 UTC m=+0.059141201 container died 010a07409a3c44064d192f798811204beedd8221cccd443f44f2b0965670907e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 20 09:56:51 np0005625203.localdomain podman[322424]: 2026-02-20 09:56:51.462156373 +0000 UTC m=+0.090598045 container cleanup 010a07409a3c44064d192f798811204beedd8221cccd443f44f2b0965670907e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:56:51 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:56:51 np0005625203.localdomain systemd[1]: libpod-conmon-010a07409a3c44064d192f798811204beedd8221cccd443f44f2b0965670907e.scope: Deactivated successfully.
Feb 20 09:56:51 np0005625203.localdomain podman[322426]: 2026-02-20 09:56:51.509630961 +0000 UTC m=+0.127725573 container remove 010a07409a3c44064d192f798811204beedd8221cccd443f44f2b0965670907e (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Feb 20 09:56:51 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-eab061035edda88b827045819c2445dbe1ddf8a941231f8b80ddb55a397afae9-merged.mount: Deactivated successfully.
Feb 20 09:56:51 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-010a07409a3c44064d192f798811204beedd8221cccd443f44f2b0965670907e-userdata-shm.mount: Deactivated successfully.
Feb 20 09:56:51 np0005625203.localdomain podman[322450]: 2026-02-20 09:56:51.579469192 +0000 UTC m=+0.095286859 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 09:56:51 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:56:51 np0005625203.localdomain podman[322450]: 2026-02-20 09:56:51.621471811 +0000 UTC m=+0.137289488 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 20 09:56:51 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 09:56:51 np0005625203.localdomain podman[322475]: 2026-02-20 09:56:51.780718537 +0000 UTC m=+0.179539135 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 20 09:56:51 np0005625203.localdomain podman[322475]: 2026-02-20 09:56:51.815350249 +0000 UTC m=+0.214170727 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 20 09:56:51 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:56:51 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e161 e161: 6 total, 6 up, 6 in
Feb 20 09:56:52 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:52.053 262775 ERROR neutron.agent.dhcp.agent [None req-a514b5b2-a572-4e9e-8c5b-0a727af1bbe3 - - - - - -] Unable to restart dhcp for 811e2462-6872-485d-9c09-d2dd9cb25273.: oslo_messaging.rpc.client.RemoteError: Remote error: SubnetInUse Unable to complete operation on subnet bc476e4a-8f0f-4f66-999b-f5a97039a23c: This subnet is being modified by another concurrent operation.
Feb 20 09:56:52 np0005625203.localdomain neutron_dhcp_agent[262771]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming\n    res = self.dispatcher.dispatch(message)\n', '  File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch\n    return self._do_dispatch(endpoint, method, ctxt, args)\n', '  File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch\n    result = func(ctxt, **new_args)\n', '  File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 244, in inner\n    return func(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 244, in inner\n    return func(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 142, in wrapped\n    setattr(e, \'_RETRY_EXCEEDED\', True)\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 138, in wrapped\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 190, in wrapped\n    context_reference.session.rollback()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 
227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 184, in wrapped\n    return f(*dup_args, **dup_kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutron/api/rpc/handlers/dhcp_rpc.py", line 329, in update_dhcp_port\n    return self._port_action(plugin, context, port, \'update_port\')\n', '  File "/usr/lib/python3.9/site-packages/neutron/api/rpc/handlers/dhcp_rpc.py", line 120, in _port_action\n    return plugin.update_port(context, port[\'id\'], port)\n', '  File "/usr/lib/python3.9/site-packages/neutron/common/utils.py", line 728, in inner\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 226, in wrapped\n    return f_with_retry(*args, **kwargs,\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 142, in wrapped\n    setattr(e, \'_RETRY_EXCEEDED\', True)\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 138, in wrapped\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 190, in wrapped\n    
context_reference.session.rollback()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 184, in wrapped\n    return f(*dup_args, **dup_kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutron/plugins/ml2/plugin.py", line 1868, in update_port\n    updated_port = super(Ml2Plugin, self).update_port(context, id,\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 224, in wrapped\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutron/db/db_base_plugin_v2.py", line 1557, in update_port\n    self.ipam.update_port(\n', '  File "/usr/lib/python3.9/site-packages/neutron/db/ipam_backend_mixin.py", line 729, in update_port\n    changes = self.update_port_with_ips(context,\n', '  File "/usr/lib/python3.9/site-packages/neutron/db/ipam_pluggable_backend.py", line 455, in update_port_with_ips\n    changes = self._update_ips_for_port(context,\n', '  File "/usr/lib/python3.9/site-packages/neutron/db/ipam_pluggable_backend.py", line 379, in _update_ips_for_port\n    subnets = self._ipam_get_subnets(\n', '  File "/usr/lib/python3.9/site-packages/neutron/db/ipam_backend_mixin.py", line 686, in _ipam_get_subnets\n    subnet.read_lock_register(\n', '  File "/usr/lib/python3.9/site-packages/neutron/db/models_v2.py", line 81, in read_lock_register\n    raise exception\n', 'neutron_lib.exceptions.SubnetInUse: Unable to complete operation on subnet bc476e4a-8f0f-4f66-999b-f5a97039a23c: This subnet is being modified by another concurrent operation.\n'].
Feb 20 09:56:52 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:52.053 262775 ERROR neutron.agent.dhcp.agent Traceback (most recent call last):
Feb 20 09:56:52 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:52.053 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver
Feb 20 09:56:52 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:52.053 262775 ERROR neutron.agent.dhcp.agent     rv = getattr(driver, action)(**action_kwargs)
Feb 20 09:56:52 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:52.053 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 207, in restart
Feb 20 09:56:52 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:52.053 262775 ERROR neutron.agent.dhcp.agent     self.enable()
Feb 20 09:56:52 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:52.053 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 324, in enable
Feb 20 09:56:52 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:52.053 262775 ERROR neutron.agent.dhcp.agent     common_utils.wait_until_true(self._enable, timeout=300)
Feb 20 09:56:52 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:52.053 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/common/utils.py", line 744, in wait_until_true
Feb 20 09:56:52 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:52.053 262775 ERROR neutron.agent.dhcp.agent     while not predicate():
Feb 20 09:56:52 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:52.053 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 336, in _enable
Feb 20 09:56:52 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:52.053 262775 ERROR neutron.agent.dhcp.agent     interface_name = self.device_manager.setup(
Feb 20 09:56:52 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:52.053 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1825, in setup
Feb 20 09:56:52 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:52.053 262775 ERROR neutron.agent.dhcp.agent     self.cleanup_stale_devices(network, dhcp_port=None)
Feb 20 09:56:52 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:52.053 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Feb 20 09:56:52 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:52.053 262775 ERROR neutron.agent.dhcp.agent     self.force_reraise()
Feb 20 09:56:52 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:52.053 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Feb 20 09:56:52 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:52.053 262775 ERROR neutron.agent.dhcp.agent     raise self.value
Feb 20 09:56:52 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:52.053 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1820, in setup
Feb 20 09:56:52 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:52.053 262775 ERROR neutron.agent.dhcp.agent     port = self.setup_dhcp_port(network, segment)
Feb 20 09:56:52 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:52.053 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1755, in setup_dhcp_port
Feb 20 09:56:52 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:52.053 262775 ERROR neutron.agent.dhcp.agent     dhcp_port = setup_method(network, device_id, dhcp_subnets)
Feb 20 09:56:52 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:52.053 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1660, in _setup_existing_dhcp_port
Feb 20 09:56:52 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:52.053 262775 ERROR neutron.agent.dhcp.agent     port = self.plugin.update_dhcp_port(
Feb 20 09:56:52 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:52.053 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 901, in update_dhcp_port
Feb 20 09:56:52 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:52.053 262775 ERROR neutron.agent.dhcp.agent     port = cctxt.call(self.context, 'update_dhcp_port',
Feb 20 09:56:52 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:52.053 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron_lib/rpc.py", line 157, in call
Feb 20 09:56:52 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:52.053 262775 ERROR neutron.agent.dhcp.agent     return self._original_context.call(ctxt, method, **kwargs)
Feb 20 09:56:52 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:52.053 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Feb 20 09:56:52 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:52.053 262775 ERROR neutron.agent.dhcp.agent     result = self.transport._send(
Feb 20 09:56:52 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:52.053 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Feb 20 09:56:52 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:52.053 262775 ERROR neutron.agent.dhcp.agent     return self._driver.send(target, ctxt, message,
Feb 20 09:56:52 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:52.053 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Feb 20 09:56:52 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:52.053 262775 ERROR neutron.agent.dhcp.agent     return self._send(target, ctxt, message, wait_for_reply, timeout,
Feb 20 09:56:52 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:52.053 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Feb 20 09:56:52 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:52.053 262775 ERROR neutron.agent.dhcp.agent     raise result
Feb 20 09:56:52 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:52.053 262775 ERROR neutron.agent.dhcp.agent oslo_messaging.rpc.client.RemoteError: Remote error: SubnetInUse Unable to complete operation on subnet bc476e4a-8f0f-4f66-999b-f5a97039a23c: This subnet is being modified by another concurrent operation.
Feb 20 09:56:52 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:52.053 262775 ERROR neutron.agent.dhcp.agent ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming\n    res = self.dispatcher.dispatch(message)\n', '  File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch\n    return self._do_dispatch(endpoint, method, ctxt, args)\n', '  File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch\n    result = func(ctxt, **new_args)\n', '  File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 244, in inner\n    return func(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 244, in inner\n    return func(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 142, in wrapped\n    setattr(e, \'_RETRY_EXCEEDED\', True)\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 138, in wrapped\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 190, in wrapped\n    context_reference.session.rollback()\n', '  File 
"/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 184, in wrapped\n    return f(*dup_args, **dup_kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutron/api/rpc/handlers/dhcp_rpc.py", line 329, in update_dhcp_port\n    return self._port_action(plugin, context, port, \'update_port\')\n', '  File "/usr/lib/python3.9/site-packages/neutron/api/rpc/handlers/dhcp_rpc.py", line 120, in _port_action\n    return plugin.update_port(context, port[\'id\'], port)\n', '  File "/usr/lib/python3.9/site-packages/neutron/common/utils.py", line 728, in inner\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 226, in wrapped\n    return f_with_retry(*args, **kwargs,\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 142, in wrapped\n    setattr(e, \'_RETRY_EXCEEDED\', True)\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 138, in wrapped\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File 
"/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 190, in wrapped\n    context_reference.session.rollback()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 184, in wrapped\n    return f(*dup_args, **dup_kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutron/plugins/ml2/plugin.py", line 1868, in update_port\n    updated_port = super(Ml2Plugin, self).update_port(context, id,\n', '  File "/usr/lib/python3.9/site-packages/neutron_lib/db/api.py", line 224, in wrapped\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/neutron/db/db_base_plugin_v2.py", line 1557, in update_port\n    self.ipam.update_port(\n', '  File "/usr/lib/python3.9/site-packages/neutron/db/ipam_backend_mixin.py", line 729, in update_port\n    changes = self.update_port_with_ips(context,\n', '  File "/usr/lib/python3.9/site-packages/neutron/db/ipam_pluggable_backend.py", line 455, in update_port_with_ips\n    changes = self._update_ips_for_port(context,\n', '  File "/usr/lib/python3.9/site-packages/neutron/db/ipam_pluggable_backend.py", line 379, in _update_ips_for_port\n    subnets = self._ipam_get_subnets(\n', '  File "/usr/lib/python3.9/site-packages/neutron/db/ipam_backend_mixin.py", line 686, in _ipam_get_subnets\n    subnet.read_lock_register(\n', '  File "/usr/lib/python3.9/site-packages/neutron/db/models_v2.py", line 81, in read_lock_register\n    raise exception\n', 'neutron_lib.exceptions.SubnetInUse: Unable to complete operation on subnet bc476e4a-8f0f-4f66-999b-f5a97039a23c: This subnet is being modified by another concurrent operation.\n'].
Feb 20 09:56:52 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:52.053 262775 ERROR neutron.agent.dhcp.agent 
Feb 20 09:56:52 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:52.057 262775 INFO neutron.agent.dhcp.agent [None req-cac4eebb-3bb1-4db7-8118-9ccc92c6f503 - - - - - -] Synchronizing state
Feb 20 09:56:52 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:52.267 262775 INFO neutron.agent.dhcp.agent [None req-9da2ae79-4614-4cd3-9a31-490cc1129a07 - - - - - -] All active networks have been fetched through RPC.
Feb 20 09:56:52 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:52.268 262775 INFO neutron.agent.dhcp.agent [-] Starting network 811e2462-6872-485d-9c09-d2dd9cb25273 dhcp configuration
Feb 20 09:56:52 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:52.269 262775 INFO neutron.agent.dhcp.agent [-] Finished network 811e2462-6872-485d-9c09-d2dd9cb25273 dhcp configuration
Feb 20 09:56:52 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:52.270 262775 INFO neutron.agent.dhcp.agent [None req-9da2ae79-4614-4cd3-9a31-490cc1129a07 - - - - - -] Synchronizing state complete
Feb 20 09:56:52 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:52.307 262775 INFO neutron.agent.dhcp.agent [None req-6917427d-fe8d-43d3-a921-f446b4a296a9 - - - - - -] DHCP configuration for ports {'99df4d08-dcc6-4fa8-a367-eaee6c86a57c', 'cebd560e-7047-4cc1-9642-f5b7ec377d58', 'ade2f4d8-f328-43df-991c-effca3df13ad'} is completed
Feb 20 09:56:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:52.382 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:52 np0005625203.localdomain kernel: device tapade2f4d8-f3 left promiscuous mode
Feb 20 09:56:52 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:52Z|00341|binding|INFO|Releasing lport ade2f4d8-f328-43df-991c-effca3df13ad from this chassis (sb_readonly=0)
Feb 20 09:56:52 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:52Z|00342|binding|INFO|Setting lport ade2f4d8-f328-43df-991c-effca3df13ad down in Southbound
Feb 20 09:56:52 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:52.388 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fecd:5e5d/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625203.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=ade2f4d8-f328-43df-991c-effca3df13ad) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:56:52 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:52.390 161112 INFO neutron.agent.ovn.metadata.agent [-] Port ade2f4d8-f328-43df-991c-effca3df13ad in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 unbound from our chassis
Feb 20 09:56:52 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:52.392 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 811e2462-6872-485d-9c09-d2dd9cb25273, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:56:52 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:52.393 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[3f399237-9b60-4a0d-b81d-996ad95c7621]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:56:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:52.401 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:52 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:52.450 262775 INFO neutron.agent.dhcp.agent [None req-716e216a-2106-4186-afb9-3314c2a33142 - - - - - -] DHCP configuration for ports {'cebd560e-7047-4cc1-9642-f5b7ec377d58'} is completed
Feb 20 09:56:52 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:52.577 262775 INFO neutron.agent.dhcp.agent [None req-ded79c10-4a88-411a-8ff6-5df768572c41 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:56:52 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:52.578 262775 INFO neutron.agent.dhcp.agent [None req-ded79c10-4a88-411a-8ff6-5df768572c41 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:56:52 np0005625203.localdomain systemd[1]: run-netns-qdhcp\x2d811e2462\x2d6872\x2d485d\x2d9c09\x2dd2dd9cb25273.mount: Deactivated successfully.
Feb 20 09:56:52 np0005625203.localdomain ceph-mon[296066]: pgmap v326: 177 pgs: 177 active+clean; 203 MiB data, 859 MiB used, 41 GiB / 42 GiB avail; 4.8 MiB/s rd, 2.6 MiB/s wr, 146 op/s
Feb 20 09:56:52 np0005625203.localdomain ceph-mon[296066]: osdmap e161: 6 total, 6 up, 6 in
Feb 20 09:56:52 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/4023810761' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:56:52 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/4023810761' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:56:53 np0005625203.localdomain dnsmasq[322338]: exiting on receipt of SIGTERM
Feb 20 09:56:53 np0005625203.localdomain systemd[1]: libpod-ba8e33b33c69a5462c68402d5fbf697d3b03176ea2ce7b62d7b400ec7fa58d24.scope: Deactivated successfully.
Feb 20 09:56:53 np0005625203.localdomain podman[322517]: 2026-02-20 09:56:53.12802139 +0000 UTC m=+0.049571674 container kill ba8e33b33c69a5462c68402d5fbf697d3b03176ea2ce7b62d7b400ec7fa58d24 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-71cd1515-da32-498e-ba64-00f812b850e2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 20 09:56:53 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:53Z|00343|binding|INFO|Removing iface tap4892d08c-bf ovn-installed in OVS
Feb 20 09:56:53 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:53Z|00344|binding|INFO|Removing lport 4892d08c-bf99-4061-a176-87f9d1ab059a ovn-installed in OVS
Feb 20 09:56:53 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:53.145 161112 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 2f27546f-8ab1-4da4-8477-6ded912ab90d with type ""
Feb 20 09:56:53 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:53.146 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:1::2/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-71cd1515-da32-498e-ba64-00f812b850e2', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-71cd1515-da32-498e-ba64-00f812b850e2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '62f842a102bd4d84b1f4d275ec6dbea2', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625203.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ba019825-4fb3-4b05-b9d8-6facfda87191, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=4892d08c-bf99-4061-a176-87f9d1ab059a) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:56:53 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:53.147 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:53 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:53.149 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 4892d08c-bf99-4061-a176-87f9d1ab059a in datapath 71cd1515-da32-498e-ba64-00f812b850e2 unbound from our chassis
Feb 20 09:56:53 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:53.151 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 71cd1515-da32-498e-ba64-00f812b850e2 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:56:53 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:53.152 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[9250749c-6c18-4f04-9d51-3f6ae137679c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:56:53 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:53.154 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:53 np0005625203.localdomain podman[322536]: 2026-02-20 09:56:53.203423933 +0000 UTC m=+0.052908358 container died ba8e33b33c69a5462c68402d5fbf697d3b03176ea2ce7b62d7b400ec7fa58d24 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-71cd1515-da32-498e-ba64-00f812b850e2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 20 09:56:53 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ba8e33b33c69a5462c68402d5fbf697d3b03176ea2ce7b62d7b400ec7fa58d24-userdata-shm.mount: Deactivated successfully.
Feb 20 09:56:53 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-4aadfb25eeaf10993e7e6a2a3cebd2a3683cf6a92ef13669d11bf7d0721b6fcf-merged.mount: Deactivated successfully.
Feb 20 09:56:53 np0005625203.localdomain podman[322536]: 2026-02-20 09:56:53.301380713 +0000 UTC m=+0.150865088 container remove ba8e33b33c69a5462c68402d5fbf697d3b03176ea2ce7b62d7b400ec7fa58d24 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-71cd1515-da32-498e-ba64-00f812b850e2, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Feb 20 09:56:53 np0005625203.localdomain systemd[1]: libpod-conmon-ba8e33b33c69a5462c68402d5fbf697d3b03176ea2ce7b62d7b400ec7fa58d24.scope: Deactivated successfully.
Feb 20 09:56:53 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:53.315 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:53 np0005625203.localdomain kernel: device tap4892d08c-bf left promiscuous mode
Feb 20 09:56:53 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:53.331 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:53 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:53.344 262775 INFO neutron.agent.dhcp.agent [None req-2033fe82-0686-4b29-8086-3694166db24c - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:56:53 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:53.345 262775 INFO neutron.agent.dhcp.agent [None req-2033fe82-0686-4b29-8086-3694166db24c - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:56:53 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:53.554 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:53 np0005625203.localdomain systemd[1]: run-netns-qdhcp\x2d71cd1515\x2dda32\x2d498e\x2dba64\x2d00f812b850e2.mount: Deactivated successfully.
Feb 20 09:56:53 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:56:53.865 2 INFO neutron.agent.securitygroups_rpc [None req-db944330-da94-4d69-be05-7c7a8491a44e 469c956856df4f7fa0303f662df3cdef 62dd1d7e0cf547678612304aba2895e2 - - default default] Security group member updated ['72ed92b6-af24-4274-854b-a52220405faf']
Feb 20 09:56:53 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "9754040c-efe1-4b31-bb60-2e1a242177cd", "snap_name": "2b637ee9-db13-447d-b623-b978babd5cfe", "format": "json"}]: dispatch
Feb 20 09:56:53 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5d6cbe8b-e61d-44af-be13-78bc30a91a96", "format": "json"}]: dispatch
Feb 20 09:56:53 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5d6cbe8b-e61d-44af-be13-78bc30a91a96", "force": true, "format": "json"}]: dispatch
Feb 20 09:56:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #40. Immutable memtables: 0.
Feb 20 09:56:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:56:53.936973) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 20 09:56:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 40
Feb 20 09:56:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581413937094, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 2683, "num_deletes": 265, "total_data_size": 5071826, "memory_usage": 5129296, "flush_reason": "Manual Compaction"}
Feb 20 09:56:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #41: started
Feb 20 09:56:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581413957371, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 41, "file_size": 3309686, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23365, "largest_seqno": 26043, "table_properties": {"data_size": 3299048, "index_size": 6823, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2757, "raw_key_size": 23780, "raw_average_key_size": 22, "raw_value_size": 3277368, "raw_average_value_size": 3043, "num_data_blocks": 288, "num_entries": 1077, "num_filter_entries": 1077, "num_deletions": 265, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771581267, "oldest_key_time": 1771581267, "file_creation_time": 1771581413, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d7091244-ec7b-4a4c-98bf-27480b1bb7f4", "db_session_id": "XSHOJ401GNN3F43CMPC3", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Feb 20 09:56:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 20461 microseconds, and 9336 cpu microseconds.
Feb 20 09:56:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 20 09:56:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:56:53.957448) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #41: 3309686 bytes OK
Feb 20 09:56:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:56:53.957483) [db/memtable_list.cc:519] [default] Level-0 commit table #41 started
Feb 20 09:56:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:56:53.959106) [db/memtable_list.cc:722] [default] Level-0 commit table #41: memtable #1 done
Feb 20 09:56:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:56:53.959156) EVENT_LOG_v1 {"time_micros": 1771581413959148, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 20 09:56:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:56:53.959190) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 20 09:56:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 5059584, prev total WAL file size 5059584, number of live WAL files 2.
Feb 20 09:56:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000037.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 09:56:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:56:53.960641) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003131373937' seq:72057594037927935, type:22 .. '7061786F73003132303439' seq:0, type:0; will stop at (end)
Feb 20 09:56:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 20 09:56:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [41(3232KB)], [39(15MB)]
Feb 20 09:56:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581413960710, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [41], "files_L6": [39], "score": -1, "input_data_size": 19223447, "oldest_snapshot_seqno": -1}
Feb 20 09:56:54 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #42: 12730 keys, 17974235 bytes, temperature: kUnknown
Feb 20 09:56:54 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581414039920, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 42, "file_size": 17974235, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17900564, "index_size": 40738, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31877, "raw_key_size": 340478, "raw_average_key_size": 26, "raw_value_size": 17682909, "raw_average_value_size": 1389, "num_data_blocks": 1552, "num_entries": 12730, "num_filter_entries": 12730, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580698, "oldest_key_time": 0, "file_creation_time": 1771581413, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d7091244-ec7b-4a4c-98bf-27480b1bb7f4", "db_session_id": "XSHOJ401GNN3F43CMPC3", "orig_file_number": 42, "seqno_to_time_mapping": "N/A"}}
Feb 20 09:56:54 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 20 09:56:54 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:56:54.040234) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 17974235 bytes
Feb 20 09:56:54 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:56:54.042083) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 242.3 rd, 226.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 15.2 +0.0 blob) out(17.1 +0.0 blob), read-write-amplify(11.2) write-amplify(5.4) OK, records in: 13278, records dropped: 548 output_compression: NoCompression
Feb 20 09:56:54 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:56:54.042110) EVENT_LOG_v1 {"time_micros": 1771581414042098, "job": 22, "event": "compaction_finished", "compaction_time_micros": 79326, "compaction_time_cpu_micros": 49677, "output_level": 6, "num_output_files": 1, "total_output_size": 17974235, "num_input_records": 13278, "num_output_records": 12730, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 20 09:56:54 np0005625203.localdomain ceph-mon[296066]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 09:56:54 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581414042743, "job": 22, "event": "table_file_deletion", "file_number": 41}
Feb 20 09:56:54 np0005625203.localdomain ceph-mon[296066]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000039.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 09:56:54 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581414045335, "job": 22, "event": "table_file_deletion", "file_number": 39}
Feb 20 09:56:54 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:56:53.960454) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:56:54 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:56:54.045437) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:56:54 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:56:54.045446) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:56:54 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:56:54.045450) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:56:54 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:56:54.045454) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:56:54 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:56:54.045458) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:56:54 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:56:54 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:54.364 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:56:54 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:54.367 262775 INFO neutron.agent.linux.ip_lib [None req-15e4d6db-3795-4756-84b1-2e3a2e33b031 - - - - - -] Device tapcecf2d82-64 cannot be used as it has no MAC address
Feb 20 09:56:54 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:54.394 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:54 np0005625203.localdomain kernel: device tapcecf2d82-64 entered promiscuous mode
Feb 20 09:56:54 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581414.4025] manager: (tapcecf2d82-64): new Generic device (/org/freedesktop/NetworkManager/Devices/67)
Feb 20 09:56:54 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:54.405 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:54 np0005625203.localdomain systemd-udevd[322580]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 09:56:54 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:54Z|00345|binding|INFO|Claiming lport cecf2d82-64df-4464-ac0e-5036304106c3 for this chassis.
Feb 20 09:56:54 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:54Z|00346|binding|INFO|cecf2d82-64df-4464-ac0e-5036304106c3: Claiming unknown
Feb 20 09:56:54 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:54.416 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe8d:3479/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=cecf2d82-64df-4464-ac0e-5036304106c3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:56:54 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:54.419 161112 INFO neutron.agent.ovn.metadata.agent [-] Port cecf2d82-64df-4464-ac0e-5036304106c3 in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 bound to our chassis
Feb 20 09:56:54 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:54.422 161112 DEBUG neutron.agent.ovn.metadata.agent [-] Port cd197e81-2c96-4b7c-b994-ff04d09e8a38 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Feb 20 09:56:54 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:54.423 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 811e2462-6872-485d-9c09-d2dd9cb25273, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:56:54 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:54.424 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[1bd67684-4d9e-4f0b-b387-827c3c716561]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:56:54 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:54Z|00347|binding|INFO|Setting lport cecf2d82-64df-4464-ac0e-5036304106c3 ovn-installed in OVS
Feb 20 09:56:54 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:54Z|00348|binding|INFO|Setting lport cecf2d82-64df-4464-ac0e-5036304106c3 up in Southbound
Feb 20 09:56:54 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:54.438 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:54 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:54.483 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:54 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:54.525 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:54 np0005625203.localdomain podman[322589]: 2026-02-20 09:56:54.526057062 +0000 UTC m=+0.053538628 container kill c2626ef125cb78b4b04bfb7a75f85a462cab4374f04acb4accbfdbc1e9e5ec34 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a7a7943f-debf-416d-a866-6eb948d4f229, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127)
Feb 20 09:56:54 np0005625203.localdomain dnsmasq[321682]: exiting on receipt of SIGTERM
Feb 20 09:56:54 np0005625203.localdomain systemd[1]: libpod-c2626ef125cb78b4b04bfb7a75f85a462cab4374f04acb4accbfdbc1e9e5ec34.scope: Deactivated successfully.
Feb 20 09:56:54 np0005625203.localdomain podman[322606]: 2026-02-20 09:56:54.60841801 +0000 UTC m=+0.068459029 container died c2626ef125cb78b4b04bfb7a75f85a462cab4374f04acb4accbfdbc1e9e5ec34 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a7a7943f-debf-416d-a866-6eb948d4f229, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 20 09:56:54 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c2626ef125cb78b4b04bfb7a75f85a462cab4374f04acb4accbfdbc1e9e5ec34-userdata-shm.mount: Deactivated successfully.
Feb 20 09:56:54 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-440af7b1eae268cd10cc14626d115762c1d457f06318c86a7b1ee50825ad6eb5-merged.mount: Deactivated successfully.
Feb 20 09:56:54 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:54.643 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:54 np0005625203.localdomain podman[322606]: 2026-02-20 09:56:54.647982884 +0000 UTC m=+0.108023853 container cleanup c2626ef125cb78b4b04bfb7a75f85a462cab4374f04acb4accbfdbc1e9e5ec34 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a7a7943f-debf-416d-a866-6eb948d4f229, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 20 09:56:54 np0005625203.localdomain systemd[1]: libpod-conmon-c2626ef125cb78b4b04bfb7a75f85a462cab4374f04acb4accbfdbc1e9e5ec34.scope: Deactivated successfully.
Feb 20 09:56:54 np0005625203.localdomain podman[322608]: 2026-02-20 09:56:54.686237418 +0000 UTC m=+0.134669228 container remove c2626ef125cb78b4b04bfb7a75f85a462cab4374f04acb4accbfdbc1e9e5ec34 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a7a7943f-debf-416d-a866-6eb948d4f229, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Feb 20 09:56:54 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:56:54.775 2 INFO neutron.agent.securitygroups_rpc [None req-2eacea7e-ffc4-4411-bf47-38f06768c1ff f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']
Feb 20 09:56:54 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:54.893 262775 INFO neutron.agent.dhcp.agent [None req-f953adab-4588-4335-b0b7-3574ca26e2eb - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:56:54 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e162 e162: 6 total, 6 up, 6 in
Feb 20 09:56:54 np0005625203.localdomain ceph-mon[296066]: pgmap v328: 177 pgs: 177 active+clean; 203 MiB data, 859 MiB used, 41 GiB / 42 GiB avail; 4.2 MiB/s rd, 2.8 MiB/s wr, 121 op/s
Feb 20 09:56:54 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/2065491436' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:56:54 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/2065491436' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:56:54 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 20 09:56:54 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1035388618' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:56:54 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 20 09:56:54 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1035388618' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:56:55 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:56:55.102 2 INFO neutron.agent.securitygroups_rpc [None req-fc9a7b92-030e-427f-b2ca-50d327cf6718 f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']
Feb 20 09:56:55 np0005625203.localdomain podman[322679]: 
Feb 20 09:56:55 np0005625203.localdomain podman[322679]: 2026-02-20 09:56:55.525780601 +0000 UTC m=+0.089825590 container create c14abf7f3447fb05ea4188fca73641ccc972747c7618d2c4b159d92b6ed1f66f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb 20 09:56:55 np0005625203.localdomain systemd[1]: Started libpod-conmon-c14abf7f3447fb05ea4188fca73641ccc972747c7618d2c4b159d92b6ed1f66f.scope.
Feb 20 09:56:55 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:56:55 np0005625203.localdomain podman[322679]: 2026-02-20 09:56:55.484107432 +0000 UTC m=+0.048152451 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:56:55 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b8dd25d87d5c1c06648d2f455e6f97783816917887d22dee804dac29f5d499e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:56:55 np0005625203.localdomain podman[322679]: 2026-02-20 09:56:55.592826025 +0000 UTC m=+0.156871024 container init c14abf7f3447fb05ea4188fca73641ccc972747c7618d2c4b159d92b6ed1f66f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 20 09:56:55 np0005625203.localdomain podman[322679]: 2026-02-20 09:56:55.600923436 +0000 UTC m=+0.164968435 container start c14abf7f3447fb05ea4188fca73641ccc972747c7618d2c4b159d92b6ed1f66f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:56:55 np0005625203.localdomain dnsmasq[322697]: started, version 2.85 cachesize 150
Feb 20 09:56:55 np0005625203.localdomain dnsmasq[322697]: DNS service limited to local subnets
Feb 20 09:56:55 np0005625203.localdomain dnsmasq[322697]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:56:55 np0005625203.localdomain dnsmasq[322697]: warning: no upstream servers configured
Feb 20 09:56:55 np0005625203.localdomain dnsmasq-dhcp[322697]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Feb 20 09:56:55 np0005625203.localdomain dnsmasq[322697]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 0 addresses
Feb 20 09:56:55 np0005625203.localdomain dnsmasq-dhcp[322697]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/host
Feb 20 09:56:55 np0005625203.localdomain dnsmasq-dhcp[322697]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/opts
Feb 20 09:56:55 np0005625203.localdomain systemd[1]: run-netns-qdhcp\x2da7a7943f\x2ddebf\x2d416d\x2da866\x2d6eb948d4f229.mount: Deactivated successfully.
Feb 20 09:56:55 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:55.824 262775 INFO neutron.agent.dhcp.agent [None req-e610e86e-048b-42eb-8067-67fd905ae900 - - - - - -] DHCP configuration for ports {'cebd560e-7047-4cc1-9642-f5b7ec377d58'} is completed
Feb 20 09:56:55 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:55.925 262775 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:56:55 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e163 e163: 6 total, 6 up, 6 in
Feb 20 09:56:55 np0005625203.localdomain ceph-mon[296066]: osdmap e162: 6 total, 6 up, 6 in
Feb 20 09:56:55 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/1035388618' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:56:55 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/1035388618' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:56:55 np0005625203.localdomain ceph-mon[296066]: osdmap e163: 6 total, 6 up, 6 in
Feb 20 09:56:56 np0005625203.localdomain dnsmasq[322697]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 0 addresses
Feb 20 09:56:56 np0005625203.localdomain dnsmasq-dhcp[322697]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/host
Feb 20 09:56:56 np0005625203.localdomain dnsmasq-dhcp[322697]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/opts
Feb 20 09:56:56 np0005625203.localdomain podman[322715]: 2026-02-20 09:56:56.011192928 +0000 UTC m=+0.075267968 container kill c14abf7f3447fb05ea4188fca73641ccc972747c7618d2c4b159d92b6ed1f66f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 20 09:56:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:56.047 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:56.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:56:56 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:56.351 262775 INFO neutron.agent.dhcp.agent [None req-92d6ae7d-53a1-4509-aad3-811df7cb61b9 - - - - - -] DHCP configuration for ports {'cecf2d82-64df-4464-ac0e-5036304106c3', 'cebd560e-7047-4cc1-9642-f5b7ec377d58'} is completed
Feb 20 09:56:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:56.358 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:56:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:56.359 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:56:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:56.359 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:56:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:56.360 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Auditing locally available compute resources for np0005625203.localdomain (node: np0005625203.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 20 09:56:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:56.361 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:56:56 np0005625203.localdomain dnsmasq[322697]: exiting on receipt of SIGTERM
Feb 20 09:56:56 np0005625203.localdomain podman[322754]: 2026-02-20 09:56:56.451068618 +0000 UTC m=+0.062827205 container kill c14abf7f3447fb05ea4188fca73641ccc972747c7618d2c4b159d92b6ed1f66f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:56:56 np0005625203.localdomain systemd[1]: libpod-c14abf7f3447fb05ea4188fca73641ccc972747c7618d2c4b159d92b6ed1f66f.scope: Deactivated successfully.
Feb 20 09:56:56 np0005625203.localdomain podman[322767]: 2026-02-20 09:56:56.527018788 +0000 UTC m=+0.062016961 container died c14abf7f3447fb05ea4188fca73641ccc972747c7618d2c4b159d92b6ed1f66f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 20 09:56:56 np0005625203.localdomain podman[322767]: 2026-02-20 09:56:56.55844326 +0000 UTC m=+0.093441433 container cleanup c14abf7f3447fb05ea4188fca73641ccc972747c7618d2c4b159d92b6ed1f66f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 20 09:56:56 np0005625203.localdomain systemd[1]: libpod-conmon-c14abf7f3447fb05ea4188fca73641ccc972747c7618d2c4b159d92b6ed1f66f.scope: Deactivated successfully.
Feb 20 09:56:56 np0005625203.localdomain podman[322769]: 2026-02-20 09:56:56.594497725 +0000 UTC m=+0.123877394 container remove c14abf7f3447fb05ea4188fca73641ccc972747c7618d2c4b159d92b6ed1f66f (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Feb 20 09:56:56 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:56Z|00349|binding|INFO|Releasing lport cecf2d82-64df-4464-ac0e-5036304106c3 from this chassis (sb_readonly=0)
Feb 20 09:56:56 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:56Z|00350|binding|INFO|Setting lport cecf2d82-64df-4464-ac0e-5036304106c3 down in Southbound
Feb 20 09:56:56 np0005625203.localdomain kernel: device tapcecf2d82-64 left promiscuous mode
Feb 20 09:56:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:56.607 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:56 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:56.614 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe8d:3479/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=cecf2d82-64df-4464-ac0e-5036304106c3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:56:56 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:56.616 161112 INFO neutron.agent.ovn.metadata.agent [-] Port cecf2d82-64df-4464-ac0e-5036304106c3 in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 unbound from our chassis
Feb 20 09:56:56 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:56.619 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 811e2462-6872-485d-9c09-d2dd9cb25273, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:56:56 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-5b8dd25d87d5c1c06648d2f455e6f97783816917887d22dee804dac29f5d499e-merged.mount: Deactivated successfully.
Feb 20 09:56:56 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c14abf7f3447fb05ea4188fca73641ccc972747c7618d2c4b159d92b6ed1f66f-userdata-shm.mount: Deactivated successfully.
Feb 20 09:56:56 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:56.621 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[92ce2dfe-4b4c-472d-b25a-6be2f067556f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:56:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:56.626 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:56 np0005625203.localdomain systemd[1]: run-netns-qdhcp\x2d811e2462\x2d6872\x2d485d\x2d9c09\x2dd2dd9cb25273.mount: Deactivated successfully.
Feb 20 09:56:56 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 20 09:56:56 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/981625636' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:56:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:56.862 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:56:57 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e164 e164: 6 total, 6 up, 6 in
Feb 20 09:56:57 np0005625203.localdomain ceph-mon[296066]: pgmap v330: 177 pgs: 177 active+clean; 193 MiB data, 913 MiB used, 41 GiB / 42 GiB avail; 4.4 MiB/s rd, 5.6 MiB/s wr, 243 op/s
Feb 20 09:56:57 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/981625636' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:56:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:57.101 279640 WARNING nova.virt.libvirt.driver [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 20 09:56:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:57.103 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Hypervisor/Node resource view: name=np0005625203.localdomain free_ram=11707MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 20 09:56:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:57.103 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:56:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:57.104 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:56:57 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:57.226 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'c2:c2:14', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '46:d0:69:88:97:47'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:56:57 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:57.228 161112 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 20 09:56:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:57.227 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:57 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:57.267 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f0:4e:b0 2001:db8:0:1:f816:3eff:fef0:4eb0'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fef0:4eb0/64', 'neutron:device_id': 'ovnmeta-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '30', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=cebd560e-7047-4cc1-9642-f5b7ec377d58) old=Port_Binding(mac=['fa:16:3e:f0:4e:b0 2001:db8::f816:3eff:fef0:4eb0'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fef0:4eb0/64', 'neutron:device_id': 'ovnmeta-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '28', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:56:57 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:57.268 161112 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port cebd560e-7047-4cc1-9642-f5b7ec377d58 in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 updated
Feb 20 09:56:57 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:57.271 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 811e2462-6872-485d-9c09-d2dd9cb25273, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:56:57 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:57.272 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[05f46f02-f8f0-443a-b65f-57c7e373564b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:56:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:57.361 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 20 09:56:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:57.362 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Final resource view: name=np0005625203.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 20 09:56:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:57.427 279640 DEBUG nova.scheduler.client.report [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Refreshing inventories for resource provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 20 09:56:57 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:56:57.482 2 INFO neutron.agent.securitygroups_rpc [None req-8da4750a-6dfd-49c6-9c80-6371938bf016 469c956856df4f7fa0303f662df3cdef 62dd1d7e0cf547678612304aba2895e2 - - default default] Security group member updated ['72ed92b6-af24-4274-854b-a52220405faf']
Feb 20 09:56:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:57.489 279640 DEBUG nova.scheduler.client.report [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Updating ProviderTree inventory for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 20 09:56:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:57.489 279640 DEBUG nova.compute.provider_tree [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Updating inventory in ProviderTree for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 20 09:56:57 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:57.490 262775 INFO neutron.agent.linux.ip_lib [None req-561b7173-b8be-4b9f-abd9-4a48ac3f7d13 - - - - - -] Device tapde723a3d-64 cannot be used as it has no MAC address
Feb 20 09:56:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:57.505 279640 DEBUG nova.scheduler.client.report [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Refreshing aggregate associations for resource provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 20 09:56:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:57.513 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:57 np0005625203.localdomain kernel: device tapde723a3d-64 entered promiscuous mode
Feb 20 09:56:57 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581417.5218] manager: (tapde723a3d-64): new Generic device (/org/freedesktop/NetworkManager/Devices/68)
Feb 20 09:56:57 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:57Z|00351|binding|INFO|Claiming lport de723a3d-646d-47a1-8c6f-cb3dbeb07a15 for this chassis.
Feb 20 09:56:57 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:57Z|00352|binding|INFO|de723a3d-646d-47a1-8c6f-cb3dbeb07a15: Claiming unknown
Feb 20 09:56:57 np0005625203.localdomain systemd-udevd[322828]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 09:56:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:57.525 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:57.529 279640 DEBUG nova.scheduler.client.report [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Refreshing trait associations for resource provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e, traits: COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_ABM,HW_CPU_X86_SSE42,HW_CPU_X86_MMX,HW_CPU_X86_AMD_SVM,HW_CPU_X86_BMI,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VOLUME_EXTEND,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSSE3,HW_CPU_X86_CLMUL,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE4A,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_F16C,HW_CPU_X86_SSE2,COMPUTE_NODE,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 20 09:56:57 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:57.535 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe87:3915/64 2001:db8::2/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=de723a3d-646d-47a1-8c6f-cb3dbeb07a15) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:56:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:57.535 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:57 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:57Z|00353|binding|INFO|Setting lport de723a3d-646d-47a1-8c6f-cb3dbeb07a15 ovn-installed in OVS
Feb 20 09:56:57 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:56:57Z|00354|binding|INFO|Setting lport de723a3d-646d-47a1-8c6f-cb3dbeb07a15 up in Southbound
Feb 20 09:56:57 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:57.539 161112 INFO neutron.agent.ovn.metadata.agent [-] Port de723a3d-646d-47a1-8c6f-cb3dbeb07a15 in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 bound to our chassis
Feb 20 09:56:57 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:57.542 161112 DEBUG neutron.agent.ovn.metadata.agent [-] Port 7010d0bd-c77b-4ca1-a1d1-a10b0b3441ef IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Feb 20 09:56:57 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:57.543 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 811e2462-6872-485d-9c09-d2dd9cb25273, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:56:57 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:56:57.544 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[0ef30302-2d73-464c-a106-f189020635fc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:56:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:57.562 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:57.566 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:56:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:57.607 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:57.631 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:58 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 20 09:56:58 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/939907693' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:56:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:58.020 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:56:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:58.026 279640 DEBUG nova.compute.provider_tree [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Inventory has not changed in ProviderTree for provider: e5d5157a-2df2-4f51-b5fb-cd2da3a8584e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 20 09:56:58 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "9754040c-efe1-4b31-bb60-2e1a242177cd", "snap_name": "0b99d555-2c83-4adc-81b2-c57674191232", "format": "json"}]: dispatch
Feb 20 09:56:58 np0005625203.localdomain ceph-mon[296066]: osdmap e164: 6 total, 6 up, 6 in
Feb 20 09:56:58 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/939907693' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:56:58 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e165 e165: 6 total, 6 up, 6 in
Feb 20 09:56:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:58.050 279640 DEBUG nova.scheduler.client.report [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Inventory has not changed for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 20 09:56:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:58.055 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Compute_service record updated for np0005625203.localdomain:np0005625203.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 20 09:56:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:58.055 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.951s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:56:58 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:56:58.258 2 INFO neutron.agent.securitygroups_rpc [None req-23120728-b18f-4316-9b23-0bff49b361e7 f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']
Feb 20 09:56:58 np0005625203.localdomain podman[322905]: 
Feb 20 09:56:58 np0005625203.localdomain podman[322905]: 2026-02-20 09:56:58.426189174 +0000 UTC m=+0.089649685 container create 55805763a19aff2914985fa6a75a2372395204a1cd7414adda02f07bcacfd5ff (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 20 09:56:58 np0005625203.localdomain systemd[1]: Started libpod-conmon-55805763a19aff2914985fa6a75a2372395204a1cd7414adda02f07bcacfd5ff.scope.
Feb 20 09:56:58 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:56:58 np0005625203.localdomain podman[322905]: 2026-02-20 09:56:58.384829424 +0000 UTC m=+0.048289975 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:56:58 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d961631a8344fd1e3e598589bd47a757d81eb32e452eff0a20877952a0b9322/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:56:58 np0005625203.localdomain podman[322905]: 2026-02-20 09:56:58.49201025 +0000 UTC m=+0.155470811 container init 55805763a19aff2914985fa6a75a2372395204a1cd7414adda02f07bcacfd5ff (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true)
Feb 20 09:56:58 np0005625203.localdomain podman[322905]: 2026-02-20 09:56:58.500381799 +0000 UTC m=+0.163842310 container start 55805763a19aff2914985fa6a75a2372395204a1cd7414adda02f07bcacfd5ff (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 20 09:56:58 np0005625203.localdomain dnsmasq[322923]: started, version 2.85 cachesize 150
Feb 20 09:56:58 np0005625203.localdomain dnsmasq[322923]: DNS service limited to local subnets
Feb 20 09:56:58 np0005625203.localdomain dnsmasq[322923]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:56:58 np0005625203.localdomain dnsmasq[322923]: warning: no upstream servers configured
Feb 20 09:56:58 np0005625203.localdomain dnsmasq-dhcp[322923]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Feb 20 09:56:58 np0005625203.localdomain dnsmasq[322923]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 0 addresses
Feb 20 09:56:58 np0005625203.localdomain dnsmasq-dhcp[322923]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/host
Feb 20 09:56:58 np0005625203.localdomain dnsmasq-dhcp[322923]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/opts
Feb 20 09:56:58 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:56:58.641 262775 INFO neutron.agent.dhcp.agent [None req-d037e862-a73b-4971-a196-45da4db261db - - - - - -] DHCP configuration for ports {'cebd560e-7047-4cc1-9642-f5b7ec377d58'} is completed
Feb 20 09:56:58 np0005625203.localdomain dnsmasq[322923]: exiting on receipt of SIGTERM
Feb 20 09:56:58 np0005625203.localdomain podman[322941]: 2026-02-20 09:56:58.84952237 +0000 UTC m=+0.059715439 container kill 55805763a19aff2914985fa6a75a2372395204a1cd7414adda02f07bcacfd5ff (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3)
Feb 20 09:56:58 np0005625203.localdomain systemd[1]: libpod-55805763a19aff2914985fa6a75a2372395204a1cd7414adda02f07bcacfd5ff.scope: Deactivated successfully.
Feb 20 09:56:58 np0005625203.localdomain podman[322956]: 2026-02-20 09:56:58.925708667 +0000 UTC m=+0.054107514 container died 55805763a19aff2914985fa6a75a2372395204a1cd7414adda02f07bcacfd5ff (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:56:58 np0005625203.localdomain podman[322956]: 2026-02-20 09:56:58.968941664 +0000 UTC m=+0.097340441 container remove 55805763a19aff2914985fa6a75a2372395204a1cd7414adda02f07bcacfd5ff (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2)
Feb 20 09:56:58 np0005625203.localdomain systemd[1]: libpod-conmon-55805763a19aff2914985fa6a75a2372395204a1cd7414adda02f07bcacfd5ff.scope: Deactivated successfully.
Feb 20 09:56:58 np0005625203.localdomain podman[240359]: time="2026-02-20T09:56:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:56:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:56:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 155894 "" "Go-http-client/1.1"
Feb 20 09:56:59 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:56:59.044 2 INFO neutron.agent.securitygroups_rpc [None req-8dad7750-521e-4bcf-b6a0-3ff2b8fade37 f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']
Feb 20 09:56:59 np0005625203.localdomain ceph-mon[296066]: pgmap v333: 177 pgs: 177 active+clean; 193 MiB data, 913 MiB used, 41 GiB / 42 GiB avail; 107 KiB/s rd, 3.6 MiB/s wr, 159 op/s
Feb 20 09:56:59 np0005625203.localdomain ceph-mon[296066]: osdmap e165: 6 total, 6 up, 6 in
Feb 20 09:56:59 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/2324158721' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:56:59 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/2859662458' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:56:59 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/2859662458' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:56:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:56:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18300 "" "Go-http-client/1.1"
Feb 20 09:56:59 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 20 09:56:59 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3349450434' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:56:59 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 20 09:56:59 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3349450434' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:56:59 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:56:59 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-2d961631a8344fd1e3e598589bd47a757d81eb32e452eff0a20877952a0b9322-merged.mount: Deactivated successfully.
Feb 20 09:56:59 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-55805763a19aff2914985fa6a75a2372395204a1cd7414adda02f07bcacfd5ff-userdata-shm.mount: Deactivated successfully.
Feb 20 09:56:59 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:56:59.674 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:56:59 np0005625203.localdomain podman[323033]: 
Feb 20 09:56:59 np0005625203.localdomain podman[323033]: 2026-02-20 09:56:59.948967105 +0000 UTC m=+0.085104334 container create 89a680cb838eb4843a435b71f5555770322b7615604ee346fd7ad0f811855e06 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 20 09:56:59 np0005625203.localdomain systemd[1]: Started libpod-conmon-89a680cb838eb4843a435b71f5555770322b7615604ee346fd7ad0f811855e06.scope.
Feb 20 09:57:00 np0005625203.localdomain systemd[1]: tmp-crun.BHKP9i.mount: Deactivated successfully.
Feb 20 09:57:00 np0005625203.localdomain podman[323033]: 2026-02-20 09:56:59.9080593 +0000 UTC m=+0.044196559 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:57:00 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:57:00 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7145fa23f6b2ab0e0a7ee8de45e9f5755665c83b7a11f593706bf5f47e3d6efc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:57:00 np0005625203.localdomain podman[323033]: 2026-02-20 09:57:00.025958287 +0000 UTC m=+0.162095526 container init 89a680cb838eb4843a435b71f5555770322b7615604ee346fd7ad0f811855e06 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Feb 20 09:57:00 np0005625203.localdomain podman[323033]: 2026-02-20 09:57:00.034855952 +0000 UTC m=+0.170993191 container start 89a680cb838eb4843a435b71f5555770322b7615604ee346fd7ad0f811855e06 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 20 09:57:00 np0005625203.localdomain dnsmasq[323051]: started, version 2.85 cachesize 150
Feb 20 09:57:00 np0005625203.localdomain dnsmasq[323051]: DNS service limited to local subnets
Feb 20 09:57:00 np0005625203.localdomain dnsmasq[323051]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:57:00 np0005625203.localdomain dnsmasq[323051]: warning: no upstream servers configured
Feb 20 09:57:00 np0005625203.localdomain dnsmasq-dhcp[323051]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Feb 20 09:57:00 np0005625203.localdomain dnsmasq[323051]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 2 addresses
Feb 20 09:57:00 np0005625203.localdomain dnsmasq-dhcp[323051]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/host
Feb 20 09:57:00 np0005625203.localdomain dnsmasq-dhcp[323051]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/opts
Feb 20 09:57:00 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/3349450434' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:57:00 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/3349450434' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:57:00 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/1602955077' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:57:00 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/2757870375' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:57:00 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/2757870375' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:57:00 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:57:00.257 262775 INFO neutron.agent.dhcp.agent [None req-e73fd647-a912-4c1d-a982-a5abdef0ed8c - - - - - -] DHCP configuration for ports {'120fd476-f6b0-46c9-be05-abf8da3b3f10', 'cebd560e-7047-4cc1-9642-f5b7ec377d58', 'de723a3d-646d-47a1-8c6f-cb3dbeb07a15'} is completed
Feb 20 09:57:00 np0005625203.localdomain dnsmasq[323051]: exiting on receipt of SIGTERM
Feb 20 09:57:00 np0005625203.localdomain podman[323069]: 2026-02-20 09:57:00.4087841 +0000 UTC m=+0.053255778 container kill 89a680cb838eb4843a435b71f5555770322b7615604ee346fd7ad0f811855e06 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:57:00 np0005625203.localdomain systemd[1]: libpod-89a680cb838eb4843a435b71f5555770322b7615604ee346fd7ad0f811855e06.scope: Deactivated successfully.
Feb 20 09:57:00 np0005625203.localdomain podman[323084]: 2026-02-20 09:57:00.483912535 +0000 UTC m=+0.058484281 container died 89a680cb838eb4843a435b71f5555770322b7615604ee346fd7ad0f811855e06 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Feb 20 09:57:00 np0005625203.localdomain systemd[1]: tmp-crun.NTC8xo.mount: Deactivated successfully.
Feb 20 09:57:00 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-89a680cb838eb4843a435b71f5555770322b7615604ee346fd7ad0f811855e06-userdata-shm.mount: Deactivated successfully.
Feb 20 09:57:00 np0005625203.localdomain podman[323084]: 2026-02-20 09:57:00.519656361 +0000 UTC m=+0.094228077 container cleanup 89a680cb838eb4843a435b71f5555770322b7615604ee346fd7ad0f811855e06 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Feb 20 09:57:00 np0005625203.localdomain systemd[1]: libpod-conmon-89a680cb838eb4843a435b71f5555770322b7615604ee346fd7ad0f811855e06.scope: Deactivated successfully.
Feb 20 09:57:00 np0005625203.localdomain podman[323085]: 2026-02-20 09:57:00.559133162 +0000 UTC m=+0.129024063 container remove 89a680cb838eb4843a435b71f5555770322b7615604ee346fd7ad0f811855e06 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 20 09:57:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:00.571 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:57:00 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:57:00Z|00355|binding|INFO|Releasing lport de723a3d-646d-47a1-8c6f-cb3dbeb07a15 from this chassis (sb_readonly=0)
Feb 20 09:57:00 np0005625203.localdomain kernel: device tapde723a3d-64 left promiscuous mode
Feb 20 09:57:00 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:57:00Z|00356|binding|INFO|Setting lport de723a3d-646d-47a1-8c6f-cb3dbeb07a15 down in Southbound
Feb 20 09:57:00 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:57:00.581 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe87:3915/64 2001:db8::2/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=de723a3d-646d-47a1-8c6f-cb3dbeb07a15) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:57:00 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:57:00.583 161112 INFO neutron.agent.ovn.metadata.agent [-] Port de723a3d-646d-47a1-8c6f-cb3dbeb07a15 in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 unbound from our chassis
Feb 20 09:57:00 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:57:00.586 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 811e2462-6872-485d-9c09-d2dd9cb25273, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:57:00 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:57:00.587 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[21b0eab5-281a-4cc1-b8b1-a9b4cc24be3c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:57:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:00.590 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:57:00 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:57:00 np0005625203.localdomain podman[323113]: 2026-02-20 09:57:00.758810489 +0000 UTC m=+0.076457566 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 20 09:57:00 np0005625203.localdomain podman[323113]: 2026-02-20 09:57:00.769285693 +0000 UTC m=+0.086932820 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:57:00 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:57:00 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 20 09:57:00 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2571226302' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:57:00 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 20 09:57:00 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2571226302' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:57:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:01.050 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:57:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:01.056 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:57:01 np0005625203.localdomain ceph-mon[296066]: pgmap v335: 177 pgs: 177 active+clean; 193 MiB data, 913 MiB used, 41 GiB / 42 GiB avail; 96 KiB/s rd, 13 KiB/s wr, 124 op/s
Feb 20 09:57:01 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "9754040c-efe1-4b31-bb60-2e1a242177cd", "snap_name": "0b99d555-2c83-4adc-81b2-c57674191232_67b12e3d-cabb-4489-8f7a-b787cf53ee59", "force": true, "format": "json"}]: dispatch
Feb 20 09:57:01 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "9754040c-efe1-4b31-bb60-2e1a242177cd", "snap_name": "0b99d555-2c83-4adc-81b2-c57674191232", "force": true, "format": "json"}]: dispatch
Feb 20 09:57:01 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/2571226302' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:57:01 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/2571226302' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:57:01 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-7145fa23f6b2ab0e0a7ee8de45e9f5755665c83b7a11f593706bf5f47e3d6efc-merged.mount: Deactivated successfully.
Feb 20 09:57:01 np0005625203.localdomain systemd[1]: run-netns-qdhcp\x2d811e2462\x2d6872\x2d485d\x2d9c09\x2dd2dd9cb25273.mount: Deactivated successfully.
Feb 20 09:57:01 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:57:01.526 262775 INFO neutron.agent.linux.ip_lib [None req-b3cdc15a-85cc-4276-b1a3-32a0f19579c5 - - - - - -] Device tapf76961cd-2c cannot be used as it has no MAC address
Feb 20 09:57:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:01.656 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:57:01 np0005625203.localdomain kernel: device tapf76961cd-2c entered promiscuous mode
Feb 20 09:57:01 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:57:01Z|00357|binding|INFO|Claiming lport f76961cd-2c6c-4d42-a0a0-e6169d2bc4bd for this chassis.
Feb 20 09:57:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:01.662 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:57:01 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581421.6632] manager: (tapf76961cd-2c): new Generic device (/org/freedesktop/NetworkManager/Devices/69)
Feb 20 09:57:01 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:57:01Z|00358|binding|INFO|f76961cd-2c6c-4d42-a0a0-e6169d2bc4bd: Claiming unknown
Feb 20 09:57:01 np0005625203.localdomain systemd-udevd[323139]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 09:57:01 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:57:01Z|00359|binding|INFO|Setting lport f76961cd-2c6c-4d42-a0a0-e6169d2bc4bd ovn-installed in OVS
Feb 20 09:57:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:01.669 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:57:01 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:57:01Z|00360|binding|INFO|Setting lport f76961cd-2c6c-4d42-a0a0-e6169d2bc4bd up in Southbound
Feb 20 09:57:01 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:57:01.673 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::2/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=f76961cd-2c6c-4d42-a0a0-e6169d2bc4bd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:57:01 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:57:01.674 161112 INFO neutron.agent.ovn.metadata.agent [-] Port f76961cd-2c6c-4d42-a0a0-e6169d2bc4bd in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 bound to our chassis
Feb 20 09:57:01 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:57:01.677 161112 DEBUG neutron.agent.ovn.metadata.agent [-] Port 7779d931-076e-4896-a03f-a7cadd32d2b0 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Feb 20 09:57:01 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:57:01.677 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 811e2462-6872-485d-9c09-d2dd9cb25273, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:57:01 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:57:01.678 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[d85b34e5-7965-4964-92da-7135707d1791]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:57:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:01.708 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:57:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:01.750 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:57:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:01.820 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:57:02 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:57:02.230 161112 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1e4d60e6-0be0-4143-b488-1b391fbc71ef, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 20 09:57:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:02.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:57:02 np0005625203.localdomain podman[323195]: 
Feb 20 09:57:02 np0005625203.localdomain podman[323195]: 2026-02-20 09:57:02.72893136 +0000 UTC m=+0.090376407 container create e57ec7b0b974d40fc4b29634f719d81952b888e7afcf6161ae8810e5c8e969b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:57:02 np0005625203.localdomain systemd[1]: Started libpod-conmon-e57ec7b0b974d40fc4b29634f719d81952b888e7afcf6161ae8810e5c8e969b4.scope.
Feb 20 09:57:02 np0005625203.localdomain systemd[1]: tmp-crun.DE1rEJ.mount: Deactivated successfully.
Feb 20 09:57:02 np0005625203.localdomain podman[323195]: 2026-02-20 09:57:02.688375896 +0000 UTC m=+0.049820953 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:57:02 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:57:02 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0593b8b6a3dd78935ac4f0da746ebacf029d84f2332d41556b5c14c3d6c8ccd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:57:02 np0005625203.localdomain podman[323195]: 2026-02-20 09:57:02.81033793 +0000 UTC m=+0.171782977 container init e57ec7b0b974d40fc4b29634f719d81952b888e7afcf6161ae8810e5c8e969b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Feb 20 09:57:02 np0005625203.localdomain podman[323195]: 2026-02-20 09:57:02.820819423 +0000 UTC m=+0.182264470 container start e57ec7b0b974d40fc4b29634f719d81952b888e7afcf6161ae8810e5c8e969b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:57:02 np0005625203.localdomain dnsmasq[323213]: started, version 2.85 cachesize 150
Feb 20 09:57:02 np0005625203.localdomain dnsmasq[323213]: DNS service limited to local subnets
Feb 20 09:57:02 np0005625203.localdomain dnsmasq[323213]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:57:02 np0005625203.localdomain dnsmasq[323213]: warning: no upstream servers configured
Feb 20 09:57:02 np0005625203.localdomain dnsmasq-dhcp[323213]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Feb 20 09:57:02 np0005625203.localdomain dnsmasq[323213]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 0 addresses
Feb 20 09:57:02 np0005625203.localdomain dnsmasq-dhcp[323213]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/host
Feb 20 09:57:02 np0005625203.localdomain dnsmasq-dhcp[323213]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/opts
Feb 20 09:57:02 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e166 e166: 6 total, 6 up, 6 in
Feb 20 09:57:02 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:57:02.887 262775 INFO neutron.agent.dhcp.agent [None req-b3cdc15a-85cc-4276-b1a3-32a0f19579c5 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:56:54Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4ed9940>, <neutron.agent.linux.dhcp.DictModel object at 0x7f7da4ed9c40>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4ed9880>, <neutron.agent.linux.dhcp.DictModel object at 0x7f7da505b970>], id=e4ed60b9-4a26-4dbb-853a-788a274ef8d1, ip_allocation=immediate, mac_address=fa:16:3e:17:6f:d7, name=tempest-NetworksTestDHCPv6-1923740238, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:54:44Z, description=, dns_domain=, id=811e2462-6872-485d-9c09-d2dd9cb25273, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-610089291, port_security_enabled=True, project_id=cb36e48ce4264babb412d413a8bf7b9f, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=47373, qos_policy_id=None, revision_number=53, router:external=False, shared=False, standard_attr_id=1864, status=ACTIVE, subnets=['e4176b30-58d9-406b-9af2-9e5325c11e48', 'e6cac300-8229-405a-9ee2-81f1f9ad889b'], tags=[], tenant_id=cb36e48ce4264babb412d413a8bf7b9f, updated_at=2026-02-20T09:56:54Z, vlan_transparent=None, network_id=811e2462-6872-485d-9c09-d2dd9cb25273, port_security_enabled=True, project_id=cb36e48ce4264babb412d413a8bf7b9f, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['a85f25c3-88e2-4d71-a8d4-72a266c1246c'], standard_attr_id=2403, status=DOWN, tags=[], tenant_id=cb36e48ce4264babb412d413a8bf7b9f, updated_at=2026-02-20T09:56:54Z on network 811e2462-6872-485d-9c09-d2dd9cb25273
Feb 20 09:57:03 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:57:03.036 262775 INFO neutron.agent.dhcp.agent [None req-a5a57bbc-3ff3-46ca-893e-98e49494b2d8 - - - - - -] DHCP configuration for ports {'cebd560e-7047-4cc1-9642-f5b7ec377d58'} is completed
Feb 20 09:57:03 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:57:03.040 2 INFO neutron.agent.securitygroups_rpc [None req-f7f4080a-576f-4ec0-afc4-2b369a4e24bc 90a02ec8973644daaf9f628e26b82aba 68587c4c15964f28ad6d155288e119b0 - - default default] Security group rule updated ['602964d2-c9d4-4795-879d-2f4697b07a9a']
Feb 20 09:57:03 np0005625203.localdomain dnsmasq[323213]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 2 addresses
Feb 20 09:57:03 np0005625203.localdomain dnsmasq-dhcp[323213]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/host
Feb 20 09:57:03 np0005625203.localdomain dnsmasq-dhcp[323213]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/opts
Feb 20 09:57:03 np0005625203.localdomain podman[323231]: 2026-02-20 09:57:03.060994403 +0000 UTC m=+0.054045182 container kill e57ec7b0b974d40fc4b29634f719d81952b888e7afcf6161ae8810e5c8e969b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:57:03 np0005625203.localdomain ceph-mon[296066]: pgmap v336: 177 pgs: 177 active+clean; 193 MiB data, 917 MiB used, 41 GiB / 42 GiB avail; 148 KiB/s rd, 14 KiB/s wr, 199 op/s
Feb 20 09:57:03 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/3998727060' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:57:03 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/3998727060' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:57:03 np0005625203.localdomain ceph-mon[296066]: osdmap e166: 6 total, 6 up, 6 in
Feb 20 09:57:03 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:57:03.212 262775 INFO neutron.agent.dhcp.agent [None req-b3cdc15a-85cc-4276-b1a3-32a0f19579c5 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:56:57Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4c790a0>, <neutron.agent.linux.dhcp.DictModel object at 0x7f7da4c79df0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4c799a0>, <neutron.agent.linux.dhcp.DictModel object at 0x7f7da4c79940>], id=120fd476-f6b0-46c9-be05-abf8da3b3f10, ip_allocation=immediate, mac_address=fa:16:3e:92:24:62, name=tempest-NetworksTestDHCPv6-1836661373, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:54:44Z, description=, dns_domain=, id=811e2462-6872-485d-9c09-d2dd9cb25273, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-NetworksTestDHCPv6-test-network-610089291, port_security_enabled=True, project_id=cb36e48ce4264babb412d413a8bf7b9f, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=47373, qos_policy_id=None, revision_number=57, router:external=False, shared=False, standard_attr_id=1864, status=ACTIVE, subnets=['134c605c-1177-417c-a7db-fdb3317a1a5f', '523f797e-0240-405d-be7d-6f1c2d67a1b2'], tags=[], tenant_id=cb36e48ce4264babb412d413a8bf7b9f, updated_at=2026-02-20T09:56:56Z, vlan_transparent=None, network_id=811e2462-6872-485d-9c09-d2dd9cb25273, port_security_enabled=True, project_id=cb36e48ce4264babb412d413a8bf7b9f, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['a85f25c3-88e2-4d71-a8d4-72a266c1246c'], standard_attr_id=2412, status=DOWN, tags=[], tenant_id=cb36e48ce4264babb412d413a8bf7b9f, updated_at=2026-02-20T09:56:57Z on network 811e2462-6872-485d-9c09-d2dd9cb25273
Feb 20 09:57:03 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:57:03.255 262775 INFO neutron.agent.dhcp.agent [None req-11050e43-aec4-4aca-a4b0-7c1c4a64edc9 - - - - - -] DHCP configuration for ports {'e4ed60b9-4a26-4dbb-853a-788a274ef8d1'} is completed
Feb 20 09:57:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:03.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:57:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:03.342 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 20 09:57:03 np0005625203.localdomain podman[323270]: 2026-02-20 09:57:03.391907601 +0000 UTC m=+0.059958225 container kill e57ec7b0b974d40fc4b29634f719d81952b888e7afcf6161ae8810e5c8e969b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:57:03 np0005625203.localdomain dnsmasq[323213]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 4 addresses
Feb 20 09:57:03 np0005625203.localdomain dnsmasq-dhcp[323213]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/host
Feb 20 09:57:03 np0005625203.localdomain dnsmasq-dhcp[323213]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/opts
Feb 20 09:57:03 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:57:03.578 262775 INFO neutron.agent.dhcp.agent [None req-508f54ac-e0fe-415f-8cc3-7841486a05f4 - - - - - -] DHCP configuration for ports {'120fd476-f6b0-46c9-be05-abf8da3b3f10'} is completed
Feb 20 09:57:03 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:57:03.612 2 INFO neutron.agent.securitygroups_rpc [None req-7415ebd2-08fb-4812-9524-708fe60e5aaa 469c956856df4f7fa0303f662df3cdef 62dd1d7e0cf547678612304aba2895e2 - - default default] Security group member updated ['efc53d5c-88f6-4ec9-8815-9d765811b12e']
Feb 20 09:57:03 np0005625203.localdomain dnsmasq[323213]: exiting on receipt of SIGTERM
Feb 20 09:57:03 np0005625203.localdomain podman[323308]: 2026-02-20 09:57:03.810000766 +0000 UTC m=+0.058976216 container kill e57ec7b0b974d40fc4b29634f719d81952b888e7afcf6161ae8810e5c8e969b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Feb 20 09:57:03 np0005625203.localdomain systemd[1]: libpod-e57ec7b0b974d40fc4b29634f719d81952b888e7afcf6161ae8810e5c8e969b4.scope: Deactivated successfully.
Feb 20 09:57:03 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e167 e167: 6 total, 6 up, 6 in
Feb 20 09:57:03 np0005625203.localdomain podman[323323]: 2026-02-20 09:57:03.888240056 +0000 UTC m=+0.057178729 container died e57ec7b0b974d40fc4b29634f719d81952b888e7afcf6161ae8810e5c8e969b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 20 09:57:03 np0005625203.localdomain systemd[1]: tmp-crun.2NqsNY.mount: Deactivated successfully.
Feb 20 09:57:03 np0005625203.localdomain podman[323323]: 2026-02-20 09:57:03.947341245 +0000 UTC m=+0.116279888 container remove e57ec7b0b974d40fc4b29634f719d81952b888e7afcf6161ae8810e5c8e969b4 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:57:03 np0005625203.localdomain systemd[1]: libpod-conmon-e57ec7b0b974d40fc4b29634f719d81952b888e7afcf6161ae8810e5c8e969b4.scope: Deactivated successfully.
Feb 20 09:57:03 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:57:03.994 2 INFO neutron.agent.securitygroups_rpc [None req-f1e881f7-c612-45f2-b27b-1f5d8fe2e21f f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']
Feb 20 09:57:04 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:57:04.441 2 INFO neutron.agent.securitygroups_rpc [None req-0e7411d4-9e8e-44df-aee4-9bd8dc94f75f f8f7e83376364aed92a95cbecc4fe358 cb36e48ce4264babb412d413a8bf7b9f - - default default] Security group member updated ['a85f25c3-88e2-4d71-a8d4-72a266c1246c']
Feb 20 09:57:04 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-d0593b8b6a3dd78935ac4f0da746ebacf029d84f2332d41556b5c14c3d6c8ccd-merged.mount: Deactivated successfully.
Feb 20 09:57:04 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e57ec7b0b974d40fc4b29634f719d81952b888e7afcf6161ae8810e5c8e969b4-userdata-shm.mount: Deactivated successfully.
Feb 20 09:57:05 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:05.016 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:57:05 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:57:05 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:05.019 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:57:05 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:05.020 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:57:05 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "9754040c-efe1-4b31-bb60-2e1a242177cd", "snap_name": "2b637ee9-db13-447d-b623-b978babd5cfe_82c45f12-897c-4165-a7b7-4039d7d47e93", "force": true, "format": "json"}]: dispatch
Feb 20 09:57:05 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "9754040c-efe1-4b31-bb60-2e1a242177cd", "snap_name": "2b637ee9-db13-447d-b623-b978babd5cfe", "force": true, "format": "json"}]: dispatch
Feb 20 09:57:05 np0005625203.localdomain ceph-mon[296066]: osdmap e167: 6 total, 6 up, 6 in
Feb 20 09:57:05 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:57:05.189 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:6d:09 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-039b20b8-16a8-495e-968a-63fcd66a566c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-039b20b8-16a8-495e-968a-63fcd66a566c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '62dd1d7e0cf547678612304aba2895e2', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ec4f10d3-21c0-4f32-97fb-37b6b004b601, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=3568ed7b-9263-43af-b4fd-ae333afc9a3b) old=Port_Binding(mac=['fa:16:3e:68:6d:09 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-039b20b8-16a8-495e-968a-63fcd66a566c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-039b20b8-16a8-495e-968a-63fcd66a566c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '62dd1d7e0cf547678612304aba2895e2', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:57:05 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:57:05.192 161112 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 3568ed7b-9263-43af-b4fd-ae333afc9a3b in datapath 039b20b8-16a8-495e-968a-63fcd66a566c updated
Feb 20 09:57:05 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:57:05.195 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 039b20b8-16a8-495e-968a-63fcd66a566c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:57:05 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:57:05.196 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[a287e351-b069-4a96-aae4-73efc14be6f4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:57:05 np0005625203.localdomain podman[323400]: 
Feb 20 09:57:05 np0005625203.localdomain podman[323400]: 2026-02-20 09:57:05.810317791 +0000 UTC m=+0.088439366 container create 914173931e4b6c02774cbd95b0d67605227cde07a2b467762eb0a30014247af0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:57:05 np0005625203.localdomain systemd[1]: Started libpod-conmon-914173931e4b6c02774cbd95b0d67605227cde07a2b467762eb0a30014247af0.scope.
Feb 20 09:57:05 np0005625203.localdomain podman[323400]: 2026-02-20 09:57:05.770453438 +0000 UTC m=+0.048575063 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:57:05 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:57:05 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d044d1ec1d48322c295bf803d53f44f45572890a909e54eca555acdb64ae1d3f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:57:05 np0005625203.localdomain podman[323400]: 2026-02-20 09:57:05.894029501 +0000 UTC m=+0.172151066 container init 914173931e4b6c02774cbd95b0d67605227cde07a2b467762eb0a30014247af0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Feb 20 09:57:05 np0005625203.localdomain podman[323400]: 2026-02-20 09:57:05.902976339 +0000 UTC m=+0.181097974 container start 914173931e4b6c02774cbd95b0d67605227cde07a2b467762eb0a30014247af0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:57:05 np0005625203.localdomain dnsmasq[323419]: started, version 2.85 cachesize 150
Feb 20 09:57:05 np0005625203.localdomain dnsmasq[323419]: DNS service limited to local subnets
Feb 20 09:57:05 np0005625203.localdomain dnsmasq[323419]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:57:05 np0005625203.localdomain dnsmasq[323419]: warning: no upstream servers configured
Feb 20 09:57:05 np0005625203.localdomain dnsmasq-dhcp[323419]: DHCPv6, static leases only on 2001:db8::, lease time 1d
Feb 20 09:57:05 np0005625203.localdomain dnsmasq-dhcp[323419]: DHCPv6, static leases only on 2001:db8:0:1::, lease time 1d
Feb 20 09:57:05 np0005625203.localdomain dnsmasq[323419]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/addn_hosts - 0 addresses
Feb 20 09:57:05 np0005625203.localdomain dnsmasq-dhcp[323419]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/host
Feb 20 09:57:05 np0005625203.localdomain dnsmasq-dhcp[323419]: read /var/lib/neutron/dhcp/811e2462-6872-485d-9c09-d2dd9cb25273/opts
Feb 20 09:57:05 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:57:05.914 2 INFO neutron.agent.securitygroups_rpc [None req-e5b5b6c2-ec9f-4e00-8939-9097e06787f1 469c956856df4f7fa0303f662df3cdef 62dd1d7e0cf547678612304aba2895e2 - - default default] Security group member updated ['c1686fb3-a7b5-4191-9c5e-7c249c4e6c3c', 'efc53d5c-88f6-4ec9-8815-9d765811b12e']
Feb 20 09:57:06 np0005625203.localdomain ceph-mon[296066]: pgmap v338: 177 pgs: 177 active+clean; 193 MiB data, 917 MiB used, 41 GiB / 42 GiB avail; 136 KiB/s rd, 13 KiB/s wr, 183 op/s
Feb 20 09:57:06 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/2301409396' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:57:06 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/4294664811' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:57:06 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:06.053 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:57:06 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 09:57:06 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:57:06 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:57:06.137 262775 INFO neutron.agent.dhcp.agent [None req-68dd4d50-f7ad-4636-a3a0-3a753bd2b8f1 - - - - - -] DHCP configuration for ports {'cebd560e-7047-4cc1-9642-f5b7ec377d58', 'f76961cd-2c6c-4d42-a0a0-e6169d2bc4bd'} is completed
Feb 20 09:57:06 np0005625203.localdomain dnsmasq[323419]: exiting on receipt of SIGTERM
Feb 20 09:57:06 np0005625203.localdomain podman[323436]: 2026-02-20 09:57:06.251294805 +0000 UTC m=+0.061991479 container kill 914173931e4b6c02774cbd95b0d67605227cde07a2b467762eb0a30014247af0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:57:06 np0005625203.localdomain systemd[1]: libpod-914173931e4b6c02774cbd95b0d67605227cde07a2b467762eb0a30014247af0.scope: Deactivated successfully.
Feb 20 09:57:06 np0005625203.localdomain podman[323451]: 2026-02-20 09:57:06.321131335 +0000 UTC m=+0.053841616 container died 914173931e4b6c02774cbd95b0d67605227cde07a2b467762eb0a30014247af0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:57:06 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:06.342 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:57:06 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:06.343 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 20 09:57:06 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:06.344 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 20 09:57:06 np0005625203.localdomain podman[323451]: 2026-02-20 09:57:06.35133756 +0000 UTC m=+0.084047801 container cleanup 914173931e4b6c02774cbd95b0d67605227cde07a2b467762eb0a30014247af0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3)
Feb 20 09:57:06 np0005625203.localdomain systemd[1]: libpod-conmon-914173931e4b6c02774cbd95b0d67605227cde07a2b467762eb0a30014247af0.scope: Deactivated successfully.
Feb 20 09:57:06 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:06.371 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 20 09:57:06 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:57:06.387 2 INFO neutron.agent.securitygroups_rpc [None req-15e35e99-a929-4578-aaaf-c6b96452307f 469c956856df4f7fa0303f662df3cdef 62dd1d7e0cf547678612304aba2895e2 - - default default] Security group member updated ['c1686fb3-a7b5-4191-9c5e-7c249c4e6c3c']
Feb 20 09:57:06 np0005625203.localdomain podman[323452]: 2026-02-20 09:57:06.408999403 +0000 UTC m=+0.136876255 container remove 914173931e4b6c02774cbd95b0d67605227cde07a2b467762eb0a30014247af0 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-811e2462-6872-485d-9c09-d2dd9cb25273, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Feb 20 09:57:06 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:06.472 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:57:06 np0005625203.localdomain kernel: device tapf76961cd-2c left promiscuous mode
Feb 20 09:57:06 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:57:06Z|00361|binding|INFO|Releasing lport f76961cd-2c6c-4d42-a0a0-e6169d2bc4bd from this chassis (sb_readonly=0)
Feb 20 09:57:06 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:57:06Z|00362|binding|INFO|Setting lport f76961cd-2c6c-4d42-a0a0-e6169d2bc4bd down in Southbound
Feb 20 09:57:06 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:57:06.483 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:feba:cd59/64 2001:db8::2/64', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-811e2462-6872-485d-9c09-d2dd9cb25273', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb36e48ce4264babb412d413a8bf7b9f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625203.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f57d4cee-545f-46ed-8eee-c1b976528137, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=f76961cd-2c6c-4d42-a0a0-e6169d2bc4bd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:57:06 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:57:06.485 161112 INFO neutron.agent.ovn.metadata.agent [-] Port f76961cd-2c6c-4d42-a0a0-e6169d2bc4bd in datapath 811e2462-6872-485d-9c09-d2dd9cb25273 unbound from our chassis
Feb 20 09:57:06 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:57:06.488 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 811e2462-6872-485d-9c09-d2dd9cb25273, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:57:06 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:57:06.489 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[1ca115b8-1e96-4010-8e8c-d7f79aa010d7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:57:06 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:06.494 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:57:06 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:57:06.749 262775 INFO neutron.agent.dhcp.agent [None req-d95441ce-b62a-4532-8986-971ad239ec1a - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:57:06 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:57:06.749 262775 INFO neutron.agent.dhcp.agent [None req-d95441ce-b62a-4532-8986-971ad239ec1a - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:57:06 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:57:06.750 262775 INFO neutron.agent.dhcp.agent [None req-d95441ce-b62a-4532-8986-971ad239ec1a - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:57:06 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:57:06.750 262775 INFO neutron.agent.dhcp.agent [None req-d95441ce-b62a-4532-8986-971ad239ec1a - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:57:06 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:57:06.750 262775 INFO neutron.agent.dhcp.agent [None req-d95441ce-b62a-4532-8986-971ad239ec1a - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:57:06 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:57:06.751 262775 INFO neutron.agent.dhcp.agent [None req-d95441ce-b62a-4532-8986-971ad239ec1a - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:57:06 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-d044d1ec1d48322c295bf803d53f44f45572890a909e54eca555acdb64ae1d3f-merged.mount: Deactivated successfully.
Feb 20 09:57:06 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-914173931e4b6c02774cbd95b0d67605227cde07a2b467762eb0a30014247af0-userdata-shm.mount: Deactivated successfully.
Feb 20 09:57:06 np0005625203.localdomain systemd[1]: run-netns-qdhcp\x2d811e2462\x2d6872\x2d485d\x2d9c09\x2dd2dd9cb25273.mount: Deactivated successfully.
Feb 20 09:57:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:57:07 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 09:57:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:57:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:57:07 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 09:57:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:57:07 np0005625203.localdomain ceph-mon[296066]: pgmap v340: 177 pgs: 177 active+clean; 193 MiB data, 902 MiB used, 41 GiB / 42 GiB avail; 139 KiB/s rd, 51 KiB/s wr, 190 op/s
Feb 20 09:57:07 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "56e65ef4-1a61-4d5a-9d2e-c59d2323fd6c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 09:57:07 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "56e65ef4-1a61-4d5a-9d2e-c59d2323fd6c", "format": "json"}]: dispatch
Feb 20 09:57:07 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:57:07 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e168 e168: 6 total, 6 up, 6 in
Feb 20 09:57:07 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:07.367 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:57:07 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 09:57:07 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:57:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:57:07.670 161112 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:57:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:57:07.671 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:57:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:57:07.671 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:57:07 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:57:07 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:57:07.699 262775 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:57:07 np0005625203.localdomain systemd[1]: tmp-crun.Yy6OJa.mount: Deactivated successfully.
Feb 20 09:57:07 np0005625203.localdomain podman[323480]: 2026-02-20 09:57:07.773175587 +0000 UTC m=+0.087524369 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Feb 20 09:57:07 np0005625203.localdomain podman[323480]: 2026-02-20 09:57:07.839648324 +0000 UTC m=+0.153997086 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:57:07 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:57:07 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:07.978 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:57:08 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "9754040c-efe1-4b31-bb60-2e1a242177cd", "snap_name": "f96ea30b-5993-4393-8f64-efd08707fd5f_ff9c641b-ea0b-431f-af91-e17e9c0dd44a", "force": true, "format": "json"}]: dispatch
Feb 20 09:57:08 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "9754040c-efe1-4b31-bb60-2e1a242177cd", "snap_name": "f96ea30b-5993-4393-8f64-efd08707fd5f", "force": true, "format": "json"}]: dispatch
Feb 20 09:57:08 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/2989862322' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:57:08 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/2989862322' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:57:08 np0005625203.localdomain ceph-mon[296066]: osdmap e168: 6 total, 6 up, 6 in
Feb 20 09:57:08 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:57:08 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e169 e169: 6 total, 6 up, 6 in
Feb 20 09:57:09 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "397e3bc9-4e24-4148-9154-b1a0a65a1d98", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 09:57:09 np0005625203.localdomain ceph-mon[296066]: pgmap v342: 177 pgs: 177 active+clean; 193 MiB data, 902 MiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 50 KiB/s wr, 37 op/s
Feb 20 09:57:09 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "397e3bc9-4e24-4148-9154-b1a0a65a1d98", "format": "json"}]: dispatch
Feb 20 09:57:09 np0005625203.localdomain ceph-mon[296066]: osdmap e169: 6 total, 6 up, 6 in
Feb 20 09:57:09 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/1936621399' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:57:09 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/1936621399' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:57:09 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:57:09.636 2 INFO neutron.agent.securitygroups_rpc [None req-d187057a-39e5-4c52-82a0-d1bcafd46a90 469c956856df4f7fa0303f662df3cdef 62dd1d7e0cf547678612304aba2895e2 - - default default] Security group member updated ['46d5d21d-63a5-4d3d-a013-7b21b89cdba7']
Feb 20 09:57:09 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:57:09 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:57:09 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:09.684 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:57:09 np0005625203.localdomain podman[323506]: 2026-02-20 09:57:09.771047835 +0000 UTC m=+0.078743427 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, managed_by=edpm_ansible, org.opencontainers.image.created=2026-02-05T04:57:10Z, version=9.7, name=ubi9/ubi-minimal, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, build-date=2026-02-05T04:57:10Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=ubi9-minimal-container, architecture=x86_64, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347)
Feb 20 09:57:09 np0005625203.localdomain sshd[323534]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:57:09 np0005625203.localdomain podman[323506]: 2026-02-20 09:57:09.814287613 +0000 UTC m=+0.121983205 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, config_id=openstack_network_exporter, io.buildah.version=1.33.7, org.opencontainers.image.created=2026-02-05T04:57:10Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, version=9.7, vcs-type=git, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, name=ubi9/ubi-minimal, build-date=2026-02-05T04:57:10Z, distribution-scope=public, 
org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers)
Feb 20 09:57:09 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:57:09 np0005625203.localdomain podman[323505]: 2026-02-20 09:57:09.83328145 +0000 UTC m=+0.142909052 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 20 09:57:09 np0005625203.localdomain podman[323505]: 2026-02-20 09:57:09.847276183 +0000 UTC m=+0.156903785 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 20 09:57:09 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:57:10 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:57:10 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "56e65ef4-1a61-4d5a-9d2e-c59d2323fd6c", "snap_name": "faa1b419-44e8-41e3-a207-23a7805bf4c9", "format": "json"}]: dispatch
Feb 20 09:57:10 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/3137678042' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:57:10 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/3137678042' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:57:10 np0005625203.localdomain sshd[323534]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:57:10 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:57:10.957 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:6d:09 10.100.0.18 10.100.0.2 10.100.0.34'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28 10.100.0.34/28', 'neutron:device_id': 'ovnmeta-039b20b8-16a8-495e-968a-63fcd66a566c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-039b20b8-16a8-495e-968a-63fcd66a566c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '62dd1d7e0cf547678612304aba2895e2', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ec4f10d3-21c0-4f32-97fb-37b6b004b601, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=3568ed7b-9263-43af-b4fd-ae333afc9a3b) old=Port_Binding(mac=['fa:16:3e:68:6d:09 10.100.0.18 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-039b20b8-16a8-495e-968a-63fcd66a566c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-039b20b8-16a8-495e-968a-63fcd66a566c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '62dd1d7e0cf547678612304aba2895e2', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:57:10 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:57:10.959 161112 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 3568ed7b-9263-43af-b4fd-ae333afc9a3b in datapath 039b20b8-16a8-495e-968a-63fcd66a566c updated
Feb 20 09:57:10 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:57:10.962 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 039b20b8-16a8-495e-968a-63fcd66a566c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:57:10 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:57:10.963 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[f895c01f-904d-40ef-9058-f9434adcebb7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:57:11 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:11.057 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:57:11 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "9754040c-efe1-4b31-bb60-2e1a242177cd", "snap_name": "877bc86e-3ae3-4788-a4cd-ecc7ec5a94e2_1280651e-fbca-4fea-b6ea-1b1c294293df", "force": true, "format": "json"}]: dispatch
Feb 20 09:57:11 np0005625203.localdomain ceph-mon[296066]: pgmap v344: 177 pgs: 177 active+clean; 193 MiB data, 903 MiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 88 KiB/s wr, 42 op/s
Feb 20 09:57:11 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "9754040c-efe1-4b31-bb60-2e1a242177cd", "snap_name": "877bc86e-3ae3-4788-a4cd-ecc7ec5a94e2", "force": true, "format": "json"}]: dispatch
Feb 20 09:57:11 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch
Feb 20 09:57:11 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/397e3bc9-4e24-4148-9154-b1a0a65a1d98/2bd5db79-8b70-47ad-aae0-ee8531062739", "osd", "allow rw pool=manila_data namespace=fsvolumens_397e3bc9-4e24-4148-9154-b1a0a65a1d98", "mon", "allow r"], "format": "json"} : dispatch
Feb 20 09:57:11 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve49", "caps": ["mds", "allow rw path=/volumes/_nogroup/397e3bc9-4e24-4148-9154-b1a0a65a1d98/2bd5db79-8b70-47ad-aae0-ee8531062739", "osd", "allow rw pool=manila_data namespace=fsvolumens_397e3bc9-4e24-4148-9154-b1a0a65a1d98", "mon", "allow r"], "format": "json"}]': finished
Feb 20 09:57:12 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:57:12.025 2 INFO neutron.agent.securitygroups_rpc [None req-6ce6cc4b-c215-41b7-8af5-e062eb4d8872 469c956856df4f7fa0303f662df3cdef 62dd1d7e0cf547678612304aba2895e2 - - default default] Security group member updated ['b7f2b362-1261-45d0-afca-4d7d4dc43da1', '66935af6-6884-4649-9f3d-6c32279f86ee', '46d5d21d-63a5-4d3d-a013-7b21b89cdba7']
Feb 20 09:57:12 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e170 e170: 6 total, 6 up, 6 in
Feb 20 09:57:12 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "397e3bc9-4e24-4148-9154-b1a0a65a1d98", "auth_id": "eve49", "tenant_id": "1401fb23701440858ed7175cc4dba63b", "access_level": "rw", "format": "json"}]: dispatch
Feb 20 09:57:12 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:57:12.596 2 INFO neutron.agent.securitygroups_rpc [None req-9d686e86-35e8-431c-8fc4-b6265d5fa0d0 469c956856df4f7fa0303f662df3cdef 62dd1d7e0cf547678612304aba2895e2 - - default default] Security group member updated ['b7f2b362-1261-45d0-afca-4d7d4dc43da1', '66935af6-6884-4649-9f3d-6c32279f86ee']
Feb 20 09:57:12 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e171 e171: 6 total, 6 up, 6 in
Feb 20 09:57:13 np0005625203.localdomain ceph-mon[296066]: pgmap v345: 177 pgs: 177 active+clean; 193 MiB data, 903 MiB used, 41 GiB / 42 GiB avail; 53 KiB/s rd, 72 KiB/s wr, 80 op/s
Feb 20 09:57:13 np0005625203.localdomain ceph-mon[296066]: osdmap e170: 6 total, 6 up, 6 in
Feb 20 09:57:13 np0005625203.localdomain ceph-mon[296066]: osdmap e171: 6 total, 6 up, 6 in
Feb 20 09:57:13 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:57:13.798 262775 INFO neutron.agent.linux.ip_lib [None req-58b4eaec-85e5-4f50-a725-232676af9cb6 - - - - - -] Device tap95594ce0-59 cannot be used as it has no MAC address
Feb 20 09:57:13 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:13.867 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:57:13 np0005625203.localdomain kernel: device tap95594ce0-59 entered promiscuous mode
Feb 20 09:57:13 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581433.8756] manager: (tap95594ce0-59): new Generic device (/org/freedesktop/NetworkManager/Devices/70)
Feb 20 09:57:13 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:57:13Z|00363|binding|INFO|Claiming lport 95594ce0-595d-4597-a33f-9d4324a3042e for this chassis.
Feb 20 09:57:13 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:57:13Z|00364|binding|INFO|95594ce0-595d-4597-a33f-9d4324a3042e: Claiming unknown
Feb 20 09:57:13 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:13.875 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:57:13 np0005625203.localdomain systemd-udevd[323556]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 09:57:13 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:57:13.885 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-a8cdd780-5302-40de-bd30-c04829d3000e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a8cdd780-5302-40de-bd30-c04829d3000e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4bc7f22347de4004b73776eab4064bd0', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fdc71d21-1cdf-477c-8421-e1f9461f4a7c, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=95594ce0-595d-4597-a33f-9d4324a3042e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:57:13 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:57:13.886 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 95594ce0-595d-4597-a33f-9d4324a3042e in datapath a8cdd780-5302-40de-bd30-c04829d3000e bound to our chassis
Feb 20 09:57:13 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:57:13.887 161112 DEBUG neutron.agent.ovn.metadata.agent [-] Port 4597826e-444e-402c-b494-b0348428d778 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Feb 20 09:57:13 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:57:13.888 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a8cdd780-5302-40de-bd30-c04829d3000e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:57:13 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:57:13.889 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[f0b45fa3-bf25-4aa4-b510-2e420be07e49]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:57:13 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap95594ce0-59: No such device
Feb 20 09:57:13 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:13.909 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:57:13 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:57:13Z|00365|binding|INFO|Setting lport 95594ce0-595d-4597-a33f-9d4324a3042e ovn-installed in OVS
Feb 20 09:57:13 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:57:13Z|00366|binding|INFO|Setting lport 95594ce0-595d-4597-a33f-9d4324a3042e up in Southbound
Feb 20 09:57:13 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap95594ce0-59: No such device
Feb 20 09:57:13 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:13.911 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:57:13 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:13.913 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:57:13 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap95594ce0-59: No such device
Feb 20 09:57:13 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap95594ce0-59: No such device
Feb 20 09:57:13 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap95594ce0-59: No such device
Feb 20 09:57:13 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap95594ce0-59: No such device
Feb 20 09:57:13 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap95594ce0-59: No such device
Feb 20 09:57:13 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap95594ce0-59: No such device
Feb 20 09:57:13 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:13.951 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:57:13 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:13.973 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:57:14 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "9754040c-efe1-4b31-bb60-2e1a242177cd", "snap_name": "8b7ef8b3-43af-41f4-8a24-ef7ee4fe0934_5c6fcb89-e7b9-4769-a40e-771ef85df0e8", "force": true, "format": "json"}]: dispatch
Feb 20 09:57:14 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "9754040c-efe1-4b31-bb60-2e1a242177cd", "snap_name": "8b7ef8b3-43af-41f4-8a24-ef7ee4fe0934", "force": true, "format": "json"}]: dispatch
Feb 20 09:57:14 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "56e65ef4-1a61-4d5a-9d2e-c59d2323fd6c", "snap_name": "faa1b419-44e8-41e3-a207-23a7805bf4c9_428b559d-5ff8-4ee5-af31-0b6d83df877e", "force": true, "format": "json"}]: dispatch
Feb 20 09:57:14 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "56e65ef4-1a61-4d5a-9d2e-c59d2323fd6c", "snap_name": "faa1b419-44e8-41e3-a207-23a7805bf4c9", "force": true, "format": "json"}]: dispatch
Feb 20 09:57:14 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:14.684 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:57:14 np0005625203.localdomain podman[323627]: 
Feb 20 09:57:14 np0005625203.localdomain podman[323627]: 2026-02-20 09:57:14.944240171 +0000 UTC m=+0.083381471 container create e463b5472efe1b015d154c0804cf5afafe3de98227426b5003f32050e92eca14 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a8cdd780-5302-40de-bd30-c04829d3000e, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 20 09:57:14 np0005625203.localdomain systemd[1]: Started libpod-conmon-e463b5472efe1b015d154c0804cf5afafe3de98227426b5003f32050e92eca14.scope.
Feb 20 09:57:14 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:57:14 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c42e6c640f9802d0b748dce1bd268334e49016319e9001418078696d79f94040/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:57:15 np0005625203.localdomain podman[323627]: 2026-02-20 09:57:14.908797254 +0000 UTC m=+0.047938564 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:57:15 np0005625203.localdomain podman[323627]: 2026-02-20 09:57:15.009941313 +0000 UTC m=+0.149082603 container init e463b5472efe1b015d154c0804cf5afafe3de98227426b5003f32050e92eca14 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a8cdd780-5302-40de-bd30-c04829d3000e, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:57:15 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:57:15 np0005625203.localdomain podman[323627]: 2026-02-20 09:57:15.021722149 +0000 UTC m=+0.160863439 container start e463b5472efe1b015d154c0804cf5afafe3de98227426b5003f32050e92eca14 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a8cdd780-5302-40de-bd30-c04829d3000e, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:57:15 np0005625203.localdomain dnsmasq[323646]: started, version 2.85 cachesize 150
Feb 20 09:57:15 np0005625203.localdomain dnsmasq[323646]: DNS service limited to local subnets
Feb 20 09:57:15 np0005625203.localdomain dnsmasq[323646]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:57:15 np0005625203.localdomain dnsmasq[323646]: warning: no upstream servers configured
Feb 20 09:57:15 np0005625203.localdomain dnsmasq-dhcp[323646]: DHCP, static leases only on 10.100.0.0, lease time 1d
Feb 20 09:57:15 np0005625203.localdomain dnsmasq[323646]: read /var/lib/neutron/dhcp/a8cdd780-5302-40de-bd30-c04829d3000e/addn_hosts - 0 addresses
Feb 20 09:57:15 np0005625203.localdomain dnsmasq-dhcp[323646]: read /var/lib/neutron/dhcp/a8cdd780-5302-40de-bd30-c04829d3000e/host
Feb 20 09:57:15 np0005625203.localdomain dnsmasq-dhcp[323646]: read /var/lib/neutron/dhcp/a8cdd780-5302-40de-bd30-c04829d3000e/opts
Feb 20 09:57:15 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:57:15.043 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:6d:09'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-039b20b8-16a8-495e-968a-63fcd66a566c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-039b20b8-16a8-495e-968a-63fcd66a566c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '62dd1d7e0cf547678612304aba2895e2', 'neutron:revision_number': '9', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ec4f10d3-21c0-4f32-97fb-37b6b004b601, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=3568ed7b-9263-43af-b4fd-ae333afc9a3b) old=Port_Binding(mac=['fa:16:3e:68:6d:09 10.100.0.18 10.100.0.2 10.100.0.34'], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28 10.100.0.34/28', 'neutron:device_id': 'ovnmeta-039b20b8-16a8-495e-968a-63fcd66a566c', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-039b20b8-16a8-495e-968a-63fcd66a566c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '62dd1d7e0cf547678612304aba2895e2', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:57:15 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:57:15.045 161112 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 3568ed7b-9263-43af-b4fd-ae333afc9a3b in datapath 039b20b8-16a8-495e-968a-63fcd66a566c updated
Feb 20 09:57:15 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:57:15.047 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 039b20b8-16a8-495e-968a-63fcd66a566c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:57:15 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:57:15.048 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[2a577477-567f-4e46-a874-a9cfe92f4ea3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:57:15 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:57:15.093 262775 INFO neutron.agent.dhcp.agent [None req-49f91563-c1d8-4530-bb7b-b5318ed02274 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:57:13Z, description=, device_id=0e46191c-0f3d-41cb-bcbf-751deef54c48, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4c7edc0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4c84910>], id=f5f6c31e-e6e9-41bc-a2dd-515c0f591ec4, ip_allocation=immediate, mac_address=fa:16:3e:16:83:76, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:57:10Z, description=, dns_domain=, id=a8cdd780-5302-40de-bd30-c04829d3000e, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPNegativeTestJSON-test-network-1773188037, port_security_enabled=True, project_id=4bc7f22347de4004b73776eab4064bd0, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=64087, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2513, status=ACTIVE, subnets=['b1355493-8154-4e2a-a983-f9f50e876815'], tags=[], tenant_id=4bc7f22347de4004b73776eab4064bd0, updated_at=2026-02-20T09:57:11Z, vlan_transparent=None, network_id=a8cdd780-5302-40de-bd30-c04829d3000e, port_security_enabled=False, project_id=4bc7f22347de4004b73776eab4064bd0, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2530, status=DOWN, tags=[], tenant_id=4bc7f22347de4004b73776eab4064bd0, updated_at=2026-02-20T09:57:13Z on network a8cdd780-5302-40de-bd30-c04829d3000e
Feb 20 09:57:15 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:57:15.190 262775 INFO neutron.agent.dhcp.agent [None req-716fdcb5-42a8-45a1-97e0-baec1bdd4598 - - - - - -] DHCP configuration for ports {'b9a500dd-03e1-4d00-bcf5-2781745bb2f2'} is completed
Feb 20 09:57:15 np0005625203.localdomain ceph-mon[296066]: pgmap v348: 177 pgs: 177 active+clean; 193 MiB data, 903 MiB used, 41 GiB / 42 GiB avail; 42 KiB/s rd, 42 KiB/s wr, 64 op/s
Feb 20 09:57:15 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "397e3bc9-4e24-4148-9154-b1a0a65a1d98", "auth_id": "eve48", "tenant_id": "1401fb23701440858ed7175cc4dba63b", "access_level": "rw", "format": "json"}]: dispatch
Feb 20 09:57:15 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch
Feb 20 09:57:15 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/397e3bc9-4e24-4148-9154-b1a0a65a1d98/2bd5db79-8b70-47ad-aae0-ee8531062739", "osd", "allow rw pool=manila_data namespace=fsvolumens_397e3bc9-4e24-4148-9154-b1a0a65a1d98", "mon", "allow r"], "format": "json"} : dispatch
Feb 20 09:57:15 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve48", "caps": ["mds", "allow rw path=/volumes/_nogroup/397e3bc9-4e24-4148-9154-b1a0a65a1d98/2bd5db79-8b70-47ad-aae0-ee8531062739", "osd", "allow rw pool=manila_data namespace=fsvolumens_397e3bc9-4e24-4148-9154-b1a0a65a1d98", "mon", "allow r"], "format": "json"}]': finished
Feb 20 09:57:15 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/1303196629' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:57:15 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/1303196629' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:57:15 np0005625203.localdomain dnsmasq[323646]: read /var/lib/neutron/dhcp/a8cdd780-5302-40de-bd30-c04829d3000e/addn_hosts - 1 addresses
Feb 20 09:57:15 np0005625203.localdomain dnsmasq-dhcp[323646]: read /var/lib/neutron/dhcp/a8cdd780-5302-40de-bd30-c04829d3000e/host
Feb 20 09:57:15 np0005625203.localdomain dnsmasq-dhcp[323646]: read /var/lib/neutron/dhcp/a8cdd780-5302-40de-bd30-c04829d3000e/opts
Feb 20 09:57:15 np0005625203.localdomain podman[323664]: 2026-02-20 09:57:15.344843085 +0000 UTC m=+0.070930275 container kill e463b5472efe1b015d154c0804cf5afafe3de98227426b5003f32050e92eca14 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a8cdd780-5302-40de-bd30-c04829d3000e, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 20 09:57:15 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e172 e172: 6 total, 6 up, 6 in
Feb 20 09:57:15 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:57:15.649 262775 INFO neutron.agent.dhcp.agent [None req-dec82803-f232-45c4-96c0-f8351b36bc96 - - - - - -] DHCP configuration for ports {'f5f6c31e-e6e9-41bc-a2dd-515c0f591ec4'} is completed
Feb 20 09:57:15 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:57:15.825 2 INFO neutron.agent.securitygroups_rpc [None req-e7815d36-a39d-42c8-a497-7fe4eae772f9 469c956856df4f7fa0303f662df3cdef 62dd1d7e0cf547678612304aba2895e2 - - default default] Security group member updated ['72ed92b6-af24-4274-854b-a52220405faf']
Feb 20 09:57:15 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:57:15.865 262775 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:57:13Z, description=, device_id=0e46191c-0f3d-41cb-bcbf-751deef54c48, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4e42100>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4e425e0>], id=f5f6c31e-e6e9-41bc-a2dd-515c0f591ec4, ip_allocation=immediate, mac_address=fa:16:3e:16:83:76, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:57:10Z, description=, dns_domain=, id=a8cdd780-5302-40de-bd30-c04829d3000e, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPNegativeTestJSON-test-network-1773188037, port_security_enabled=True, project_id=4bc7f22347de4004b73776eab4064bd0, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=64087, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2513, status=ACTIVE, subnets=['b1355493-8154-4e2a-a983-f9f50e876815'], tags=[], tenant_id=4bc7f22347de4004b73776eab4064bd0, updated_at=2026-02-20T09:57:11Z, vlan_transparent=None, network_id=a8cdd780-5302-40de-bd30-c04829d3000e, port_security_enabled=False, project_id=4bc7f22347de4004b73776eab4064bd0, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=2530, status=DOWN, tags=[], tenant_id=4bc7f22347de4004b73776eab4064bd0, updated_at=2026-02-20T09:57:13Z on network a8cdd780-5302-40de-bd30-c04829d3000e
Feb 20 09:57:15 np0005625203.localdomain systemd[1]: tmp-crun.YV0K88.mount: Deactivated successfully.
Feb 20 09:57:16 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:16.060 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:57:16 np0005625203.localdomain dnsmasq[323646]: read /var/lib/neutron/dhcp/a8cdd780-5302-40de-bd30-c04829d3000e/addn_hosts - 1 addresses
Feb 20 09:57:16 np0005625203.localdomain dnsmasq-dhcp[323646]: read /var/lib/neutron/dhcp/a8cdd780-5302-40de-bd30-c04829d3000e/host
Feb 20 09:57:16 np0005625203.localdomain podman[323701]: 2026-02-20 09:57:16.106982764 +0000 UTC m=+0.061617648 container kill e463b5472efe1b015d154c0804cf5afafe3de98227426b5003f32050e92eca14 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a8cdd780-5302-40de-bd30-c04829d3000e, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 20 09:57:16 np0005625203.localdomain dnsmasq-dhcp[323646]: read /var/lib/neutron/dhcp/a8cdd780-5302-40de-bd30-c04829d3000e/opts
Feb 20 09:57:16 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:57:16.332 2 INFO neutron.agent.securitygroups_rpc [None req-3bc90bc6-4752-435d-941c-f0e75fc5d0a5 e8d99e5aba074cfb8aea01d99045d2af 8a08202c1391432d972dc0430612e0e0 - - default default] Security group member updated ['49b521a4-2cce-4f1a-b690-2fa2cab68db5']
Feb 20 09:57:16 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e173 e173: 6 total, 6 up, 6 in
Feb 20 09:57:16 np0005625203.localdomain ceph-mon[296066]: osdmap e172: 6 total, 6 up, 6 in
Feb 20 09:57:16 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/2050439858' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:57:16 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/2050439858' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:57:16 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:57:16.395 262775 INFO neutron.agent.dhcp.agent [None req-1605b901-55ed-4586-b937-c2d7a4c8b983 - - - - - -] DHCP configuration for ports {'f5f6c31e-e6e9-41bc-a2dd-515c0f591ec4'} is completed
Feb 20 09:57:16 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:57:16.929 2 INFO neutron.agent.securitygroups_rpc [None req-4f303a3e-0093-4654-a2f8-5b0853d1acad 3fd5694d6e624148892ddc3041d2f0e1 4bc7f22347de4004b73776eab4064bd0 - - default default] Security group member updated ['c599d16d-0283-4cf2-8a39-4a506ff8f2f0']
Feb 20 09:57:16 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:57:16.961 262775 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:57:16Z, description=, device_id=, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4cf7670>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4cf7070>], id=66552ad4-e599-4f60-8450-2a8508ecc940, ip_allocation=immediate, mac_address=fa:16:3e:5b:e6:6c, name=tempest-FloatingIPNegativeTestJSON-318982942, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:57:10Z, description=, dns_domain=, id=a8cdd780-5302-40de-bd30-c04829d3000e, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-FloatingIPNegativeTestJSON-test-network-1773188037, port_security_enabled=True, project_id=4bc7f22347de4004b73776eab4064bd0, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=64087, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2513, status=ACTIVE, subnets=['b1355493-8154-4e2a-a983-f9f50e876815'], tags=[], tenant_id=4bc7f22347de4004b73776eab4064bd0, updated_at=2026-02-20T09:57:11Z, vlan_transparent=None, network_id=a8cdd780-5302-40de-bd30-c04829d3000e, port_security_enabled=True, project_id=4bc7f22347de4004b73776eab4064bd0, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['c599d16d-0283-4cf2-8a39-4a506ff8f2f0'], standard_attr_id=2532, status=DOWN, tags=[], tenant_id=4bc7f22347de4004b73776eab4064bd0, updated_at=2026-02-20T09:57:16Z on network a8cdd780-5302-40de-bd30-c04829d3000e
Feb 20 09:57:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:57:17.202 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:57:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:57:17.203 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:57:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:57:17.203 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:57:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:57:17.203 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:57:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:57:17.203 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:57:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:57:17.203 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:57:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:57:17.204 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:57:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:57:17.204 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:57:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:57:17.204 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:57:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:57:17.204 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:57:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:57:17.204 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:57:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:57:17.204 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:57:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:57:17.204 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:57:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:57:17.205 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:57:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:57:17.205 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:57:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:57:17.205 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:57:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:57:17.205 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:57:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:57:17.205 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:57:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:57:17.205 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:57:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:57:17.205 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:57:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:57:17.206 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:57:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:57:17.206 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:57:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:57:17.206 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:57:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:57:17.206 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:57:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:57:17.206 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:57:17 np0005625203.localdomain systemd[1]: tmp-crun.wyQcto.mount: Deactivated successfully.
Feb 20 09:57:17 np0005625203.localdomain dnsmasq[323646]: read /var/lib/neutron/dhcp/a8cdd780-5302-40de-bd30-c04829d3000e/addn_hosts - 2 addresses
Feb 20 09:57:17 np0005625203.localdomain dnsmasq-dhcp[323646]: read /var/lib/neutron/dhcp/a8cdd780-5302-40de-bd30-c04829d3000e/host
Feb 20 09:57:17 np0005625203.localdomain dnsmasq-dhcp[323646]: read /var/lib/neutron/dhcp/a8cdd780-5302-40de-bd30-c04829d3000e/opts
Feb 20 09:57:17 np0005625203.localdomain podman[323737]: 2026-02-20 09:57:17.209750521 +0000 UTC m=+0.078568382 container kill e463b5472efe1b015d154c0804cf5afafe3de98227426b5003f32050e92eca14 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a8cdd780-5302-40de-bd30-c04829d3000e, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_managed=true)
Feb 20 09:57:17 np0005625203.localdomain ceph-mon[296066]: pgmap v350: 177 pgs: 177 active+clean; 194 MiB data, 940 MiB used, 41 GiB / 42 GiB avail; 92 KiB/s rd, 90 KiB/s wr, 140 op/s
Feb 20 09:57:17 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9754040c-efe1-4b31-bb60-2e1a242177cd", "format": "json"}]: dispatch
Feb 20 09:57:17 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9754040c-efe1-4b31-bb60-2e1a242177cd", "force": true, "format": "json"}]: dispatch
Feb 20 09:57:17 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "56e65ef4-1a61-4d5a-9d2e-c59d2323fd6c", "format": "json"}]: dispatch
Feb 20 09:57:17 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "56e65ef4-1a61-4d5a-9d2e-c59d2323fd6c", "force": true, "format": "json"}]: dispatch
Feb 20 09:57:17 np0005625203.localdomain ceph-mon[296066]: osdmap e173: 6 total, 6 up, 6 in
Feb 20 09:57:17 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/2731859964' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:57:17 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/2731859964' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:57:17 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e174 e174: 6 total, 6 up, 6 in
Feb 20 09:57:17 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:57:17.452 262775 INFO neutron.agent.dhcp.agent [None req-cc3ed7d6-9a75-4a6a-ab0e-eca83f05952c - - - - - -] DHCP configuration for ports {'66552ad4-e599-4f60-8450-2a8508ecc940'} is completed
Feb 20 09:57:17 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk asok_command: session evict {filters=[auth_name=eve48,client_metadata.root=/volumes/_nogroup/397e3bc9-4e24-4148-9154-b1a0a65a1d98/2bd5db79-8b70-47ad-aae0-ee8531062739],prefix=session evict} (starting...)
Feb 20 09:57:17 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e175 e175: 6 total, 6 up, 6 in
Feb 20 09:57:18 np0005625203.localdomain ceph-mon[296066]: osdmap e174: 6 total, 6 up, 6 in
Feb 20 09:57:18 np0005625203.localdomain ceph-mon[296066]: pgmap v353: 177 pgs: 177 active+clean; 194 MiB data, 940 MiB used, 41 GiB / 42 GiB avail; 64 KiB/s rd, 110 KiB/s wr, 102 op/s
Feb 20 09:57:18 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "397e3bc9-4e24-4148-9154-b1a0a65a1d98", "auth_id": "eve48", "format": "json"}]: dispatch
Feb 20 09:57:18 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.eve48", "format": "json"} : dispatch
Feb 20 09:57:18 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.eve48"} : dispatch
Feb 20 09:57:18 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.eve48"}]': finished
Feb 20 09:57:18 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "397e3bc9-4e24-4148-9154-b1a0a65a1d98", "auth_id": "eve48", "format": "json"}]: dispatch
Feb 20 09:57:18 np0005625203.localdomain ceph-mon[296066]: osdmap e175: 6 total, 6 up, 6 in
Feb 20 09:57:19 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e176 e176: 6 total, 6 up, 6 in
Feb 20 09:57:19 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:19.687 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:57:20 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Feb 20 09:57:20 np0005625203.localdomain ceph-mon[296066]: osdmap e176: 6 total, 6 up, 6 in
Feb 20 09:57:20 np0005625203.localdomain ceph-mon[296066]: pgmap v356: 177 pgs: 177 active+clean; 194 MiB data, 958 MiB used, 41 GiB / 42 GiB avail; 2.5 KiB/s rd, 78 KiB/s wr, 11 op/s
Feb 20 09:57:20 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:57:20.854 2 INFO neutron.agent.securitygroups_rpc [None req-41f9c3ce-b340-465f-aa72-7be8aab7d24c 3fd5694d6e624148892ddc3041d2f0e1 4bc7f22347de4004b73776eab4064bd0 - - default default] Security group member updated ['c599d16d-0283-4cf2-8a39-4a506ff8f2f0']
Feb 20 09:57:21 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:21.063 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:57:21 np0005625203.localdomain dnsmasq[323646]: read /var/lib/neutron/dhcp/a8cdd780-5302-40de-bd30-c04829d3000e/addn_hosts - 1 addresses
Feb 20 09:57:21 np0005625203.localdomain dnsmasq-dhcp[323646]: read /var/lib/neutron/dhcp/a8cdd780-5302-40de-bd30-c04829d3000e/host
Feb 20 09:57:21 np0005625203.localdomain podman[323776]: 2026-02-20 09:57:21.09624122 +0000 UTC m=+0.055707554 container kill e463b5472efe1b015d154c0804cf5afafe3de98227426b5003f32050e92eca14 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a8cdd780-5302-40de-bd30-c04829d3000e, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Feb 20 09:57:21 np0005625203.localdomain dnsmasq-dhcp[323646]: read /var/lib/neutron/dhcp/a8cdd780-5302-40de-bd30-c04829d3000e/opts
Feb 20 09:57:21 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "397e3bc9-4e24-4148-9154-b1a0a65a1d98", "auth_id": "eve47", "tenant_id": "1401fb23701440858ed7175cc4dba63b", "access_level": "rw", "format": "json"}]: dispatch
Feb 20 09:57:21 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch
Feb 20 09:57:21 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/397e3bc9-4e24-4148-9154-b1a0a65a1d98/2bd5db79-8b70-47ad-aae0-ee8531062739", "osd", "allow rw pool=manila_data namespace=fsvolumens_397e3bc9-4e24-4148-9154-b1a0a65a1d98", "mon", "allow r"], "format": "json"} : dispatch
Feb 20 09:57:21 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.eve47", "caps": ["mds", "allow rw path=/volumes/_nogroup/397e3bc9-4e24-4148-9154-b1a0a65a1d98/2bd5db79-8b70-47ad-aae0-ee8531062739", "osd", "allow rw pool=manila_data namespace=fsvolumens_397e3bc9-4e24-4148-9154-b1a0a65a1d98", "mon", "allow r"], "format": "json"}]': finished
Feb 20 09:57:21 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e177 e177: 6 total, 6 up, 6 in
Feb 20 09:57:21 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:57:21.741 2 INFO neutron.agent.securitygroups_rpc [None req-84019169-f531-4b37-ab25-b8fba57ed27f e8d99e5aba074cfb8aea01d99045d2af 8a08202c1391432d972dc0430612e0e0 - - default default] Security group member updated ['49b521a4-2cce-4f1a-b690-2fa2cab68db5']
Feb 20 09:57:21 np0005625203.localdomain dnsmasq[323646]: read /var/lib/neutron/dhcp/a8cdd780-5302-40de-bd30-c04829d3000e/addn_hosts - 0 addresses
Feb 20 09:57:21 np0005625203.localdomain dnsmasq-dhcp[323646]: read /var/lib/neutron/dhcp/a8cdd780-5302-40de-bd30-c04829d3000e/host
Feb 20 09:57:21 np0005625203.localdomain dnsmasq-dhcp[323646]: read /var/lib/neutron/dhcp/a8cdd780-5302-40de-bd30-c04829d3000e/opts
Feb 20 09:57:21 np0005625203.localdomain podman[323816]: 2026-02-20 09:57:21.983373226 +0000 UTC m=+0.057875672 container kill e463b5472efe1b015d154c0804cf5afafe3de98227426b5003f32050e92eca14 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a8cdd780-5302-40de-bd30-c04829d3000e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Feb 20 09:57:21 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:57:22 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:57:22 np0005625203.localdomain podman[323831]: 2026-02-20 09:57:22.106159624 +0000 UTC m=+0.088360604 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 20 09:57:22 np0005625203.localdomain podman[323831]: 2026-02-20 09:57:22.11763359 +0000 UTC m=+0.099834560 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 20 09:57:22 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:57:22 np0005625203.localdomain kernel: device tap95594ce0-59 left promiscuous mode
Feb 20 09:57:22 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:22.200 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:57:22 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:57:22Z|00367|binding|INFO|Releasing lport 95594ce0-595d-4597-a33f-9d4324a3042e from this chassis (sb_readonly=0)
Feb 20 09:57:22 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:57:22Z|00368|binding|INFO|Setting lport 95594ce0-595d-4597-a33f-9d4324a3042e down in Southbound
Feb 20 09:57:22 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:57:22.211 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-a8cdd780-5302-40de-bd30-c04829d3000e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a8cdd780-5302-40de-bd30-c04829d3000e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4bc7f22347de4004b73776eab4064bd0', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625203.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fdc71d21-1cdf-477c-8421-e1f9461f4a7c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=95594ce0-595d-4597-a33f-9d4324a3042e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:57:22 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:57:22.213 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 95594ce0-595d-4597-a33f-9d4324a3042e in datapath a8cdd780-5302-40de-bd30-c04829d3000e unbound from our chassis
Feb 20 09:57:22 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:57:22.215 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a8cdd780-5302-40de-bd30-c04829d3000e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:57:22 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:57:22.216 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[0ffb9be7-3be9-47b2-8706-6e43a39ebd3f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:57:22 np0005625203.localdomain podman[323830]: 2026-02-20 09:57:22.216995524 +0000 UTC m=+0.204603271 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 20 09:57:22 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:22.224 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:57:22 np0005625203.localdomain podman[323830]: 2026-02-20 09:57:22.249360715 +0000 UTC m=+0.236968412 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 09:57:22 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 09:57:22 np0005625203.localdomain ceph-mon[296066]: osdmap e177: 6 total, 6 up, 6 in
Feb 20 09:57:22 np0005625203.localdomain ceph-mon[296066]: pgmap v358: 177 pgs: 177 active+clean; 194 MiB data, 947 MiB used, 41 GiB / 42 GiB avail; 5.1 MiB/s rd, 83 KiB/s wr, 175 op/s
Feb 20 09:57:22 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e178 e178: 6 total, 6 up, 6 in
Feb 20 09:57:23 np0005625203.localdomain dnsmasq[323646]: exiting on receipt of SIGTERM
Feb 20 09:57:23 np0005625203.localdomain podman[323901]: 2026-02-20 09:57:23.48810674 +0000 UTC m=+0.098915812 container kill e463b5472efe1b015d154c0804cf5afafe3de98227426b5003f32050e92eca14 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a8cdd780-5302-40de-bd30-c04829d3000e, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 20 09:57:23 np0005625203.localdomain systemd[1]: libpod-e463b5472efe1b015d154c0804cf5afafe3de98227426b5003f32050e92eca14.scope: Deactivated successfully.
Feb 20 09:57:23 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Feb 20 09:57:23 np0005625203.localdomain podman[323917]: 2026-02-20 09:57:23.579243769 +0000 UTC m=+0.069552133 container died e463b5472efe1b015d154c0804cf5afafe3de98227426b5003f32050e92eca14 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a8cdd780-5302-40de-bd30-c04829d3000e, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Feb 20 09:57:23 np0005625203.localdomain systemd[1]: tmp-crun.dNfAqS.mount: Deactivated successfully.
Feb 20 09:57:23 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e463b5472efe1b015d154c0804cf5afafe3de98227426b5003f32050e92eca14-userdata-shm.mount: Deactivated successfully.
Feb 20 09:57:23 np0005625203.localdomain podman[323917]: 2026-02-20 09:57:23.634152118 +0000 UTC m=+0.124460462 container remove e463b5472efe1b015d154c0804cf5afafe3de98227426b5003f32050e92eca14 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-a8cdd780-5302-40de-bd30-c04829d3000e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2)
Feb 20 09:57:23 np0005625203.localdomain systemd[1]: libpod-conmon-e463b5472efe1b015d154c0804cf5afafe3de98227426b5003f32050e92eca14.scope: Deactivated successfully.
Feb 20 09:57:23 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:57:23.666 262775 INFO neutron.agent.dhcp.agent [None req-63e7e25f-1c90-4b59-8ebf-57b0e5402868 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:57:23 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:57:23.856 262775 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:57:23 np0005625203.localdomain ceph-mon[296066]: osdmap e178: 6 total, 6 up, 6 in
Feb 20 09:57:23 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e179 e179: 6 total, 6 up, 6 in
Feb 20 09:57:24 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:24.125 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:57:24 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-c42e6c640f9802d0b748dce1bd268334e49016319e9001418078696d79f94040-merged.mount: Deactivated successfully.
Feb 20 09:57:24 np0005625203.localdomain systemd[1]: run-netns-qdhcp\x2da8cdd780\x2d5302\x2d40de\x2dbd30\x2dc04829d3000e.mount: Deactivated successfully.
Feb 20 09:57:24 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:24.688 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:57:24 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk asok_command: session evict {filters=[auth_name=eve47,client_metadata.root=/volumes/_nogroup/397e3bc9-4e24-4148-9154-b1a0a65a1d98/2bd5db79-8b70-47ad-aae0-ee8531062739],prefix=session evict} (starting...)
Feb 20 09:57:24 np0005625203.localdomain ceph-mon[296066]: pgmap v360: 177 pgs: 177 active+clean; 194 MiB data, 947 MiB used, 41 GiB / 42 GiB avail; 3.7 MiB/s rd, 60 KiB/s wr, 127 op/s
Feb 20 09:57:24 np0005625203.localdomain ceph-mon[296066]: osdmap e179: 6 total, 6 up, 6 in
Feb 20 09:57:24 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.eve47", "format": "json"} : dispatch
Feb 20 09:57:24 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.eve47"} : dispatch
Feb 20 09:57:24 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.eve47"}]': finished
Feb 20 09:57:25 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 09:57:25 np0005625203.localdomain sshd[323942]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:57:25 np0005625203.localdomain sshd[323942]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:57:25 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "397e3bc9-4e24-4148-9154-b1a0a65a1d98", "auth_id": "eve47", "format": "json"}]: dispatch
Feb 20 09:57:25 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "397e3bc9-4e24-4148-9154-b1a0a65a1d98", "auth_id": "eve47", "format": "json"}]: dispatch
Feb 20 09:57:25 np0005625203.localdomain ceph-mon[296066]: mgrmap e50: np0005625202.arwxwo(active, since 9m), standbys: np0005625203.lonygy, np0005625204.exgrzx
Feb 20 09:57:25 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e180 e180: 6 total, 6 up, 6 in
Feb 20 09:57:26 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:26.066 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:57:26 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e181 e181: 6 total, 6 up, 6 in
Feb 20 09:57:27 np0005625203.localdomain ceph-mon[296066]: pgmap v362: 177 pgs: 177 active+clean; 241 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 3.6 MiB/s rd, 3.6 MiB/s wr, 224 op/s
Feb 20 09:57:27 np0005625203.localdomain ceph-mon[296066]: osdmap e180: 6 total, 6 up, 6 in
Feb 20 09:57:27 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/1300248318' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:57:27 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/1300248318' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:57:27 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e182 e182: 6 total, 6 up, 6 in
Feb 20 09:57:28 np0005625203.localdomain ceph-mon[296066]: osdmap e181: 6 total, 6 up, 6 in
Feb 20 09:57:28 np0005625203.localdomain ceph-mon[296066]: osdmap e182: 6 total, 6 up, 6 in
Feb 20 09:57:28 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/3824275205' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:57:28 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/3824275205' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:57:28 np0005625203.localdomain podman[240359]: time="2026-02-20T09:57:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:57:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:57:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 155894 "" "Go-http-client/1.1"
Feb 20 09:57:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:57:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18299 "" "Go-http-client/1.1"
Feb 20 09:57:29 np0005625203.localdomain ceph-mon[296066]: pgmap v365: 177 pgs: 177 active+clean; 241 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 90 KiB/s rd, 4.6 MiB/s wr, 143 op/s
Feb 20 09:57:29 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk asok_command: session evict {filters=[auth_name=eve49,client_metadata.root=/volumes/_nogroup/397e3bc9-4e24-4148-9154-b1a0a65a1d98/2bd5db79-8b70-47ad-aae0-ee8531062739],prefix=session evict} (starting...)
Feb 20 09:57:29 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:29.733 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:57:30 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 09:57:30 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "397e3bc9-4e24-4148-9154-b1a0a65a1d98", "auth_id": "eve49", "format": "json"}]: dispatch
Feb 20 09:57:30 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.eve49", "format": "json"} : dispatch
Feb 20 09:57:30 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.eve49"} : dispatch
Feb 20 09:57:30 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.eve49"}]': finished
Feb 20 09:57:30 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "397e3bc9-4e24-4148-9154-b1a0a65a1d98", "auth_id": "eve49", "format": "json"}]: dispatch
Feb 20 09:57:30 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "397e3bc9-4e24-4148-9154-b1a0a65a1d98", "format": "json"}]: dispatch
Feb 20 09:57:30 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "397e3bc9-4e24-4148-9154-b1a0a65a1d98", "force": true, "format": "json"}]: dispatch
Feb 20 09:57:30 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e183 e183: 6 total, 6 up, 6 in
Feb 20 09:57:30 np0005625203.localdomain sshd[323944]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:57:31 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:31.110 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:57:31 np0005625203.localdomain ceph-mon[296066]: pgmap v367: 177 pgs: 177 active+clean; 233 MiB data, 1.0 GiB used, 41 GiB / 42 GiB avail; 95 KiB/s rd, 3.9 MiB/s wr, 152 op/s
Feb 20 09:57:31 np0005625203.localdomain ceph-mon[296066]: osdmap e183: 6 total, 6 up, 6 in
Feb 20 09:57:31 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:57:31.211 262775 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:57:31 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:57:31 np0005625203.localdomain sshd[323944]: Received disconnect from 185.196.11.208 port 42966:11: Bye Bye [preauth]
Feb 20 09:57:31 np0005625203.localdomain sshd[323944]: Disconnected from authenticating user root 185.196.11.208 port 42966 [preauth]
Feb 20 09:57:31 np0005625203.localdomain systemd[1]: tmp-crun.9y5It3.mount: Deactivated successfully.
Feb 20 09:57:31 np0005625203.localdomain podman[323946]: 2026-02-20 09:57:31.770062374 +0000 UTC m=+0.087074975 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:57:31 np0005625203.localdomain podman[323946]: 2026-02-20 09:57:31.7783422 +0000 UTC m=+0.095354831 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127)
Feb 20 09:57:31 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:57:32 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e184 e184: 6 total, 6 up, 6 in
Feb 20 09:57:32 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e185 e185: 6 total, 6 up, 6 in
Feb 20 09:57:33 np0005625203.localdomain ceph-mon[296066]: pgmap v369: 177 pgs: 177 active+clean; 195 MiB data, 976 MiB used, 41 GiB / 42 GiB avail; 103 KiB/s rd, 66 KiB/s wr, 148 op/s
Feb 20 09:57:33 np0005625203.localdomain ceph-mon[296066]: osdmap e184: 6 total, 6 up, 6 in
Feb 20 09:57:33 np0005625203.localdomain ceph-mon[296066]: osdmap e185: 6 total, 6 up, 6 in
Feb 20 09:57:33 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:57:33.416 2 INFO neutron.agent.securitygroups_rpc [None req-203ffc30-16ac-4832-a92b-6d9503978c8f fd5cbccf037047f8a8d2b9488223d9dc 92c7b74ddebf4a4a82ffeb6b9b8a9111 - - default default] Security group member updated ['23f09a95-adc3-4f59-96fa-bbbacd5ff83a']
Feb 20 09:57:33 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:57:33.800 2 INFO neutron.agent.securitygroups_rpc [None req-203ffc30-16ac-4832-a92b-6d9503978c8f fd5cbccf037047f8a8d2b9488223d9dc 92c7b74ddebf4a4a82ffeb6b9b8a9111 - - default default] Security group member updated ['23f09a95-adc3-4f59-96fa-bbbacd5ff83a']
Feb 20 09:57:33 np0005625203.localdomain sshd[323964]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:57:34 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e186 e186: 6 total, 6 up, 6 in
Feb 20 09:57:34 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/2716620294' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:57:34 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/2716620294' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:57:34 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:34.775 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:57:34 np0005625203.localdomain sshd[323964]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:57:34 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:57:34.818 2 INFO neutron.agent.securitygroups_rpc [None req-3f70ffff-dbcf-4468-97f0-19b384f44318 fd5cbccf037047f8a8d2b9488223d9dc 92c7b74ddebf4a4a82ffeb6b9b8a9111 - - default default] Security group member updated ['23f09a95-adc3-4f59-96fa-bbbacd5ff83a']
Feb 20 09:57:35 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 09:57:35 np0005625203.localdomain ceph-mon[296066]: pgmap v372: 177 pgs: 177 active+clean; 195 MiB data, 976 MiB used, 41 GiB / 42 GiB avail; 102 KiB/s rd, 65 KiB/s wr, 146 op/s
Feb 20 09:57:35 np0005625203.localdomain ceph-mon[296066]: osdmap e186: 6 total, 6 up, 6 in
Feb 20 09:57:35 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:57:35.478 2 INFO neutron.agent.securitygroups_rpc [None req-16cacc97-a725-4413-b643-7033155fe483 fd5cbccf037047f8a8d2b9488223d9dc 92c7b74ddebf4a4a82ffeb6b9b8a9111 - - default default] Security group member updated ['23f09a95-adc3-4f59-96fa-bbbacd5ff83a']
Feb 20 09:57:36 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:36.146 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:57:36 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 20 09:57:36 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3566039117' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:57:36 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 20 09:57:36 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3566039117' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:57:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:57:37 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 09:57:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:57:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:57:37 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 09:57:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:57:37 np0005625203.localdomain ceph-mon[296066]: pgmap v374: 177 pgs: 177 active+clean; 195 MiB data, 961 MiB used, 41 GiB / 42 GiB avail; 65 KiB/s rd, 10 KiB/s wr, 91 op/s
Feb 20 09:57:37 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/3566039117' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:57:37 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/3566039117' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:57:37 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e187 e187: 6 total, 6 up, 6 in
Feb 20 09:57:38 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:57:38 np0005625203.localdomain podman[323966]: 2026-02-20 09:57:38.76553557 +0000 UTC m=+0.081922385 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Feb 20 09:57:38 np0005625203.localdomain podman[323966]: 2026-02-20 09:57:38.825147615 +0000 UTC m=+0.141534390 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Feb 20 09:57:38 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:57:38 np0005625203.localdomain ceph-mon[296066]: pgmap v375: 177 pgs: 177 active+clean; 195 MiB data, 961 MiB used, 41 GiB / 42 GiB avail; 59 KiB/s rd, 9.2 KiB/s wr, 83 op/s
Feb 20 09:57:38 np0005625203.localdomain ceph-mon[296066]: osdmap e187: 6 total, 6 up, 6 in
Feb 20 09:57:39 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:39.803 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:57:40 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 09:57:40 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:57:40.321 262775 INFO neutron.agent.dhcp.agent [None req-9da2ae79-4614-4cd3-9a31-490cc1129a07 - - - - - -] Synchronizing state
Feb 20 09:57:40 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:57:40.471 262775 INFO neutron.agent.dhcp.agent [None req-0252c527-9067-46c2-9115-b60c2dc28075 - - - - - -] All active networks have been fetched through RPC.
Feb 20 09:57:40 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:57:40.472 262775 INFO neutron.agent.dhcp.agent [-] Starting network e1ec0e81-af09-42a6-8c6e-f557d6053cf4 dhcp configuration
Feb 20 09:57:40 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:57:40.473 262775 INFO neutron.agent.dhcp.agent [-] Finished network e1ec0e81-af09-42a6-8c6e-f557d6053cf4 dhcp configuration
Feb 20 09:57:40 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:57:40.473 262775 INFO neutron.agent.dhcp.agent [None req-0252c527-9067-46c2-9115-b60c2dc28075 - - - - - -] Synchronizing state complete
Feb 20 09:57:40 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:57:40 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:57:40 np0005625203.localdomain podman[323991]: 2026-02-20 09:57:40.763799692 +0000 UTC m=+0.081085739 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 20 09:57:40 np0005625203.localdomain podman[323992]: 2026-02-20 09:57:40.821352982 +0000 UTC m=+0.133406887 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.33.7, container_name=openstack_network_exporter, version=9.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal, release=1770267347, vendor=Red Hat, Inc., build-date=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, maintainer=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, org.opencontainers.image.created=2026-02-05T04:57:10Z)
Feb 20 09:57:40 np0005625203.localdomain podman[323991]: 2026-02-20 09:57:40.849909216 +0000 UTC m=+0.167195263 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_compute)
Feb 20 09:57:40 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:57:40 np0005625203.localdomain podman[323992]: 2026-02-20 09:57:40.86621346 +0000 UTC m=+0.178267385 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, config_id=openstack_network_exporter, name=ubi9/ubi-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1770267347, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, build-date=2026-02-05T04:57:10Z, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.buildah.version=1.33.7, architecture=x86_64, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter)
Feb 20 09:57:40 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:57:40 np0005625203.localdomain ceph-mon[296066]: pgmap v377: 177 pgs: 177 active+clean; 195 MiB data, 961 MiB used, 41 GiB / 42 GiB avail; 54 KiB/s rd, 16 KiB/s wr, 76 op/s
Feb 20 09:57:41 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:41.149 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:57:41 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:57:41.792 262775 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:57:41 np0005625203.localdomain sudo[324028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:57:41 np0005625203.localdomain sudo[324028]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:57:41 np0005625203.localdomain sudo[324028]: pam_unix(sudo:session): session closed for user root
Feb 20 09:57:41 np0005625203.localdomain sudo[324046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 09:57:41 np0005625203.localdomain sudo[324046]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:57:41 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:57:41.962 262775 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:57:42 np0005625203.localdomain sudo[324046]: pam_unix(sudo:session): session closed for user root
Feb 20 09:57:42 np0005625203.localdomain sudo[324096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:57:42 np0005625203.localdomain sudo[324096]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:57:42 np0005625203.localdomain sudo[324096]: pam_unix(sudo:session): session closed for user root
Feb 20 09:57:42 np0005625203.localdomain ceph-mon[296066]: pgmap v378: 177 pgs: 177 active+clean; 195 MiB data, 961 MiB used, 41 GiB / 42 GiB avail; 60 KiB/s rd, 14 KiB/s wr, 83 op/s
Feb 20 09:57:42 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:57:42 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 09:57:42 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:57:42 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 09:57:44 np0005625203.localdomain ceph-mon[296066]: pgmap v379: 177 pgs: 177 active+clean; 195 MiB data, 961 MiB used, 41 GiB / 42 GiB avail; 51 KiB/s rd, 12 KiB/s wr, 71 op/s
Feb 20 09:57:44 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:57:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:44.806 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:57:45 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 09:57:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:46.191 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:57:46 np0005625203.localdomain ceph-mon[296066]: pgmap v380: 177 pgs: 177 active+clean; 195 MiB data, 961 MiB used, 41 GiB / 42 GiB avail; 12 KiB/s rd, 5.7 KiB/s wr, 16 op/s
Feb 20 09:57:48 np0005625203.localdomain ceph-mon[296066]: pgmap v381: 177 pgs: 177 active+clean; 195 MiB data, 961 MiB used, 41 GiB / 42 GiB avail; 12 KiB/s rd, 5.7 KiB/s wr, 16 op/s
Feb 20 09:57:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:49.808 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:57:50 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 09:57:50 np0005625203.localdomain ceph-mon[296066]: pgmap v382: 177 pgs: 177 active+clean; 195 MiB data, 961 MiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 4.9 KiB/s wr, 14 op/s
Feb 20 09:57:51 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:51.237 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:57:51 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 09:57:51 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:57:51 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "92707d6b-2313-43a9-b4cc-78c6dfb385e9", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 09:57:51 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "92707d6b-2313-43a9-b4cc-78c6dfb385e9", "format": "json"}]: dispatch
Feb 20 09:57:51 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:57:52 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:57:52 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:57:52 np0005625203.localdomain podman[324115]: 2026-02-20 09:57:52.786563307 +0000 UTC m=+0.093109721 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 20 09:57:52 np0005625203.localdomain podman[324115]: 2026-02-20 09:57:52.822225741 +0000 UTC m=+0.128772145 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 20 09:57:52 np0005625203.localdomain podman[324114]: 2026-02-20 09:57:52.833664355 +0000 UTC m=+0.142313534 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 20 09:57:52 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:57:52 np0005625203.localdomain podman[324114]: 2026-02-20 09:57:52.866981256 +0000 UTC m=+0.175630425 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 09:57:52 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 09:57:52 np0005625203.localdomain ceph-mon[296066]: pgmap v383: 177 pgs: 177 active+clean; 195 MiB data, 961 MiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 255 B/s wr, 13 op/s
Feb 20 09:57:52 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e188 e188: 6 total, 6 up, 6 in
Feb 20 09:57:53 np0005625203.localdomain ceph-mon[296066]: osdmap e188: 6 total, 6 up, 6 in
Feb 20 09:57:54 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:57:54.288 262775 INFO neutron.agent.dhcp.agent [None req-0252c527-9067-46c2-9115-b60c2dc28075 - - - - - -] Synchronizing state
Feb 20 09:57:54 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:57:54.428 262775 INFO neutron.agent.dhcp.agent [None req-af20781d-07d3-4eea-9908-43f9e0ab5016 - - - - - -] All active networks have been fetched through RPC.
Feb 20 09:57:54 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:57:54.429 262775 INFO neutron.agent.dhcp.agent [-] Starting network 057f3b72-ca0f-41eb-978e-1318b477c4fa dhcp configuration
Feb 20 09:57:54 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:57:54.430 262775 INFO neutron.agent.dhcp.agent [-] Finished network 057f3b72-ca0f-41eb-978e-1318b477c4fa dhcp configuration
Feb 20 09:57:54 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:57:54.431 262775 INFO neutron.agent.dhcp.agent [None req-af20781d-07d3-4eea-9908-43f9e0ab5016 - - - - - -] Synchronizing state complete
Feb 20 09:57:54 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:57:54.432 262775 INFO neutron.agent.dhcp.agent [None req-5aa097f0-10d8-4510-9d8d-287f357d4220 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:57:54 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 09:57:54 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:57:54 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:54.832 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:57:54 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:57:54.955 262775 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:57:55 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 09:57:55 np0005625203.localdomain ceph-mon[296066]: pgmap v385: 177 pgs: 177 active+clean; 195 MiB data, 961 MiB used, 41 GiB / 42 GiB avail
Feb 20 09:57:55 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ad9afd43-9e16-42b2-8035-3a7a6ffc2a82", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 09:57:55 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:57:55 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e189 e189: 6 total, 6 up, 6 in
Feb 20 09:57:55 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 09:57:55 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:57:56 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ad9afd43-9e16-42b2-8035-3a7a6ffc2a82", "format": "json"}]: dispatch
Feb 20 09:57:56 np0005625203.localdomain ceph-mon[296066]: osdmap e189: 6 total, 6 up, 6 in
Feb 20 09:57:56 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:57:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:56.279 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:57:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:56.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:57:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:56.342 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:57:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:56.368 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:57:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:56.369 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:57:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:56.369 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:57:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:56.370 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Auditing locally available compute resources for np0005625203.localdomain (node: np0005625203.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 20 09:57:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:56.370 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:57:56 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 20 09:57:56 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1119748421' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:57:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:56.824 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:57:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:57.058 279640 WARNING nova.virt.libvirt.driver [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 20 09:57:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:57.060 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Hypervisor/Node resource view: name=np0005625203.localdomain free_ram=11667MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 20 09:57:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:57.061 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:57:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:57.061 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:57:57 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ef5178d4-d0bc-4d31-ba72-f530bb14424f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 09:57:57 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ef5178d4-d0bc-4d31-ba72-f530bb14424f", "format": "json"}]: dispatch
Feb 20 09:57:57 np0005625203.localdomain ceph-mon[296066]: pgmap v387: 177 pgs: 177 active+clean; 195 MiB data, 962 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 17 KiB/s wr, 25 op/s
Feb 20 09:57:57 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/3816358087' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:57:57 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/3816358087' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:57:57 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/1119748421' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:57:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:57.137 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 20 09:57:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:57.138 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Final resource view: name=np0005625203.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 20 09:57:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:57.175 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:57:57 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:57:57.332 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'c2:c2:14', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '46:d0:69:88:97:47'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:57:57 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:57:57.333 161112 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 20 09:57:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:57.373 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:57:57 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 20 09:57:57 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2600015293' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:57:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:57.694 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:57:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:57.701 279640 DEBUG nova.compute.provider_tree [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Inventory has not changed in ProviderTree for provider: e5d5157a-2df2-4f51-b5fb-cd2da3a8584e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 20 09:57:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:57.715 279640 DEBUG nova.scheduler.client.report [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Inventory has not changed for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 20 09:57:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:57.718 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Compute_service record updated for np0005625203.localdomain:np0005625203.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 20 09:57:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:57.718 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:57:58 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/2600015293' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:57:58 np0005625203.localdomain sshd[324204]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:57:58 np0005625203.localdomain podman[240359]: time="2026-02-20T09:57:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:57:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:57:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 155894 "" "Go-http-client/1.1"
Feb 20 09:57:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:57:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18302 "" "Go-http-client/1.1"
Feb 20 09:57:59 np0005625203.localdomain ceph-mon[296066]: pgmap v388: 177 pgs: 177 active+clean; 195 MiB data, 962 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 17 KiB/s wr, 25 op/s
Feb 20 09:57:59 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ad9afd43-9e16-42b2-8035-3a7a6ffc2a82", "format": "json"}]: dispatch
Feb 20 09:57:59 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ad9afd43-9e16-42b2-8035-3a7a6ffc2a82", "force": true, "format": "json"}]: dispatch
Feb 20 09:57:59 np0005625203.localdomain sshd[324204]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:57:59 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:57:59.864 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:00 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 09:58:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:00.719 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:58:00 np0005625203.localdomain ceph-mon[296066]: pgmap v389: 177 pgs: 177 active+clean; 195 MiB data, 962 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 48 KiB/s wr, 27 op/s
Feb 20 09:58:01 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:01.052 262775 INFO neutron.agent.linux.ip_lib [None req-a87a4341-239e-413e-bf01-5252d818105d - - - - - -] Device tap5d4a7eda-44 cannot be used as it has no MAC address
Feb 20 09:58:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:01.203 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:01 np0005625203.localdomain kernel: device tap5d4a7eda-44 entered promiscuous mode
Feb 20 09:58:01 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:58:01Z|00369|binding|INFO|Claiming lport 5d4a7eda-44c4-46ad-a801-cf15d3f54ffd for this chassis.
Feb 20 09:58:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:01.212 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:01 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581481.2131] manager: (tap5d4a7eda-44): new Generic device (/org/freedesktop/NetworkManager/Devices/71)
Feb 20 09:58:01 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:58:01Z|00370|binding|INFO|5d4a7eda-44c4-46ad-a801-cf15d3f54ffd: Claiming unknown
Feb 20 09:58:01 np0005625203.localdomain systemd-udevd[324216]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 09:58:01 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:58:01.221 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.1/28', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-0df15180-77b4-4a39-944d-6dd7686af62e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0df15180-77b4-4a39-944d-6dd7686af62e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '80723b5de8af4075aa84c53e89b4d020', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef9b0228-a510-4244-bb76-4a29703274b7, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=5d4a7eda-44c4-46ad-a801-cf15d3f54ffd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:58:01 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:58:01.223 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 5d4a7eda-44c4-46ad-a801-cf15d3f54ffd in datapath 0df15180-77b4-4a39-944d-6dd7686af62e bound to our chassis
Feb 20 09:58:01 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:58:01.226 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 0df15180-77b4-4a39-944d-6dd7686af62e or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:58:01 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:58:01.227 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[6371c0fb-a4e6-4fba-be62-b78a782771fe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:58:01 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap5d4a7eda-44: No such device
Feb 20 09:58:01 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:58:01Z|00371|binding|INFO|Setting lport 5d4a7eda-44c4-46ad-a801-cf15d3f54ffd ovn-installed in OVS
Feb 20 09:58:01 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:58:01Z|00372|binding|INFO|Setting lport 5d4a7eda-44c4-46ad-a801-cf15d3f54ffd up in Southbound
Feb 20 09:58:01 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap5d4a7eda-44: No such device
Feb 20 09:58:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:01.252 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:01 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap5d4a7eda-44: No such device
Feb 20 09:58:01 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap5d4a7eda-44: No such device
Feb 20 09:58:01 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap5d4a7eda-44: No such device
Feb 20 09:58:01 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap5d4a7eda-44: No such device
Feb 20 09:58:01 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap5d4a7eda-44: No such device
Feb 20 09:58:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:01.281 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:01 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap5d4a7eda-44: No such device
Feb 20 09:58:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:01.292 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:01.332 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:01 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 09:58:01 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:58:01 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/4041107687' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:58:01 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/2856583038' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:58:01 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:58:02 np0005625203.localdomain podman[324285]: 
Feb 20 09:58:02 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:58:02Z|00373|binding|INFO|Removing iface tap5d4a7eda-44 ovn-installed in OVS
Feb 20 09:58:02 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:58:02.245 161112 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 9cba2ba2-ac4f-4639-a04d-8affe62e731b with type ""
Feb 20 09:58:02 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:58:02Z|00374|binding|INFO|Removing lport 5d4a7eda-44c4-46ad-a801-cf15d3f54ffd ovn-installed in OVS
Feb 20 09:58:02 np0005625203.localdomain podman[324285]: 2026-02-20 09:58:02.30189212 +0000 UTC m=+0.144714118 container create 74d6cb6e4b08f23b6f08c08b1759058cb8f536d2b95d7a11540868bbcc018e87 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0df15180-77b4-4a39-944d-6dd7686af62e, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:58:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:02.302 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:02 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:58:02.303 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.1/28', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-0df15180-77b4-4a39-944d-6dd7686af62e', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0df15180-77b4-4a39-944d-6dd7686af62e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '80723b5de8af4075aa84c53e89b4d020', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625203.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef9b0228-a510-4244-bb76-4a29703274b7, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=5d4a7eda-44c4-46ad-a801-cf15d3f54ffd) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:58:02 np0005625203.localdomain podman[324285]: 2026-02-20 09:58:02.205298911 +0000 UTC m=+0.048120979 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:58:02 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:58:02.308 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 5d4a7eda-44c4-46ad-a801-cf15d3f54ffd in datapath 0df15180-77b4-4a39-944d-6dd7686af62e unbound from our chassis
Feb 20 09:58:02 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:58:02.309 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0df15180-77b4-4a39-944d-6dd7686af62e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:58:02 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:58:02.311 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[dfa6b921-52b4-4fde-ab7e-3adf941913d1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:58:02 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:58:02 np0005625203.localdomain systemd[1]: Started libpod-conmon-74d6cb6e4b08f23b6f08c08b1759058cb8f536d2b95d7a11540868bbcc018e87.scope.
Feb 20 09:58:02 np0005625203.localdomain systemd[1]: tmp-crun.CG9Gkv.mount: Deactivated successfully.
Feb 20 09:58:02 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:58:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:02.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:58:02 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5c20301d4b47f6e097c21980bcef6ed278c582f1416b5a47a3abb216f8b9305/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:58:02 np0005625203.localdomain podman[324285]: 2026-02-20 09:58:02.354506608 +0000 UTC m=+0.197328626 container init 74d6cb6e4b08f23b6f08c08b1759058cb8f536d2b95d7a11540868bbcc018e87 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0df15180-77b4-4a39-944d-6dd7686af62e, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS)
Feb 20 09:58:02 np0005625203.localdomain podman[324285]: 2026-02-20 09:58:02.361509614 +0000 UTC m=+0.204331632 container start 74d6cb6e4b08f23b6f08c08b1759058cb8f536d2b95d7a11540868bbcc018e87 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0df15180-77b4-4a39-944d-6dd7686af62e, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Feb 20 09:58:02 np0005625203.localdomain dnsmasq[324314]: started, version 2.85 cachesize 150
Feb 20 09:58:02 np0005625203.localdomain dnsmasq[324314]: DNS service limited to local subnets
Feb 20 09:58:02 np0005625203.localdomain dnsmasq[324314]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:58:02 np0005625203.localdomain dnsmasq[324314]: warning: no upstream servers configured
Feb 20 09:58:02 np0005625203.localdomain dnsmasq-dhcp[324314]: DHCP, static leases only on 10.100.0.0, lease time 1d
Feb 20 09:58:02 np0005625203.localdomain dnsmasq[324314]: read /var/lib/neutron/dhcp/0df15180-77b4-4a39-944d-6dd7686af62e/addn_hosts - 0 addresses
Feb 20 09:58:02 np0005625203.localdomain dnsmasq-dhcp[324314]: read /var/lib/neutron/dhcp/0df15180-77b4-4a39-944d-6dd7686af62e/host
Feb 20 09:58:02 np0005625203.localdomain dnsmasq-dhcp[324314]: read /var/lib/neutron/dhcp/0df15180-77b4-4a39-944d-6dd7686af62e/opts
Feb 20 09:58:02 np0005625203.localdomain podman[324299]: 2026-02-20 09:58:02.428770705 +0000 UTC m=+0.091955486 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260127)
Feb 20 09:58:02 np0005625203.localdomain podman[324299]: 2026-02-20 09:58:02.46125171 +0000 UTC m=+0.124436521 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 20 09:58:02 np0005625203.localdomain kernel: device tap5d4a7eda-44 left promiscuous mode
Feb 20 09:58:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:02.465 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:02 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:58:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:02.480 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:02 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:02.542 262775 INFO neutron.agent.dhcp.agent [None req-daed6a7f-a67a-4233-ab03-d7f77a595748 - - - - - -] DHCP configuration for ports {'48c507af-9355-408d-ba58-ea7f7fff1549'} is completed
Feb 20 09:58:02 np0005625203.localdomain dnsmasq[324314]: read /var/lib/neutron/dhcp/0df15180-77b4-4a39-944d-6dd7686af62e/addn_hosts - 0 addresses
Feb 20 09:58:02 np0005625203.localdomain dnsmasq-dhcp[324314]: read /var/lib/neutron/dhcp/0df15180-77b4-4a39-944d-6dd7686af62e/host
Feb 20 09:58:02 np0005625203.localdomain podman[324341]: 2026-02-20 09:58:02.770072094 +0000 UTC m=+0.071412550 container kill 74d6cb6e4b08f23b6f08c08b1759058cb8f536d2b95d7a11540868bbcc018e87 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0df15180-77b4-4a39-944d-6dd7686af62e, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 20 09:58:02 np0005625203.localdomain dnsmasq-dhcp[324314]: read /var/lib/neutron/dhcp/0df15180-77b4-4a39-944d-6dd7686af62e/opts
Feb 20 09:58:02 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:02.806 262775 ERROR neutron.agent.dhcp.agent [None req-988ce594-9a35-4be2-a8d9-0876aa303db6 - - - - - -] Unable to reload_allocations dhcp for 0df15180-77b4-4a39-944d-6dd7686af62e.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap5d4a7eda-44 not found in namespace qdhcp-0df15180-77b4-4a39-944d-6dd7686af62e.
Feb 20 09:58:02 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:02.806 262775 ERROR neutron.agent.dhcp.agent Traceback (most recent call last):
Feb 20 09:58:02 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:02.806 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver
Feb 20 09:58:02 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:02.806 262775 ERROR neutron.agent.dhcp.agent     rv = getattr(driver, action)(**action_kwargs)
Feb 20 09:58:02 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:02.806 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations
Feb 20 09:58:02 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:02.806 262775 ERROR neutron.agent.dhcp.agent     self.device_manager.update(self.network, self.interface_name)
Feb 20 09:58:02 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:02.806 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update
Feb 20 09:58:02 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:02.806 262775 ERROR neutron.agent.dhcp.agent     self._set_default_route(network, device_name)
Feb 20 09:58:02 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:02.806 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route
Feb 20 09:58:02 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:02.806 262775 ERROR neutron.agent.dhcp.agent     self._set_default_route_ip_version(network, device_name,
Feb 20 09:58:02 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:02.806 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version
Feb 20 09:58:02 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:02.806 262775 ERROR neutron.agent.dhcp.agent     gateway = device.route.get_gateway(ip_version=ip_version)
Feb 20 09:58:02 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:02.806 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway
Feb 20 09:58:02 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:02.806 262775 ERROR neutron.agent.dhcp.agent     routes = self.list_routes(ip_version, scope=scope, table=table)
Feb 20 09:58:02 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:02.806 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes
Feb 20 09:58:02 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:02.806 262775 ERROR neutron.agent.dhcp.agent     return list_ip_routes(self._parent.namespace, ip_version, scope=scope,
Feb 20 09:58:02 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:02.806 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes
Feb 20 09:58:02 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:02.806 262775 ERROR neutron.agent.dhcp.agent     routes = privileged.list_ip_routes(namespace, ip_version, device=device,
Feb 20 09:58:02 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:02.806 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f
Feb 20 09:58:02 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:02.806 262775 ERROR neutron.agent.dhcp.agent     return self(f, *args, **kw)
Feb 20 09:58:02 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:02.806 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__
Feb 20 09:58:02 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:02.806 262775 ERROR neutron.agent.dhcp.agent     do = self.iter(retry_state=retry_state)
Feb 20 09:58:02 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:02.806 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter
Feb 20 09:58:02 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:02.806 262775 ERROR neutron.agent.dhcp.agent     return fut.result()
Feb 20 09:58:02 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:02.806 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result
Feb 20 09:58:02 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:02.806 262775 ERROR neutron.agent.dhcp.agent     return self.__get_result()
Feb 20 09:58:02 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:02.806 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result
Feb 20 09:58:02 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:02.806 262775 ERROR neutron.agent.dhcp.agent     raise self._exception
Feb 20 09:58:02 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:02.806 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__
Feb 20 09:58:02 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:02.806 262775 ERROR neutron.agent.dhcp.agent     result = fn(*args, **kwargs)
Feb 20 09:58:02 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:02.806 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap
Feb 20 09:58:02 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:02.806 262775 ERROR neutron.agent.dhcp.agent     return self.channel.remote_call(name, args, kwargs,
Feb 20 09:58:02 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:02.806 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call
Feb 20 09:58:02 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:02.806 262775 ERROR neutron.agent.dhcp.agent     raise exc_type(*result[2])
Feb 20 09:58:02 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:02.806 262775 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap5d4a7eda-44 not found in namespace qdhcp-0df15180-77b4-4a39-944d-6dd7686af62e.
Feb 20 09:58:02 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:02.806 262775 ERROR neutron.agent.dhcp.agent 
Feb 20 09:58:02 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:02.810 262775 INFO neutron.agent.dhcp.agent [None req-af20781d-07d3-4eea-9908-43f9e0ab5016 - - - - - -] Synchronizing state
Feb 20 09:58:02 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e190 e190: 6 total, 6 up, 6 in
Feb 20 09:58:02 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:02.957 262775 INFO neutron.agent.dhcp.agent [None req-91ae8f80-f9e6-4cec-898a-0d998bb2bebc - - - - - -] All active networks have been fetched through RPC.
Feb 20 09:58:02 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:02.958 262775 INFO neutron.agent.dhcp.agent [-] Starting network 0df15180-77b4-4a39-944d-6dd7686af62e dhcp configuration
Feb 20 09:58:02 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:02.959 262775 INFO neutron.agent.dhcp.agent [-] Finished network 0df15180-77b4-4a39-944d-6dd7686af62e dhcp configuration
Feb 20 09:58:02 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:02.960 262775 INFO neutron.agent.dhcp.agent [None req-91ae8f80-f9e6-4cec-898a-0d998bb2bebc - - - - - -] Synchronizing state complete
Feb 20 09:58:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:02.963 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:02 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ff9e0809-0706-432a-8bd4-0d7cee6634c8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 09:58:02 np0005625203.localdomain ceph-mon[296066]: pgmap v390: 177 pgs: 177 active+clean; 195 MiB data, 963 MiB used, 41 GiB / 42 GiB avail; 31 KiB/s rd, 46 KiB/s wr, 46 op/s
Feb 20 09:58:02 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ff9e0809-0706-432a-8bd4-0d7cee6634c8", "format": "json"}]: dispatch
Feb 20 09:58:02 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ef5178d4-d0bc-4d31-ba72-f530bb14424f", "format": "json"}]: dispatch
Feb 20 09:58:02 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ef5178d4-d0bc-4d31-ba72-f530bb14424f", "force": true, "format": "json"}]: dispatch
Feb 20 09:58:02 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/4091018490' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:58:02 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/4091018490' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:58:02 np0005625203.localdomain ceph-mon[296066]: osdmap e190: 6 total, 6 up, 6 in
Feb 20 09:58:03 np0005625203.localdomain dnsmasq[324314]: exiting on receipt of SIGTERM
Feb 20 09:58:03 np0005625203.localdomain podman[324371]: 2026-02-20 09:58:03.171011468 +0000 UTC m=+0.063676131 container kill 74d6cb6e4b08f23b6f08c08b1759058cb8f536d2b95d7a11540868bbcc018e87 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0df15180-77b4-4a39-944d-6dd7686af62e, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb 20 09:58:03 np0005625203.localdomain systemd[1]: libpod-74d6cb6e4b08f23b6f08c08b1759058cb8f536d2b95d7a11540868bbcc018e87.scope: Deactivated successfully.
Feb 20 09:58:03 np0005625203.localdomain podman[324385]: 2026-02-20 09:58:03.260549778 +0000 UTC m=+0.075929920 container died 74d6cb6e4b08f23b6f08c08b1759058cb8f536d2b95d7a11540868bbcc018e87 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0df15180-77b4-4a39-944d-6dd7686af62e, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:58:03 np0005625203.localdomain podman[324385]: 2026-02-20 09:58:03.298788942 +0000 UTC m=+0.114169024 container cleanup 74d6cb6e4b08f23b6f08c08b1759058cb8f536d2b95d7a11540868bbcc018e87 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0df15180-77b4-4a39-944d-6dd7686af62e, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:58:03 np0005625203.localdomain systemd[1]: libpod-conmon-74d6cb6e4b08f23b6f08c08b1759058cb8f536d2b95d7a11540868bbcc018e87.scope: Deactivated successfully.
Feb 20 09:58:03 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-d5c20301d4b47f6e097c21980bcef6ed278c582f1416b5a47a3abb216f8b9305-merged.mount: Deactivated successfully.
Feb 20 09:58:03 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-74d6cb6e4b08f23b6f08c08b1759058cb8f536d2b95d7a11540868bbcc018e87-userdata-shm.mount: Deactivated successfully.
Feb 20 09:58:03 np0005625203.localdomain podman[324392]: 2026-02-20 09:58:03.328834202 +0000 UTC m=+0.130019694 container remove 74d6cb6e4b08f23b6f08c08b1759058cb8f536d2b95d7a11540868bbcc018e87 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0df15180-77b4-4a39-944d-6dd7686af62e, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Feb 20 09:58:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:03.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:58:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:03.342 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 20 09:58:03 np0005625203.localdomain systemd[1]: run-netns-qdhcp\x2d0df15180\x2d77b4\x2d4a39\x2d944d\x2d6dd7686af62e.mount: Deactivated successfully.
Feb 20 09:58:03 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/1636974104' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:58:03 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/1636974104' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:58:04 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:04.342 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:58:04 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:04.666 262775 INFO neutron.agent.linux.ip_lib [None req-eba7a666-f618-4113-a73f-34bf9e1b987b - - - - - -] Device tap41a5e94f-fc cannot be used as it has no MAC address
Feb 20 09:58:04 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:04.698 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:04 np0005625203.localdomain kernel: device tap41a5e94f-fc entered promiscuous mode
Feb 20 09:58:04 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:58:04Z|00375|binding|INFO|Claiming lport 41a5e94f-fce6-442e-a6a0-c353adf641fb for this chassis.
Feb 20 09:58:04 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:58:04Z|00376|binding|INFO|41a5e94f-fce6-442e-a6a0-c353adf641fb: Claiming unknown
Feb 20 09:58:04 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:04.708 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:04 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581484.7102] manager: (tap41a5e94f-fc): new Generic device (/org/freedesktop/NetworkManager/Devices/72)
Feb 20 09:58:04 np0005625203.localdomain systemd-udevd[324422]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 09:58:04 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:58:04.719 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-cdaaf8f5-53cc-4b06-9de4-275e7e106c96', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cdaaf8f5-53cc-4b06-9de4-275e7e106c96', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '80723b5de8af4075aa84c53e89b4d020', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7a4dee2f-ae52-4324-9c7a-95c06067ab31, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=41a5e94f-fce6-442e-a6a0-c353adf641fb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:58:04 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:58:04.721 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 41a5e94f-fce6-442e-a6a0-c353adf641fb in datapath cdaaf8f5-53cc-4b06-9de4-275e7e106c96 bound to our chassis
Feb 20 09:58:04 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:58:04.722 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network cdaaf8f5-53cc-4b06-9de4-275e7e106c96 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:58:04 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:58:04.723 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[3ff71d30-6632-4506-86a6-0b5eacaa1082]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:58:04 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap41a5e94f-fc: No such device
Feb 20 09:58:04 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap41a5e94f-fc: No such device
Feb 20 09:58:04 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap41a5e94f-fc: No such device
Feb 20 09:58:04 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:58:04Z|00377|binding|INFO|Setting lport 41a5e94f-fce6-442e-a6a0-c353adf641fb ovn-installed in OVS
Feb 20 09:58:04 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:58:04Z|00378|binding|INFO|Setting lport 41a5e94f-fce6-442e-a6a0-c353adf641fb up in Southbound
Feb 20 09:58:04 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap41a5e94f-fc: No such device
Feb 20 09:58:04 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:04.748 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:04 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap41a5e94f-fc: No such device
Feb 20 09:58:04 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap41a5e94f-fc: No such device
Feb 20 09:58:04 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap41a5e94f-fc: No such device
Feb 20 09:58:04 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap41a5e94f-fc: No such device
Feb 20 09:58:04 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:04.793 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:04 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:04.829 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:04 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:04.875 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:05 np0005625203.localdomain ceph-mon[296066]: pgmap v392: 177 pgs: 177 active+clean; 195 MiB data, 963 MiB used, 41 GiB / 42 GiB avail; 16 KiB/s rd, 30 KiB/s wr, 22 op/s
Feb 20 09:58:05 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/3388708526' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:58:05 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 09:58:05 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 20 09:58:05 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3083830693' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:58:05 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 20 09:58:05 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3083830693' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:58:05 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:05.342 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:58:05 np0005625203.localdomain podman[324492]: 
Feb 20 09:58:05 np0005625203.localdomain podman[324492]: 2026-02-20 09:58:05.840442684 +0000 UTC m=+0.094061830 container create 60a826349662dbc779e9103e05726995c8ce3954bfe64b09fa8975bb67af3473 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cdaaf8f5-53cc-4b06-9de4-275e7e106c96, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:58:05 np0005625203.localdomain systemd[1]: Started libpod-conmon-60a826349662dbc779e9103e05726995c8ce3954bfe64b09fa8975bb67af3473.scope.
Feb 20 09:58:05 np0005625203.localdomain podman[324492]: 2026-02-20 09:58:05.793952496 +0000 UTC m=+0.047571672 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:58:05 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:58:05 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea05043f49570143f826fb70bc42eda83b8b3be12db7fd0f6353466d85ebee5c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:58:05 np0005625203.localdomain podman[324492]: 2026-02-20 09:58:05.91691573 +0000 UTC m=+0.170534876 container init 60a826349662dbc779e9103e05726995c8ce3954bfe64b09fa8975bb67af3473 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cdaaf8f5-53cc-4b06-9de4-275e7e106c96, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:58:05 np0005625203.localdomain podman[324492]: 2026-02-20 09:58:05.926444285 +0000 UTC m=+0.180063401 container start 60a826349662dbc779e9103e05726995c8ce3954bfe64b09fa8975bb67af3473 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cdaaf8f5-53cc-4b06-9de4-275e7e106c96, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Feb 20 09:58:05 np0005625203.localdomain dnsmasq[324510]: started, version 2.85 cachesize 150
Feb 20 09:58:05 np0005625203.localdomain dnsmasq[324510]: DNS service limited to local subnets
Feb 20 09:58:05 np0005625203.localdomain dnsmasq[324510]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:58:05 np0005625203.localdomain dnsmasq[324510]: warning: no upstream servers configured
Feb 20 09:58:05 np0005625203.localdomain dnsmasq-dhcp[324510]: DHCP, static leases only on 10.100.0.0, lease time 1d
Feb 20 09:58:05 np0005625203.localdomain dnsmasq[324510]: read /var/lib/neutron/dhcp/cdaaf8f5-53cc-4b06-9de4-275e7e106c96/addn_hosts - 0 addresses
Feb 20 09:58:05 np0005625203.localdomain dnsmasq-dhcp[324510]: read /var/lib/neutron/dhcp/cdaaf8f5-53cc-4b06-9de4-275e7e106c96/host
Feb 20 09:58:05 np0005625203.localdomain dnsmasq-dhcp[324510]: read /var/lib/neutron/dhcp/cdaaf8f5-53cc-4b06-9de4-275e7e106c96/opts
Feb 20 09:58:06 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/3083830693' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:58:06 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/3083830693' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:58:06 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/3015932775' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:58:06 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:06.132 262775 INFO neutron.agent.dhcp.agent [None req-249a8839-a247-4e63-824d-6b0deaf0fd8f - - - - - -] DHCP configuration for ports {'3f0135b6-e89f-4f5f-b11b-4560d3300ee6'} is completed
Feb 20 09:58:06 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:06.330 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:06 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:58:06.336 161112 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1e4d60e6-0be0-4143-b488-1b391fbc71ef, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 20 09:58:06 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:58:06.569 161112 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port b4ed381d-8b02-4b78-8dd9-ba86881439b2 with type ""
Feb 20 09:58:06 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:58:06Z|00379|binding|INFO|Removing iface tap41a5e94f-fc ovn-installed in OVS
Feb 20 09:58:06 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:58:06Z|00380|binding|INFO|Removing lport 41a5e94f-fce6-442e-a6a0-c353adf641fb ovn-installed in OVS
Feb 20 09:58:06 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:58:06.571 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-cdaaf8f5-53cc-4b06-9de4-275e7e106c96', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cdaaf8f5-53cc-4b06-9de4-275e7e106c96', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '80723b5de8af4075aa84c53e89b4d020', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625203.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7a4dee2f-ae52-4324-9c7a-95c06067ab31, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=41a5e94f-fce6-442e-a6a0-c353adf641fb) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:58:06 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:06.571 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:06 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:58:06.574 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 41a5e94f-fce6-442e-a6a0-c353adf641fb in datapath cdaaf8f5-53cc-4b06-9de4-275e7e106c96 unbound from our chassis
Feb 20 09:58:06 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:58:06.576 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cdaaf8f5-53cc-4b06-9de4-275e7e106c96, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:58:06 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:58:06.577 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[14473e5b-67d7-4cf2-8481-10e9f41b4881]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:58:06 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:06.580 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:06 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:06.585 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:06 np0005625203.localdomain kernel: device tap41a5e94f-fc left promiscuous mode
Feb 20 09:58:06 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:06.598 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:07 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ff9e0809-0706-432a-8bd4-0d7cee6634c8", "format": "json"}]: dispatch
Feb 20 09:58:07 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ff9e0809-0706-432a-8bd4-0d7cee6634c8", "force": true, "format": "json"}]: dispatch
Feb 20 09:58:07 np0005625203.localdomain ceph-mon[296066]: pgmap v393: 177 pgs: 177 active+clean; 195 MiB data, 963 MiB used, 41 GiB / 42 GiB avail; 38 KiB/s rd, 43 KiB/s wr, 55 op/s
Feb 20 09:58:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:58:07 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 09:58:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:58:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:58:07 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 09:58:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:58:07 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:07.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:58:07 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:07.343 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 20 09:58:07 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:07.344 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 20 09:58:07 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:07.359 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 20 09:58:07 np0005625203.localdomain dnsmasq[324510]: read /var/lib/neutron/dhcp/cdaaf8f5-53cc-4b06-9de4-275e7e106c96/addn_hosts - 0 addresses
Feb 20 09:58:07 np0005625203.localdomain podman[324530]: 2026-02-20 09:58:07.421262281 +0000 UTC m=+0.082639127 container kill 60a826349662dbc779e9103e05726995c8ce3954bfe64b09fa8975bb67af3473 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cdaaf8f5-53cc-4b06-9de4-275e7e106c96, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127)
Feb 20 09:58:07 np0005625203.localdomain dnsmasq-dhcp[324510]: read /var/lib/neutron/dhcp/cdaaf8f5-53cc-4b06-9de4-275e7e106c96/host
Feb 20 09:58:07 np0005625203.localdomain dnsmasq-dhcp[324510]: read /var/lib/neutron/dhcp/cdaaf8f5-53cc-4b06-9de4-275e7e106c96/opts
Feb 20 09:58:07 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:07.452 262775 ERROR neutron.agent.dhcp.agent [None req-2b8233b8-8e88-4e86-bbd1-8cc5694b677b - - - - - -] Unable to reload_allocations dhcp for cdaaf8f5-53cc-4b06-9de4-275e7e106c96.: neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap41a5e94f-fc not found in namespace qdhcp-cdaaf8f5-53cc-4b06-9de4-275e7e106c96.
Feb 20 09:58:07 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:07.452 262775 ERROR neutron.agent.dhcp.agent Traceback (most recent call last):
Feb 20 09:58:07 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:07.452 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/dhcp/agent.py", line 264, in _call_driver
Feb 20 09:58:07 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:07.452 262775 ERROR neutron.agent.dhcp.agent     rv = getattr(driver, action)(**action_kwargs)
Feb 20 09:58:07 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:07.452 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 673, in reload_allocations
Feb 20 09:58:07 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:07.452 262775 ERROR neutron.agent.dhcp.agent     self.device_manager.update(self.network, self.interface_name)
Feb 20 09:58:07 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:07.452 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1899, in update
Feb 20 09:58:07 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:07.452 262775 ERROR neutron.agent.dhcp.agent     self._set_default_route(network, device_name)
Feb 20 09:58:07 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:07.452 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1610, in _set_default_route
Feb 20 09:58:07 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:07.452 262775 ERROR neutron.agent.dhcp.agent     self._set_default_route_ip_version(network, device_name,
Feb 20 09:58:07 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:07.452 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/dhcp.py", line 1539, in _set_default_route_ip_version
Feb 20 09:58:07 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:07.452 262775 ERROR neutron.agent.dhcp.agent     gateway = device.route.get_gateway(ip_version=ip_version)
Feb 20 09:58:07 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:07.452 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 671, in get_gateway
Feb 20 09:58:07 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:07.452 262775 ERROR neutron.agent.dhcp.agent     routes = self.list_routes(ip_version, scope=scope, table=table)
Feb 20 09:58:07 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:07.452 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 656, in list_routes
Feb 20 09:58:07 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:07.452 262775 ERROR neutron.agent.dhcp.agent     return list_ip_routes(self._parent.namespace, ip_version, scope=scope,
Feb 20 09:58:07 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:07.452 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/neutron/agent/linux/ip_lib.py", line 1611, in list_ip_routes
Feb 20 09:58:07 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:07.452 262775 ERROR neutron.agent.dhcp.agent     routes = privileged.list_ip_routes(namespace, ip_version, device=device,
Feb 20 09:58:07 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:07.452 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 333, in wrapped_f
Feb 20 09:58:07 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:07.452 262775 ERROR neutron.agent.dhcp.agent     return self(f, *args, **kw)
Feb 20 09:58:07 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:07.452 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 423, in __call__
Feb 20 09:58:07 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:07.452 262775 ERROR neutron.agent.dhcp.agent     do = self.iter(retry_state=retry_state)
Feb 20 09:58:07 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:07.452 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 360, in iter
Feb 20 09:58:07 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:07.452 262775 ERROR neutron.agent.dhcp.agent     return fut.result()
Feb 20 09:58:07 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:07.452 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result
Feb 20 09:58:07 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:07.452 262775 ERROR neutron.agent.dhcp.agent     return self.__get_result()
Feb 20 09:58:07 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:07.452 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result
Feb 20 09:58:07 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:07.452 262775 ERROR neutron.agent.dhcp.agent     raise self._exception
Feb 20 09:58:07 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:07.452 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/tenacity/__init__.py", line 426, in __call__
Feb 20 09:58:07 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:07.452 262775 ERROR neutron.agent.dhcp.agent     result = fn(*args, **kwargs)
Feb 20 09:58:07 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:07.452 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap
Feb 20 09:58:07 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:07.452 262775 ERROR neutron.agent.dhcp.agent     return self.channel.remote_call(name, args, kwargs,
Feb 20 09:58:07 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:07.452 262775 ERROR neutron.agent.dhcp.agent   File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call
Feb 20 09:58:07 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:07.452 262775 ERROR neutron.agent.dhcp.agent     raise exc_type(*result[2])
Feb 20 09:58:07 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:07.452 262775 ERROR neutron.agent.dhcp.agent neutron.privileged.agent.linux.ip_lib.NetworkInterfaceNotFound: Network interface tap41a5e94f-fc not found in namespace qdhcp-cdaaf8f5-53cc-4b06-9de4-275e7e106c96.
Feb 20 09:58:07 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:07.452 262775 ERROR neutron.agent.dhcp.agent 
Feb 20 09:58:07 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:07.455 262775 INFO neutron.agent.dhcp.agent [None req-91ae8f80-f9e6-4cec-898a-0d998bb2bebc - - - - - -] Synchronizing state
Feb 20 09:58:07 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:07.608 262775 INFO neutron.agent.dhcp.agent [None req-7f096cc3-89af-4168-9692-c629a8d30580 - - - - - -] All active networks have been fetched through RPC.
Feb 20 09:58:07 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:07.612 262775 INFO neutron.agent.dhcp.agent [-] Starting network cdaaf8f5-53cc-4b06-9de4-275e7e106c96 dhcp configuration
Feb 20 09:58:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:58:07.672 161112 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:58:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:58:07.673 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:58:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:58:07.674 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:58:07 np0005625203.localdomain dnsmasq[324510]: exiting on receipt of SIGTERM
Feb 20 09:58:07 np0005625203.localdomain podman[324559]: 2026-02-20 09:58:07.81165635 +0000 UTC m=+0.064093174 container kill 60a826349662dbc779e9103e05726995c8ce3954bfe64b09fa8975bb67af3473 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cdaaf8f5-53cc-4b06-9de4-275e7e106c96, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb 20 09:58:07 np0005625203.localdomain systemd[1]: libpod-60a826349662dbc779e9103e05726995c8ce3954bfe64b09fa8975bb67af3473.scope: Deactivated successfully.
Feb 20 09:58:07 np0005625203.localdomain sshd[324587]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:58:07 np0005625203.localdomain podman[324573]: 2026-02-20 09:58:07.893013926 +0000 UTC m=+0.060767861 container died 60a826349662dbc779e9103e05726995c8ce3954bfe64b09fa8975bb67af3473 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cdaaf8f5-53cc-4b06-9de4-275e7e106c96, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 20 09:58:07 np0005625203.localdomain systemd[1]: tmp-crun.qyeL2X.mount: Deactivated successfully.
Feb 20 09:58:07 np0005625203.localdomain podman[324573]: 2026-02-20 09:58:07.929785573 +0000 UTC m=+0.097539488 container cleanup 60a826349662dbc779e9103e05726995c8ce3954bfe64b09fa8975bb67af3473 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cdaaf8f5-53cc-4b06-9de4-275e7e106c96, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Feb 20 09:58:07 np0005625203.localdomain systemd[1]: libpod-conmon-60a826349662dbc779e9103e05726995c8ce3954bfe64b09fa8975bb67af3473.scope: Deactivated successfully.
Feb 20 09:58:07 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 20 09:58:07 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1749782927' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:58:07 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 20 09:58:07 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1749782927' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:58:07 np0005625203.localdomain podman[324574]: 2026-02-20 09:58:07.9629792 +0000 UTC m=+0.127531106 container remove 60a826349662dbc779e9103e05726995c8ce3954bfe64b09fa8975bb67af3473 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-cdaaf8f5-53cc-4b06-9de4-275e7e106c96, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:58:08 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:08.063 262775 INFO neutron.agent.dhcp.agent [None req-adba5e63-5692-4dd1-9bd2-f0b8ddb4816b - - - - - -] Finished network cdaaf8f5-53cc-4b06-9de4-275e7e106c96 dhcp configuration
Feb 20 09:58:08 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:08.065 262775 INFO neutron.agent.dhcp.agent [None req-7f096cc3-89af-4168-9692-c629a8d30580 - - - - - -] Synchronizing state complete
Feb 20 09:58:08 np0005625203.localdomain sshd[324587]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:58:08 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:08.258 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:08 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:08.355 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:58:08 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 09:58:08 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:58:08 np0005625203.localdomain systemd[1]: tmp-crun.QyioCA.mount: Deactivated successfully.
Feb 20 09:58:08 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-ea05043f49570143f826fb70bc42eda83b8b3be12db7fd0f6353466d85ebee5c-merged.mount: Deactivated successfully.
Feb 20 09:58:08 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-60a826349662dbc779e9103e05726995c8ce3954bfe64b09fa8975bb67af3473-userdata-shm.mount: Deactivated successfully.
Feb 20 09:58:08 np0005625203.localdomain systemd[1]: run-netns-qdhcp\x2dcdaaf8f5\x2d53cc\x2d4b06\x2d9de4\x2d275e7e106c96.mount: Deactivated successfully.
Feb 20 09:58:08 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:58:08 np0005625203.localdomain ceph-mon[296066]: pgmap v394: 177 pgs: 177 active+clean; 195 MiB data, 963 MiB used, 41 GiB / 42 GiB avail; 38 KiB/s rd, 43 KiB/s wr, 55 op/s
Feb 20 09:58:08 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/1749782927' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:58:08 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/1749782927' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:58:08 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:58:09 np0005625203.localdomain podman[324602]: 2026-02-20 09:58:09.024523792 +0000 UTC m=+0.093118052 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller)
Feb 20 09:58:09 np0005625203.localdomain podman[324602]: 2026-02-20 09:58:09.070089581 +0000 UTC m=+0.138683831 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:58:09 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:58:09 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:09.903 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:09 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "134e598c-37ed-480d-a639-35f631513b30", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 09:58:09 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "134e598c-37ed-480d-a639-35f631513b30", "format": "json"}]: dispatch
Feb 20 09:58:10 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e190 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 09:58:10 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e191 e191: 6 total, 6 up, 6 in
Feb 20 09:58:10 np0005625203.localdomain ceph-mon[296066]: pgmap v395: 177 pgs: 177 active+clean; 195 MiB data, 963 MiB used, 41 GiB / 42 GiB avail; 38 KiB/s rd, 42 KiB/s wr, 55 op/s
Feb 20 09:58:11 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:11.362 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:11 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:58:11 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:58:11 np0005625203.localdomain podman[324628]: 2026-02-20 09:58:11.772048484 +0000 UTC m=+0.085221477 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2026-02-05T04:57:10Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.buildah.version=1.33.7, container_name=openstack_network_exporter, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, name=ubi9/ubi-minimal, maintainer=Red Hat, Inc., architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=9.7, managed_by=edpm_ansible, release=1770267347, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Feb 20 09:58:11 np0005625203.localdomain podman[324628]: 2026-02-20 09:58:11.817387408 +0000 UTC m=+0.130560361 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, vcs-type=git, maintainer=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, distribution-scope=public, org.opencontainers.image.created=2026-02-05T04:57:10Z, managed_by=edpm_ansible, name=ubi9/ubi-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, 
build-date=2026-02-05T04:57:10Z, io.openshift.expose-services=, release=1770267347, io.buildah.version=1.33.7, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.7, architecture=x86_64, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc.)
Feb 20 09:58:11 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:58:11 np0005625203.localdomain podman[324627]: 2026-02-20 09:58:11.838109698 +0000 UTC m=+0.152992234 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, 
config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0)
Feb 20 09:58:11 np0005625203.localdomain podman[324627]: 2026-02-20 09:58:11.878473727 +0000 UTC m=+0.193356293 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, 
org.label-schema.license=GPLv2)
Feb 20 09:58:11 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:58:12 np0005625203.localdomain ceph-mon[296066]: osdmap e191: 6 total, 6 up, 6 in
Feb 20 09:58:12 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e192 e192: 6 total, 6 up, 6 in
Feb 20 09:58:12 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:12.337 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:58:13 np0005625203.localdomain ceph-mon[296066]: pgmap v397: 177 pgs: 177 active+clean; 195 MiB data, 964 MiB used, 41 GiB / 42 GiB avail; 44 KiB/s rd, 50 KiB/s wr, 67 op/s
Feb 20 09:58:13 np0005625203.localdomain ceph-mon[296066]: osdmap e192: 6 total, 6 up, 6 in
Feb 20 09:58:13 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "134e598c-37ed-480d-a639-35f631513b30", "format": "json"}]: dispatch
Feb 20 09:58:13 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "134e598c-37ed-480d-a639-35f631513b30", "force": true, "format": "json"}]: dispatch
Feb 20 09:58:14 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e193 e193: 6 total, 6 up, 6 in
Feb 20 09:58:14 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:14.936 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:15 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 09:58:15 np0005625203.localdomain ceph-mon[296066]: pgmap v399: 177 pgs: 177 active+clean; 195 MiB data, 964 MiB used, 41 GiB / 42 GiB avail; 17 KiB/s rd, 32 KiB/s wr, 27 op/s
Feb 20 09:58:15 np0005625203.localdomain ceph-mon[296066]: osdmap e193: 6 total, 6 up, 6 in
Feb 20 09:58:15 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e194 e194: 6 total, 6 up, 6 in
Feb 20 09:58:15 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 09:58:15 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:58:15 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 20 09:58:15 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4248636922' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:58:15 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 20 09:58:15 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/4248636922' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:58:16 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:16.041 262775 INFO neutron.agent.linux.ip_lib [None req-ad513c5d-83d9-4915-aeac-19396eefc7cb - - - - - -] Device tap3e1d84f6-87 cannot be used as it has no MAC address
Feb 20 09:58:16 np0005625203.localdomain ceph-mon[296066]: osdmap e194: 6 total, 6 up, 6 in
Feb 20 09:58:16 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f29b2b87-a99b-49d8-b340-b09eb6239fb8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 09:58:16 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f29b2b87-a99b-49d8-b340-b09eb6239fb8", "format": "json"}]: dispatch
Feb 20 09:58:16 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:58:16 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/4248636922' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:58:16 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/4248636922' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:58:16 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:16.104 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:16 np0005625203.localdomain kernel: device tap3e1d84f6-87 entered promiscuous mode
Feb 20 09:58:16 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:58:16Z|00381|binding|INFO|Claiming lport 3e1d84f6-87e8-48b3-a54d-bdddbd5e73af for this chassis.
Feb 20 09:58:16 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:16.113 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:16 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581496.1165] manager: (tap3e1d84f6-87): new Generic device (/org/freedesktop/NetworkManager/Devices/73)
Feb 20 09:58:16 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:58:16Z|00382|binding|INFO|3e1d84f6-87e8-48b3-a54d-bdddbd5e73af: Claiming unknown
Feb 20 09:58:16 np0005625203.localdomain systemd-udevd[324676]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 09:58:16 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:58:16.126 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-4ac30705-2fd2-49be-a1f7-08c621308b32', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4ac30705-2fd2-49be-a1f7-08c621308b32', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '80723b5de8af4075aa84c53e89b4d020', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dcded48e-f603-4a0b-b8d9-4fd0f1812023, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=3e1d84f6-87e8-48b3-a54d-bdddbd5e73af) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:58:16 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:58:16.128 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 3e1d84f6-87e8-48b3-a54d-bdddbd5e73af in datapath 4ac30705-2fd2-49be-a1f7-08c621308b32 bound to our chassis
Feb 20 09:58:16 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:58:16.131 161112 DEBUG neutron.agent.ovn.metadata.agent [-] Port 94f01330-7f58-4765-9b71-6b5c6abc133f IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Feb 20 09:58:16 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:58:16.132 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4ac30705-2fd2-49be-a1f7-08c621308b32, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:58:16 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:58:16.132 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[85a2fe0d-d8f3-436d-bc19-2fe383457b70]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:58:16 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:58:16Z|00383|binding|INFO|Setting lport 3e1d84f6-87e8-48b3-a54d-bdddbd5e73af ovn-installed in OVS
Feb 20 09:58:16 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:58:16Z|00384|binding|INFO|Setting lport 3e1d84f6-87e8-48b3-a54d-bdddbd5e73af up in Southbound
Feb 20 09:58:16 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:16.153 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:16 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap3e1d84f6-87: No such device
Feb 20 09:58:16 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap3e1d84f6-87: No such device
Feb 20 09:58:16 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap3e1d84f6-87: No such device
Feb 20 09:58:16 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap3e1d84f6-87: No such device
Feb 20 09:58:16 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap3e1d84f6-87: No such device
Feb 20 09:58:16 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap3e1d84f6-87: No such device
Feb 20 09:58:16 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap3e1d84f6-87: No such device
Feb 20 09:58:16 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap3e1d84f6-87: No such device
Feb 20 09:58:16 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:16.197 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:16 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:16.230 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:16 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:58:16Z|00385|binding|INFO|Removing iface tap3e1d84f6-87 ovn-installed in OVS
Feb 20 09:58:16 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:58:16Z|00386|binding|INFO|Removing lport 3e1d84f6-87e8-48b3-a54d-bdddbd5e73af ovn-installed in OVS
Feb 20 09:58:16 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:58:16.283 161112 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 94f01330-7f58-4765-9b71-6b5c6abc133f with type ""
Feb 20 09:58:16 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:58:16.284 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-4ac30705-2fd2-49be-a1f7-08c621308b32', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4ac30705-2fd2-49be-a1f7-08c621308b32', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '80723b5de8af4075aa84c53e89b4d020', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dcded48e-f603-4a0b-b8d9-4fd0f1812023, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=3e1d84f6-87e8-48b3-a54d-bdddbd5e73af) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:58:16 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:16.284 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:16 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:58:16.286 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 3e1d84f6-87e8-48b3-a54d-bdddbd5e73af in datapath 4ac30705-2fd2-49be-a1f7-08c621308b32 unbound from our chassis
Feb 20 09:58:16 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:58:16.289 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4ac30705-2fd2-49be-a1f7-08c621308b32, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:58:16 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:58:16.291 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[c8ba96ec-352a-4e0a-b54d-4bcbb789c616]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:58:16 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:16.292 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:16 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:16.364 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:17 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:17.096 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:17 np0005625203.localdomain ceph-mon[296066]: pgmap v402: 177 pgs: 177 active+clean; 196 MiB data, 968 MiB used, 41 GiB / 42 GiB avail; 63 KiB/s rd, 29 KiB/s wr, 88 op/s
Feb 20 09:58:17 np0005625203.localdomain podman[324745]: 
Feb 20 09:58:17 np0005625203.localdomain podman[324745]: 2026-02-20 09:58:17.179920544 +0000 UTC m=+0.096008171 container create f767e6a6dade044dc852203e5d2657e842068415544bd196ccc68ace769fc016 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4ac30705-2fd2-49be-a1f7-08c621308b32, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:58:17 np0005625203.localdomain systemd[1]: Started libpod-conmon-f767e6a6dade044dc852203e5d2657e842068415544bd196ccc68ace769fc016.scope.
Feb 20 09:58:17 np0005625203.localdomain podman[324745]: 2026-02-20 09:58:17.136728737 +0000 UTC m=+0.052816334 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:58:17 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:58:17 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/569fff0756ace2d344b219eca8cb271b4d400097e93999774c30b816afa539c5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:58:17 np0005625203.localdomain podman[324745]: 2026-02-20 09:58:17.25834667 +0000 UTC m=+0.174434227 container init f767e6a6dade044dc852203e5d2657e842068415544bd196ccc68ace769fc016 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4ac30705-2fd2-49be-a1f7-08c621308b32, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:58:17 np0005625203.localdomain podman[324745]: 2026-02-20 09:58:17.271082244 +0000 UTC m=+0.187169811 container start f767e6a6dade044dc852203e5d2657e842068415544bd196ccc68ace769fc016 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4ac30705-2fd2-49be-a1f7-08c621308b32, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:58:17 np0005625203.localdomain dnsmasq[324763]: started, version 2.85 cachesize 150
Feb 20 09:58:17 np0005625203.localdomain dnsmasq[324763]: DNS service limited to local subnets
Feb 20 09:58:17 np0005625203.localdomain dnsmasq[324763]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:58:17 np0005625203.localdomain dnsmasq[324763]: warning: no upstream servers configured
Feb 20 09:58:17 np0005625203.localdomain dnsmasq-dhcp[324763]: DHCP, static leases only on 10.100.0.0, lease time 1d
Feb 20 09:58:17 np0005625203.localdomain dnsmasq[324763]: read /var/lib/neutron/dhcp/4ac30705-2fd2-49be-a1f7-08c621308b32/addn_hosts - 0 addresses
Feb 20 09:58:17 np0005625203.localdomain dnsmasq-dhcp[324763]: read /var/lib/neutron/dhcp/4ac30705-2fd2-49be-a1f7-08c621308b32/host
Feb 20 09:58:17 np0005625203.localdomain dnsmasq-dhcp[324763]: read /var/lib/neutron/dhcp/4ac30705-2fd2-49be-a1f7-08c621308b32/opts
Feb 20 09:58:17 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:17.409 262775 INFO neutron.agent.dhcp.agent [None req-5751df7d-4f67-4db0-a207-27fc37bdc3d6 - - - - - -] DHCP configuration for ports {'5dadfd46-3ce6-46cc-8b2f-61a34ae82728'} is completed
Feb 20 09:58:17 np0005625203.localdomain dnsmasq[324763]: exiting on receipt of SIGTERM
Feb 20 09:58:17 np0005625203.localdomain podman[324780]: 2026-02-20 09:58:17.516543358 +0000 UTC m=+0.064056633 container kill f767e6a6dade044dc852203e5d2657e842068415544bd196ccc68ace769fc016 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4ac30705-2fd2-49be-a1f7-08c621308b32, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 20 09:58:17 np0005625203.localdomain systemd[1]: libpod-f767e6a6dade044dc852203e5d2657e842068415544bd196ccc68ace769fc016.scope: Deactivated successfully.
Feb 20 09:58:17 np0005625203.localdomain podman[324793]: 2026-02-20 09:58:17.572044945 +0000 UTC m=+0.044843278 container died f767e6a6dade044dc852203e5d2657e842068415544bd196ccc68ace769fc016 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4ac30705-2fd2-49be-a1f7-08c621308b32, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 20 09:58:17 np0005625203.localdomain podman[324793]: 2026-02-20 09:58:17.660367248 +0000 UTC m=+0.133165541 container cleanup f767e6a6dade044dc852203e5d2657e842068415544bd196ccc68ace769fc016 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4ac30705-2fd2-49be-a1f7-08c621308b32, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 20 09:58:17 np0005625203.localdomain systemd[1]: libpod-conmon-f767e6a6dade044dc852203e5d2657e842068415544bd196ccc68ace769fc016.scope: Deactivated successfully.
Feb 20 09:58:17 np0005625203.localdomain podman[324800]: 2026-02-20 09:58:17.686139856 +0000 UTC m=+0.146303928 container remove f767e6a6dade044dc852203e5d2657e842068415544bd196ccc68ace769fc016 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-4ac30705-2fd2-49be-a1f7-08c621308b32, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:58:17 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:17.700 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:17 np0005625203.localdomain kernel: device tap3e1d84f6-87 left promiscuous mode
Feb 20 09:58:17 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:17.720 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:17 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:17.736 262775 INFO neutron.agent.dhcp.agent [None req-50e2da10-13d4-4137-95f7-5a44d4e2d5c0 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:58:17 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:17.736 262775 INFO neutron.agent.dhcp.agent [None req-50e2da10-13d4-4137-95f7-5a44d4e2d5c0 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:58:17 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e195 e195: 6 total, 6 up, 6 in
Feb 20 09:58:18 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-569fff0756ace2d344b219eca8cb271b4d400097e93999774c30b816afa539c5-merged.mount: Deactivated successfully.
Feb 20 09:58:18 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f767e6a6dade044dc852203e5d2657e842068415544bd196ccc68ace769fc016-userdata-shm.mount: Deactivated successfully.
Feb 20 09:58:18 np0005625203.localdomain systemd[1]: run-netns-qdhcp\x2d4ac30705\x2d2fd2\x2d49be\x2da1f7\x2d08c621308b32.mount: Deactivated successfully.
Feb 20 09:58:18 np0005625203.localdomain ceph-mon[296066]: pgmap v403: 177 pgs: 177 active+clean; 196 MiB data, 968 MiB used, 41 GiB / 42 GiB avail; 49 KiB/s rd, 22 KiB/s wr, 67 op/s
Feb 20 09:58:18 np0005625203.localdomain ceph-mon[296066]: osdmap e195: 6 total, 6 up, 6 in
Feb 20 09:58:18 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/11949244' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:58:18 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/11949244' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:58:19 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 09:58:19 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:58:19 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:19.938 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:19 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 09:58:19 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "format": "json"}]: dispatch
Feb 20 09:58:19 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:58:20 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e195 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 09:58:20 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f29b2b87-a99b-49d8-b340-b09eb6239fb8", "format": "json"}]: dispatch
Feb 20 09:58:20 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f29b2b87-a99b-49d8-b340-b09eb6239fb8", "force": true, "format": "json"}]: dispatch
Feb 20 09:58:20 np0005625203.localdomain ceph-mon[296066]: pgmap v405: 177 pgs: 177 active+clean; 196 MiB data, 990 MiB used, 41 GiB / 42 GiB avail; 65 KiB/s rd, 34 KiB/s wr, 89 op/s
Feb 20 09:58:21 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:21.400 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:22 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 20 09:58:22 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1485614407' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:58:22 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 20 09:58:22 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1485614407' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:58:22 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e196 e196: 6 total, 6 up, 6 in
Feb 20 09:58:22 np0005625203.localdomain ceph-mon[296066]: pgmap v406: 177 pgs: 177 active+clean; 196 MiB data, 991 MiB used, 41 GiB / 42 GiB avail; 96 KiB/s rd, 54 KiB/s wr, 134 op/s
Feb 20 09:58:22 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 20 09:58:22 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch
Feb 20 09:58:22 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished
Feb 20 09:58:22 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/1485614407' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:58:22 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/1485614407' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:58:22 np0005625203.localdomain ceph-mon[296066]: osdmap e196: 6 total, 6 up, 6 in
Feb 20 09:58:23 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 09:58:23 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:58:23 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:58:23 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:58:23 np0005625203.localdomain podman[324824]: 2026-02-20 09:58:23.774810445 +0000 UTC m=+0.087200449 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 09:58:23 np0005625203.localdomain podman[324824]: 2026-02-20 09:58:23.813574284 +0000 UTC m=+0.125964328 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 20 09:58:23 np0005625203.localdomain podman[324825]: 2026-02-20 09:58:23.829360113 +0000 UTC m=+0.141696795 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 20 09:58:23 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 09:58:23 np0005625203.localdomain podman[324825]: 2026-02-20 09:58:23.868460272 +0000 UTC m=+0.180796924 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 20 09:58:23 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:58:24 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice", "tenant_id": "8e04bc360fa14db4a793bc5de7a0a299", "access_level": "rw", "format": "json"}]: dispatch
Feb 20 09:58:24 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:58:24 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:24.975 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:25 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 09:58:25 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ec39e814-8000-487e-8406-15e7edd35489", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 09:58:25 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ec39e814-8000-487e-8406-15e7edd35489", "format": "json"}]: dispatch
Feb 20 09:58:25 np0005625203.localdomain ceph-mon[296066]: pgmap v408: 177 pgs: 177 active+clean; 196 MiB data, 991 MiB used, 41 GiB / 42 GiB avail; 54 KiB/s rd, 35 KiB/s wr, 75 op/s
Feb 20 09:58:25 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb],prefix=session evict} (starting...)
Feb 20 09:58:26 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e197 e197: 6 total, 6 up, 6 in
Feb 20 09:58:26 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/3466105401' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:58:26 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/3466105401' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:58:26 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 20 09:58:26 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Feb 20 09:58:26 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Feb 20 09:58:26 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:26.425 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:27 np0005625203.localdomain ceph-mon[296066]: pgmap v409: 177 pgs: 177 active+clean; 196 MiB data, 992 MiB used, 41 GiB / 42 GiB avail; 91 KiB/s rd, 57 KiB/s wr, 130 op/s
Feb 20 09:58:27 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice", "format": "json"}]: dispatch
Feb 20 09:58:27 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice", "format": "json"}]: dispatch
Feb 20 09:58:27 np0005625203.localdomain ceph-mon[296066]: osdmap e197: 6 total, 6 up, 6 in
Feb 20 09:58:27 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/2603382454' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:58:27 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/2603382454' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:58:28 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/3400407797' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:58:28 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/3400407797' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:58:28 np0005625203.localdomain podman[240359]: time="2026-02-20T09:58:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:58:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:58:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 155894 "" "Go-http-client/1.1"
Feb 20 09:58:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:58:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18306 "" "Go-http-client/1.1"
Feb 20 09:58:29 np0005625203.localdomain ceph-mon[296066]: pgmap v411: 177 pgs: 177 active+clean; 196 MiB data, 992 MiB used, 41 GiB / 42 GiB avail; 79 KiB/s rd, 48 KiB/s wr, 114 op/s
Feb 20 09:58:29 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ec39e814-8000-487e-8406-15e7edd35489", "format": "json"}]: dispatch
Feb 20 09:58:29 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ec39e814-8000-487e-8406-15e7edd35489", "force": true, "format": "json"}]: dispatch
Feb 20 09:58:29 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/1678028411' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:58:29 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/1678028411' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:58:29 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 20 09:58:30 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:30.017 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:30 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 09:58:30 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice", "tenant_id": "8e04bc360fa14db4a793bc5de7a0a299", "access_level": "r", "format": "json"}]: dispatch
Feb 20 09:58:30 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch
Feb 20 09:58:30 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished
Feb 20 09:58:30 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:30.792 262775 INFO neutron.agent.linux.ip_lib [None req-e0846c17-1c7f-4e97-91af-24ab3d7c8bb5 - - - - - -] Device tap1346e854-9f cannot be used as it has no MAC address
Feb 20 09:58:30 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:30.819 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:30 np0005625203.localdomain kernel: device tap1346e854-9f entered promiscuous mode
Feb 20 09:58:30 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581510.8262] manager: (tap1346e854-9f): new Generic device (/org/freedesktop/NetworkManager/Devices/74)
Feb 20 09:58:30 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:30.825 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:30 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:58:30Z|00387|binding|INFO|Claiming lport 1346e854-9f79-4f81-bb67-e77bc80b6d4d for this chassis.
Feb 20 09:58:30 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:58:30Z|00388|binding|INFO|1346e854-9f79-4f81-bb67-e77bc80b6d4d: Claiming unknown
Feb 20 09:58:30 np0005625203.localdomain systemd-udevd[324881]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 09:58:30 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:58:30.838 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-c427c754-98c1-4c1f-88ef-98f49fcb980c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c427c754-98c1-4c1f-88ef-98f49fcb980c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '68573218a6b141beb49fbacc5b306c7d', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=49ea52af-09a1-4f6d-be63-7f5675bd0bb1, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=1346e854-9f79-4f81-bb67-e77bc80b6d4d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:58:30 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:58:30.840 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 1346e854-9f79-4f81-bb67-e77bc80b6d4d in datapath c427c754-98c1-4c1f-88ef-98f49fcb980c bound to our chassis
Feb 20 09:58:30 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:58:30.843 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c427c754-98c1-4c1f-88ef-98f49fcb980c or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:58:30 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:58:30.844 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[fb59c349-d1ec-485f-ae4f-0d0bcc8e9cfe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:58:30 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap1346e854-9f: No such device
Feb 20 09:58:30 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap1346e854-9f: No such device
Feb 20 09:58:30 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap1346e854-9f: No such device
Feb 20 09:58:30 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:58:30Z|00389|binding|INFO|Setting lport 1346e854-9f79-4f81-bb67-e77bc80b6d4d ovn-installed in OVS
Feb 20 09:58:30 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:58:30Z|00390|binding|INFO|Setting lport 1346e854-9f79-4f81-bb67-e77bc80b6d4d up in Southbound
Feb 20 09:58:30 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:30.867 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:30 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap1346e854-9f: No such device
Feb 20 09:58:30 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap1346e854-9f: No such device
Feb 20 09:58:30 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap1346e854-9f: No such device
Feb 20 09:58:30 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap1346e854-9f: No such device
Feb 20 09:58:30 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap1346e854-9f: No such device
Feb 20 09:58:30 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:30.910 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:30 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:30.942 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:31 np0005625203.localdomain ceph-mon[296066]: pgmap v412: 177 pgs: 177 active+clean; 196 MiB data, 992 MiB used, 41 GiB / 42 GiB avail; 38 KiB/s rd, 23 KiB/s wr, 58 op/s
Feb 20 09:58:31 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/1881893775' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:58:31 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/1881893775' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:58:31 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:31.458 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:31 np0005625203.localdomain podman[324952]: 
Feb 20 09:58:31 np0005625203.localdomain podman[324952]: 2026-02-20 09:58:31.892336643 +0000 UTC m=+0.085634511 container create 8c483a247a07db0bad0cc0af1197d3f03f01c6b1c86afc9892900acf34340540 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c427c754-98c1-4c1f-88ef-98f49fcb980c, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 20 09:58:31 np0005625203.localdomain systemd[1]: Started libpod-conmon-8c483a247a07db0bad0cc0af1197d3f03f01c6b1c86afc9892900acf34340540.scope.
Feb 20 09:58:31 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:58:31 np0005625203.localdomain podman[324952]: 2026-02-20 09:58:31.850241521 +0000 UTC m=+0.043539409 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:58:31 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/133d1219889b399b3156b0977916de7f7308b95f44eeda5abb50138cc5c9af41/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:58:31 np0005625203.localdomain podman[324952]: 2026-02-20 09:58:31.962596856 +0000 UTC m=+0.155894714 container init 8c483a247a07db0bad0cc0af1197d3f03f01c6b1c86afc9892900acf34340540 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c427c754-98c1-4c1f-88ef-98f49fcb980c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:58:31 np0005625203.localdomain podman[324952]: 2026-02-20 09:58:31.973571786 +0000 UTC m=+0.166869644 container start 8c483a247a07db0bad0cc0af1197d3f03f01c6b1c86afc9892900acf34340540 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c427c754-98c1-4c1f-88ef-98f49fcb980c, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Feb 20 09:58:31 np0005625203.localdomain dnsmasq[324970]: started, version 2.85 cachesize 150
Feb 20 09:58:31 np0005625203.localdomain dnsmasq[324970]: DNS service limited to local subnets
Feb 20 09:58:31 np0005625203.localdomain dnsmasq[324970]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:58:31 np0005625203.localdomain dnsmasq[324970]: warning: no upstream servers configured
Feb 20 09:58:31 np0005625203.localdomain dnsmasq-dhcp[324970]: DHCP, static leases only on 10.100.0.0, lease time 1d
Feb 20 09:58:31 np0005625203.localdomain dnsmasq[324970]: read /var/lib/neutron/dhcp/c427c754-98c1-4c1f-88ef-98f49fcb980c/addn_hosts - 0 addresses
Feb 20 09:58:31 np0005625203.localdomain dnsmasq-dhcp[324970]: read /var/lib/neutron/dhcp/c427c754-98c1-4c1f-88ef-98f49fcb980c/host
Feb 20 09:58:31 np0005625203.localdomain dnsmasq-dhcp[324970]: read /var/lib/neutron/dhcp/c427c754-98c1-4c1f-88ef-98f49fcb980c/opts
Feb 20 09:58:32 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:32.133 262775 INFO neutron.agent.dhcp.agent [None req-f8de032b-153c-4dae-a0c7-f8d295390265 - - - - - -] DHCP configuration for ports {'c00c7579-221a-425f-a452-148d27eb2d2c'} is completed
Feb 20 09:58:32 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb],prefix=session evict} (starting...)
Feb 20 09:58:32 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:58:32 np0005625203.localdomain podman[324971]: 2026-02-20 09:58:32.780545612 +0000 UTC m=+0.096440925 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Feb 20 09:58:32 np0005625203.localdomain podman[324971]: 2026-02-20 09:58:32.811065436 +0000 UTC m=+0.126960729 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 20 09:58:32 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:58:32 np0005625203.localdomain systemd[1]: tmp-crun.HOuToC.mount: Deactivated successfully.
Feb 20 09:58:32 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e198 e198: 6 total, 6 up, 6 in
Feb 20 09:58:33 np0005625203.localdomain ceph-mon[296066]: pgmap v413: 177 pgs: 177 active+clean; 196 MiB data, 997 MiB used, 41 GiB / 42 GiB avail; 104 KiB/s rd, 70 KiB/s wr, 148 op/s
Feb 20 09:58:33 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 20 09:58:33 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Feb 20 09:58:33 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Feb 20 09:58:33 np0005625203.localdomain ceph-mon[296066]: osdmap e198: 6 total, 6 up, 6 in
Feb 20 09:58:33 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/1453609140' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:58:33 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/1453609140' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:58:34 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice", "format": "json"}]: dispatch
Feb 20 09:58:34 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice", "format": "json"}]: dispatch
Feb 20 09:58:34 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "92707d6b-2313-43a9-b4cc-78c6dfb385e9", "format": "json"}]: dispatch
Feb 20 09:58:34 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "92707d6b-2313-43a9-b4cc-78c6dfb385e9", "force": true, "format": "json"}]: dispatch
Feb 20 09:58:35 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:35.020 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:35 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 09:58:35 np0005625203.localdomain ceph-mon[296066]: pgmap v415: 177 pgs: 177 active+clean; 196 MiB data, 997 MiB used, 41 GiB / 42 GiB avail; 75 KiB/s rd, 53 KiB/s wr, 105 op/s
Feb 20 09:58:35 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:35.425 262775 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:58:35Z, description=, device_id=6e190587-ab34-4300-b8f0-29e369b79c71, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4f01f70>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4f018b0>], id=294a08ae-f8b4-4714-b39d-0f2456a0f1c1, ip_allocation=immediate, mac_address=fa:16:3e:ab:60:50, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:58:28Z, description=, dns_domain=, id=c427c754-98c1-4c1f-88ef-98f49fcb980c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-VolumesActionsTest-508245943-network, port_security_enabled=True, project_id=68573218a6b141beb49fbacc5b306c7d, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=5979, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2976, status=ACTIVE, subnets=['99be2391-e5c1-43dc-9c11-136054de9192'], tags=[], tenant_id=68573218a6b141beb49fbacc5b306c7d, updated_at=2026-02-20T09:58:29Z, vlan_transparent=None, network_id=c427c754-98c1-4c1f-88ef-98f49fcb980c, port_security_enabled=False, project_id=68573218a6b141beb49fbacc5b306c7d, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3002, status=DOWN, tags=[], tenant_id=68573218a6b141beb49fbacc5b306c7d, updated_at=2026-02-20T09:58:35Z on network c427c754-98c1-4c1f-88ef-98f49fcb980c
Feb 20 09:58:35 np0005625203.localdomain dnsmasq[324970]: read /var/lib/neutron/dhcp/c427c754-98c1-4c1f-88ef-98f49fcb980c/addn_hosts - 1 addresses
Feb 20 09:58:35 np0005625203.localdomain dnsmasq-dhcp[324970]: read /var/lib/neutron/dhcp/c427c754-98c1-4c1f-88ef-98f49fcb980c/host
Feb 20 09:58:35 np0005625203.localdomain dnsmasq-dhcp[324970]: read /var/lib/neutron/dhcp/c427c754-98c1-4c1f-88ef-98f49fcb980c/opts
Feb 20 09:58:35 np0005625203.localdomain podman[325005]: 2026-02-20 09:58:35.623979822 +0000 UTC m=+0.057150280 container kill 8c483a247a07db0bad0cc0af1197d3f03f01c6b1c86afc9892900acf34340540 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c427c754-98c1-4c1f-88ef-98f49fcb980c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Feb 20 09:58:35 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:35.735 262775 INFO neutron.agent.linux.ip_lib [None req-5a615ed9-193c-4ba7-80e0-a99ddba02d24 - - - - - -] Device tapafd29956-14 cannot be used as it has no MAC address
Feb 20 09:58:35 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:35.776 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:35 np0005625203.localdomain kernel: device tapafd29956-14 entered promiscuous mode
Feb 20 09:58:35 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581515.7822] manager: (tapafd29956-14): new Generic device (/org/freedesktop/NetworkManager/Devices/75)
Feb 20 09:58:35 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:35.782 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:35 np0005625203.localdomain systemd-udevd[325034]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 09:58:35 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:58:35Z|00391|binding|INFO|Claiming lport afd29956-146a-4b52-adfb-966c11fcecd7 for this chassis.
Feb 20 09:58:35 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:58:35Z|00392|binding|INFO|afd29956-146a-4b52-adfb-966c11fcecd7: Claiming unknown
Feb 20 09:58:35 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:35.787 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:35 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:58:35.796 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-d612a55c-b2aa-4665-bf00-3e649d762c79', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d612a55c-b2aa-4665-bf00-3e649d762c79', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9fdf2c09b98d48c0bc67cc1c7702a8f4', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0faef055-9745-4ab8-b295-a6260661d3dc, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=afd29956-146a-4b52-adfb-966c11fcecd7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:58:35 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:58:35.798 161112 INFO neutron.agent.ovn.metadata.agent [-] Port afd29956-146a-4b52-adfb-966c11fcecd7 in datapath d612a55c-b2aa-4665-bf00-3e649d762c79 bound to our chassis
Feb 20 09:58:35 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:58:35.800 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network d612a55c-b2aa-4665-bf00-3e649d762c79 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 09:58:35 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:58:35.801 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[fad1744d-ab27-4cff-b697-d7334e10e8f9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:58:35 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:58:35Z|00393|binding|INFO|Setting lport afd29956-146a-4b52-adfb-966c11fcecd7 ovn-installed in OVS
Feb 20 09:58:35 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:58:35Z|00394|binding|INFO|Setting lport afd29956-146a-4b52-adfb-966c11fcecd7 up in Southbound
Feb 20 09:58:35 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:35.824 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:35 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:35.875 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:35 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:35.898 262775 INFO neutron.agent.dhcp.agent [None req-e0707d9a-24c4-4b87-8cb3-d4b7a565edf2 - - - - - -] DHCP configuration for ports {'294a08ae-f8b4-4714-b39d-0f2456a0f1c1'} is completed
Feb 20 09:58:35 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:35.904 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:36 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 20 09:58:36 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch
Feb 20 09:58:36 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished
Feb 20 09:58:36 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:36.494 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:36 np0005625203.localdomain podman[325091]: 
Feb 20 09:58:36 np0005625203.localdomain podman[325091]: 2026-02-20 09:58:36.781023418 +0000 UTC m=+0.089944133 container create e21c50f8ab1498fa70a200d0b71e2e48ef47e4efc2c31830cfd062869ef6cc55 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d612a55c-b2aa-4665-bf00-3e649d762c79, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 20 09:58:36 np0005625203.localdomain systemd[1]: Started libpod-conmon-e21c50f8ab1498fa70a200d0b71e2e48ef47e4efc2c31830cfd062869ef6cc55.scope.
Feb 20 09:58:36 np0005625203.localdomain podman[325091]: 2026-02-20 09:58:36.741417873 +0000 UTC m=+0.050338628 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:58:36 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:58:36 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fc1cf7a5b58b7161351dea26d410d0b70ad5edd88b7cedd26a4ea9550ce0202/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:58:36 np0005625203.localdomain podman[325091]: 2026-02-20 09:58:36.865233083 +0000 UTC m=+0.174153798 container init e21c50f8ab1498fa70a200d0b71e2e48ef47e4efc2c31830cfd062869ef6cc55 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d612a55c-b2aa-4665-bf00-3e649d762c79, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 09:58:36 np0005625203.localdomain podman[325091]: 2026-02-20 09:58:36.874570542 +0000 UTC m=+0.183491257 container start e21c50f8ab1498fa70a200d0b71e2e48ef47e4efc2c31830cfd062869ef6cc55 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d612a55c-b2aa-4665-bf00-3e649d762c79, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127)
Feb 20 09:58:36 np0005625203.localdomain dnsmasq[325108]: started, version 2.85 cachesize 150
Feb 20 09:58:36 np0005625203.localdomain dnsmasq[325108]: DNS service limited to local subnets
Feb 20 09:58:36 np0005625203.localdomain dnsmasq[325108]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:58:36 np0005625203.localdomain dnsmasq[325108]: warning: no upstream servers configured
Feb 20 09:58:36 np0005625203.localdomain dnsmasq-dhcp[325108]: DHCP, static leases only on 10.100.0.0, lease time 1d
Feb 20 09:58:36 np0005625203.localdomain dnsmasq[325108]: read /var/lib/neutron/dhcp/d612a55c-b2aa-4665-bf00-3e649d762c79/addn_hosts - 0 addresses
Feb 20 09:58:36 np0005625203.localdomain dnsmasq-dhcp[325108]: read /var/lib/neutron/dhcp/d612a55c-b2aa-4665-bf00-3e649d762c79/host
Feb 20 09:58:36 np0005625203.localdomain dnsmasq-dhcp[325108]: read /var/lib/neutron/dhcp/d612a55c-b2aa-4665-bf00-3e649d762c79/opts
Feb 20 09:58:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:58:37 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 09:58:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:58:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:58:37 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 09:58:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:58:37 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:37.066 262775 INFO neutron.agent.dhcp.agent [None req-5ece1f7a-31c4-4ff9-92bc-85f6fe1f8dc7 - - - - - -] DHCP configuration for ports {'082dea75-c58b-4458-a355-a40b55af6a87'} is completed
Feb 20 09:58:37 np0005625203.localdomain ceph-mon[296066]: pgmap v416: 177 pgs: 177 active+clean; 197 MiB data, 997 MiB used, 41 GiB / 42 GiB avail; 85 KiB/s rd, 63 KiB/s wr, 122 op/s
Feb 20 09:58:37 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice_bob", "tenant_id": "8e04bc360fa14db4a793bc5de7a0a299", "access_level": "rw", "format": "json"}]: dispatch
Feb 20 09:58:38 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:38.425 262775 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:58:35Z, description=, device_id=6e190587-ab34-4300-b8f0-29e369b79c71, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4eb6160>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4eb6850>], id=294a08ae-f8b4-4714-b39d-0f2456a0f1c1, ip_allocation=immediate, mac_address=fa:16:3e:ab:60:50, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:58:28Z, description=, dns_domain=, id=c427c754-98c1-4c1f-88ef-98f49fcb980c, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-VolumesActionsTest-508245943-network, port_security_enabled=True, project_id=68573218a6b141beb49fbacc5b306c7d, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=5979, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2976, status=ACTIVE, subnets=['99be2391-e5c1-43dc-9c11-136054de9192'], tags=[], tenant_id=68573218a6b141beb49fbacc5b306c7d, updated_at=2026-02-20T09:58:29Z, vlan_transparent=None, network_id=c427c754-98c1-4c1f-88ef-98f49fcb980c, port_security_enabled=False, project_id=68573218a6b141beb49fbacc5b306c7d, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3002, status=DOWN, tags=[], tenant_id=68573218a6b141beb49fbacc5b306c7d, updated_at=2026-02-20T09:58:35Z on network c427c754-98c1-4c1f-88ef-98f49fcb980c
Feb 20 09:58:38 np0005625203.localdomain dnsmasq[324970]: read /var/lib/neutron/dhcp/c427c754-98c1-4c1f-88ef-98f49fcb980c/addn_hosts - 1 addresses
Feb 20 09:58:38 np0005625203.localdomain podman[325126]: 2026-02-20 09:58:38.638990188 +0000 UTC m=+0.059197532 container kill 8c483a247a07db0bad0cc0af1197d3f03f01c6b1c86afc9892900acf34340540 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c427c754-98c1-4c1f-88ef-98f49fcb980c, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 09:58:38 np0005625203.localdomain dnsmasq-dhcp[324970]: read /var/lib/neutron/dhcp/c427c754-98c1-4c1f-88ef-98f49fcb980c/host
Feb 20 09:58:38 np0005625203.localdomain dnsmasq-dhcp[324970]: read /var/lib/neutron/dhcp/c427c754-98c1-4c1f-88ef-98f49fcb980c/opts
Feb 20 09:58:38 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:38.905 262775 INFO neutron.agent.dhcp.agent [None req-a5f62861-c6b4-44b3-8788-0429edf5dfa3 - - - - - -] DHCP configuration for ports {'294a08ae-f8b4-4714-b39d-0f2456a0f1c1'} is completed
Feb 20 09:58:39 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb],prefix=session evict} (starting...)
Feb 20 09:58:39 np0005625203.localdomain ceph-mon[296066]: pgmap v417: 177 pgs: 177 active+clean; 197 MiB data, 997 MiB used, 41 GiB / 42 GiB avail; 81 KiB/s rd, 60 KiB/s wr, 116 op/s
Feb 20 09:58:39 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 20 09:58:39 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Feb 20 09:58:39 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Feb 20 09:58:39 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:58:39 np0005625203.localdomain systemd[1]: tmp-crun.l1HH0z.mount: Deactivated successfully.
Feb 20 09:58:39 np0005625203.localdomain podman[325146]: 2026-02-20 09:58:39.780258287 +0000 UTC m=+0.083836675 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 20 09:58:39 np0005625203.localdomain podman[325146]: 2026-02-20 09:58:39.846170246 +0000 UTC m=+0.149748644 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible)
Feb 20 09:58:39 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:58:40 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 09:58:40 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:40.049 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:40 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb 20 09:58:40 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb 20 09:58:41 np0005625203.localdomain ceph-mon[296066]: pgmap v418: 177 pgs: 177 active+clean; 197 MiB data, 997 MiB used, 41 GiB / 42 GiB avail; 80 KiB/s rd, 60 KiB/s wr, 113 op/s
Feb 20 09:58:41 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:41.309 262775 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:58:41 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:41.541 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:42 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 20 09:58:42 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:58:42Z|00395|ovn_bfd|INFO|Enabled BFD on interface ovn-0c414b-0
Feb 20 09:58:42 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:58:42Z|00396|ovn_bfd|INFO|Enabled BFD on interface ovn-2275c3-0
Feb 20 09:58:42 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:58:42Z|00397|ovn_bfd|INFO|Enabled BFD on interface ovn-2df8cc-0
Feb 20 09:58:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:42.372 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:42.383 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:42.389 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:42.413 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:42.452 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:42 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:58:42 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:58:42 np0005625203.localdomain podman[325174]: 2026-02-20 09:58:42.800418945 +0000 UTC m=+0.082910326 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:58:42 np0005625203.localdomain podman[325174]: 2026-02-20 09:58:42.815221923 +0000 UTC m=+0.097713304 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Feb 20 09:58:42 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:58:42 np0005625203.localdomain podman[325175]: 2026-02-20 09:58:42.908127697 +0000 UTC m=+0.183743736 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, build-date=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347, architecture=x86_64, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9/ubi-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.7, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, vcs-type=git)
Feb 20 09:58:42 np0005625203.localdomain podman[325175]: 2026-02-20 09:58:42.951317553 +0000 UTC m=+0.226933562 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, architecture=x86_64, build-date=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, release=1770267347, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z, maintainer=Red Hat, Inc., version=9.7, name=ubi9/ubi-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter)
Feb 20 09:58:42 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:58:43 np0005625203.localdomain sudo[325213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:58:43 np0005625203.localdomain sudo[325213]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:58:43 np0005625203.localdomain sudo[325213]: pam_unix(sudo:session): session closed for user root
Feb 20 09:58:43 np0005625203.localdomain sudo[325231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --image registry.redhat.io/rhceph/rhceph-7-rhel9:latest --timeout 895 ls
Feb 20 09:58:43 np0005625203.localdomain sudo[325231]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:58:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:43.647 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:43 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:43.649 262775 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:58:42Z, description=, device_id=7b11a9b1-00cf-4de3-addd-57993750a1cc, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4ef6610>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4ef6670>], id=40e45d8c-649c-441f-8391-efc8b0363d89, ip_allocation=immediate, mac_address=fa:16:3e:5c:3d:ce, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:58:33Z, description=, dns_domain=, id=d612a55c-b2aa-4665-bf00-3e649d762c79, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-VolumesBackupsTest-589446165-network, port_security_enabled=True, project_id=9fdf2c09b98d48c0bc67cc1c7702a8f4, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=39824, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2998, status=ACTIVE, subnets=['0033955d-e101-4546-b91c-6f7858342e2d'], tags=[], tenant_id=9fdf2c09b98d48c0bc67cc1c7702a8f4, updated_at=2026-02-20T09:58:34Z, vlan_transparent=None, network_id=d612a55c-b2aa-4665-bf00-3e649d762c79, port_security_enabled=False, project_id=9fdf2c09b98d48c0bc67cc1c7702a8f4, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3022, status=DOWN, tags=[], tenant_id=9fdf2c09b98d48c0bc67cc1c7702a8f4, updated_at=2026-02-20T09:58:43Z on network d612a55c-b2aa-4665-bf00-3e649d762c79
Feb 20 09:58:44 np0005625203.localdomain ceph-mon[296066]: pgmap v419: 177 pgs: 177 active+clean; 197 MiB data, 998 MiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 49 KiB/s wr, 36 op/s
Feb 20 09:58:44 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice_bob", "tenant_id": "8e04bc360fa14db4a793bc5de7a0a299", "access_level": "r", "format": "json"}]: dispatch
Feb 20 09:58:44 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch
Feb 20 09:58:44 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished
Feb 20 09:58:44 np0005625203.localdomain podman[325323]: 2026-02-20 09:58:44.149201101 +0000 UTC m=+0.199847172 container exec f6bf94ad66e54cdd610286b233c639418eb2b995978f1a145d6cfa77a016071d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203, org.opencontainers.image.created=2026-02-09T10:25:24Z, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14, GIT_CLEAN=True, GIT_REPO=https://github.com/ceph/ceph-container.git, version=7, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, vcs-type=git, io.openshift.expose-services=, release=1770267347, CEPH_POINT_RELEASE=, architecture=x86_64, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, name=rhceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.42.2, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, description=Red Hat Ceph Storage 7, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., ceph=True, url=https://catalog.redhat.com/en/search?searchType=containers, RELEASE=main, build-date=2026-02-09T10:25:24Z, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, com.redhat.component=rhceph-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=Red Hat Ceph Storage 7)
Feb 20 09:58:44 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:44.207 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:44 np0005625203.localdomain podman[325323]: 2026-02-20 09:58:44.254409788 +0000 UTC m=+0.305055849 container exec_died f6bf94ad66e54cdd610286b233c639418eb2b995978f1a145d6cfa77a016071d (image=registry.redhat.io/rhceph/rhceph-7-rhel9:latest, name=ceph-a8557ee9-b55d-5519-942c-cf8f6172f1d8-crash-np0005625203, distribution-scope=public, name=rhceph, vcs-ref=b3986a21dbd047e1edac0f24f7c0e811518e5b14, CEPH_POINT_RELEASE=, com.redhat.component=rhceph-container, version=7, io.openshift.expose-services=, release=1770267347, summary=Provides the latest Red Hat Ceph Storage 7 on RHEL 9 in a fully featured and supported base image., description=Red Hat Ceph Storage 7, org.opencontainers.image.created=2026-02-09T10:25:24Z, io.k8s.description=Red Hat Ceph Storage 7, ceph=True, build-date=2026-02-09T10:25:24Z, io.k8s.display-name=Red Hat Ceph Storage 7 on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, GIT_BRANCH=main, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, RELEASE=main, GIT_REPO=https://github.com/ceph/ceph-container.git, vcs-type=git, io.buildah.version=1.42.2, architecture=x86_64, io.openshift.tags=rhceph ceph, vendor=Red Hat, Inc., GIT_CLEAN=True, url=https://catalog.redhat.com/en/search?searchType=containers, GIT_COMMIT=12717c0777377369ea674892da98b0d85250f5b0, org.opencontainers.image.revision=b3986a21dbd047e1edac0f24f7c0e811518e5b14)
Feb 20 09:58:44 np0005625203.localdomain podman[325382]: 2026-02-20 09:58:44.423749507 +0000 UTC m=+0.054530268 container kill e21c50f8ab1498fa70a200d0b71e2e48ef47e4efc2c31830cfd062869ef6cc55 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d612a55c-b2aa-4665-bf00-3e649d762c79, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:58:44 np0005625203.localdomain dnsmasq[325108]: read /var/lib/neutron/dhcp/d612a55c-b2aa-4665-bf00-3e649d762c79/addn_hosts - 1 addresses
Feb 20 09:58:44 np0005625203.localdomain dnsmasq-dhcp[325108]: read /var/lib/neutron/dhcp/d612a55c-b2aa-4665-bf00-3e649d762c79/host
Feb 20 09:58:44 np0005625203.localdomain dnsmasq-dhcp[325108]: read /var/lib/neutron/dhcp/d612a55c-b2aa-4665-bf00-3e649d762c79/opts
Feb 20 09:58:44 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:44.681 262775 INFO neutron.agent.dhcp.agent [None req-9a316dd0-907d-4f8f-9a2a-8282c806630d - - - - - -] DHCP configuration for ports {'40e45d8c-649c-441f-8391-efc8b0363d89'} is completed
Feb 20 09:58:44 np0005625203.localdomain sudo[325231]: pam_unix(sudo:session): session closed for user root
Feb 20 09:58:44 np0005625203.localdomain sudo[325477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:58:45 np0005625203.localdomain sudo[325477]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:58:45 np0005625203.localdomain sudo[325477]: pam_unix(sudo:session): session closed for user root
Feb 20 09:58:45 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 09:58:45 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:45.053 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:45 np0005625203.localdomain sudo[325495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 09:58:45 np0005625203.localdomain sudo[325495]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:58:45 np0005625203.localdomain ceph-mon[296066]: pgmap v420: 177 pgs: 177 active+clean; 197 MiB data, 998 MiB used, 41 GiB / 42 GiB avail; 20 KiB/s rd, 46 KiB/s wr, 34 op/s
Feb 20 09:58:45 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:58:45 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:58:45 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:58:45 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:58:45 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:58:45 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:58:45 np0005625203.localdomain sudo[325495]: pam_unix(sudo:session): session closed for user root
Feb 20 09:58:45 np0005625203.localdomain sshd[325544]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:58:46 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:46.018 262775 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:58:42Z, description=, device_id=7b11a9b1-00cf-4de3-addd-57993750a1cc, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da56fb0d0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4c84f70>], id=40e45d8c-649c-441f-8391-efc8b0363d89, ip_allocation=immediate, mac_address=fa:16:3e:5c:3d:ce, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:58:33Z, description=, dns_domain=, id=d612a55c-b2aa-4665-bf00-3e649d762c79, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-VolumesBackupsTest-589446165-network, port_security_enabled=True, project_id=9fdf2c09b98d48c0bc67cc1c7702a8f4, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=39824, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2998, status=ACTIVE, subnets=['0033955d-e101-4546-b91c-6f7858342e2d'], tags=[], tenant_id=9fdf2c09b98d48c0bc67cc1c7702a8f4, updated_at=2026-02-20T09:58:34Z, vlan_transparent=None, network_id=d612a55c-b2aa-4665-bf00-3e649d762c79, port_security_enabled=False, project_id=9fdf2c09b98d48c0bc67cc1c7702a8f4, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3022, status=DOWN, tags=[], tenant_id=9fdf2c09b98d48c0bc67cc1c7702a8f4, updated_at=2026-02-20T09:58:43Z on network d612a55c-b2aa-4665-bf00-3e649d762c79
Feb 20 09:58:46 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb],prefix=session evict} (starting...)
Feb 20 09:58:46 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Feb 20 09:58:46 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Feb 20 09:58:46 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.4", "name": "osd_memory_target"} : dispatch
Feb 20 09:58:46 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.5", "name": "osd_memory_target"} : dispatch
Feb 20 09:58:46 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 20 09:58:46 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Feb 20 09:58:46 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Feb 20 09:58:46 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Feb 20 09:58:46 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config rm", "who": "osd.3", "name": "osd_memory_target"} : dispatch
Feb 20 09:58:46 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:58:46 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 09:58:46 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:58:46 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 09:58:46 np0005625203.localdomain podman[325563]: 2026-02-20 09:58:46.24168633 +0000 UTC m=+0.066982694 container kill e21c50f8ab1498fa70a200d0b71e2e48ef47e4efc2c31830cfd062869ef6cc55 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d612a55c-b2aa-4665-bf00-3e649d762c79, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 20 09:58:46 np0005625203.localdomain dnsmasq[325108]: read /var/lib/neutron/dhcp/d612a55c-b2aa-4665-bf00-3e649d762c79/addn_hosts - 1 addresses
Feb 20 09:58:46 np0005625203.localdomain dnsmasq-dhcp[325108]: read /var/lib/neutron/dhcp/d612a55c-b2aa-4665-bf00-3e649d762c79/host
Feb 20 09:58:46 np0005625203.localdomain dnsmasq-dhcp[325108]: read /var/lib/neutron/dhcp/d612a55c-b2aa-4665-bf00-3e649d762c79/opts
Feb 20 09:58:46 np0005625203.localdomain sudo[325569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:58:46 np0005625203.localdomain sudo[325569]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:58:46 np0005625203.localdomain sudo[325569]: pam_unix(sudo:session): session closed for user root
Feb 20 09:58:46 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:46.530 262775 INFO neutron.agent.dhcp.agent [None req-cd3bd245-347d-42df-ba90-0b6389f880ba - - - - - -] DHCP configuration for ports {'40e45d8c-649c-441f-8391-efc8b0363d89'} is completed
Feb 20 09:58:46 np0005625203.localdomain sshd[325544]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:58:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:46.587 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:47 np0005625203.localdomain ceph-mon[296066]: pgmap v421: 177 pgs: 177 active+clean; 197 MiB data, 999 MiB used, 41 GiB / 42 GiB avail; 18 KiB/s rd, 54 KiB/s wr, 33 op/s
Feb 20 09:58:47 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb 20 09:58:47 np0005625203.localdomain ceph-mon[296066]: Adjusting osd_memory_target on np0005625203.localdomain to 836.6M
Feb 20 09:58:47 np0005625203.localdomain ceph-mon[296066]: Adjusting osd_memory_target on np0005625202.localdomain to 836.6M
Feb 20 09:58:47 np0005625203.localdomain ceph-mon[296066]: Unable to set osd_memory_target on np0005625203.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Feb 20 09:58:47 np0005625203.localdomain ceph-mon[296066]: Unable to set osd_memory_target on np0005625202.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Feb 20 09:58:47 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb 20 09:58:47 np0005625203.localdomain ceph-mon[296066]: Adjusting osd_memory_target on np0005625204.localdomain to 836.6M
Feb 20 09:58:47 np0005625203.localdomain ceph-mon[296066]: Unable to set osd_memory_target on np0005625204.localdomain to 877246668: error parsing value: Value '877246668' is below minimum 939524096
Feb 20 09:58:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:47.610 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:48 np0005625203.localdomain sshd[325602]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:58:48 np0005625203.localdomain sshd[325602]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:58:49 np0005625203.localdomain ceph-mon[296066]: pgmap v422: 177 pgs: 177 active+clean; 197 MiB data, 999 MiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 39 KiB/s wr, 6 op/s
Feb 20 09:58:49 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:58:49 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb 20 09:58:49 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch
Feb 20 09:58:49 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished
Feb 20 09:58:49 np0005625203.localdomain sshd[325604]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:58:49 np0005625203.localdomain sshd[325604]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:58:50 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 09:58:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:50.056 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:50 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice bob", "tenant_id": "8e04bc360fa14db4a793bc5de7a0a299", "access_level": "rw", "format": "json"}]: dispatch
Feb 20 09:58:51 np0005625203.localdomain ceph-mon[296066]: pgmap v423: 177 pgs: 177 active+clean; 197 MiB data, 1007 MiB used, 41 GiB / 42 GiB avail; 597 B/s rd, 39 KiB/s wr, 7 op/s
Feb 20 09:58:51 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:51.627 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:52 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb],prefix=session evict} (starting...)
Feb 20 09:58:52 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 09:58:52 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:58:53 np0005625203.localdomain ceph-mon[296066]: pgmap v424: 177 pgs: 177 active+clean; 253 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 4.7 MiB/s wr, 37 op/s
Feb 20 09:58:53 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb 20 09:58:53 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Feb 20 09:58:53 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Feb 20 09:58:53 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:58:54 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb 20 09:58:54 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb 20 09:58:54 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "bc980eec-f143-463a-b3c7-dd1de3420de8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 09:58:54 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "bc980eec-f143-463a-b3c7-dd1de3420de8", "format": "json"}]: dispatch
Feb 20 09:58:54 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:58:54 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:58:54 np0005625203.localdomain podman[325607]: 2026-02-20 09:58:54.804784604 +0000 UTC m=+0.108948062 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 20 09:58:54 np0005625203.localdomain podman[325607]: 2026-02-20 09:58:54.817699043 +0000 UTC m=+0.121862471 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 20 09:58:54 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:58:54 np0005625203.localdomain podman[325606]: 2026-02-20 09:58:54.893631392 +0000 UTC m=+0.199163743 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 09:58:54 np0005625203.localdomain podman[325606]: 2026-02-20 09:58:54.900487234 +0000 UTC m=+0.206019565 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 09:58:54 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 09:58:54 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:54.964 262775 INFO neutron.agent.linux.ip_lib [None req-41067841-2c71-494c-a852-e01e7218899b - - - - - -] Device tap0e22fe2c-b4 cannot be used as it has no MAC address
Feb 20 09:58:54 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:54.991 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:54 np0005625203.localdomain kernel: device tap0e22fe2c-b4 entered promiscuous mode
Feb 20 09:58:55 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581535.0007] manager: (tap0e22fe2c-b4): new Generic device (/org/freedesktop/NetworkManager/Devices/76)
Feb 20 09:58:55 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:58:55Z|00398|binding|INFO|Claiming lport 0e22fe2c-b42a-47f1-9d70-2bb83d66cc98 for this chassis.
Feb 20 09:58:55 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:55.003 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:55 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:58:55Z|00399|binding|INFO|0e22fe2c-b42a-47f1-9d70-2bb83d66cc98: Claiming unknown
Feb 20 09:58:55 np0005625203.localdomain systemd-udevd[325663]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 09:58:55 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:58:55.019 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-eba31194-1fe0-47f9-81b7-c37337ed3597', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eba31194-1fe0-47f9-81b7-c37337ed3597', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db4c2cde6adc4016a4bb7c41aa8e59c8', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=538149bd-edbc-484e-b1fe-3819d98f7f1f, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=0e22fe2c-b42a-47f1-9d70-2bb83d66cc98) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:58:55 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:58:55.021 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 0e22fe2c-b42a-47f1-9d70-2bb83d66cc98 in datapath eba31194-1fe0-47f9-81b7-c37337ed3597 bound to our chassis
Feb 20 09:58:55 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:58:55.024 161112 DEBUG neutron.agent.ovn.metadata.agent [-] Port a713a662-d9a7-437f-8509-0a1e98c137f5 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Feb 20 09:58:55 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:58:55.025 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network eba31194-1fe0-47f9-81b7-c37337ed3597, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:58:55 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:58:55.026 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[cbd7810d-f2d4-4c39-a8bb-e39cdf601bed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:58:55 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 09:58:55 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:58:55Z|00400|binding|INFO|Setting lport 0e22fe2c-b42a-47f1-9d70-2bb83d66cc98 ovn-installed in OVS
Feb 20 09:58:55 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:58:55Z|00401|binding|INFO|Setting lport 0e22fe2c-b42a-47f1-9d70-2bb83d66cc98 up in Southbound
Feb 20 09:58:55 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:55.045 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:55 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:55.047 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:55 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:55.058 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:55 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:55.089 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:55 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:55.117 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:55 np0005625203.localdomain ceph-mon[296066]: pgmap v425: 177 pgs: 177 active+clean; 253 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1.7 MiB/s rd, 4.7 MiB/s wr, 34 op/s
Feb 20 09:58:55 np0005625203.localdomain podman[325718]: 
Feb 20 09:58:55 np0005625203.localdomain podman[325718]: 2026-02-20 09:58:55.945654459 +0000 UTC m=+0.077915642 container create 7b094124f79d997519d5aff0cf08abe84c9b4caac86b6558e4846e5f838a4f92 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-eba31194-1fe0-47f9-81b7-c37337ed3597, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 20 09:58:56 np0005625203.localdomain podman[325718]: 2026-02-20 09:58:55.901158492 +0000 UTC m=+0.033419715 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 09:58:56 np0005625203.localdomain systemd[1]: Started libpod-conmon-7b094124f79d997519d5aff0cf08abe84c9b4caac86b6558e4846e5f838a4f92.scope.
Feb 20 09:58:56 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 09:58:56 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/589e6e6dbc3f157bcb24c9bd4064e01bf918c94914e0bc5bac23c392d5538e24/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 09:58:56 np0005625203.localdomain podman[325718]: 2026-02-20 09:58:56.044409474 +0000 UTC m=+0.176670667 container init 7b094124f79d997519d5aff0cf08abe84c9b4caac86b6558e4846e5f838a4f92 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-eba31194-1fe0-47f9-81b7-c37337ed3597, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 20 09:58:56 np0005625203.localdomain podman[325718]: 2026-02-20 09:58:56.052852555 +0000 UTC m=+0.185113738 container start 7b094124f79d997519d5aff0cf08abe84c9b4caac86b6558e4846e5f838a4f92 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-eba31194-1fe0-47f9-81b7-c37337ed3597, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 20 09:58:56 np0005625203.localdomain dnsmasq[325736]: started, version 2.85 cachesize 150
Feb 20 09:58:56 np0005625203.localdomain dnsmasq[325736]: DNS service limited to local subnets
Feb 20 09:58:56 np0005625203.localdomain dnsmasq[325736]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 09:58:56 np0005625203.localdomain dnsmasq[325736]: warning: no upstream servers configured
Feb 20 09:58:56 np0005625203.localdomain dnsmasq-dhcp[325736]: DHCP, static leases only on 10.100.0.0, lease time 1d
Feb 20 09:58:56 np0005625203.localdomain dnsmasq[325736]: read /var/lib/neutron/dhcp/eba31194-1fe0-47f9-81b7-c37337ed3597/addn_hosts - 0 addresses
Feb 20 09:58:56 np0005625203.localdomain dnsmasq-dhcp[325736]: read /var/lib/neutron/dhcp/eba31194-1fe0-47f9-81b7-c37337ed3597/host
Feb 20 09:58:56 np0005625203.localdomain dnsmasq-dhcp[325736]: read /var/lib/neutron/dhcp/eba31194-1fe0-47f9-81b7-c37337ed3597/opts
Feb 20 09:58:56 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:56.123 262775 INFO neutron.agent.dhcp.agent [None req-26643d60-905a-482d-b7d9-519737bcc018 - - - - - -] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:58:55Z, description=, device_id=95ba958f-a3ec-4f5b-8859-f347b6468462, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4e89df0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4e89d30>], id=1092c0c2-b2b0-4d30-b2f1-0b3c8319f1ae, ip_allocation=immediate, mac_address=fa:16:3e:d4:76:23, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:58:51Z, description=, dns_domain=, id=eba31194-1fe0-47f9-81b7-c37337ed3597, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-router-network01--444942481, port_security_enabled=True, project_id=db4c2cde6adc4016a4bb7c41aa8e59c8, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=40602, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3045, status=ACTIVE, subnets=['f0844e0c-d09b-43c0-86d9-cb1cc9bf2b7f'], tags=[], tenant_id=db4c2cde6adc4016a4bb7c41aa8e59c8, updated_at=2026-02-20T09:58:52Z, vlan_transparent=None, network_id=eba31194-1fe0-47f9-81b7-c37337ed3597, port_security_enabled=False, project_id=db4c2cde6adc4016a4bb7c41aa8e59c8, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3062, status=DOWN, tags=[], tenant_id=db4c2cde6adc4016a4bb7c41aa8e59c8, updated_at=2026-02-20T09:58:55Z on network eba31194-1fe0-47f9-81b7-c37337ed3597
Feb 20 09:58:56 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:56.239 262775 INFO neutron.agent.dhcp.agent [None req-f89710d3-5ca0-40c9-acbc-a8f235b0ecd6 - - - - - -] DHCP configuration for ports {'b07e65bc-5ed0-48ac-8e0a-5cd210a54440'} is completed
Feb 20 09:58:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:56.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:58:56 np0005625203.localdomain dnsmasq[325736]: read /var/lib/neutron/dhcp/eba31194-1fe0-47f9-81b7-c37337ed3597/addn_hosts - 1 addresses
Feb 20 09:58:56 np0005625203.localdomain dnsmasq-dhcp[325736]: read /var/lib/neutron/dhcp/eba31194-1fe0-47f9-81b7-c37337ed3597/host
Feb 20 09:58:56 np0005625203.localdomain podman[325754]: 2026-02-20 09:58:56.357517521 +0000 UTC m=+0.064580589 container kill 7b094124f79d997519d5aff0cf08abe84c9b4caac86b6558e4846e5f838a4f92 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-eba31194-1fe0-47f9-81b7-c37337ed3597, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 20 09:58:56 np0005625203.localdomain dnsmasq-dhcp[325736]: read /var/lib/neutron/dhcp/eba31194-1fe0-47f9-81b7-c37337ed3597/opts
Feb 20 09:58:56 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/438591704' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:58:56 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb 20 09:58:56 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch
Feb 20 09:58:56 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished
Feb 20 09:58:56 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/1695766244' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:58:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:56.669 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:56 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 09:58:56 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:58:56 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:56.681 262775 INFO neutron.agent.dhcp.agent [None req-7efb158f-b190-471f-b910-2316173d8a93 - - - - - -] DHCP configuration for ports {'1092c0c2-b2b0-4d30-b2f1-0b3c8319f1ae'} is completed
Feb 20 09:58:57 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e199 e199: 6 total, 6 up, 6 in
Feb 20 09:58:57 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:57.125 262775 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:58:55Z, description=, device_id=95ba958f-a3ec-4f5b-8859-f347b6468462, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da56dff40>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da5771c70>], id=1092c0c2-b2b0-4d30-b2f1-0b3c8319f1ae, ip_allocation=immediate, mac_address=fa:16:3e:d4:76:23, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:58:51Z, description=, dns_domain=, id=eba31194-1fe0-47f9-81b7-c37337ed3597, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-router-network01--444942481, port_security_enabled=True, project_id=db4c2cde6adc4016a4bb7c41aa8e59c8, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=40602, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3045, status=ACTIVE, subnets=['f0844e0c-d09b-43c0-86d9-cb1cc9bf2b7f'], tags=[], tenant_id=db4c2cde6adc4016a4bb7c41aa8e59c8, updated_at=2026-02-20T09:58:52Z, vlan_transparent=None, network_id=eba31194-1fe0-47f9-81b7-c37337ed3597, port_security_enabled=False, project_id=db4c2cde6adc4016a4bb7c41aa8e59c8, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3062, status=DOWN, tags=[], tenant_id=db4c2cde6adc4016a4bb7c41aa8e59c8, updated_at=2026-02-20T09:58:55Z on network eba31194-1fe0-47f9-81b7-c37337ed3597
Feb 20 09:58:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:57.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:58:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:57.372 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:58:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:57.373 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:58:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:57.373 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:58:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:57.374 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Auditing locally available compute resources for np0005625203.localdomain (node: np0005625203.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 20 09:58:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:57.374 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:58:57 np0005625203.localdomain ceph-mon[296066]: pgmap v426: 177 pgs: 177 active+clean; 436 MiB data, 1.6 GiB used, 40 GiB / 42 GiB avail; 1.8 MiB/s rd, 18 MiB/s wr, 85 op/s
Feb 20 09:58:57 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice bob", "tenant_id": "8e04bc360fa14db4a793bc5de7a0a299", "access_level": "r", "format": "json"}]: dispatch
Feb 20 09:58:57 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e9bd40a1-418b-4133-bfcb-60a8a2bd8eed", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 09:58:57 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:58:57 np0005625203.localdomain ceph-mon[296066]: osdmap e199: 6 total, 6 up, 6 in
Feb 20 09:58:57 np0005625203.localdomain dnsmasq[325736]: read /var/lib/neutron/dhcp/eba31194-1fe0-47f9-81b7-c37337ed3597/addn_hosts - 1 addresses
Feb 20 09:58:57 np0005625203.localdomain dnsmasq-dhcp[325736]: read /var/lib/neutron/dhcp/eba31194-1fe0-47f9-81b7-c37337ed3597/host
Feb 20 09:58:57 np0005625203.localdomain dnsmasq-dhcp[325736]: read /var/lib/neutron/dhcp/eba31194-1fe0-47f9-81b7-c37337ed3597/opts
Feb 20 09:58:57 np0005625203.localdomain podman[325794]: 2026-02-20 09:58:57.53697758 +0000 UTC m=+0.077823908 container kill 7b094124f79d997519d5aff0cf08abe84c9b4caac86b6558e4846e5f838a4f92 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-eba31194-1fe0-47f9-81b7-c37337ed3597, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127)
Feb 20 09:58:57 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 20 09:58:57 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/420351506' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:58:57 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:58:57.827 262775 INFO neutron.agent.dhcp.agent [None req-18ef726a-bddf-4ca5-9d77-d0c5bd2d60dd - - - - - -] DHCP configuration for ports {'1092c0c2-b2b0-4d30-b2f1-0b3c8319f1ae'} is completed
Feb 20 09:58:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:57.831 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:58:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:58.053 279640 WARNING nova.virt.libvirt.driver [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 20 09:58:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:58.055 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Hypervisor/Node resource view: name=np0005625203.localdomain free_ram=11604MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 20 09:58:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:58.056 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:58:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:58.056 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:58:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:58.134 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 20 09:58:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:58.134 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Final resource view: name=np0005625203.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 20 09:58:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:58.182 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:58:58 np0005625203.localdomain dnsmasq[325736]: read /var/lib/neutron/dhcp/eba31194-1fe0-47f9-81b7-c37337ed3597/addn_hosts - 0 addresses
Feb 20 09:58:58 np0005625203.localdomain dnsmasq-dhcp[325736]: read /var/lib/neutron/dhcp/eba31194-1fe0-47f9-81b7-c37337ed3597/host
Feb 20 09:58:58 np0005625203.localdomain dnsmasq-dhcp[325736]: read /var/lib/neutron/dhcp/eba31194-1fe0-47f9-81b7-c37337ed3597/opts
Feb 20 09:58:58 np0005625203.localdomain podman[325855]: 2026-02-20 09:58:58.340307074 +0000 UTC m=+0.060687649 container kill 7b094124f79d997519d5aff0cf08abe84c9b4caac86b6558e4846e5f838a4f92 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-eba31194-1fe0-47f9-81b7-c37337ed3597, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3)
Feb 20 09:58:58 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e200 e200: 6 total, 6 up, 6 in
Feb 20 09:58:58 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e9bd40a1-418b-4133-bfcb-60a8a2bd8eed", "format": "json"}]: dispatch
Feb 20 09:58:58 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/420351506' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:58:58 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:58:58Z|00402|binding|INFO|Releasing lport 0e22fe2c-b42a-47f1-9d70-2bb83d66cc98 from this chassis (sb_readonly=0)
Feb 20 09:58:58 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:58:58Z|00403|binding|INFO|Setting lport 0e22fe2c-b42a-47f1-9d70-2bb83d66cc98 down in Southbound
Feb 20 09:58:58 np0005625203.localdomain kernel: device tap0e22fe2c-b4 left promiscuous mode
Feb 20 09:58:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:58.542 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:58 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:58:58.550 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-eba31194-1fe0-47f9-81b7-c37337ed3597', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eba31194-1fe0-47f9-81b7-c37337ed3597', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'db4c2cde6adc4016a4bb7c41aa8e59c8', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625203.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=538149bd-edbc-484e-b1fe-3819d98f7f1f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=0e22fe2c-b42a-47f1-9d70-2bb83d66cc98) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:58:58 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:58:58.551 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 0e22fe2c-b42a-47f1-9d70-2bb83d66cc98 in datapath eba31194-1fe0-47f9-81b7-c37337ed3597 unbound from our chassis
Feb 20 09:58:58 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:58:58.553 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network eba31194-1fe0-47f9-81b7-c37337ed3597, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:58:58 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:58:58.554 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[20cf0c2b-adaf-40f8-aef9-45784c3480a6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:58:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:58.559 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:58.641 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 09:58:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:58.648 279640 DEBUG nova.compute.provider_tree [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Inventory has not changed in ProviderTree for provider: e5d5157a-2df2-4f51-b5fb-cd2da3a8584e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 20 09:58:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:58.664 279640 DEBUG nova.scheduler.client.report [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Inventory has not changed for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 20 09:58:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:58.667 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Compute_service record updated for np0005625203.localdomain:np0005625203.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 20 09:58:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:58.668 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:58:58 np0005625203.localdomain podman[240359]: time="2026-02-20T09:58:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:58:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:58:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 161365 "" "Go-http-client/1.1"
Feb 20 09:58:59 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:58:59.056 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'c2:c2:14', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '46:d0:69:88:97:47'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:58:59 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:58:59.057 161112 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 20 09:58:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:58:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19725 "" "Go-http-client/1.1"
Feb 20 09:58:59 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:58:59.096 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:58:59 np0005625203.localdomain ceph-mon[296066]: pgmap v428: 177 pgs: 177 active+clean; 436 MiB data, 1.6 GiB used, 40 GiB / 42 GiB avail; 2.1 MiB/s rd, 21 MiB/s wr, 99 op/s
Feb 20 09:58:59 np0005625203.localdomain ceph-mon[296066]: osdmap e200: 6 total, 6 up, 6 in
Feb 20 09:58:59 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/1250733315' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:58:59 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb],prefix=session evict} (starting...)
Feb 20 09:59:00 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e200 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 09:59:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:59:00.061 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:59:00 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb 20 09:59:00 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb 20 09:59:00 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Feb 20 09:59:00 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Feb 20 09:59:00 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb 20 09:59:00 np0005625203.localdomain ceph-mon[296066]: pgmap v430: 177 pgs: 177 active+clean; 469 MiB data, 1.6 GiB used, 40 GiB / 42 GiB avail; 878 KiB/s rd, 24 MiB/s wr, 80 op/s
Feb 20 09:59:00 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e9bd40a1-418b-4133-bfcb-60a8a2bd8eed", "format": "json"}]: dispatch
Feb 20 09:59:00 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e9bd40a1-418b-4133-bfcb-60a8a2bd8eed", "force": true, "format": "json"}]: dispatch
Feb 20 09:59:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #43. Immutable memtables: 0.
Feb 20 09:59:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:59:00.439676) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 20 09:59:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 43
Feb 20 09:59:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581540439774, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 2815, "num_deletes": 266, "total_data_size": 4226290, "memory_usage": 4292192, "flush_reason": "Manual Compaction"}
Feb 20 09:59:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #44: started
Feb 20 09:59:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581540454449, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 44, "file_size": 2755961, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26048, "largest_seqno": 28858, "table_properties": {"data_size": 2744524, "index_size": 7302, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3077, "raw_key_size": 27174, "raw_average_key_size": 22, "raw_value_size": 2720547, "raw_average_value_size": 2257, "num_data_blocks": 307, "num_entries": 1205, "num_filter_entries": 1205, "num_deletions": 266, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771581414, "oldest_key_time": 1771581414, "file_creation_time": 1771581540, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d7091244-ec7b-4a4c-98bf-27480b1bb7f4", "db_session_id": "XSHOJ401GNN3F43CMPC3", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Feb 20 09:59:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 14820 microseconds, and 7678 cpu microseconds.
Feb 20 09:59:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 20 09:59:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:59:00.454503) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #44: 2755961 bytes OK
Feb 20 09:59:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:59:00.454531) [db/memtable_list.cc:519] [default] Level-0 commit table #44 started
Feb 20 09:59:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:59:00.456112) [db/memtable_list.cc:722] [default] Level-0 commit table #44: memtable #1 done
Feb 20 09:59:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:59:00.456133) EVENT_LOG_v1 {"time_micros": 1771581540456127, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 20 09:59:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:59:00.456159) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 20 09:59:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 4213111, prev total WAL file size 4213111, number of live WAL files 2.
Feb 20 09:59:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000040.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 09:59:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:59:00.457476) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132303438' seq:72057594037927935, type:22 .. '7061786F73003132333030' seq:0, type:0; will stop at (end)
Feb 20 09:59:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 20 09:59:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [44(2691KB)], [42(17MB)]
Feb 20 09:59:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581540457593, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [44], "files_L6": [42], "score": -1, "input_data_size": 20730196, "oldest_snapshot_seqno": -1}
Feb 20 09:59:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #45: 13382 keys, 19529564 bytes, temperature: kUnknown
Feb 20 09:59:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581540571324, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 45, "file_size": 19529564, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 19449521, "index_size": 45510, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 33477, "raw_key_size": 356146, "raw_average_key_size": 26, "raw_value_size": 19218374, "raw_average_value_size": 1436, "num_data_blocks": 1741, "num_entries": 13382, "num_filter_entries": 13382, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580698, "oldest_key_time": 0, "file_creation_time": 1771581540, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d7091244-ec7b-4a4c-98bf-27480b1bb7f4", "db_session_id": "XSHOJ401GNN3F43CMPC3", "orig_file_number": 45, "seqno_to_time_mapping": "N/A"}}
Feb 20 09:59:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 20 09:59:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:59:00.571694) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 19529564 bytes
Feb 20 09:59:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:59:00.573195) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 182.1 rd, 171.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 17.1 +0.0 blob) out(18.6 +0.0 blob), read-write-amplify(14.6) write-amplify(7.1) OK, records in: 13935, records dropped: 553 output_compression: NoCompression
Feb 20 09:59:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:59:00.573225) EVENT_LOG_v1 {"time_micros": 1771581540573211, "job": 24, "event": "compaction_finished", "compaction_time_micros": 113812, "compaction_time_cpu_micros": 60890, "output_level": 6, "num_output_files": 1, "total_output_size": 19529564, "num_input_records": 13935, "num_output_records": 13382, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 20 09:59:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 09:59:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581540573741, "job": 24, "event": "table_file_deletion", "file_number": 44}
Feb 20 09:59:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000042.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 09:59:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581540576529, "job": 24, "event": "table_file_deletion", "file_number": 42}
Feb 20 09:59:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:59:00.457278) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:59:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:59:00.576572) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:59:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:59:00.576579) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:59:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:59:00.576582) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:59:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:59:00.576585) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:59:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:59:00.576588) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:59:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:59:00.669 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:59:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:59:01.710 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:59:01 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/691707941' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:59:02 np0005625203.localdomain ceph-mon[296066]: pgmap v431: 177 pgs: 177 active+clean; 602 MiB data, 2.0 GiB used, 40 GiB / 42 GiB avail; 2.8 MiB/s rd, 37 MiB/s wr, 170 op/s
Feb 20 09:59:02 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/3305751398' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:59:02 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/3305751398' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:59:02 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/909841881' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:59:02 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 09:59:02 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2407210374' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:59:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #46. Immutable memtables: 0.
Feb 20 09:59:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:59:03.002847) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 20 09:59:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 46
Feb 20 09:59:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581543004123, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 298, "num_deletes": 256, "total_data_size": 82288, "memory_usage": 89400, "flush_reason": "Manual Compaction"}
Feb 20 09:59:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #47: started
Feb 20 09:59:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581543007175, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 47, "file_size": 53717, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28863, "largest_seqno": 29156, "table_properties": {"data_size": 51804, "index_size": 152, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4900, "raw_average_key_size": 17, "raw_value_size": 47932, "raw_average_value_size": 172, "num_data_blocks": 7, "num_entries": 278, "num_filter_entries": 278, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771581540, "oldest_key_time": 1771581540, "file_creation_time": 1771581543, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d7091244-ec7b-4a4c-98bf-27480b1bb7f4", "db_session_id": "XSHOJ401GNN3F43CMPC3", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Feb 20 09:59:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 3194 microseconds, and 1132 cpu microseconds.
Feb 20 09:59:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 20 09:59:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:59:03.007232) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #47: 53717 bytes OK
Feb 20 09:59:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:59:03.007256) [db/memtable_list.cc:519] [default] Level-0 commit table #47 started
Feb 20 09:59:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:59:03.009173) [db/memtable_list.cc:722] [default] Level-0 commit table #47: memtable #1 done
Feb 20 09:59:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:59:03.009199) EVENT_LOG_v1 {"time_micros": 1771581543009191, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 20 09:59:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:59:03.009224) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 20 09:59:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 80078, prev total WAL file size 80078, number of live WAL files 2.
Feb 20 09:59:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000043.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 09:59:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:59:03.013868) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034303231' seq:72057594037927935, type:22 .. '6C6F676D0034323733' seq:0, type:0; will stop at (end)
Feb 20 09:59:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 20 09:59:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [47(52KB)], [45(18MB)]
Feb 20 09:59:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581543013940, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [47], "files_L6": [45], "score": -1, "input_data_size": 19583281, "oldest_snapshot_seqno": -1}
Feb 20 09:59:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #48: 13140 keys, 18931089 bytes, temperature: kUnknown
Feb 20 09:59:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581543131828, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 48, "file_size": 18931089, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 18853881, "index_size": 43251, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32901, "raw_key_size": 352075, "raw_average_key_size": 26, "raw_value_size": 18628190, "raw_average_value_size": 1417, "num_data_blocks": 1636, "num_entries": 13140, "num_filter_entries": 13140, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580698, "oldest_key_time": 0, "file_creation_time": 1771581543, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d7091244-ec7b-4a4c-98bf-27480b1bb7f4", "db_session_id": "XSHOJ401GNN3F43CMPC3", "orig_file_number": 48, "seqno_to_time_mapping": "N/A"}}
Feb 20 09:59:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 20 09:59:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:59:03.132656) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 18931089 bytes
Feb 20 09:59:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:59:03.136721) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 165.4 rd, 159.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 18.6 +0.0 blob) out(18.1 +0.0 blob), read-write-amplify(717.0) write-amplify(352.4) OK, records in: 13660, records dropped: 520 output_compression: NoCompression
Feb 20 09:59:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:59:03.136750) EVENT_LOG_v1 {"time_micros": 1771581543136738, "job": 26, "event": "compaction_finished", "compaction_time_micros": 118405, "compaction_time_cpu_micros": 51095, "output_level": 6, "num_output_files": 1, "total_output_size": 18931089, "num_input_records": 13660, "num_output_records": 13140, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 20 09:59:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 09:59:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581543137518, "job": 26, "event": "table_file_deletion", "file_number": 47}
Feb 20 09:59:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000045.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 09:59:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581543140856, "job": 26, "event": "table_file_deletion", "file_number": 45}
Feb 20 09:59:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:59:03.013778) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:59:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:59:03.141080) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:59:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:59:03.141086) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:59:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:59:03.141090) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:59:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:59:03.141093) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:59:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-09:59:03.141096) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 09:59:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:59:03.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:59:03 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:59:03 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/2407210374' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:59:03 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice", "tenant_id": "8e04bc360fa14db4a793bc5de7a0a299", "access_level": "rw", "format": "json"}]: dispatch
Feb 20 09:59:03 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 20 09:59:03 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch
Feb 20 09:59:03 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished
Feb 20 09:59:03 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/4235731053' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:59:03 np0005625203.localdomain podman[325900]: 2026-02-20 09:59:03.78209752 +0000 UTC m=+0.092314147 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Feb 20 09:59:03 np0005625203.localdomain podman[325900]: 2026-02-20 09:59:03.786465085 +0000 UTC m=+0.096681692 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127)
Feb 20 09:59:03 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:59:04 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:59:04.340 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:59:04 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:59:04.341 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 20 09:59:04 np0005625203.localdomain sshd[325918]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:59:04 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e201 e201: 6 total, 6 up, 6 in
Feb 20 09:59:04 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "bc980eec-f143-463a-b3c7-dd1de3420de8", "format": "json"}]: dispatch
Feb 20 09:59:04 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "bc980eec-f143-463a-b3c7-dd1de3420de8", "force": true, "format": "json"}]: dispatch
Feb 20 09:59:04 np0005625203.localdomain ceph-mon[296066]: pgmap v432: 177 pgs: 177 active+clean; 602 MiB data, 2.0 GiB used, 40 GiB / 42 GiB avail; 2.7 MiB/s rd, 18 MiB/s wr, 92 op/s
Feb 20 09:59:05 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 09:59:05 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:59:05.396 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:59:05 np0005625203.localdomain ceph-mon[296066]: osdmap e201: 6 total, 6 up, 6 in
Feb 20 09:59:05 np0005625203.localdomain sshd[325918]: Invalid user ubuntu from 34.131.211.42 port 55382
Feb 20 09:59:06 np0005625203.localdomain sshd[325918]: Received disconnect from 34.131.211.42 port 55382:11: Bye Bye [preauth]
Feb 20 09:59:06 np0005625203.localdomain sshd[325918]: Disconnected from invalid user ubuntu 34.131.211.42 port 55382 [preauth]
Feb 20 09:59:06 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:59:06.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:59:06 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 09:59:06 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:59:06 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:59:06.788 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:59:06 np0005625203.localdomain ceph-mon[296066]: pgmap v434: 177 pgs: 177 active+clean; 769 MiB data, 2.4 GiB used, 40 GiB / 42 GiB avail; 5.4 MiB/s rd, 35 MiB/s wr, 174 op/s
Feb 20 09:59:06 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/1770859695' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:59:06 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:59:06 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/812428714' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:59:06 np0005625203.localdomain dnsmasq[325736]: exiting on receipt of SIGTERM
Feb 20 09:59:06 np0005625203.localdomain podman[325936]: 2026-02-20 09:59:06.950969238 +0000 UTC m=+0.068391637 container kill 7b094124f79d997519d5aff0cf08abe84c9b4caac86b6558e4846e5f838a4f92 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-eba31194-1fe0-47f9-81b7-c37337ed3597, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127)
Feb 20 09:59:06 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb],prefix=session evict} (starting...)
Feb 20 09:59:06 np0005625203.localdomain systemd[1]: libpod-7b094124f79d997519d5aff0cf08abe84c9b4caac86b6558e4846e5f838a4f92.scope: Deactivated successfully.
Feb 20 09:59:07 np0005625203.localdomain podman[325949]: 2026-02-20 09:59:07.029636371 +0000 UTC m=+0.062181104 container died 7b094124f79d997519d5aff0cf08abe84c9b4caac86b6558e4846e5f838a4f92 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-eba31194-1fe0-47f9-81b7-c37337ed3597, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Feb 20 09:59:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:59:07 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 09:59:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:59:07 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7b094124f79d997519d5aff0cf08abe84c9b4caac86b6558e4846e5f838a4f92-userdata-shm.mount: Deactivated successfully.
Feb 20 09:59:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:59:07.059 161112 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1e4d60e6-0be0-4143-b488-1b391fbc71ef, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 20 09:59:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:59:07 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 09:59:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:59:07 np0005625203.localdomain podman[325949]: 2026-02-20 09:59:07.063861551 +0000 UTC m=+0.096406264 container cleanup 7b094124f79d997519d5aff0cf08abe84c9b4caac86b6558e4846e5f838a4f92 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-eba31194-1fe0-47f9-81b7-c37337ed3597, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:59:07 np0005625203.localdomain systemd[1]: libpod-conmon-7b094124f79d997519d5aff0cf08abe84c9b4caac86b6558e4846e5f838a4f92.scope: Deactivated successfully.
Feb 20 09:59:07 np0005625203.localdomain podman[325950]: 2026-02-20 09:59:07.104103396 +0000 UTC m=+0.133067038 container remove 7b094124f79d997519d5aff0cf08abe84c9b4caac86b6558e4846e5f838a4f92 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-eba31194-1fe0-47f9-81b7-c37337ed3597, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Feb 20 09:59:07 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:59:07.327 262775 INFO neutron.agent.dhcp.agent [None req-ec5e4dde-f027-4264-bf61-799f1f41991e - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:59:07 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:59:07.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:59:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:59:07.672 161112 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:59:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:59:07.673 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:59:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:59:07.673 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:59:07 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-589e6e6dbc3f157bcb24c9bd4064e01bf918c94914e0bc5bac23c392d5538e24-merged.mount: Deactivated successfully.
Feb 20 09:59:07 np0005625203.localdomain systemd[1]: run-netns-qdhcp\x2deba31194\x2d1fe0\x2d47f9\x2d81b7\x2dc37337ed3597.mount: Deactivated successfully.
Feb 20 09:59:08 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:59:08.046 262775 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:59:08 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9e3e5801-af35-415e-a8cf-27831a052e85", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 09:59:08 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9e3e5801-af35-415e-a8cf-27831a052e85", "format": "json"}]: dispatch
Feb 20 09:59:08 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice", "format": "json"}]: dispatch
Feb 20 09:59:08 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 20 09:59:08 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Feb 20 09:59:08 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Feb 20 09:59:08 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice", "format": "json"}]: dispatch
Feb 20 09:59:08 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:59:08.099 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:59:08 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 09:59:08 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:59:08 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:59:08.337 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:59:08 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:59:08.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:59:08 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:59:08.341 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 20 09:59:08 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:59:08.341 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 20 09:59:08 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:59:08.361 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 20 09:59:09 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e202 e202: 6 total, 6 up, 6 in
Feb 20 09:59:09 np0005625203.localdomain ceph-mon[296066]: pgmap v435: 177 pgs: 177 active+clean; 769 MiB data, 2.4 GiB used, 40 GiB / 42 GiB avail; 4.7 MiB/s rd, 31 MiB/s wr, 151 op/s
Feb 20 09:59:09 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "29f88fa1-d324-41a4-9d70-2be9681e5831", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 09:59:09 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "29f88fa1-d324-41a4-9d70-2be9681e5831", "format": "json"}]: dispatch
Feb 20 09:59:09 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:59:10 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 09:59:10 np0005625203.localdomain ceph-mon[296066]: osdmap e202: 6 total, 6 up, 6 in
Feb 20 09:59:10 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 20 09:59:10 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e203 e203: 6 total, 6 up, 6 in
Feb 20 09:59:11 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:59:11.286 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:59:11 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:59:11 np0005625203.localdomain podman[325977]: 2026-02-20 09:59:11.357996571 +0000 UTC m=+0.055335763 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20260127, config_id=ovn_controller)
Feb 20 09:59:11 np0005625203.localdomain ceph-mon[296066]: pgmap v437: 177 pgs: 1 active+clean+snaptrim, 176 active+clean; 801 MiB data, 2.4 GiB used, 40 GiB / 42 GiB avail; 2.7 MiB/s rd, 22 MiB/s wr, 91 op/s
Feb 20 09:59:11 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9e3e5801-af35-415e-a8cf-27831a052e85", "format": "json"}]: dispatch
Feb 20 09:59:11 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9e3e5801-af35-415e-a8cf-27831a052e85", "force": true, "format": "json"}]: dispatch
Feb 20 09:59:11 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice", "tenant_id": "8e04bc360fa14db4a793bc5de7a0a299", "access_level": "r", "format": "json"}]: dispatch
Feb 20 09:59:11 np0005625203.localdomain ceph-mon[296066]: osdmap e203: 6 total, 6 up, 6 in
Feb 20 09:59:11 np0005625203.localdomain podman[325977]: 2026-02-20 09:59:11.4222793 +0000 UTC m=+0.119618482 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 20 09:59:11 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:59:11 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:59:11.789 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:59:12 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch
Feb 20 09:59:12 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished
Feb 20 09:59:12 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 09:59:12 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:59:13 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:59:13.063 2 INFO neutron.agent.securitygroups_rpc [None req-02b2e18c-e6bb-49a4-a8f1-2084c77a3d21 b8c5cd85f3954e32bc8ce4cc39694a8d 25c20dca96c143a09e2486b79027be5d - - default default] Security group rule updated ['aead394c-a7d3-40bc-acee-c30aa527c351']
Feb 20 09:59:13 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:59:13.179 2 INFO neutron.agent.securitygroups_rpc [None req-056c38b9-a9d3-4d30-8e17-1a44ab4fc9c9 b8c5cd85f3954e32bc8ce4cc39694a8d 25c20dca96c143a09e2486b79027be5d - - default default] Security group rule updated ['aead394c-a7d3-40bc-acee-c30aa527c351']
Feb 20 09:59:13 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e204 e204: 6 total, 6 up, 6 in
Feb 20 09:59:13 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:59:13.620 2 INFO neutron.agent.securitygroups_rpc [None req-5619c222-5cd3-438e-b875-1e00ee8b5a9d b8c5cd85f3954e32bc8ce4cc39694a8d 25c20dca96c143a09e2486b79027be5d - - default default] Security group rule updated ['f624598f-524d-4cc1-8bc8-0fb402ed46a6']
Feb 20 09:59:13 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:59:13 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:59:13 np0005625203.localdomain podman[326005]: 2026-02-20 09:59:13.757789574 +0000 UTC m=+0.076435976 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 20 09:59:13 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:59:13.792 2 INFO neutron.agent.securitygroups_rpc [None req-76a11935-2f93-444b-98b0-ed592d92678c b8c5cd85f3954e32bc8ce4cc39694a8d 25c20dca96c143a09e2486b79027be5d - - default default] Security group rule updated ['f624598f-524d-4cc1-8bc8-0fb402ed46a6']
Feb 20 09:59:13 np0005625203.localdomain podman[326005]: 2026-02-20 09:59:13.797267875 +0000 UTC m=+0.115914257 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Feb 20 09:59:13 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:59:13 np0005625203.localdomain ceph-mon[296066]: pgmap v439: 177 pgs: 1 active+clean+snaptrim, 176 active+clean; 857 MiB data, 2.7 GiB used, 39 GiB / 42 GiB avail; 2.3 MiB/s rd, 27 MiB/s wr, 159 op/s
Feb 20 09:59:13 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "93d1cbc6-d6ab-4443-b83a-ea63f0f8018d", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 09:59:13 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "93d1cbc6-d6ab-4443-b83a-ea63f0f8018d", "format": "json"}]: dispatch
Feb 20 09:59:13 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:59:13 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/2887261055' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:59:13 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/2887261055' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:59:13 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/2616686135' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:59:13 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/2616686135' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:59:13 np0005625203.localdomain ceph-mon[296066]: osdmap e204: 6 total, 6 up, 6 in
Feb 20 09:59:13 np0005625203.localdomain podman[326006]: 2026-02-20 09:59:13.868691975 +0000 UTC m=+0.181829956 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, distribution-scope=public, build-date=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, managed_by=edpm_ansible, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347, architecture=x86_64, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.openshift.tags=minimal rhel9, version=9.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, name=ubi9/ubi-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container)
Feb 20 09:59:13 np0005625203.localdomain podman[326006]: 2026-02-20 09:59:13.879854161 +0000 UTC m=+0.192992082 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, build-date=2026-02-05T04:57:10Z, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9/ubi-minimal, container_name=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, maintainer=Red Hat, Inc., version=9.7, architecture=x86_64, com.redhat.component=ubi9-minimal-container)
Feb 20 09:59:13 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:59:13 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:59:13.953 2 INFO neutron.agent.securitygroups_rpc [None req-92dae041-433a-447b-819c-ca016de78f58 b8c5cd85f3954e32bc8ce4cc39694a8d 25c20dca96c143a09e2486b79027be5d - - default default] Security group rule updated ['f624598f-524d-4cc1-8bc8-0fb402ed46a6']
Feb 20 09:59:14 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:59:14.378 2 INFO neutron.agent.securitygroups_rpc [None req-2cd7aa12-02dc-45d2-aacd-015fd7ca5faf b8c5cd85f3954e32bc8ce4cc39694a8d 25c20dca96c143a09e2486b79027be5d - - default default] Security group rule updated ['f624598f-524d-4cc1-8bc8-0fb402ed46a6']
Feb 20 09:59:14 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:59:14.498 2 INFO neutron.agent.securitygroups_rpc [None req-8704f1e9-0786-45ef-9124-4ff6c69c9edf b8c5cd85f3954e32bc8ce4cc39694a8d 25c20dca96c143a09e2486b79027be5d - - default default] Security group rule updated ['f624598f-524d-4cc1-8bc8-0fb402ed46a6']
Feb 20 09:59:14 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:59:14.619 2 INFO neutron.agent.securitygroups_rpc [None req-82bfdf30-c718-45ba-8302-76a78964efac b8c5cd85f3954e32bc8ce4cc39694a8d 25c20dca96c143a09e2486b79027be5d - - default default] Security group rule updated ['f624598f-524d-4cc1-8bc8-0fb402ed46a6']
Feb 20 09:59:14 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:59:14.758 2 INFO neutron.agent.securitygroups_rpc [None req-6f3202e0-dd36-49f7-90de-4aa05c7d3120 b8c5cd85f3954e32bc8ce4cc39694a8d 25c20dca96c143a09e2486b79027be5d - - default default] Security group rule updated ['f624598f-524d-4cc1-8bc8-0fb402ed46a6']
Feb 20 09:59:14 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:59:14.894 2 INFO neutron.agent.securitygroups_rpc [None req-348b4a4e-d300-45af-acd6-5c08e553ddf3 b8c5cd85f3954e32bc8ce4cc39694a8d 25c20dca96c143a09e2486b79027be5d - - default default] Security group rule updated ['f624598f-524d-4cc1-8bc8-0fb402ed46a6']
Feb 20 09:59:15 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e204 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 09:59:15 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:59:15.075 2 INFO neutron.agent.securitygroups_rpc [None req-9fbdc85e-428a-4317-ba9b-cd92888d9cb2 b8c5cd85f3954e32bc8ce4cc39694a8d 25c20dca96c143a09e2486b79027be5d - - default default] Security group rule updated ['f624598f-524d-4cc1-8bc8-0fb402ed46a6']
Feb 20 09:59:15 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:59:15.219 2 INFO neutron.agent.securitygroups_rpc [None req-db51d6cf-e253-422e-b04f-9b8629f57782 b8c5cd85f3954e32bc8ce4cc39694a8d 25c20dca96c143a09e2486b79027be5d - - default default] Security group rule updated ['f624598f-524d-4cc1-8bc8-0fb402ed46a6']
Feb 20 09:59:15 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:59:15.854 2 INFO neutron.agent.securitygroups_rpc [None req-af481eae-6194-40c9-88e4-bb7253323390 b8c5cd85f3954e32bc8ce4cc39694a8d 25c20dca96c143a09e2486b79027be5d - - default default] Security group rule updated ['16efbbcf-ddc6-4434-9318-5d841ffddaef']
Feb 20 09:59:15 np0005625203.localdomain ceph-mon[296066]: pgmap v441: 177 pgs: 1 active+clean+snaptrim, 176 active+clean; 857 MiB data, 2.7 GiB used, 39 GiB / 42 GiB avail; 93 KiB/s rd, 15 MiB/s wr, 131 op/s
Feb 20 09:59:16 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 09:59:16 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:59:16 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:59:16.328 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:59:16 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:59:16.607 2 INFO neutron.agent.securitygroups_rpc [None req-8397ac5f-4c0e-48b4-864b-bbce3e3a32e8 b8c5cd85f3954e32bc8ce4cc39694a8d 25c20dca96c143a09e2486b79027be5d - - default default] Security group rule updated ['868259ee-6cd3-44fa-b964-b511ba69ce8b']
Feb 20 09:59:16 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb],prefix=session evict} (starting...)
Feb 20 09:59:16 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:59:16.742 2 INFO neutron.agent.securitygroups_rpc [None req-248bf9b5-6ff0-42de-8583-69a922702068 b8c5cd85f3954e32bc8ce4cc39694a8d 25c20dca96c143a09e2486b79027be5d - - default default] Security group rule updated ['868259ee-6cd3-44fa-b964-b511ba69ce8b']
Feb 20 09:59:16 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:59:16.791 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:59:16 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e205 e205: 6 total, 6 up, 6 in
Feb 20 09:59:17 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ec89c339-8feb-4567-a8fe-cb38fb00b60f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 09:59:17 np0005625203.localdomain ceph-mon[296066]: pgmap v442: 177 pgs: 177 active+clean; 835 MiB data, 2.7 GiB used, 39 GiB / 42 GiB avail; 126 KiB/s rd, 17 MiB/s wr, 184 op/s
Feb 20 09:59:17 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ec89c339-8feb-4567-a8fe-cb38fb00b60f", "format": "json"}]: dispatch
Feb 20 09:59:17 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:59:17 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice", "format": "json"}]: dispatch
Feb 20 09:59:17 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 20 09:59:17 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Feb 20 09:59:17 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Feb 20 09:59:17 np0005625203.localdomain ceph-mon[296066]: osdmap e205: 6 total, 6 up, 6 in
Feb 20 09:59:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:59:17.204 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:59:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:59:17.204 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:59:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:59:17.204 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:59:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:59:17.204 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:59:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:59:17.205 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:59:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:59:17.205 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:59:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:59:17.205 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:59:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:59:17.205 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:59:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:59:17.205 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:59:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:59:17.206 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:59:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:59:17.206 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:59:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:59:17.206 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:59:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:59:17.206 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:59:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:59:17.206 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:59:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:59:17.206 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:59:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:59:17.206 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:59:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:59:17.207 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:59:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:59:17.207 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:59:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:59:17.207 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:59:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:59:17.207 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:59:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:59:17.207 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:59:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:59:17.207 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:59:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:59:17.208 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:59:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:59:17.208 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:59:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 09:59:17.208 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 09:59:17 np0005625203.localdomain sshd[326045]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:59:18 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e206 e206: 6 total, 6 up, 6 in
Feb 20 09:59:18 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice", "format": "json"}]: dispatch
Feb 20 09:59:18 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 20 09:59:18 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2185046675' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:59:18 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 20 09:59:18 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2185046675' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:59:18 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:59:18.750 2 INFO neutron.agent.securitygroups_rpc [None req-00ebe7d1-26f1-436c-a8d3-18ae30d4ceca b8c5cd85f3954e32bc8ce4cc39694a8d 25c20dca96c143a09e2486b79027be5d - - default default] Security group rule updated ['350e41a6-6799-4255-abb2-bda7d280e893']
Feb 20 09:59:18 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:59:18.937 2 INFO neutron.agent.securitygroups_rpc [None req-ab9099b6-173a-4528-9f76-ddc0c1b400ee b8c5cd85f3954e32bc8ce4cc39694a8d 25c20dca96c143a09e2486b79027be5d - - default default] Security group rule updated ['350e41a6-6799-4255-abb2-bda7d280e893']
Feb 20 09:59:19 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 09:59:19 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:59:19 np0005625203.localdomain ceph-mon[296066]: pgmap v444: 177 pgs: 177 active+clean; 835 MiB data, 2.7 GiB used, 39 GiB / 42 GiB avail; 36 KiB/s rd, 3.3 MiB/s wr, 56 op/s
Feb 20 09:59:19 np0005625203.localdomain ceph-mon[296066]: osdmap e206: 6 total, 6 up, 6 in
Feb 20 09:59:19 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/2185046675' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:59:19 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/2185046675' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:59:19 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:59:19 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 20 09:59:19 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e207 e207: 6 total, 6 up, 6 in
Feb 20 09:59:19 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:59:19.470 2 INFO neutron.agent.securitygroups_rpc [None req-483e27bf-9a6d-411a-b87b-b6f37447f4e8 b8c5cd85f3954e32bc8ce4cc39694a8d 25c20dca96c143a09e2486b79027be5d - - default default] Security group rule updated ['0ead8c40-caeb-4988-8a84-cd8090714d6e']
Feb 20 09:59:19 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:59:19.685 2 INFO neutron.agent.securitygroups_rpc [None req-ac337b7c-a535-40e3-b3bd-b5580b0e941d b8c5cd85f3954e32bc8ce4cc39694a8d 25c20dca96c143a09e2486b79027be5d - - default default] Security group rule updated ['0ead8c40-caeb-4988-8a84-cd8090714d6e']
Feb 20 09:59:19 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:59:19.821 2 INFO neutron.agent.securitygroups_rpc [None req-c5f864ad-b631-4012-870f-280605d80045 b8c5cd85f3954e32bc8ce4cc39694a8d 25c20dca96c143a09e2486b79027be5d - - default default] Security group rule updated ['0ead8c40-caeb-4988-8a84-cd8090714d6e']
Feb 20 09:59:19 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:59:19.992 2 INFO neutron.agent.securitygroups_rpc [None req-ac1e61bd-c67f-4839-a794-1523a2080faa b8c5cd85f3954e32bc8ce4cc39694a8d 25c20dca96c143a09e2486b79027be5d - - default default] Security group rule updated ['0ead8c40-caeb-4988-8a84-cd8090714d6e']
Feb 20 09:59:20 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 09:59:20 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:59:20.200 2 INFO neutron.agent.securitygroups_rpc [None req-7e4bce2e-fb70-442e-b47b-26c11122b51c b8c5cd85f3954e32bc8ce4cc39694a8d 25c20dca96c143a09e2486b79027be5d - - default default] Security group rule updated ['0ead8c40-caeb-4988-8a84-cd8090714d6e']
Feb 20 09:59:20 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:59:20.362 2 INFO neutron.agent.securitygroups_rpc [None req-9c6d9773-e163-46df-a8b7-894bf61ef867 b8c5cd85f3954e32bc8ce4cc39694a8d 25c20dca96c143a09e2486b79027be5d - - default default] Security group rule updated ['0ead8c40-caeb-4988-8a84-cd8090714d6e']
Feb 20 09:59:20 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "17a556c8-9ef9-44cc-8e06-318686688991", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 09:59:20 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "17a556c8-9ef9-44cc-8e06-318686688991", "format": "json"}]: dispatch
Feb 20 09:59:20 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice_bob", "tenant_id": "8e04bc360fa14db4a793bc5de7a0a299", "access_level": "rw", "format": "json"}]: dispatch
Feb 20 09:59:20 np0005625203.localdomain ceph-mon[296066]: osdmap e207: 6 total, 6 up, 6 in
Feb 20 09:59:20 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch
Feb 20 09:59:20 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished
Feb 20 09:59:20 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/1875354009' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:59:20 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/1875354009' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:59:20 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:59:20.784 2 INFO neutron.agent.securitygroups_rpc [None req-48000ec6-91b4-434a-ab1c-3ae5eaf7b735 b8c5cd85f3954e32bc8ce4cc39694a8d 25c20dca96c143a09e2486b79027be5d - - default default] Security group rule updated ['cefa71e1-4cfe-4451-bb5c-ca133ddcf1fd']
Feb 20 09:59:21 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:59:21.413 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:59:21 np0005625203.localdomain ceph-mon[296066]: pgmap v447: 177 pgs: 177 active+clean; 850 MiB data, 2.7 GiB used, 39 GiB / 42 GiB avail; 46 KiB/s rd, 6.7 MiB/s wr, 73 op/s
Feb 20 09:59:21 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/3226930422' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:59:21 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e208 e208: 6 total, 6 up, 6 in
Feb 20 09:59:21 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:59:21.793 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:59:22 np0005625203.localdomain ceph-mon[296066]: osdmap e208: 6 total, 6 up, 6 in
Feb 20 09:59:22 np0005625203.localdomain ceph-mon[296066]: pgmap v449: 177 pgs: 177 active+clean; 886 MiB data, 2.9 GiB used, 39 GiB / 42 GiB avail; 111 KiB/s rd, 23 MiB/s wr, 173 op/s
Feb 20 09:59:22 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e209 e209: 6 total, 6 up, 6 in
Feb 20 09:59:22 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:59:22.579 2 INFO neutron.agent.securitygroups_rpc [None req-56dcf14a-a69d-4366-adc0-f7e0579b7cd8 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Security group rule updated ['9d889f17-f220-427e-bd61-2fb67b868596']
Feb 20 09:59:22 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:59:22.700 2 INFO neutron.agent.securitygroups_rpc [None req-87bed7c3-5c32-49ad-acc0-0a3642727263 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Security group rule updated ['9d889f17-f220-427e-bd61-2fb67b868596']
Feb 20 09:59:22 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb],prefix=session evict} (starting...)
Feb 20 09:59:22 np0005625203.localdomain sshd[326045]: Invalid user n8n from 103.48.192.48 port 63727
Feb 20 09:59:23 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e210 e210: 6 total, 6 up, 6 in
Feb 20 09:59:23 np0005625203.localdomain ceph-mon[296066]: osdmap e209: 6 total, 6 up, 6 in
Feb 20 09:59:23 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb 20 09:59:23 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 20 09:59:23 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Feb 20 09:59:23 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Feb 20 09:59:23 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb 20 09:59:23 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "17a556c8-9ef9-44cc-8e06-318686688991", "format": "json"}]: dispatch
Feb 20 09:59:23 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "17a556c8-9ef9-44cc-8e06-318686688991", "force": true, "format": "json"}]: dispatch
Feb 20 09:59:23 np0005625203.localdomain ceph-mon[296066]: osdmap e210: 6 total, 6 up, 6 in
Feb 20 09:59:24 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e211 e211: 6 total, 6 up, 6 in
Feb 20 09:59:24 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Feb 20 09:59:24 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 09:59:24 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:59:25 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e211 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 09:59:25 np0005625203.localdomain ceph-mon[296066]: pgmap v452: 177 pgs: 177 active+clean; 886 MiB data, 2.9 GiB used, 39 GiB / 42 GiB avail; 121 KiB/s rd, 25 MiB/s wr, 190 op/s
Feb 20 09:59:25 np0005625203.localdomain ceph-mon[296066]: osdmap e211: 6 total, 6 up, 6 in
Feb 20 09:59:25 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "2f097eba-f8b3-4922-a9b2-f188b7fb5c09", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 09:59:25 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "2f097eba-f8b3-4922-a9b2-f188b7fb5c09", "format": "json"}]: dispatch
Feb 20 09:59:25 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:59:25 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:59:25 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:59:25 np0005625203.localdomain podman[326047]: 2026-02-20 09:59:25.772005298 +0000 UTC m=+0.090314875 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 09:59:25 np0005625203.localdomain podman[326047]: 2026-02-20 09:59:25.806780983 +0000 UTC m=+0.125090560 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 20 09:59:25 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 09:59:25 np0005625203.localdomain podman[326048]: 2026-02-20 09:59:25.825101041 +0000 UTC m=+0.140247850 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 20 09:59:25 np0005625203.localdomain podman[326048]: 2026-02-20 09:59:25.863409145 +0000 UTC m=+0.178555924 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 20 09:59:25 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:59:26 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e212 e212: 6 total, 6 up, 6 in
Feb 20 09:59:26 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 20 09:59:26 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch
Feb 20 09:59:26 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished
Feb 20 09:59:26 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:59:26.415 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:59:26 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:59:26.796 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:59:27 np0005625203.localdomain ceph-mon[296066]: pgmap v454: 177 pgs: 177 active+clean; 975 MiB data, 3.3 GiB used, 39 GiB / 42 GiB avail; 101 KiB/s rd, 31 MiB/s wr, 160 op/s
Feb 20 09:59:27 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice_bob", "tenant_id": "8e04bc360fa14db4a793bc5de7a0a299", "access_level": "r", "format": "json"}]: dispatch
Feb 20 09:59:27 np0005625203.localdomain ceph-mon[296066]: osdmap e212: 6 total, 6 up, 6 in
Feb 20 09:59:27 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ec89c339-8feb-4567-a8fe-cb38fb00b60f", "format": "json"}]: dispatch
Feb 20 09:59:27 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ec89c339-8feb-4567-a8fe-cb38fb00b60f", "force": true, "format": "json"}]: dispatch
Feb 20 09:59:27 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/3674888499' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:59:27 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 20 09:59:27 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1073711962' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:59:27 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 20 09:59:27 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1073711962' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:59:27 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 09:59:27.223 2 INFO neutron.agent.securitygroups_rpc [req-f264314a-f5fb-4167-9b9a-7fac156c481a req-f4a185a8-c20a-4c61-b6ac-a21285bd72eb 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Security group member updated ['9d889f17-f220-427e-bd61-2fb67b868596']
Feb 20 09:59:27 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:59:27.400 262775 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:59:27Z, description=, device_id=25d7d566-3a21-4292-a6ad-96dca2d2ec79, device_owner=, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4d23220>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4d23a30>], id=3cc99a44-cc7e-4f81-bce6-8e63dc92e267, ip_allocation=immediate, mac_address=fa:16:3e:b4:f9:fa, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T09:58:33Z, description=, dns_domain=, id=d612a55c-b2aa-4665-bf00-3e649d762c79, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-VolumesBackupsTest-589446165-network, port_security_enabled=True, project_id=9fdf2c09b98d48c0bc67cc1c7702a8f4, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=39824, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=2998, status=ACTIVE, subnets=['0033955d-e101-4546-b91c-6f7858342e2d'], tags=[], tenant_id=9fdf2c09b98d48c0bc67cc1c7702a8f4, updated_at=2026-02-20T09:58:34Z, vlan_transparent=None, network_id=d612a55c-b2aa-4665-bf00-3e649d762c79, port_security_enabled=True, project_id=9fdf2c09b98d48c0bc67cc1c7702a8f4, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=['9d889f17-f220-427e-bd61-2fb67b868596'], standard_attr_id=3151, status=DOWN, tags=[], tenant_id=9fdf2c09b98d48c0bc67cc1c7702a8f4, updated_at=2026-02-20T09:59:27Z on network d612a55c-b2aa-4665-bf00-3e649d762c79
Feb 20 09:59:27 np0005625203.localdomain dnsmasq[325108]: read /var/lib/neutron/dhcp/d612a55c-b2aa-4665-bf00-3e649d762c79/addn_hosts - 2 addresses
Feb 20 09:59:27 np0005625203.localdomain podman[326110]: 2026-02-20 09:59:27.602646433 +0000 UTC m=+0.043696512 container kill e21c50f8ab1498fa70a200d0b71e2e48ef47e4efc2c31830cfd062869ef6cc55 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d612a55c-b2aa-4665-bf00-3e649d762c79, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 20 09:59:27 np0005625203.localdomain dnsmasq-dhcp[325108]: read /var/lib/neutron/dhcp/d612a55c-b2aa-4665-bf00-3e649d762c79/host
Feb 20 09:59:27 np0005625203.localdomain dnsmasq-dhcp[325108]: read /var/lib/neutron/dhcp/d612a55c-b2aa-4665-bf00-3e649d762c79/opts
Feb 20 09:59:27 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:59:27.900 262775 INFO neutron.agent.dhcp.agent [None req-8db61e5c-1b7c-452f-8447-03fd2035f125 - - - - - -] DHCP configuration for ports {'3cc99a44-cc7e-4f81-bce6-8e63dc92e267'} is completed
Feb 20 09:59:28 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:59:28.003 262775 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=np0005625202.localdomain, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T09:59:27Z, description=, device_id=25d7d566-3a21-4292-a6ad-96dca2d2ec79, device_owner=compute:nova, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da573c520>], dns_domain=, dns_name=tempest-volumesbackupstest-instance-1173654775, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da573c580>], id=3cc99a44-cc7e-4f81-bce6-8e63dc92e267, ip_allocation=immediate, mac_address=fa:16:3e:b4:f9:fa, name=, network_id=d612a55c-b2aa-4665-bf00-3e649d762c79, port_security_enabled=True, project_id=9fdf2c09b98d48c0bc67cc1c7702a8f4, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=2, security_groups=['9d889f17-f220-427e-bd61-2fb67b868596'], standard_attr_id=3151, status=DOWN, tags=[], tenant_id=9fdf2c09b98d48c0bc67cc1c7702a8f4, updated_at=2026-02-20T09:59:27Z on network d612a55c-b2aa-4665-bf00-3e649d762c79
Feb 20 09:59:28 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e213 e213: 6 total, 6 up, 6 in
Feb 20 09:59:28 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/1073711962' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:59:28 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/1073711962' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:59:28 np0005625203.localdomain ceph-mon[296066]: osdmap e213: 6 total, 6 up, 6 in
Feb 20 09:59:28 np0005625203.localdomain dnsmasq[325108]: read /var/lib/neutron/dhcp/d612a55c-b2aa-4665-bf00-3e649d762c79/addn_hosts - 2 addresses
Feb 20 09:59:28 np0005625203.localdomain dnsmasq-dhcp[325108]: read /var/lib/neutron/dhcp/d612a55c-b2aa-4665-bf00-3e649d762c79/host
Feb 20 09:59:28 np0005625203.localdomain podman[326146]: 2026-02-20 09:59:28.218390313 +0000 UTC m=+0.059997287 container kill e21c50f8ab1498fa70a200d0b71e2e48ef47e4efc2c31830cfd062869ef6cc55 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d612a55c-b2aa-4665-bf00-3e649d762c79, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127)
Feb 20 09:59:28 np0005625203.localdomain dnsmasq-dhcp[325108]: read /var/lib/neutron/dhcp/d612a55c-b2aa-4665-bf00-3e649d762c79/opts
Feb 20 09:59:28 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:59:28.429 262775 INFO neutron.agent.dhcp.agent [None req-e1992962-c250-4345-a6c1-7823386901c6 - - - - - -] DHCP configuration for ports {'3cc99a44-cc7e-4f81-bce6-8e63dc92e267'} is completed
Feb 20 09:59:28 np0005625203.localdomain podman[240359]: time="2026-02-20T09:59:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:59:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:59:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 159542 "" "Go-http-client/1.1"
Feb 20 09:59:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:59:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 19264 "" "Go-http-client/1.1"
Feb 20 09:59:29 np0005625203.localdomain ceph-mon[296066]: pgmap v456: 177 pgs: 177 active+clean; 975 MiB data, 3.3 GiB used, 39 GiB / 42 GiB avail; 82 KiB/s rd, 25 MiB/s wr, 129 op/s
Feb 20 09:59:29 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb],prefix=session evict} (starting...)
Feb 20 09:59:30 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 09:59:30 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 20 09:59:30 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3380022144' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:59:30 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 20 09:59:30 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3380022144' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:59:30 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2f097eba-f8b3-4922-a9b2-f188b7fb5c09", "format": "json"}]: dispatch
Feb 20 09:59:30 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "2f097eba-f8b3-4922-a9b2-f188b7fb5c09", "force": true, "format": "json"}]: dispatch
Feb 20 09:59:30 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "93d1cbc6-d6ab-4443-b83a-ea63f0f8018d", "format": "json"}]: dispatch
Feb 20 09:59:30 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "93d1cbc6-d6ab-4443-b83a-ea63f0f8018d", "force": true, "format": "json"}]: dispatch
Feb 20 09:59:30 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/751048439' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:59:30 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 20 09:59:30 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Feb 20 09:59:30 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Feb 20 09:59:30 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/2224840685' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:59:30 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/3380022144' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:59:30 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/3380022144' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:59:31 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 20 09:59:31 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3197445632' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:59:31 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 20 09:59:31 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/3197445632' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:59:31 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb 20 09:59:31 np0005625203.localdomain ceph-mon[296066]: pgmap v458: 177 pgs: 177 active+clean; 983 MiB data, 3.3 GiB used, 39 GiB / 42 GiB avail; 96 KiB/s rd, 23 MiB/s wr, 143 op/s
Feb 20 09:59:31 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb 20 09:59:31 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/3197445632' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:59:31 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/3197445632' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:59:31 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:59:31.457 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:59:31 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:59:31.799 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:59:32 np0005625203.localdomain sshd[326167]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:59:33 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e214 e214: 6 total, 6 up, 6 in
Feb 20 09:59:33 np0005625203.localdomain sshd[326167]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:59:33 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:59:33.094 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:59:33 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:59:33.105 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:59:33 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:59:33.126 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:59:33 np0005625203.localdomain ceph-mon[296066]: pgmap v459: 177 pgs: 177 active+clean; 1.1 GiB data, 3.6 GiB used, 38 GiB / 42 GiB avail; 157 KiB/s rd, 35 MiB/s wr, 241 op/s
Feb 20 09:59:33 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "29f88fa1-d324-41a4-9d70-2be9681e5831", "format": "json"}]: dispatch
Feb 20 09:59:33 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "29f88fa1-d324-41a4-9d70-2be9681e5831", "force": true, "format": "json"}]: dispatch
Feb 20 09:59:33 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb 20 09:59:33 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch
Feb 20 09:59:33 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished
Feb 20 09:59:33 np0005625203.localdomain ceph-mon[296066]: osdmap e214: 6 total, 6 up, 6 in
Feb 20 09:59:34 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice bob", "tenant_id": "8e04bc360fa14db4a793bc5de7a0a299", "access_level": "rw", "format": "json"}]: dispatch
Feb 20 09:59:34 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 09:59:34 np0005625203.localdomain podman[326170]: 2026-02-20 09:59:34.767737966 +0000 UTC m=+0.082648378 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Feb 20 09:59:34 np0005625203.localdomain podman[326170]: 2026-02-20 09:59:34.77628389 +0000 UTC m=+0.091194352 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Feb 20 09:59:34 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 09:59:35 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 09:59:35 np0005625203.localdomain ceph-mon[296066]: pgmap v461: 177 pgs: 177 active+clean; 1.1 GiB data, 3.6 GiB used, 38 GiB / 42 GiB avail; 102 KiB/s rd, 18 MiB/s wr, 154 op/s
Feb 20 09:59:35 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e215 e215: 6 total, 6 up, 6 in
Feb 20 09:59:36 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb],prefix=session evict} (starting...)
Feb 20 09:59:36 np0005625203.localdomain ceph-mon[296066]: osdmap e215: 6 total, 6 up, 6 in
Feb 20 09:59:36 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb 20 09:59:36 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Feb 20 09:59:36 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Feb 20 09:59:36 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:59:36.460 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:59:36 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 20 09:59:36 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/597628337' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:59:36 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 20 09:59:36 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/597628337' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:59:36 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:59:36.857 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:59:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:59:37 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 09:59:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:59:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   09:59:37 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 09:59:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 09:59:37 np0005625203.localdomain ceph-mon[296066]: pgmap v463: 177 pgs: 177 active+clean; 1.2 GiB data, 4.0 GiB used, 38 GiB / 42 GiB avail; 3.1 MiB/s rd, 36 MiB/s wr, 314 op/s
Feb 20 09:59:37 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb 20 09:59:37 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb 20 09:59:37 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/597628337' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:59:37 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/597628337' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:59:37 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e216 e216: 6 total, 6 up, 6 in
Feb 20 09:59:37 np0005625203.localdomain sshd[326188]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:59:37 np0005625203.localdomain sshd[326188]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:59:38 np0005625203.localdomain ceph-mon[296066]: osdmap e216: 6 total, 6 up, 6 in
Feb 20 09:59:38 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/359541798' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:59:38 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/359541798' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:59:39 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 09:59:39 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:59:39 np0005625203.localdomain ceph-mon[296066]: pgmap v465: 177 pgs: 177 active+clean; 1.2 GiB data, 4.0 GiB used, 38 GiB / 42 GiB avail; 3.9 MiB/s rd, 23 MiB/s wr, 204 op/s
Feb 20 09:59:39 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7f87993f-62fd-4706-b657-9586f12f2a62", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 09:59:39 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7f87993f-62fd-4706-b657-9586f12f2a62", "format": "json"}]: dispatch
Feb 20 09:59:39 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:59:39 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice bob", "tenant_id": "8e04bc360fa14db4a793bc5de7a0a299", "access_level": "r", "format": "json"}]: dispatch
Feb 20 09:59:39 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb 20 09:59:39 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch
Feb 20 09:59:39 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished
Feb 20 09:59:39 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/1664593204' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:59:39 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/1664593204' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:59:39 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 20 09:59:39 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2948968927' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:59:39 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 20 09:59:39 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2948968927' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:59:40 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 09:59:40 np0005625203.localdomain ceph-mon[296066]: pgmap v466: 177 pgs: 177 active+clean; 1.0 GiB data, 3.6 GiB used, 38 GiB / 42 GiB avail; 3.5 MiB/s rd, 21 MiB/s wr, 209 op/s
Feb 20 09:59:40 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/2948968927' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:59:40 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/2948968927' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:59:40 np0005625203.localdomain dnsmasq[324970]: read /var/lib/neutron/dhcp/c427c754-98c1-4c1f-88ef-98f49fcb980c/addn_hosts - 0 addresses
Feb 20 09:59:40 np0005625203.localdomain dnsmasq-dhcp[324970]: read /var/lib/neutron/dhcp/c427c754-98c1-4c1f-88ef-98f49fcb980c/host
Feb 20 09:59:40 np0005625203.localdomain podman[326207]: 2026-02-20 09:59:40.68328585 +0000 UTC m=+0.066419406 container kill 8c483a247a07db0bad0cc0af1197d3f03f01c6b1c86afc9892900acf34340540 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c427c754-98c1-4c1f-88ef-98f49fcb980c, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 20 09:59:40 np0005625203.localdomain dnsmasq-dhcp[324970]: read /var/lib/neutron/dhcp/c427c754-98c1-4c1f-88ef-98f49fcb980c/opts
Feb 20 09:59:40 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:59:40.904 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:59:40 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:59:40Z|00404|binding|INFO|Releasing lport 1346e854-9f79-4f81-bb67-e77bc80b6d4d from this chassis (sb_readonly=0)
Feb 20 09:59:40 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T09:59:40Z|00405|binding|INFO|Setting lport 1346e854-9f79-4f81-bb67-e77bc80b6d4d down in Southbound
Feb 20 09:59:40 np0005625203.localdomain kernel: device tap1346e854-9f left promiscuous mode
Feb 20 09:59:40 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:59:40.915 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-c427c754-98c1-4c1f-88ef-98f49fcb980c', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c427c754-98c1-4c1f-88ef-98f49fcb980c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '68573218a6b141beb49fbacc5b306c7d', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625203.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=49ea52af-09a1-4f6d-be63-7f5675bd0bb1, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=1346e854-9f79-4f81-bb67-e77bc80b6d4d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:59:40 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:59:40.917 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 1346e854-9f79-4f81-bb67-e77bc80b6d4d in datapath c427c754-98c1-4c1f-88ef-98f49fcb980c unbound from our chassis
Feb 20 09:59:40 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:59:40.920 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c427c754-98c1-4c1f-88ef-98f49fcb980c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 09:59:40 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:59:40.921 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[ee9afa5a-0825-4186-b63c-65b65c57c2a3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 09:59:40 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:59:40.938 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:59:41 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:59:41.463 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:59:41 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 09:59:41 np0005625203.localdomain podman[326230]: 2026-02-20 09:59:41.760158675 +0000 UTC m=+0.077792458 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible)
Feb 20 09:59:41 np0005625203.localdomain podman[326230]: 2026-02-20 09:59:41.825396453 +0000 UTC m=+0.143030236 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 20 09:59:41 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 09:59:41 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:59:41.858 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:59:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:59:42.314 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:59:42 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 09:59:42 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:59:42 np0005625203.localdomain dnsmasq[324970]: exiting on receipt of SIGTERM
Feb 20 09:59:42 np0005625203.localdomain podman[326274]: 2026-02-20 09:59:42.68958818 +0000 UTC m=+0.040821684 container kill 8c483a247a07db0bad0cc0af1197d3f03f01c6b1c86afc9892900acf34340540 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c427c754-98c1-4c1f-88ef-98f49fcb980c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Feb 20 09:59:42 np0005625203.localdomain systemd[1]: libpod-8c483a247a07db0bad0cc0af1197d3f03f01c6b1c86afc9892900acf34340540.scope: Deactivated successfully.
Feb 20 09:59:42 np0005625203.localdomain ceph-mon[296066]: pgmap v467: 177 pgs: 177 active+clean; 247 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 3.0 MiB/s rd, 17 MiB/s wr, 274 op/s
Feb 20 09:59:42 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:59:42 np0005625203.localdomain podman[326292]: 2026-02-20 09:59:42.745680746 +0000 UTC m=+0.041046742 container died 8c483a247a07db0bad0cc0af1197d3f03f01c6b1c86afc9892900acf34340540 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c427c754-98c1-4c1f-88ef-98f49fcb980c, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.schema-version=1.0)
Feb 20 09:59:42 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8c483a247a07db0bad0cc0af1197d3f03f01c6b1c86afc9892900acf34340540-userdata-shm.mount: Deactivated successfully.
Feb 20 09:59:42 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-133d1219889b399b3156b0977916de7f7308b95f44eeda5abb50138cc5c9af41-merged.mount: Deactivated successfully.
Feb 20 09:59:42 np0005625203.localdomain podman[326292]: 2026-02-20 09:59:42.77783154 +0000 UTC m=+0.073197526 container cleanup 8c483a247a07db0bad0cc0af1197d3f03f01c6b1c86afc9892900acf34340540 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c427c754-98c1-4c1f-88ef-98f49fcb980c, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 20 09:59:42 np0005625203.localdomain systemd[1]: libpod-conmon-8c483a247a07db0bad0cc0af1197d3f03f01c6b1c86afc9892900acf34340540.scope: Deactivated successfully.
Feb 20 09:59:42 np0005625203.localdomain podman[326291]: 2026-02-20 09:59:42.841272573 +0000 UTC m=+0.133354447 container remove 8c483a247a07db0bad0cc0af1197d3f03f01c6b1c86afc9892900acf34340540 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-c427c754-98c1-4c1f-88ef-98f49fcb980c, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Feb 20 09:59:42 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:59:42.887 262775 INFO neutron.agent.dhcp.agent [None req-5598f4db-0b1e-4bbd-a49a-a38de5b2de7f - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:59:42 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 09:59:42.888 262775 INFO neutron.agent.dhcp.agent [None req-5598f4db-0b1e-4bbd-a49a-a38de5b2de7f - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 09:59:42 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb],prefix=session evict} (starting...)
Feb 20 09:59:43 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e217 e217: 6 total, 6 up, 6 in
Feb 20 09:59:43 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "52918c2e-6ed5-45c2-9872-88b3bd77010f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 09:59:43 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "52918c2e-6ed5-45c2-9872-88b3bd77010f", "format": "json"}]: dispatch
Feb 20 09:59:43 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb 20 09:59:43 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb 20 09:59:43 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Feb 20 09:59:43 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Feb 20 09:59:43 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb 20 09:59:43 np0005625203.localdomain ceph-mon[296066]: osdmap e217: 6 total, 6 up, 6 in
Feb 20 09:59:43 np0005625203.localdomain systemd[1]: run-netns-qdhcp\x2dc427c754\x2d98c1\x2d4c1f\x2d88ef\x2d98f49fcb980c.mount: Deactivated successfully.
Feb 20 09:59:44 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 09:59:44 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 09:59:44 np0005625203.localdomain podman[326320]: 2026-02-20 09:59:44.771263292 +0000 UTC m=+0.090533462 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, managed_by=edpm_ansible)
Feb 20 09:59:44 np0005625203.localdomain podman[326320]: 2026-02-20 09:59:44.784219303 +0000 UTC m=+0.103489463 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 20 09:59:44 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 09:59:44 np0005625203.localdomain podman[326321]: 2026-02-20 09:59:44.873732651 +0000 UTC m=+0.185735707 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-type=git, io.openshift.expose-services=, version=9.7, build-date=2026-02-05T04:57:10Z, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1770267347, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.buildah.version=1.33.7, distribution-scope=public, maintainer=Red Hat, Inc., name=ubi9/ubi-minimal)
Feb 20 09:59:44 np0005625203.localdomain podman[326321]: 2026-02-20 09:59:44.915565076 +0000 UTC m=+0.227568142 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, io.buildah.version=1.33.7, release=1770267347, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-05T04:57:10Z, vendor=Red Hat, Inc., config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2026-02-05T04:57:10Z, managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., distribution-scope=public, container_name=openstack_network_exporter, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vcs-type=git, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c)
Feb 20 09:59:44 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 09:59:45 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e217 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 09:59:45 np0005625203.localdomain ceph-mon[296066]: pgmap v469: 177 pgs: 177 active+clean; 247 MiB data, 1.1 GiB used, 41 GiB / 42 GiB avail; 73 KiB/s rd, 46 KiB/s wr, 121 op/s
Feb 20 09:59:45 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/3502069534' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:59:45 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/3502069534' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:59:45 np0005625203.localdomain sshd[326045]: Received disconnect from 103.48.192.48 port 63727:11: Bye Bye [preauth]
Feb 20 09:59:45 np0005625203.localdomain sshd[326045]: Disconnected from invalid user n8n 103.48.192.48 port 63727 [preauth]
Feb 20 09:59:46 np0005625203.localdomain sudo[326354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 09:59:46 np0005625203.localdomain sudo[326354]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:59:46 np0005625203.localdomain sudo[326354]: pam_unix(sudo:session): session closed for user root
Feb 20 09:59:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:59:46.496 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:59:46 np0005625203.localdomain sudo[326372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 09:59:46 np0005625203.localdomain sudo[326372]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:59:46 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Feb 20 09:59:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:59:46.860 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:59:47 np0005625203.localdomain sudo[326372]: pam_unix(sudo:session): session closed for user root
Feb 20 09:59:47 np0005625203.localdomain ceph-mon[296066]: pgmap v470: 177 pgs: 177 active+clean; 280 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 567 KiB/s rd, 3.1 MiB/s wr, 233 op/s
Feb 20 09:59:47 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "52918c2e-6ed5-45c2-9872-88b3bd77010f", "auth_id": "Joe", "tenant_id": "8fac2513a3ab4162a13f560c6301f671", "access_level": "rw", "format": "json"}]: dispatch
Feb 20 09:59:47 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/52918c2e-6ed5-45c2-9872-88b3bd77010f/6bdf5028-e9f4-4a0d-812d-96892d0e92d0", "osd", "allow rw pool=manila_data namespace=fsvolumens_52918c2e-6ed5-45c2-9872-88b3bd77010f", "mon", "allow r"], "format": "json"} : dispatch
Feb 20 09:59:47 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.Joe", "caps": ["mds", "allow rw path=/volumes/_nogroup/52918c2e-6ed5-45c2-9872-88b3bd77010f/6bdf5028-e9f4-4a0d-812d-96892d0e92d0", "osd", "allow rw pool=manila_data namespace=fsvolumens_52918c2e-6ed5-45c2-9872-88b3bd77010f", "mon", "allow r"], "format": "json"}]': finished
Feb 20 09:59:47 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice", "tenant_id": "8e04bc360fa14db4a793bc5de7a0a299", "access_level": "rw", "format": "json"}]: dispatch
Feb 20 09:59:47 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 20 09:59:47 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch
Feb 20 09:59:47 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished
Feb 20 09:59:47 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/257417761' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:59:47 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/257417761' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:59:47 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 09:59:47 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 09:59:47 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:59:47 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 09:59:47 np0005625203.localdomain sudo[326422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 09:59:47 np0005625203.localdomain sudo[326422]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 09:59:47 np0005625203.localdomain sudo[326422]: pam_unix(sudo:session): session closed for user root
Feb 20 09:59:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 20 09:59:47 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 8400.1 total, 600.0 interval
                                                          Cumulative writes: 16K writes, 61K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.01 MB/s
                                                          Cumulative WAL: 16K writes, 5542 syncs, 2.94 writes per sync, written: 0.06 GB, 0.01 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 11K writes, 39K keys, 11K commit groups, 1.0 writes per commit group, ingest: 36.01 MB, 0.06 MB/s
                                                          Interval WAL: 11K writes, 4754 syncs, 2.35 writes per sync, written: 0.04 GB, 0.06 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 20 09:59:49 np0005625203.localdomain ceph-mon[296066]: pgmap v471: 177 pgs: 177 active+clean; 280 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 471 KiB/s rd, 2.6 MiB/s wr, 193 op/s
Feb 20 09:59:49 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/3243055867' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:59:49 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/3243055867' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:59:49 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 09:59:49 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb],prefix=session evict} (starting...)
Feb 20 09:59:49 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 09:59:49 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:59:50 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e217 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 09:59:50 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice", "format": "json"}]: dispatch
Feb 20 09:59:50 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 20 09:59:50 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Feb 20 09:59:50 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Feb 20 09:59:50 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice", "format": "json"}]: dispatch
Feb 20 09:59:50 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "2e5c48d9-dcb2-469b-91b0-c7f808d95c49", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 09:59:50 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "2e5c48d9-dcb2-469b-91b0-c7f808d95c49", "format": "json"}]: dispatch
Feb 20 09:59:50 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:59:51 np0005625203.localdomain ceph-mon[296066]: pgmap v472: 177 pgs: 177 active+clean; 280 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 462 KiB/s rd, 2.6 MiB/s wr, 182 op/s
Feb 20 09:59:51 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/813520502' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:59:51 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/813520502' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:59:51 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:59:51.497 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:59:51 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:59:51.862 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:59:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 20 09:59:51 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 8400.1 total, 600.0 interval
                                                          Cumulative writes: 18K writes, 69K keys, 18K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.01 MB/s
                                                          Cumulative WAL: 18K writes, 6335 syncs, 2.98 writes per sync, written: 0.05 GB, 0.01 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 12K writes, 43K keys, 12K commit groups, 1.0 writes per commit group, ingest: 27.89 MB, 0.05 MB/s
                                                          Interval WAL: 12K writes, 5517 syncs, 2.34 writes per sync, written: 0.03 GB, 0.05 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 20 09:59:52 np0005625203.localdomain sshd[326440]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 09:59:52 np0005625203.localdomain sshd[326440]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 09:59:52 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 20 09:59:52 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1899022262' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:59:52 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 20 09:59:52 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1899022262' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:59:52 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 09:59:52 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:59:53 np0005625203.localdomain ceph-mon[296066]: pgmap v473: 177 pgs: 177 active+clean; 280 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 441 KiB/s rd, 2.6 MiB/s wr, 141 op/s
Feb 20 09:59:53 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/1899022262' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:59:53 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/1899022262' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:59:53 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:59:53 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 20 09:59:53 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch
Feb 20 09:59:53 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished
Feb 20 09:59:53 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Feb 20 09:59:53 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 20 09:59:53 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1818888118' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:59:53 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 20 09:59:53 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1818888118' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:59:54 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b9bf80c2-3fd5-47b7-9223-4a556ce7ef61", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 09:59:54 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b9bf80c2-3fd5-47b7-9223-4a556ce7ef61", "format": "json"}]: dispatch
Feb 20 09:59:54 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice", "tenant_id": "8e04bc360fa14db4a793bc5de7a0a299", "access_level": "r", "format": "json"}]: dispatch
Feb 20 09:59:54 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "2e5c48d9-dcb2-469b-91b0-c7f808d95c49", "auth_id": "Joe", "tenant_id": "f656f9df86ae4c53b02f471da5bd5ad7", "access_level": "rw", "format": "json"}]: dispatch
Feb 20 09:59:54 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/1818888118' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 09:59:54 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/1818888118' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 09:59:55 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e217 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 09:59:55 np0005625203.localdomain ceph-mon[296066]: pgmap v474: 177 pgs: 177 active+clean; 280 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 416 KiB/s rd, 2.5 MiB/s wr, 133 op/s
Feb 20 09:59:55 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb],prefix=session evict} (starting...)
Feb 20 09:59:56 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 20 09:59:56 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Feb 20 09:59:56 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Feb 20 09:59:56 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-622295165", "format": "json"} : dispatch
Feb 20 09:59:56 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-622295165", "caps": ["mds", "allow rw path=/volumes/_nogroup/2e5c48d9-dcb2-469b-91b0-c7f808d95c49/17cc3f20-7a80-42d5-908d-fb1e4bba5d82", "osd", "allow rw pool=manila_data namespace=fsvolumens_2e5c48d9-dcb2-469b-91b0-c7f808d95c49", "mon", "allow r"], "format": "json"} : dispatch
Feb 20 09:59:56 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-622295165", "caps": ["mds", "allow rw path=/volumes/_nogroup/2e5c48d9-dcb2-469b-91b0-c7f808d95c49/17cc3f20-7a80-42d5-908d-fb1e4bba5d82", "osd", "allow rw pool=manila_data namespace=fsvolumens_2e5c48d9-dcb2-469b-91b0-c7f808d95c49", "mon", "allow r"], "format": "json"}]': finished
Feb 20 09:59:56 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 09:59:56 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:59:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:59:56.500 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:59:56 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 09:59:56 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 09:59:56 np0005625203.localdomain podman[326443]: 2026-02-20 09:59:56.791201522 +0000 UTC m=+0.096932550 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 20 09:59:56 np0005625203.localdomain podman[326443]: 2026-02-20 09:59:56.822936114 +0000 UTC m=+0.128667172 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 20 09:59:56 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 09:59:56 np0005625203.localdomain podman[326442]: 2026-02-20 09:59:56.838558157 +0000 UTC m=+0.144003286 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 20 09:59:56 np0005625203.localdomain podman[326442]: 2026-02-20 09:59:56.850324711 +0000 UTC m=+0.155769850 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 09:59:56 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 09:59:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:59:56.864 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:59:57 np0005625203.localdomain ceph-mon[296066]: pgmap v475: 177 pgs: 177 active+clean; 280 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 398 KiB/s rd, 2.2 MiB/s wr, 161 op/s
Feb 20 09:59:57 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice", "format": "json"}]: dispatch
Feb 20 09:59:57 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice", "format": "json"}]: dispatch
Feb 20 09:59:57 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "2e5c48d9-dcb2-469b-91b0-c7f808d95c49", "auth_id": "tempest-cephx-id-622295165", "tenant_id": "f656f9df86ae4c53b02f471da5bd5ad7", "access_level": "rw", "format": "json"}]: dispatch
Feb 20 09:59:57 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c382485c-3d04-46af-88c4-8eb360a0c45a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 09:59:57 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c382485c-3d04-46af-88c4-8eb360a0c45a", "format": "json"}]: dispatch
Feb 20 09:59:57 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 09:59:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:59:58.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:59:58 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:59:58.944 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'c2:c2:14', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '46:d0:69:88:97:47'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 09:59:58 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 09:59:58.946 161112 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 20 09:59:58 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:59:58.946 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 09:59:58 np0005625203.localdomain podman[240359]: time="2026-02-20T09:59:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 09:59:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:59:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157718 "" "Go-http-client/1.1"
Feb 20 09:59:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:09:59:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18782 "" "Go-http-client/1.1"
Feb 20 09:59:59 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:59:59.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 09:59:59 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:59:59.365 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 09:59:59 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:59:59.365 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 09:59:59 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:59:59.365 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 09:59:59 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:59:59.366 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Auditing locally available compute resources for np0005625203.localdomain (node: np0005625203.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 20 09:59:59 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:59:59.366 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 09:59:59 np0005625203.localdomain ceph-mon[296066]: pgmap v476: 177 pgs: 177 active+clean; 280 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 54 KiB/s rd, 76 KiB/s wr, 80 op/s
Feb 20 09:59:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 20 09:59:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch
Feb 20 09:59:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished
Feb 20 09:59:59 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk asok_command: session evict {filters=[auth_name=Joe,client_metadata.root=/volumes/_nogroup/2e5c48d9-dcb2-469b-91b0-c7f808d95c49/17cc3f20-7a80-42d5-908d-fb1e4bba5d82],prefix=session evict} (starting...)
Feb 20 09:59:59 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 20 09:59:59 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2891047774' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 09:59:59 np0005625203.localdomain nova_compute[279636]: 2026-02-20 09:59:59.821 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 10:00:00 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e217 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:00:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:00.054 279640 WARNING nova.virt.libvirt.driver [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 20 10:00:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:00.056 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Hypervisor/Node resource view: name=np0005625203.localdomain free_ram=11633MB free_disk=41.70030212402344GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 20 10:00:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:00.057 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 10:00:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:00.057 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 10:00:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:00.119 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 20 10:00:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:00.119 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Final resource view: name=np0005625203.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 20 10:00:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:00.147 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 10:00:00 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice_bob", "tenant_id": "8e04bc360fa14db4a793bc5de7a0a299", "access_level": "rw", "format": "json"}]: dispatch
Feb 20 10:00:00 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "2e5c48d9-dcb2-469b-91b0-c7f808d95c49", "auth_id": "Joe", "format": "json"}]: dispatch
Feb 20 10:00:00 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "2e5c48d9-dcb2-469b-91b0-c7f808d95c49", "auth_id": "Joe", "format": "json"}]: dispatch
Feb 20 10:00:00 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/2891047774' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 10:00:00 np0005625203.localdomain ceph-mon[296066]: overall HEALTH_OK
Feb 20 10:00:00 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 20 10:00:00 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1697830735' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 10:00:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:00.648 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 10:00:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:00.654 279640 DEBUG nova.compute.provider_tree [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Inventory has not changed in ProviderTree for provider: e5d5157a-2df2-4f51-b5fb-cd2da3a8584e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 20 10:00:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:00.669 279640 DEBUG nova.scheduler.client.report [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Inventory has not changed for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 20 10:00:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:00.671 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Compute_service record updated for np0005625203.localdomain:np0005625203.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 20 10:00:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:00.671 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 10:00:01 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "c382485c-3d04-46af-88c4-8eb360a0c45a", "snap_name": "25515e00-8c31-4609-a265-84e19f94da1a", "format": "json"}]: dispatch
Feb 20 10:00:01 np0005625203.localdomain ceph-mon[296066]: pgmap v477: 177 pgs: 177 active+clean; 280 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 54 KiB/s rd, 76 KiB/s wr, 81 op/s
Feb 20 10:00:01 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/1697830735' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 10:00:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:01.503 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:00:01 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 10:00:01 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:00:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:01.748 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:00:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:01.866 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:00:02 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb],prefix=session evict} (starting...)
Feb 20 10:00:02 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:00:02 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/738363458' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 10:00:02 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/738363458' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 10:00:02 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/1177894248' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 10:00:02 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 20 10:00:02 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Feb 20 10:00:02 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Feb 20 10:00:02 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk asok_command: session evict {filters=[auth_name=tempest-cephx-id-622295165,client_metadata.root=/volumes/_nogroup/2e5c48d9-dcb2-469b-91b0-c7f808d95c49/17cc3f20-7a80-42d5-908d-fb1e4bba5d82],prefix=session evict} (starting...)
Feb 20 10:00:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:02.672 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:00:03 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "f56cf5ff-1882-43b3-8a1d-c3448d693134", "size": 4294967296, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 10:00:03 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "f56cf5ff-1882-43b3-8a1d-c3448d693134", "format": "json"}]: dispatch
Feb 20 10:00:03 np0005625203.localdomain ceph-mon[296066]: pgmap v478: 177 pgs: 177 active+clean; 281 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 53 KiB/s rd, 115 KiB/s wr, 83 op/s
Feb 20 10:00:03 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb 20 10:00:03 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb 20 10:00:03 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "2e5c48d9-dcb2-469b-91b0-c7f808d95c49", "auth_id": "tempest-cephx-id-622295165", "format": "json"}]: dispatch
Feb 20 10:00:03 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-622295165", "format": "json"} : dispatch
Feb 20 10:00:03 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-622295165"} : dispatch
Feb 20 10:00:03 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-622295165"}]': finished
Feb 20 10:00:03 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "2e5c48d9-dcb2-469b-91b0-c7f808d95c49", "auth_id": "tempest-cephx-id-622295165", "format": "json"}]: dispatch
Feb 20 10:00:03 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/1389774980' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 10:00:04 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:04.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:00:04 np0005625203.localdomain ceph-mon[296066]: pgmap v479: 177 pgs: 177 active+clean; 281 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 30 KiB/s rd, 73 KiB/s wr, 48 op/s
Feb 20 10:00:05 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e217 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:00:05 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 10:00:05 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:00:05 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "34a9640c-9d47-4ee3-b8f2-22b14cf6ac88", "size": 3221225472, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 10:00:05 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "34a9640c-9d47-4ee3-b8f2-22b14cf6ac88", "format": "json"}]: dispatch
Feb 20 10:00:05 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:00:05 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 10:00:05 np0005625203.localdomain podman[326536]: 2026-02-20 10:00:05.769448478 +0000 UTC m=+0.085271710 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Feb 20 10:00:05 np0005625203.localdomain podman[326536]: 2026-02-20 10:00:05.799285141 +0000 UTC m=+0.115108313 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Feb 20 10:00:05 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 10:00:05 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:05.877 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:00:05 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk asok_command: session evict {filters=[auth_name=Joe,client_metadata.root=/volumes/_nogroup/52918c2e-6ed5-45c2-9872-88b3bd77010f/6bdf5028-e9f4-4a0d-812d-96892d0e92d0],prefix=session evict} (starting...)
Feb 20 10:00:06 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:06.340 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:00:06 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:06.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:00:06 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:06.341 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 20 10:00:06 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice_bob", "tenant_id": "8e04bc360fa14db4a793bc5de7a0a299", "access_level": "r", "format": "json"}]: dispatch
Feb 20 10:00:06 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 20 10:00:06 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch
Feb 20 10:00:06 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished
Feb 20 10:00:06 np0005625203.localdomain ceph-mon[296066]: pgmap v480: 177 pgs: 177 active+clean; 281 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 30 KiB/s rd, 104 KiB/s wr, 52 op/s
Feb 20 10:00:06 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "52918c2e-6ed5-45c2-9872-88b3bd77010f", "auth_id": "Joe", "format": "json"}]: dispatch
Feb 20 10:00:06 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.Joe", "format": "json"} : dispatch
Feb 20 10:00:06 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.Joe"} : dispatch
Feb 20 10:00:06 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.Joe"}]': finished
Feb 20 10:00:06 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "52918c2e-6ed5-45c2-9872-88b3bd77010f", "auth_id": "Joe", "format": "json"}]: dispatch
Feb 20 10:00:06 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c382485c-3d04-46af-88c4-8eb360a0c45a", "snap_name": "25515e00-8c31-4609-a265-84e19f94da1a_bb185c5e-7bce-4b96-b50a-2749adcb4cc3", "force": true, "format": "json"}]: dispatch
Feb 20 10:00:06 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "c382485c-3d04-46af-88c4-8eb360a0c45a", "snap_name": "25515e00-8c31-4609-a265-84e19f94da1a", "force": true, "format": "json"}]: dispatch
Feb 20 10:00:06 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:06.507 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:00:06 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:06.906 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:00:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   10:00:07 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 10:00:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   10:00:07 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 10:00:07 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/1240294107' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:00:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:00:07.673 161112 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 10:00:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:00:07.674 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 10:00:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:00:07.674 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 10:00:08 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:08.342 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:00:08 np0005625203.localdomain ceph-mon[296066]: pgmap v481: 177 pgs: 177 active+clean; 281 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 170 B/s rd, 70 KiB/s wr, 9 op/s
Feb 20 10:00:08 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/3814731526' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 10:00:08 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/373438564' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:00:08 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e218 e218: 6 total, 6 up, 6 in
Feb 20 10:00:08 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:00:08.947 161112 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1e4d60e6-0be0-4143-b488-1b391fbc71ef, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 20 10:00:09 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb],prefix=session evict} (starting...)
Feb 20 10:00:09 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:09.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:00:09 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:09.342 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 20 10:00:09 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:09.342 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 20 10:00:09 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:09.361 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 20 10:00:09 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e219 e219: 6 total, 6 up, 6 in
Feb 20 10:00:09 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "f56cf5ff-1882-43b3-8a1d-c3448d693134", "format": "json"}]: dispatch
Feb 20 10:00:09 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "f56cf5ff-1882-43b3-8a1d-c3448d693134", "force": true, "format": "json"}]: dispatch
Feb 20 10:00:09 np0005625203.localdomain ceph-mon[296066]: osdmap e218: 6 total, 6 up, 6 in
Feb 20 10:00:09 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/3827509993' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 10:00:09 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb 20 10:00:09 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 20 10:00:09 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Feb 20 10:00:09 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Feb 20 10:00:09 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb 20 10:00:09 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "7f87993f-62fd-4706-b657-9586f12f2a62", "auth_id": "admin", "tenant_id": "8fac2513a3ab4162a13f560c6301f671", "access_level": "rw", "format": "json"}]: dispatch
Feb 20 10:00:09 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin", "format": "json"} : dispatch
Feb 20 10:00:09 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c382485c-3d04-46af-88c4-8eb360a0c45a", "format": "json"}]: dispatch
Feb 20 10:00:09 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c382485c-3d04-46af-88c4-8eb360a0c45a", "force": true, "format": "json"}]: dispatch
Feb 20 10:00:10 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:00:10 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:10.356 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:00:10 np0005625203.localdomain ceph-mon[296066]: pgmap v483: 177 pgs: 177 active+clean; 281 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 716 B/s rd, 85 KiB/s wr, 12 op/s
Feb 20 10:00:10 np0005625203.localdomain ceph-mon[296066]: osdmap e219: 6 total, 6 up, 6 in
Feb 20 10:00:11 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e220 e220: 6 total, 6 up, 6 in
Feb 20 10:00:11 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:11.509 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:00:11 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:11.909 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:00:12 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/1439580111' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:00:12 np0005625203.localdomain ceph-mon[296066]: osdmap e220: 6 total, 6 up, 6 in
Feb 20 10:00:12 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 10:00:12 np0005625203.localdomain podman[326555]: 2026-02-20 10:00:12.780134923 +0000 UTC m=+0.092785742 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Feb 20 10:00:12 np0005625203.localdomain podman[326555]: 2026-02-20 10:00:12.886596096 +0000 UTC m=+0.199246925 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Feb 20 10:00:12 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 10:00:13 np0005625203.localdomain ceph-mon[296066]: pgmap v486: 177 pgs: 177 active+clean; 282 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 132 KiB/s wr, 51 op/s
Feb 20 10:00:13 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "34a9640c-9d47-4ee3-b8f2-22b14cf6ac88", "format": "json"}]: dispatch
Feb 20 10:00:13 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "34a9640c-9d47-4ee3-b8f2-22b14cf6ac88", "force": true, "format": "json"}]: dispatch
Feb 20 10:00:13 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/3101837055' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:00:13 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:13.336 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:00:14 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e221 e221: 6 total, 6 up, 6 in
Feb 20 10:00:14 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b9bf80c2-3fd5-47b7-9223-4a556ce7ef61", "format": "json"}]: dispatch
Feb 20 10:00:14 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b9bf80c2-3fd5-47b7-9223-4a556ce7ef61", "force": true, "format": "json"}]: dispatch
Feb 20 10:00:14 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "7f87993f-62fd-4706-b657-9586f12f2a62", "auth_id": "david", "tenant_id": "8fac2513a3ab4162a13f560c6301f671", "access_level": "rw", "format": "json"}]: dispatch
Feb 20 10:00:14 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Feb 20 10:00:14 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/7f87993f-62fd-4706-b657-9586f12f2a62/5d470986-550e-47dc-98aa-07cb4fcc93dc", "osd", "allow rw pool=manila_data namespace=fsvolumens_7f87993f-62fd-4706-b657-9586f12f2a62", "mon", "allow r"], "format": "json"} : dispatch
Feb 20 10:00:14 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.david", "caps": ["mds", "allow rw path=/volumes/_nogroup/7f87993f-62fd-4706-b657-9586f12f2a62/5d470986-550e-47dc-98aa-07cb4fcc93dc", "osd", "allow rw pool=manila_data namespace=fsvolumens_7f87993f-62fd-4706-b657-9586f12f2a62", "mon", "allow r"], "format": "json"}]': finished
Feb 20 10:00:14 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice bob", "tenant_id": "8e04bc360fa14db4a793bc5de7a0a299", "access_level": "rw", "format": "json"}]: dispatch
Feb 20 10:00:14 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb 20 10:00:14 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch
Feb 20 10:00:14 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished
Feb 20 10:00:15 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:00:15 np0005625203.localdomain ceph-mon[296066]: pgmap v487: 177 pgs: 177 active+clean; 282 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 132 KiB/s wr, 51 op/s
Feb 20 10:00:15 np0005625203.localdomain ceph-mon[296066]: osdmap e221: 6 total, 6 up, 6 in
Feb 20 10:00:15 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e222 e222: 6 total, 6 up, 6 in
Feb 20 10:00:15 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 10:00:15 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 10:00:15 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb],prefix=session evict} (starting...)
Feb 20 10:00:15 np0005625203.localdomain podman[326582]: 2026-02-20 10:00:15.78509284 +0000 UTC m=+0.098863840 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Feb 20 10:00:15 np0005625203.localdomain podman[326582]: 2026-02-20 10:00:15.846943734 +0000 UTC m=+0.160714744 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 10:00:15 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 10:00:15 np0005625203.localdomain podman[326583]: 2026-02-20 10:00:15.850995649 +0000 UTC m=+0.159502656 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, version=9.7, vcs-type=git, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, org.opencontainers.image.created=2026-02-05T04:57:10Z, distribution-scope=public, name=ubi9/ubi-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., io.buildah.version=1.33.7, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, build-date=2026-02-05T04:57:10Z)
Feb 20 10:00:15 np0005625203.localdomain sshd[326621]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 10:00:15 np0005625203.localdomain podman[326583]: 2026-02-20 10:00:15.933290095 +0000 UTC m=+0.241797092 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, architecture=x86_64, name=ubi9/ubi-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, org.opencontainers.image.created=2026-02-05T04:57:10Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, build-date=2026-02-05T04:57:10Z, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1770267347, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc.)
Feb 20 10:00:15 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 10:00:16 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 10:00:16 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:00:16 np0005625203.localdomain ceph-mon[296066]: osdmap e222: 6 total, 6 up, 6 in
Feb 20 10:00:16 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb 20 10:00:16 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb 20 10:00:16 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Feb 20 10:00:16 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Feb 20 10:00:16 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:00:16 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:16.512 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:00:16 np0005625203.localdomain sshd[326621]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 10:00:16 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:16.910 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:00:17 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Feb 20 10:00:17 np0005625203.localdomain ceph-mon[296066]: pgmap v490: 177 pgs: 177 active+clean; 546 MiB data, 1.9 GiB used, 40 GiB / 42 GiB avail; 152 KiB/s rd, 45 MiB/s wr, 250 op/s
Feb 20 10:00:17 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb 20 10:00:17 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7f44da3d-55b0-4b00-94b4-a7254a3d21a8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 10:00:17 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7f44da3d-55b0-4b00-94b4-a7254a3d21a8", "format": "json"}]: dispatch
Feb 20 10:00:17 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e223 e223: 6 total, 6 up, 6 in
Feb 20 10:00:18 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e224 e224: 6 total, 6 up, 6 in
Feb 20 10:00:18 np0005625203.localdomain ceph-mon[296066]: pgmap v492: 177 pgs: 177 active+clean; 546 MiB data, 1.9 GiB used, 40 GiB / 42 GiB avail; 125 KiB/s rd, 44 MiB/s wr, 198 op/s
Feb 20 10:00:18 np0005625203.localdomain ceph-mon[296066]: osdmap e223: 6 total, 6 up, 6 in
Feb 20 10:00:18 np0005625203.localdomain ceph-mon[296066]: osdmap e224: 6 total, 6 up, 6 in
Feb 20 10:00:18 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb 20 10:00:19 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 10:00:19 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:00:19 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 10:00:19 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/675717513' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:00:19 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice bob", "tenant_id": "8e04bc360fa14db4a793bc5de7a0a299", "access_level": "r", "format": "json"}]: dispatch
Feb 20 10:00:19 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch
Feb 20 10:00:19 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished
Feb 20 10:00:19 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "a251c3bc-737c-4438-9523-36041c19a61e", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 10:00:19 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "a251c3bc-737c-4438-9523-36041c19a61e", "format": "json"}]: dispatch
Feb 20 10:00:19 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:00:19 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/675717513' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:00:19 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Feb 20 10:00:20 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:00:20 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e225 e225: 6 total, 6 up, 6 in
Feb 20 10:00:20 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "7f44da3d-55b0-4b00-94b4-a7254a3d21a8", "auth_id": "david", "tenant_id": "f656f9df86ae4c53b02f471da5bd5ad7", "access_level": "rw", "format": "json"}]: dispatch
Feb 20 10:00:20 np0005625203.localdomain ceph-mon[296066]: pgmap v494: 177 pgs: 177 active+clean; 610 MiB data, 2.1 GiB used, 40 GiB / 42 GiB avail; 135 KiB/s rd, 59 MiB/s wr, 218 op/s
Feb 20 10:00:20 np0005625203.localdomain ceph-mon[296066]: osdmap e225: 6 total, 6 up, 6 in
Feb 20 10:00:21 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:21.515 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:00:21 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:21.912 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:00:22 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e226 e226: 6 total, 6 up, 6 in
Feb 20 10:00:22 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb],prefix=session evict} (starting...)
Feb 20 10:00:22 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk asok_command: session evict {filters=[auth_name=david,client_metadata.root=/volumes/_nogroup/7f44da3d-55b0-4b00-94b4-a7254a3d21a8/cd0b0378-3f17-48fa-acd7-6ae5ebc115a5],prefix=session evict} (starting...)
Feb 20 10:00:23 np0005625203.localdomain ceph-mon[296066]: pgmap v496: 177 pgs: 177 active+clean; 935 MiB data, 3.1 GiB used, 39 GiB / 42 GiB avail; 110 KiB/s rd, 65 MiB/s wr, 195 op/s
Feb 20 10:00:23 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb 20 10:00:23 np0005625203.localdomain ceph-mon[296066]: osdmap e226: 6 total, 6 up, 6 in
Feb 20 10:00:23 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb 20 10:00:23 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Feb 20 10:00:23 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Feb 20 10:00:23 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e227 e227: 6 total, 6 up, 6 in
Feb 20 10:00:23 np0005625203.localdomain sshd[326623]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 10:00:23 np0005625203.localdomain sshd[326623]: Invalid user x from 185.196.11.208 port 33290
Feb 20 10:00:23 np0005625203.localdomain sshd[326623]: Received disconnect from 185.196.11.208 port 33290:11: Bye Bye [preauth]
Feb 20 10:00:23 np0005625203.localdomain sshd[326623]: Disconnected from invalid user x 185.196.11.208 port 33290 [preauth]
Feb 20 10:00:24 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb 20 10:00:24 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "7f44da3d-55b0-4b00-94b4-a7254a3d21a8", "auth_id": "david", "format": "json"}]: dispatch
Feb 20 10:00:24 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "7f44da3d-55b0-4b00-94b4-a7254a3d21a8", "auth_id": "david", "format": "json"}]: dispatch
Feb 20 10:00:24 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "a251c3bc-737c-4438-9523-36041c19a61e", "format": "json"}]: dispatch
Feb 20 10:00:24 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "a251c3bc-737c-4438-9523-36041c19a61e", "force": true, "format": "json"}]: dispatch
Feb 20 10:00:24 np0005625203.localdomain ceph-mon[296066]: osdmap e227: 6 total, 6 up, 6 in
Feb 20 10:00:25 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:00:25 np0005625203.localdomain ceph-mon[296066]: pgmap v499: 177 pgs: 177 active+clean; 935 MiB data, 3.1 GiB used, 39 GiB / 42 GiB avail; 119 KiB/s rd, 70 MiB/s wr, 210 op/s
Feb 20 10:00:25 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 20 10:00:25 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e228 e228: 6 total, 6 up, 6 in
Feb 20 10:00:25 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk asok_command: session evict {filters=[auth_name=david,client_metadata.root=/volumes/_nogroup/7f87993f-62fd-4706-b657-9586f12f2a62/5d470986-550e-47dc-98aa-07cb4fcc93dc],prefix=session evict} (starting...)
Feb 20 10:00:26 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice", "tenant_id": "8e04bc360fa14db4a793bc5de7a0a299", "access_level": "rw", "format": "json"}]: dispatch
Feb 20 10:00:26 np0005625203.localdomain ceph-mon[296066]: osdmap e228: 6 total, 6 up, 6 in
Feb 20 10:00:26 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch
Feb 20 10:00:26 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished
Feb 20 10:00:26 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.david", "format": "json"} : dispatch
Feb 20 10:00:26 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.david"} : dispatch
Feb 20 10:00:26 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.david"}]': finished
Feb 20 10:00:26 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:26.558 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:00:26 np0005625203.localdomain sshd[326625]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 10:00:26 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e229 e229: 6 total, 6 up, 6 in
Feb 20 10:00:26 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:26.914 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:00:27 np0005625203.localdomain sshd[326625]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 10:00:27 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 10:00:27 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 10:00:27 np0005625203.localdomain podman[326628]: 2026-02-20 10:00:27.181547459 +0000 UTC m=+0.102208794 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 20 10:00:27 np0005625203.localdomain podman[326628]: 2026-02-20 10:00:27.193281992 +0000 UTC m=+0.113943337 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 20 10:00:27 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 10:00:27 np0005625203.localdomain podman[326627]: 2026-02-20 10:00:27.274385381 +0000 UTC m=+0.195126658 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 20 10:00:27 np0005625203.localdomain podman[326627]: 2026-02-20 10:00:27.28727019 +0000 UTC m=+0.208011467 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 20 10:00:27 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 10:00:27 np0005625203.localdomain ceph-mon[296066]: pgmap v501: 177 pgs: 177 active+clean; 1.3 GiB data, 4.2 GiB used, 38 GiB / 42 GiB avail; 156 KiB/s rd, 75 MiB/s wr, 254 op/s
Feb 20 10:00:27 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "7f87993f-62fd-4706-b657-9586f12f2a62", "auth_id": "david", "format": "json"}]: dispatch
Feb 20 10:00:27 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "7f87993f-62fd-4706-b657-9586f12f2a62", "auth_id": "david", "format": "json"}]: dispatch
Feb 20 10:00:27 np0005625203.localdomain ceph-mon[296066]: osdmap e229: 6 total, 6 up, 6 in
Feb 20 10:00:28 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e230 e230: 6 total, 6 up, 6 in
Feb 20 10:00:28 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb],prefix=session evict} (starting...)
Feb 20 10:00:28 np0005625203.localdomain podman[240359]: time="2026-02-20T10:00:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 10:00:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:10:00:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157718 "" "Go-http-client/1.1"
Feb 20 10:00:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:10:00:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18789 "" "Go-http-client/1.1"
Feb 20 10:00:29 np0005625203.localdomain ceph-mon[296066]: pgmap v503: 177 pgs: 177 active+clean; 1.3 GiB data, 4.2 GiB used, 38 GiB / 42 GiB avail; 137 KiB/s rd, 66 MiB/s wr, 224 op/s
Feb 20 10:00:29 np0005625203.localdomain ceph-mon[296066]: osdmap e230: 6 total, 6 up, 6 in
Feb 20 10:00:29 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 20 10:00:29 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Feb 20 10:00:29 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Feb 20 10:00:29 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 10:00:29 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:00:30 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:00:30 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice", "format": "json"}]: dispatch
Feb 20 10:00:30 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice", "format": "json"}]: dispatch
Feb 20 10:00:30 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7f44da3d-55b0-4b00-94b4-a7254a3d21a8", "format": "json"}]: dispatch
Feb 20 10:00:30 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7f44da3d-55b0-4b00-94b4-a7254a3d21a8", "force": true, "format": "json"}]: dispatch
Feb 20 10:00:30 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:00:30 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e231 e231: 6 total, 6 up, 6 in
Feb 20 10:00:31 np0005625203.localdomain ceph-mon[296066]: pgmap v505: 177 pgs: 177 active+clean; 1.1 GiB data, 3.8 GiB used, 38 GiB / 42 GiB avail; 135 KiB/s rd, 62 MiB/s wr, 234 op/s
Feb 20 10:00:31 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "908a48ac-0297-49f9-ba67-7193ff5e6e97", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 10:00:31 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "908a48ac-0297-49f9-ba67-7193ff5e6e97", "format": "json"}]: dispatch
Feb 20 10:00:31 np0005625203.localdomain ceph-mon[296066]: osdmap e231: 6 total, 6 up, 6 in
Feb 20 10:00:31 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:31.561 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:00:31 np0005625203.localdomain neutron_sriov_agent[255600]: 2026-02-20 10:00:31.868 2 INFO neutron.agent.securitygroups_rpc [req-dff2b32a-81fc-4277-8af6-ada27919a489 req-cd96c648-ee5c-4ce2-9c39-be0fe03b42e9 2ba1a8d771344f0a918e0a8bed2efd06 9fdf2c09b98d48c0bc67cc1c7702a8f4 - - default default] Security group member updated ['9d889f17-f220-427e-bd61-2fb67b868596']
Feb 20 10:00:31 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:31.917 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:00:32 np0005625203.localdomain dnsmasq[325108]: read /var/lib/neutron/dhcp/d612a55c-b2aa-4665-bf00-3e649d762c79/addn_hosts - 1 addresses
Feb 20 10:00:32 np0005625203.localdomain dnsmasq-dhcp[325108]: read /var/lib/neutron/dhcp/d612a55c-b2aa-4665-bf00-3e649d762c79/host
Feb 20 10:00:32 np0005625203.localdomain podman[326691]: 2026-02-20 10:00:32.101142049 +0000 UTC m=+0.056428526 container kill e21c50f8ab1498fa70a200d0b71e2e48ef47e4efc2c31830cfd062869ef6cc55 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d612a55c-b2aa-4665-bf00-3e649d762c79, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Feb 20 10:00:32 np0005625203.localdomain dnsmasq-dhcp[325108]: read /var/lib/neutron/dhcp/d612a55c-b2aa-4665-bf00-3e649d762c79/opts
Feb 20 10:00:32 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 20 10:00:32 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch
Feb 20 10:00:33 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e232 e232: 6 total, 6 up, 6 in
Feb 20 10:00:33 np0005625203.localdomain ceph-mon[296066]: pgmap v507: 177 pgs: 177 active+clean; 283 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 94 KiB/s rd, 89 KiB/s wr, 181 op/s
Feb 20 10:00:33 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice", "tenant_id": "8e04bc360fa14db4a793bc5de7a0a299", "access_level": "r", "format": "json"}]: dispatch
Feb 20 10:00:33 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished
Feb 20 10:00:33 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2e5c48d9-dcb2-469b-91b0-c7f808d95c49", "format": "json"}]: dispatch
Feb 20 10:00:33 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/4233530869' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 10:00:34 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "2e5c48d9-dcb2-469b-91b0-c7f808d95c49", "force": true, "format": "json"}]: dispatch
Feb 20 10:00:34 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "908a48ac-0297-49f9-ba67-7193ff5e6e97", "format": "json"}]: dispatch
Feb 20 10:00:34 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "908a48ac-0297-49f9-ba67-7193ff5e6e97", "force": true, "format": "json"}]: dispatch
Feb 20 10:00:34 np0005625203.localdomain ceph-mon[296066]: pgmap v508: 177 pgs: 177 active+clean; 283 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 83 KiB/s rd, 79 KiB/s wr, 160 op/s
Feb 20 10:00:34 np0005625203.localdomain ceph-mon[296066]: osdmap e232: 6 total, 6 up, 6 in
Feb 20 10:00:34 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e233 e233: 6 total, 6 up, 6 in
Feb 20 10:00:35 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e233 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:00:35 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk asok_command: session evict {filters=[auth_name=alice,client_metadata.root=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb],prefix=session evict} (starting...)
Feb 20 10:00:35 np0005625203.localdomain ceph-mon[296066]: mgrmap e51: np0005625202.arwxwo(active, since 12m), standbys: np0005625203.lonygy, np0005625204.exgrzx
Feb 20 10:00:35 np0005625203.localdomain ceph-mon[296066]: osdmap e233: 6 total, 6 up, 6 in
Feb 20 10:00:35 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice", "format": "json"}]: dispatch
Feb 20 10:00:35 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice", "format": "json"} : dispatch
Feb 20 10:00:35 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice"} : dispatch
Feb 20 10:00:35 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice"}]': finished
Feb 20 10:00:35 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice", "format": "json"}]: dispatch
Feb 20 10:00:35 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e234 e234: 6 total, 6 up, 6 in
Feb 20 10:00:36 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:36.565 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:00:36 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 10:00:36 np0005625203.localdomain podman[326712]: 2026-02-20 10:00:36.761576554 +0000 UTC m=+0.077184790 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 10:00:36 np0005625203.localdomain podman[326712]: 2026-02-20 10:00:36.770439658 +0000 UTC m=+0.086047924 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb 20 10:00:36 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 10:00:36 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "52918c2e-6ed5-45c2-9872-88b3bd77010f", "format": "json"}]: dispatch
Feb 20 10:00:36 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "52918c2e-6ed5-45c2-9872-88b3bd77010f", "force": true, "format": "json"}]: dispatch
Feb 20 10:00:36 np0005625203.localdomain ceph-mon[296066]: pgmap v511: 177 pgs: 177 active+clean; 204 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 144 KiB/s rd, 96 KiB/s wr, 246 op/s
Feb 20 10:00:36 np0005625203.localdomain ceph-mon[296066]: osdmap e234: 6 total, 6 up, 6 in
Feb 20 10:00:36 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e235 e235: 6 total, 6 up, 6 in
Feb 20 10:00:36 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:36.920 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:00:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   10:00:37 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 10:00:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   10:00:37 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 10:00:37 np0005625203.localdomain ceph-mon[296066]: osdmap e235: 6 total, 6 up, 6 in
Feb 20 10:00:37 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/711179978' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 10:00:37 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/711179978' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 10:00:37 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/660812735' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 10:00:37 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/660812735' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 10:00:38 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e236 e236: 6 total, 6 up, 6 in
Feb 20 10:00:39 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T10:00:39Z|00406|ovn_bfd|INFO|Disabled BFD on interface ovn-0c414b-0
Feb 20 10:00:39 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T10:00:39Z|00407|ovn_bfd|INFO|Disabled BFD on interface ovn-2275c3-0
Feb 20 10:00:39 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T10:00:39Z|00408|ovn_bfd|INFO|Disabled BFD on interface ovn-2df8cc-0
Feb 20 10:00:39 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:39.073 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:00:39 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:39.075 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:00:39 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:39.079 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:00:39 np0005625203.localdomain ceph-mon[296066]: pgmap v514: 177 pgs: 177 active+clean; 204 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 86 KiB/s rd, 109 KiB/s wr, 136 op/s
Feb 20 10:00:39 np0005625203.localdomain ceph-mon[296066]: osdmap e236: 6 total, 6 up, 6 in
Feb 20 10:00:39 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 20 10:00:39 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch
Feb 20 10:00:39 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished
Feb 20 10:00:39 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e237 e237: 6 total, 6 up, 6 in
Feb 20 10:00:39 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:39.156 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:00:39 np0005625203.localdomain dnsmasq[325108]: read /var/lib/neutron/dhcp/d612a55c-b2aa-4665-bf00-3e649d762c79/addn_hosts - 0 addresses
Feb 20 10:00:39 np0005625203.localdomain dnsmasq-dhcp[325108]: read /var/lib/neutron/dhcp/d612a55c-b2aa-4665-bf00-3e649d762c79/host
Feb 20 10:00:39 np0005625203.localdomain podman[326748]: 2026-02-20 10:00:39.186475614 +0000 UTC m=+0.062959918 container kill e21c50f8ab1498fa70a200d0b71e2e48ef47e4efc2c31830cfd062869ef6cc55 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d612a55c-b2aa-4665-bf00-3e649d762c79, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 10:00:39 np0005625203.localdomain dnsmasq-dhcp[325108]: read /var/lib/neutron/dhcp/d612a55c-b2aa-4665-bf00-3e649d762c79/opts
Feb 20 10:00:39 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T10:00:39Z|00409|binding|INFO|Releasing lport afd29956-146a-4b52-adfb-966c11fcecd7 from this chassis (sb_readonly=0)
Feb 20 10:00:39 np0005625203.localdomain kernel: device tapafd29956-14 left promiscuous mode
Feb 20 10:00:39 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T10:00:39Z|00410|binding|INFO|Setting lport afd29956-146a-4b52-adfb-966c11fcecd7 down in Southbound
Feb 20 10:00:39 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:39.472 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:00:39 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:00:39.487 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-d612a55c-b2aa-4665-bf00-3e649d762c79', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d612a55c-b2aa-4665-bf00-3e649d762c79', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9fdf2c09b98d48c0bc67cc1c7702a8f4', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625203.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0faef055-9745-4ab8-b295-a6260661d3dc, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=afd29956-146a-4b52-adfb-966c11fcecd7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 10:00:39 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:00:39.489 161112 INFO neutron.agent.ovn.metadata.agent [-] Port afd29956-146a-4b52-adfb-966c11fcecd7 in datapath d612a55c-b2aa-4665-bf00-3e649d762c79 unbound from our chassis
Feb 20 10:00:39 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:00:39.492 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d612a55c-b2aa-4665-bf00-3e649d762c79, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 10:00:39 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:00:39.492 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[6f418f74-ed3e-402e-b332-4755ec05d766]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 10:00:39 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:39.496 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:00:40 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:00:40 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice_bob", "tenant_id": "8e04bc360fa14db4a793bc5de7a0a299", "access_level": "rw", "format": "json"}]: dispatch
Feb 20 10:00:40 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "7f87993f-62fd-4706-b657-9586f12f2a62", "auth_id": "admin", "format": "json"}]: dispatch
Feb 20 10:00:40 np0005625203.localdomain ceph-mon[296066]: osdmap e237: 6 total, 6 up, 6 in
Feb 20 10:00:41 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7f87993f-62fd-4706-b657-9586f12f2a62", "format": "json"}]: dispatch
Feb 20 10:00:41 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7f87993f-62fd-4706-b657-9586f12f2a62", "force": true, "format": "json"}]: dispatch
Feb 20 10:00:41 np0005625203.localdomain ceph-mon[296066]: pgmap v517: 177 pgs: 177 active+clean; 205 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 3.2 KiB/s rd, 94 KiB/s wr, 15 op/s
Feb 20 10:00:41 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/2569557107' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 10:00:41 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/2569557107' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 10:00:41 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:41.592 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:00:41 np0005625203.localdomain dnsmasq[325108]: exiting on receipt of SIGTERM
Feb 20 10:00:41 np0005625203.localdomain podman[326789]: 2026-02-20 10:00:41.829618098 +0000 UTC m=+0.045975994 container kill e21c50f8ab1498fa70a200d0b71e2e48ef47e4efc2c31830cfd062869ef6cc55 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d612a55c-b2aa-4665-bf00-3e649d762c79, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 20 10:00:41 np0005625203.localdomain systemd[1]: libpod-e21c50f8ab1498fa70a200d0b71e2e48ef47e4efc2c31830cfd062869ef6cc55.scope: Deactivated successfully.
Feb 20 10:00:41 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb],prefix=session evict} (starting...)
Feb 20 10:00:41 np0005625203.localdomain podman[326803]: 2026-02-20 10:00:41.909781097 +0000 UTC m=+0.055669233 container died e21c50f8ab1498fa70a200d0b71e2e48ef47e4efc2c31830cfd062869ef6cc55 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d612a55c-b2aa-4665-bf00-3e649d762c79, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Feb 20 10:00:41 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:41.923 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:00:41 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e21c50f8ab1498fa70a200d0b71e2e48ef47e4efc2c31830cfd062869ef6cc55-userdata-shm.mount: Deactivated successfully.
Feb 20 10:00:41 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-3fc1cf7a5b58b7161351dea26d410d0b70ad5edd88b7cedd26a4ea9550ce0202-merged.mount: Deactivated successfully.
Feb 20 10:00:41 np0005625203.localdomain podman[326803]: 2026-02-20 10:00:41.948706262 +0000 UTC m=+0.094594338 container remove e21c50f8ab1498fa70a200d0b71e2e48ef47e4efc2c31830cfd062869ef6cc55 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-d612a55c-b2aa-4665-bf00-3e649d762c79, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 20 10:00:41 np0005625203.localdomain systemd[1]: libpod-conmon-e21c50f8ab1498fa70a200d0b71e2e48ef47e4efc2c31830cfd062869ef6cc55.scope: Deactivated successfully.
Feb 20 10:00:41 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 10:00:41.976 262775 INFO neutron.agent.dhcp.agent [None req-4f197ce1-8180-40ed-8bfe-e3c33544222d - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 10:00:41 np0005625203.localdomain systemd[1]: run-netns-qdhcp\x2dd612a55c\x2db2aa\x2d4665\x2dbf00\x2d3e649d762c79.mount: Deactivated successfully.
Feb 20 10:00:42 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 10:00:42.095 262775 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 10:00:42 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 20 10:00:42 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Feb 20 10:00:42 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Feb 20 10:00:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:42.260 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:00:43 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e238 e238: 6 total, 6 up, 6 in
Feb 20 10:00:43 np0005625203.localdomain ceph-mon[296066]: pgmap v518: 177 pgs: 177 active+clean; 205 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 102 KiB/s rd, 72 KiB/s wr, 144 op/s
Feb 20 10:00:43 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb 20 10:00:43 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb 20 10:00:43 np0005625203.localdomain ceph-mon[296066]: osdmap e238: 6 total, 6 up, 6 in
Feb 20 10:00:43 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 10:00:43 np0005625203.localdomain systemd[1]: tmp-crun.ENsAYh.mount: Deactivated successfully.
Feb 20 10:00:43 np0005625203.localdomain podman[326828]: 2026-02-20 10:00:43.774499988 +0000 UTC m=+0.089782869 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, container_name=ovn_controller)
Feb 20 10:00:43 np0005625203.localdomain podman[326828]: 2026-02-20 10:00:43.876470612 +0000 UTC m=+0.191753473 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller)
Feb 20 10:00:43 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 10:00:45 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:00:45 np0005625203.localdomain ceph-mon[296066]: pgmap v520: 177 pgs: 177 active+clean; 205 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 100 KiB/s rd, 70 KiB/s wr, 141 op/s
Feb 20 10:00:45 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 20 10:00:45 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch
Feb 20 10:00:45 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice_bob", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished
Feb 20 10:00:46 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice_bob", "tenant_id": "8e04bc360fa14db4a793bc5de7a0a299", "access_level": "r", "format": "json"}]: dispatch
Feb 20 10:00:46 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/2664060444' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 10:00:46 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/2664060444' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 10:00:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:46.621 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:00:46 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 10:00:46 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 10:00:46 np0005625203.localdomain podman[326854]: 2026-02-20 10:00:46.771373984 +0000 UTC m=+0.081566834 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Feb 20 10:00:46 np0005625203.localdomain podman[326854]: 2026-02-20 10:00:46.787467512 +0000 UTC m=+0.097660322 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute)
Feb 20 10:00:46 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 10:00:46 np0005625203.localdomain podman[326855]: 2026-02-20 10:00:46.875573487 +0000 UTC m=+0.181121774 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, config_id=openstack_network_exporter, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1770267347, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, version=9.7, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., build-date=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.created=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, io.buildah.version=1.33.7, io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, distribution-scope=public, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c)
Feb 20 10:00:46 np0005625203.localdomain podman[326855]: 2026-02-20 10:00:46.916412661 +0000 UTC m=+0.221960968 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, name=ubi9/ubi-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, container_name=openstack_network_exporter, distribution-scope=public, maintainer=Red Hat, Inc., release=1770267347, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.7, build-date=2026-02-05T04:57:10Z, managed_by=edpm_ansible, org.opencontainers.image.created=2026-02-05T04:57:10Z)
Feb 20 10:00:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:46.925 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:00:46 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 10:00:47 np0005625203.localdomain ceph-mon[296066]: pgmap v521: 177 pgs: 177 active+clean; 205 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 119 KiB/s rd, 94 KiB/s wr, 170 op/s
Feb 20 10:00:47 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e239 e239: 6 total, 6 up, 6 in
Feb 20 10:00:47 np0005625203.localdomain sudo[326893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 10:00:47 np0005625203.localdomain sudo[326893]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 10:00:47 np0005625203.localdomain sudo[326893]: pam_unix(sudo:session): session closed for user root
Feb 20 10:00:47 np0005625203.localdomain sudo[326911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 10:00:47 np0005625203.localdomain sudo[326911]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 10:00:48 np0005625203.localdomain ceph-mon[296066]: osdmap e239: 6 total, 6 up, 6 in
Feb 20 10:00:48 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/3370685851' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 10:00:48 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/3370685851' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 10:00:48 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice_bob", "format": "json"} : dispatch
Feb 20 10:00:48 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice_bob"} : dispatch
Feb 20 10:00:48 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice_bob"}]': finished
Feb 20 10:00:48 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk asok_command: session evict {filters=[auth_name=alice_bob,client_metadata.root=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb],prefix=session evict} (starting...)
Feb 20 10:00:48 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 10:00:48 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:00:48 np0005625203.localdomain sudo[326911]: pam_unix(sudo:session): session closed for user root
Feb 20 10:00:48 np0005625203.localdomain sudo[326962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 10:00:48 np0005625203.localdomain sudo[326962]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 10:00:48 np0005625203.localdomain sudo[326962]: pam_unix(sudo:session): session closed for user root
Feb 20 10:00:49 np0005625203.localdomain ceph-mon[296066]: pgmap v523: 177 pgs: 177 active+clean; 205 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 111 KiB/s rd, 41 KiB/s wr, 153 op/s
Feb 20 10:00:49 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb 20 10:00:49 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice_bob", "format": "json"}]: dispatch
Feb 20 10:00:49 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "65f6f73f-d610-45f7-b23e-11db4d12fdda", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 10:00:49 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:00:49 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 10:00:49 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 10:00:49 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 10:00:49 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 10:00:50 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:00:50 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "65f6f73f-d610-45f7-b23e-11db4d12fdda", "format": "json"}]: dispatch
Feb 20 10:00:50 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/3919460560' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 10:00:50 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/3919460560' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 10:00:51 np0005625203.localdomain ceph-mon[296066]: pgmap v524: 177 pgs: 177 active+clean; 205 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 39 KiB/s rd, 76 KiB/s wr, 62 op/s
Feb 20 10:00:51 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:51.662 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:00:51 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:51.929 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:00:52 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb 20 10:00:52 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch
Feb 20 10:00:52 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished
Feb 20 10:00:53 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e240 e240: 6 total, 6 up, 6 in
Feb 20 10:00:53 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice bob", "tenant_id": "8e04bc360fa14db4a793bc5de7a0a299", "access_level": "rw", "format": "json"}]: dispatch
Feb 20 10:00:53 np0005625203.localdomain ceph-mon[296066]: pgmap v525: 177 pgs: 177 active+clean; 205 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 91 KiB/s rd, 74 KiB/s wr, 129 op/s
Feb 20 10:00:53 np0005625203.localdomain ceph-mon[296066]: osdmap e240: 6 total, 6 up, 6 in
Feb 20 10:00:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #49. Immutable memtables: 0.
Feb 20 10:00:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:00:53.417267) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 20 10:00:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 49
Feb 20 10:00:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581653417305, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 2908, "num_deletes": 273, "total_data_size": 4087565, "memory_usage": 4222800, "flush_reason": "Manual Compaction"}
Feb 20 10:00:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #50: started
Feb 20 10:00:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581653433066, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 50, "file_size": 2672399, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29161, "largest_seqno": 32064, "table_properties": {"data_size": 2660779, "index_size": 7293, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3269, "raw_key_size": 29325, "raw_average_key_size": 22, "raw_value_size": 2635779, "raw_average_value_size": 2044, "num_data_blocks": 311, "num_entries": 1289, "num_filter_entries": 1289, "num_deletions": 273, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771581543, "oldest_key_time": 1771581543, "file_creation_time": 1771581653, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d7091244-ec7b-4a4c-98bf-27480b1bb7f4", "db_session_id": "XSHOJ401GNN3F43CMPC3", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Feb 20 10:00:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 15856 microseconds, and 7001 cpu microseconds.
Feb 20 10:00:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 20 10:00:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:00:53.433120) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #50: 2672399 bytes OK
Feb 20 10:00:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:00:53.433144) [db/memtable_list.cc:519] [default] Level-0 commit table #50 started
Feb 20 10:00:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:00:53.435131) [db/memtable_list.cc:722] [default] Level-0 commit table #50: memtable #1 done
Feb 20 10:00:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:00:53.435150) EVENT_LOG_v1 {"time_micros": 1771581653435145, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 20 10:00:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:00:53.435180) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 20 10:00:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 4073584, prev total WAL file size 4073584, number of live WAL files 2.
Feb 20 10:00:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000046.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 10:00:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:00:53.436298) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132323939' seq:72057594037927935, type:22 .. '7061786F73003132353531' seq:0, type:0; will stop at (end)
Feb 20 10:00:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 20 10:00:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [50(2609KB)], [48(18MB)]
Feb 20 10:00:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581653436406, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [50], "files_L6": [48], "score": -1, "input_data_size": 21603488, "oldest_snapshot_seqno": -1}
Feb 20 10:00:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #51: 13873 keys, 19893752 bytes, temperature: kUnknown
Feb 20 10:00:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581653527483, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 51, "file_size": 19893752, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 19810072, "index_size": 47931, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 34693, "raw_key_size": 369885, "raw_average_key_size": 26, "raw_value_size": 19569982, "raw_average_value_size": 1410, "num_data_blocks": 1820, "num_entries": 13873, "num_filter_entries": 13873, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580698, "oldest_key_time": 0, "file_creation_time": 1771581653, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d7091244-ec7b-4a4c-98bf-27480b1bb7f4", "db_session_id": "XSHOJ401GNN3F43CMPC3", "orig_file_number": 51, "seqno_to_time_mapping": "N/A"}}
Feb 20 10:00:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 20 10:00:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:00:53.527994) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 19893752 bytes
Feb 20 10:00:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:00:53.529882) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 236.8 rd, 218.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 18.1 +0.0 blob) out(19.0 +0.0 blob), read-write-amplify(15.5) write-amplify(7.4) OK, records in: 14429, records dropped: 556 output_compression: NoCompression
Feb 20 10:00:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:00:53.529939) EVENT_LOG_v1 {"time_micros": 1771581653529926, "job": 28, "event": "compaction_finished", "compaction_time_micros": 91235, "compaction_time_cpu_micros": 61816, "output_level": 6, "num_output_files": 1, "total_output_size": 19893752, "num_input_records": 14429, "num_output_records": 13873, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 20 10:00:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 10:00:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581653530515, "job": 28, "event": "table_file_deletion", "file_number": 50}
Feb 20 10:00:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000048.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 10:00:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581653532924, "job": 28, "event": "table_file_deletion", "file_number": 48}
Feb 20 10:00:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:00:53.436115) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 10:00:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:00:53.533023) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 10:00:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:00:53.533029) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 10:00:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:00:53.533032) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 10:00:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:00:53.533036) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 10:00:53 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:00:53.533038) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 10:00:54 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "65f6f73f-d610-45f7-b23e-11db4d12fdda", "format": "json"}]: dispatch
Feb 20 10:00:54 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "65f6f73f-d610-45f7-b23e-11db4d12fdda", "force": true, "format": "json"}]: dispatch
Feb 20 10:00:54 np0005625203.localdomain ceph-mon[296066]: pgmap v527: 177 pgs: 177 active+clean; 205 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 60 KiB/s rd, 43 KiB/s wr, 83 op/s
Feb 20 10:00:54 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 10:00:55 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:00:55 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb],prefix=session evict} (starting...)
Feb 20 10:00:55 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:55.342 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:00:55 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:55.342 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 20 10:00:55 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:55.376 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 20 10:00:55 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb 20 10:00:55 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb 20 10:00:55 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Feb 20 10:00:55 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Feb 20 10:00:55 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb 20 10:00:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:56.695 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:00:56 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:56.931 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:00:57 np0005625203.localdomain ceph-mon[296066]: pgmap v528: 177 pgs: 177 active+clean; 206 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 57 KiB/s rd, 68 KiB/s wr, 83 op/s
Feb 20 10:00:57 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 10:00:57 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 10:00:57 np0005625203.localdomain systemd[1]: tmp-crun.iTmmyg.mount: Deactivated successfully.
Feb 20 10:00:57 np0005625203.localdomain podman[326981]: 2026-02-20 10:00:57.798895519 +0000 UTC m=+0.105331920 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 20 10:00:57 np0005625203.localdomain podman[326980]: 2026-02-20 10:00:57.839856256 +0000 UTC m=+0.150940961 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 10:00:57 np0005625203.localdomain podman[326980]: 2026-02-20 10:00:57.877618174 +0000 UTC m=+0.188702889 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 10:00:57 np0005625203.localdomain podman[326981]: 2026-02-20 10:00:57.880255216 +0000 UTC m=+0.186691587 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 20 10:00:57 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 10:00:57 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 10:00:58 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e241 e241: 6 total, 6 up, 6 in
Feb 20 10:00:58 np0005625203.localdomain podman[240359]: time="2026-02-20T10:00:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 10:00:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:10:00:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 155894 "" "Go-http-client/1.1"
Feb 20 10:00:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:10:00:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18316 "" "Go-http-client/1.1"
Feb 20 10:00:59 np0005625203.localdomain ceph-mon[296066]: pgmap v529: 177 pgs: 177 active+clean; 206 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 48 KiB/s rd, 57 KiB/s wr, 69 op/s
Feb 20 10:00:59 np0005625203.localdomain ceph-mon[296066]: osdmap e241: 6 total, 6 up, 6 in
Feb 20 10:00:59 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice bob", "tenant_id": "8e04bc360fa14db4a793bc5de7a0a299", "access_level": "r", "format": "json"}]: dispatch
Feb 20 10:00:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb 20 10:00:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch
Feb 20 10:00:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.alice bob", "caps": ["mds", "allow r path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow r pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished
Feb 20 10:00:59 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:00:59.376 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:00:59 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e242 e242: 6 total, 6 up, 6 in
Feb 20 10:01:00 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:01:00 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 10:01:00 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:01:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:00.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:01:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:00.375 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 10:01:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:00.375 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 10:01:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:00.376 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 10:01:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:00.376 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Auditing locally available compute resources for np0005625203.localdomain (node: np0005625203.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 20 10:01:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:00.376 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 10:01:00 np0005625203.localdomain ceph-mon[296066]: osdmap e242: 6 total, 6 up, 6 in
Feb 20 10:01:00 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:01:00 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e243 e243: 6 total, 6 up, 6 in
Feb 20 10:01:00 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 20 10:01:00 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3769969177' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 10:01:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:00.825 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 10:01:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:01.049 279640 WARNING nova.virt.libvirt.driver [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 20 10:01:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:01.051 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Hypervisor/Node resource view: name=np0005625203.localdomain free_ram=11613MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 20 10:01:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:01.051 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 10:01:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:01.052 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 10:01:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:01.136 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 20 10:01:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:01.137 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Final resource view: name=np0005625203.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 20 10:01:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:01.161 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 10:01:01 np0005625203.localdomain ceph-mon[296066]: pgmap v532: 177 pgs: 177 active+clean; 206 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 156 B/s rd, 89 KiB/s wr, 8 op/s
Feb 20 10:01:01 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5df751b6-6f21-4d96-be03-f1dd0a841066", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 10:01:01 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5df751b6-6f21-4d96-be03-f1dd0a841066", "format": "json"}]: dispatch
Feb 20 10:01:01 np0005625203.localdomain ceph-mon[296066]: osdmap e243: 6 total, 6 up, 6 in
Feb 20 10:01:01 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/3769969177' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 10:01:01 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 20 10:01:01 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2951262390' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 10:01:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:01.625 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 10:01:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:01.631 279640 DEBUG nova.compute.provider_tree [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Inventory has not changed in ProviderTree for provider: e5d5157a-2df2-4f51-b5fb-cd2da3a8584e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 20 10:01:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:01.654 279640 DEBUG nova.scheduler.client.report [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Inventory has not changed for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 20 10:01:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:01.656 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Compute_service record updated for np0005625203.localdomain:np0005625203.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 20 10:01:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:01.657 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.605s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 10:01:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:01.739 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:01:01 np0005625203.localdomain CROND[327070]: (root) CMD (run-parts /etc/cron.hourly)
Feb 20 10:01:01 np0005625203.localdomain run-parts[327073]: (/etc/cron.hourly) starting 0anacron
Feb 20 10:01:01 np0005625203.localdomain run-parts[327079]: (/etc/cron.hourly) finished 0anacron
Feb 20 10:01:01 np0005625203.localdomain CROND[327069]: (root) CMDEND (run-parts /etc/cron.hourly)
Feb 20 10:01:01 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk asok_command: session evict {filters=[auth_name=alice bob,client_metadata.root=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb],prefix=session evict} (starting...)
Feb 20 10:01:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:01.934 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:01:02 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/2951262390' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 10:01:02 np0005625203.localdomain ceph-mon[296066]: pgmap v534: 177 pgs: 177 active+clean; 206 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 22 KiB/s rd, 63 KiB/s wr, 39 op/s
Feb 20 10:01:02 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb 20 10:01:02 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.alice bob", "format": "json"} : dispatch
Feb 20 10:01:02 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.alice bob"} : dispatch
Feb 20 10:01:02 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.alice bob"}]': finished
Feb 20 10:01:02 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "alice bob", "format": "json"}]: dispatch
Feb 20 10:01:02 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/1908739206' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 10:01:02 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/1908739206' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 10:01:02 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:01:02.496 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'c2:c2:14', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '46:d0:69:88:97:47'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 10:01:02 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:01:02.497 161112 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 20 10:01:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:02.496 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:01:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:02.658 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:01:02 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e244 e244: 6 total, 6 up, 6 in
Feb 20 10:01:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #52. Immutable memtables: 0.
Feb 20 10:01:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:01:03.108381) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 20 10:01:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 52
Feb 20 10:01:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581663108431, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 466, "num_deletes": 250, "total_data_size": 413035, "memory_usage": 422576, "flush_reason": "Manual Compaction"}
Feb 20 10:01:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #53: started
Feb 20 10:01:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581663112181, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 53, "file_size": 270910, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 32069, "largest_seqno": 32530, "table_properties": {"data_size": 268258, "index_size": 699, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 7400, "raw_average_key_size": 21, "raw_value_size": 262609, "raw_average_value_size": 746, "num_data_blocks": 31, "num_entries": 352, "num_filter_entries": 352, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771581653, "oldest_key_time": 1771581653, "file_creation_time": 1771581663, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d7091244-ec7b-4a4c-98bf-27480b1bb7f4", "db_session_id": "XSHOJ401GNN3F43CMPC3", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Feb 20 10:01:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 3845 microseconds, and 1514 cpu microseconds.
Feb 20 10:01:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 20 10:01:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:01:03.112227) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #53: 270910 bytes OK
Feb 20 10:01:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:01:03.112252) [db/memtable_list.cc:519] [default] Level-0 commit table #53 started
Feb 20 10:01:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:01:03.113729) [db/memtable_list.cc:722] [default] Level-0 commit table #53: memtable #1 done
Feb 20 10:01:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:01:03.113750) EVENT_LOG_v1 {"time_micros": 1771581663113744, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 20 10:01:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:01:03.113775) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 20 10:01:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 410099, prev total WAL file size 410099, number of live WAL files 2.
Feb 20 10:01:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000049.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 10:01:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:01:03.116256) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740034303036' seq:72057594037927935, type:22 .. '6D6772737461740034323537' seq:0, type:0; will stop at (end)
Feb 20 10:01:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 20 10:01:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [53(264KB)], [51(18MB)]
Feb 20 10:01:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581663116299, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [53], "files_L6": [51], "score": -1, "input_data_size": 20164662, "oldest_snapshot_seqno": -1}
Feb 20 10:01:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #54: 13702 keys, 18046322 bytes, temperature: kUnknown
Feb 20 10:01:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581663203433, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 54, "file_size": 18046322, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17968264, "index_size": 42693, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 34309, "raw_key_size": 366678, "raw_average_key_size": 26, "raw_value_size": 17735638, "raw_average_value_size": 1294, "num_data_blocks": 1601, "num_entries": 13702, "num_filter_entries": 13702, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580698, "oldest_key_time": 0, "file_creation_time": 1771581663, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d7091244-ec7b-4a4c-98bf-27480b1bb7f4", "db_session_id": "XSHOJ401GNN3F43CMPC3", "orig_file_number": 54, "seqno_to_time_mapping": "N/A"}}
Feb 20 10:01:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 20 10:01:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:01:03.203935) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 18046322 bytes
Feb 20 10:01:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:01:03.206431) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 231.1 rd, 206.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 19.0 +0.0 blob) out(17.2 +0.0 blob), read-write-amplify(141.0) write-amplify(66.6) OK, records in: 14225, records dropped: 523 output_compression: NoCompression
Feb 20 10:01:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:01:03.206461) EVENT_LOG_v1 {"time_micros": 1771581663206448, "job": 30, "event": "compaction_finished", "compaction_time_micros": 87266, "compaction_time_cpu_micros": 54528, "output_level": 6, "num_output_files": 1, "total_output_size": 18046322, "num_input_records": 14225, "num_output_records": 13702, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 20 10:01:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 10:01:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581663206670, "job": 30, "event": "table_file_deletion", "file_number": 53}
Feb 20 10:01:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000051.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 10:01:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581663210046, "job": 30, "event": "table_file_deletion", "file_number": 51}
Feb 20 10:01:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:01:03.116171) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 10:01:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:01:03.210111) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 10:01:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:01:03.210119) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 10:01:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:01:03.210122) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 10:01:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:01:03.210126) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 10:01:03 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:01:03.210129) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 10:01:03 np0005625203.localdomain ceph-mon[296066]: osdmap e244: 6 total, 6 up, 6 in
Feb 20 10:01:03 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/884298509' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 10:01:03 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "5df751b6-6f21-4d96-be03-f1dd0a841066", "snap_name": "ba024a11-8c4d-4adf-9f1b-141c894e0dc3", "format": "json"}]: dispatch
Feb 20 10:01:04 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:04.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:01:04 np0005625203.localdomain ceph-mon[296066]: pgmap v536: 177 pgs: 177 active+clean; 206 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 24 KiB/s rd, 69 KiB/s wr, 43 op/s
Feb 20 10:01:04 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/2041852401' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 10:01:05 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:01:05 np0005625203.localdomain sshd[327080]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 10:01:05 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e245 e245: 6 total, 6 up, 6 in
Feb 20 10:01:05 np0005625203.localdomain sshd[327080]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 10:01:05 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "bob", "tenant_id": "8e04bc360fa14db4a793bc5de7a0a299", "access_level": "rw", "format": "json"}]: dispatch
Feb 20 10:01:05 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Feb 20 10:01:05 np0005625203.localdomain ceph-mon[296066]: osdmap e245: 6 total, 6 up, 6 in
Feb 20 10:01:05 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"} : dispatch
Feb 20 10:01:05 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5", "mon", "allow r"], "format": "json"}]': finished
Feb 20 10:01:06 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:06.784 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:01:06 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:06.936 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:01:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   10:01:07 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 10:01:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   10:01:07 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 10:01:07 np0005625203.localdomain ceph-mon[296066]: pgmap v538: 177 pgs: 177 active+clean; 206 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 67 KiB/s rd, 51 KiB/s wr, 99 op/s
Feb 20 10:01:07 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e246 e246: 6 total, 6 up, 6 in
Feb 20 10:01:07 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:07.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:01:07 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:07.342 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:01:07 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:07.342 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 20 10:01:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:01:07.675 161112 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 10:01:07 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 10:01:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:01:07.675 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 10:01:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:01:07.676 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 10:01:07 np0005625203.localdomain podman[327082]: 2026-02-20 10:01:07.778976113 +0000 UTC m=+0.088534241 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Feb 20 10:01:07 np0005625203.localdomain podman[327082]: 2026-02-20 10:01:07.784858974 +0000 UTC m=+0.094417092 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent)
Feb 20 10:01:07 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 10:01:08 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e247 e247: 6 total, 6 up, 6 in
Feb 20 10:01:08 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "5df751b6-6f21-4d96-be03-f1dd0a841066", "snap_name": "ba024a11-8c4d-4adf-9f1b-141c894e0dc3", "target_sub_name": "97e63579-f59d-4812-9af1-a8d227932ace", "format": "json"}]: dispatch
Feb 20 10:01:08 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "97e63579-f59d-4812-9af1-a8d227932ace", "format": "json"}]: dispatch
Feb 20 10:01:08 np0005625203.localdomain ceph-mon[296066]: osdmap e246: 6 total, 6 up, 6 in
Feb 20 10:01:08 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/1804254351' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 10:01:08 np0005625203.localdomain ceph-mon[296066]: osdmap e247: 6 total, 6 up, 6 in
Feb 20 10:01:09 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 10:01:09 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:01:09 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:09.343 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:01:09 np0005625203.localdomain ceph-mon[296066]: pgmap v540: 177 pgs: 177 active+clean; 206 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 44 KiB/s rd, 47 KiB/s wr, 65 op/s
Feb 20 10:01:09 np0005625203.localdomain ceph-mon[296066]: mgrmap e52: np0005625202.arwxwo(active, since 12m), standbys: np0005625203.lonygy, np0005625204.exgrzx
Feb 20 10:01:09 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/1460565932' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 10:01:09 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/1460565932' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 10:01:09 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/3375971377' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 10:01:09 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:01:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:01:09.500 161112 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1e4d60e6-0be0-4143-b488-1b391fbc71ef, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 20 10:01:10 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:01:10 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:10.337 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:01:10 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8be201ef-8dd5-4872-91e4-0290b94f4b6d", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 10:01:10 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8be201ef-8dd5-4872-91e4-0290b94f4b6d", "format": "json"}]: dispatch
Feb 20 10:01:11 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:11.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:01:11 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:11.342 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 20 10:01:11 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:11.342 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 20 10:01:11 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:11.356 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 20 10:01:11 np0005625203.localdomain ceph-mon[296066]: pgmap v542: 177 pgs: 177 active+clean; 207 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 44 KiB/s rd, 130 KiB/s wr, 72 op/s
Feb 20 10:01:11 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:11.809 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:01:11 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:11.938 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:01:13 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e248 e248: 6 total, 6 up, 6 in
Feb 20 10:01:13 np0005625203.localdomain ceph-mon[296066]: pgmap v543: 177 pgs: 177 active+clean; 207 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 33 KiB/s rd, 80 KiB/s wr, 53 op/s
Feb 20 10:01:13 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Feb 20 10:01:13 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb,allow rw path=/volumes/_nogroup/8be201ef-8dd5-4872-91e4-0290b94f4b6d/d3136894-05be-4407-a87e-f697b6b3efd1", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5,allow rw pool=manila_data namespace=fsvolumens_8be201ef-8dd5-4872-91e4-0290b94f4b6d"]} : dispatch
Feb 20 10:01:13 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mon", "allow r", "mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb,allow rw path=/volumes/_nogroup/8be201ef-8dd5-4872-91e4-0290b94f4b6d/d3136894-05be-4407-a87e-f697b6b3efd1", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5,allow rw pool=manila_data namespace=fsvolumens_8be201ef-8dd5-4872-91e4-0290b94f4b6d"]}]': finished
Feb 20 10:01:13 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Feb 20 10:01:13 np0005625203.localdomain ceph-mon[296066]: osdmap e248: 6 total, 6 up, 6 in
Feb 20 10:01:14 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e249 e249: 6 total, 6 up, 6 in
Feb 20 10:01:14 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "8be201ef-8dd5-4872-91e4-0290b94f4b6d", "auth_id": "bob", "tenant_id": "8e04bc360fa14db4a793bc5de7a0a299", "access_level": "rw", "format": "json"}]: dispatch
Feb 20 10:01:14 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 10:01:14 np0005625203.localdomain podman[327101]: 2026-02-20 10:01:14.767916153 +0000 UTC m=+0.076257451 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Feb 20 10:01:14 np0005625203.localdomain podman[327101]: 2026-02-20 10:01:14.837578767 +0000 UTC m=+0.145920085 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 20 10:01:14 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 10:01:15 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:01:15 np0005625203.localdomain ceph-mon[296066]: pgmap v545: 177 pgs: 177 active+clean; 207 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 34 KiB/s rd, 81 KiB/s wr, 54 op/s
Feb 20 10:01:15 np0005625203.localdomain ceph-mon[296066]: osdmap e249: 6 total, 6 up, 6 in
Feb 20 10:01:15 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e250 e250: 6 total, 6 up, 6 in
Feb 20 10:01:15 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk asok_command: session evict {filters=[auth_name=bob,client_metadata.root=/volumes/_nogroup/8be201ef-8dd5-4872-91e4-0290b94f4b6d/d3136894-05be-4407-a87e-f697b6b3efd1],prefix=session evict} (starting...)
Feb 20 10:01:16 np0005625203.localdomain ceph-mon[296066]: osdmap e250: 6 total, 6 up, 6 in
Feb 20 10:01:16 np0005625203.localdomain ceph-mon[296066]: pgmap v548: 177 pgs: 177 active+clean; 207 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 47 KiB/s rd, 35 KiB/s wr, 71 op/s
Feb 20 10:01:16 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "8be201ef-8dd5-4872-91e4-0290b94f4b6d", "auth_id": "bob", "format": "json"}]: dispatch
Feb 20 10:01:16 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Feb 20 10:01:16 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5"]} : dispatch
Feb 20 10:01:16 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth caps", "entity": "client.bob", "caps": ["mds", "allow rw path=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb", "osd", "allow rw pool=manila_data namespace=fsvolumens_500c5eac-e7f6-4365-974d-923d6976cbd5"]}]': finished
Feb 20 10:01:16 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "8be201ef-8dd5-4872-91e4-0290b94f4b6d", "auth_id": "bob", "format": "json"}]: dispatch
Feb 20 10:01:16 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 20 10:01:16 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1384276159' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 10:01:16 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 20 10:01:16 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/1384276159' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 10:01:16 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:16.814 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:01:16 np0005625203.localdomain sshd[327126]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 10:01:16 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:16.940 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:01:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:01:17.204 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:01:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:01:17.204 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:01:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:01:17.205 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:01:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:01:17.205 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:01:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:01:17.205 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:01:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:01:17.205 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:01:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:01:17.205 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:01:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:01:17.205 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:01:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:01:17.205 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:01:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:01:17.206 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:01:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:01:17.206 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:01:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:01:17.206 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:01:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:01:17.206 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:01:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:01:17.206 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:01:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:01:17.206 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:01:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:01:17.206 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:01:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:01:17.206 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:01:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:01:17.207 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:01:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:01:17.207 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:01:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:01:17.207 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:01:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:01:17.207 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:01:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:01:17.207 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:01:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:01:17.207 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:01:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:01:17.207 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:01:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:01:17.208 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:01:17 np0005625203.localdomain sshd[327126]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 10:01:17 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 10:01:17 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 10:01:17 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:17.342 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:01:17 np0005625203.localdomain podman[327129]: 2026-02-20 10:01:17.392858993 +0000 UTC m=+0.080126130 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.created=2026-02-05T04:57:10Z, distribution-scope=public, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, name=ubi9/ubi-minimal, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, version=9.7, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., io.openshift.expose-services=, config_id=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2026-02-05T04:57:10Z)
Feb 20 10:01:17 np0005625203.localdomain podman[327129]: 2026-02-20 10:01:17.43124598 +0000 UTC m=+0.118513117 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.7, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, distribution-scope=public, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, name=ubi9/ubi-minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, build-date=2026-02-05T04:57:10Z, maintainer=Red Hat, Inc., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347)
Feb 20 10:01:17 np0005625203.localdomain systemd[1]: tmp-crun.fRpFBc.mount: Deactivated successfully.
Feb 20 10:01:17 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 10:01:17 np0005625203.localdomain podman[327128]: 2026-02-20 10:01:17.452258411 +0000 UTC m=+0.142446979 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 20 10:01:17 np0005625203.localdomain podman[327128]: 2026-02-20 10:01:17.460624779 +0000 UTC m=+0.150813337 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true)
Feb 20 10:01:17 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 10:01:17 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/1384276159' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 10:01:17 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/1384276159' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 10:01:19 np0005625203.localdomain ceph-mon[296066]: pgmap v549: 177 pgs: 177 active+clean; 207 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 11 KiB/s rd, 31 KiB/s wr, 21 op/s
Feb 20 10:01:19 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk asok_command: session evict {filters=[auth_name=bob,client_metadata.root=/volumes/_nogroup/500c5eac-e7f6-4365-974d-923d6976cbd5/25e42ac7-68f5-4e72-8fae-18ed5a220bcb],prefix=session evict} (starting...)
Feb 20 10:01:20 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:01:20 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "bob", "format": "json"}]: dispatch
Feb 20 10:01:20 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.bob", "format": "json"} : dispatch
Feb 20 10:01:20 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.bob"} : dispatch
Feb 20 10:01:20 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.bob"}]': finished
Feb 20 10:01:20 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "auth_id": "bob", "format": "json"}]: dispatch
Feb 20 10:01:21 np0005625203.localdomain ceph-mon[296066]: pgmap v550: 177 pgs: 177 active+clean; 207 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 10 KiB/s rd, 84 KiB/s wr, 24 op/s
Feb 20 10:01:21 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:21.369 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:01:21 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:21.369 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 20 10:01:21 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 10:01:21 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:01:21 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:21.841 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:01:21 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:21.941 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:01:22 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:01:23 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e251 e251: 6 total, 6 up, 6 in
Feb 20 10:01:23 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "afaf9e02-272e-4ffc-b272-cdee923c5a1d", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 10:01:23 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "afaf9e02-272e-4ffc-b272-cdee923c5a1d", "format": "json"}]: dispatch
Feb 20 10:01:23 np0005625203.localdomain ceph-mon[296066]: pgmap v551: 177 pgs: 177 active+clean; 207 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 34 KiB/s rd, 71 KiB/s wr, 55 op/s
Feb 20 10:01:23 np0005625203.localdomain ceph-mon[296066]: osdmap e251: 6 total, 6 up, 6 in
Feb 20 10:01:23 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T10:01:23Z|00411|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Feb 20 10:01:24 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8be201ef-8dd5-4872-91e4-0290b94f4b6d", "format": "json"}]: dispatch
Feb 20 10:01:24 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8be201ef-8dd5-4872-91e4-0290b94f4b6d", "force": true, "format": "json"}]: dispatch
Feb 20 10:01:25 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:01:25 np0005625203.localdomain ceph-mon[296066]: pgmap v553: 177 pgs: 177 active+clean; 207 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 25 KiB/s rd, 46 KiB/s wr, 38 op/s
Feb 20 10:01:26 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "afaf9e02-272e-4ffc-b272-cdee923c5a1d", "snap_name": "607e1fd9-dd9b-4387-8d26-3ad0a4a7fc6e", "format": "json"}]: dispatch
Feb 20 10:01:26 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:26.878 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:01:26 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:26.942 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:01:27 np0005625203.localdomain ceph-mon[296066]: pgmap v554: 177 pgs: 177 active+clean; 207 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 57 KiB/s wr, 33 op/s
Feb 20 10:01:27 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "format": "json"}]: dispatch
Feb 20 10:01:27 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "500c5eac-e7f6-4365-974d-923d6976cbd5", "force": true, "format": "json"}]: dispatch
Feb 20 10:01:28 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 10:01:28 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:01:28 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 10:01:28 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 10:01:28 np0005625203.localdomain podman[327166]: 2026-02-20 10:01:28.766313372 +0000 UTC m=+0.083010159 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 20 10:01:28 np0005625203.localdomain podman[327166]: 2026-02-20 10:01:28.775190587 +0000 UTC m=+0.091887334 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 20 10:01:28 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 10:01:28 np0005625203.localdomain podman[327167]: 2026-02-20 10:01:28.832774868 +0000 UTC m=+0.144041937 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 20 10:01:28 np0005625203.localdomain podman[327167]: 2026-02-20 10:01:28.866586284 +0000 UTC m=+0.177853383 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 20 10:01:28 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 10:01:28 np0005625203.localdomain podman[240359]: time="2026-02-20T10:01:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 10:01:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:10:01:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 155894 "" "Go-http-client/1.1"
Feb 20 10:01:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:10:01:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18301 "" "Go-http-client/1.1"
Feb 20 10:01:29 np0005625203.localdomain ceph-mon[296066]: pgmap v555: 177 pgs: 177 active+clean; 207 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 57 KiB/s wr, 33 op/s
Feb 20 10:01:29 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "afaf9e02-272e-4ffc-b272-cdee923c5a1d", "snap_name": "607e1fd9-dd9b-4387-8d26-3ad0a4a7fc6e_7d12ad99-df51-4085-a46f-32f38bb8f276", "force": true, "format": "json"}]: dispatch
Feb 20 10:01:29 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "afaf9e02-272e-4ffc-b272-cdee923c5a1d", "snap_name": "607e1fd9-dd9b-4387-8d26-3ad0a4a7fc6e", "force": true, "format": "json"}]: dispatch
Feb 20 10:01:29 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "cd0ea541-924c-41c6-95b4-11e7d85bd173", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 10:01:29 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "cd0ea541-924c-41c6-95b4-11e7d85bd173", "format": "json"}]: dispatch
Feb 20 10:01:29 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:01:30 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:01:30 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 10:01:30 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:01:30 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:01:31 np0005625203.localdomain ceph-mon[296066]: pgmap v556: 177 pgs: 177 active+clean; 208 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 21 KiB/s rd, 53 KiB/s wr, 33 op/s
Feb 20 10:01:31 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "27c01f2a-f558-4311-a88e-0bf402c95817", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 10:01:31 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "27c01f2a-f558-4311-a88e-0bf402c95817", "format": "json"}]: dispatch
Feb 20 10:01:31 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:31.880 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:01:31 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:31.943 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:01:32 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 10:01:32 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:01:32 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e252 e252: 6 total, 6 up, 6 in
Feb 20 10:01:32 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "afaf9e02-272e-4ffc-b272-cdee923c5a1d", "format": "json"}]: dispatch
Feb 20 10:01:32 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "afaf9e02-272e-4ffc-b272-cdee923c5a1d", "force": true, "format": "json"}]: dispatch
Feb 20 10:01:32 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:01:33 np0005625203.localdomain ceph-mon[296066]: pgmap v557: 177 pgs: 177 active+clean; 208 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 716 B/s rd, 52 KiB/s wr, 6 op/s
Feb 20 10:01:33 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "3176bf32-687c-42ee-9751-803dd8a1fea3", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 10:01:33 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "3176bf32-687c-42ee-9751-803dd8a1fea3", "format": "json"}]: dispatch
Feb 20 10:01:33 np0005625203.localdomain ceph-mon[296066]: osdmap e252: 6 total, 6 up, 6 in
Feb 20 10:01:34 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "27c01f2a-f558-4311-a88e-0bf402c95817", "snap_name": "ddab1265-f000-456b-af11-3f6f3cbbac23", "format": "json"}]: dispatch
Feb 20 10:01:34 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 10:01:34 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:01:35 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:01:35 np0005625203.localdomain ceph-mon[296066]: pgmap v559: 177 pgs: 177 active+clean; 208 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 716 B/s rd, 52 KiB/s wr, 6 op/s
Feb 20 10:01:35 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:01:35 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-408485567", "format": "json"} : dispatch
Feb 20 10:01:35 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-408485567", "caps": ["mds", "allow rw path=/volumes/_nogroup/3176bf32-687c-42ee-9751-803dd8a1fea3/40c90f7c-2187-4f07-808e-7e98f84f2bd9", "osd", "allow rw pool=manila_data namespace=fsvolumens_3176bf32-687c-42ee-9751-803dd8a1fea3", "mon", "allow r"], "format": "json"} : dispatch
Feb 20 10:01:35 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-408485567", "caps": ["mds", "allow rw path=/volumes/_nogroup/3176bf32-687c-42ee-9751-803dd8a1fea3/40c90f7c-2187-4f07-808e-7e98f84f2bd9", "osd", "allow rw pool=manila_data namespace=fsvolumens_3176bf32-687c-42ee-9751-803dd8a1fea3", "mon", "allow r"], "format": "json"}]': finished
Feb 20 10:01:35 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 10:01:35 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:01:36 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "9ee8c4a3-7471-4713-8c2c-cbe769628e3f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 10:01:36 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "9ee8c4a3-7471-4713-8c2c-cbe769628e3f", "format": "json"}]: dispatch
Feb 20 10:01:36 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "3176bf32-687c-42ee-9751-803dd8a1fea3", "auth_id": "tempest-cephx-id-408485567", "tenant_id": "ad1dd0d43ec7421b9a4a06c39e3657c6", "access_level": "rw", "format": "json"}]: dispatch
Feb 20 10:01:36 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:01:36 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:36.917 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:01:36 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:36.944 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:01:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   10:01:37 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 10:01:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 10:01:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   10:01:37 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 10:01:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 10:01:37 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 10:01:37 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:01:37 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "04824cf7-cd0e-41ba-8d21-04797c7215a6", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 10:01:37 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "04824cf7-cd0e-41ba-8d21-04797c7215a6", "format": "json"}]: dispatch
Feb 20 10:01:37 np0005625203.localdomain ceph-mon[296066]: pgmap v560: 177 pgs: 177 active+clean; 208 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 75 KiB/s wr, 7 op/s
Feb 20 10:01:37 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:01:38 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e253 e253: 6 total, 6 up, 6 in
Feb 20 10:01:38 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "16497a5d-c9a5-4ff1-bddc-189b04ad6dac", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 10:01:38 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "16497a5d-c9a5-4ff1-bddc-189b04ad6dac", "format": "json"}]: dispatch
Feb 20 10:01:38 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "27c01f2a-f558-4311-a88e-0bf402c95817", "snap_name": "ddab1265-f000-456b-af11-3f6f3cbbac23_b97127c3-1060-4d01-b54f-99018ca9ac64", "force": true, "format": "json"}]: dispatch
Feb 20 10:01:38 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "27c01f2a-f558-4311-a88e-0bf402c95817", "snap_name": "ddab1265-f000-456b-af11-3f6f3cbbac23", "force": true, "format": "json"}]: dispatch
Feb 20 10:01:38 np0005625203.localdomain ceph-mon[296066]: pgmap v561: 177 pgs: 177 active+clean; 208 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 75 KiB/s wr, 7 op/s
Feb 20 10:01:38 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "9ee8c4a3-7471-4713-8c2c-cbe769628e3f", "format": "json"}]: dispatch
Feb 20 10:01:38 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "9ee8c4a3-7471-4713-8c2c-cbe769628e3f", "force": true, "format": "json"}]: dispatch
Feb 20 10:01:38 np0005625203.localdomain ceph-mon[296066]: osdmap e253: 6 total, 6 up, 6 in
Feb 20 10:01:38 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "97e63579-f59d-4812-9af1-a8d227932ace", "format": "json"}]: dispatch
Feb 20 10:01:38 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 10:01:38 np0005625203.localdomain podman[327212]: 2026-02-20 10:01:38.784011208 +0000 UTC m=+0.089480158 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 10:01:38 np0005625203.localdomain podman[327212]: 2026-02-20 10:01:38.818397913 +0000 UTC m=+0.123866843 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 10:01:38 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 10:01:38 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 10:01:38 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:01:39 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e254 e254: 6 total, 6 up, 6 in
Feb 20 10:01:39 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "97e63579-f59d-4812-9af1-a8d227932ace", "format": "json"}]: dispatch
Feb 20 10:01:39 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:01:39 np0005625203.localdomain ceph-mon[296066]: osdmap e254: 6 total, 6 up, 6 in
Feb 20 10:01:39 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "3176bf32-687c-42ee-9751-803dd8a1fea3", "auth_id": "tempest-cephx-id-408485567", "format": "json"}]: dispatch
Feb 20 10:01:39 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-408485567", "format": "json"} : dispatch
Feb 20 10:01:39 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-408485567"} : dispatch
Feb 20 10:01:39 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-408485567"}]': finished
Feb 20 10:01:39 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk asok_command: session evict {filters=[auth_name=tempest-cephx-id-408485567,client_metadata.root=/volumes/_nogroup/3176bf32-687c-42ee-9751-803dd8a1fea3/40c90f7c-2187-4f07-808e-7e98f84f2bd9],prefix=session evict} (starting...)
Feb 20 10:01:40 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:01:40 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 10:01:40 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:01:40 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "3176bf32-687c-42ee-9751-803dd8a1fea3", "auth_id": "tempest-cephx-id-408485567", "format": "json"}]: dispatch
Feb 20 10:01:40 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "04824cf7-cd0e-41ba-8d21-04797c7215a6", "format": "json"}]: dispatch
Feb 20 10:01:40 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "04824cf7-cd0e-41ba-8d21-04797c7215a6", "force": true, "format": "json"}]: dispatch
Feb 20 10:01:40 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "3176bf32-687c-42ee-9751-803dd8a1fea3", "format": "json"}]: dispatch
Feb 20 10:01:40 np0005625203.localdomain ceph-mon[296066]: pgmap v564: 177 pgs: 177 active+clean; 208 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 280 B/s rd, 57 KiB/s wr, 5 op/s
Feb 20 10:01:40 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "3176bf32-687c-42ee-9751-803dd8a1fea3", "force": true, "format": "json"}]: dispatch
Feb 20 10:01:40 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "27c01f2a-f558-4311-a88e-0bf402c95817", "format": "json"}]: dispatch
Feb 20 10:01:40 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "27c01f2a-f558-4311-a88e-0bf402c95817", "force": true, "format": "json"}]: dispatch
Feb 20 10:01:40 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8782d0aa-7804-4d19-9c02-5e9c58554cbc", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 10:01:41 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 10:01:41 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:01:41 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8782d0aa-7804-4d19-9c02-5e9c58554cbc", "format": "json"}]: dispatch
Feb 20 10:01:41 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:01:41 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "16497a5d-c9a5-4ff1-bddc-189b04ad6dac", "format": "json"}]: dispatch
Feb 20 10:01:41 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "16497a5d-c9a5-4ff1-bddc-189b04ad6dac", "force": true, "format": "json"}]: dispatch
Feb 20 10:01:41 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "5f3b4204-930b-4bc2-9cff-7e982286eac6", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 10:01:41 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:01:41 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 10:01:41 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:01:41 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:41.946 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:01:41 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:41.948 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:01:41 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:41.948 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 20 10:01:41 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:41.948 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 10:01:41 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:41.961 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:01:41 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:41.961 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 10:01:42 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "5f3b4204-930b-4bc2-9cff-7e982286eac6", "format": "json"}]: dispatch
Feb 20 10:01:42 np0005625203.localdomain ceph-mon[296066]: pgmap v565: 177 pgs: 177 active+clean; 209 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 639 B/s rd, 151 KiB/s wr, 13 op/s
Feb 20 10:01:42 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "2e325a34-23a3-4cc0-9c2e-4df32d08d36f", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 10:01:42 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "2e325a34-23a3-4cc0-9c2e-4df32d08d36f", "format": "json"}]: dispatch
Feb 20 10:01:42 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:01:42 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 10:01:42 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:01:43 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e255 e255: 6 total, 6 up, 6 in
Feb 20 10:01:43 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "58028d35-31fb-4b5c-a80b-11eaf91192f8", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 10:01:43 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "58028d35-31fb-4b5c-a80b-11eaf91192f8", "format": "json"}]: dispatch
Feb 20 10:01:43 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:01:43 np0005625203.localdomain ceph-mon[296066]: osdmap e255: 6 total, 6 up, 6 in
Feb 20 10:01:44 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "8782d0aa-7804-4d19-9c02-5e9c58554cbc", "new_size": 2147483648, "format": "json"}]: dispatch
Feb 20 10:01:44 np0005625203.localdomain ceph-mon[296066]: pgmap v567: 177 pgs: 177 active+clean; 209 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 132 KiB/s wr, 12 op/s
Feb 20 10:01:45 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:01:45 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "5f3b4204-930b-4bc2-9cff-7e982286eac6", "snap_name": "62a01af2-9c69-4d90-af43-120db3783b58", "format": "json"}]: dispatch
Feb 20 10:01:45 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "2e325a34-23a3-4cc0-9c2e-4df32d08d36f", "auth_id": "tempest-cephx-id-408485567", "tenant_id": "ad1dd0d43ec7421b9a4a06c39e3657c6", "access_level": "rw", "format": "json"}]: dispatch
Feb 20 10:01:45 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-408485567", "format": "json"} : dispatch
Feb 20 10:01:45 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-408485567", "caps": ["mds", "allow rw path=/volumes/_nogroup/2e325a34-23a3-4cc0-9c2e-4df32d08d36f/82f72010-739f-40e2-832b-6d6c7666a8fb", "osd", "allow rw pool=manila_data namespace=fsvolumens_2e325a34-23a3-4cc0-9c2e-4df32d08d36f", "mon", "allow r"], "format": "json"} : dispatch
Feb 20 10:01:45 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-408485567", "caps": ["mds", "allow rw path=/volumes/_nogroup/2e325a34-23a3-4cc0-9c2e-4df32d08d36f/82f72010-739f-40e2-832b-6d6c7666a8fb", "osd", "allow rw pool=manila_data namespace=fsvolumens_2e325a34-23a3-4cc0-9c2e-4df32d08d36f", "mon", "allow r"], "format": "json"}]': finished
Feb 20 10:01:45 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 10:01:45 np0005625203.localdomain podman[327230]: 2026-02-20 10:01:45.767741691 +0000 UTC m=+0.080043287 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Feb 20 10:01:45 np0005625203.localdomain podman[327230]: 2026-02-20 10:01:45.853807044 +0000 UTC m=+0.166108610 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 10:01:45 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 10:01:46 np0005625203.localdomain ceph-mon[296066]: pgmap v568: 177 pgs: 177 active+clean; 209 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1.2 KiB/s rd, 167 KiB/s wr, 17 op/s
Feb 20 10:01:46 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "58028d35-31fb-4b5c-a80b-11eaf91192f8", "format": "json"}]: dispatch
Feb 20 10:01:46 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "58028d35-31fb-4b5c-a80b-11eaf91192f8", "force": true, "format": "json"}]: dispatch
Feb 20 10:01:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:46.962 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:01:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:46.963 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:01:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:46.964 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 20 10:01:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:46.964 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 10:01:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:46.986 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:01:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:46.987 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 10:01:47 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8782d0aa-7804-4d19-9c02-5e9c58554cbc", "format": "json"}]: dispatch
Feb 20 10:01:47 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8782d0aa-7804-4d19-9c02-5e9c58554cbc", "force": true, "format": "json"}]: dispatch
Feb 20 10:01:47 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 10:01:47 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 10:01:47 np0005625203.localdomain podman[327254]: 2026-02-20 10:01:47.775993652 +0000 UTC m=+0.089889222 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, container_name=ceilometer_agent_compute, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Feb 20 10:01:47 np0005625203.localdomain podman[327254]: 2026-02-20 10:01:47.812312656 +0000 UTC m=+0.126208186 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3)
Feb 20 10:01:47 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 10:01:47 np0005625203.localdomain podman[327255]: 2026-02-20 10:01:47.831499359 +0000 UTC m=+0.140704884 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, container_name=openstack_network_exporter, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, architecture=x86_64, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.7, io.openshift.tags=minimal rhel9, vcs-type=git)
Feb 20 10:01:47 np0005625203.localdomain podman[327255]: 2026-02-20 10:01:47.846454282 +0000 UTC m=+0.155659807 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, architecture=x86_64, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., release=1770267347, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.7, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2026-02-05T04:57:10Z, name=ubi9/ubi-minimal, container_name=openstack_network_exporter, managed_by=edpm_ansible, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, maintainer=Red Hat, Inc., vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Feb 20 10:01:47 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 10:01:49 np0005625203.localdomain ceph-mon[296066]: pgmap v569: 177 pgs: 177 active+clean; 209 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1.1 KiB/s rd, 148 KiB/s wr, 15 op/s
Feb 20 10:01:49 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot clone", "vol_name": "cephfs", "sub_name": "5f3b4204-930b-4bc2-9cff-7e982286eac6", "snap_name": "62a01af2-9c69-4d90-af43-120db3783b58", "target_sub_name": "e59c51a9-964d-4ac9-9a67-d46c3cec7b52", "format": "json"}]: dispatch
Feb 20 10:01:49 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e59c51a9-964d-4ac9-9a67-d46c3cec7b52", "format": "json"}]: dispatch
Feb 20 10:01:49 np0005625203.localdomain sudo[327291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 10:01:49 np0005625203.localdomain sudo[327291]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 10:01:49 np0005625203.localdomain sudo[327291]: pam_unix(sudo:session): session closed for user root
Feb 20 10:01:49 np0005625203.localdomain sudo[327309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 10:01:49 np0005625203.localdomain sudo[327309]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 10:01:49 np0005625203.localdomain sudo[327309]: pam_unix(sudo:session): session closed for user root
Feb 20 10:01:50 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:01:50 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 10:01:50 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 10:01:50 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 10:01:50 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 10:01:50 np0005625203.localdomain sudo[327358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 10:01:50 np0005625203.localdomain sudo[327358]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 10:01:50 np0005625203.localdomain sudo[327358]: pam_unix(sudo:session): session closed for user root
Feb 20 10:01:51 np0005625203.localdomain ceph-mon[296066]: pgmap v570: 177 pgs: 177 active+clean; 209 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 921 B/s rd, 126 KiB/s wr, 13 op/s
Feb 20 10:01:51 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:51.988 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:01:51 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:51.990 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:01:51 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:51.991 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 20 10:01:51 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:51.991 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 10:01:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:52.024 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:01:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:52.025 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 10:01:53 np0005625203.localdomain ceph-mon[296066]: pgmap v571: 177 pgs: 177 active+clean; 210 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1023 B/s rd, 99 KiB/s wr, 10 op/s
Feb 20 10:01:53 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ca81ba90-4b7a-404f-aec6-b73bd51b8d77", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 10:01:53 np0005625203.localdomain sshd[327376]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 10:01:53 np0005625203.localdomain sshd[327376]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 10:01:53 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 10:01:53 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:01:54 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk asok_command: session evict {filters=[auth_name=tempest-cephx-id-408485567,client_metadata.root=/volumes/_nogroup/2e325a34-23a3-4cc0-9c2e-4df32d08d36f/82f72010-739f-40e2-832b-6d6c7666a8fb],prefix=session evict} (starting...)
Feb 20 10:01:54 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:01:54 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 10:01:54 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-408485567", "format": "json"} : dispatch
Feb 20 10:01:54 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-408485567"} : dispatch
Feb 20 10:01:54 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-408485567"}]': finished
Feb 20 10:01:54 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 10:01:54 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:01:55 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:01:55 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ca81ba90-4b7a-404f-aec6-b73bd51b8d77", "format": "json"}]: dispatch
Feb 20 10:01:55 np0005625203.localdomain ceph-mon[296066]: pgmap v572: 177 pgs: 177 active+clean; 210 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 970 B/s rd, 94 KiB/s wr, 10 op/s
Feb 20 10:01:55 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "2e325a34-23a3-4cc0-9c2e-4df32d08d36f", "auth_id": "tempest-cephx-id-408485567", "format": "json"}]: dispatch
Feb 20 10:01:55 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "2e325a34-23a3-4cc0-9c2e-4df32d08d36f", "auth_id": "tempest-cephx-id-408485567", "format": "json"}]: dispatch
Feb 20 10:01:55 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "2e325a34-23a3-4cc0-9c2e-4df32d08d36f", "format": "json"}]: dispatch
Feb 20 10:01:55 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "2e325a34-23a3-4cc0-9c2e-4df32d08d36f", "force": true, "format": "json"}]: dispatch
Feb 20 10:01:55 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "50918518-d5fc-4598-a9e4-c7aeadda4e5c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 10:01:55 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "50918518-d5fc-4598-a9e4-c7aeadda4e5c", "format": "json"}]: dispatch
Feb 20 10:01:55 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:01:55 np0005625203.localdomain ceph-mon[296066]: mgrmap e53: np0005625202.arwxwo(active, since 13m), standbys: np0005625203.lonygy, np0005625204.exgrzx
Feb 20 10:01:56 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 10:01:56 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:01:56 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume resize", "vol_name": "cephfs", "sub_name": "ca81ba90-4b7a-404f-aec6-b73bd51b8d77", "new_size": 1073741824, "no_shrink": true, "format": "json"}]: dispatch
Feb 20 10:01:56 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:01:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:57.026 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:01:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:57.028 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:01:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:57.028 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 20 10:01:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:57.028 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 10:01:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:57.063 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:01:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:57.064 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 10:01:57 np0005625203.localdomain ceph-mon[296066]: pgmap v573: 177 pgs: 177 active+clean; 210 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1.2 KiB/s rd, 134 KiB/s wr, 13 op/s
Feb 20 10:01:57 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "817a644a-a040-452f-9ef0-baf961087441", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 10:01:57 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "817a644a-a040-452f-9ef0-baf961087441", "format": "json"}]: dispatch
Feb 20 10:01:57 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 10:01:57 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:01:58 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:01:58 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1559371662", "format": "json"} : dispatch
Feb 20 10:01:58 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1559371662", "caps": ["mds", "allow rw path=/volumes/_nogroup/50918518-d5fc-4598-a9e4-c7aeadda4e5c/790d1933-deae-4553-8b28-0d1b6f0fc428", "osd", "allow rw pool=manila_data namespace=fsvolumens_50918518-d5fc-4598-a9e4-c7aeadda4e5c", "mon", "allow r"], "format": "json"} : dispatch
Feb 20 10:01:58 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-1559371662", "caps": ["mds", "allow rw path=/volumes/_nogroup/50918518-d5fc-4598-a9e4-c7aeadda4e5c/790d1933-deae-4553-8b28-0d1b6f0fc428", "osd", "allow rw pool=manila_data namespace=fsvolumens_50918518-d5fc-4598-a9e4-c7aeadda4e5c", "mon", "allow r"], "format": "json"}]': finished
Feb 20 10:01:58 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk asok_command: session evict {filters=[auth_name=tempest-cephx-id-1559371662,client_metadata.root=/volumes/_nogroup/50918518-d5fc-4598-a9e4-c7aeadda4e5c/790d1933-deae-4553-8b28-0d1b6f0fc428],prefix=session evict} (starting...)
Feb 20 10:01:58 np0005625203.localdomain podman[240359]: time="2026-02-20T10:01:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 10:01:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:10:01:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 155894 "" "Go-http-client/1.1"
Feb 20 10:01:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:10:01:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18320 "" "Go-http-client/1.1"
Feb 20 10:01:59 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 10:01:59 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:01:59 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "b164674c-a82b-4878-a588-09120b66d1e5", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 10:01:59 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "b164674c-a82b-4878-a588-09120b66d1e5", "format": "json"}]: dispatch
Feb 20 10:01:59 np0005625203.localdomain ceph-mon[296066]: pgmap v574: 177 pgs: 177 active+clean; 210 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 767 B/s rd, 95 KiB/s wr, 9 op/s
Feb 20 10:01:59 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "50918518-d5fc-4598-a9e4-c7aeadda4e5c", "auth_id": "tempest-cephx-id-1559371662", "tenant_id": "e2c7618200d34da3a2f64f252dae7492", "access_level": "rw", "format": "json"}]: dispatch
Feb 20 10:01:59 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "50918518-d5fc-4598-a9e4-c7aeadda4e5c", "auth_id": "tempest-cephx-id-1559371662", "format": "json"}]: dispatch
Feb 20 10:01:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-1559371662", "format": "json"} : dispatch
Feb 20 10:01:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-1559371662"} : dispatch
Feb 20 10:01:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-1559371662"}]': finished
Feb 20 10:01:59 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "50918518-d5fc-4598-a9e4-c7aeadda4e5c", "auth_id": "tempest-cephx-id-1559371662", "format": "json"}]: dispatch
Feb 20 10:01:59 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:01:59 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:01:59.358 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:01:59 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 10:01:59 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 10:01:59 np0005625203.localdomain systemd[1]: tmp-crun.MTQxDu.mount: Deactivated successfully.
Feb 20 10:01:59 np0005625203.localdomain podman[327378]: 2026-02-20 10:01:59.78786186 +0000 UTC m=+0.096371762 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 20 10:01:59 np0005625203.localdomain podman[327379]: 2026-02-20 10:01:59.835926867 +0000 UTC m=+0.142783428 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 20 10:01:59 np0005625203.localdomain podman[327378]: 2026-02-20 10:01:59.872387015 +0000 UTC m=+0.180896927 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 20 10:01:59 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 10:01:59 np0005625203.localdomain podman[327379]: 2026-02-20 10:01:59.923619551 +0000 UTC m=+0.230476062 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 20 10:01:59 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 10:02:00 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:02:00 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "50918518-d5fc-4598-a9e4-c7aeadda4e5c", "format": "json"}]: dispatch
Feb 20 10:02:00 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "50918518-d5fc-4598-a9e4-c7aeadda4e5c", "force": true, "format": "json"}]: dispatch
Feb 20 10:02:00 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ca81ba90-4b7a-404f-aec6-b73bd51b8d77", "format": "json"}]: dispatch
Feb 20 10:02:00 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ca81ba90-4b7a-404f-aec6-b73bd51b8d77", "force": true, "format": "json"}]: dispatch
Feb 20 10:02:00 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ac8c96bc-6c82-492e-a472-5bf05b6575ac", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 10:02:00 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ac8c96bc-6c82-492e-a472-5bf05b6575ac", "format": "json"}]: dispatch
Feb 20 10:02:01 np0005625203.localdomain ceph-mon[296066]: pgmap v575: 177 pgs: 177 active+clean; 210 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 767 B/s rd, 95 KiB/s wr, 9 op/s
Feb 20 10:02:01 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-408485567", "format": "json"} : dispatch
Feb 20 10:02:01 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-408485567", "caps": ["mds", "allow rw path=/volumes/_nogroup/b164674c-a82b-4878-a588-09120b66d1e5/72089254-fcf7-474f-aaf6-ba53e49ce9b2", "osd", "allow rw pool=manila_data namespace=fsvolumens_b164674c-a82b-4878-a588-09120b66d1e5", "mon", "allow r"], "format": "json"} : dispatch
Feb 20 10:02:01 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-408485567", "caps": ["mds", "allow rw path=/volumes/_nogroup/b164674c-a82b-4878-a588-09120b66d1e5/72089254-fcf7-474f-aaf6-ba53e49ce9b2", "osd", "allow rw pool=manila_data namespace=fsvolumens_b164674c-a82b-4878-a588-09120b66d1e5", "mon", "allow r"], "format": "json"}]': finished
Feb 20 10:02:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:01.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:02:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:01.361 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 10:02:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:01.361 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 10:02:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:01.362 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 10:02:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:01.362 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Auditing locally available compute resources for np0005625203.localdomain (node: np0005625203.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 20 10:02:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:01.362 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 10:02:01 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 20 10:02:01 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2402776497' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 10:02:01 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:01.837 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 10:02:01 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 10:02:01 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:02:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:02.052 279640 WARNING nova.virt.libvirt.driver [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 20 10:02:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:02.054 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Hypervisor/Node resource view: name=np0005625203.localdomain free_ram=11590MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 20 10:02:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:02.055 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 10:02:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:02.055 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 10:02:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:02.064 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4994-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:02:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:02.066 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:02:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:02.067 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 20 10:02:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:02.067 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 10:02:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:02.102 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:02:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:02.103 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 10:02:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:02.333 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 20 10:02:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:02.334 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Final resource view: name=np0005625203.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 20 10:02:02 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "b164674c-a82b-4878-a588-09120b66d1e5", "auth_id": "tempest-cephx-id-408485567", "tenant_id": "ad1dd0d43ec7421b9a4a06c39e3657c6", "access_level": "rw", "format": "json"}]: dispatch
Feb 20 10:02:02 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/2402776497' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 10:02:02 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:02:02 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/644961451' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 10:02:02 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/644961451' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 10:02:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:02.503 279640 DEBUG nova.scheduler.client.report [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Refreshing inventories for resource provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Feb 20 10:02:02 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:02:02.576 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'c2:c2:14', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '46:d0:69:88:97:47'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 10:02:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:02.577 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:02:02 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:02:02.578 161112 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 20 10:02:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:02.629 279640 DEBUG nova.scheduler.client.report [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Updating ProviderTree inventory for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Feb 20 10:02:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:02.629 279640 DEBUG nova.compute.provider_tree [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Updating inventory in ProviderTree for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Feb 20 10:02:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:02.644 279640 DEBUG nova.scheduler.client.report [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Refreshing aggregate associations for resource provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Feb 20 10:02:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:02.665 279640 DEBUG nova.scheduler.client.report [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Refreshing trait associations for resource provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e, traits: COMPUTE_IMAGE_TYPE_ARI,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_ABM,HW_CPU_X86_SSE42,HW_CPU_X86_MMX,HW_CPU_X86_AMD_SVM,HW_CPU_X86_BMI,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VOLUME_EXTEND,COMPUTE_TRUSTED_CERTS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSSE3,HW_CPU_X86_CLMUL,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSE,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE4A,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE41,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SHA,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_F16C,HW_CPU_X86_SSE2,COMPUTE_NODE,HW_CPU_X86_AVX2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Feb 20 10:02:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:02.684 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 10:02:03 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 20 10:02:03 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2401010244' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 10:02:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:03.135 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 10:02:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:03.144 279640 DEBUG nova.compute.provider_tree [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Inventory has not changed in ProviderTree for provider: e5d5157a-2df2-4f51-b5fb-cd2da3a8584e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 20 10:02:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:03.171 279640 DEBUG nova.scheduler.client.report [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Inventory has not changed for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 20 10:02:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:03.175 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Compute_service record updated for np0005625203.localdomain:np0005625203.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 20 10:02:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:03.175 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.120s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 10:02:03 np0005625203.localdomain ceph-mon[296066]: pgmap v576: 177 pgs: 177 active+clean; 211 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1023 B/s rd, 162 KiB/s wr, 15 op/s
Feb 20 10:02:03 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "8e58eb7a-11d1-42b2-ad27-1874f562bebd", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 10:02:03 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "8e58eb7a-11d1-42b2-ad27-1874f562bebd", "format": "json"}]: dispatch
Feb 20 10:02:03 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/2401010244' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 10:02:04 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:04.177 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:02:04 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk asok_command: session evict {filters=[auth_name=tempest-cephx-id-408485567,client_metadata.root=/volumes/_nogroup/b164674c-a82b-4878-a588-09120b66d1e5/72089254-fcf7-474f-aaf6-ba53e49ce9b2],prefix=session evict} (starting...)
Feb 20 10:02:04 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ac8c96bc-6c82-492e-a472-5bf05b6575ac", "format": "json"}]: dispatch
Feb 20 10:02:04 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ac8c96bc-6c82-492e-a472-5bf05b6575ac", "force": true, "format": "json"}]: dispatch
Feb 20 10:02:04 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-408485567", "format": "json"} : dispatch
Feb 20 10:02:04 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-408485567"} : dispatch
Feb 20 10:02:04 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-408485567"}]': finished
Feb 20 10:02:05 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:02:05 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:05.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:02:05 np0005625203.localdomain ceph-mon[296066]: pgmap v577: 177 pgs: 177 active+clean; 211 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 682 B/s rd, 118 KiB/s wr, 11 op/s
Feb 20 10:02:05 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "b164674c-a82b-4878-a588-09120b66d1e5", "auth_id": "tempest-cephx-id-408485567", "format": "json"}]: dispatch
Feb 20 10:02:05 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "b164674c-a82b-4878-a588-09120b66d1e5", "auth_id": "tempest-cephx-id-408485567", "format": "json"}]: dispatch
Feb 20 10:02:05 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "b164674c-a82b-4878-a588-09120b66d1e5", "format": "json"}]: dispatch
Feb 20 10:02:05 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "b164674c-a82b-4878-a588-09120b66d1e5", "force": true, "format": "json"}]: dispatch
Feb 20 10:02:05 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/2750696122' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 10:02:05 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 10:02:05 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:02:06 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e2d3b5dd-fcdf-415f-b600-a30d6ac0ed75", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 10:02:06 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:02:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   10:02:07 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 10:02:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 10:02:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   10:02:07 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 10:02:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 10:02:07 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:07.095 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:02:07 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:07.100 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:02:07 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:07.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:02:07 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:07.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:02:07 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:07.342 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 20 10:02:07 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e2d3b5dd-fcdf-415f-b600-a30d6ac0ed75", "format": "json"}]: dispatch
Feb 20 10:02:07 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "8e58eb7a-11d1-42b2-ad27-1874f562bebd", "format": "json"}]: dispatch
Feb 20 10:02:07 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "8e58eb7a-11d1-42b2-ad27-1874f562bebd", "force": true, "format": "json"}]: dispatch
Feb 20 10:02:07 np0005625203.localdomain ceph-mon[296066]: pgmap v578: 177 pgs: 177 active+clean; 211 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1.1 KiB/s rd, 164 KiB/s wr, 16 op/s
Feb 20 10:02:07 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "817a644a-a040-452f-9ef0-baf961087441", "format": "json"}]: dispatch
Feb 20 10:02:07 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "817a644a-a040-452f-9ef0-baf961087441", "force": true, "format": "json"}]: dispatch
Feb 20 10:02:07 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/2507481714' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 10:02:07 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-408485567", "format": "json"} : dispatch
Feb 20 10:02:07 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-408485567", "caps": ["mds", "allow rw path=/volumes/_nogroup/cd0ea541-924c-41c6-95b4-11e7d85bd173/2edc2d31-21fe-4d10-b523-1775f0f273db", "osd", "allow rw pool=manila_data namespace=fsvolumens_cd0ea541-924c-41c6-95b4-11e7d85bd173", "mon", "allow r"], "format": "json"} : dispatch
Feb 20 10:02:07 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-408485567", "caps": ["mds", "allow rw path=/volumes/_nogroup/cd0ea541-924c-41c6-95b4-11e7d85bd173/2edc2d31-21fe-4d10-b523-1775f0f273db", "osd", "allow rw pool=manila_data namespace=fsvolumens_cd0ea541-924c-41c6-95b4-11e7d85bd173", "mon", "allow r"], "format": "json"}]': finished
Feb 20 10:02:07 np0005625203.localdomain sshd[327469]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 10:02:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:02:07.676 161112 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 10:02:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:02:07.676 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 10:02:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:02:07.676 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 10:02:08 np0005625203.localdomain sshd[327469]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 10:02:08 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "cd0ea541-924c-41c6-95b4-11e7d85bd173", "auth_id": "tempest-cephx-id-408485567", "tenant_id": "ad1dd0d43ec7421b9a4a06c39e3657c6", "access_level": "rw", "format": "json"}]: dispatch
Feb 20 10:02:09 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:09.344 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:02:09 np0005625203.localdomain ceph-mon[296066]: pgmap v579: 177 pgs: 177 active+clean; 211 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 682 B/s rd, 112 KiB/s wr, 11 op/s
Feb 20 10:02:09 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/2191019720' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 10:02:09 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 10:02:09 np0005625203.localdomain podman[327471]: 2026-02-20 10:02:09.765983982 +0000 UTC m=+0.079224552 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Feb 20 10:02:09 np0005625203.localdomain podman[327471]: 2026-02-20 10:02:09.800320404 +0000 UTC m=+0.113560934 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 20 10:02:09 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 10:02:10 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:02:10 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:10.337 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:02:10 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "e2d3b5dd-fcdf-415f-b600-a30d6ac0ed75", "snap_name": "acd1b19e-a73f-46df-b23c-a5b5d955cb9c", "format": "json"}]: dispatch
Feb 20 10:02:10 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "d1f2e6a9-4e51-4824-ae46-7f0ab0897ac3", "mode": "0755", "format": "json"}]: dispatch
Feb 20 10:02:10 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "97e63579-f59d-4812-9af1-a8d227932ace", "format": "json"}]: dispatch
Feb 20 10:02:10 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "97e63579-f59d-4812-9af1-a8d227932ace", "force": true, "format": "json"}]: dispatch
Feb 20 10:02:10 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/1875780044' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 10:02:11 np0005625203.localdomain ceph-mon[296066]: pgmap v580: 177 pgs: 177 active+clean; 211 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 682 B/s rd, 112 KiB/s wr, 11 op/s
Feb 20 10:02:11 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "cd0ea541-924c-41c6-95b4-11e7d85bd173", "auth_id": "tempest-cephx-id-408485567", "format": "json"}]: dispatch
Feb 20 10:02:11 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk asok_command: session evict {filters=[auth_name=tempest-cephx-id-408485567,client_metadata.root=/volumes/_nogroup/cd0ea541-924c-41c6-95b4-11e7d85bd173/2edc2d31-21fe-4d10-b523-1775f0f273db],prefix=session evict} (starting...)
Feb 20 10:02:12 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:12.102 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:02:12 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:12.104 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:02:12 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:12.104 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 20 10:02:12 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:12.105 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 10:02:12 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:12.131 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:02:12 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:12.132 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 10:02:12 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-408485567", "format": "json"} : dispatch
Feb 20 10:02:12 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-408485567"} : dispatch
Feb 20 10:02:12 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-408485567"}]': finished
Feb 20 10:02:12 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "cd0ea541-924c-41c6-95b4-11e7d85bd173", "auth_id": "tempest-cephx-id-408485567", "format": "json"}]: dispatch
Feb 20 10:02:12 np0005625203.localdomain ceph-mon[296066]: pgmap v581: 177 pgs: 177 active+clean; 212 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1.2 KiB/s rd, 164 KiB/s wr, 16 op/s
Feb 20 10:02:12 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "d1f2e6a9-4e51-4824-ae46-7f0ab0897ac3", "force": true, "format": "json"}]: dispatch
Feb 20 10:02:12 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "5df751b6-6f21-4d96-be03-f1dd0a841066", "snap_name": "ba024a11-8c4d-4adf-9f1b-141c894e0dc3_0b43dd8b-5f2b-4576-9ac7-b968c8ab5c96", "force": true, "format": "json"}]: dispatch
Feb 20 10:02:12 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "5df751b6-6f21-4d96-be03-f1dd0a841066", "snap_name": "ba024a11-8c4d-4adf-9f1b-141c894e0dc3", "force": true, "format": "json"}]: dispatch
Feb 20 10:02:12 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:02:12.580 161112 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1e4d60e6-0be0-4143-b488-1b391fbc71ef, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 20 10:02:13 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:13.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:02:13 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:13.342 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 20 10:02:13 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:13.342 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 20 10:02:13 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:13.356 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 20 10:02:14 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e2d3b5dd-fcdf-415f-b600-a30d6ac0ed75", "snap_name": "acd1b19e-a73f-46df-b23c-a5b5d955cb9c_200d92f6-c0eb-4bdf-bc03-a206af0c82a7", "force": true, "format": "json"}]: dispatch
Feb 20 10:02:14 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e2d3b5dd-fcdf-415f-b600-a30d6ac0ed75", "snap_name": "acd1b19e-a73f-46df-b23c-a5b5d955cb9c", "force": true, "format": "json"}]: dispatch
Feb 20 10:02:15 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:02:15 np0005625203.localdomain ceph-mon[296066]: pgmap v582: 177 pgs: 177 active+clean; 212 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 938 B/s rd, 97 KiB/s wr, 10 op/s
Feb 20 10:02:15 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-408485567", "format": "json"} : dispatch
Feb 20 10:02:15 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-408485567", "caps": ["mds", "allow rw path=/volumes/_nogroup/cd0ea541-924c-41c6-95b4-11e7d85bd173/2edc2d31-21fe-4d10-b523-1775f0f273db", "osd", "allow rw pool=manila_data namespace=fsvolumens_cd0ea541-924c-41c6-95b4-11e7d85bd173", "mon", "allow r"], "format": "json"} : dispatch
Feb 20 10:02:15 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-408485567", "caps": ["mds", "allow rw path=/volumes/_nogroup/cd0ea541-924c-41c6-95b4-11e7d85bd173/2edc2d31-21fe-4d10-b523-1775f0f273db", "osd", "allow rw pool=manila_data namespace=fsvolumens_cd0ea541-924c-41c6-95b4-11e7d85bd173", "mon", "allow r"], "format": "json"}]': finished
Feb 20 10:02:15 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:15.352 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:02:16 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "cd0ea541-924c-41c6-95b4-11e7d85bd173", "auth_id": "tempest-cephx-id-408485567", "tenant_id": "ad1dd0d43ec7421b9a4a06c39e3657c6", "access_level": "rw", "format": "json"}]: dispatch
Feb 20 10:02:16 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup create", "vol_name": "cephfs", "group_name": "f80fe8d3-ac9e-4618-9a8e-1111bb6ccdac", "mode": "0755", "format": "json"}]: dispatch
Feb 20 10:02:16 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 10:02:16 np0005625203.localdomain systemd[1]: tmp-crun.rSksXw.mount: Deactivated successfully.
Feb 20 10:02:16 np0005625203.localdomain podman[327490]: 2026-02-20 10:02:16.765714718 +0000 UTC m=+0.084629710 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 10:02:16 np0005625203.localdomain podman[327490]: 2026-02-20 10:02:16.833224856 +0000 UTC m=+0.152160239 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3)
Feb 20 10:02:16 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 10:02:17 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:17.132 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:02:17 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:17.136 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:02:17 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5df751b6-6f21-4d96-be03-f1dd0a841066", "format": "json"}]: dispatch
Feb 20 10:02:17 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5df751b6-6f21-4d96-be03-f1dd0a841066", "force": true, "format": "json"}]: dispatch
Feb 20 10:02:17 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e2d3b5dd-fcdf-415f-b600-a30d6ac0ed75", "format": "json"}]: dispatch
Feb 20 10:02:17 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e2d3b5dd-fcdf-415f-b600-a30d6ac0ed75", "force": true, "format": "json"}]: dispatch
Feb 20 10:02:17 np0005625203.localdomain ceph-mon[296066]: pgmap v583: 177 pgs: 177 active+clean; 212 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1.2 KiB/s rd, 143 KiB/s wr, 16 op/s
Feb 20 10:02:17 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e256 e256: 6 total, 6 up, 6 in
Feb 20 10:02:18 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk asok_command: session evict {filters=[auth_name=tempest-cephx-id-408485567,client_metadata.root=/volumes/_nogroup/cd0ea541-924c-41c6-95b4-11e7d85bd173/2edc2d31-21fe-4d10-b523-1775f0f273db],prefix=session evict} (starting...)
Feb 20 10:02:18 np0005625203.localdomain ceph-mon[296066]: osdmap e256: 6 total, 6 up, 6 in
Feb 20 10:02:18 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-408485567", "format": "json"} : dispatch
Feb 20 10:02:18 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-408485567"} : dispatch
Feb 20 10:02:18 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-408485567"}]': finished
Feb 20 10:02:18 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 10:02:18 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 10:02:18 np0005625203.localdomain podman[327515]: 2026-02-20 10:02:18.757941233 +0000 UTC m=+0.078689296 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 20 10:02:18 np0005625203.localdomain podman[327515]: 2026-02-20 10:02:18.799504989 +0000 UTC m=+0.120253012 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2)
Feb 20 10:02:18 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 10:02:18 np0005625203.localdomain podman[327516]: 2026-02-20 10:02:18.816580057 +0000 UTC m=+0.132276224 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, version=9.7, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, org.opencontainers.image.created=2026-02-05T04:57:10Z, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., name=ubi9/ubi-minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, managed_by=edpm_ansible, config_id=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, build-date=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., io.openshift.expose-services=)
Feb 20 10:02:18 np0005625203.localdomain podman[327516]: 2026-02-20 10:02:18.831680144 +0000 UTC m=+0.147376311 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, vendor=Red Hat, Inc., version=9.7, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, build-date=2026-02-05T04:57:10Z, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, io.openshift.expose-services=, name=ubi9/ubi-minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., release=1770267347, io.buildah.version=1.33.7, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, architecture=x86_64, container_name=openstack_network_exporter)
Feb 20 10:02:18 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 10:02:19 np0005625203.localdomain ceph-mon[296066]: pgmap v585: 177 pgs: 177 active+clean; 212 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1023 B/s rd, 117 KiB/s wr, 13 op/s
Feb 20 10:02:19 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "cd0ea541-924c-41c6-95b4-11e7d85bd173", "auth_id": "tempest-cephx-id-408485567", "format": "json"}]: dispatch
Feb 20 10:02:19 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "cd0ea541-924c-41c6-95b4-11e7d85bd173", "auth_id": "tempest-cephx-id-408485567", "format": "json"}]: dispatch
Feb 20 10:02:19 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolumegroup rm", "vol_name": "cephfs", "group_name": "f80fe8d3-ac9e-4618-9a8e-1111bb6ccdac", "force": true, "format": "json"}]: dispatch
Feb 20 10:02:20 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:02:21 np0005625203.localdomain ceph-mon[296066]: pgmap v586: 177 pgs: 177 active+clean; 212 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1023 B/s rd, 117 KiB/s wr, 13 op/s
Feb 20 10:02:21 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 10:02:21 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:02:22 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:22.137 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:02:22 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:22.139 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:02:22 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:22.139 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 20 10:02:22 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:22.139 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 10:02:22 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:22.168 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:02:22 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:22.168 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 10:02:22 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "cd0ea541-924c-41c6-95b4-11e7d85bd173", "auth_id": "tempest-cephx-id-408485567", "tenant_id": "ad1dd0d43ec7421b9a4a06c39e3657c6", "access_level": "rw", "format": "json"}]: dispatch
Feb 20 10:02:22 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-408485567", "format": "json"} : dispatch
Feb 20 10:02:22 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-408485567", "caps": ["mds", "allow rw path=/volumes/_nogroup/cd0ea541-924c-41c6-95b4-11e7d85bd173/2edc2d31-21fe-4d10-b523-1775f0f273db", "osd", "allow rw pool=manila_data namespace=fsvolumens_cd0ea541-924c-41c6-95b4-11e7d85bd173", "mon", "allow r"], "format": "json"} : dispatch
Feb 20 10:02:22 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-408485567", "caps": ["mds", "allow rw path=/volumes/_nogroup/cd0ea541-924c-41c6-95b4-11e7d85bd173/2edc2d31-21fe-4d10-b523-1775f0f273db", "osd", "allow rw pool=manila_data namespace=fsvolumens_cd0ea541-924c-41c6-95b4-11e7d85bd173", "mon", "allow r"], "format": "json"}]': finished
Feb 20 10:02:22 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:02:23 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e257 e257: 6 total, 6 up, 6 in
Feb 20 10:02:23 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "af681d3a-0a9b-41df-acfc-a86a81e0ae12", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 10:02:23 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "af681d3a-0a9b-41df-acfc-a86a81e0ae12", "format": "json"}]: dispatch
Feb 20 10:02:23 np0005625203.localdomain ceph-mon[296066]: pgmap v587: 177 pgs: 177 active+clean; 213 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 818 B/s rd, 99 KiB/s wr, 11 op/s
Feb 20 10:02:23 np0005625203.localdomain ceph-mon[296066]: osdmap e257: 6 total, 6 up, 6 in
Feb 20 10:02:24 np0005625203.localdomain ceph-mon[296066]: pgmap v589: 177 pgs: 177 active+clean; 213 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 55 KiB/s wr, 5 op/s
Feb 20 10:02:24 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk asok_command: session evict {filters=[auth_name=tempest-cephx-id-408485567,client_metadata.root=/volumes/_nogroup/cd0ea541-924c-41c6-95b4-11e7d85bd173/2edc2d31-21fe-4d10-b523-1775f0f273db],prefix=session evict} (starting...)
Feb 20 10:02:25 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:02:25 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "cd0ea541-924c-41c6-95b4-11e7d85bd173", "auth_id": "tempest-cephx-id-408485567", "format": "json"}]: dispatch
Feb 20 10:02:25 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-408485567", "format": "json"} : dispatch
Feb 20 10:02:25 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-408485567"} : dispatch
Feb 20 10:02:25 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-408485567"}]': finished
Feb 20 10:02:25 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "cd0ea541-924c-41c6-95b4-11e7d85bd173", "auth_id": "tempest-cephx-id-408485567", "format": "json"}]: dispatch
Feb 20 10:02:25 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "af681d3a-0a9b-41df-acfc-a86a81e0ae12", "format": "json"}]: dispatch
Feb 20 10:02:25 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "af681d3a-0a9b-41df-acfc-a86a81e0ae12", "force": true, "format": "json"}]: dispatch
Feb 20 10:02:26 np0005625203.localdomain ceph-mon[296066]: pgmap v590: 177 pgs: 177 active+clean; 213 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 726 B/s rd, 111 KiB/s wr, 11 op/s
Feb 20 10:02:26 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 10:02:26 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:02:27 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:27.169 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:02:27 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:27.171 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:02:27 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:27.171 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 20 10:02:27 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:27.171 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 10:02:27 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:27.196 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:02:27 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:27.196 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 10:02:27 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 10:02:27 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d", "format": "json"}]: dispatch
Feb 20 10:02:27 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:02:28 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 10:02:28 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:02:28 np0005625203.localdomain podman[240359]: time="2026-02-20T10:02:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 10:02:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:10:02:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 155894 "" "Go-http-client/1.1"
Feb 20 10:02:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:10:02:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18307 "" "Go-http-client/1.1"
Feb 20 10:02:29 np0005625203.localdomain ceph-mon[296066]: pgmap v591: 177 pgs: 177 active+clean; 213 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 93 KiB/s wr, 9 op/s
Feb 20 10:02:29 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "99dd1946-0a71-4d2e-9886-17bc70e2e74a", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 10:02:29 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "99dd1946-0a71-4d2e-9886-17bc70e2e74a", "format": "json"}]: dispatch
Feb 20 10:02:29 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:02:29 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-408485567", "format": "json"} : dispatch
Feb 20 10:02:29 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-408485567", "caps": ["mds", "allow rw path=/volumes/_nogroup/cd0ea541-924c-41c6-95b4-11e7d85bd173/2edc2d31-21fe-4d10-b523-1775f0f273db", "osd", "allow rw pool=manila_data namespace=fsvolumens_cd0ea541-924c-41c6-95b4-11e7d85bd173", "mon", "allow r"], "format": "json"} : dispatch
Feb 20 10:02:29 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth get-or-create", "entity": "client.tempest-cephx-id-408485567", "caps": ["mds", "allow rw path=/volumes/_nogroup/cd0ea541-924c-41c6-95b4-11e7d85bd173/2edc2d31-21fe-4d10-b523-1775f0f273db", "osd", "allow rw pool=manila_data namespace=fsvolumens_cd0ea541-924c-41c6-95b4-11e7d85bd173", "mon", "allow r"], "format": "json"}]': finished
Feb 20 10:02:30 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:02:30 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume authorize", "vol_name": "cephfs", "sub_name": "cd0ea541-924c-41c6-95b4-11e7d85bd173", "auth_id": "tempest-cephx-id-408485567", "tenant_id": "ad1dd0d43ec7421b9a4a06c39e3657c6", "access_level": "rw", "format": "json"}]: dispatch
Feb 20 10:02:30 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 10:02:30 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 10:02:30 np0005625203.localdomain systemd[1]: tmp-crun.vWMBQE.mount: Deactivated successfully.
Feb 20 10:02:30 np0005625203.localdomain podman[327551]: 2026-02-20 10:02:30.770073269 +0000 UTC m=+0.083989000 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 20 10:02:30 np0005625203.localdomain podman[327551]: 2026-02-20 10:02:30.778158009 +0000 UTC m=+0.092073730 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 20 10:02:30 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 10:02:30 np0005625203.localdomain systemd[1]: tmp-crun.w0MaZc.mount: Deactivated successfully.
Feb 20 10:02:30 np0005625203.localdomain podman[327552]: 2026-02-20 10:02:30.828686142 +0000 UTC m=+0.138492026 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 20 10:02:30 np0005625203.localdomain podman[327552]: 2026-02-20 10:02:30.865289584 +0000 UTC m=+0.175095468 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 20 10:02:30 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 10:02:31 np0005625203.localdomain ceph-mon[296066]: pgmap v592: 177 pgs: 177 active+clean; 213 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 94 KiB/s wr, 9 op/s
Feb 20 10:02:31 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d", "snap_name": "73ef4b22-cb69-44e4-9b94-352c732420be", "format": "json"}]: dispatch
Feb 20 10:02:31 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk asok_command: session evict {filters=[auth_name=tempest-cephx-id-408485567,client_metadata.root=/volumes/_nogroup/cd0ea541-924c-41c6-95b4-11e7d85bd173/2edc2d31-21fe-4d10-b523-1775f0f273db],prefix=session evict} (starting...)
Feb 20 10:02:32 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:32.197 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:02:32 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "99dd1946-0a71-4d2e-9886-17bc70e2e74a", "format": "json"}]: dispatch
Feb 20 10:02:32 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "99dd1946-0a71-4d2e-9886-17bc70e2e74a", "force": true, "format": "json"}]: dispatch
Feb 20 10:02:32 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.tempest-cephx-id-408485567", "format": "json"} : dispatch
Feb 20 10:02:32 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth rm", "entity": "client.tempest-cephx-id-408485567"} : dispatch
Feb 20 10:02:32 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd='[{"prefix": "auth rm", "entity": "client.tempest-cephx-id-408485567"}]': finished
Feb 20 10:02:33 np0005625203.localdomain ceph-mon[296066]: pgmap v593: 177 pgs: 177 active+clean; 214 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 108 KiB/s wr, 10 op/s
Feb 20 10:02:33 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume deauthorize", "vol_name": "cephfs", "sub_name": "cd0ea541-924c-41c6-95b4-11e7d85bd173", "auth_id": "tempest-cephx-id-408485567", "format": "json"}]: dispatch
Feb 20 10:02:33 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume evict", "vol_name": "cephfs", "sub_name": "cd0ea541-924c-41c6-95b4-11e7d85bd173", "auth_id": "tempest-cephx-id-408485567", "format": "json"}]: dispatch
Feb 20 10:02:34 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 10:02:34 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:02:34 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d", "snap_name": "1e66955e-3f1a-40d4-80db-e506894b4fe7", "format": "json"}]: dispatch
Feb 20 10:02:34 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:02:35 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:02:35 np0005625203.localdomain ceph-mon[296066]: pgmap v594: 177 pgs: 177 active+clean; 214 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 388 B/s rd, 103 KiB/s wr, 9 op/s
Feb 20 10:02:35 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "61e1c482-9c73-4bad-8fd8-0dc280a18a86", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 10:02:35 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "61e1c482-9c73-4bad-8fd8-0dc280a18a86", "format": "json"}]: dispatch
Feb 20 10:02:36 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "cd0ea541-924c-41c6-95b4-11e7d85bd173", "format": "json"}]: dispatch
Feb 20 10:02:36 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "cd0ea541-924c-41c6-95b4-11e7d85bd173", "force": true, "format": "json"}]: dispatch
Feb 20 10:02:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   10:02:37 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 10:02:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   10:02:37 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 10:02:37 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:37.200 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:02:37 np0005625203.localdomain ceph-mon[296066]: pgmap v595: 177 pgs: 177 active+clean; 214 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 597 B/s rd, 125 KiB/s wr, 12 op/s
Feb 20 10:02:38 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "61e1c482-9c73-4bad-8fd8-0dc280a18a86", "snap_name": "8b9be1a2-e688-4f67-bcb5-d692465b7436", "format": "json"}]: dispatch
Feb 20 10:02:38 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d", "snap_name": "1e66955e-3f1a-40d4-80db-e506894b4fe7_5d64b720-4a43-4716-92a1-dde09c619e84", "force": true, "format": "json"}]: dispatch
Feb 20 10:02:38 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d", "snap_name": "1e66955e-3f1a-40d4-80db-e506894b4fe7", "force": true, "format": "json"}]: dispatch
Feb 20 10:02:39 np0005625203.localdomain ceph-mon[296066]: pgmap v596: 177 pgs: 177 active+clean; 214 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 84 KiB/s wr, 7 op/s
Feb 20 10:02:39 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e59c51a9-964d-4ac9-9a67-d46c3cec7b52", "format": "json"}]: dispatch
Feb 20 10:02:40 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:02:40 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 10:02:40 np0005625203.localdomain podman[327600]: 2026-02-20 10:02:40.768634899 +0000 UTC m=+0.083570856 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 20 10:02:40 np0005625203.localdomain podman[327600]: 2026-02-20 10:02:40.801371732 +0000 UTC m=+0.116307699 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Feb 20 10:02:40 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 10:02:41 np0005625203.localdomain ceph-mon[296066]: pgmap v597: 177 pgs: 177 active+clean; 214 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 426 B/s rd, 84 KiB/s wr, 8 op/s
Feb 20 10:02:41 np0005625203.localdomain sshd[327619]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 10:02:42 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 10:02:42 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:02:42 np0005625203.localdomain sshd[327619]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 10:02:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:42.202 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:02:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:42.204 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:02:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:42.204 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 20 10:02:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:42.204 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 10:02:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:42.242 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:02:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:42.243 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 10:02:42 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:02:43 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 10:02:43.211 262775 INFO neutron.agent.linux.ip_lib [None req-d18b8cdd-e367-4940-af87-49b8139bac3a - - - - - -] Device tap129edf83-67 cannot be used as it has no MAC address
Feb 20 10:02:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:43.232 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:02:43 np0005625203.localdomain kernel: device tap129edf83-67 entered promiscuous mode
Feb 20 10:02:43 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581763.2415] manager: (tap129edf83-67): new Generic device (/org/freedesktop/NetworkManager/Devices/77)
Feb 20 10:02:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:43.240 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:02:43 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T10:02:43Z|00412|binding|INFO|Claiming lport 129edf83-674e-4a86-8c9a-71a38b4ec937 for this chassis.
Feb 20 10:02:43 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T10:02:43Z|00413|binding|INFO|129edf83-674e-4a86-8c9a-71a38b4ec937: Claiming unknown
Feb 20 10:02:43 np0005625203.localdomain systemd-udevd[327631]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 10:02:43 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:02:43.256 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-196b7714-c901-4c39-aa8f-d421998415ce', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-196b7714-c901-4c39-aa8f-d421998415ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bc8fc0e37a6140138157135bf1aff7f6', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=189ffce8-2d30-4cdc-a115-5e08169b10e0, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=129edf83-674e-4a86-8c9a-71a38b4ec937) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 10:02:43 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:02:43.258 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 129edf83-674e-4a86-8c9a-71a38b4ec937 in datapath 196b7714-c901-4c39-aa8f-d421998415ce bound to our chassis
Feb 20 10:02:43 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:02:43.260 161112 DEBUG neutron.agent.ovn.metadata.agent [-] Port 27bc48ce-97bd-422c-98ab-771028797fe5 IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Feb 20 10:02:43 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:02:43.261 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 196b7714-c901-4c39-aa8f-d421998415ce, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 10:02:43 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:02:43.262 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[aa05c07f-3767-4530-8153-840c24051a53]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 10:02:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:43.271 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:02:43 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T10:02:43Z|00414|binding|INFO|Setting lport 129edf83-674e-4a86-8c9a-71a38b4ec937 ovn-installed in OVS
Feb 20 10:02:43 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T10:02:43Z|00415|binding|INFO|Setting lport 129edf83-674e-4a86-8c9a-71a38b4ec937 up in Southbound
Feb 20 10:02:43 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap129edf83-67: No such device
Feb 20 10:02:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:43.277 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:02:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:43.279 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:02:43 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap129edf83-67: No such device
Feb 20 10:02:43 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap129edf83-67: No such device
Feb 20 10:02:43 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap129edf83-67: No such device
Feb 20 10:02:43 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap129edf83-67: No such device
Feb 20 10:02:43 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap129edf83-67: No such device
Feb 20 10:02:43 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap129edf83-67: No such device
Feb 20 10:02:43 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap129edf83-67: No such device
Feb 20 10:02:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:43.350 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:02:43 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:43.373 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:02:43 np0005625203.localdomain ceph-mon[296066]: pgmap v598: 177 pgs: 177 active+clean; 215 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 767 B/s rd, 110 KiB/s wr, 10 op/s
Feb 20 10:02:43 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e59c51a9-964d-4ac9-9a67-d46c3cec7b52", "format": "json"}]: dispatch
Feb 20 10:02:43 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d", "snap_name": "55bcf7f1-01f4-42f0-8f90-e7ef14438392", "format": "json"}]: dispatch
Feb 20 10:02:43 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "61e1c482-9c73-4bad-8fd8-0dc280a18a86", "snap_name": "8b9be1a2-e688-4f67-bcb5-d692465b7436_5acfc6c8-db4e-4899-8b99-eb2dcdeecf08", "force": true, "format": "json"}]: dispatch
Feb 20 10:02:43 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "61e1c482-9c73-4bad-8fd8-0dc280a18a86", "snap_name": "8b9be1a2-e688-4f67-bcb5-d692465b7436", "force": true, "format": "json"}]: dispatch
Feb 20 10:02:44 np0005625203.localdomain podman[327702]: 
Feb 20 10:02:44 np0005625203.localdomain podman[327702]: 2026-02-20 10:02:44.267731533 +0000 UTC m=+0.085597230 container create a898a10ede78c33c2d043a51d5dfe16ffad07b028cccb2e92a36e88b36d8433d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-196b7714-c901-4c39-aa8f-d421998415ce, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 20 10:02:44 np0005625203.localdomain systemd[1]: Started libpod-conmon-a898a10ede78c33c2d043a51d5dfe16ffad07b028cccb2e92a36e88b36d8433d.scope.
Feb 20 10:02:44 np0005625203.localdomain podman[327702]: 2026-02-20 10:02:44.227215319 +0000 UTC m=+0.045081056 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 10:02:44 np0005625203.localdomain systemd[1]: tmp-crun.2dRCEw.mount: Deactivated successfully.
Feb 20 10:02:44 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 10:02:44 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/007a9cd35acef2de36e64785fa5f5b45ff573923f9f7470e1a0160b65d74174d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 10:02:44 np0005625203.localdomain podman[327702]: 2026-02-20 10:02:44.369270594 +0000 UTC m=+0.187136301 container init a898a10ede78c33c2d043a51d5dfe16ffad07b028cccb2e92a36e88b36d8433d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-196b7714-c901-4c39-aa8f-d421998415ce, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20260127)
Feb 20 10:02:44 np0005625203.localdomain podman[327702]: 2026-02-20 10:02:44.377968713 +0000 UTC m=+0.195834420 container start a898a10ede78c33c2d043a51d5dfe16ffad07b028cccb2e92a36e88b36d8433d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-196b7714-c901-4c39-aa8f-d421998415ce, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 10:02:44 np0005625203.localdomain dnsmasq[327721]: started, version 2.85 cachesize 150
Feb 20 10:02:44 np0005625203.localdomain dnsmasq[327721]: DNS service limited to local subnets
Feb 20 10:02:44 np0005625203.localdomain dnsmasq[327721]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 10:02:44 np0005625203.localdomain dnsmasq[327721]: warning: no upstream servers configured
Feb 20 10:02:44 np0005625203.localdomain dnsmasq-dhcp[327721]: DHCP, static leases only on 10.100.0.0, lease time 1d
Feb 20 10:02:44 np0005625203.localdomain dnsmasq[327721]: read /var/lib/neutron/dhcp/196b7714-c901-4c39-aa8f-d421998415ce/addn_hosts - 0 addresses
Feb 20 10:02:44 np0005625203.localdomain dnsmasq-dhcp[327721]: read /var/lib/neutron/dhcp/196b7714-c901-4c39-aa8f-d421998415ce/host
Feb 20 10:02:44 np0005625203.localdomain dnsmasq-dhcp[327721]: read /var/lib/neutron/dhcp/196b7714-c901-4c39-aa8f-d421998415ce/opts
Feb 20 10:02:44 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e59c51a9-964d-4ac9-9a67-d46c3cec7b52", "format": "json"}]: dispatch
Feb 20 10:02:44 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e59c51a9-964d-4ac9-9a67-d46c3cec7b52", "force": true, "format": "json"}]: dispatch
Feb 20 10:02:44 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 10:02:44.515 262775 INFO neutron.agent.dhcp.agent [None req-7f35eedf-cc59-41b0-aa47-7a7755b58573 - - - - - -] DHCP configuration for ports {'04e7d53c-d49e-46ad-8cc6-f0d0b2e6e430'} is completed
Feb 20 10:02:44 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 10:02:44.558 262775 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T10:02:44Z, description=, device_id=e3518fc5-5c1f-48e2-8acd-06c1ed21b401, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4dd5c70>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4dd5df0>], id=87f3519a-d439-4bbf-a263-b324b52b6d28, ip_allocation=immediate, mac_address=fa:16:3e:ff:95:05, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T10:02:40Z, description=, dns_domain=, id=196b7714-c901-4c39-aa8f-d421998415ce, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TelemetryAlarmingAPITest-1785126314-network, port_security_enabled=True, project_id=bc8fc0e37a6140138157135bf1aff7f6, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=33962, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3774, status=ACTIVE, subnets=['2ef82a49-4698-4fac-a8c2-0076f6011d8e'], tags=[], tenant_id=bc8fc0e37a6140138157135bf1aff7f6, updated_at=2026-02-20T10:02:41Z, vlan_transparent=None, network_id=196b7714-c901-4c39-aa8f-d421998415ce, port_security_enabled=False, project_id=bc8fc0e37a6140138157135bf1aff7f6, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3782, status=DOWN, tags=[], tenant_id=bc8fc0e37a6140138157135bf1aff7f6, updated_at=2026-02-20T10:02:44Z on network 196b7714-c901-4c39-aa8f-d421998415ce
Feb 20 10:02:44 np0005625203.localdomain dnsmasq[327721]: read /var/lib/neutron/dhcp/196b7714-c901-4c39-aa8f-d421998415ce/addn_hosts - 1 addresses
Feb 20 10:02:44 np0005625203.localdomain dnsmasq-dhcp[327721]: read /var/lib/neutron/dhcp/196b7714-c901-4c39-aa8f-d421998415ce/host
Feb 20 10:02:44 np0005625203.localdomain dnsmasq-dhcp[327721]: read /var/lib/neutron/dhcp/196b7714-c901-4c39-aa8f-d421998415ce/opts
Feb 20 10:02:44 np0005625203.localdomain podman[327739]: 2026-02-20 10:02:44.782072725 +0000 UTC m=+0.069486210 container kill a898a10ede78c33c2d043a51d5dfe16ffad07b028cccb2e92a36e88b36d8433d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-196b7714-c901-4c39-aa8f-d421998415ce, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Feb 20 10:02:45 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 10:02:45.058 262775 INFO neutron.agent.dhcp.agent [None req-beeafbac-4963-45f7-8ca1-e36b2bf3168f - - - - - -] DHCP configuration for ports {'87f3519a-d439-4bbf-a263-b324b52b6d28'} is completed
Feb 20 10:02:45 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:02:45 np0005625203.localdomain ceph-mon[296066]: pgmap v599: 177 pgs: 177 active+clean; 215 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 597 B/s rd, 61 KiB/s wr, 6 op/s
Feb 20 10:02:45 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "61e1c482-9c73-4bad-8fd8-0dc280a18a86", "format": "json"}]: dispatch
Feb 20 10:02:45 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "61e1c482-9c73-4bad-8fd8-0dc280a18a86", "force": true, "format": "json"}]: dispatch
Feb 20 10:02:45 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 10:02:45.814 262775 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T10:02:44Z, description=, device_id=e3518fc5-5c1f-48e2-8acd-06c1ed21b401, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4e78fa0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4e78be0>], id=87f3519a-d439-4bbf-a263-b324b52b6d28, ip_allocation=immediate, mac_address=fa:16:3e:ff:95:05, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T10:02:40Z, description=, dns_domain=, id=196b7714-c901-4c39-aa8f-d421998415ce, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TelemetryAlarmingAPITest-1785126314-network, port_security_enabled=True, project_id=bc8fc0e37a6140138157135bf1aff7f6, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=33962, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3774, status=ACTIVE, subnets=['2ef82a49-4698-4fac-a8c2-0076f6011d8e'], tags=[], tenant_id=bc8fc0e37a6140138157135bf1aff7f6, updated_at=2026-02-20T10:02:41Z, vlan_transparent=None, network_id=196b7714-c901-4c39-aa8f-d421998415ce, port_security_enabled=False, project_id=bc8fc0e37a6140138157135bf1aff7f6, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3782, status=DOWN, tags=[], tenant_id=bc8fc0e37a6140138157135bf1aff7f6, updated_at=2026-02-20T10:02:44Z on network 196b7714-c901-4c39-aa8f-d421998415ce
Feb 20 10:02:46 np0005625203.localdomain dnsmasq[327721]: read /var/lib/neutron/dhcp/196b7714-c901-4c39-aa8f-d421998415ce/addn_hosts - 1 addresses
Feb 20 10:02:46 np0005625203.localdomain dnsmasq-dhcp[327721]: read /var/lib/neutron/dhcp/196b7714-c901-4c39-aa8f-d421998415ce/host
Feb 20 10:02:46 np0005625203.localdomain dnsmasq-dhcp[327721]: read /var/lib/neutron/dhcp/196b7714-c901-4c39-aa8f-d421998415ce/opts
Feb 20 10:02:46 np0005625203.localdomain podman[327775]: 2026-02-20 10:02:46.032027167 +0000 UTC m=+0.058794200 container kill a898a10ede78c33c2d043a51d5dfe16ffad07b028cccb2e92a36e88b36d8433d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-196b7714-c901-4c39-aa8f-d421998415ce, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Feb 20 10:02:46 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 10:02:46.331 262775 INFO neutron.agent.dhcp.agent [None req-93e3b82b-3f65-4aa1-9dc5-f3128049858c - - - - - -] DHCP configuration for ports {'87f3519a-d439-4bbf-a263-b324b52b6d28'} is completed
Feb 20 10:02:46 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T10:02:46Z|00416|ovn_bfd|INFO|Enabled BFD on interface ovn-0c414b-0
Feb 20 10:02:46 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T10:02:46Z|00417|ovn_bfd|INFO|Enabled BFD on interface ovn-2275c3-0
Feb 20 10:02:46 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T10:02:46Z|00418|ovn_bfd|INFO|Enabled BFD on interface ovn-2df8cc-0
Feb 20 10:02:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:46.604 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:02:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:46.610 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:02:46 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d", "snap_name": "55bcf7f1-01f4-42f0-8f90-e7ef14438392_264ca94b-ea48-4597-a2c2-a0904eedca44", "force": true, "format": "json"}]: dispatch
Feb 20 10:02:46 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d", "snap_name": "55bcf7f1-01f4-42f0-8f90-e7ef14438392", "force": true, "format": "json"}]: dispatch
Feb 20 10:02:46 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:46.619 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:02:46 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 10:02:46 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:02:47 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 10:02:47 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:02:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:47.245 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:02:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:47.450 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:02:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:47.494 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:02:47 np0005625203.localdomain ceph-mon[296066]: pgmap v600: 177 pgs: 177 active+clean; 215 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 852 B/s rd, 97 KiB/s wr, 8 op/s
Feb 20 10:02:47 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4d276c40-22b5-4463-ba95-b271179ed697", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 10:02:47 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4d276c40-22b5-4463-ba95-b271179ed697", "format": "json"}]: dispatch
Feb 20 10:02:47 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:02:47 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "5f3b4204-930b-4bc2-9cff-7e982286eac6", "snap_name": "62a01af2-9c69-4d90-af43-120db3783b58_ffe8c3f4-7f8d-4e50-9ffc-1b3c43840e07", "force": true, "format": "json"}]: dispatch
Feb 20 10:02:47 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "5f3b4204-930b-4bc2-9cff-7e982286eac6", "snap_name": "62a01af2-9c69-4d90-af43-120db3783b58", "force": true, "format": "json"}]: dispatch
Feb 20 10:02:47 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "e03d5299-b683-42d5-861f-4ae99be1be37", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 10:02:47 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "e03d5299-b683-42d5-861f-4ae99be1be37", "format": "json"}]: dispatch
Feb 20 10:02:47 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:02:47 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e258 e258: 6 total, 6 up, 6 in
Feb 20 10:02:47 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 10:02:47 np0005625203.localdomain systemd[1]: tmp-crun.adE2B4.mount: Deactivated successfully.
Feb 20 10:02:47 np0005625203.localdomain podman[327798]: 2026-02-20 10:02:47.781764148 +0000 UTC m=+0.097162566 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 10:02:47 np0005625203.localdomain podman[327798]: 2026-02-20 10:02:47.85645851 +0000 UTC m=+0.171856928 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Feb 20 10:02:47 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 10:02:48 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:48.180 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:02:48 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:48.310 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:02:48 np0005625203.localdomain ceph-mon[296066]: osdmap e258: 6 total, 6 up, 6 in
Feb 20 10:02:48 np0005625203.localdomain ceph-mon[296066]: pgmap v602: 177 pgs: 177 active+clean; 215 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 716 B/s rd, 74 KiB/s wr, 6 op/s
Feb 20 10:02:48 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d", "snap_name": "88e0b4cc-0c15-4363-8336-4c11c356e179", "format": "json"}]: dispatch
Feb 20 10:02:48 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e259 e259: 6 total, 6 up, 6 in
Feb 20 10:02:49 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:49.077 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:02:49 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 10:02:49 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 10:02:49 np0005625203.localdomain podman[327825]: 2026-02-20 10:02:49.249015382 +0000 UTC m=+0.079401117 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, vendor=Red Hat, Inc., version=9.7, vcs-type=git, name=ubi9/ubi-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2026-02-05T04:57:10Z, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, container_name=openstack_network_exporter)
Feb 20 10:02:49 np0005625203.localdomain podman[327825]: 2026-02-20 10:02:49.264315655 +0000 UTC m=+0.094701410 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, distribution-scope=public, config_id=openstack_network_exporter, container_name=openstack_network_exporter, version=9.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, managed_by=edpm_ansible, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2026-02-05T04:57:10Z, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, release=1770267347, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, name=ubi9/ubi-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z)
Feb 20 10:02:49 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 10:02:49 np0005625203.localdomain podman[327824]: 2026-02-20 10:02:49.362481513 +0000 UTC m=+0.195624613 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 20 10:02:49 np0005625203.localdomain podman[327824]: 2026-02-20 10:02:49.373545635 +0000 UTC m=+0.206688735 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Feb 20 10:02:49 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 10:02:49 np0005625203.localdomain ceph-mon[296066]: osdmap e259: 6 total, 6 up, 6 in
Feb 20 10:02:50 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:02:50 np0005625203.localdomain sudo[327859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 10:02:50 np0005625203.localdomain sudo[327859]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 10:02:50 np0005625203.localdomain sudo[327859]: pam_unix(sudo:session): session closed for user root
Feb 20 10:02:50 np0005625203.localdomain sudo[327877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 10:02:50 np0005625203.localdomain sudo[327877]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 10:02:50 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "4d276c40-22b5-4463-ba95-b271179ed697", "snap_name": "22d02fe9-b677-4d4d-ab93-02062be24947", "format": "json"}]: dispatch
Feb 20 10:02:50 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "5f3b4204-930b-4bc2-9cff-7e982286eac6", "format": "json"}]: dispatch
Feb 20 10:02:50 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "5f3b4204-930b-4bc2-9cff-7e982286eac6", "force": true, "format": "json"}]: dispatch
Feb 20 10:02:50 np0005625203.localdomain ceph-mon[296066]: pgmap v604: 177 pgs: 177 active+clean; 215 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 383 B/s rd, 53 KiB/s wr, 4 op/s
Feb 20 10:02:50 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "e03d5299-b683-42d5-861f-4ae99be1be37", "snap_name": "f9be8701-f7e0-4195-b63f-b542fb246c4d", "format": "json"}]: dispatch
Feb 20 10:02:50 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T10:02:50Z|00419|ovn_bfd|INFO|Disabled BFD on interface ovn-0c414b-0
Feb 20 10:02:50 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T10:02:50Z|00420|ovn_bfd|INFO|Disabled BFD on interface ovn-2275c3-0
Feb 20 10:02:50 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T10:02:50Z|00421|ovn_bfd|INFO|Disabled BFD on interface ovn-2df8cc-0
Feb 20 10:02:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:50.750 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:02:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:50.753 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:02:50 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:50.771 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:02:50 np0005625203.localdomain dnsmasq[327721]: read /var/lib/neutron/dhcp/196b7714-c901-4c39-aa8f-d421998415ce/addn_hosts - 0 addresses
Feb 20 10:02:50 np0005625203.localdomain dnsmasq-dhcp[327721]: read /var/lib/neutron/dhcp/196b7714-c901-4c39-aa8f-d421998415ce/host
Feb 20 10:02:50 np0005625203.localdomain podman[327920]: 2026-02-20 10:02:50.863830231 +0000 UTC m=+0.061261645 container kill a898a10ede78c33c2d043a51d5dfe16ffad07b028cccb2e92a36e88b36d8433d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-196b7714-c901-4c39-aa8f-d421998415ce, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Feb 20 10:02:50 np0005625203.localdomain dnsmasq-dhcp[327721]: read /var/lib/neutron/dhcp/196b7714-c901-4c39-aa8f-d421998415ce/opts
Feb 20 10:02:51 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:51.041 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:02:51 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T10:02:51Z|00422|binding|INFO|Releasing lport 129edf83-674e-4a86-8c9a-71a38b4ec937 from this chassis (sb_readonly=0)
Feb 20 10:02:51 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T10:02:51Z|00423|binding|INFO|Setting lport 129edf83-674e-4a86-8c9a-71a38b4ec937 down in Southbound
Feb 20 10:02:51 np0005625203.localdomain kernel: device tap129edf83-67 left promiscuous mode
Feb 20 10:02:51 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:02:51.049 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-196b7714-c901-4c39-aa8f-d421998415ce', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-196b7714-c901-4c39-aa8f-d421998415ce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'bc8fc0e37a6140138157135bf1aff7f6', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625203.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=189ffce8-2d30-4cdc-a115-5e08169b10e0, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=129edf83-674e-4a86-8c9a-71a38b4ec937) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 10:02:51 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:02:51.051 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 129edf83-674e-4a86-8c9a-71a38b4ec937 in datapath 196b7714-c901-4c39-aa8f-d421998415ce unbound from our chassis
Feb 20 10:02:51 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:02:51.055 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 196b7714-c901-4c39-aa8f-d421998415ce, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 10:02:51 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:02:51.057 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[f2abbf01-e978-4ba3-a3e4-292a05b3274e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 10:02:51 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:51.059 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:02:51 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:51.060 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:02:51 np0005625203.localdomain sudo[327877]: pam_unix(sudo:session): session closed for user root
Feb 20 10:02:51 np0005625203.localdomain sudo[327965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 10:02:51 np0005625203.localdomain sudo[327965]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 10:02:51 np0005625203.localdomain sudo[327965]: pam_unix(sudo:session): session closed for user root
Feb 20 10:02:51 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 10:02:51 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 10:02:51 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 10:02:51 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 10:02:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:52.283 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:02:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:52.287 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:02:52 np0005625203.localdomain dnsmasq[327721]: exiting on receipt of SIGTERM
Feb 20 10:02:52 np0005625203.localdomain podman[328000]: 2026-02-20 10:02:52.692326691 +0000 UTC m=+0.058824621 container kill a898a10ede78c33c2d043a51d5dfe16ffad07b028cccb2e92a36e88b36d8433d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-196b7714-c901-4c39-aa8f-d421998415ce, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Feb 20 10:02:52 np0005625203.localdomain systemd[1]: libpod-a898a10ede78c33c2d043a51d5dfe16ffad07b028cccb2e92a36e88b36d8433d.scope: Deactivated successfully.
Feb 20 10:02:52 np0005625203.localdomain ceph-mon[296066]: pgmap v605: 177 pgs: 177 active+clean; 216 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1.4 KiB/s rd, 143 KiB/s wr, 12 op/s
Feb 20 10:02:52 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d", "snap_name": "88e0b4cc-0c15-4363-8336-4c11c356e179_b7815798-7284-4b52-8029-4e2d096c3965", "force": true, "format": "json"}]: dispatch
Feb 20 10:02:52 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d", "snap_name": "88e0b4cc-0c15-4363-8336-4c11c356e179", "force": true, "format": "json"}]: dispatch
Feb 20 10:02:52 np0005625203.localdomain podman[328012]: 2026-02-20 10:02:52.762579674 +0000 UTC m=+0.058508741 container died a898a10ede78c33c2d043a51d5dfe16ffad07b028cccb2e92a36e88b36d8433d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-196b7714-c901-4c39-aa8f-d421998415ce, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 20 10:02:52 np0005625203.localdomain podman[328012]: 2026-02-20 10:02:52.795638077 +0000 UTC m=+0.091567094 container cleanup a898a10ede78c33c2d043a51d5dfe16ffad07b028cccb2e92a36e88b36d8433d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-196b7714-c901-4c39-aa8f-d421998415ce, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2)
Feb 20 10:02:52 np0005625203.localdomain systemd[1]: libpod-conmon-a898a10ede78c33c2d043a51d5dfe16ffad07b028cccb2e92a36e88b36d8433d.scope: Deactivated successfully.
Feb 20 10:02:52 np0005625203.localdomain podman[328014]: 2026-02-20 10:02:52.84486561 +0000 UTC m=+0.132551582 container remove a898a10ede78c33c2d043a51d5dfe16ffad07b028cccb2e92a36e88b36d8433d (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-196b7714-c901-4c39-aa8f-d421998415ce, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 20 10:02:52 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 10:02:52.876 262775 INFO neutron.agent.dhcp.agent [None req-4fc379d2-27b5-4784-be2d-d197a580f69b - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 10:02:52 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 10:02:52.885 262775 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 10:02:53 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:53.052 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:02:53 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e260 e260: 6 total, 6 up, 6 in
Feb 20 10:02:53 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-007a9cd35acef2de36e64785fa5f5b45ff573923f9f7470e1a0160b65d74174d-merged.mount: Deactivated successfully.
Feb 20 10:02:53 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a898a10ede78c33c2d043a51d5dfe16ffad07b028cccb2e92a36e88b36d8433d-userdata-shm.mount: Deactivated successfully.
Feb 20 10:02:53 np0005625203.localdomain systemd[1]: run-netns-qdhcp\x2d196b7714\x2dc901\x2d4c39\x2daa8f\x2dd421998415ce.mount: Deactivated successfully.
Feb 20 10:02:54 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e261 e261: 6 total, 6 up, 6 in
Feb 20 10:02:54 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4d276c40-22b5-4463-ba95-b271179ed697", "snap_name": "22d02fe9-b677-4d4d-ab93-02062be24947_54d3ff10-d9fa-4f52-83b0-fa0cfd9ed293", "force": true, "format": "json"}]: dispatch
Feb 20 10:02:54 np0005625203.localdomain ceph-mon[296066]: osdmap e260: 6 total, 6 up, 6 in
Feb 20 10:02:54 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "4d276c40-22b5-4463-ba95-b271179ed697", "snap_name": "22d02fe9-b677-4d4d-ab93-02062be24947", "force": true, "format": "json"}]: dispatch
Feb 20 10:02:54 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 10:02:55 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:02:55 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e262 e262: 6 total, 6 up, 6 in
Feb 20 10:02:56 np0005625203.localdomain ceph-mon[296066]: pgmap v607: 177 pgs: 177 active+clean; 216 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1.3 KiB/s rd, 119 KiB/s wr, 11 op/s
Feb 20 10:02:56 np0005625203.localdomain ceph-mon[296066]: osdmap e261: 6 total, 6 up, 6 in
Feb 20 10:02:56 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e03d5299-b683-42d5-861f-4ae99be1be37", "snap_name": "f9be8701-f7e0-4195-b63f-b542fb246c4d_982c66f3-6550-45d5-a41b-e698844f2247", "force": true, "format": "json"}]: dispatch
Feb 20 10:02:56 np0005625203.localdomain ceph-mon[296066]: mgrmap e54: np0005625202.arwxwo(active, since 14m), standbys: np0005625203.lonygy, np0005625204.exgrzx
Feb 20 10:02:56 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e263 e263: 6 total, 6 up, 6 in
Feb 20 10:02:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:57.287 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:02:57 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "e03d5299-b683-42d5-861f-4ae99be1be37", "snap_name": "f9be8701-f7e0-4195-b63f-b542fb246c4d", "force": true, "format": "json"}]: dispatch
Feb 20 10:02:57 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d", "snap_name": "faea6402-68f1-475f-9cea-5f24a4d2c2b9", "format": "json"}]: dispatch
Feb 20 10:02:57 np0005625203.localdomain ceph-mon[296066]: pgmap v609: 177 pgs: 177 active+clean; 216 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1.7 KiB/s rd, 177 KiB/s wr, 16 op/s
Feb 20 10:02:57 np0005625203.localdomain ceph-mon[296066]: osdmap e262: 6 total, 6 up, 6 in
Feb 20 10:02:57 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4d276c40-22b5-4463-ba95-b271179ed697", "format": "json"}]: dispatch
Feb 20 10:02:57 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4d276c40-22b5-4463-ba95-b271179ed697", "force": true, "format": "json"}]: dispatch
Feb 20 10:02:57 np0005625203.localdomain ceph-mon[296066]: osdmap e263: 6 total, 6 up, 6 in
Feb 20 10:02:58 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e264 e264: 6 total, 6 up, 6 in
Feb 20 10:02:58 np0005625203.localdomain podman[240359]: time="2026-02-20T10:02:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 10:02:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:10:02:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 155894 "" "Go-http-client/1.1"
Feb 20 10:02:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:10:02:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18312 "" "Go-http-client/1.1"
Feb 20 10:02:59 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "e03d5299-b683-42d5-861f-4ae99be1be37", "format": "json"}]: dispatch
Feb 20 10:02:59 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "e03d5299-b683-42d5-861f-4ae99be1be37", "force": true, "format": "json"}]: dispatch
Feb 20 10:02:59 np0005625203.localdomain ceph-mon[296066]: pgmap v612: 177 pgs: 177 active+clean; 216 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 905 B/s rd, 115 KiB/s wr, 9 op/s
Feb 20 10:02:59 np0005625203.localdomain ceph-mon[296066]: osdmap e264: 6 total, 6 up, 6 in
Feb 20 10:02:59 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:02:59.342 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:02:59 np0005625203.localdomain sshd[328039]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 10:03:00 np0005625203.localdomain sshd[328039]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 10:03:00 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:03:00 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d", "snap_name": "faea6402-68f1-475f-9cea-5f24a4d2c2b9_25b67a37-0482-4bde-b478-a143700074f7", "force": true, "format": "json"}]: dispatch
Feb 20 10:03:00 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d", "snap_name": "faea6402-68f1-475f-9cea-5f24a4d2c2b9", "force": true, "format": "json"}]: dispatch
Feb 20 10:03:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #55. Immutable memtables: 0.
Feb 20 10:03:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:03:00.227404) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 20 10:03:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 55
Feb 20 10:03:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581780227485, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2575, "num_deletes": 259, "total_data_size": 4390922, "memory_usage": 4562096, "flush_reason": "Manual Compaction"}
Feb 20 10:03:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #56: started
Feb 20 10:03:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581780243929, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 56, "file_size": 2872263, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 32535, "largest_seqno": 35105, "table_properties": {"data_size": 2861951, "index_size": 6369, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2885, "raw_key_size": 25539, "raw_average_key_size": 22, "raw_value_size": 2839917, "raw_average_value_size": 2478, "num_data_blocks": 273, "num_entries": 1146, "num_filter_entries": 1146, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771581663, "oldest_key_time": 1771581663, "file_creation_time": 1771581780, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d7091244-ec7b-4a4c-98bf-27480b1bb7f4", "db_session_id": "XSHOJ401GNN3F43CMPC3", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Feb 20 10:03:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 16577 microseconds, and 7835 cpu microseconds.
Feb 20 10:03:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 20 10:03:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:03:00.243986) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #56: 2872263 bytes OK
Feb 20 10:03:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:03:00.244012) [db/memtable_list.cc:519] [default] Level-0 commit table #56 started
Feb 20 10:03:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:03:00.245946) [db/memtable_list.cc:722] [default] Level-0 commit table #56: memtable #1 done
Feb 20 10:03:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:03:00.245966) EVENT_LOG_v1 {"time_micros": 1771581780245960, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 20 10:03:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:03:00.245991) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 20 10:03:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 4378674, prev total WAL file size 4378674, number of live WAL files 2.
Feb 20 10:03:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000052.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 10:03:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:03:00.247233) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132353530' seq:72057594037927935, type:22 .. '7061786F73003132383032' seq:0, type:0; will stop at (end)
Feb 20 10:03:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 20 10:03:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [56(2804KB)], [54(17MB)]
Feb 20 10:03:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581780247366, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [56], "files_L6": [54], "score": -1, "input_data_size": 20918585, "oldest_snapshot_seqno": -1}
Feb 20 10:03:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #57: 14307 keys, 19489593 bytes, temperature: kUnknown
Feb 20 10:03:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581780336361, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 57, "file_size": 19489593, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 19406665, "index_size": 46091, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35781, "raw_key_size": 381407, "raw_average_key_size": 26, "raw_value_size": 19162629, "raw_average_value_size": 1339, "num_data_blocks": 1735, "num_entries": 14307, "num_filter_entries": 14307, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580698, "oldest_key_time": 0, "file_creation_time": 1771581780, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d7091244-ec7b-4a4c-98bf-27480b1bb7f4", "db_session_id": "XSHOJ401GNN3F43CMPC3", "orig_file_number": 57, "seqno_to_time_mapping": "N/A"}}
Feb 20 10:03:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 20 10:03:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:03:00.336720) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 19489593 bytes
Feb 20 10:03:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:03:00.338390) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 234.8 rd, 218.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 17.2 +0.0 blob) out(18.6 +0.0 blob), read-write-amplify(14.1) write-amplify(6.8) OK, records in: 14848, records dropped: 541 output_compression: NoCompression
Feb 20 10:03:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:03:00.338418) EVENT_LOG_v1 {"time_micros": 1771581780338404, "job": 32, "event": "compaction_finished", "compaction_time_micros": 89110, "compaction_time_cpu_micros": 56938, "output_level": 6, "num_output_files": 1, "total_output_size": 19489593, "num_input_records": 14848, "num_output_records": 14307, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 20 10:03:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 10:03:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581780339172, "job": 32, "event": "table_file_deletion", "file_number": 56}
Feb 20 10:03:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000054.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 10:03:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581780342028, "job": 32, "event": "table_file_deletion", "file_number": 54}
Feb 20 10:03:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:03:00.247043) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 10:03:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:03:00.342090) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 10:03:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:03:00.342096) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 10:03:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:03:00.342099) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 10:03:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:03:00.342103) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 10:03:00 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:03:00.342106) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 10:03:01 np0005625203.localdomain ceph-mon[296066]: pgmap v614: 177 pgs: 177 active+clean; 216 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 777 B/s rd, 100 KiB/s wr, 9 op/s
Feb 20 10:03:01 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 10:03:01 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 10:03:01 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 10:03:01 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:03:01 np0005625203.localdomain podman[328041]: 2026-02-20 10:03:01.778523048 +0000 UTC m=+0.082787802 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 20 10:03:01 np0005625203.localdomain podman[328042]: 2026-02-20 10:03:01.835842221 +0000 UTC m=+0.135622166 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 20 10:03:01 np0005625203.localdomain podman[328042]: 2026-02-20 10:03:01.846226993 +0000 UTC m=+0.146006918 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 20 10:03:01 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 10:03:01 np0005625203.localdomain podman[328041]: 2026-02-20 10:03:01.866300964 +0000 UTC m=+0.170565738 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 10:03:01 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 10:03:02 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:03:02 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/536207374' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 10:03:02 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/536207374' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 10:03:02 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e265 e265: 6 total, 6 up, 6 in
Feb 20 10:03:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:02.289 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:03:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:02.291 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:03:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:02.291 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 20 10:03:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:02.291 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 10:03:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:02.340 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:03:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:02.340 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 10:03:03 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e266 e266: 6 total, 6 up, 6 in
Feb 20 10:03:03 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "ccc69125-8271-465f-a7cf-99b18598188c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 10:03:03 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "ccc69125-8271-465f-a7cf-99b18598188c", "format": "json"}]: dispatch
Feb 20 10:03:03 np0005625203.localdomain ceph-mon[296066]: pgmap v615: 177 pgs: 177 active+clean; 217 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1023 B/s rd, 100 KiB/s wr, 7 op/s
Feb 20 10:03:03 np0005625203.localdomain ceph-mon[296066]: osdmap e265: 6 total, 6 up, 6 in
Feb 20 10:03:03 np0005625203.localdomain ceph-mon[296066]: osdmap e266: 6 total, 6 up, 6 in
Feb 20 10:03:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:03.342 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:03:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:03.359 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 10:03:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:03.359 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 10:03:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:03.360 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 10:03:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:03.360 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Auditing locally available compute resources for np0005625203.localdomain (node: np0005625203.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 20 10:03:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:03.360 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 10:03:03 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 20 10:03:03 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2069351798' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 10:03:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:03.784 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 10:03:04 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:04.002 279640 WARNING nova.virt.libvirt.driver [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 20 10:03:04 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:04.004 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Hypervisor/Node resource view: name=np0005625203.localdomain free_ram=11573MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 20 10:03:04 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:04.005 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 10:03:04 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:04.005 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 10:03:04 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:04.071 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 20 10:03:04 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:04.072 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Final resource view: name=np0005625203.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 20 10:03:04 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:04.091 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 10:03:04 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d", "snap_name": "7729a724-86fe-4d43-a1c1-675788136bbb", "format": "json"}]: dispatch
Feb 20 10:03:04 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/2069351798' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 10:03:04 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 20 10:03:04 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/495270293' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 10:03:04 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:04.517 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 10:03:04 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:04.524 279640 DEBUG nova.compute.provider_tree [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Inventory has not changed in ProviderTree for provider: e5d5157a-2df2-4f51-b5fb-cd2da3a8584e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 20 10:03:04 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:04.538 279640 DEBUG nova.scheduler.client.report [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Inventory has not changed for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 20 10:03:04 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:04.540 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Compute_service record updated for np0005625203.localdomain:np0005625203.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 20 10:03:04 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:04.541 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.536s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 10:03:05 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:03:05 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:05.541 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:03:05 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:05.541 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:03:06 np0005625203.localdomain sshd[328129]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 10:03:06 np0005625203.localdomain ceph-mon[296066]: pgmap v618: 177 pgs: 177 active+clean; 217 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1023 B/s rd, 100 KiB/s wr, 7 op/s
Feb 20 10:03:06 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/495270293' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 10:03:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   10:03:07 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 10:03:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 10:03:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   10:03:07 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 10:03:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 10:03:07 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:07.340 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:03:07 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "ccc69125-8271-465f-a7cf-99b18598188c", "snap_name": "7873ec63-b44a-47a7-8bbe-8f944c5b9a9d", "format": "json"}]: dispatch
Feb 20 10:03:07 np0005625203.localdomain ceph-mon[296066]: pgmap v619: 177 pgs: 177 active+clean; 217 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1.3 KiB/s rd, 124 KiB/s wr, 9 op/s
Feb 20 10:03:07 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/1846256840' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 10:03:07 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/2235935279' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 10:03:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:03:07.676 161112 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 10:03:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:03:07.677 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 10:03:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:03:07.677 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 10:03:07 np0005625203.localdomain sshd[328129]: Invalid user oracle from 34.131.211.42 port 37234
Feb 20 10:03:07 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:07.742 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:03:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:03:07.741 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'c2:c2:14', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '46:d0:69:88:97:47'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 10:03:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:03:07.743 161112 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 20 10:03:07 np0005625203.localdomain sshd[328129]: Received disconnect from 34.131.211.42 port 37234:11: Bye Bye [preauth]
Feb 20 10:03:07 np0005625203.localdomain sshd[328129]: Disconnected from invalid user oracle 34.131.211.42 port 37234 [preauth]
Feb 20 10:03:08 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e267 e267: 6 total, 6 up, 6 in
Feb 20 10:03:08 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:08.342 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:03:08 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:08.342 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:03:08 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:08.342 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 20 10:03:08 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 10:03:08 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:03:09 np0005625203.localdomain ceph-mon[296066]: pgmap v620: 177 pgs: 177 active+clean; 217 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1.2 KiB/s rd, 117 KiB/s wr, 8 op/s
Feb 20 10:03:09 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d", "snap_name": "7729a724-86fe-4d43-a1c1-675788136bbb_64b5f976-f108-4782-8a7f-3fdbde23df86", "force": true, "format": "json"}]: dispatch
Feb 20 10:03:09 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d", "snap_name": "7729a724-86fe-4d43-a1c1-675788136bbb", "force": true, "format": "json"}]: dispatch
Feb 20 10:03:09 np0005625203.localdomain ceph-mon[296066]: osdmap e267: 6 total, 6 up, 6 in
Feb 20 10:03:09 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:03:09 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:09.342 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:03:10 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:03:10 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "c93903cf-3015-40b5-a970-9c042e7db919", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 10:03:10 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "c93903cf-3015-40b5-a970-9c042e7db919", "format": "json"}]: dispatch
Feb 20 10:03:10 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/2808071945' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 10:03:11 np0005625203.localdomain ceph-mon[296066]: pgmap v622: 177 pgs: 177 active+clean; 217 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 549 B/s rd, 45 KiB/s wr, 3 op/s
Feb 20 10:03:11 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/3652150787' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 10:03:11 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:11.337 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:03:11 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 10:03:11 np0005625203.localdomain systemd[1]: tmp-crun.RP1F1Q.mount: Deactivated successfully.
Feb 20 10:03:11 np0005625203.localdomain podman[328131]: 2026-02-20 10:03:11.770002241 +0000 UTC m=+0.087296062 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 20 10:03:11 np0005625203.localdomain podman[328131]: 2026-02-20 10:03:11.80551203 +0000 UTC m=+0.122805851 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 10:03:11 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 10:03:12 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e268 e268: 6 total, 6 up, 6 in
Feb 20 10:03:12 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:12.342 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:03:13 np0005625203.localdomain ceph-mon[296066]: pgmap v623: 177 pgs: 177 active+clean; 217 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 480 B/s rd, 80 KiB/s wr, 5 op/s
Feb 20 10:03:13 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "c93903cf-3015-40b5-a970-9c042e7db919", "format": "json"}]: dispatch
Feb 20 10:03:13 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "c93903cf-3015-40b5-a970-9c042e7db919", "force": true, "format": "json"}]: dispatch
Feb 20 10:03:13 np0005625203.localdomain ceph-mon[296066]: osdmap e268: 6 total, 6 up, 6 in
Feb 20 10:03:13 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:13.342 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:03:13 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:13.342 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 20 10:03:13 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:13.343 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 20 10:03:13 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:13.360 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 20 10:03:14 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d", "snap_name": "73ef4b22-cb69-44e4-9b94-352c732420be_2d5a5261-1879-492b-a5b5-9562795ecfa9", "force": true, "format": "json"}]: dispatch
Feb 20 10:03:14 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d", "snap_name": "73ef4b22-cb69-44e4-9b94-352c732420be", "force": true, "format": "json"}]: dispatch
Feb 20 10:03:14 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:03:14.745 161112 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1e4d60e6-0be0-4143-b488-1b391fbc71ef, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 20 10:03:15 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:03:15 np0005625203.localdomain ceph-mon[296066]: pgmap v625: 177 pgs: 177 active+clean; 217 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 43 KiB/s wr, 2 op/s
Feb 20 10:03:15 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 10:03:15 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:03:16 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "949fccd6-b67a-49fe-87fe-ecca6c52f167", "size": 2147483648, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 10:03:16 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "949fccd6-b67a-49fe-87fe-ecca6c52f167", "format": "json"}]: dispatch
Feb 20 10:03:16 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:03:16 np0005625203.localdomain sshd[328149]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 10:03:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:03:17.205 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:03:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:03:17.206 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:03:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:03:17.206 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:03:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:03:17.206 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:03:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:03:17.207 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:03:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:03:17.207 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:03:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:03:17.207 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:03:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:03:17.207 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:03:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:03:17.207 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:03:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:03:17.207 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:03:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:03:17.207 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:03:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:03:17.208 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:03:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:03:17.208 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:03:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:03:17.208 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:03:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:03:17.208 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:03:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:03:17.208 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:03:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:03:17.208 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:03:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:03:17.209 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:03:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:03:17.209 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:03:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:03:17.209 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:03:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:03:17.209 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:03:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:03:17.209 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:03:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:03:17.209 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:03:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:03:17.210 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:03:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:03:17.210 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:03:17 np0005625203.localdomain ceph-mon[296066]: pgmap v626: 177 pgs: 177 active+clean; 218 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 97 KiB/s wr, 6 op/s
Feb 20 10:03:17 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:17.345 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:03:17 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e269 e269: 6 total, 6 up, 6 in
Feb 20 10:03:17 np0005625203.localdomain sshd[328149]: Invalid user iksi from 185.196.11.208 port 47888
Feb 20 10:03:17 np0005625203.localdomain sshd[328149]: Received disconnect from 185.196.11.208 port 47888:11: Bye Bye [preauth]
Feb 20 10:03:17 np0005625203.localdomain sshd[328149]: Disconnected from invalid user iksi 185.196.11.208 port 47888 [preauth]
Feb 20 10:03:18 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e270 e270: 6 total, 6 up, 6 in
Feb 20 10:03:18 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d", "format": "json"}]: dispatch
Feb 20 10:03:18 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "59be1d8c-08e1-4dc1-89a5-72e5d1dcad7d", "force": true, "format": "json"}]: dispatch
Feb 20 10:03:18 np0005625203.localdomain ceph-mon[296066]: osdmap e269: 6 total, 6 up, 6 in
Feb 20 10:03:18 np0005625203.localdomain ceph-mon[296066]: osdmap e270: 6 total, 6 up, 6 in
Feb 20 10:03:18 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 10:03:18 np0005625203.localdomain podman[328151]: 2026-02-20 10:03:18.768120007 +0000 UTC m=+0.081008857 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Feb 20 10:03:18 np0005625203.localdomain podman[328151]: 2026-02-20 10:03:18.848400301 +0000 UTC m=+0.161289141 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Feb 20 10:03:18 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 10:03:19 np0005625203.localdomain ceph-mon[296066]: pgmap v628: 177 pgs: 177 active+clean; 218 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 97 KiB/s wr, 6 op/s
Feb 20 10:03:19 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 10:03:19 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 10:03:19 np0005625203.localdomain podman[328177]: 2026-02-20 10:03:19.753459942 +0000 UTC m=+0.068710138 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 10:03:19 np0005625203.localdomain systemd[1]: tmp-crun.LIYUOr.mount: Deactivated successfully.
Feb 20 10:03:19 np0005625203.localdomain podman[328178]: 2026-02-20 10:03:19.779010142 +0000 UTC m=+0.087408885 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, release=1770267347, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9/ubi-minimal, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, build-date=2026-02-05T04:57:10Z, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_id=openstack_network_exporter, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, io.buildah.version=1.33.7, distribution-scope=public, vendor=Red Hat, Inc., org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Feb 20 10:03:19 np0005625203.localdomain podman[328178]: 2026-02-20 10:03:19.818245976 +0000 UTC m=+0.126644709 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9, org.opencontainers.image.created=2026-02-05T04:57:10Z, build-date=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1770267347, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, version=9.7, distribution-scope=public, config_id=openstack_network_exporter, managed_by=edpm_ansible, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Feb 20 10:03:19 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 10:03:19 np0005625203.localdomain podman[328177]: 2026-02-20 10:03:19.837254614 +0000 UTC m=+0.152504830 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute)
Feb 20 10:03:19 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 10:03:20 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:03:20 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "949fccd6-b67a-49fe-87fe-ecca6c52f167", "format": "json"}]: dispatch
Feb 20 10:03:20 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "949fccd6-b67a-49fe-87fe-ecca6c52f167", "force": true, "format": "json"}]: dispatch
Feb 20 10:03:21 np0005625203.localdomain ceph-mon[296066]: pgmap v630: 177 pgs: 177 active+clean; 218 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 549 B/s rd, 58 KiB/s wr, 4 op/s
Feb 20 10:03:22 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 10:03:22 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:03:22 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:22.347 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:03:22 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:22.348 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:03:22 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:22.348 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 20 10:03:22 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:22.348 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 10:03:22 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:22.349 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 10:03:22 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:22.350 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:03:22 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:03:23 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e271 e271: 6 total, 6 up, 6 in
Feb 20 10:03:23 np0005625203.localdomain ceph-mon[296066]: pgmap v631: 177 pgs: 177 active+clean; 218 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1023 B/s rd, 131 KiB/s wr, 7 op/s
Feb 20 10:03:23 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "d3ef7044-f1cf-4b2a-bbcd-13fc0eea0006", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 10:03:23 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "d3ef7044-f1cf-4b2a-bbcd-13fc0eea0006", "format": "json"}]: dispatch
Feb 20 10:03:23 np0005625203.localdomain ceph-mon[296066]: osdmap e271: 6 total, 6 up, 6 in
Feb 20 10:03:25 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:03:25 np0005625203.localdomain ceph-mon[296066]: pgmap v633: 177 pgs: 177 active+clean; 218 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 642 B/s rd, 97 KiB/s wr, 5 op/s
Feb 20 10:03:25 np0005625203.localdomain ceph-mon[296066]: mgrmap e55: np0005625202.arwxwo(active, since 15m), standbys: np0005625203.lonygy, np0005625204.exgrzx
Feb 20 10:03:26 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "d3ef7044-f1cf-4b2a-bbcd-13fc0eea0006", "format": "json"}]: dispatch
Feb 20 10:03:26 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "d3ef7044-f1cf-4b2a-bbcd-13fc0eea0006", "force": true, "format": "json"}]: dispatch
Feb 20 10:03:27 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 10:03:27 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:03:27 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:27.351 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:03:27 np0005625203.localdomain ceph-mon[296066]: pgmap v634: 177 pgs: 177 active+clean; 218 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 767 B/s rd, 97 KiB/s wr, 6 op/s
Feb 20 10:03:27 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:03:28 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "7e87e16b-7877-4c1f-b090-d18ae7e41b0b", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 10:03:28 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "7e87e16b-7877-4c1f-b090-d18ae7e41b0b", "format": "json"}]: dispatch
Feb 20 10:03:28 np0005625203.localdomain ceph-mon[296066]: pgmap v635: 177 pgs: 177 active+clean; 218 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 645 B/s rd, 81 KiB/s wr, 5 op/s
Feb 20 10:03:28 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 10:03:28 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:03:28 np0005625203.localdomain podman[240359]: time="2026-02-20T10:03:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 10:03:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:10:03:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 155894 "" "Go-http-client/1.1"
Feb 20 10:03:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:10:03:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18313 "" "Go-http-client/1.1"
Feb 20 10:03:29 np0005625203.localdomain sshd[328216]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 10:03:29 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "29715001-dc5b-4019-b356-72b85ec77e38", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 10:03:29 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "29715001-dc5b-4019-b356-72b85ec77e38", "format": "json"}]: dispatch
Feb 20 10:03:29 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:03:29 np0005625203.localdomain sshd[328216]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 10:03:30 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:03:30 np0005625203.localdomain ceph-mon[296066]: pgmap v636: 177 pgs: 177 active+clean; 218 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 78 KiB/s wr, 5 op/s
Feb 20 10:03:30 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot create", "vol_name": "cephfs", "sub_name": "7e87e16b-7877-4c1f-b090-d18ae7e41b0b", "snap_name": "16d51dc9-4db6-4a28-af31-2025dc25f7ce", "format": "json"}]: dispatch
Feb 20 10:03:32 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:32.353 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:03:32 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:32.354 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:03:32 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:32.355 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 20 10:03:32 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:32.355 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 10:03:32 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:32.356 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 10:03:32 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:32.358 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:03:32 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 10:03:32 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 10:03:32 np0005625203.localdomain podman[328219]: 2026-02-20 10:03:32.76731023 +0000 UTC m=+0.081914658 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 20 10:03:32 np0005625203.localdomain podman[328219]: 2026-02-20 10:03:32.780421685 +0000 UTC m=+0.095026123 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 20 10:03:32 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 10:03:32 np0005625203.localdomain ceph-mon[296066]: pgmap v637: 177 pgs: 177 active+clean; 219 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 67 KiB/s wr, 4 op/s
Feb 20 10:03:32 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "29715001-dc5b-4019-b356-72b85ec77e38", "format": "json"}]: dispatch
Feb 20 10:03:32 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "29715001-dc5b-4019-b356-72b85ec77e38", "force": true, "format": "json"}]: dispatch
Feb 20 10:03:32 np0005625203.localdomain podman[328218]: 2026-02-20 10:03:32.821718684 +0000 UTC m=+0.138554161 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 20 10:03:32 np0005625203.localdomain podman[328218]: 2026-02-20 10:03:32.831321541 +0000 UTC m=+0.148157058 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 20 10:03:32 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 10:03:35 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:03:35 np0005625203.localdomain ceph-mon[296066]: pgmap v638: 177 pgs: 177 active+clean; 219 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 389 B/s rd, 64 KiB/s wr, 4 op/s
Feb 20 10:03:35 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "7e87e16b-7877-4c1f-b090-d18ae7e41b0b", "snap_name": "16d51dc9-4db6-4a28-af31-2025dc25f7ce_1e999bb9-0d1d-43f6-a0e7-cfabc45a8c84", "force": true, "format": "json"}]: dispatch
Feb 20 10:03:35 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "7e87e16b-7877-4c1f-b090-d18ae7e41b0b", "snap_name": "16d51dc9-4db6-4a28-af31-2025dc25f7ce", "force": true, "format": "json"}]: dispatch
Feb 20 10:03:35 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Feb 20 10:03:35 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:03:36 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume create", "vol_name": "cephfs", "sub_name": "4cb8de8b-c2e5-47d8-bfa6-1319d78b7e6c", "size": 1073741824, "namespace_isolated": true, "mode": "0755", "format": "json"}]: dispatch
Feb 20 10:03:36 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume getpath", "vol_name": "cephfs", "sub_name": "4cb8de8b-c2e5-47d8-bfa6-1319d78b7e6c", "format": "json"}]: dispatch
Feb 20 10:03:36 np0005625203.localdomain ceph-mon[296066]: from='client.25631 172.18.0.34:0/3503108328' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Feb 20 10:03:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   10:03:37 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 10:03:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 10:03:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   10:03:37 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 10:03:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 10:03:37 np0005625203.localdomain ceph-mon[296066]: pgmap v639: 177 pgs: 177 active+clean; 219 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 96 KiB/s wr, 5 op/s
Feb 20 10:03:37 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e272 e272: 6 total, 6 up, 6 in
Feb 20 10:03:37 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:37.357 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:03:38 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "7e87e16b-7877-4c1f-b090-d18ae7e41b0b", "format": "json"}]: dispatch
Feb 20 10:03:38 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "7e87e16b-7877-4c1f-b090-d18ae7e41b0b", "force": true, "format": "json"}]: dispatch
Feb 20 10:03:38 np0005625203.localdomain ceph-mon[296066]: osdmap e272: 6 total, 6 up, 6 in
Feb 20 10:03:39 np0005625203.localdomain ceph-mon[296066]: pgmap v641: 177 pgs: 177 active+clean; 219 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 99 KiB/s wr, 5 op/s
Feb 20 10:03:40 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:03:40 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "4cb8de8b-c2e5-47d8-bfa6-1319d78b7e6c", "format": "json"}]: dispatch
Feb 20 10:03:40 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "4cb8de8b-c2e5-47d8-bfa6-1319d78b7e6c", "force": true, "format": "json"}]: dispatch
Feb 20 10:03:41 np0005625203.localdomain ceph-mon[296066]: pgmap v642: 177 pgs: 177 active+clean; 219 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 409 B/s rd, 99 KiB/s wr, 5 op/s
Feb 20 10:03:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:42.359 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:03:42 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 10:03:42 np0005625203.localdomain podman[328261]: 2026-02-20 10:03:42.778337392 +0000 UTC m=+0.082364731 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Feb 20 10:03:42 np0005625203.localdomain podman[328261]: 2026-02-20 10:03:42.811300072 +0000 UTC m=+0.115327441 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 20 10:03:42 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 10:03:43 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e273 e273: 6 total, 6 up, 6 in
Feb 20 10:03:43 np0005625203.localdomain ceph-mon[296066]: pgmap v643: 177 pgs: 177 active+clean; 220 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 614 B/s rd, 108 KiB/s wr, 5 op/s
Feb 20 10:03:43 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ccc69125-8271-465f-a7cf-99b18598188c", "snap_name": "7873ec63-b44a-47a7-8bbe-8f944c5b9a9d_f5381e37-8883-4e7e-9251-e9f21c220e6c", "force": true, "format": "json"}]: dispatch
Feb 20 10:03:43 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume snapshot rm", "vol_name": "cephfs", "sub_name": "ccc69125-8271-465f-a7cf-99b18598188c", "snap_name": "7873ec63-b44a-47a7-8bbe-8f944c5b9a9d", "force": true, "format": "json"}]: dispatch
Feb 20 10:03:43 np0005625203.localdomain ceph-mon[296066]: osdmap e273: 6 total, 6 up, 6 in
Feb 20 10:03:45 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:03:45 np0005625203.localdomain ceph-mon[296066]: pgmap v645: 177 pgs: 177 active+clean; 220 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 75 KiB/s wr, 4 op/s
Feb 20 10:03:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:47.361 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:03:47 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs clone status", "vol_name": "cephfs", "clone_name": "ccc69125-8271-465f-a7cf-99b18598188c", "format": "json"}]: dispatch
Feb 20 10:03:47 np0005625203.localdomain ceph-mon[296066]: from='client.25631 -' entity='client.openstack' cmd=[{"prefix": "fs subvolume rm", "vol_name": "cephfs", "sub_name": "ccc69125-8271-465f-a7cf-99b18598188c", "force": true, "format": "json"}]: dispatch
Feb 20 10:03:47 np0005625203.localdomain ceph-mon[296066]: pgmap v646: 177 pgs: 177 active+clean; 220 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 969 B/s rd, 100 KiB/s wr, 7 op/s
Feb 20 10:03:47 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e274 e274: 6 total, 6 up, 6 in
Feb 20 10:03:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:47.637 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:03:47 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:03:47.636 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, ssl=[], options={'arp_ns_explicit_output': 'true', 'fdb_removal_limit': '0', 'ignore_lsp_down': 'false', 'mac_binding_removal_limit': '0', 'mac_prefix': 'c2:c2:14', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '46:d0:69:88:97:47'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 10:03:47 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:03:47.638 161112 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Feb 20 10:03:48 np0005625203.localdomain ceph-mon[296066]: osdmap e274: 6 total, 6 up, 6 in
Feb 20 10:03:49 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 10:03:49 np0005625203.localdomain podman[328279]: 2026-02-20 10:03:49.259558308 +0000 UTC m=+0.095291840 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Feb 20 10:03:49 np0005625203.localdomain podman[328279]: 2026-02-20 10:03:49.375373263 +0000 UTC m=+0.211106805 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true)
Feb 20 10:03:49 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 10:03:49 np0005625203.localdomain ceph-mon[296066]: pgmap v648: 177 pgs: 177 active+clean; 220 MiB data, 1.2 GiB used, 41 GiB / 42 GiB avail; 1023 B/s rd, 105 KiB/s wr, 6 op/s
Feb 20 10:03:49 np0005625203.localdomain sshd[328304]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 10:03:50 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:03:50 np0005625203.localdomain sshd[328304]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 10:03:50 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 10:03:50 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 10:03:50 np0005625203.localdomain podman[328307]: 2026-02-20 10:03:50.565173124 +0000 UTC m=+0.078020137 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2026-02-05T04:57:10Z, architecture=x86_64, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-type=git, managed_by=edpm_ansible, release=1770267347, io.openshift.expose-services=, org.opencontainers.image.created=2026-02-05T04:57:10Z, config_id=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.7)
Feb 20 10:03:50 np0005625203.localdomain podman[328307]: 2026-02-20 10:03:50.581480699 +0000 UTC m=+0.094327672 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, release=1770267347, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, version=9.7, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.openshift.tags=minimal rhel9, build-date=2026-02-05T04:57:10Z, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=edpm_ansible, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., architecture=x86_64, config_id=openstack_network_exporter, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, distribution-scope=public, name=ubi9/ubi-minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Feb 20 10:03:50 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 10:03:50 np0005625203.localdomain podman[328306]: 2026-02-20 10:03:50.680161013 +0000 UTC m=+0.192813870 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Feb 20 10:03:50 np0005625203.localdomain podman[328306]: 2026-02-20 10:03:50.716591271 +0000 UTC m=+0.229244118 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 20 10:03:50 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 10:03:51 np0005625203.localdomain ceph-mon[296066]: pgmap v649: 177 pgs: 177 active+clean; 220 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 511 B/s rd, 31 KiB/s wr, 4 op/s
Feb 20 10:03:51 np0005625203.localdomain sudo[328346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 10:03:51 np0005625203.localdomain sudo[328346]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 10:03:51 np0005625203.localdomain sudo[328346]: pam_unix(sudo:session): session closed for user root
Feb 20 10:03:51 np0005625203.localdomain sudo[328364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 10:03:51 np0005625203.localdomain sudo[328364]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 10:03:52 np0005625203.localdomain sudo[328364]: pam_unix(sudo:session): session closed for user root
Feb 20 10:03:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:52.362 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:03:52 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 10:03:52 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 10:03:52 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 10:03:52 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 10:03:52 np0005625203.localdomain sudo[328414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 10:03:52 np0005625203.localdomain sudo[328414]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 10:03:52 np0005625203.localdomain sudo[328414]: pam_unix(sudo:session): session closed for user root
Feb 20 10:03:53 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e275 e275: 6 total, 6 up, 6 in
Feb 20 10:03:53 np0005625203.localdomain ceph-mon[296066]: pgmap v650: 177 pgs: 177 active+clean; 220 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 722 B/s rd, 60 KiB/s wr, 5 op/s
Feb 20 10:03:53 np0005625203.localdomain ceph-mon[296066]: osdmap e275: 6 total, 6 up, 6 in
Feb 20 10:03:54 np0005625203.localdomain ceph-mon[296066]: pgmap v652: 177 pgs: 177 active+clean; 220 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 255 B/s rd, 32 KiB/s wr, 1 op/s
Feb 20 10:03:54 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 10:03:55 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:03:55 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 10:03:55.081 262775 INFO neutron.agent.linux.ip_lib [None req-cad783c4-872a-487e-b9c7-58f1209e012c - - - - - -] Device tap804c0cb3-39 cannot be used as it has no MAC address
Feb 20 10:03:55 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:55.105 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:03:55 np0005625203.localdomain kernel: device tap804c0cb3-39 entered promiscuous mode
Feb 20 10:03:55 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581835.1159] manager: (tap804c0cb3-39): new Generic device (/org/freedesktop/NetworkManager/Devices/78)
Feb 20 10:03:55 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T10:03:55Z|00424|binding|INFO|Claiming lport 804c0cb3-39d6-4344-89f9-a8e1065998b1 for this chassis.
Feb 20 10:03:55 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:55.118 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:03:55 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T10:03:55Z|00425|binding|INFO|804c0cb3-39d6-4344-89f9-a8e1065998b1: Claiming unknown
Feb 20 10:03:55 np0005625203.localdomain systemd-udevd[328442]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 10:03:55 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:03:55.128 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-b48ee4e5-3b06-47bf-864e-3255f62768c0', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b48ee4e5-3b06-47bf-864e-3255f62768c0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5b72c01c3bfb4852b1b096bf3c216228', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f023e590-7126-4c37-b417-5a87ebac58c8, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=804c0cb3-39d6-4344-89f9-a8e1065998b1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 10:03:55 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:03:55.131 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 804c0cb3-39d6-4344-89f9-a8e1065998b1 in datapath b48ee4e5-3b06-47bf-864e-3255f62768c0 bound to our chassis
Feb 20 10:03:55 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:03:55.133 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network b48ee4e5-3b06-47bf-864e-3255f62768c0 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 10:03:55 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:03:55.135 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[8b48b2d4-e813-4484-a2db-ce87c9b466e8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 10:03:55 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap804c0cb3-39: No such device
Feb 20 10:03:55 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T10:03:55Z|00426|binding|INFO|Setting lport 804c0cb3-39d6-4344-89f9-a8e1065998b1 ovn-installed in OVS
Feb 20 10:03:55 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T10:03:55Z|00427|binding|INFO|Setting lport 804c0cb3-39d6-4344-89f9-a8e1065998b1 up in Southbound
Feb 20 10:03:55 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:55.149 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:03:55 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap804c0cb3-39: No such device
Feb 20 10:03:55 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap804c0cb3-39: No such device
Feb 20 10:03:55 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap804c0cb3-39: No such device
Feb 20 10:03:55 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap804c0cb3-39: No such device
Feb 20 10:03:55 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap804c0cb3-39: No such device
Feb 20 10:03:55 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap804c0cb3-39: No such device
Feb 20 10:03:55 np0005625203.localdomain virtnodedevd[228478]: ethtool ioctl error on tap804c0cb3-39: No such device
Feb 20 10:03:55 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:55.194 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:03:55 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:55.225 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:03:55 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:03:55.642 161112 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=1e4d60e6-0be0-4143-b488-1b391fbc71ef, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Feb 20 10:03:56 np0005625203.localdomain podman[328513]: 
Feb 20 10:03:56 np0005625203.localdomain podman[328513]: 2026-02-20 10:03:56.354688849 +0000 UTC m=+0.088003974 container create 68641c5cbfbe5395c39d63c423d685d99f25849c9567879fd141ee57249b9fa1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b48ee4e5-3b06-47bf-864e-3255f62768c0, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 10:03:56 np0005625203.localdomain systemd[1]: Started libpod-conmon-68641c5cbfbe5395c39d63c423d685d99f25849c9567879fd141ee57249b9fa1.scope.
Feb 20 10:03:56 np0005625203.localdomain systemd[1]: tmp-crun.ItF0F5.mount: Deactivated successfully.
Feb 20 10:03:56 np0005625203.localdomain podman[328513]: 2026-02-20 10:03:56.311626187 +0000 UTC m=+0.044941312 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 10:03:56 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 10:03:56 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c90ec696c966b121e85f9c962581bfa79be93ce24232f8f040a6ef0c757e0c4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 10:03:56 np0005625203.localdomain podman[328513]: 2026-02-20 10:03:56.438010159 +0000 UTC m=+0.171325294 container init 68641c5cbfbe5395c39d63c423d685d99f25849c9567879fd141ee57249b9fa1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b48ee4e5-3b06-47bf-864e-3255f62768c0, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0)
Feb 20 10:03:56 np0005625203.localdomain podman[328513]: 2026-02-20 10:03:56.446932625 +0000 UTC m=+0.180247760 container start 68641c5cbfbe5395c39d63c423d685d99f25849c9567879fd141ee57249b9fa1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b48ee4e5-3b06-47bf-864e-3255f62768c0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 10:03:56 np0005625203.localdomain dnsmasq[328532]: started, version 2.85 cachesize 150
Feb 20 10:03:56 np0005625203.localdomain dnsmasq[328532]: DNS service limited to local subnets
Feb 20 10:03:56 np0005625203.localdomain dnsmasq[328532]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 10:03:56 np0005625203.localdomain dnsmasq[328532]: warning: no upstream servers configured
Feb 20 10:03:56 np0005625203.localdomain dnsmasq-dhcp[328532]: DHCP, static leases only on 10.100.0.0, lease time 1d
Feb 20 10:03:56 np0005625203.localdomain dnsmasq[328532]: read /var/lib/neutron/dhcp/b48ee4e5-3b06-47bf-864e-3255f62768c0/addn_hosts - 0 addresses
Feb 20 10:03:56 np0005625203.localdomain dnsmasq-dhcp[328532]: read /var/lib/neutron/dhcp/b48ee4e5-3b06-47bf-864e-3255f62768c0/host
Feb 20 10:03:56 np0005625203.localdomain dnsmasq-dhcp[328532]: read /var/lib/neutron/dhcp/b48ee4e5-3b06-47bf-864e-3255f62768c0/opts
Feb 20 10:03:56 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 10:03:56.937 262775 INFO neutron.agent.dhcp.agent [None req-f129df94-bfd4-4379-9cac-d4e6ab5c9e6f - - - - - -] DHCP configuration for ports {'bfbd9b82-82f3-402c-83bd-744602a0880a'} is completed
Feb 20 10:03:57 np0005625203.localdomain ceph-mon[296066]: pgmap v653: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 246 B/s rd, 47 KiB/s wr, 2 op/s
Feb 20 10:03:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:57.364 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:03:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #58. Immutable memtables: 0.
Feb 20 10:03:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:03:58.256370) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 20 10:03:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 58
Feb 20 10:03:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581838256450, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 1194, "num_deletes": 265, "total_data_size": 1782344, "memory_usage": 1804784, "flush_reason": "Manual Compaction"}
Feb 20 10:03:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #59: started
Feb 20 10:03:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581838265072, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 59, "file_size": 1169299, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35110, "largest_seqno": 36299, "table_properties": {"data_size": 1164271, "index_size": 2435, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12322, "raw_average_key_size": 20, "raw_value_size": 1153574, "raw_average_value_size": 1935, "num_data_blocks": 106, "num_entries": 596, "num_filter_entries": 596, "num_deletions": 265, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771581781, "oldest_key_time": 1771581781, "file_creation_time": 1771581838, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d7091244-ec7b-4a4c-98bf-27480b1bb7f4", "db_session_id": "XSHOJ401GNN3F43CMPC3", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Feb 20 10:03:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 8756 microseconds, and 4320 cpu microseconds.
Feb 20 10:03:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 20 10:03:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:03:58.265131) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #59: 1169299 bytes OK
Feb 20 10:03:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:03:58.265155) [db/memtable_list.cc:519] [default] Level-0 commit table #59 started
Feb 20 10:03:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:03:58.267661) [db/memtable_list.cc:722] [default] Level-0 commit table #59: memtable #1 done
Feb 20 10:03:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:03:58.267681) EVENT_LOG_v1 {"time_micros": 1771581838267675, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 20 10:03:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:03:58.267703) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 20 10:03:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 1776349, prev total WAL file size 1776673, number of live WAL files 2.
Feb 20 10:03:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000055.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 10:03:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:03:58.268551) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034323732' seq:72057594037927935, type:22 .. '6C6F676D0034353234' seq:0, type:0; will stop at (end)
Feb 20 10:03:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 20 10:03:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [59(1141KB)], [57(18MB)]
Feb 20 10:03:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581838268628, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [59], "files_L6": [57], "score": -1, "input_data_size": 20658892, "oldest_snapshot_seqno": -1}
Feb 20 10:03:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #60: 14355 keys, 20442618 bytes, temperature: kUnknown
Feb 20 10:03:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581838359610, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 60, "file_size": 20442618, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 20357930, "index_size": 47723, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35909, "raw_key_size": 383754, "raw_average_key_size": 26, "raw_value_size": 20111620, "raw_average_value_size": 1401, "num_data_blocks": 1798, "num_entries": 14355, "num_filter_entries": 14355, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580698, "oldest_key_time": 0, "file_creation_time": 1771581838, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d7091244-ec7b-4a4c-98bf-27480b1bb7f4", "db_session_id": "XSHOJ401GNN3F43CMPC3", "orig_file_number": 60, "seqno_to_time_mapping": "N/A"}}
Feb 20 10:03:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 20 10:03:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:03:58.359860) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 20442618 bytes
Feb 20 10:03:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:03:58.361946) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 226.9 rd, 224.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 18.6 +0.0 blob) out(19.5 +0.0 blob), read-write-amplify(35.2) write-amplify(17.5) OK, records in: 14903, records dropped: 548 output_compression: NoCompression
Feb 20 10:03:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:03:58.361973) EVENT_LOG_v1 {"time_micros": 1771581838361961, "job": 34, "event": "compaction_finished", "compaction_time_micros": 91034, "compaction_time_cpu_micros": 55756, "output_level": 6, "num_output_files": 1, "total_output_size": 20442618, "num_input_records": 14903, "num_output_records": 14355, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 20 10:03:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 10:03:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581838362266, "job": 34, "event": "table_file_deletion", "file_number": 59}
Feb 20 10:03:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000057.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 10:03:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581838365384, "job": 34, "event": "table_file_deletion", "file_number": 57}
Feb 20 10:03:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:03:58.268441) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 10:03:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:03:58.365439) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 10:03:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:03:58.365603) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 10:03:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:03:58.365610) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 10:03:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:03:58.365613) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 10:03:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:03:58.365616) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 10:03:58 np0005625203.localdomain podman[240359]: time="2026-02-20T10:03:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 10:03:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:10:03:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 157718 "" "Go-http-client/1.1"
Feb 20 10:03:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:10:03:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18773 "" "Go-http-client/1.1"
Feb 20 10:03:59 np0005625203.localdomain ceph-mon[296066]: pgmap v654: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 39 KiB/s wr, 2 op/s
Feb 20 10:03:59 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:03:59.342 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:03:59 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 10:03:59.980 262775 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T10:03:59Z, description=, device_id=0fbc926f-1dd2-40aa-9227-c636fc57c1ff, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4e63820>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4e63b20>], id=409b266e-93ee-4b95-be96-7d73b1eba841, ip_allocation=immediate, mac_address=fa:16:3e:a0:32:53, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T10:03:52Z, description=, dns_domain=, id=b48ee4e5-3b06-47bf-864e-3255f62768c0, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TelemetryAlarmingAPIAdminTest-125679221-network, port_security_enabled=True, project_id=5b72c01c3bfb4852b1b096bf3c216228, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=27830, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3921, status=ACTIVE, subnets=['b9e5bd63-fc09-4837-bb69-d48ab0e2c425'], tags=[], tenant_id=5b72c01c3bfb4852b1b096bf3c216228, updated_at=2026-02-20T10:03:54Z, vlan_transparent=None, network_id=b48ee4e5-3b06-47bf-864e-3255f62768c0, port_security_enabled=False, project_id=5b72c01c3bfb4852b1b096bf3c216228, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3928, status=DOWN, tags=[], tenant_id=5b72c01c3bfb4852b1b096bf3c216228, updated_at=2026-02-20T10:03:59Z on network b48ee4e5-3b06-47bf-864e-3255f62768c0
Feb 20 10:04:00 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:04:00 np0005625203.localdomain dnsmasq[328532]: read /var/lib/neutron/dhcp/b48ee4e5-3b06-47bf-864e-3255f62768c0/addn_hosts - 1 addresses
Feb 20 10:04:00 np0005625203.localdomain dnsmasq-dhcp[328532]: read /var/lib/neutron/dhcp/b48ee4e5-3b06-47bf-864e-3255f62768c0/host
Feb 20 10:04:00 np0005625203.localdomain podman[328549]: 2026-02-20 10:04:00.252369354 +0000 UTC m=+0.062019310 container kill 68641c5cbfbe5395c39d63c423d685d99f25849c9567879fd141ee57249b9fa1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b48ee4e5-3b06-47bf-864e-3255f62768c0, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Feb 20 10:04:00 np0005625203.localdomain dnsmasq-dhcp[328532]: read /var/lib/neutron/dhcp/b48ee4e5-3b06-47bf-864e-3255f62768c0/opts
Feb 20 10:04:00 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 10:04:00.498 262775 INFO neutron.agent.dhcp.agent [None req-5c03eecc-183e-43d1-b927-5b79952e214d - - - - - -] DHCP configuration for ports {'409b266e-93ee-4b95-be96-7d73b1eba841'} is completed
Feb 20 10:04:00 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 10:04:00.643 262775 INFO neutron.agent.dhcp.agent [-] Trigger reload_allocations for port admin_state_up=True, allowed_address_pairs=[], binding:host_id=, binding:profile=, binding:vif_details=, binding:vif_type=unbound, binding:vnic_type=normal, created_at=2026-02-20T10:03:59Z, description=, device_id=0fbc926f-1dd2-40aa-9227-c636fc57c1ff, device_owner=network:router_interface, dns_assignment=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4df3af0>], dns_domain=, dns_name=, extra_dhcp_opts=[], fixed_ips=[<neutron.agent.linux.dhcp.DictModel object at 0x7f7da4df3d90>], id=409b266e-93ee-4b95-be96-7d73b1eba841, ip_allocation=immediate, mac_address=fa:16:3e:a0:32:53, name=, network=admin_state_up=True, availability_zone_hints=[], availability_zones=[], created_at=2026-02-20T10:03:52Z, description=, dns_domain=, id=b48ee4e5-3b06-47bf-864e-3255f62768c0, ipv4_address_scope=None, ipv6_address_scope=None, l2_adjacency=True, mtu=1442, name=tempest-TelemetryAlarmingAPIAdminTest-125679221-network, port_security_enabled=True, project_id=5b72c01c3bfb4852b1b096bf3c216228, provider:network_type=geneve, provider:physical_network=None, provider:segmentation_id=27830, qos_policy_id=None, revision_number=2, router:external=False, shared=False, standard_attr_id=3921, status=ACTIVE, subnets=['b9e5bd63-fc09-4837-bb69-d48ab0e2c425'], tags=[], tenant_id=5b72c01c3bfb4852b1b096bf3c216228, updated_at=2026-02-20T10:03:54Z, vlan_transparent=None, network_id=b48ee4e5-3b06-47bf-864e-3255f62768c0, port_security_enabled=False, project_id=5b72c01c3bfb4852b1b096bf3c216228, qos_network_policy_id=None, qos_policy_id=None, resource_request=None, revision_number=1, security_groups=[], standard_attr_id=3928, status=DOWN, tags=[], tenant_id=5b72c01c3bfb4852b1b096bf3c216228, updated_at=2026-02-20T10:03:59Z on network b48ee4e5-3b06-47bf-864e-3255f62768c0
Feb 20 10:04:00 np0005625203.localdomain dnsmasq[328532]: read /var/lib/neutron/dhcp/b48ee4e5-3b06-47bf-864e-3255f62768c0/addn_hosts - 1 addresses
Feb 20 10:04:00 np0005625203.localdomain dnsmasq-dhcp[328532]: read /var/lib/neutron/dhcp/b48ee4e5-3b06-47bf-864e-3255f62768c0/host
Feb 20 10:04:00 np0005625203.localdomain dnsmasq-dhcp[328532]: read /var/lib/neutron/dhcp/b48ee4e5-3b06-47bf-864e-3255f62768c0/opts
Feb 20 10:04:00 np0005625203.localdomain podman[328588]: 2026-02-20 10:04:00.85435611 +0000 UTC m=+0.056880843 container kill 68641c5cbfbe5395c39d63c423d685d99f25849c9567879fd141ee57249b9fa1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b48ee4e5-3b06-47bf-864e-3255f62768c0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true)
Feb 20 10:04:01 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 10:04:01.176 262775 INFO neutron.agent.dhcp.agent [None req-2633beb8-f0ed-4b1f-862d-33967117c34c - - - - - -] DHCP configuration for ports {'409b266e-93ee-4b95-be96-7d73b1eba841'} is completed
Feb 20 10:04:01 np0005625203.localdomain ceph-mon[296066]: pgmap v655: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 204 B/s rd, 39 KiB/s wr, 1 op/s
Feb 20 10:04:02 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T10:04:02Z|00428|ovn_bfd|INFO|Enabled BFD on interface ovn-0c414b-0
Feb 20 10:04:02 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T10:04:02Z|00429|ovn_bfd|INFO|Enabled BFD on interface ovn-2275c3-0
Feb 20 10:04:02 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T10:04:02Z|00430|ovn_bfd|INFO|Enabled BFD on interface ovn-2df8cc-0
Feb 20 10:04:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:02.213 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:04:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:02.226 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:04:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:02.232 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:04:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:02.273 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:04:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:02.311 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:04:02 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/1304243398' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 10:04:02 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/1304243398' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 10:04:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:02.326 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:04:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:02.367 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:04:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:03.121 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:04:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:03.186 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:04:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:03.242 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:04:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:03.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:04:03 np0005625203.localdomain ceph-mon[296066]: pgmap v656: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 13 KiB/s wr, 0 op/s
Feb 20 10:04:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:03.361 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 10:04:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:03.361 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 10:04:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:03.361 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 10:04:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:03.362 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Auditing locally available compute resources for np0005625203.localdomain (node: np0005625203.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 20 10:04:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:03.362 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 10:04:03 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 10:04:03 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 10:04:03 np0005625203.localdomain podman[328630]: 2026-02-20 10:04:03.76856556 +0000 UTC m=+0.085741155 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 10:04:03 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 20 10:04:03 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/244285166' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 10:04:03 np0005625203.localdomain podman[328630]: 2026-02-20 10:04:03.777651641 +0000 UTC m=+0.094827236 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Feb 20 10:04:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:03.787 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 10:04:03 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 10:04:03 np0005625203.localdomain podman[328631]: 2026-02-20 10:04:03.878023399 +0000 UTC m=+0.193397058 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 20 10:04:03 np0005625203.localdomain podman[328631]: 2026-02-20 10:04:03.885799899 +0000 UTC m=+0.201173538 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 20 10:04:03 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 10:04:04 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:04.016 279640 WARNING nova.virt.libvirt.driver [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 20 10:04:04 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:04.018 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Hypervisor/Node resource view: name=np0005625203.localdomain free_ram=11554MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 20 10:04:04 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:04.018 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 10:04:04 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:04.019 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 10:04:04 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:04.098 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 20 10:04:04 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:04.099 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Final resource view: name=np0005625203.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 20 10:04:04 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:04.117 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 10:04:04 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/244285166' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 10:04:04 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 20 10:04:04 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2811233635' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 10:04:04 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:04.563 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 10:04:04 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:04.569 279640 DEBUG nova.compute.provider_tree [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Inventory has not changed in ProviderTree for provider: e5d5157a-2df2-4f51-b5fb-cd2da3a8584e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 20 10:04:04 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:04.582 279640 DEBUG nova.scheduler.client.report [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Inventory has not changed for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 20 10:04:04 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:04.583 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Compute_service record updated for np0005625203.localdomain:np0005625203.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 20 10:04:04 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:04.583 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.564s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 10:04:05 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:04:05 np0005625203.localdomain ceph-mon[296066]: pgmap v657: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 13 KiB/s wr, 0 op/s
Feb 20 10:04:05 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/2811233635' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 10:04:06 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:06.584 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:04:06 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:06.585 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:04:06 np0005625203.localdomain ceph-mon[296066]: pgmap v658: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 13 KiB/s wr, 0 op/s
Feb 20 10:04:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   10:04:07 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 10:04:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   10:04:07 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 10:04:07 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:07.368 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:04:07 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:07.373 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:04:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:04:07.677 161112 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 10:04:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:04:07.678 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 10:04:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:04:07.678 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 10:04:08 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:08.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:04:08 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:08.342 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:04:08 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:08.342 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 20 10:04:08 np0005625203.localdomain ceph-mon[296066]: pgmap v659: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 2.0 KiB/s wr, 0 op/s
Feb 20 10:04:08 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/3556237270' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 10:04:08 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T10:04:08Z|00431|ovn_bfd|INFO|Disabled BFD on interface ovn-0c414b-0
Feb 20 10:04:08 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T10:04:08Z|00432|ovn_bfd|INFO|Disabled BFD on interface ovn-2275c3-0
Feb 20 10:04:08 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T10:04:08Z|00433|ovn_bfd|INFO|Disabled BFD on interface ovn-2df8cc-0
Feb 20 10:04:09 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:09.024 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:04:09 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:09.029 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:04:09 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:09.040 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:04:09 np0005625203.localdomain dnsmasq[328532]: read /var/lib/neutron/dhcp/b48ee4e5-3b06-47bf-864e-3255f62768c0/addn_hosts - 0 addresses
Feb 20 10:04:09 np0005625203.localdomain dnsmasq-dhcp[328532]: read /var/lib/neutron/dhcp/b48ee4e5-3b06-47bf-864e-3255f62768c0/host
Feb 20 10:04:09 np0005625203.localdomain podman[328718]: 2026-02-20 10:04:09.12176721 +0000 UTC m=+0.042882129 container kill 68641c5cbfbe5395c39d63c423d685d99f25849c9567879fd141ee57249b9fa1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b48ee4e5-3b06-47bf-864e-3255f62768c0, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Feb 20 10:04:09 np0005625203.localdomain dnsmasq-dhcp[328532]: read /var/lib/neutron/dhcp/b48ee4e5-3b06-47bf-864e-3255f62768c0/opts
Feb 20 10:04:09 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T10:04:09Z|00434|binding|INFO|Releasing lport 804c0cb3-39d6-4344-89f9-a8e1065998b1 from this chassis (sb_readonly=0)
Feb 20 10:04:09 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:09.301 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:04:09 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T10:04:09Z|00435|binding|INFO|Setting lport 804c0cb3-39d6-4344-89f9-a8e1065998b1 down in Southbound
Feb 20 10:04:09 np0005625203.localdomain kernel: device tap804c0cb3-39 left promiscuous mode
Feb 20 10:04:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:04:09.309 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-b48ee4e5-3b06-47bf-864e-3255f62768c0', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b48ee4e5-3b06-47bf-864e-3255f62768c0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5b72c01c3bfb4852b1b096bf3c216228', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625203.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f023e590-7126-4c37-b417-5a87ebac58c8, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=804c0cb3-39d6-4344-89f9-a8e1065998b1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 10:04:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:04:09.311 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 804c0cb3-39d6-4344-89f9-a8e1065998b1 in datapath b48ee4e5-3b06-47bf-864e-3255f62768c0 unbound from our chassis
Feb 20 10:04:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:04:09.314 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b48ee4e5-3b06-47bf-864e-3255f62768c0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 10:04:09 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:04:09.314 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[691c2349-3ba7-470a-ab2f-30cb2520628c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 10:04:09 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:09.320 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:04:09 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/3887188390' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 10:04:10 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:04:10 np0005625203.localdomain ceph-mon[296066]: pgmap v660: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 2.0 KiB/s wr, 0 op/s
Feb 20 10:04:11 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:11.343 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:04:11 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/974015829' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 10:04:11 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/3485150094' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 10:04:12 np0005625203.localdomain dnsmasq[328532]: exiting on receipt of SIGTERM
Feb 20 10:04:12 np0005625203.localdomain podman[328757]: 2026-02-20 10:04:12.130811376 +0000 UTC m=+0.059209794 container kill 68641c5cbfbe5395c39d63c423d685d99f25849c9567879fd141ee57249b9fa1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b48ee4e5-3b06-47bf-864e-3255f62768c0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 10:04:12 np0005625203.localdomain systemd[1]: libpod-68641c5cbfbe5395c39d63c423d685d99f25849c9567879fd141ee57249b9fa1.scope: Deactivated successfully.
Feb 20 10:04:12 np0005625203.localdomain podman[328771]: 2026-02-20 10:04:12.201114932 +0000 UTC m=+0.057079458 container died 68641c5cbfbe5395c39d63c423d685d99f25849c9567879fd141ee57249b9fa1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b48ee4e5-3b06-47bf-864e-3255f62768c0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Feb 20 10:04:12 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-68641c5cbfbe5395c39d63c423d685d99f25849c9567879fd141ee57249b9fa1-userdata-shm.mount: Deactivated successfully.
Feb 20 10:04:12 np0005625203.localdomain podman[328771]: 2026-02-20 10:04:12.237638862 +0000 UTC m=+0.093603348 container cleanup 68641c5cbfbe5395c39d63c423d685d99f25849c9567879fd141ee57249b9fa1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b48ee4e5-3b06-47bf-864e-3255f62768c0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4)
Feb 20 10:04:12 np0005625203.localdomain systemd[1]: libpod-conmon-68641c5cbfbe5395c39d63c423d685d99f25849c9567879fd141ee57249b9fa1.scope: Deactivated successfully.
Feb 20 10:04:12 np0005625203.localdomain podman[328778]: 2026-02-20 10:04:12.281318955 +0000 UTC m=+0.125201747 container remove 68641c5cbfbe5395c39d63c423d685d99f25849c9567879fd141ee57249b9fa1 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-b48ee4e5-3b06-47bf-864e-3255f62768c0, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Feb 20 10:04:12 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 10:04:12.306 262775 INFO neutron.agent.dhcp.agent [None req-52d2abe5-dc39-41c7-a91d-621b83d7f5a1 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 10:04:12 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 10:04:12.370 262775 INFO neutron.agent.dhcp.agent [-] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 10:04:12 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:12.393 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:04:12 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:12.606 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:04:12 np0005625203.localdomain ceph-mon[296066]: pgmap v661: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 2.0 KiB/s wr, 0 op/s
Feb 20 10:04:12 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 10:04:13 np0005625203.localdomain podman[328802]: 2026-02-20 10:04:13.022700574 +0000 UTC m=+0.085707184 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260127)
Feb 20 10:04:13 np0005625203.localdomain podman[328802]: 2026-02-20 10:04:13.053728834 +0000 UTC m=+0.116735444 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Feb 20 10:04:13 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 10:04:13 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-8c90ec696c966b121e85f9c962581bfa79be93ce24232f8f040a6ef0c757e0c4-merged.mount: Deactivated successfully.
Feb 20 10:04:13 np0005625203.localdomain systemd[1]: run-netns-qdhcp\x2db48ee4e5\x2d3b06\x2d47bf\x2d864e\x2d3255f62768c0.mount: Deactivated successfully.
Feb 20 10:04:13 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:13.337 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:04:13 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:13.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:04:13 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:13.341 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 20 10:04:13 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:13.341 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 20 10:04:13 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:13.353 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 20 10:04:14 np0005625203.localdomain sshd[328820]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 10:04:14 np0005625203.localdomain sshd[328820]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 10:04:15 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:04:15 np0005625203.localdomain ceph-mon[296066]: pgmap v662: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 2.0 KiB/s wr, 0 op/s
Feb 20 10:04:17 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:17.395 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:04:17 np0005625203.localdomain ceph-mon[296066]: pgmap v663: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 2.0 KiB/s wr, 0 op/s
Feb 20 10:04:19 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:19.349 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:04:19 np0005625203.localdomain ceph-mon[296066]: pgmap v664: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:04:19 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 10:04:19 np0005625203.localdomain systemd[1]: tmp-crun.iXNorL.mount: Deactivated successfully.
Feb 20 10:04:19 np0005625203.localdomain podman[328822]: 2026-02-20 10:04:19.778135399 +0000 UTC m=+0.094381343 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 20 10:04:19 np0005625203.localdomain podman[328822]: 2026-02-20 10:04:19.843143491 +0000 UTC m=+0.159389415 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Feb 20 10:04:19 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 10:04:20 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:04:20 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 10:04:20 np0005625203.localdomain podman[328848]: 2026-02-20 10:04:20.765514463 +0000 UTC m=+0.080763421 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., architecture=x86_64, config_id=openstack_network_exporter, io.buildah.version=1.33.7, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.7, maintainer=Red Hat, Inc., org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, name=ubi9/ubi-minimal, container_name=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, release=1770267347, com.redhat.component=ubi9-minimal-container, build-date=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers)
Feb 20 10:04:20 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 10:04:20 np0005625203.localdomain podman[328848]: 2026-02-20 10:04:20.8074205 +0000 UTC m=+0.122669438 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., maintainer=Red Hat, Inc., build-date=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, name=ubi9/ubi-minimal, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, url=https://catalog.redhat.com/en/search?searchType=containers, release=1770267347, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.7, io.openshift.tags=minimal rhel9, config_id=openstack_network_exporter, managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, architecture=x86_64, org.opencontainers.image.created=2026-02-05T04:57:10Z)
Feb 20 10:04:20 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 10:04:20 np0005625203.localdomain podman[328866]: 2026-02-20 10:04:20.868000816 +0000 UTC m=+0.080963288 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, 
org.label-schema.vendor=CentOS, tcib_managed=true)
Feb 20 10:04:20 np0005625203.localdomain podman[328866]: 2026-02-20 10:04:20.8842988 +0000 UTC m=+0.097261312 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, 
org.label-schema.schema-version=1.0)
Feb 20 10:04:20 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 10:04:21 np0005625203.localdomain ceph-mon[296066]: pgmap v665: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:04:22 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:22.399 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:04:22 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:22.401 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:04:22 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:22.402 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 20 10:04:22 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:22.402 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 10:04:22 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:22.429 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:04:22 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:22.430 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 10:04:23 np0005625203.localdomain ceph-mon[296066]: pgmap v666: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:04:25 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:04:25 np0005625203.localdomain ceph-mon[296066]: pgmap v667: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:04:26 np0005625203.localdomain ceph-mon[296066]: pgmap v668: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:04:27 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:27.431 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:04:27 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:27.432 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:04:27 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:27.432 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 20 10:04:27 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:27.433 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 10:04:27 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:27.434 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 10:04:27 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:27.436 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:04:28 np0005625203.localdomain podman[240359]: time="2026-02-20T10:04:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 10:04:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:10:04:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 155894 "" "Go-http-client/1.1"
Feb 20 10:04:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:10:04:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18309 "" "Go-http-client/1.1"
Feb 20 10:04:29 np0005625203.localdomain ceph-mon[296066]: pgmap v669: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:04:30 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:04:31 np0005625203.localdomain ceph-mon[296066]: pgmap v670: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:04:32 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:32.434 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:04:34 np0005625203.localdomain ceph-mon[296066]: pgmap v671: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:04:34 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 10:04:34 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 10:04:34 np0005625203.localdomain podman[328887]: 2026-02-20 10:04:34.679822694 +0000 UTC m=+0.079173442 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 10:04:34 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 10:04:34.679 262775 INFO neutron.agent.linux.ip_lib [None req-d1675050-fa73-4c89-afbb-39a65d17e5fd - - - - - -] Device tap9e46f26c-a6 cannot be used as it has no MAC address
Feb 20 10:04:34 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:34.698 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:04:34 np0005625203.localdomain kernel: device tap9e46f26c-a6 entered promiscuous mode
Feb 20 10:04:34 np0005625203.localdomain NetworkManager[5968]: <info>  [1771581874.7076] manager: (tap9e46f26c-a6): new Generic device (/org/freedesktop/NetworkManager/Devices/79)
Feb 20 10:04:34 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:34.711 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:04:34 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T10:04:34Z|00436|binding|INFO|Claiming lport 9e46f26c-a67e-487f-ac4c-7bf5f9cfb1fc for this chassis.
Feb 20 10:04:34 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T10:04:34Z|00437|binding|INFO|9e46f26c-a67e-487f-ac4c-7bf5f9cfb1fc: Claiming unknown
Feb 20 10:04:34 np0005625203.localdomain systemd-udevd[328924]: Network interface NamePolicy= disabled on kernel command line.
Feb 20 10:04:34 np0005625203.localdomain systemd-journald[48285]: Data hash table of /run/log/journal/01f46965e72fd8a157841feaa66c8d52/system.journal has a fill level at 75.0 (53723 of 71630 items, 25165824 file size, 468 bytes per hash table item), suggesting rotation.
Feb 20 10:04:34 np0005625203.localdomain systemd-journald[48285]: /run/log/journal/01f46965e72fd8a157841feaa66c8d52/system.journal: Journal header limits reached or header out-of-date, rotating.
Feb 20 10:04:34 np0005625203.localdomain rsyslogd[758]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Feb 20 10:04:34 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:04:34.721 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/16', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-0c5b7d5a-d314-43a1-b1b6-94d07aaad4ee', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0c5b7d5a-d314-43a1-b1b6-94d07aaad4ee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '50ba22298b9844c2b9853d9ca1060aa4', 'neutron:revision_number': '1', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ebc1bbd4-a1c7-4833-9ee1-0630ad3c717c, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=9e46f26c-a67e-487f-ac4c-7bf5f9cfb1fc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 10:04:34 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:04:34.723 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 9e46f26c-a67e-487f-ac4c-7bf5f9cfb1fc in datapath 0c5b7d5a-d314-43a1-b1b6-94d07aaad4ee bound to our chassis
Feb 20 10:04:34 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:34.723 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:04:34 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:04:34.727 161112 DEBUG neutron.agent.ovn.metadata.agent [-] Port 57d6459d-da53-41ea-9106-ba58a22b422f IP addresses were not retrieved from the Port_Binding MAC column ['unknown'] _get_port_ips /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:536
Feb 20 10:04:34 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:04:34.727 161112 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0c5b7d5a-d314-43a1-b1b6-94d07aaad4ee, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Feb 20 10:04:34 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T10:04:34Z|00438|binding|INFO|Setting lport 9e46f26c-a67e-487f-ac4c-7bf5f9cfb1fc ovn-installed in OVS
Feb 20 10:04:34 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T10:04:34Z|00439|binding|INFO|Setting lport 9e46f26c-a67e-487f-ac4c-7bf5f9cfb1fc up in Southbound
Feb 20 10:04:34 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:34.731 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:04:34 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:04:34.729 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[d6f35439-c44e-424f-920d-c04e8cbd5d8f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 10:04:34 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:34.758 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:04:34 np0005625203.localdomain podman[328888]: 2026-02-20 10:04:34.760950345 +0000 UTC m=+0.154957588 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 20 10:04:34 np0005625203.localdomain podman[328887]: 2026-02-20 10:04:34.76596274 +0000 UTC m=+0.165313498 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 10:04:34 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 10:04:34 np0005625203.localdomain podman[328888]: 2026-02-20 10:04:34.792930845 +0000 UTC m=+0.186938058 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 20 10:04:34 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:34.804 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:04:34 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 10:04:34 np0005625203.localdomain rsyslogd[758]: imjournal: journal files changed, reloading...  [v8.2102.0-111.el9 try https://www.rsyslog.com/e/0 ]
Feb 20 10:04:34 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:34.830 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:04:35 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:04:35 np0005625203.localdomain ceph-mon[296066]: pgmap v672: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:04:35 np0005625203.localdomain podman[328991]: 
Feb 20 10:04:35 np0005625203.localdomain podman[328991]: 2026-02-20 10:04:35.743577043 +0000 UTC m=+0.086632623 container create a915e13d8b7fe1eeb67f2e826206731a563f1a18176d068f88a7f92d2795f292 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0c5b7d5a-d314-43a1-b1b6-94d07aaad4ee, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Feb 20 10:04:35 np0005625203.localdomain systemd[1]: Started libpod-conmon-a915e13d8b7fe1eeb67f2e826206731a563f1a18176d068f88a7f92d2795f292.scope.
Feb 20 10:04:35 np0005625203.localdomain systemd[1]: tmp-crun.jKstbW.mount: Deactivated successfully.
Feb 20 10:04:35 np0005625203.localdomain podman[328991]: 2026-02-20 10:04:35.702725658 +0000 UTC m=+0.045781268 image pull  quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified
Feb 20 10:04:35 np0005625203.localdomain systemd[1]: Started libcrun container.
Feb 20 10:04:35 np0005625203.localdomain kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e1280e05283388bb38941eeb5f636f39b7f056ac32a7d8dbdad12917c9608b1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Feb 20 10:04:35 np0005625203.localdomain podman[328991]: 2026-02-20 10:04:35.828327206 +0000 UTC m=+0.171382796 container init a915e13d8b7fe1eeb67f2e826206731a563f1a18176d068f88a7f92d2795f292 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0c5b7d5a-d314-43a1-b1b6-94d07aaad4ee, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Feb 20 10:04:35 np0005625203.localdomain podman[328991]: 2026-02-20 10:04:35.846310702 +0000 UTC m=+0.189366292 container start a915e13d8b7fe1eeb67f2e826206731a563f1a18176d068f88a7f92d2795f292 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0c5b7d5a-d314-43a1-b1b6-94d07aaad4ee, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Feb 20 10:04:35 np0005625203.localdomain dnsmasq[329009]: started, version 2.85 cachesize 150
Feb 20 10:04:35 np0005625203.localdomain dnsmasq[329009]: DNS service limited to local subnets
Feb 20 10:04:35 np0005625203.localdomain dnsmasq[329009]: compile time options: IPv6 GNU-getopt DBus no-UBus no-i18n IDN2 DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth cryptohash DNSSEC loop-detect inotify dumpfile
Feb 20 10:04:35 np0005625203.localdomain dnsmasq[329009]: warning: no upstream servers configured
Feb 20 10:04:35 np0005625203.localdomain dnsmasq-dhcp[329009]: DHCP, static leases only on 10.100.0.0, lease time 1d
Feb 20 10:04:35 np0005625203.localdomain dnsmasq[329009]: read /var/lib/neutron/dhcp/0c5b7d5a-d314-43a1-b1b6-94d07aaad4ee/addn_hosts - 0 addresses
Feb 20 10:04:35 np0005625203.localdomain dnsmasq-dhcp[329009]: read /var/lib/neutron/dhcp/0c5b7d5a-d314-43a1-b1b6-94d07aaad4ee/host
Feb 20 10:04:35 np0005625203.localdomain dnsmasq-dhcp[329009]: read /var/lib/neutron/dhcp/0c5b7d5a-d314-43a1-b1b6-94d07aaad4ee/opts
Feb 20 10:04:36 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e276 e276: 6 total, 6 up, 6 in
Feb 20 10:04:36 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 10:04:36.401 262775 INFO neutron.agent.dhcp.agent [None req-519b7c09-9657-40b8-bdcb-54802fdd190e - - - - - -] DHCP configuration for ports {'3e1f2ebd-239b-40db-b495-e743f26518d3'} is completed
Feb 20 10:04:36 np0005625203.localdomain systemd[1]: tmp-crun.J5pA9n.mount: Deactivated successfully.
Feb 20 10:04:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   10:04:37 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 10:04:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 10:04:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   10:04:37 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 10:04:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 10:04:37 np0005625203.localdomain ceph-mon[296066]: pgmap v673: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 4.2 KiB/s rd, 682 B/s wr, 6 op/s
Feb 20 10:04:37 np0005625203.localdomain ceph-mon[296066]: osdmap e276: 6 total, 6 up, 6 in
Feb 20 10:04:37 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e277 e277: 6 total, 6 up, 6 in
Feb 20 10:04:37 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:37.437 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:04:38 np0005625203.localdomain ceph-mon[296066]: osdmap e277: 6 total, 6 up, 6 in
Feb 20 10:04:38 np0005625203.localdomain sshd[329010]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 10:04:39 np0005625203.localdomain sshd[329010]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 10:04:39 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T10:04:39Z|00440|binding|INFO|Removing iface tap9e46f26c-a6 ovn-installed in OVS
Feb 20 10:04:39 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T10:04:39Z|00441|binding|INFO|Removing lport 9e46f26c-a67e-487f-ac4c-7bf5f9cfb1fc ovn-installed in OVS
Feb 20 10:04:39 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:04:39.412 161112 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 57d6459d-da53-41ea-9106-ba58a22b422f with type ""
Feb 20 10:04:39 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:04:39.413 161112 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'np0005625203.localdomain'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/16', 'neutron:device_id': 'dhcp8f7c09ee-1b0c-5712-9fa4-dd6cf89a7df1-0c5b7d5a-d314-43a1-b1b6-94d07aaad4ee', 'neutron:device_owner': 'network:dhcp', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0c5b7d5a-d314-43a1-b1b6-94d07aaad4ee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '50ba22298b9844c2b9853d9ca1060aa4', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'np0005625203.localdomain'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ebc1bbd4-a1c7-4833-9ee1-0630ad3c717c, chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f52a50619d0>], logical_port=9e46f26c-a67e-487f-ac4c-7bf5f9cfb1fc) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Feb 20 10:04:39 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:04:39.414 161112 INFO neutron.agent.ovn.metadata.agent [-] Port 9e46f26c-a67e-487f-ac4c-7bf5f9cfb1fc in datapath 0c5b7d5a-d314-43a1-b1b6-94d07aaad4ee unbound from our chassis
Feb 20 10:04:39 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:04:39.415 161112 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 0c5b7d5a-d314-43a1-b1b6-94d07aaad4ee or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Feb 20 10:04:39 np0005625203.localdomain ceph-mon[296066]: pgmap v676: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 6.4 KiB/s rd, 1023 B/s wr, 9 op/s
Feb 20 10:04:39 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:04:39.934 305605 DEBUG oslo.privsep.daemon [-] privsep: reply[c138309a-18ba-4977-b6e0-7c7427e70dde]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Feb 20 10:04:39 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:39.936 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:04:39 np0005625203.localdomain dnsmasq[329009]: exiting on receipt of SIGTERM
Feb 20 10:04:39 np0005625203.localdomain podman[329029]: 2026-02-20 10:04:39.937866268 +0000 UTC m=+0.441868839 container kill a915e13d8b7fe1eeb67f2e826206731a563f1a18176d068f88a7f92d2795f292 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0c5b7d5a-d314-43a1-b1b6-94d07aaad4ee, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Feb 20 10:04:39 np0005625203.localdomain systemd[1]: libpod-a915e13d8b7fe1eeb67f2e826206731a563f1a18176d068f88a7f92d2795f292.scope: Deactivated successfully.
Feb 20 10:04:39 np0005625203.localdomain podman[329042]: 2026-02-20 10:04:39.973059328 +0000 UTC m=+0.028742541 container died a915e13d8b7fe1eeb67f2e826206731a563f1a18176d068f88a7f92d2795f292 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0c5b7d5a-d314-43a1-b1b6-94d07aaad4ee, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Feb 20 10:04:39 np0005625203.localdomain systemd[1]: tmp-crun.mrYqTb.mount: Deactivated successfully.
Feb 20 10:04:40 np0005625203.localdomain podman[329042]: 2026-02-20 10:04:40.001455557 +0000 UTC m=+0.057138760 container cleanup a915e13d8b7fe1eeb67f2e826206731a563f1a18176d068f88a7f92d2795f292 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0c5b7d5a-d314-43a1-b1b6-94d07aaad4ee, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 20 10:04:40 np0005625203.localdomain systemd[1]: libpod-conmon-a915e13d8b7fe1eeb67f2e826206731a563f1a18176d068f88a7f92d2795f292.scope: Deactivated successfully.
Feb 20 10:04:40 np0005625203.localdomain podman[329049]: 2026-02-20 10:04:40.01870563 +0000 UTC m=+0.062793085 container remove a915e13d8b7fe1eeb67f2e826206731a563f1a18176d068f88a7f92d2795f292 (image=quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified, name=neutron-dnsmasq-qdhcp-0c5b7d5a-d314-43a1-b1b6-94d07aaad4ee, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260127)
Feb 20 10:04:40 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:40.027 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:04:40 np0005625203.localdomain kernel: device tap9e46f26c-a6 left promiscuous mode
Feb 20 10:04:40 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:40.040 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:04:40 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:04:40 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 10:04:40.099 262775 INFO neutron.agent.dhcp.agent [None req-69efd85d-d636-4e03-bf07-d25f0c8c7c57 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 10:04:40 np0005625203.localdomain neutron_dhcp_agent[262771]: 2026-02-20 10:04:40.100 262775 INFO neutron.agent.dhcp.agent [None req-69efd85d-d636-4e03-bf07-d25f0c8c7c57 - - - - - -] Network not present, action: clean_devices, action_kwargs: {}
Feb 20 10:04:40 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay-3e1280e05283388bb38941eeb5f636f39b7f056ac32a7d8dbdad12917c9608b1-merged.mount: Deactivated successfully.
Feb 20 10:04:40 np0005625203.localdomain systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a915e13d8b7fe1eeb67f2e826206731a563f1a18176d068f88a7f92d2795f292-userdata-shm.mount: Deactivated successfully.
Feb 20 10:04:40 np0005625203.localdomain systemd[1]: run-netns-qdhcp\x2d0c5b7d5a\x2dd314\x2d43a1\x2db1b6\x2d94d07aaad4ee.mount: Deactivated successfully.
Feb 20 10:04:40 np0005625203.localdomain ceph-mon[296066]: pgmap v677: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 8.0 KiB/s rd, 1.6 MiB/s wr, 12 op/s
Feb 20 10:04:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:42.477 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:04:42 np0005625203.localdomain ceph-mon[296066]: pgmap v678: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 39 KiB/s rd, 2.6 MiB/s wr, 55 op/s
Feb 20 10:04:43 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e278 e278: 6 total, 6 up, 6 in
Feb 20 10:04:43 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 10:04:43 np0005625203.localdomain systemd[1]: tmp-crun.BiWKbF.mount: Deactivated successfully.
Feb 20 10:04:43 np0005625203.localdomain podman[329071]: 2026-02-20 10:04:43.775161823 +0000 UTC m=+0.089893194 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3)
Feb 20 10:04:43 np0005625203.localdomain podman[329071]: 2026-02-20 10:04:43.804972525 +0000 UTC m=+0.119703876 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Feb 20 10:04:43 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 10:04:44 np0005625203.localdomain ceph-mon[296066]: osdmap e278: 6 total, 6 up, 6 in
Feb 20 10:04:45 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:04:45 np0005625203.localdomain ceph-mon[296066]: pgmap v680: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 35 KiB/s rd, 2.7 MiB/s wr, 48 op/s
Feb 20 10:04:47 np0005625203.localdomain ceph-mon[296066]: pgmap v681: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 31 KiB/s rd, 2.4 MiB/s wr, 43 op/s
Feb 20 10:04:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:47.479 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:04:49 np0005625203.localdomain ceph-mon[296066]: pgmap v682: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 26 KiB/s rd, 2.0 MiB/s wr, 36 op/s
Feb 20 10:04:50 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:04:50 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 10:04:50 np0005625203.localdomain podman[329090]: 2026-02-20 10:04:50.763288962 +0000 UTC m=+0.079100810 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Feb 20 10:04:50 np0005625203.localdomain podman[329090]: 2026-02-20 10:04:50.801226196 +0000 UTC m=+0.117038074 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, tcib_managed=true, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 20 10:04:50 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 10:04:51 np0005625203.localdomain ceph-mon[296066]: pgmap v683: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 25 KiB/s rd, 821 KiB/s wr, 33 op/s
Feb 20 10:04:51 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 10:04:51 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 10:04:51 np0005625203.localdomain systemd[1]: tmp-crun.bTLVgS.mount: Deactivated successfully.
Feb 20 10:04:51 np0005625203.localdomain podman[329117]: 2026-02-20 10:04:51.864016675 +0000 UTC m=+0.174077650 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, build-date=2026-02-05T04:57:10Z, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1770267347, io.openshift.expose-services=, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, name=ubi9/ubi-minimal, architecture=x86_64, org.opencontainers.image.created=2026-02-05T04:57:10Z, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-type=git, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vendor=Red Hat, Inc.)
Feb 20 10:04:51 np0005625203.localdomain podman[329116]: 2026-02-20 10:04:51.825932477 +0000 UTC m=+0.139592673 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Feb 20 10:04:51 np0005625203.localdomain podman[329117]: 2026-02-20 10:04:51.900751732 +0000 UTC m=+0.210812687 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, version=9.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, name=ubi9/ubi-minimal, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, io.buildah.version=1.33.7, release=1770267347, build-date=2026-02-05T04:57:10Z, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, org.opencontainers.image.created=2026-02-05T04:57:10Z)
Feb 20 10:04:51 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 10:04:51 np0005625203.localdomain podman[329116]: 2026-02-20 10:04:51.957601612 +0000 UTC m=+0.271261788 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Feb 20 10:04:51 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 10:04:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:52.483 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:04:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:52.484 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:04:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:52.484 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 20 10:04:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:52.484 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 10:04:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:52.504 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:04:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:52.505 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 10:04:52 np0005625203.localdomain sudo[329156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 10:04:52 np0005625203.localdomain sudo[329156]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 10:04:52 np0005625203.localdomain sudo[329156]: pam_unix(sudo:session): session closed for user root
Feb 20 10:04:52 np0005625203.localdomain sudo[329174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 check-host
Feb 20 10:04:52 np0005625203.localdomain sudo[329174]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 10:04:53 np0005625203.localdomain sudo[329174]: pam_unix(sudo:session): session closed for user root
Feb 20 10:04:53 np0005625203.localdomain ceph-mon[296066]: pgmap v684: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:04:53 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 10:04:53 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 10:04:53 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 10:04:53 np0005625203.localdomain sudo[329214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 10:04:53 np0005625203.localdomain sudo[329214]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 10:04:53 np0005625203.localdomain sudo[329214]: pam_unix(sudo:session): session closed for user root
Feb 20 10:04:53 np0005625203.localdomain sudo[329232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 10:04:53 np0005625203.localdomain sudo[329232]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 10:04:54 np0005625203.localdomain sudo[329232]: pam_unix(sudo:session): session closed for user root
Feb 20 10:04:54 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 10:04:54 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 10:04:54 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 10:04:54 np0005625203.localdomain sudo[329282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 10:04:54 np0005625203.localdomain sudo[329282]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 10:04:54 np0005625203.localdomain sudo[329282]: pam_unix(sudo:session): session closed for user root
Feb 20 10:04:55 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:04:55 np0005625203.localdomain ceph-mon[296066]: pgmap v685: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:04:55 np0005625203.localdomain ceph-mon[296066]: mgrmap e56: np0005625202.arwxwo(active, since 16m), standbys: np0005625203.lonygy, np0005625204.exgrzx
Feb 20 10:04:55 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 10:04:55 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 10:04:55 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 10:04:55 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 10:04:57 np0005625203.localdomain ceph-mon[296066]: pgmap v686: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s wr, 0 op/s
Feb 20 10:04:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:57.506 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:04:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #61. Immutable memtables: 0.
Feb 20 10:04:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:04:58.310658) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 20 10:04:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 61
Feb 20 10:04:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581898310694, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 1035, "num_deletes": 251, "total_data_size": 1541318, "memory_usage": 1563632, "flush_reason": "Manual Compaction"}
Feb 20 10:04:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #62: started
Feb 20 10:04:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581898318448, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 62, "file_size": 1012543, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36304, "largest_seqno": 37334, "table_properties": {"data_size": 1008141, "index_size": 2065, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9542, "raw_average_key_size": 18, "raw_value_size": 999022, "raw_average_value_size": 1982, "num_data_blocks": 86, "num_entries": 504, "num_filter_entries": 504, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771581838, "oldest_key_time": 1771581838, "file_creation_time": 1771581898, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d7091244-ec7b-4a4c-98bf-27480b1bb7f4", "db_session_id": "XSHOJ401GNN3F43CMPC3", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Feb 20 10:04:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 7832 microseconds, and 3523 cpu microseconds.
Feb 20 10:04:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 20 10:04:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:04:58.318490) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #62: 1012543 bytes OK
Feb 20 10:04:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:04:58.318511) [db/memtable_list.cc:519] [default] Level-0 commit table #62 started
Feb 20 10:04:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:04:58.320245) [db/memtable_list.cc:722] [default] Level-0 commit table #62: memtable #1 done
Feb 20 10:04:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:04:58.320263) EVENT_LOG_v1 {"time_micros": 1771581898320257, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 20 10:04:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:04:58.320283) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 20 10:04:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 1536138, prev total WAL file size 1536138, number of live WAL files 2.
Feb 20 10:04:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000058.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 10:04:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:04:58.320934) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760031353238' seq:72057594037927935, type:22 .. '6B760031373739' seq:0, type:0; will stop at (end)
Feb 20 10:04:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 20 10:04:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [62(988KB)], [60(19MB)]
Feb 20 10:04:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581898321029, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [62], "files_L6": [60], "score": -1, "input_data_size": 21455161, "oldest_snapshot_seqno": -1}
Feb 20 10:04:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #63: 14328 keys, 20410079 bytes, temperature: kUnknown
Feb 20 10:04:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581898410428, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 63, "file_size": 20410079, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 20325766, "index_size": 47412, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35845, "raw_key_size": 384678, "raw_average_key_size": 26, "raw_value_size": 20079923, "raw_average_value_size": 1401, "num_data_blocks": 1768, "num_entries": 14328, "num_filter_entries": 14328, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580698, "oldest_key_time": 0, "file_creation_time": 1771581898, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d7091244-ec7b-4a4c-98bf-27480b1bb7f4", "db_session_id": "XSHOJ401GNN3F43CMPC3", "orig_file_number": 63, "seqno_to_time_mapping": "N/A"}}
Feb 20 10:04:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 20 10:04:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:04:58.410782) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 20410079 bytes
Feb 20 10:04:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:04:58.412976) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 239.8 rd, 228.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 19.5 +0.0 blob) out(19.5 +0.0 blob), read-write-amplify(41.3) write-amplify(20.2) OK, records in: 14859, records dropped: 531 output_compression: NoCompression
Feb 20 10:04:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:04:58.413004) EVENT_LOG_v1 {"time_micros": 1771581898412992, "job": 36, "event": "compaction_finished", "compaction_time_micros": 89481, "compaction_time_cpu_micros": 52167, "output_level": 6, "num_output_files": 1, "total_output_size": 20410079, "num_input_records": 14859, "num_output_records": 14328, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 20 10:04:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 10:04:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581898413309, "job": 36, "event": "table_file_deletion", "file_number": 62}
Feb 20 10:04:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000060.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 10:04:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581898416336, "job": 36, "event": "table_file_deletion", "file_number": 60}
Feb 20 10:04:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:04:58.320831) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 10:04:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:04:58.416396) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 10:04:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:04:58.416402) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 10:04:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:04:58.416406) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 10:04:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:04:58.416409) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 10:04:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:04:58.416412) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 10:04:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 20 10:04:58 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                           ** DB Stats **
                                                           Uptime(secs): 1200.0 total, 600.0 interval
                                                           Cumulative writes: 4850 writes, 37K keys, 4850 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.05 MB/s
                                                           Cumulative WAL: 4850 writes, 4850 syncs, 1.00 writes per sync, written: 0.06 GB, 0.05 MB/s
                                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                           Interval writes: 2507 writes, 13K keys, 2507 commit groups, 1.0 writes per commit group, ingest: 18.60 MB, 0.03 MB/s
                                                           Interval WAL: 2507 writes, 2507 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                                           
                                                           ** Compaction Stats [default] **
                                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    164.2      0.26              0.12        18    0.015       0      0       0.0       0.0
                                                             L6      1/0   19.46 MB   0.0      0.3     0.0      0.3       0.3      0.0       0.0   7.0    214.2    198.6      1.51              0.86        17    0.089    224K   8828       0.0       0.0
                                                            Sum      1/0   19.46 MB   0.0      0.3     0.0      0.3       0.3      0.1       0.0   8.0    182.5    193.5      1.78              0.98        35    0.051    224K   8828       0.0       0.0
                                                            Int      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.2      0.0       0.0  12.0    184.2    189.2      0.85              0.49        16    0.053    114K   4320       0.0       0.0
                                                           
                                                           ** Compaction Stats [default] **
                                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                                            Low      0/0    0.00 KB   0.0      0.3     0.0      0.3       0.3      0.0       0.0   0.0    214.2    198.6      1.51              0.86        17    0.089    224K   8828       0.0       0.0
                                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    166.0      0.26              0.12        17    0.015       0      0       0.0       0.0
                                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.003       0      0       0.0       0.0
                                                           
                                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                                           
                                                           Uptime(secs): 1200.0 total, 600.0 interval
                                                           Flush(GB): cumulative 0.042, interval 0.013
                                                           AddFile(GB): cumulative 0.000, interval 0.000
                                                           AddFile(Total Files): cumulative 0, interval 0
                                                           AddFile(L0 Files): cumulative 0, interval 0
                                                           AddFile(Keys): cumulative 0, interval 0
                                                           Cumulative compaction: 0.34 GB write, 0.29 MB/s write, 0.32 GB read, 0.27 MB/s read, 1.8 seconds
                                                           Interval compaction: 0.16 GB write, 0.27 MB/s write, 0.15 GB read, 0.26 MB/s read, 0.9 seconds
                                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                                           Block cache BinnedLRUCache@0x5619bae7b350#2 capacity: 304.00 MB usage: 25.67 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 0.00021 secs_since: 0
                                                           Block cache entry stats(count,size,portion): DataBlock(1374,24.20 MB,7.95949%) FilterBlock(35,665.17 KB,0.213678%) IndexBlock(35,841.36 KB,0.270276%) Misc(1,0.00 KB,0%)
                                                           
                                                           ** File Read Latency Histogram By Level [default] **
Feb 20 10:04:58 np0005625203.localdomain podman[240359]: time="2026-02-20T10:04:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 10:04:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:10:04:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 155894 "" "Go-http-client/1.1"
Feb 20 10:04:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:10:04:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18305 "" "Go-http-client/1.1"
Feb 20 10:04:59 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:04:59.342 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:04:59 np0005625203.localdomain ceph-mon[296066]: pgmap v687: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s wr, 0 op/s
Feb 20 10:04:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 10:05:00 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:05:00 np0005625203.localdomain ceph-mon[296066]: pgmap v688: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s wr, 0 op/s
Feb 20 10:05:00 np0005625203.localdomain sshd[329300]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 10:05:00 np0005625203.localdomain sshd[329300]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 10:05:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:02.509 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:05:02 np0005625203.localdomain ceph-mon[296066]: pgmap v689: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s wr, 0 op/s
Feb 20 10:05:02 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/3731268602' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 10:05:02 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/3731268602' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 10:05:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:03.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:05:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:03.362 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 10:05:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:03.362 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 10:05:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:03.363 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 10:05:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:03.363 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Auditing locally available compute resources for np0005625203.localdomain (node: np0005625203.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 20 10:05:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:03.363 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 10:05:03 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 20 10:05:03 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/309175783' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 10:05:03 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:03.808 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 10:05:04 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:04.005 279640 WARNING nova.virt.libvirt.driver [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 20 10:05:04 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:04.007 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Hypervisor/Node resource view: name=np0005625203.localdomain free_ram=11576MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": 
null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 20 10:05:04 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:04.007 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 10:05:04 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:04.008 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 10:05:04 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:04.070 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 20 10:05:04 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:04.071 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Final resource view: name=np0005625203.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 20 10:05:04 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:04.101 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 10:05:04 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/309175783' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 10:05:04 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 20 10:05:04 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/206249604' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 10:05:04 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:04.546 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 10:05:04 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:04.554 279640 DEBUG nova.compute.provider_tree [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Inventory has not changed in ProviderTree for provider: e5d5157a-2df2-4f51-b5fb-cd2da3a8584e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 20 10:05:04 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:04.575 279640 DEBUG nova.scheduler.client.report [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Inventory has not changed for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 20 10:05:04 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:04.577 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Compute_service record updated for np0005625203.localdomain:np0005625203.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 20 10:05:04 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:04.578 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.570s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 10:05:05 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:05:05 np0005625203.localdomain ceph-mon[296066]: pgmap v690: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s wr, 0 op/s
Feb 20 10:05:05 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/206249604' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 10:05:05 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 10:05:05 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 10:05:05 np0005625203.localdomain podman[329348]: 2026-02-20 10:05:05.890506199 +0000 UTC m=+0.083066602 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Feb 20 10:05:05 np0005625203.localdomain podman[329348]: 2026-02-20 10:05:05.898339131 +0000 UTC m=+0.090899514 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 20 10:05:05 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 10:05:05 np0005625203.localdomain podman[329347]: 2026-02-20 10:05:05.989118561 +0000 UTC m=+0.180796957 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 10:05:06 np0005625203.localdomain podman[329347]: 2026-02-20 10:05:06.025418985 +0000 UTC m=+0.217097441 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Feb 20 10:05:06 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 10:05:06 np0005625203.localdomain ceph-mon[296066]: pgmap v691: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 426 B/s wr, 0 op/s
Feb 20 10:05:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   10:05:07 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 10:05:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 10:05:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   10:05:07 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 10:05:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 10:05:07 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:07.513 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:05:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:05:07.679 161112 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 10:05:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:05:07.679 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 10:05:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:05:07.680 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 10:05:08 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:08.580 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:05:08 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:08.580 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:05:08 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:08.581 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:05:08 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:08.581 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 20 10:05:10 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:05:10 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:10.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:05:10 np0005625203.localdomain ceph-mon[296066]: pgmap v692: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 0 B/s wr, 0 op/s
Feb 20 10:05:11 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:11.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:05:11 np0005625203.localdomain ceph-mon[296066]: pgmap v693: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 0 B/s wr, 0 op/s
Feb 20 10:05:11 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/2880488553' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 10:05:11 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/226919907' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 10:05:12 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:12.516 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:05:12 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:12.518 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:05:12 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:12.518 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 20 10:05:12 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:12.518 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 10:05:12 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:12.538 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:05:12 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:12.539 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 10:05:12 np0005625203.localdomain ceph-mon[296066]: pgmap v694: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail; 0 B/s wr, 0 op/s
Feb 20 10:05:12 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/1236691951' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 10:05:12 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/1551072278' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 10:05:12 np0005625203.localdomain sshd[329394]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 10:05:13 np0005625203.localdomain sshd[329394]: Accepted publickey for zuul from 38.102.83.114 port 57990 ssh2: RSA SHA256:uaxCfdsB9dLyL4znjsLCPDTT6O+6xZzjHRjczJFOA6k
Feb 20 10:05:13 np0005625203.localdomain systemd-logind[759]: New session 72 of user zuul.
Feb 20 10:05:13 np0005625203.localdomain systemd[1]: Started Session 72 of User zuul.
Feb 20 10:05:13 np0005625203.localdomain sshd[329394]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Feb 20 10:05:13 np0005625203.localdomain sudo[329414]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c echo BECOME-SUCCESS-daklorskissxukhrqakldltslxwtrikp ; /usr/bin/python3
Feb 20 10:05:13 np0005625203.localdomain sudo[329414]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 10:05:13 np0005625203.localdomain python3[329416]: ansible-ansible.legacy.command Invoked with _raw_params=subscription-manager unregister _uses_shell=True zuul_log_id=fa163ef9-e89a-9e25-3a25-00000000000c-1-overcloudnovacompute1 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Feb 20 10:05:13 np0005625203.localdomain sudo[329414]: pam_unix(sudo:session): session closed for user root
Feb 20 10:05:14 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 10:05:14 np0005625203.localdomain systemd[1]: tmp-crun.LxX6xS.mount: Deactivated successfully.
Feb 20 10:05:14 np0005625203.localdomain podman[329419]: 2026-02-20 10:05:14.778499611 +0000 UTC m=+0.087799809 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Feb 20 10:05:14 np0005625203.localdomain podman[329419]: 2026-02-20 10:05:14.810495511 +0000 UTC m=+0.119795679 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Feb 20 10:05:14 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 10:05:15 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:05:15 np0005625203.localdomain ceph-mon[296066]: pgmap v695: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:05:15 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:15.337 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:05:15 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:15.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:05:15 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:15.341 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 20 10:05:15 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:15.341 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 20 10:05:15 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:15.365 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 20 10:05:17 np0005625203.localdomain ovn_controller[155241]: 2026-02-20T10:05:17Z|00442|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Feb 20 10:05:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:05:17.206 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:05:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:05:17.207 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:05:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:05:17.207 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:05:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:05:17.207 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:05:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:05:17.208 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:05:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:05:17.208 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:05:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:05:17.208 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:05:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:05:17.208 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:05:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:05:17.208 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:05:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:05:17.208 12 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:05:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:05:17.208 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:05:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:05:17.208 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:05:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:05:17.209 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:05:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:05:17.209 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.iops, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:05:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:05:17.209 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:05:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:05:17.209 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:05:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:05:17.209 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:05:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:05:17.209 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:05:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:05:17.209 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:05:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:05:17.210 12 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:05:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:05:17.210 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:05:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:05:17.210 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:05:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:05:17.210 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:05:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:05:17.210 12 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:05:17 np0005625203.localdomain ceilometer_agent_compute[235709]: 2026-02-20 10:05:17.210 12 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle poll_and_notify /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:193
Feb 20 10:05:17 np0005625203.localdomain ceph-mon[296066]: pgmap v696: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:05:17 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:17.537 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:05:18 np0005625203.localdomain sshd[329394]: pam_unix(sshd:session): session closed for user zuul
Feb 20 10:05:19 np0005625203.localdomain systemd[1]: session-72.scope: Deactivated successfully.
Feb 20 10:05:19 np0005625203.localdomain systemd-logind[759]: Session 72 logged out. Waiting for processes to exit.
Feb 20 10:05:19 np0005625203.localdomain systemd-logind[759]: Removed session 72.
Feb 20 10:05:19 np0005625203.localdomain ceph-mon[296066]: pgmap v697: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:05:20 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:05:21 np0005625203.localdomain ceph-mon[296066]: pgmap v698: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:05:21 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 10:05:21 np0005625203.localdomain podman[329437]: 2026-02-20 10:05:21.770861052 +0000 UTC m=+0.085805808 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_managed=true)
Feb 20 10:05:21 np0005625203.localdomain podman[329437]: 2026-02-20 10:05:21.861857698 +0000 UTC m=+0.176802424 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Feb 20 10:05:21 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 10:05:22 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:22.540 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:05:22 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 10:05:22 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 10:05:22 np0005625203.localdomain podman[329462]: 2026-02-20 10:05:22.746169283 +0000 UTC m=+0.067289124 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Feb 20 10:05:22 np0005625203.localdomain podman[329462]: 2026-02-20 10:05:22.752589461 +0000 UTC m=+0.073709342 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260127, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ceilometer_agent_compute)
Feb 20 10:05:22 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 10:05:22 np0005625203.localdomain podman[329463]: 2026-02-20 10:05:22.804334653 +0000 UTC m=+0.121977847 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, org.opencontainers.image.created=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1770267347, io.openshift.expose-services=, vendor=Red Hat, Inc., managed_by=edpm_ansible, vcs-type=git, architecture=x86_64, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, name=ubi9/ubi-minimal, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc.)
Feb 20 10:05:22 np0005625203.localdomain podman[329463]: 2026-02-20 10:05:22.841267407 +0000 UTC m=+0.158910571 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, version=9.7, build-date=2026-02-05T04:57:10Z, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., release=1770267347, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., distribution-scope=public, architecture=x86_64, com.redhat.component=ubi9-minimal-container, name=ubi9/ubi-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, io.openshift.expose-services=, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Feb 20 10:05:22 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 10:05:23 np0005625203.localdomain ceph-mon[296066]: pgmap v699: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:05:25 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:05:25 np0005625203.localdomain ceph-mon[296066]: pgmap v700: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:05:27 np0005625203.localdomain ceph-mon[296066]: pgmap v701: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:05:27 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:27.543 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:05:27 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:27.545 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:05:27 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:27.545 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 20 10:05:27 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:27.546 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 10:05:27 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:27.558 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:05:27 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:27.558 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 10:05:28 np0005625203.localdomain sshd[329502]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 10:05:28 np0005625203.localdomain sshd[329502]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 10:05:28 np0005625203.localdomain podman[240359]: time="2026-02-20T10:05:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 10:05:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:10:05:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 155894 "" "Go-http-client/1.1"
Feb 20 10:05:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:10:05:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18316 "" "Go-http-client/1.1"
Feb 20 10:05:29 np0005625203.localdomain ceph-mon[296066]: pgmap v702: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:05:30 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:05:31 np0005625203.localdomain sshd[329504]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 10:05:31 np0005625203.localdomain sshd[329504]: Accepted publickey for zuul from 38.102.83.114 port 33462 ssh2: RSA SHA256:uaxCfdsB9dLyL4znjsLCPDTT6O+6xZzjHRjczJFOA6k
Feb 20 10:05:31 np0005625203.localdomain systemd-logind[759]: New session 73 of user zuul.
Feb 20 10:05:31 np0005625203.localdomain systemd[1]: Started Session 73 of User zuul.
Feb 20 10:05:31 np0005625203.localdomain sshd[329504]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Feb 20 10:05:31 np0005625203.localdomain ceph-mon[296066]: pgmap v703: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:05:31 np0005625203.localdomain sudo[329508]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rsync --server --sender -lLogDtprze.LsfxC . /var/log
Feb 20 10:05:31 np0005625203.localdomain sudo[329508]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 10:05:32 np0005625203.localdomain sudo[329508]: pam_unix(sudo:session): session closed for user root
Feb 20 10:05:32 np0005625203.localdomain sshd[329507]: Received disconnect from 38.102.83.114 port 33462:11: disconnected by user
Feb 20 10:05:32 np0005625203.localdomain sshd[329507]: Disconnected from user zuul 38.102.83.114 port 33462
Feb 20 10:05:32 np0005625203.localdomain sshd[329504]: pam_unix(sshd:session): session closed for user zuul
Feb 20 10:05:32 np0005625203.localdomain systemd[1]: session-73.scope: Deactivated successfully.
Feb 20 10:05:32 np0005625203.localdomain systemd-logind[759]: Session 73 logged out. Waiting for processes to exit.
Feb 20 10:05:32 np0005625203.localdomain systemd-logind[759]: Removed session 73.
Feb 20 10:05:32 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:32.559 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:05:32 np0005625203.localdomain sshd[329526]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 10:05:32 np0005625203.localdomain sshd[329526]: Accepted publickey for zuul from 38.102.83.114 port 33474 ssh2: RSA SHA256:uaxCfdsB9dLyL4znjsLCPDTT6O+6xZzjHRjczJFOA6k
Feb 20 10:05:32 np0005625203.localdomain systemd-logind[759]: New session 74 of user zuul.
Feb 20 10:05:32 np0005625203.localdomain systemd[1]: Started Session 74 of User zuul.
Feb 20 10:05:32 np0005625203.localdomain sshd[329526]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Feb 20 10:05:33 np0005625203.localdomain sudo[329530]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rsync --server --sender -lLogDtprze.LsfxC . /etc/containers/networks
Feb 20 10:05:33 np0005625203.localdomain sudo[329530]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 10:05:33 np0005625203.localdomain sudo[329530]: pam_unix(sudo:session): session closed for user root
Feb 20 10:05:33 np0005625203.localdomain sshd[329529]: Received disconnect from 38.102.83.114 port 33474:11: disconnected by user
Feb 20 10:05:33 np0005625203.localdomain sshd[329529]: Disconnected from user zuul 38.102.83.114 port 33474
Feb 20 10:05:33 np0005625203.localdomain sshd[329526]: pam_unix(sshd:session): session closed for user zuul
Feb 20 10:05:33 np0005625203.localdomain systemd[1]: session-74.scope: Deactivated successfully.
Feb 20 10:05:33 np0005625203.localdomain systemd-logind[759]: Session 74 logged out. Waiting for processes to exit.
Feb 20 10:05:33 np0005625203.localdomain systemd-logind[759]: Removed session 74.
Feb 20 10:05:33 np0005625203.localdomain sshd[329548]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 10:05:33 np0005625203.localdomain ceph-mon[296066]: pgmap v704: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:05:33 np0005625203.localdomain sshd[329548]: Accepted publickey for zuul from 38.102.83.114 port 33482 ssh2: RSA SHA256:uaxCfdsB9dLyL4znjsLCPDTT6O+6xZzjHRjczJFOA6k
Feb 20 10:05:33 np0005625203.localdomain systemd-logind[759]: New session 75 of user zuul.
Feb 20 10:05:33 np0005625203.localdomain systemd[1]: Started Session 75 of User zuul.
Feb 20 10:05:33 np0005625203.localdomain sshd[329548]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Feb 20 10:05:33 np0005625203.localdomain sudo[329552]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rsync --server --sender -lLogDtprze.LsfxC . /etc/containers/containers.conf
Feb 20 10:05:33 np0005625203.localdomain sudo[329552]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 10:05:33 np0005625203.localdomain sudo[329552]: pam_unix(sudo:session): session closed for user root
Feb 20 10:05:33 np0005625203.localdomain sshd[329551]: Received disconnect from 38.102.83.114 port 33482:11: disconnected by user
Feb 20 10:05:33 np0005625203.localdomain sshd[329551]: Disconnected from user zuul 38.102.83.114 port 33482
Feb 20 10:05:33 np0005625203.localdomain sshd[329548]: pam_unix(sshd:session): session closed for user zuul
Feb 20 10:05:33 np0005625203.localdomain systemd[1]: session-75.scope: Deactivated successfully.
Feb 20 10:05:33 np0005625203.localdomain systemd-logind[759]: Session 75 logged out. Waiting for processes to exit.
Feb 20 10:05:33 np0005625203.localdomain systemd-logind[759]: Removed session 75.
Feb 20 10:05:33 np0005625203.localdomain sshd[329570]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 10:05:34 np0005625203.localdomain sshd[329570]: Accepted publickey for zuul from 38.102.83.114 port 33488 ssh2: RSA SHA256:uaxCfdsB9dLyL4znjsLCPDTT6O+6xZzjHRjczJFOA6k
Feb 20 10:05:34 np0005625203.localdomain systemd-logind[759]: New session 76 of user zuul.
Feb 20 10:05:34 np0005625203.localdomain systemd[1]: Started Session 76 of User zuul.
Feb 20 10:05:34 np0005625203.localdomain sshd[329570]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Feb 20 10:05:34 np0005625203.localdomain sudo[329574]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rsync --server --sender -lLogDtprze.LsfxC . /etc/ceph
Feb 20 10:05:34 np0005625203.localdomain sudo[329574]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 10:05:34 np0005625203.localdomain sudo[329574]: pam_unix(sudo:session): session closed for user root
Feb 20 10:05:34 np0005625203.localdomain sshd[329573]: Received disconnect from 38.102.83.114 port 33488:11: disconnected by user
Feb 20 10:05:34 np0005625203.localdomain sshd[329573]: Disconnected from user zuul 38.102.83.114 port 33488
Feb 20 10:05:34 np0005625203.localdomain sshd[329570]: pam_unix(sshd:session): session closed for user zuul
Feb 20 10:05:34 np0005625203.localdomain systemd[1]: session-76.scope: Deactivated successfully.
Feb 20 10:05:34 np0005625203.localdomain systemd-logind[759]: Session 76 logged out. Waiting for processes to exit.
Feb 20 10:05:34 np0005625203.localdomain systemd-logind[759]: Removed session 76.
Feb 20 10:05:34 np0005625203.localdomain sshd[329592]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 10:05:34 np0005625203.localdomain sshd[329592]: Accepted publickey for zuul from 38.102.83.114 port 33490 ssh2: RSA SHA256:uaxCfdsB9dLyL4znjsLCPDTT6O+6xZzjHRjczJFOA6k
Feb 20 10:05:34 np0005625203.localdomain systemd-logind[759]: New session 77 of user zuul.
Feb 20 10:05:34 np0005625203.localdomain systemd[1]: Started Session 77 of User zuul.
Feb 20 10:05:34 np0005625203.localdomain sshd[329592]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Feb 20 10:05:34 np0005625203.localdomain sudo[329596]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rsync --server --sender -lLogDtprze.LsfxC . /etc/ci
Feb 20 10:05:34 np0005625203.localdomain sudo[329596]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 10:05:34 np0005625203.localdomain sudo[329596]: pam_unix(sudo:session): session closed for user root
Feb 20 10:05:34 np0005625203.localdomain sshd[329595]: Received disconnect from 38.102.83.114 port 33490:11: disconnected by user
Feb 20 10:05:34 np0005625203.localdomain sshd[329595]: Disconnected from user zuul 38.102.83.114 port 33490
Feb 20 10:05:34 np0005625203.localdomain sshd[329592]: pam_unix(sshd:session): session closed for user zuul
Feb 20 10:05:34 np0005625203.localdomain systemd[1]: session-77.scope: Deactivated successfully.
Feb 20 10:05:34 np0005625203.localdomain systemd-logind[759]: Session 77 logged out. Waiting for processes to exit.
Feb 20 10:05:34 np0005625203.localdomain systemd-logind[759]: Removed session 77.
Feb 20 10:05:35 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:05:35 np0005625203.localdomain sshd[329614]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 10:05:35 np0005625203.localdomain sshd[329614]: Accepted publickey for zuul from 38.102.83.114 port 33506 ssh2: RSA SHA256:uaxCfdsB9dLyL4znjsLCPDTT6O+6xZzjHRjczJFOA6k
Feb 20 10:05:35 np0005625203.localdomain systemd-logind[759]: New session 78 of user zuul.
Feb 20 10:05:35 np0005625203.localdomain systemd[1]: Started Session 78 of User zuul.
Feb 20 10:05:35 np0005625203.localdomain sshd[329614]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Feb 20 10:05:35 np0005625203.localdomain sudo[329618]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rsync --server --sender -lLogDtprze.LsfxC . /etc/yum.conf
Feb 20 10:05:35 np0005625203.localdomain sudo[329618]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 10:05:35 np0005625203.localdomain sudo[329618]: pam_unix(sudo:session): session closed for user root
Feb 20 10:05:35 np0005625203.localdomain sshd[329617]: Received disconnect from 38.102.83.114 port 33506:11: disconnected by user
Feb 20 10:05:35 np0005625203.localdomain sshd[329617]: Disconnected from user zuul 38.102.83.114 port 33506
Feb 20 10:05:35 np0005625203.localdomain sshd[329614]: pam_unix(sshd:session): session closed for user zuul
Feb 20 10:05:35 np0005625203.localdomain systemd[1]: session-78.scope: Deactivated successfully.
Feb 20 10:05:35 np0005625203.localdomain systemd-logind[759]: Session 78 logged out. Waiting for processes to exit.
Feb 20 10:05:35 np0005625203.localdomain systemd-logind[759]: Removed session 78.
Feb 20 10:05:35 np0005625203.localdomain ceph-mon[296066]: pgmap v705: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:05:35 np0005625203.localdomain sshd[329636]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 10:05:35 np0005625203.localdomain sshd[329636]: Accepted publickey for zuul from 38.102.83.114 port 49702 ssh2: RSA SHA256:uaxCfdsB9dLyL4znjsLCPDTT6O+6xZzjHRjczJFOA6k
Feb 20 10:05:35 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 10:05:35 np0005625203.localdomain systemd-logind[759]: New session 79 of user zuul.
Feb 20 10:05:35 np0005625203.localdomain systemd[1]: Started Session 79 of User zuul.
Feb 20 10:05:36 np0005625203.localdomain sshd[329636]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Feb 20 10:05:36 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 10:05:36 np0005625203.localdomain podman[329639]: 2026-02-20 10:05:36.090999707 +0000 UTC m=+0.095325012 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 20 10:05:36 np0005625203.localdomain podman[329639]: 2026-02-20 10:05:36.108262641 +0000 UTC m=+0.112587916 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Feb 20 10:05:36 np0005625203.localdomain sudo[329653]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rsync --server --sender -lLogDtprze.LsfxC . /etc/yum.repos.d
Feb 20 10:05:36 np0005625203.localdomain sudo[329653]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 10:05:36 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 10:05:36 np0005625203.localdomain sudo[329653]: pam_unix(sudo:session): session closed for user root
Feb 20 10:05:36 np0005625203.localdomain sshd[329645]: Received disconnect from 38.102.83.114 port 49702:11: disconnected by user
Feb 20 10:05:36 np0005625203.localdomain sshd[329645]: Disconnected from user zuul 38.102.83.114 port 49702
Feb 20 10:05:36 np0005625203.localdomain sshd[329636]: pam_unix(sshd:session): session closed for user zuul
Feb 20 10:05:36 np0005625203.localdomain systemd[1]: session-79.scope: Deactivated successfully.
Feb 20 10:05:36 np0005625203.localdomain systemd-logind[759]: Session 79 logged out. Waiting for processes to exit.
Feb 20 10:05:36 np0005625203.localdomain systemd-logind[759]: Removed session 79.
Feb 20 10:05:36 np0005625203.localdomain podman[329674]: 2026-02-20 10:05:36.182565371 +0000 UTC m=+0.082494215 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 20 10:05:36 np0005625203.localdomain podman[329674]: 2026-02-20 10:05:36.195582764 +0000 UTC m=+0.095511618 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Feb 20 10:05:36 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 10:05:36 np0005625203.localdomain sshd[329704]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 10:05:36 np0005625203.localdomain sshd[329704]: Accepted publickey for zuul from 38.102.83.114 port 49718 ssh2: RSA SHA256:uaxCfdsB9dLyL4znjsLCPDTT6O+6xZzjHRjczJFOA6k
Feb 20 10:05:36 np0005625203.localdomain systemd-logind[759]: New session 80 of user zuul.
Feb 20 10:05:36 np0005625203.localdomain systemd[1]: Started Session 80 of User zuul.
Feb 20 10:05:36 np0005625203.localdomain sshd[329704]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Feb 20 10:05:36 np0005625203.localdomain ceph-mon[296066]: pgmap v706: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:05:36 np0005625203.localdomain sudo[329708]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rsync --server --sender -lLogDtprze.LsfxC . /etc/os-net-config
Feb 20 10:05:36 np0005625203.localdomain sudo[329708]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 10:05:36 np0005625203.localdomain sudo[329708]: pam_unix(sudo:session): session closed for user root
Feb 20 10:05:36 np0005625203.localdomain sshd[329707]: Received disconnect from 38.102.83.114 port 49718:11: disconnected by user
Feb 20 10:05:36 np0005625203.localdomain sshd[329707]: Disconnected from user zuul 38.102.83.114 port 49718
Feb 20 10:05:36 np0005625203.localdomain sshd[329704]: pam_unix(sshd:session): session closed for user zuul
Feb 20 10:05:36 np0005625203.localdomain systemd[1]: session-80.scope: Deactivated successfully.
Feb 20 10:05:36 np0005625203.localdomain systemd-logind[759]: Session 80 logged out. Waiting for processes to exit.
Feb 20 10:05:36 np0005625203.localdomain systemd-logind[759]: Removed session 80.
Feb 20 10:05:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   10:05:37 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 10:05:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 10:05:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   10:05:37 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 10:05:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 10:05:37 np0005625203.localdomain sshd[329726]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 10:05:37 np0005625203.localdomain sshd[329726]: Accepted publickey for zuul from 38.102.83.114 port 49724 ssh2: RSA SHA256:uaxCfdsB9dLyL4znjsLCPDTT6O+6xZzjHRjczJFOA6k
Feb 20 10:05:37 np0005625203.localdomain systemd-logind[759]: New session 81 of user zuul.
Feb 20 10:05:37 np0005625203.localdomain systemd[1]: Started Session 81 of User zuul.
Feb 20 10:05:37 np0005625203.localdomain sshd[329726]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Feb 20 10:05:37 np0005625203.localdomain sudo[329730]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rsync --server --sender -lLogDtprze.LsfxC . /home/zuul/ansible_hostname
Feb 20 10:05:37 np0005625203.localdomain sudo[329730]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 10:05:37 np0005625203.localdomain sudo[329730]: pam_unix(sudo:session): session closed for user root
Feb 20 10:05:37 np0005625203.localdomain sshd[329729]: Received disconnect from 38.102.83.114 port 49724:11: disconnected by user
Feb 20 10:05:37 np0005625203.localdomain sshd[329729]: Disconnected from user zuul 38.102.83.114 port 49724
Feb 20 10:05:37 np0005625203.localdomain sshd[329726]: pam_unix(sshd:session): session closed for user zuul
Feb 20 10:05:37 np0005625203.localdomain systemd[1]: session-81.scope: Deactivated successfully.
Feb 20 10:05:37 np0005625203.localdomain systemd-logind[759]: Session 81 logged out. Waiting for processes to exit.
Feb 20 10:05:37 np0005625203.localdomain systemd-logind[759]: Removed session 81.
Feb 20 10:05:37 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:37.563 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:05:37 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:37.565 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:05:37 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:37.566 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 20 10:05:37 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:37.566 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 10:05:37 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:37.595 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:05:37 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:37.596 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 10:05:39 np0005625203.localdomain ceph-mon[296066]: pgmap v707: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:05:40 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:05:41 np0005625203.localdomain ceph-mon[296066]: pgmap v708: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:05:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:42.596 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4996-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:05:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:42.598 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:05:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:42.598 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 20 10:05:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:42.599 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 10:05:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:42.625 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:05:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:42.625 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 10:05:43 np0005625203.localdomain ceph-mon[296066]: pgmap v709: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:05:45 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:05:45 np0005625203.localdomain ceph-mon[296066]: pgmap v710: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:05:45 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 10:05:45 np0005625203.localdomain systemd[1]: tmp-crun.mbxEUg.mount: Deactivated successfully.
Feb 20 10:05:45 np0005625203.localdomain podman[329748]: 2026-02-20 10:05:45.766749284 +0000 UTC m=+0.084160756 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Feb 20 10:05:45 np0005625203.localdomain podman[329748]: 2026-02-20 10:05:45.772018397 +0000 UTC m=+0.089429849 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 10:05:45 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 10:05:47 np0005625203.localdomain ceph-mon[296066]: pgmap v711: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:05:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:47.626 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:05:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:47.628 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:05:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:47.628 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 20 10:05:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:47.628 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 10:05:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:47.635 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:05:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:47.636 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 10:05:48 np0005625203.localdomain sshd[329766]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 10:05:48 np0005625203.localdomain sshd[329766]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 10:05:49 np0005625203.localdomain ceph-mon[296066]: pgmap v712: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:05:49 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #64. Immutable memtables: 0.
Feb 20 10:05:49 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:05:49.458964) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Feb 20 10:05:49 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 64
Feb 20 10:05:49 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581949459005, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 802, "num_deletes": 251, "total_data_size": 824370, "memory_usage": 839816, "flush_reason": "Manual Compaction"}
Feb 20 10:05:49 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #65: started
Feb 20 10:05:49 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581949464499, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 65, "file_size": 537771, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37339, "largest_seqno": 38136, "table_properties": {"data_size": 534261, "index_size": 1365, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8376, "raw_average_key_size": 19, "raw_value_size": 527142, "raw_average_value_size": 1258, "num_data_blocks": 61, "num_entries": 419, "num_filter_entries": 419, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771581898, "oldest_key_time": 1771581898, "file_creation_time": 1771581949, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d7091244-ec7b-4a4c-98bf-27480b1bb7f4", "db_session_id": "XSHOJ401GNN3F43CMPC3", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Feb 20 10:05:49 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 5586 microseconds, and 2557 cpu microseconds.
Feb 20 10:05:49 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 20 10:05:49 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:05:49.464546) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #65: 537771 bytes OK
Feb 20 10:05:49 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:05:49.464570) [db/memtable_list.cc:519] [default] Level-0 commit table #65 started
Feb 20 10:05:49 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:05:49.466389) [db/memtable_list.cc:722] [default] Level-0 commit table #65: memtable #1 done
Feb 20 10:05:49 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:05:49.466403) EVENT_LOG_v1 {"time_micros": 1771581949466399, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Feb 20 10:05:49 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:05:49.466422) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Feb 20 10:05:49 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 820161, prev total WAL file size 820161, number of live WAL files 2.
Feb 20 10:05:49 np0005625203.localdomain ceph-mon[296066]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000061.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 10:05:49 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:05:49.466948) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003132383031' seq:72057594037927935, type:22 .. '7061786F73003133303533' seq:0, type:0; will stop at (end)
Feb 20 10:05:49 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Feb 20 10:05:49 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [65(525KB)], [63(19MB)]
Feb 20 10:05:49 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581949467015, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [65], "files_L6": [63], "score": -1, "input_data_size": 20947850, "oldest_snapshot_seqno": -1}
Feb 20 10:05:49 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #66: 14230 keys, 19536568 bytes, temperature: kUnknown
Feb 20 10:05:49 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581949555249, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 66, "file_size": 19536568, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 19454275, "index_size": 45648, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35589, "raw_key_size": 383116, "raw_average_key_size": 26, "raw_value_size": 19211480, "raw_average_value_size": 1350, "num_data_blocks": 1689, "num_entries": 14230, "num_filter_entries": 14230, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1771580698, "oldest_key_time": 0, "file_creation_time": 1771581949, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d7091244-ec7b-4a4c-98bf-27480b1bb7f4", "db_session_id": "XSHOJ401GNN3F43CMPC3", "orig_file_number": 66, "seqno_to_time_mapping": "N/A"}}
Feb 20 10:05:49 np0005625203.localdomain ceph-mon[296066]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Feb 20 10:05:49 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:05:49.555921) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 19536568 bytes
Feb 20 10:05:49 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:05:49.557672) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 236.4 rd, 220.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 19.5 +0.0 blob) out(18.6 +0.0 blob), read-write-amplify(75.3) write-amplify(36.3) OK, records in: 14747, records dropped: 517 output_compression: NoCompression
Feb 20 10:05:49 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:05:49.557709) EVENT_LOG_v1 {"time_micros": 1771581949557692, "job": 38, "event": "compaction_finished", "compaction_time_micros": 88596, "compaction_time_cpu_micros": 33778, "output_level": 6, "num_output_files": 1, "total_output_size": 19536568, "num_input_records": 14747, "num_output_records": 14230, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Feb 20 10:05:49 np0005625203.localdomain ceph-mon[296066]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 10:05:49 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581949558612, "job": 38, "event": "table_file_deletion", "file_number": 65}
Feb 20 10:05:49 np0005625203.localdomain ceph-mon[296066]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-np0005625203/store.db/000063.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Feb 20 10:05:49 np0005625203.localdomain ceph-mon[296066]: rocksdb: EVENT_LOG_v1 {"time_micros": 1771581949563002, "job": 38, "event": "table_file_deletion", "file_number": 63}
Feb 20 10:05:49 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:05:49.466797) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 10:05:49 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:05:49.563264) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 10:05:49 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:05:49.563271) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 10:05:49 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:05:49.563274) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 10:05:49 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:05:49.563277) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 10:05:49 np0005625203.localdomain ceph-mon[296066]: rocksdb: (Original Log Time 2026/02/20-10:05:49.563280) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Feb 20 10:05:50 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:05:51 np0005625203.localdomain ceph-mon[296066]: pgmap v713: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:05:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:52.636 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:05:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:52.638 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:05:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:52.638 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 20 10:05:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:52.638 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 10:05:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:52.639 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:05:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:52.639 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 10:05:52 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 10:05:52 np0005625203.localdomain podman[329768]: 2026-02-20 10:05:52.766155553 +0000 UTC m=+0.076221641 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Feb 20 10:05:52 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 10:05:52 np0005625203.localdomain podman[329768]: 2026-02-20 10:05:52.831276968 +0000 UTC m=+0.141343056 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 10:05:52 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 10:05:52 np0005625203.localdomain podman[329793]: 2026-02-20 10:05:52.91501867 +0000 UTC m=+0.079514402 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_id=ceilometer_agent_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.3)
Feb 20 10:05:52 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 10:05:52 np0005625203.localdomain podman[329793]: 2026-02-20 10:05:52.934148323 +0000 UTC m=+0.098644065 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ceilometer_agent_compute)
Feb 20 10:05:52 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 10:05:53 np0005625203.localdomain systemd[1]: tmp-crun.3SRlVw.mount: Deactivated successfully.
Feb 20 10:05:53 np0005625203.localdomain podman[329812]: 2026-02-20 10:05:53.019685011 +0000 UTC m=+0.080437451 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, release=1770267347, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, distribution-scope=public, org.opencontainers.image.created=2026-02-05T04:57:10Z, vendor=Red Hat, Inc., io.buildah.version=1.33.7, architecture=x86_64, managed_by=edpm_ansible, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, vcs-type=git, config_id=openstack_network_exporter, build-date=2026-02-05T04:57:10Z, name=ubi9/ubi-minimal, io.openshift.tags=minimal rhel9, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, com.redhat.component=ubi9-minimal-container)
Feb 20 10:05:53 np0005625203.localdomain podman[329812]: 2026-02-20 10:05:53.036224433 +0000 UTC m=+0.096976873 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, build-date=2026-02-05T04:57:10Z, version=9.7, release=1770267347, maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.expose-services=, cpe=cpe:/a:redhat:enterprise_linux:9::appstream)
Feb 20 10:05:53 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 10:05:53 np0005625203.localdomain ceph-mon[296066]: pgmap v714: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:05:54 np0005625203.localdomain sudo[329832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 10:05:54 np0005625203.localdomain sudo[329832]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 10:05:54 np0005625203.localdomain sudo[329832]: pam_unix(sudo:session): session closed for user root
Feb 20 10:05:54 np0005625203.localdomain sudo[329850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 10:05:54 np0005625203.localdomain sudo[329850]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 10:05:55 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:05:55 np0005625203.localdomain ceph-mon[296066]: pgmap v715: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:05:55 np0005625203.localdomain sudo[329850]: pam_unix(sudo:session): session closed for user root
Feb 20 10:05:55 np0005625203.localdomain sudo[329900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 10:05:55 np0005625203.localdomain sudo[329900]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 10:05:55 np0005625203.localdomain sudo[329900]: pam_unix(sudo:session): session closed for user root
Feb 20 10:05:56 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 10:05:56 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 10:05:56 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 10:05:56 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 10:05:57 np0005625203.localdomain ceph-mon[296066]: pgmap v716: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:05:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:57.640 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:05:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:57.642 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:05:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:57.643 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 20 10:05:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:57.643 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 10:05:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:57.659 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:05:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:57.659 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 10:05:58 np0005625203.localdomain podman[240359]: time="2026-02-20T10:05:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 10:05:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:10:05:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 155894 "" "Go-http-client/1.1"
Feb 20 10:05:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:10:05:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18308 "" "Go-http-client/1.1"
Feb 20 10:05:59 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:05:59.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:05:59 np0005625203.localdomain ceph-mon[296066]: pgmap v717: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:05:59 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 10:06:00 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:06:00 np0005625203.localdomain ceph-mon[296066]: pgmap v718: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:06:02 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Feb 20 10:06:02 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2438891474' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 10:06:02 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Feb 20 10:06:02 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.32:0/2438891474' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 10:06:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:02.660 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:06:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:02.662 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:06:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:02.662 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 20 10:06:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:02.663 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 10:06:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:02.665 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:06:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:02.665 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 10:06:02 np0005625203.localdomain ceph-mon[296066]: pgmap v719: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:06:02 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/2438891474' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 10:06:02 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/2438891474' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 10:06:04 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:04.922 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:06:05 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:06:05 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:05.344 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:06:05 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:05.382 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 10:06:05 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:05.383 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 10:06:05 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:05.383 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 10:06:05 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:05.383 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Auditing locally available compute resources for np0005625203.localdomain (node: np0005625203.localdomain) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Feb 20 10:06:05 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:05.384 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 10:06:05 np0005625203.localdomain ceph-mon[296066]: pgmap v720: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:06:06 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 20 10:06:06 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3751615180' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 10:06:06 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:06.183 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.799s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 10:06:06 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:06.359 279640 WARNING nova.virt.libvirt.driver [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Feb 20 10:06:06 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:06.360 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Hypervisor/Node resource view: name=np0005625203.localdomain free_ram=11578MB free_disk=41.836978912353516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Feb 20 10:06:06 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:06.361 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 10:06:06 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:06.361 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 10:06:06 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:06.430 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Feb 20 10:06:06 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:06.430 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Final resource view: name=np0005625203.localdomain phys_ram=15738MB used_ram=512MB phys_disk=41GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Feb 20 10:06:06 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:06.455 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Feb 20 10:06:06 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 10:06:06 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 10:06:06 np0005625203.localdomain systemd[1]: tmp-crun.m4TkrB.mount: Deactivated successfully.
Feb 20 10:06:06 np0005625203.localdomain podman[329960]: 2026-02-20 10:06:06.836925596 +0000 UTC m=+0.145958670 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 20 10:06:06 np0005625203.localdomain podman[329960]: 2026-02-20 10:06:06.845409038 +0000 UTC m=+0.154442062 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 20 10:06:06 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 10:06:06 np0005625203.localdomain podman[329961]: 2026-02-20 10:06:06.803580373 +0000 UTC m=+0.110710907 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Feb 20 10:06:06 np0005625203.localdomain ceph-mon[296066]: pgmap v721: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:06:06 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/3751615180' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 10:06:06 np0005625203.localdomain podman[329961]: 2026-02-20 10:06:06.891387221 +0000 UTC m=+0.198517735 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Feb 20 10:06:06 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 10:06:06 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Feb 20 10:06:06 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3492603920' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 10:06:06 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:06.931 279640 DEBUG oslo_concurrency.processutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Feb 20 10:06:06 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:06.939 279640 DEBUG nova.compute.provider_tree [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Inventory has not changed in ProviderTree for provider: e5d5157a-2df2-4f51-b5fb-cd2da3a8584e update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Feb 20 10:06:06 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:06.955 279640 DEBUG nova.scheduler.client.report [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Inventory has not changed for provider e5d5157a-2df2-4f51-b5fb-cd2da3a8584e based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0}, 'MEMORY_MB': {'total': 15738, 'reserved': 512, 'min_unit': 1, 'max_unit': 15738, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 41, 'reserved': 1, 'min_unit': 1, 'max_unit': 41, 'step_size': 1, 'allocation_ratio': 1.0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Feb 20 10:06:06 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:06.958 279640 DEBUG nova.compute.resource_tracker [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Compute_service record updated for np0005625203.localdomain:np0005625203.localdomain _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Feb 20 10:06:06 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:06.958 279640 DEBUG oslo_concurrency.lockutils [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 10:06:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   10:06:07 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 10:06:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 10:06:07 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   10:06:07 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 10:06:07 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 10:06:07 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:07.666 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:06:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:06:07.680 161112 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Feb 20 10:06:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:06:07.680 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Feb 20 10:06:07 np0005625203.localdomain ovn_metadata_agent[161107]: 2026-02-20 10:06:07.680 161112 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Feb 20 10:06:07 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/3492603920' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 10:06:09 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:09.342 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:06:09 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:09.343 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:06:09 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:09.344 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Feb 20 10:06:09 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:09.361 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Feb 20 10:06:09 np0005625203.localdomain ceph-mon[296066]: pgmap v722: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:06:10 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:06:10 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:10.359 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:06:10 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:10.360 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:06:10 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:10.360 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Feb 20 10:06:10 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/2553218350' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 10:06:11 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:11.342 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:06:11 np0005625203.localdomain ceph-mon[296066]: pgmap v723: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:06:12 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/599090075' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 10:06:12 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:12.670 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:06:12 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:12.671 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:06:12 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:12.672 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 20 10:06:12 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:12.672 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 10:06:12 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:12.693 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:06:12 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:12.694 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 10:06:13 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:13.342 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:06:13 np0005625203.localdomain ceph-mon[296066]: pgmap v724: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:06:13 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/2627620429' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 10:06:14 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/1462323746' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Feb 20 10:06:15 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:06:15 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:15.337 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:06:15 np0005625203.localdomain ceph-mon[296066]: pgmap v725: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:06:16 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 10:06:16 np0005625203.localdomain podman[330005]: 2026-02-20 10:06:16.763309373 +0000 UTC m=+0.083744154 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Feb 20 10:06:16 np0005625203.localdomain podman[330005]: 2026-02-20 10:06:16.772333453 +0000 UTC m=+0.092768234 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Feb 20 10:06:16 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 10:06:17 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:17.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:06:17 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:17.341 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Feb 20 10:06:17 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:17.342 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Feb 20 10:06:17 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:17.358 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Feb 20 10:06:17 np0005625203.localdomain ceph-mon[296066]: pgmap v726: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:06:17 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:17.694 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:06:17 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:17.697 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:06:18 np0005625203.localdomain sshd[330024]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 10:06:19 np0005625203.localdomain sshd[330024]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 10:06:19 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:19.354 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:06:19 np0005625203.localdomain ceph-mon[296066]: pgmap v727: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:06:20 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:06:21 np0005625203.localdomain ceph-mon[296066]: pgmap v728: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:06:22 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:22.730 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:06:22 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:22.732 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:06:22 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:22.732 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5034 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 20 10:06:22 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:22.732 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 10:06:22 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:22.733 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 10:06:22 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:22.735 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:06:23 np0005625203.localdomain ceph-mon[296066]: pgmap v729: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:06:23 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 10:06:23 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 10:06:23 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 10:06:23 np0005625203.localdomain podman[330027]: 2026-02-20 10:06:23.769202834 +0000 UTC m=+0.081895916 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, distribution-scope=public, io.openshift.tags=minimal rhel9, org.opencontainers.image.created=2026-02-05T04:57:10Z, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.openshift.expose-services=, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2026-02-05T04:57:10Z, vcs-type=git, architecture=x86_64, name=ubi9/ubi-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.7, managed_by=edpm_ansible, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, container_name=openstack_network_exporter)
Feb 20 10:06:23 np0005625203.localdomain podman[330027]: 2026-02-20 10:06:23.7842553 +0000 UTC m=+0.096948382 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, build-date=2026-02-05T04:57:10Z, distribution-scope=public, io.openshift.tags=minimal rhel9, release=1770267347, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64, version=9.7, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, maintainer=Red Hat, Inc., name=ubi9/ubi-minimal, org.opencontainers.image.created=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Feb 20 10:06:23 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 10:06:23 np0005625203.localdomain podman[330026]: 2026-02-20 10:06:23.8730927 +0000 UTC m=+0.185398690 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 10:06:23 np0005625203.localdomain podman[330026]: 2026-02-20 10:06:23.913446239 +0000 UTC m=+0.225752259 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20260127, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, 
org.label-schema.schema-version=1.0)
Feb 20 10:06:23 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 10:06:23 np0005625203.localdomain podman[330028]: 2026-02-20 10:06:23.922719516 +0000 UTC m=+0.232406625 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.schema-version=1.0)
Feb 20 10:06:23 np0005625203.localdomain podman[330028]: 2026-02-20 10:06:23.9977881 +0000 UTC m=+0.307475239 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260127, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Feb 20 10:06:24 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 10:06:24 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:24.341 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:06:25 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:06:25 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:25.363 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:06:25 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:25.364 279640 DEBUG nova.compute.manager [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Feb 20 10:06:25 np0005625203.localdomain ceph-mon[296066]: pgmap v730: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:06:25 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:25.927 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:06:26 np0005625203.localdomain ceph-mon[296066]: pgmap v731: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:06:27 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:27.735 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:06:28 np0005625203.localdomain podman[240359]: time="2026-02-20T10:06:28Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 10:06:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:10:06:28 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 155894 "" "Go-http-client/1.1"
Feb 20 10:06:29 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:10:06:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18304 "" "Go-http-client/1.1"
Feb 20 10:06:29 np0005625203.localdomain ceph-mon[296066]: pgmap v732: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:06:30 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:06:31 np0005625203.localdomain ceph-mon[296066]: pgmap v733: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:06:32 np0005625203.localdomain sshd[330090]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 10:06:32 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:32.737 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:06:32 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:32.738 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:06:32 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:32.739 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 20 10:06:32 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:32.739 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 10:06:32 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:32.739 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 10:06:32 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:32.741 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:06:33 np0005625203.localdomain ceph-mon[296066]: pgmap v734: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:06:33 np0005625203.localdomain sshd[330090]: fatal: mm_answer_sign: sign: error in libcrypto
Feb 20 10:06:35 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:06:35 np0005625203.localdomain ceph-mon[296066]: pgmap v735: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:06:36 np0005625203.localdomain sshd[330092]: main: sshd: ssh-rsa algorithm is disabled
Feb 20 10:06:36 np0005625203.localdomain sshd[330092]: Accepted publickey for zuul from 192.168.122.10 port 38234 ssh2: RSA SHA256:uaxCfdsB9dLyL4znjsLCPDTT6O+6xZzjHRjczJFOA6k
Feb 20 10:06:36 np0005625203.localdomain systemd-logind[759]: New session 82 of user zuul.
Feb 20 10:06:36 np0005625203.localdomain systemd[1]: Started Session 82 of User zuul.
Feb 20 10:06:36 np0005625203.localdomain sshd[330092]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by (uid=0)
Feb 20 10:06:36 np0005625203.localdomain sudo[330096]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt
Feb 20 10:06:36 np0005625203.localdomain sudo[330096]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
Feb 20 10:06:36 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.
Feb 20 10:06:36 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.
Feb 20 10:06:37 np0005625203.localdomain podman[330116]: 2026-02-20 10:06:37.016413764 +0000 UTC m=+0.080539464 container health_status 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, health_status=healthy, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 20 10:06:37 np0005625203.localdomain podman[330116]: 2026-02-20 10:06:37.053402009 +0000 UTC m=+0.117527669 container exec_died 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72 (image=quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c, name=node_exporter, config_data={'command': ['--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl', '--path.rootfs=/rootfs'], 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter@sha256:39c642b2b337e38c18e80266fb14383754178202f40103646337722a594d984c', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/:/rootfs:ro', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Feb 20 10:06:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   10:06:37 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Feb 20 10:06:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 10:06:37 np0005625203.localdomain openstack_network_exporter[242811]: ERROR   10:06:37 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Feb 20 10:06:37 np0005625203.localdomain openstack_network_exporter[242811]: 
Feb 20 10:06:37 np0005625203.localdomain systemd[1]: 8a91a8ed4db1099d33e1a27eda1af6e156cdd62cd75add0b7a4bb46b4d76fa72.service: Deactivated successfully.
Feb 20 10:06:37 np0005625203.localdomain systemd[1]: tmp-crun.u3zrLh.mount: Deactivated successfully.
Feb 20 10:06:37 np0005625203.localdomain podman[330114]: 2026-02-20 10:06:37.084932765 +0000 UTC m=+0.150706266 container health_status 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, health_status=healthy, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Feb 20 10:06:37 np0005625203.localdomain podman[330114]: 2026-02-20 10:06:37.093203531 +0000 UTC m=+0.158977022 container exec_died 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d (image=quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd, name=podman_exporter, managed_by=edpm_ansible, config_data={'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter@sha256:d339ba049bbd1adccb795962bf163f5b22fd84dea865d88b9eb525e46247d6bd', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Feb 20 10:06:37 np0005625203.localdomain systemd[1]: 408dafc33fe313e0228046c24f2ed41d70c6026e02353eb981b866591f35615d.service: Deactivated successfully.
Feb 20 10:06:37 np0005625203.localdomain ceph-mon[296066]: pgmap v736: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:06:37 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:37.742 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:06:39 np0005625203.localdomain ceph-mon[296066]: pgmap v737: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:06:40 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:06:40 np0005625203.localdomain ceph-mon[296066]: from='client.58939 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 10:06:40 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "status"} v 0)
Feb 20 10:06:40 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3194755779' entity='client.admin' cmd={"prefix": "status"} : dispatch
Feb 20 10:06:41 np0005625203.localdomain ceph-mon[296066]: from='client.49767 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 10:06:41 np0005625203.localdomain ceph-mon[296066]: from='client.98600 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 10:06:41 np0005625203.localdomain ceph-mon[296066]: pgmap v738: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:06:41 np0005625203.localdomain ceph-mon[296066]: from='client.58945 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 10:06:41 np0005625203.localdomain ceph-mon[296066]: from='client.49773 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 10:06:41 np0005625203.localdomain ceph-mon[296066]: from='client.98606 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 10:06:41 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/3194755779' entity='client.admin' cmd={"prefix": "status"} : dispatch
Feb 20 10:06:41 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/391173645' entity='client.admin' cmd={"prefix": "status"} : dispatch
Feb 20 10:06:41 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/2111544624' entity='client.admin' cmd={"prefix": "status"} : dispatch
Feb 20 10:06:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:42.743 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:06:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:42.746 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:06:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:42.746 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 20 10:06:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:42.746 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 10:06:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:42.762 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:06:42 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:42.763 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 10:06:42 np0005625203.localdomain ovs-vsctl[330390]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Feb 20 10:06:43 np0005625203.localdomain ceph-mon[296066]: pgmap v739: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:06:43 np0005625203.localdomain virtqemud[228198]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Feb 20 10:06:43 np0005625203.localdomain virtqemud[228198]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Feb 20 10:06:43 np0005625203.localdomain virtqemud[228198]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Feb 20 10:06:43 np0005625203.localdomain systemd[1]: efi.automount: Got automount request for /efi, triggered by 330541 (lsinitrd)
Feb 20 10:06:43 np0005625203.localdomain systemd[1]: Mounting EFI System Partition Automount...
Feb 20 10:06:43 np0005625203.localdomain systemd[1]: Mounted EFI System Partition Automount.
Feb 20 10:06:44 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk asok_command: cache status {prefix=cache status} (starting...)
Feb 20 10:06:44 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk asok_command: client ls {prefix=client ls} (starting...)
Feb 20 10:06:44 np0005625203.localdomain lvm[330624]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Feb 20 10:06:44 np0005625203.localdomain lvm[330624]: VG ceph_vg0 finished
Feb 20 10:06:44 np0005625203.localdomain lvm[330628]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Feb 20 10:06:44 np0005625203.localdomain lvm[330628]: VG ceph_vg1 finished
Feb 20 10:06:44 np0005625203.localdomain ceph-mon[296066]: pgmap v740: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:06:44 np0005625203.localdomain ceph-mon[296066]: from='client.58963 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 10:06:44 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk asok_command: damage ls {prefix=damage ls} (starting...)
Feb 20 10:06:44 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk asok_command: dump loads {prefix=dump loads} (starting...)
Feb 20 10:06:45 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:06:45 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Feb 20 10:06:45 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Feb 20 10:06:45 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "report"} v 0)
Feb 20 10:06:45 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/198603473' entity='client.admin' cmd={"prefix": "report"} : dispatch
Feb 20 10:06:45 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Feb 20 10:06:45 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Feb 20 10:06:45 np0005625203.localdomain ceph-mon[296066]: from='client.49785 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 10:06:45 np0005625203.localdomain ceph-mon[296066]: from='client.58969 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 10:06:45 np0005625203.localdomain ceph-mon[296066]: from='client.49791 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 10:06:45 np0005625203.localdomain ceph-mon[296066]: from='client.98618 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 10:06:45 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/198603473' entity='client.admin' cmd={"prefix": "report"} : dispatch
Feb 20 10:06:45 np0005625203.localdomain ceph-mon[296066]: from='client.? ' entity='client.admin' cmd={"prefix": "report"} : dispatch
Feb 20 10:06:45 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/1730588979' entity='client.admin' cmd={"prefix": "report"} : dispatch
Feb 20 10:06:45 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Feb 20 10:06:45 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Feb 20 10:06:45 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/252892574' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 10:06:45 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk asok_command: get subtrees {prefix=get subtrees} (starting...)
Feb 20 10:06:46 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Feb 20 10:06:46 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/639021198' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Feb 20 10:06:46 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "config log"} v 0)
Feb 20 10:06:46 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/932176662' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Feb 20 10:06:46 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk asok_command: ops {prefix=ops} (starting...)
Feb 20 10:06:46 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mgr dump"} v 0)
Feb 20 10:06:46 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3139561945' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Feb 20 10:06:46 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "config-key dump"} v 0)
Feb 20 10:06:46 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2483575500' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Feb 20 10:06:46 np0005625203.localdomain ceph-mon[296066]: from='client.98627 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 10:06:46 np0005625203.localdomain ceph-mon[296066]: from='client.58990 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 10:06:46 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/252892574' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 10:06:46 np0005625203.localdomain ceph-mon[296066]: pgmap v741: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:06:46 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/2484470455' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 10:06:46 np0005625203.localdomain ceph-mon[296066]: from='client.49812 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 10:06:46 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/639021198' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Feb 20 10:06:46 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/932176662' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Feb 20 10:06:46 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/2593553930' entity='client.admin' cmd={"prefix": "report"} : dispatch
Feb 20 10:06:46 np0005625203.localdomain ceph-mon[296066]: from='client.? ' entity='client.admin' cmd={"prefix": "report"} : dispatch
Feb 20 10:06:46 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/2366589297' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Feb 20 10:06:46 np0005625203.localdomain ceph-mon[296066]: from='client.98651 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 10:06:46 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/26432637' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Feb 20 10:06:46 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/3139561945' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Feb 20 10:06:46 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/2483575500' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Feb 20 10:06:46 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/1502005148' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 10:06:46 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk asok_command: session ls {prefix=session ls} (starting...)
Feb 20 10:06:46 np0005625203.localdomain ceph-mds[282126]: mds.mds.np0005625203.zsrwgk asok_command: status {prefix=status} (starting...)
Feb 20 10:06:46 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Feb 20 10:06:46 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/224606860' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Feb 20 10:06:47 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Feb 20 10:06:47 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3226058210' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Feb 20 10:06:47 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.
Feb 20 10:06:47 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/2884655271' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Feb 20 10:06:47 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/2372081725' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Feb 20 10:06:47 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/3334651353' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Feb 20 10:06:47 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/224606860' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Feb 20 10:06:47 np0005625203.localdomain ceph-mon[296066]: from='client.59044 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 10:06:47 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/2447655434' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Feb 20 10:06:47 np0005625203.localdomain ceph-mon[296066]: from='client.49845 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 10:06:47 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/2724243597' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Feb 20 10:06:47 np0005625203.localdomain ceph-mon[296066]: from='client.59050 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 10:06:47 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/4212865981' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Feb 20 10:06:47 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/3226058210' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Feb 20 10:06:47 np0005625203.localdomain ceph-mon[296066]: from='client.49857 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 10:06:47 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/3458121244' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Feb 20 10:06:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:47.764 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:06:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:47.766 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:06:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:47.766 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 20 10:06:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:47.766 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 10:06:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:47.780 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:06:47 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:47.781 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 10:06:47 np0005625203.localdomain podman[331085]: 2026-02-20 10:06:47.782918114 +0000 UTC m=+0.098441838 container health_status 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Feb 20 10:06:47 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "features"} v 0)
Feb 20 10:06:47 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1605299969' entity='client.admin' cmd={"prefix": "features"} : dispatch
Feb 20 10:06:47 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb 20 10:06:47 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1261710750' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Feb 20 10:06:47 np0005625203.localdomain podman[331085]: 2026-02-20 10:06:47.825222663 +0000 UTC m=+0.140746387 container exec_died 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-df122b180261157f1de1391083b3d8abac306e2f12893ac7b9291feafc874311'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS)
Feb 20 10:06:47 np0005625203.localdomain systemd[1]: 379421a3b95935790d82cc88be16e4ccdfc3a41cc0e458e153016400f99a148d.service: Deactivated successfully.
Feb 20 10:06:48 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mgr stat"} v 0)
Feb 20 10:06:48 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1510216666' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Feb 20 10:06:48 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Feb 20 10:06:48 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/93294090' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Feb 20 10:06:48 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mgr versions"} v 0)
Feb 20 10:06:48 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1688900044' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Feb 20 10:06:48 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/2868000341' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Feb 20 10:06:48 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/1605299969' entity='client.admin' cmd={"prefix": "features"} : dispatch
Feb 20 10:06:48 np0005625203.localdomain ceph-mon[296066]: from='client.? ' entity='client.admin' cmd={"prefix": "features"} : dispatch
Feb 20 10:06:48 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/1261710750' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Feb 20 10:06:48 np0005625203.localdomain ceph-mon[296066]: pgmap v742: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:06:48 np0005625203.localdomain ceph-mon[296066]: from='client.98705 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 10:06:48 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/2314617770' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Feb 20 10:06:48 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/3018402630' entity='client.admin' cmd={"prefix": "features"} : dispatch
Feb 20 10:06:48 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/269488103' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Feb 20 10:06:48 np0005625203.localdomain ceph-mon[296066]: from='client.98711 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 10:06:48 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/1510216666' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Feb 20 10:06:48 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/93294090' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Feb 20 10:06:48 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/1012312030' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Feb 20 10:06:48 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/2564032777' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Feb 20 10:06:48 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/2032604306' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Feb 20 10:06:48 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/1688900044' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Feb 20 10:06:48 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/2273315391' entity='client.admin' cmd={"prefix": "features"} : dispatch
Feb 20 10:06:48 np0005625203.localdomain ceph-mon[296066]: from='client.? ' entity='client.admin' cmd={"prefix": "features"} : dispatch
Feb 20 10:06:49 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Feb 20 10:06:49 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2544276023' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Feb 20 10:06:49 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Feb 20 10:06:49 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2029848996' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Feb 20 10:06:49 np0005625203.localdomain ceph-mon[296066]: from='client.59095 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 10:06:49 np0005625203.localdomain ceph-mon[296066]: from='client.49893 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 10:06:49 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/1261218979' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Feb 20 10:06:49 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/3830416334' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Feb 20 10:06:49 np0005625203.localdomain ceph-mon[296066]: from='client.59113 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 10:06:49 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/2544276023' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Feb 20 10:06:49 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/419782457' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Feb 20 10:06:49 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/3888246860' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Feb 20 10:06:49 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/2343932821' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Feb 20 10:06:49 np0005625203.localdomain ceph-mon[296066]: from='client.49911 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 10:06:49 np0005625203.localdomain ceph-mon[296066]: from='client.59122 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 10:06:49 np0005625203.localdomain ceph-mon[296066]: from='client.98762 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 10:06:49 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/1296629889' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Feb 20 10:06:49 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/2029848996' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Feb 20 10:06:49 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/2992728241' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Feb 20 10:06:50 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mgr dump"} v 0)
Feb 20 10:06:50 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2780626791' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Feb 20 10:06:50 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 982790 data_alloc: 301989888 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:21.121352+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96493568 unmapped: 2162688 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:22.121527+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96493568 unmapped: 2162688 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 88 heartbeat osd_stat(store_statfs(0x1b9da7000/0x0/0x1bfc00000, data 0x2e062e2/0x2e85000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:23.121713+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96493568 unmapped: 2162688 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:24.121851+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96493568 unmapped: 2162688 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:25.121996+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96493568 unmapped: 2162688 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 982790 data_alloc: 301989888 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:26.122139+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 88 heartbeat osd_stat(store_statfs(0x1b9da7000/0x0/0x1bfc00000, data 0x2e062e2/0x2e85000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96493568 unmapped: 2162688 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 88 heartbeat osd_stat(store_statfs(0x1b9da7000/0x0/0x1bfc00000, data 0x2e062e2/0x2e85000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:27.122285+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96493568 unmapped: 2162688 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:28.122408+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96493568 unmapped: 2162688 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 88 heartbeat osd_stat(store_statfs(0x1b9da7000/0x0/0x1bfc00000, data 0x2e062e2/0x2e85000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:29.122926+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96493568 unmapped: 2162688 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:30.123087+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96493568 unmapped: 2162688 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 982790 data_alloc: 301989888 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:31.123257+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96493568 unmapped: 2162688 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:32.123393+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96493568 unmapped: 2162688 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:33.123525+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96493568 unmapped: 2162688 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 88 heartbeat osd_stat(store_statfs(0x1b9da7000/0x0/0x1bfc00000, data 0x2e062e2/0x2e85000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:34.123728+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 88 heartbeat osd_stat(store_statfs(0x1b9da7000/0x0/0x1bfc00000, data 0x2e062e2/0x2e85000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96493568 unmapped: 2162688 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 88 heartbeat osd_stat(store_statfs(0x1b9da7000/0x0/0x1bfc00000, data 0x2e062e2/0x2e85000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:35.123946+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96493568 unmapped: 2162688 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 982790 data_alloc: 301989888 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:36.124121+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96493568 unmapped: 2162688 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:37.124253+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96493568 unmapped: 2162688 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 88 heartbeat osd_stat(store_statfs(0x1b9da7000/0x0/0x1bfc00000, data 0x2e062e2/0x2e85000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:38.124427+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 88 heartbeat osd_stat(store_statfs(0x1b9da7000/0x0/0x1bfc00000, data 0x2e062e2/0x2e85000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96493568 unmapped: 2162688 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:39.124578+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96493568 unmapped: 2162688 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:40.124714+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96493568 unmapped: 2162688 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 982790 data_alloc: 301989888 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:41.124856+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96493568 unmapped: 2162688 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:42.125027+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 88 heartbeat osd_stat(store_statfs(0x1b9da7000/0x0/0x1bfc00000, data 0x2e062e2/0x2e85000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96493568 unmapped: 2162688 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:43.125170+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96501760 unmapped: 2154496 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:44.126108+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96501760 unmapped: 2154496 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 88 heartbeat osd_stat(store_statfs(0x1b9da7000/0x0/0x1bfc00000, data 0x2e062e2/0x2e85000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:45.126232+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96501760 unmapped: 2154496 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 982790 data_alloc: 301989888 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:46.126378+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 88 handle_osd_map epochs [88,89], i have 88, src has [1,89]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 82.472755432s of 82.480300903s, submitted: 1
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc handle_mgr_map Got map version 31
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc handle_mgr_map Active mgr is now 
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc reconnect Terminating session with v2:172.18.0.106:6810/1027089384
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc reconnect No active mgr available yet
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 89 ms_handle_reset con 0x558a62d36000 session 0x558a602f8960
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96780288 unmapped: 1875968 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:47.126500+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc handle_mgr_map Got map version 32
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.107:6810/2084071713,v1:172.18.0.107:6811/2084071713]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc reconnect Starting new session with [v2:172.18.0.107:6810/2084071713,v1:172.18.0.107:6811/2084071713]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: get_auth_request con 0x558a62114800 auth_method 0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc handle_mgr_configure stats_period=5
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96526336 unmapped: 2129920 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:48.126709+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc handle_mgr_map Got map version 33
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.107:6810/2084071713,v1:172.18.0.107:6811/2084071713]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96714752 unmapped: 1941504 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:49.126869+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96714752 unmapped: 1941504 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc handle_mgr_map Got map version 34
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.107:6810/2084071713,v1:172.18.0.107:6811/2084071713]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 89 heartbeat osd_stat(store_statfs(0x1b9da4000/0x0/0x1bfc00000, data 0x2e083f6/0x2e88000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:50.127312+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96854016 unmapped: 1802240 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 985762 data_alloc: 301989888 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:51.127462+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc handle_mgr_map Got map version 35
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.107:6810/2084071713,v1:172.18.0.107:6811/2084071713]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97001472 unmapped: 1654784 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:52.127606+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc handle_mgr_map Got map version 36
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.107:6810/2084071713,v1:172.18.0.107:6811/2084071713]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97001472 unmapped: 1654784 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 89 heartbeat osd_stat(store_statfs(0x1b9da4000/0x0/0x1bfc00000, data 0x2e083f6/0x2e88000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:53.127794+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 89 heartbeat osd_stat(store_statfs(0x1b9da4000/0x0/0x1bfc00000, data 0x2e083f6/0x2e88000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97001472 unmapped: 1654784 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:54.128106+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97001472 unmapped: 1654784 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:55.128732+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97001472 unmapped: 1654784 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:56.129158+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 985762 data_alloc: 301989888 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 89 heartbeat osd_stat(store_statfs(0x1b9da4000/0x0/0x1bfc00000, data 0x2e083f6/0x2e88000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97001472 unmapped: 1654784 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:57.129419+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97001472 unmapped: 1654784 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:58.129679+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97001472 unmapped: 1654784 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:59.129967+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97001472 unmapped: 1654784 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 89 heartbeat osd_stat(store_statfs(0x1b9da4000/0x0/0x1bfc00000, data 0x2e083f6/0x2e88000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:00.130257+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97001472 unmapped: 1654784 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:01.130521+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 985762 data_alloc: 301989888 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97001472 unmapped: 1654784 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:02.130720+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97001472 unmapped: 1654784 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:03.130967+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 89 heartbeat osd_stat(store_statfs(0x1b9da4000/0x0/0x1bfc00000, data 0x2e083f6/0x2e88000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97001472 unmapped: 1654784 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:04.131203+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97001472 unmapped: 1654784 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:05.131396+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 89 heartbeat osd_stat(store_statfs(0x1b9da4000/0x0/0x1bfc00000, data 0x2e083f6/0x2e88000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97001472 unmapped: 1654784 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:06.131618+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 985762 data_alloc: 301989888 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97001472 unmapped: 1654784 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:07.131850+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97001472 unmapped: 1654784 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:08.132136+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 89 heartbeat osd_stat(store_statfs(0x1b9da4000/0x0/0x1bfc00000, data 0x2e083f6/0x2e88000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97001472 unmapped: 1654784 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:09.132463+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_monmap mon_map magic: 0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient:  got monmap 12 from mon.np0005625204 (according to old e12)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: dump:
                                                          epoch 12
                                                          fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
                                                          last_changed 2026-02-20T09:45:39.346453+0000
                                                          created 2026-02-20T07:36:51.191305+0000
                                                          min_mon_release 18 (reef)
                                                          election_strategy: 1
                                                          0: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005625201
                                                          1: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005625204
                                                          2: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005625202
                                                          3: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005625203
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97001472 unmapped: 1654784 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:10.132721+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97001472 unmapped: 1654784 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:11.133027+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 985762 data_alloc: 301989888 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 89 heartbeat osd_stat(store_statfs(0x1b9da4000/0x0/0x1bfc00000, data 0x2e083f6/0x2e88000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97001472 unmapped: 1654784 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:12.133465+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97001472 unmapped: 1654784 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:13.133687+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97001472 unmapped: 1654784 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:14.134018+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97001472 unmapped: 1654784 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:15.134366+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97001472 unmapped: 1654784 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:16.137017+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 89 heartbeat osd_stat(store_statfs(0x1b9da4000/0x0/0x1bfc00000, data 0x2e083f6/0x2e88000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 985762 data_alloc: 301989888 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_monmap mon_map magic: 0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient:  got monmap 13 from mon.np0005625204 (according to old e13)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: dump:
                                                          epoch 13
                                                          fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
                                                          last_changed 2026-02-20T09:45:46.327222+0000
                                                          created 2026-02-20T07:36:51.191305+0000
                                                          min_mon_release 18 (reef)
                                                          election_strategy: 1
                                                          0: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005625204
                                                          1: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005625202
                                                          2: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005625203
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: ms_handle_reset current mon [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _reopen_session rank -1
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _add_conns ranks=[1,0,2]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient(hunting): picked mon.np0005625202 con 0x558a60652800 addr [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient(hunting): picked mon.np0005625204 con 0x558a602eb800 addr [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient(hunting): picked mon.np0005625203 con 0x558a602ea000 addr [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient(hunting): start opening mon connection
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient(hunting): start opening mon connection
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient(hunting): start opening mon connection
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient(hunting): _renew_subs
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 89 ms_handle_reset con 0x558a63f2f800 session 0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient(hunting): get_auth_request con 0x558a602ea000 auth_method 0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient(hunting): get_auth_request method 2 preferred_modes [2,1]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient(hunting): _init_auth method 2
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient(hunting): _init_auth already have auth, reseting
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient(hunting): handle_auth_reply_more payload 9
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient(hunting): handle_auth_reply_more payload_len 9
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient(hunting): handle_auth_reply_more responding with 132 bytes
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient(hunting): handle_auth_done global_id 14343 payload 293
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _finish_hunting 0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: found mon.np0005625203
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _finish_auth 0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:16.353273+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: ms_handle_reset current mon [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _reopen_session rank -1
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _add_conns ranks=[0,2,1]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient(hunting): picked mon.np0005625204 con 0x558a602eb800 addr [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient(hunting): picked mon.np0005625203 con 0x558a63f2f800 addr [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient(hunting): picked mon.np0005625202 con 0x558a60652800 addr [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient(hunting): start opening mon connection
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient(hunting): start opening mon connection
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient(hunting): start opening mon connection
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient(hunting): _renew_subs
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 89 ms_handle_reset con 0x558a602ea000 session 0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient(hunting): get_auth_request con 0x558a63f2f800 auth_method 0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient(hunting): get_auth_request method 2 preferred_modes [2,1]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient(hunting): _init_auth method 2
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient(hunting): _init_auth already have auth, reseting
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient(hunting): handle_auth_reply_more payload 9
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient(hunting): handle_auth_reply_more payload_len 9
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient(hunting): handle_auth_reply_more responding with 132 bytes
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient(hunting): handle_auth_done global_id 14343 payload 293
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _finish_hunting 0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: found mon.np0005625203
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _finish_auth 0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:16.358835+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_monmap mon_map magic: 0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient:  got monmap 13 from mon.np0005625203 (according to old e13)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: dump:
                                                          epoch 13
                                                          fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
                                                          last_changed 2026-02-20T09:45:46.327222+0000
                                                          created 2026-02-20T07:36:51.191305+0000
                                                          min_mon_release 18 (reef)
                                                          election_strategy: 1
                                                          0: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005625204
                                                          1: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005625202
                                                          2: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005625203
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_config config(7 keys)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: set_mon_vals no callback set
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc handle_mgr_map Got map version 36
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.107:6810/2084071713,v1:172.18.0.107:6811/2084071713]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97034240 unmapped: 1622016 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:17.137423+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97034240 unmapped: 1622016 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:18.138385+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 89 heartbeat osd_stat(store_statfs(0x1b9da4000/0x0/0x1bfc00000, data 0x2e083f6/0x2e88000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97034240 unmapped: 1622016 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:19.139935+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97034240 unmapped: 1622016 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:20.141251+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 89 heartbeat osd_stat(store_statfs(0x1b9da4000/0x0/0x1bfc00000, data 0x2e083f6/0x2e88000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97034240 unmapped: 1622016 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:21.141791+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 985762 data_alloc: 301989888 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97034240 unmapped: 1622016 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:22.142017+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97034240 unmapped: 1622016 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:23.142745+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97034240 unmapped: 1622016 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:24.143561+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97034240 unmapped: 1622016 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:25.143898+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97034240 unmapped: 1622016 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 89 heartbeat osd_stat(store_statfs(0x1b9da4000/0x0/0x1bfc00000, data 0x2e083f6/0x2e88000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:26.144739+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 985762 data_alloc: 301989888 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97034240 unmapped: 1622016 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:27.145203+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_monmap mon_map magic: 0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient:  got monmap 14 from mon.np0005625203 (according to old e14)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: dump:
                                                          epoch 14
                                                          fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
                                                          last_changed 2026-02-20T09:45:57.556107+0000
                                                          created 2026-02-20T07:36:51.191305+0000
                                                          min_mon_release 18 (reef)
                                                          election_strategy: 1
                                                          0: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005625204
                                                          1: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005625202
                                                          2: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005625203
                                                          3: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005625201
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97034240 unmapped: 1622016 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:28.145740+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97034240 unmapped: 1622016 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:29.146125+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97034240 unmapped: 1622016 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:30.146499+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 89 heartbeat osd_stat(store_statfs(0x1b9da4000/0x0/0x1bfc00000, data 0x2e083f6/0x2e88000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97034240 unmapped: 1622016 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:31.146727+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 985762 data_alloc: 301989888 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97034240 unmapped: 1622016 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:32.146975+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97034240 unmapped: 1622016 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:33.147139+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 89 heartbeat osd_stat(store_statfs(0x1b9da4000/0x0/0x1bfc00000, data 0x2e083f6/0x2e88000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97034240 unmapped: 1622016 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:34.147323+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97034240 unmapped: 1622016 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:35.147577+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97034240 unmapped: 1622016 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 89 heartbeat osd_stat(store_statfs(0x1b9da4000/0x0/0x1bfc00000, data 0x2e083f6/0x2e88000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:36.147747+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 985762 data_alloc: 301989888 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97034240 unmapped: 1622016 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:37.147974+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97034240 unmapped: 1622016 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:38.148124+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_monmap mon_map magic: 0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient:  got monmap 15 from mon.np0005625203 (according to old e15)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: dump:
                                                          epoch 15
                                                          fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
                                                          last_changed 2026-02-20T09:46:08.177805+0000
                                                          created 2026-02-20T07:36:51.191305+0000
                                                          min_mon_release 18 (reef)
                                                          election_strategy: 1
                                                          0: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005625204
                                                          1: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005625202
                                                          2: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005625203
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97034240 unmapped: 1622016 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:39.148386+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 89 heartbeat osd_stat(store_statfs(0x1b9da4000/0x0/0x1bfc00000, data 0x2e083f6/0x2e88000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97034240 unmapped: 1622016 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:40.148568+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 89 heartbeat osd_stat(store_statfs(0x1b9da4000/0x0/0x1bfc00000, data 0x2e083f6/0x2e88000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96919552 unmapped: 1736704 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:41.148780+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 985762 data_alloc: 301989888 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 89 heartbeat osd_stat(store_statfs(0x1b9da4000/0x0/0x1bfc00000, data 0x2e083f6/0x2e88000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96919552 unmapped: 1736704 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:42.148964+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 89 heartbeat osd_stat(store_statfs(0x1b9da4000/0x0/0x1bfc00000, data 0x2e083f6/0x2e88000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 89 heartbeat osd_stat(store_statfs(0x1b9da4000/0x0/0x1bfc00000, data 0x2e083f6/0x2e88000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96919552 unmapped: 1736704 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:43.149147+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 89 heartbeat osd_stat(store_statfs(0x1b9da4000/0x0/0x1bfc00000, data 0x2e083f6/0x2e88000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96919552 unmapped: 1736704 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:44.149345+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96919552 unmapped: 1736704 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:45.149528+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96919552 unmapped: 1736704 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:46.149698+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 985762 data_alloc: 301989888 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96919552 unmapped: 1736704 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:47.149949+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 89 heartbeat osd_stat(store_statfs(0x1b9da4000/0x0/0x1bfc00000, data 0x2e083f6/0x2e88000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96919552 unmapped: 1736704 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:48.150829+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96919552 unmapped: 1736704 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:49.151227+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96919552 unmapped: 1736704 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:50.151496+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96919552 unmapped: 1736704 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:51.151639+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 985762 data_alloc: 301989888 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96919552 unmapped: 1736704 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 89 heartbeat osd_stat(store_statfs(0x1b9da4000/0x0/0x1bfc00000, data 0x2e083f6/0x2e88000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:52.151817+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96919552 unmapped: 1736704 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:53.151971+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96919552 unmapped: 1736704 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:54.152155+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_monmap mon_map magic: 0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient:  got monmap 16 from mon.np0005625203 (according to old e16)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: dump:
                                                          epoch 16
                                                          fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
                                                          last_changed 2026-02-20T09:46:24.360760+0000
                                                          created 2026-02-20T07:36:51.191305+0000
                                                          min_mon_release 18 (reef)
                                                          election_strategy: 1
                                                          0: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005625202
                                                          1: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005625203
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: ms_handle_reset current mon [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _reopen_session rank -1
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _add_conns ranks=[0,1]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient(hunting): picked mon.np0005625202 con 0x558a60652800 addr [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient(hunting): picked mon.np0005625203 con 0x558a602ea000 addr [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient(hunting): start opening mon connection
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient(hunting): start opening mon connection
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient(hunting): _renew_subs
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 89 ms_handle_reset con 0x558a63f2f800 session 0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient(hunting): get_auth_request con 0x558a602ea000 auth_method 0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient(hunting): get_auth_request method 2 preferred_modes [2,1]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient(hunting): _init_auth method 2
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient(hunting): _init_auth already have auth, reseting
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient(hunting): handle_auth_reply_more payload 9
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient(hunting): handle_auth_reply_more payload_len 9
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient(hunting): handle_auth_reply_more responding with 132 bytes
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient(hunting): handle_auth_done global_id 14343 payload 293
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _finish_hunting 0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: found mon.np0005625203
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _finish_auth 0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:54.383353+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_monmap mon_map magic: 0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient:  got monmap 16 from mon.np0005625203 (according to old e16)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: dump:
                                                          epoch 16
                                                          fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
                                                          last_changed 2026-02-20T09:46:24.360760+0000
                                                          created 2026-02-20T07:36:51.191305+0000
                                                          min_mon_release 18 (reef)
                                                          election_strategy: 1
                                                          0: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005625202
                                                          1: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005625203
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_config config(7 keys)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: set_mon_vals no callback set
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc handle_mgr_map Got map version 36
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.107:6810/2084071713,v1:172.18.0.107:6811/2084071713]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 89 heartbeat osd_stat(store_statfs(0x1b9da4000/0x0/0x1bfc00000, data 0x2e083f6/0x2e88000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96944128 unmapped: 1712128 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:55.152310+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 89 heartbeat osd_stat(store_statfs(0x1b9da4000/0x0/0x1bfc00000, data 0x2e083f6/0x2e88000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96944128 unmapped: 1712128 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:56.152679+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 985762 data_alloc: 301989888 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96944128 unmapped: 1712128 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:57.152951+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96944128 unmapped: 1712128 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:58.153191+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 89 heartbeat osd_stat(store_statfs(0x1b9da4000/0x0/0x1bfc00000, data 0x2e083f6/0x2e88000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96944128 unmapped: 1712128 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:59.153430+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96944128 unmapped: 1712128 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:00.153644+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96944128 unmapped: 1712128 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:01.153830+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 89 heartbeat osd_stat(store_statfs(0x1b9da4000/0x0/0x1bfc00000, data 0x2e083f6/0x2e88000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 985762 data_alloc: 301989888 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96944128 unmapped: 1712128 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:02.153966+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96944128 unmapped: 1712128 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:03.154090+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 89 heartbeat osd_stat(store_statfs(0x1b9da4000/0x0/0x1bfc00000, data 0x2e083f6/0x2e88000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96944128 unmapped: 1712128 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:04.154325+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96944128 unmapped: 1712128 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:05.154499+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 89 heartbeat osd_stat(store_statfs(0x1b9da4000/0x0/0x1bfc00000, data 0x2e083f6/0x2e88000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96944128 unmapped: 1712128 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:06.154702+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 985762 data_alloc: 301989888 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96944128 unmapped: 1712128 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:07.154868+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96944128 unmapped: 1712128 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:08.155037+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96944128 unmapped: 1712128 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:09.155225+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96944128 unmapped: 1712128 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:10.155446+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96944128 unmapped: 1712128 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 89 heartbeat osd_stat(store_statfs(0x1b9da4000/0x0/0x1bfc00000, data 0x2e083f6/0x2e88000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:11.155589+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 985762 data_alloc: 301989888 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96944128 unmapped: 1712128 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:12.155743+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96944128 unmapped: 1712128 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:13.155920+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96944128 unmapped: 1712128 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:14.156119+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96944128 unmapped: 1712128 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:15.156316+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96944128 unmapped: 1712128 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:16.156520+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 985762 data_alloc: 301989888 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_monmap mon_map magic: 0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient:  got monmap 17 from mon.np0005625203 (according to old e17)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: dump:
                                                          epoch 17
                                                          fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
                                                          last_changed 2026-02-20T09:46:46.606881+0000
                                                          created 2026-02-20T07:36:51.191305+0000
                                                          min_mon_release 18 (reef)
                                                          election_strategy: 1
                                                          0: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005625202
                                                          1: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005625203
                                                          2: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005625204
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 89 heartbeat osd_stat(store_statfs(0x1b9da4000/0x0/0x1bfc00000, data 0x2e083f6/0x2e88000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96944128 unmapped: 1712128 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:17.156730+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 89 heartbeat osd_stat(store_statfs(0x1b9da4000/0x0/0x1bfc00000, data 0x2e083f6/0x2e88000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96944128 unmapped: 1712128 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:18.156942+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96944128 unmapped: 1712128 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:19.157201+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96944128 unmapped: 1712128 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:20.157393+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96944128 unmapped: 1712128 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:21.157590+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 985762 data_alloc: 301989888 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc handle_mgr_map Got map version 37
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.107:6810/2084071713,v1:172.18.0.107:6811/2084071713]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96960512 unmapped: 1695744 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:22.157738+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96960512 unmapped: 1695744 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:23.157970+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 89 heartbeat osd_stat(store_statfs(0x1b9da4000/0x0/0x1bfc00000, data 0x2e083f6/0x2e88000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96960512 unmapped: 1695744 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:24.158123+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96960512 unmapped: 1695744 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:25.158302+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96960512 unmapped: 1695744 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:26.158475+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 985762 data_alloc: 301989888 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96960512 unmapped: 1695744 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:27.158638+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96960512 unmapped: 1695744 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:28.158802+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc handle_mgr_map Got map version 38
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc handle_mgr_map Active mgr is now 
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc reconnect Terminating session with v2:172.18.0.107:6810/2084071713
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc reconnect No active mgr available yet
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 89 ms_handle_reset con 0x558a5f807800 session 0x558a621305a0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _renew_subs
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 89 handle_osd_map epochs [90,90], i have 89, src has [1,90]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 101.904441833s of 101.911865234s, submitted: 1
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 90 heartbeat osd_stat(store_statfs(0x1b9da4000/0x0/0x1bfc00000, data 0x2e083f6/0x2e88000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97017856 unmapped: 1638400 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:29.158943+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc handle_mgr_map Got map version 39
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/689946273,v1:172.18.0.108:6811/689946273]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc reconnect Starting new session with [v2:172.18.0.108:6810/689946273,v1:172.18.0.108:6811/689946273]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: get_auth_request con 0x558a63f2f800 auth_method 0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc handle_mgr_configure stats_period=5
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97026048 unmapped: 1630208 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:30.159185+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:31.159330+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97042432 unmapped: 1613824 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 988062 data_alloc: 301989888 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc handle_mgr_map Got map version 40
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/689946273,v1:172.18.0.108:6811/689946273]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:32.159482+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96763904 unmapped: 1892352 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:33.159627+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96763904 unmapped: 1892352 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 90 heartbeat osd_stat(store_statfs(0x1b9da2000/0x0/0x1bfc00000, data 0x2e0a5b6/0x2e8b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:34.159758+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96763904 unmapped: 1892352 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc handle_mgr_map Got map version 41
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/689946273,v1:172.18.0.108:6811/689946273]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:35.159907+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96772096 unmapped: 1884160 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:36.160087+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96772096 unmapped: 1884160 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 988062 data_alloc: 301989888 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 90 heartbeat osd_stat(store_statfs(0x1b9da2000/0x0/0x1bfc00000, data 0x2e0a5b6/0x2e8b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:37.160252+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96772096 unmapped: 1884160 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:38.160442+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96772096 unmapped: 1884160 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:39.161010+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96772096 unmapped: 1884160 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:40.161140+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96772096 unmapped: 1884160 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:41.161277+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96772096 unmapped: 1884160 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 988062 data_alloc: 301989888 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:42.161414+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96772096 unmapped: 1884160 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 90 heartbeat osd_stat(store_statfs(0x1b9da2000/0x0/0x1bfc00000, data 0x2e0a5b6/0x2e8b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:43.161574+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96772096 unmapped: 1884160 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:44.161762+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96772096 unmapped: 1884160 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 90 heartbeat osd_stat(store_statfs(0x1b9da2000/0x0/0x1bfc00000, data 0x2e0a5b6/0x2e8b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:45.161934+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96772096 unmapped: 1884160 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:46.162142+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96772096 unmapped: 1884160 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 988062 data_alloc: 301989888 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 90 heartbeat osd_stat(store_statfs(0x1b9da2000/0x0/0x1bfc00000, data 0x2e0a5b6/0x2e8b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:47.162305+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96772096 unmapped: 1884160 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:48.162500+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96772096 unmapped: 1884160 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:49.162755+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96772096 unmapped: 1884160 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:50.162965+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96772096 unmapped: 1884160 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 90 heartbeat osd_stat(store_statfs(0x1b9da2000/0x0/0x1bfc00000, data 0x2e0a5b6/0x2e8b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:51.163152+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96772096 unmapped: 1884160 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 988062 data_alloc: 301989888 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:52.163324+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96772096 unmapped: 1884160 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc handle_mgr_map Got map version 42
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/689946273,v1:172.18.0.108:6811/689946273]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 90 heartbeat osd_stat(store_statfs(0x1b9da2000/0x0/0x1bfc00000, data 0x2e0a5b6/0x2e8b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:53.163482+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96796672 unmapped: 1859584 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:54.163618+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96796672 unmapped: 1859584 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:55.163835+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96796672 unmapped: 1859584 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:56.164114+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96796672 unmapped: 1859584 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 988062 data_alloc: 301989888 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 90 heartbeat osd_stat(store_statfs(0x1b9da2000/0x0/0x1bfc00000, data 0x2e0a5b6/0x2e8b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:57.164283+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96796672 unmapped: 1859584 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 90 heartbeat osd_stat(store_statfs(0x1b9da2000/0x0/0x1bfc00000, data 0x2e0a5b6/0x2e8b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:58.164426+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96796672 unmapped: 1859584 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:59.164620+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96796672 unmapped: 1859584 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 90 heartbeat osd_stat(store_statfs(0x1b9da2000/0x0/0x1bfc00000, data 0x2e0a5b6/0x2e8b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:00.164803+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96796672 unmapped: 1859584 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:01.164989+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96796672 unmapped: 1859584 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 988062 data_alloc: 301989888 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:02.165162+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96796672 unmapped: 1859584 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 90 heartbeat osd_stat(store_statfs(0x1b9da2000/0x0/0x1bfc00000, data 0x2e0a5b6/0x2e8b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:03.165390+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96796672 unmapped: 1859584 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:04.165583+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96796672 unmapped: 1859584 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:05.165697+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96796672 unmapped: 1859584 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:06.165795+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96796672 unmapped: 1859584 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 90 heartbeat osd_stat(store_statfs(0x1b9da2000/0x0/0x1bfc00000, data 0x2e0a5b6/0x2e8b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 988062 data_alloc: 301989888 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:07.165970+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96796672 unmapped: 1859584 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:08.166134+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96796672 unmapped: 1859584 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 90 heartbeat osd_stat(store_statfs(0x1b9da2000/0x0/0x1bfc00000, data 0x2e0a5b6/0x2e8b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:09.166346+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96796672 unmapped: 1859584 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:10.166524+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96796672 unmapped: 1859584 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 90 heartbeat osd_stat(store_statfs(0x1b9da2000/0x0/0x1bfc00000, data 0x2e0a5b6/0x2e8b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:11.166703+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96796672 unmapped: 1859584 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 90 heartbeat osd_stat(store_statfs(0x1b9da2000/0x0/0x1bfc00000, data 0x2e0a5b6/0x2e8b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 988062 data_alloc: 301989888 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:12.166960+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96796672 unmapped: 1859584 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:13.167191+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96796672 unmapped: 1859584 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:14.167409+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96796672 unmapped: 1859584 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 90 heartbeat osd_stat(store_statfs(0x1b9da2000/0x0/0x1bfc00000, data 0x2e0a5b6/0x2e8b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:15.167582+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96796672 unmapped: 1859584 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:16.167747+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96796672 unmapped: 1859584 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 988062 data_alloc: 301989888 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:17.168011+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96796672 unmapped: 1859584 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 90 heartbeat osd_stat(store_statfs(0x1b9da2000/0x0/0x1bfc00000, data 0x2e0a5b6/0x2e8b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:18.168227+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96796672 unmapped: 1859584 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:19.168441+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96796672 unmapped: 1859584 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:20.168645+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 90 heartbeat osd_stat(store_statfs(0x1b9da2000/0x0/0x1bfc00000, data 0x2e0a5b6/0x2e8b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96796672 unmapped: 1859584 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:21.168829+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96796672 unmapped: 1859584 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 988062 data_alloc: 301989888 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:22.169044+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96796672 unmapped: 1859584 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:23.169266+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96796672 unmapped: 1859584 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:24.169454+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96804864 unmapped: 1851392 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 90 heartbeat osd_stat(store_statfs(0x1b9da2000/0x0/0x1bfc00000, data 0x2e0a5b6/0x2e8b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:25.169641+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96804864 unmapped: 1851392 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 90 heartbeat osd_stat(store_statfs(0x1b9da2000/0x0/0x1bfc00000, data 0x2e0a5b6/0x2e8b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 90 heartbeat osd_stat(store_statfs(0x1b9da2000/0x0/0x1bfc00000, data 0x2e0a5b6/0x2e8b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:26.170031+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96804864 unmapped: 1851392 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 988062 data_alloc: 301989888 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 90 heartbeat osd_stat(store_statfs(0x1b9da2000/0x0/0x1bfc00000, data 0x2e0a5b6/0x2e8b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:27.170198+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96804864 unmapped: 1851392 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:28.170379+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96804864 unmapped: 1851392 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:29.170726+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96772096 unmapped: 1884160 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:30.170952+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96772096 unmapped: 1884160 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:31.171191+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96772096 unmapped: 1884160 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 988062 data_alloc: 301989888 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:32.171649+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96772096 unmapped: 1884160 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 90 heartbeat osd_stat(store_statfs(0x1b9da2000/0x0/0x1bfc00000, data 0x2e0a5b6/0x2e8b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:33.172001+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96772096 unmapped: 1884160 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 90 heartbeat osd_stat(store_statfs(0x1b9da2000/0x0/0x1bfc00000, data 0x2e0a5b6/0x2e8b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:34.172218+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96772096 unmapped: 1884160 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:35.172567+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96772096 unmapped: 1884160 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:36.172894+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96772096 unmapped: 1884160 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 988062 data_alloc: 301989888 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:37.173049+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96772096 unmapped: 1884160 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:38.173296+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96772096 unmapped: 1884160 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 90 heartbeat osd_stat(store_statfs(0x1b9da2000/0x0/0x1bfc00000, data 0x2e0a5b6/0x2e8b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:39.173487+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 90 heartbeat osd_stat(store_statfs(0x1b9da2000/0x0/0x1bfc00000, data 0x2e0a5b6/0x2e8b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96772096 unmapped: 1884160 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:40.173735+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96772096 unmapped: 1884160 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:41.173865+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96772096 unmapped: 1884160 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 988062 data_alloc: 301989888 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:42.174166+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96772096 unmapped: 1884160 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:43.174396+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96772096 unmapped: 1884160 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 90 heartbeat osd_stat(store_statfs(0x1b9da2000/0x0/0x1bfc00000, data 0x2e0a5b6/0x2e8b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:44.174582+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96763904 unmapped: 1892352 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:45.174816+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96763904 unmapped: 1892352 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:46.175087+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96763904 unmapped: 1892352 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 988062 data_alloc: 301989888 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:47.175246+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96763904 unmapped: 1892352 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:48.175438+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96763904 unmapped: 1892352 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 90 heartbeat osd_stat(store_statfs(0x1b9da2000/0x0/0x1bfc00000, data 0x2e0a5b6/0x2e8b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:49.175673+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96763904 unmapped: 1892352 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:50.175866+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96763904 unmapped: 1892352 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:51.176034+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96763904 unmapped: 1892352 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 988062 data_alloc: 301989888 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:52.176200+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 90 heartbeat osd_stat(store_statfs(0x1b9da2000/0x0/0x1bfc00000, data 0x2e0a5b6/0x2e8b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96763904 unmapped: 1892352 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:53.176407+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 90 handle_osd_map epochs [90,91], i have 90, src has [1,91]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 84.805038452s of 84.814750671s, submitted: 2
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc handle_mgr_map Got map version 43
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc handle_mgr_map Active mgr is now 
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc reconnect Terminating session with v2:172.18.0.108:6810/689946273
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc reconnect No active mgr available yet
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 91 ms_handle_reset con 0x558a5f807800 session 0x558a62126780
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96862208 unmapped: 1794048 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a602eb800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:54.176582+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc handle_mgr_map Got map version 44
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/1895764224,v1:172.18.0.106:6811/1895764224]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc reconnect Starting new session with [v2:172.18.0.106:6810/1895764224,v1:172.18.0.106:6811/1895764224]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: get_auth_request con 0x558a62719000 auth_method 0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc handle_mgr_configure stats_period=5
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96960512 unmapped: 1695744 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b9d9e000/0x0/0x1bfc00000, data 0x2e0c924/0x2e8f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:55.176709+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96960512 unmapped: 1695744 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:56.176848+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96960512 unmapped: 1695744 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 991062 data_alloc: 301989888 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:57.177007+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96960512 unmapped: 1695744 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:58.177164+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96960512 unmapped: 1695744 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc handle_mgr_map Got map version 45
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/1895764224,v1:172.18.0.106:6811/1895764224]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:59.177336+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96731136 unmapped: 1925120 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:00.177490+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b9d9e000/0x0/0x1bfc00000, data 0x2e0c924/0x2e8f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96731136 unmapped: 1925120 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:01.177732+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96731136 unmapped: 1925120 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 991062 data_alloc: 301989888 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:02.177940+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96731136 unmapped: 1925120 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:03.178086+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96731136 unmapped: 1925120 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b9d9e000/0x0/0x1bfc00000, data 0x2e0c924/0x2e8f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:04.178384+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:05.178520+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:06.178742+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 991062 data_alloc: 301989888 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:07.178932+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:08.179115+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:09.179331+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b9d9e000/0x0/0x1bfc00000, data 0x2e0c924/0x2e8f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:10.179488+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:11.179646+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 991062 data_alloc: 301989888 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b9d9e000/0x0/0x1bfc00000, data 0x2e0c924/0x2e8f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:12.179823+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:13.180006+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:14.180277+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b9d9e000/0x0/0x1bfc00000, data 0x2e0c924/0x2e8f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:15.180548+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:16.180848+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 991062 data_alloc: 301989888 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:17.181202+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:18.181417+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b9d9e000/0x0/0x1bfc00000, data 0x2e0c924/0x2e8f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:19.181642+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:20.181869+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:21.182042+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 991062 data_alloc: 301989888 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:22.182255+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:23.182443+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b9d9e000/0x0/0x1bfc00000, data 0x2e0c924/0x2e8f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:24.182638+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:25.182788+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b9d9e000/0x0/0x1bfc00000, data 0x2e0c924/0x2e8f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:26.182976+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 991062 data_alloc: 301989888 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:27.183163+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:28.183344+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b9d9e000/0x0/0x1bfc00000, data 0x2e0c924/0x2e8f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:29.183580+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:30.183772+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:31.183999+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 991062 data_alloc: 301989888 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:32.184219+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:33.184455+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b9d9e000/0x0/0x1bfc00000, data 0x2e0c924/0x2e8f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:34.184682+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:35.184923+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:36.185147+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 991062 data_alloc: 285212672 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:37.185375+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b9d9e000/0x0/0x1bfc00000, data 0x2e0c924/0x2e8f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:38.185535+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b9d9e000/0x0/0x1bfc00000, data 0x2e0c924/0x2e8f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:39.185957+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b9d9e000/0x0/0x1bfc00000, data 0x2e0c924/0x2e8f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:40.186159+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:41.186536+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 991062 data_alloc: 285212672 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:42.186755+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:43.186966+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:44.187201+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:45.187414+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b9d9e000/0x0/0x1bfc00000, data 0x2e0c924/0x2e8f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:46.187634+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 991062 data_alloc: 285212672 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:47.187829+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:48.188044+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:49.188261+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:50.188554+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b9d9e000/0x0/0x1bfc00000, data 0x2e0c924/0x2e8f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:51.188785+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 991062 data_alloc: 285212672 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:52.189714+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:53.189965+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:54.190236+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:55.190470+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b9d9e000/0x0/0x1bfc00000, data 0x2e0c924/0x2e8f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:56.190684+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b9d9e000/0x0/0x1bfc00000, data 0x2e0c924/0x2e8f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 991062 data_alloc: 285212672 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:57.190922+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:58.191199+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:59.191522+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:00.191857+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:01.192130+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 991062 data_alloc: 285212672 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:02.192639+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b9d9e000/0x0/0x1bfc00000, data 0x2e0c924/0x2e8f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:03.193332+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b9d9e000/0x0/0x1bfc00000, data 0x2e0c924/0x2e8f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:04.195852+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:05.199052+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:06.200293+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 991062 data_alloc: 285212672 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:07.200921+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:08.204627+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:09.205149+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b9d9e000/0x0/0x1bfc00000, data 0x2e0c924/0x2e8f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:10.205668+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:11.208310+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b9d9e000/0x0/0x1bfc00000, data 0x2e0c924/0x2e8f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 991062 data_alloc: 285212672 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:12.208522+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:13.209364+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:14.211139+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:15.211791+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b9d9e000/0x0/0x1bfc00000, data 0x2e0c924/0x2e8f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:16.212332+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:17.212683+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 991062 data_alloc: 285212672 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:18.213809+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:19.214150+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:20.215074+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b9d9e000/0x0/0x1bfc00000, data 0x2e0c924/0x2e8f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:21.215918+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 7800.1 total, 600.0 interval
                                                          Cumulative writes: 5978 writes, 25K keys, 5978 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
                                                          Cumulative WAL: 5978 writes, 818 syncs, 7.31 writes per sync, written: 0.02 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 51 writes, 170 keys, 51 commit groups, 1.0 writes per commit group, ingest: 0.23 MB, 0.00 MB/s
                                                          Interval WAL: 51 writes, 21 syncs, 2.43 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:22.216704+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 991062 data_alloc: 285212672 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:23.217081+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b9d9e000/0x0/0x1bfc00000, data 0x2e0c924/0x2e8f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc handle_mgr_map Got map version 46
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/1895764224,v1:172.18.0.106:6811/1895764224]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b9d9e000/0x0/0x1bfc00000, data 0x2e0c924/0x2e8f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:24.217745+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:25.218072+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b9d9e000/0x0/0x1bfc00000, data 0x2e0c924/0x2e8f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:26.218354+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:27.218608+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 991062 data_alloc: 285212672 data_used: 14368768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 96534528 unmapped: 2121728 heap: 98656256 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60652800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 94.709617615s of 94.724601746s, submitted: 11
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:28.218810+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 91 heartbeat osd_stat(store_statfs(0x1b9d9e000/0x0/0x1bfc00000, data 0x2e0c934/0x2e90000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97722368 unmapped: 24018944 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:29.219042+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _renew_subs
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 91 handle_osd_map epochs [92,92], i have 91, src has [1,92]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 92 ms_handle_reset con 0x558a60652800 session 0x558a61f22780
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97730560 unmapped: 24010752 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:30.219222+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6003d000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 92 handle_osd_map epochs [92,92], i have 92, src has [1,92]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 106192896 unmapped: 15548416 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:31.219400+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 92 handle_osd_map epochs [92,93], i have 92, src has [1,93]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 93 handle_osd_map epochs [92,93], i have 93, src has [1,93]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 93 ms_handle_reset con 0x558a6003d000 session 0x558a64160960
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97927168 unmapped: 23814144 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:32.219581+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1284300 data_alloc: 285212672 data_used: 14397440
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 93 heartbeat osd_stat(store_statfs(0x1b74b3000/0x0/0x1bfc00000, data 0x56f104a/0x577a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97927168 unmapped: 23814144 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 93 heartbeat osd_stat(store_statfs(0x1b74b3000/0x0/0x1bfc00000, data 0x56f104a/0x577a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:33.219739+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97927168 unmapped: 23814144 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:34.223551+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97927168 unmapped: 23814144 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:35.224664+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 93 heartbeat osd_stat(store_statfs(0x1b74b3000/0x0/0x1bfc00000, data 0x56f104a/0x577a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97927168 unmapped: 23814144 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:36.224942+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97927168 unmapped: 23814144 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:37.226599+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1284300 data_alloc: 285212672 data_used: 14397440
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97927168 unmapped: 23814144 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:38.226758+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97927168 unmapped: 23814144 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:39.228450+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97927168 unmapped: 23814144 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:40.228775+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 93 heartbeat osd_stat(store_statfs(0x1b74b3000/0x0/0x1bfc00000, data 0x56f104a/0x577a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97927168 unmapped: 23814144 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:41.229483+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 93 heartbeat osd_stat(store_statfs(0x1b74b3000/0x0/0x1bfc00000, data 0x56f104a/0x577a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97927168 unmapped: 23814144 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:42.229712+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1284300 data_alloc: 285212672 data_used: 14397440
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97927168 unmapped: 23814144 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:43.230351+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97927168 unmapped: 23814144 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:44.230705+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97927168 unmapped: 23814144 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:45.231339+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 93 heartbeat osd_stat(store_statfs(0x1b74b3000/0x0/0x1bfc00000, data 0x56f104a/0x577a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97927168 unmapped: 23814144 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:46.232218+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97927168 unmapped: 23814144 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:47.232803+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1284300 data_alloc: 285212672 data_used: 14397440
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 93 heartbeat osd_stat(store_statfs(0x1b74b3000/0x0/0x1bfc00000, data 0x56f104a/0x577a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97927168 unmapped: 23814144 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:48.233519+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97927168 unmapped: 23814144 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:49.234004+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97927168 unmapped: 23814144 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:50.234401+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97927168 unmapped: 23814144 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:51.234648+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 93 heartbeat osd_stat(store_statfs(0x1b74b3000/0x0/0x1bfc00000, data 0x56f104a/0x577a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97927168 unmapped: 23814144 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:52.235024+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1284300 data_alloc: 285212672 data_used: 14397440
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97927168 unmapped: 23814144 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:53.235278+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97927168 unmapped: 23814144 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:54.235748+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97927168 unmapped: 23814144 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:55.235973+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97927168 unmapped: 23814144 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:56.236237+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 93 heartbeat osd_stat(store_statfs(0x1b74b3000/0x0/0x1bfc00000, data 0x56f104a/0x577a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97927168 unmapped: 23814144 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:57.236472+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1284300 data_alloc: 285212672 data_used: 14397440
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97927168 unmapped: 23814144 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:58.236655+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97927168 unmapped: 23814144 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:59.237165+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97927168 unmapped: 23814144 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:00.237540+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 93 heartbeat osd_stat(store_statfs(0x1b74b3000/0x0/0x1bfc00000, data 0x56f104a/0x577a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97927168 unmapped: 23814144 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:01.237906+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97927168 unmapped: 23814144 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:02.238252+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1284300 data_alloc: 285212672 data_used: 14397440
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97927168 unmapped: 23814144 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:03.238524+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97927168 unmapped: 23814144 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:04.238868+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97935360 unmapped: 23805952 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:05.239221+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 93 heartbeat osd_stat(store_statfs(0x1b74b3000/0x0/0x1bfc00000, data 0x56f104a/0x577a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97935360 unmapped: 23805952 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:06.239514+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97935360 unmapped: 23805952 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:07.243048+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1284300 data_alloc: 285212672 data_used: 14397440
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:08.246147+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97935360 unmapped: 23805952 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:09.248961+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97935360 unmapped: 23805952 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:10.249870+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97935360 unmapped: 23805952 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 93 heartbeat osd_stat(store_statfs(0x1b74b3000/0x0/0x1bfc00000, data 0x56f104a/0x577a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:11.250284+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97935360 unmapped: 23805952 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:12.250986+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97935360 unmapped: 23805952 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1284300 data_alloc: 285212672 data_used: 14397440
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 93 heartbeat osd_stat(store_statfs(0x1b74b3000/0x0/0x1bfc00000, data 0x56f104a/0x577a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:13.251590+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97935360 unmapped: 23805952 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 93 heartbeat osd_stat(store_statfs(0x1b74b3000/0x0/0x1bfc00000, data 0x56f104a/0x577a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:14.252086+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97935360 unmapped: 23805952 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:15.252549+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97935360 unmapped: 23805952 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:16.252732+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97935360 unmapped: 23805952 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:17.252948+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97935360 unmapped: 23805952 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1284300 data_alloc: 285212672 data_used: 14397440
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:18.253946+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97935360 unmapped: 23805952 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:19.254668+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97935360 unmapped: 23805952 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 93 heartbeat osd_stat(store_statfs(0x1b74b3000/0x0/0x1bfc00000, data 0x56f104a/0x577a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:20.254971+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97935360 unmapped: 23805952 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:21.255523+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97935360 unmapped: 23805952 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:22.256137+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97935360 unmapped: 23805952 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1284300 data_alloc: 285212672 data_used: 14397440
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:23.256562+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97935360 unmapped: 23805952 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:24.256729+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97935360 unmapped: 23805952 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 93 heartbeat osd_stat(store_statfs(0x1b74b3000/0x0/0x1bfc00000, data 0x56f104a/0x577a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:25.257265+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97935360 unmapped: 23805952 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 93 heartbeat osd_stat(store_statfs(0x1b74b3000/0x0/0x1bfc00000, data 0x56f104a/0x577a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:26.257829+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97935360 unmapped: 23805952 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:27.258344+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 93 heartbeat osd_stat(store_statfs(0x1b74b3000/0x0/0x1bfc00000, data 0x56f104a/0x577a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97935360 unmapped: 23805952 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1284300 data_alloc: 285212672 data_used: 14397440
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:28.258632+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 97935360 unmapped: 23805952 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6003c400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 93 ms_handle_reset con 0x558a6003c400 session 0x558a641603c0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:29.258858+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 105308160 unmapped: 16433152 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 93 ms_handle_reset con 0x558a5f807800 session 0x558a61f72d20
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 61.849128723s of 62.087238312s, submitted: 17
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6003d000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:30.259056+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 111788032 unmapped: 9953280 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 93 ms_handle_reset con 0x558a6003d000 session 0x558a61f734a0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60652800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 93 ms_handle_reset con 0x558a60652800 session 0x558a61f73a40
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:31.259321+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 108101632 unmapped: 13639680 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a63f2f800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 93 ms_handle_reset con 0x558a63f2f800 session 0x558a621272c0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 93 heartbeat osd_stat(store_statfs(0x1b6bde000/0x0/0x1bfc00000, data 0x5fc60ac/0x6050000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:32.259808+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 108240896 unmapped: 13500416 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6003c800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 93 ms_handle_reset con 0x558a6003c800 session 0x558a602f8b40
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1388220 data_alloc: 301989888 data_used: 22790144
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 93 ms_handle_reset con 0x558a5f807800 session 0x558a60c943c0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6003c800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:33.260062+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 107601920 unmapped: 14139392 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6003d000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:34.260160+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 107872256 unmapped: 13869056 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:35.260337+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 107872256 unmapped: 13869056 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 93 heartbeat osd_stat(store_statfs(0x1b6bb9000/0x0/0x1bfc00000, data 0x5fea0bc/0x6075000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:36.260657+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 107872256 unmapped: 13869056 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 93 heartbeat osd_stat(store_statfs(0x1b6bb9000/0x0/0x1bfc00000, data 0x5fea0bc/0x6075000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:37.260815+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 107872256 unmapped: 13869056 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1392700 data_alloc: 301989888 data_used: 23355392
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:38.260965+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 107872256 unmapped: 13869056 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:39.261209+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 107872256 unmapped: 13869056 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 93 heartbeat osd_stat(store_statfs(0x1b6bb9000/0x0/0x1bfc00000, data 0x5fea0bc/0x6075000, compress 0x0/0x0/0x0, omap 0x649, meta 0x2fcf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:40.261417+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 107872256 unmapped: 13869056 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:41.261619+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 107872256 unmapped: 13869056 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:42.261765+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 107872256 unmapped: 13869056 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1392700 data_alloc: 301989888 data_used: 23355392
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 12.675814629s of 12.876140594s, submitted: 45
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:43.261930+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 116170752 unmapped: 5570560 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 93 heartbeat osd_stat(store_statfs(0x1b4ec6000/0x0/0x1bfc00000, data 0x6b3d0bc/0x6bc8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:44.262144+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60652800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 113057792 unmapped: 8683520 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:45.262391+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 114663424 unmapped: 7077888 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 93 handle_osd_map epochs [94,94], i have 93, src has [1,94]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 94 handle_osd_map epochs [94,94], i have 94, src has [1,94]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 94 ms_handle_reset con 0x558a60652800 session 0x558a6211f2c0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:46.262723+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 114515968 unmapped: 7225344 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 94 heartbeat osd_stat(store_statfs(0x1b4de6000/0x0/0x1bfc00000, data 0x6c1646a/0x6ca5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:47.262976+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 94 ms_handle_reset con 0x558a6003d000 session 0x558a627b9a40
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 94 ms_handle_reset con 0x558a6003c800 session 0x558a627b81e0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 114524160 unmapped: 7217152 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1500535 data_alloc: 301989888 data_used: 23560192
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:48.263147+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 114524160 unmapped: 7217152 heap: 121741312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 94 handle_osd_map epochs [95,95], i have 94, src has [1,95]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a63f2f800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 95 heartbeat osd_stat(store_statfs(0x1b4de6000/0x0/0x1bfc00000, data 0x6c1646a/0x6ca5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 95 ms_handle_reset con 0x558a63f2f800 session 0x558a611c5c20
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:49.263503+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 128966656 unmapped: 7364608 heap: 136331264 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 95 ms_handle_reset con 0x558a5f807800 session 0x558a6249a5a0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6003c800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6003d000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:50.263670+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 121552896 unmapped: 22036480 heap: 143589376 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 95 handle_osd_map epochs [95,96], i have 95, src has [1,96]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.19] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 96 handle_osd_map epochs [96,96], i have 96, src has [1,96]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.15] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 96 ms_handle_reset con 0x558a6003c800 session 0x558a6247c5a0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.13] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60652800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 96 handle_osd_map epochs [96,96], i have 96, src has [1,96]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 96 handle_osd_map epochs [96,96], i have 96, src has [1,96]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 96 ms_handle_reset con 0x558a6003d000 session 0x558a5fff2f00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:51.263799+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 121659392 unmapped: 21929984 heap: 143589376 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 96 handle_osd_map epochs [96,97], i have 96, src has [1,97]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 97 handle_osd_map epochs [97,97], i have 97, src has [1,97]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 97 ms_handle_reset con 0x558a60652800 session 0x558a627b8780
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a63f2f800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 97 ms_handle_reset con 0x558a63f2f800 session 0x558a61900d20
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 97 ms_handle_reset con 0x558a5f807800 session 0x558a6249a000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:52.263928+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 97 heartbeat osd_stat(store_statfs(0x1b3252000/0x0/0x1bfc00000, data 0x87a8cfe/0x883a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6003c800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 97 ms_handle_reset con 0x558a6003c800 session 0x558a621232c0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 122732544 unmapped: 20856832 heap: 143589376 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6003d000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 97 ms_handle_reset con 0x558a6003d000 session 0x558a62123a40
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60652800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 97 ms_handle_reset con 0x558a60652800 session 0x558a62122960
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6003c000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 97 ms_handle_reset con 0x558a6003c000 session 0x558a621234a0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1746305 data_alloc: 301989888 data_used: 26226688
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 97 heartbeat osd_stat(store_statfs(0x1b324c000/0x0/0x1bfc00000, data 0x87ab100/0x883f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 8.852327347s of 10.003829956s, submitted: 295
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:53.264083+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6003c800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 97 ms_handle_reset con 0x558a6003c800 session 0x558a620ffe00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 97 ms_handle_reset con 0x558a5f807800 session 0x558a641610e0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6003d000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 120782848 unmapped: 22806528 heap: 143589376 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 97 ms_handle_reset con 0x558a6003d000 session 0x558a63d7dc20
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:54.264228+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60652800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 97 ms_handle_reset con 0x558a60652800 session 0x558a63d7de00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a62556800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 120799232 unmapped: 22790144 heap: 143589376 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a62556400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 97 ms_handle_reset con 0x558a62556400 session 0x558a611f52c0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 97 ms_handle_reset con 0x558a62556800 session 0x558a621232c0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6003c800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 97 ms_handle_reset con 0x558a6003c800 session 0x558a61900d20
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6003d000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 97 ms_handle_reset con 0x558a6003d000 session 0x558a627b8780
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60652800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 97 ms_handle_reset con 0x558a60652800 session 0x558a611c5c20
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:55.264383+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a62557400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60c45c00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 97 ms_handle_reset con 0x558a60c45c00 session 0x558a6211e780
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 116219904 unmapped: 27369472 heap: 143589376 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 97 ms_handle_reset con 0x558a5f807800 session 0x558a612052c0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _renew_subs
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 97 handle_osd_map epochs [98,98], i have 97, src has [1,98]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 98 ms_handle_reset con 0x558a62557400 session 0x558a622050e0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 98 handle_osd_map epochs [98,98], i have 98, src has [1,98]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:56.264551+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 98 handle_osd_map epochs [98,98], i have 98, src has [1,98]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6003d000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6003c800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 126042112 unmapped: 17547264 heap: 143589376 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 98 ms_handle_reset con 0x558a6003c800 session 0x558a62211a40
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60652800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 98 ms_handle_reset con 0x558a60652800 session 0x558a62127680
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 98 ms_handle_reset con 0x558a6003d000 session 0x558a610a72c0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:57.264747+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 98 handle_osd_map epochs [98,99], i have 98, src has [1,99]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 116195328 unmapped: 27394048 heap: 143589376 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.13] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.19] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.15] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1758218 data_alloc: 301989888 data_used: 22847488
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 99 ms_handle_reset con 0x558a5f807800 session 0x558a64160780
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6003c800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 99 ms_handle_reset con 0x558a6003c800 session 0x558a63dda000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 99 heartbeat osd_stat(store_statfs(0x1b3158000/0x0/0x1bfc00000, data 0x889bca5/0x8936000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60652800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 99 ms_handle_reset con 0x558a60652800 session 0x558a63dda5a0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a62557400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 99 ms_handle_reset con 0x558a62557400 session 0x558a63ddab40
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:58.264989+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a62556800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 99 ms_handle_reset con 0x558a62556800 session 0x558a63ddad20
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6003c800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 99 ms_handle_reset con 0x558a5f807800 session 0x558a63ddaf00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 116293632 unmapped: 27295744 heap: 143589376 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60652800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 99 ms_handle_reset con 0x558a60652800 session 0x558a63ddb0e0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a62557400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 99 ms_handle_reset con 0x558a62557400 session 0x558a63ddb2c0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60c44000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 99 ms_handle_reset con 0x558a60c44000 session 0x558a63ddb4a0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60c44800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 99 handle_osd_map epochs [99,100], i have 99, src has [1,100]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 100 handle_osd_map epochs [100,100], i have 100, src has [1,100]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 100 handle_osd_map epochs [100,100], i have 100, src has [1,100]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60c44c00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 100 ms_handle_reset con 0x558a60c44c00 session 0x558a5fff5860
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:59.265198+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 128270336 unmapped: 15319040 heap: 143589376 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 100 ms_handle_reset con 0x558a60c44800 session 0x558a63ddb680
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 100 handle_osd_map epochs [100,100], i have 100, src has [1,100]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 100 ms_handle_reset con 0x558a5f807800 session 0x558a62204f00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 100 ms_handle_reset con 0x558a6003c800 session 0x558a6249a3c0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60652800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60c44000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 100 ms_handle_reset con 0x558a60c44000 session 0x558a63dda960
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a62557400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 100 ms_handle_reset con 0x558a62557400 session 0x558a65a88000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a62557400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:00.265354+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 100 ms_handle_reset con 0x558a62557400 session 0x558a65a883c0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 128516096 unmapped: 15073280 heap: 143589376 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6003c800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 100 ms_handle_reset con 0x558a6003c800 session 0x558a63f12d20
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _renew_subs
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 100 handle_osd_map epochs [101,101], i have 100, src has [1,101]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60c44000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 101 ms_handle_reset con 0x558a60c44000 session 0x558a63ddb860
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 101 handle_osd_map epochs [101,101], i have 101, src has [1,101]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60c44800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:01.265530+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 101 ms_handle_reset con 0x558a60c44800 session 0x558a61901860
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 101 handle_osd_map epochs [101,101], i have 101, src has [1,101]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 101 ms_handle_reset con 0x558a60652800 session 0x558a63ddba40
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 128450560 unmapped: 15138816 heap: 143589376 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6003c800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 101 heartbeat osd_stat(store_statfs(0x1b2064000/0x0/0x1bfc00000, data 0x998afd2/0x9a28000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60c44000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60c44800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:02.265682+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 101 handle_osd_map epochs [101,102], i have 101, src has [1,102]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 129064960 unmapped: 14524416 heap: 143589376 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1940472 data_alloc: 301989888 data_used: 30965760
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 102 handle_osd_map epochs [102,102], i have 102, src has [1,102]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 102 handle_osd_map epochs [102,102], i have 102, src has [1,102]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 102 handle_osd_map epochs [102,102], i have 102, src has [1,102]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 8.779612541s of 10.065307617s, submitted: 340
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 102 handle_osd_map epochs [102,102], i have 102, src has [1,102]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 102 ms_handle_reset con 0x558a60c44000 session 0x558a65d14000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:03.265853+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 119021568 unmapped: 24567808 heap: 143589376 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a62557400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:04.266104+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 128696320 unmapped: 14893056 heap: 143589376 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:05.266308+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 128696320 unmapped: 14893056 heap: 143589376 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:06.266604+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 128696320 unmapped: 14893056 heap: 143589376 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 102 heartbeat osd_stat(store_statfs(0x1b3bf2000/0x0/0x1bfc00000, data 0x7dff354/0x7e9c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:07.266803+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 102 heartbeat osd_stat(store_statfs(0x1b3bf2000/0x0/0x1bfc00000, data 0x7dff354/0x7e9c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 102 handle_osd_map epochs [102,103], i have 102, src has [1,103]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 128811008 unmapped: 14778368 heap: 143589376 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.038835
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1728053248 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1798465 data_alloc: 301989888 data_used: 34361344
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:08.266988+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 128811008 unmapped: 14778368 heap: 143589376 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:09.267228+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 128851968 unmapped: 14737408 heap: 143589376 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:10.267393+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 128851968 unmapped: 14737408 heap: 143589376 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 103 heartbeat osd_stat(store_statfs(0x1b3bed000/0x0/0x1bfc00000, data 0x7e015a2/0x7ea0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:11.267617+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60c45000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 128974848 unmapped: 14614528 heap: 143589376 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:12.267807+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 136085504 unmapped: 7503872 heap: 143589376 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.038835
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1728053248 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1877707 data_alloc: 301989888 data_used: 34623488
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 103 ms_handle_reset con 0x558a5f807800 session 0x558a65a885a0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.629053116s of 10.004798889s, submitted: 153
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:13.268468+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 103 ms_handle_reset con 0x558a62557400 session 0x558a65d143c0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a63049000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 133128192 unmapped: 10461184 heap: 143589376 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 103 heartbeat osd_stat(store_statfs(0x1b330d000/0x0/0x1bfc00000, data 0x86e25a2/0x8781000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [0,0,0,0,1])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:14.269222+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 135880704 unmapped: 7708672 heap: 143589376 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:15.269446+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 133963776 unmapped: 9625600 heap: 143589376 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:16.269619+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 136306688 unmapped: 7282688 heap: 143589376 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:17.269953+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 136314880 unmapped: 7274496 heap: 143589376 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 103 ms_handle_reset con 0x558a60c44800 session 0x558a63ddbe00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 103 ms_handle_reset con 0x558a6003c800 session 0x558a63ddbc20
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.038835
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1728053248 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2002119 data_alloc: 301989888 data_used: 34648064
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:18.270423+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 103 heartbeat osd_stat(store_statfs(0x1b2487000/0x0/0x1bfc00000, data 0x95685a2/0x9607000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 136314880 unmapped: 7274496 heap: 143589376 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:19.270646+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 136314880 unmapped: 7274496 heap: 143589376 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 103 ms_handle_reset con 0x558a60c45000 session 0x558a65a89a40
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 103 handle_osd_map epochs [104,104], i have 103, src has [1,104]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6003c800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 104 handle_osd_map epochs [104,104], i have 104, src has [1,104]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 104 handle_osd_map epochs [104,104], i have 104, src has [1,104]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 104 ms_handle_reset con 0x558a5f807800 session 0x558a65a89680
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60c44000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:20.270825+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 149053440 unmapped: 9207808 heap: 158261248 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 104 ms_handle_reset con 0x558a63049000 session 0x558a65d15e00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 104 ms_handle_reset con 0x558a60c44000 session 0x558a622050e0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60c44800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 104 ms_handle_reset con 0x558a6003c800 session 0x558a627b9680
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:21.271133+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6003c800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 104 heartbeat osd_stat(store_statfs(0x1b2481000/0x0/0x1bfc00000, data 0x8c2d96c/0x8ccf000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,1,2,3,5] op hist [0,0,1])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 132857856 unmapped: 25403392 heap: 158261248 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 104 handle_osd_map epochs [105,105], i have 104, src has [1,105]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.19] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 105 ms_handle_reset con 0x558a60c44800 session 0x558a63f4b2c0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.15] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.13] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 105 ms_handle_reset con 0x558a6003c800 session 0x558a65a883c0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:22.271300+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 132333568 unmapped: 25927680 heap: 158261248 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1789519 data_alloc: 301989888 data_used: 24158208
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _renew_subs
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 105 handle_osd_map epochs [106,106], i have 105, src has [1,106]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 106 ms_handle_reset con 0x558a5f807800 session 0x558a66e49e00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60c44000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:23.271475+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 106 ms_handle_reset con 0x558a60c44000 session 0x558a610a72c0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60c45000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 106 ms_handle_reset con 0x558a60c45000 session 0x558a62417e00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 106 ms_handle_reset con 0x558a5f807800 session 0x558a624172c0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 132374528 unmapped: 25886720 heap: 158261248 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:24.271777+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6003c800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 106 ms_handle_reset con 0x558a6003c800 session 0x558a63f4a1e0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60c44000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 106 ms_handle_reset con 0x558a60c44000 session 0x558a63f4a000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60c44800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 106 ms_handle_reset con 0x558a60c44800 session 0x558a63f4ba40
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 132374528 unmapped: 25886720 heap: 158261248 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a63049000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 106 ms_handle_reset con 0x558a63049000 session 0x558a63f4af00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 10.243511200s of 11.742984772s, submitted: 392
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60c44000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 106 handle_osd_map epochs [106,106], i have 106, src has [1,106]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 106 handle_osd_map epochs [105,106], i have 106, src has [1,106]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 106 ms_handle_reset con 0x558a5f807800 session 0x558a63d7c1e0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:25.271945+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 129449984 unmapped: 28811264 heap: 158261248 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 106 ms_handle_reset con 0x558a60c44000 session 0x558a63ddad20
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 106 heartbeat osd_stat(store_statfs(0x1b3684000/0x0/0x1bfc00000, data 0x786b0fd/0x7910000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:26.272163+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 129449984 unmapped: 28811264 heap: 158261248 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:27.272331+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 106 handle_osd_map epochs [106,107], i have 106, src has [1,107]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.15] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.19] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 130539520 unmapped: 27721728 heap: 158261248 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.13] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[2.f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1725201 data_alloc: 301989888 data_used: 20738048
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:28.272675+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 107 heartbeat osd_stat(store_statfs(0x1b3685000/0x0/0x1bfc00000, data 0x786b0ca/0x790e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 129990656 unmapped: 28270592 heap: 158261248 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60c44800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 107 ms_handle_reset con 0x558a60c44800 session 0x558a61f73860
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a63049000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a62557400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 107 ms_handle_reset con 0x558a62557400 session 0x558a63dda960
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 107 ms_handle_reset con 0x558a63049000 session 0x558a61f725a0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:29.272897+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 107 ms_handle_reset con 0x558a5f807800 session 0x558a6211fe00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60c44000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 107 ms_handle_reset con 0x558a60c44000 session 0x558a622103c0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60c44800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 107 ms_handle_reset con 0x558a60c44800 session 0x558a6247cb40
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a62557400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 107 ms_handle_reset con 0x558a62557400 session 0x558a63f13e00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6466d800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 131235840 unmapped: 27025408 heap: 158261248 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:30.273153+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 151789568 unmapped: 25247744 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 107 ms_handle_reset con 0x558a6466d800 session 0x558a611c5c20
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:31.273299+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60c44000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 107 ms_handle_reset con 0x558a60c44000 session 0x558a65d14960
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138067968 unmapped: 38969344 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 107 heartbeat osd_stat(store_statfs(0x1b1bd8000/0x0/0x1bfc00000, data 0x9a1037a/0x9ab6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60c44800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 107 ms_handle_reset con 0x558a60c44800 session 0x558a610a6f00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:32.273542+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a62557400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 107 ms_handle_reset con 0x558a62557400 session 0x558a610a72c0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 107 heartbeat osd_stat(store_statfs(0x1b1bd8000/0x0/0x1bfc00000, data 0x9a1037a/0x9ab6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6466c000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 107 ms_handle_reset con 0x558a6466c000 session 0x558a66e494a0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138395648 unmapped: 38641664 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2028418 data_alloc: 301989888 data_used: 24141824
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a627b7c00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a63048400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:33.273709+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 107 heartbeat osd_stat(store_statfs(0x1b1bb2000/0x0/0x1bfc00000, data 0x9a343ad/0x9adc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _renew_subs
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 107 handle_osd_map epochs [108,108], i have 107, src has [1,108]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138428416 unmapped: 38608896 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 108 handle_osd_map epochs [108,108], i have 108, src has [1,108]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:34.273832+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138436608 unmapped: 38600704 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 108 handle_osd_map epochs [108,108], i have 108, src has [1,108]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 8.420950890s of 10.177603722s, submitted: 227
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:35.273978+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 108 ms_handle_reset con 0x558a63048400 session 0x558a6249b4a0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 131751936 unmapped: 45285376 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:36.274229+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a63048400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 131751936 unmapped: 45285376 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:37.274431+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _renew_subs
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 108 handle_osd_map epochs [109,109], i have 108, src has [1,109]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 109 heartbeat osd_stat(store_statfs(0x1b3203000/0x0/0x1bfc00000, data 0x83dd707/0x8486000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 131760128 unmapped: 45277184 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1869026 data_alloc: 301989888 data_used: 22675456
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:38.274650+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 109 ms_handle_reset con 0x558a5f807800 session 0x558a5fff5860
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 109 ms_handle_reset con 0x558a627b7c00 session 0x558a66e490e0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 131760128 unmapped: 45277184 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60c44000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:39.274871+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 109 ms_handle_reset con 0x558a60c44000 session 0x558a66e49e00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 131309568 unmapped: 45727744 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 109 heartbeat osd_stat(store_statfs(0x1b50c0000/0x0/0x1bfc00000, data 0x62188c0/0x62c0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:40.275050+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 131309568 unmapped: 45727744 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:41.275245+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 131309568 unmapped: 45727744 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:42.275423+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 131309568 unmapped: 45727744 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1600115 data_alloc: 301989888 data_used: 21286912
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:43.275571+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 131309568 unmapped: 45727744 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:44.276339+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 109 heartbeat osd_stat(store_statfs(0x1b50c0000/0x0/0x1bfc00000, data 0x62188c0/0x62c0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 131309568 unmapped: 45727744 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:45.276636+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 131309568 unmapped: 45727744 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 10.592528343s of 10.808332443s, submitted: 95
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:46.276964+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 133619712 unmapped: 43417600 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:47.277195+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 133627904 unmapped: 43409408 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1709441 data_alloc: 301989888 data_used: 21286912
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:48.277399+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 109 heartbeat osd_stat(store_statfs(0x1b4576000/0x0/0x1bfc00000, data 0x70708c0/0x7118000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 133677056 unmapped: 43360256 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:49.277631+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 133677056 unmapped: 43360256 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:50.277807+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 133677056 unmapped: 43360256 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:51.277964+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 133677056 unmapped: 43360256 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:52.278104+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 133677056 unmapped: 43360256 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 109 heartbeat osd_stat(store_statfs(0x1b4576000/0x0/0x1bfc00000, data 0x70708c0/0x7118000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1713185 data_alloc: 301989888 data_used: 21286912
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:53.278271+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 133677056 unmapped: 43360256 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 109 heartbeat osd_stat(store_statfs(0x1b4576000/0x0/0x1bfc00000, data 0x70708c0/0x7118000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:54.278426+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 133677056 unmapped: 43360256 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:55.278610+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 133677056 unmapped: 43360256 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:56.278799+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 133677056 unmapped: 43360256 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 10.554446220s of 11.115598679s, submitted: 179
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 109 ms_handle_reset con 0x558a63048400 session 0x558a61f22f00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60c44800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:57.278947+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 133758976 unmapped: 43278336 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 109 ms_handle_reset con 0x558a60c44800 session 0x558a61f73a40
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1512286 data_alloc: 301989888 data_used: 20762624
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:58.279107+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 133758976 unmapped: 43278336 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 109 heartbeat osd_stat(store_statfs(0x1b5ed4000/0x0/0x1bfc00000, data 0x571484e/0x57ba000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:59.279362+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 133758976 unmapped: 43278336 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:00.279633+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 133758976 unmapped: 43278336 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:01.279855+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 133758976 unmapped: 43278336 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:02.280036+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 133758976 unmapped: 43278336 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1512286 data_alloc: 301989888 data_used: 20762624
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:03.280290+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 133758976 unmapped: 43278336 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:04.280607+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 133758976 unmapped: 43278336 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 109 heartbeat osd_stat(store_statfs(0x1b5ed4000/0x0/0x1bfc00000, data 0x571484e/0x57ba000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:05.280789+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 109 heartbeat osd_stat(store_statfs(0x1b5ed4000/0x0/0x1bfc00000, data 0x571484e/0x57ba000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 133758976 unmapped: 43278336 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:06.281215+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 109 heartbeat osd_stat(store_statfs(0x1b5ed4000/0x0/0x1bfc00000, data 0x571484e/0x57ba000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 133758976 unmapped: 43278336 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:07.281698+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 133758976 unmapped: 43278336 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1512286 data_alloc: 301989888 data_used: 20762624
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:08.282243+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 133758976 unmapped: 43278336 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:09.282671+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 133758976 unmapped: 43278336 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:10.282919+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 133758976 unmapped: 43278336 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:11.283109+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 133758976 unmapped: 43278336 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:12.283413+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 109 heartbeat osd_stat(store_statfs(0x1b5ed4000/0x0/0x1bfc00000, data 0x571484e/0x57ba000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 133758976 unmapped: 43278336 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1512286 data_alloc: 301989888 data_used: 20762624
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:13.283603+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 133758976 unmapped: 43278336 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:14.283812+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 17.408576965s of 17.593820572s, submitted: 55
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 133783552 unmapped: 43253760 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 109 handle_osd_map epochs [109,110], i have 109, src has [1,110]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: get_auth_request con 0x558a60d83c00 auth_method 0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 110 handle_osd_map epochs [110,110], i have 110, src has [1,110]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:15.283971+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 110 ms_handle_reset con 0x558a5f807800 session 0x558a62126960
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 133808128 unmapped: 43229184 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60c44000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:16.284137+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 133816320 unmapped: 43220992 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 110 handle_osd_map epochs [110,111], i have 110, src has [1,111]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 110 handle_osd_map epochs [111,111], i have 111, src has [1,111]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:17.284295+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 111 handle_osd_map epochs [111,111], i have 111, src has [1,111]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 111 handle_osd_map epochs [111,111], i have 111, src has [1,111]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 111 ms_handle_reset con 0x558a60c44000 session 0x558a63f125a0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 111 heartbeat osd_stat(store_statfs(0x1b5ecb000/0x0/0x1bfc00000, data 0x5718f72/0x57c2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 133857280 unmapped: 43180032 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1523489 data_alloc: 301989888 data_used: 20787200
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:18.284629+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 111 heartbeat osd_stat(store_statfs(0x1b5ecb000/0x0/0x1bfc00000, data 0x5718f72/0x57c2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 133873664 unmapped: 43163648 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:19.285111+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 133865472 unmapped: 43171840 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:20.285646+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 133865472 unmapped: 43171840 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:21.286016+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 133865472 unmapped: 43171840 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:22.286219+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 111 heartbeat osd_stat(store_statfs(0x1b5ecb000/0x0/0x1bfc00000, data 0x5718f72/0x57c2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 111 handle_osd_map epochs [111,112], i have 111, src has [1,112]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 133963776 unmapped: 43073536 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1526491 data_alloc: 301989888 data_used: 20787200
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:23.286514+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 133963776 unmapped: 43073536 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 112 heartbeat osd_stat(store_statfs(0x1b5ec7000/0x0/0x1bfc00000, data 0x571b1c0/0x57c6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:24.286935+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 133980160 unmapped: 43057152 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:25.287124+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 112 heartbeat osd_stat(store_statfs(0x1b5ec7000/0x0/0x1bfc00000, data 0x571b1c0/0x57c6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134062080 unmapped: 42975232 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:26.287277+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 112 heartbeat osd_stat(store_statfs(0x1b5ec7000/0x0/0x1bfc00000, data 0x571b1c0/0x57c6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134086656 unmapped: 42950656 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:27.288939+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134086656 unmapped: 42950656 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1526491 data_alloc: 301989888 data_used: 20787200
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:28.289103+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134086656 unmapped: 42950656 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:29.290072+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134086656 unmapped: 42950656 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:30.290315+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 112 heartbeat osd_stat(store_statfs(0x1b5ec7000/0x0/0x1bfc00000, data 0x571b1c0/0x57c6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134086656 unmapped: 42950656 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:31.290578+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134086656 unmapped: 42950656 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:32.290836+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134086656 unmapped: 42950656 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1526491 data_alloc: 301989888 data_used: 20787200
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:33.291069+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134086656 unmapped: 42950656 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:34.291270+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134094848 unmapped: 42942464 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:35.291557+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 112 heartbeat osd_stat(store_statfs(0x1b5ec7000/0x0/0x1bfc00000, data 0x571b1c0/0x57c6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134094848 unmapped: 42942464 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:36.291717+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 112 heartbeat osd_stat(store_statfs(0x1b5ec7000/0x0/0x1bfc00000, data 0x571b1c0/0x57c6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134094848 unmapped: 42942464 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:37.291946+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134094848 unmapped: 42942464 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1526491 data_alloc: 301989888 data_used: 20787200
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:38.292173+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 112 heartbeat osd_stat(store_statfs(0x1b5ec7000/0x0/0x1bfc00000, data 0x571b1c0/0x57c6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134094848 unmapped: 42942464 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:39.292440+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 112 heartbeat osd_stat(store_statfs(0x1b5ec7000/0x0/0x1bfc00000, data 0x571b1c0/0x57c6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134103040 unmapped: 42934272 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:40.293322+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134103040 unmapped: 42934272 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:41.293497+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134103040 unmapped: 42934272 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:42.293731+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134103040 unmapped: 42934272 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1526491 data_alloc: 301989888 data_used: 20787200
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:43.293945+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134111232 unmapped: 42926080 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:44.294228+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134111232 unmapped: 42926080 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:45.294493+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 112 heartbeat osd_stat(store_statfs(0x1b5ec7000/0x0/0x1bfc00000, data 0x571b1c0/0x57c6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134111232 unmapped: 42926080 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:46.294693+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134111232 unmapped: 42926080 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:47.294974+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 112 heartbeat osd_stat(store_statfs(0x1b5ec7000/0x0/0x1bfc00000, data 0x571b1c0/0x57c6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134111232 unmapped: 42926080 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1526491 data_alloc: 301989888 data_used: 20787200
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:48.295243+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134111232 unmapped: 42926080 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:49.295517+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 112 heartbeat osd_stat(store_statfs(0x1b5ec7000/0x0/0x1bfc00000, data 0x571b1c0/0x57c6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134094848 unmapped: 42942464 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:50.295731+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134103040 unmapped: 42934272 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:51.295958+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134103040 unmapped: 42934272 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:52.296128+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 112 heartbeat osd_stat(store_statfs(0x1b5ec7000/0x0/0x1bfc00000, data 0x571b1c0/0x57c6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134103040 unmapped: 42934272 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1526491 data_alloc: 301989888 data_used: 20787200
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:53.296279+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134103040 unmapped: 42934272 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:54.296496+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134103040 unmapped: 42934272 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:55.296711+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 112 heartbeat osd_stat(store_statfs(0x1b5ec7000/0x0/0x1bfc00000, data 0x571b1c0/0x57c6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134103040 unmapped: 42934272 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:56.296932+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134103040 unmapped: 42934272 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:57.297154+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134103040 unmapped: 42934272 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1526651 data_alloc: 301989888 data_used: 20791296
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:58.297428+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134111232 unmapped: 42926080 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:59.297690+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134111232 unmapped: 42926080 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 112 heartbeat osd_stat(store_statfs(0x1b5ec7000/0x0/0x1bfc00000, data 0x571b1c0/0x57c6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:00.297921+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134111232 unmapped: 42926080 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:01.298156+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134111232 unmapped: 42926080 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:02.298365+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134111232 unmapped: 42926080 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1526651 data_alloc: 301989888 data_used: 20791296
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:03.298560+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134111232 unmapped: 42926080 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:04.298749+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134111232 unmapped: 42926080 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:05.299010+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 112 heartbeat osd_stat(store_statfs(0x1b5ec7000/0x0/0x1bfc00000, data 0x571b1c0/0x57c6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134111232 unmapped: 42926080 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:06.299217+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 112 heartbeat osd_stat(store_statfs(0x1b5ec7000/0x0/0x1bfc00000, data 0x571b1c0/0x57c6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134119424 unmapped: 42917888 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:07.299393+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134119424 unmapped: 42917888 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1526651 data_alloc: 301989888 data_used: 20791296
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:08.299536+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134127616 unmapped: 42909696 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:09.299692+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134127616 unmapped: 42909696 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:10.299937+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134127616 unmapped: 42909696 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:11.300161+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 112 heartbeat osd_stat(store_statfs(0x1b5ec7000/0x0/0x1bfc00000, data 0x571b1c0/0x57c6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134127616 unmapped: 42909696 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:12.300373+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 112 heartbeat osd_stat(store_statfs(0x1b5ec7000/0x0/0x1bfc00000, data 0x571b1c0/0x57c6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134127616 unmapped: 42909696 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:13.300539+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1526651 data_alloc: 301989888 data_used: 20791296
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134127616 unmapped: 42909696 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:14.300695+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134127616 unmapped: 42909696 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:15.300863+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 112 heartbeat osd_stat(store_statfs(0x1b5ec7000/0x0/0x1bfc00000, data 0x571b1c0/0x57c6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134135808 unmapped: 42901504 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:16.301140+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a627b7c00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 62.283386230s of 62.601634979s, submitted: 90
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134135808 unmapped: 42901504 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:17.301260+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134193152 unmapped: 42844160 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 112 handle_osd_map epochs [112,113], i have 112, src has [1,113]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 113 handle_osd_map epochs [113,113], i have 113, src has [1,113]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:18.301389+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1539085 data_alloc: 301989888 data_used: 20803584
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 113 ms_handle_reset con 0x558a627b7c00 session 0x558a62130f00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 113 heartbeat osd_stat(store_statfs(0x1b5ebf000/0x0/0x1bfc00000, data 0x571d97f/0x57ce000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134184960 unmapped: 42852352 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:19.301575+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134184960 unmapped: 42852352 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a63048400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:20.301702+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 113 heartbeat osd_stat(store_statfs(0x1b5ebf000/0x0/0x1bfc00000, data 0x571d98f/0x57cf000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134201344 unmapped: 42835968 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:21.301848+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134209536 unmapped: 42827776 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:22.302072+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 113 handle_osd_map epochs [114,114], i have 113, src has [1,114]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 114 heartbeat osd_stat(store_statfs(0x1b46b9000/0x0/0x1bfc00000, data 0x6f1fcff/0x6fd4000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 142630912 unmapped: 34406400 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:23.302217+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1875518 data_alloc: 301989888 data_used: 20815872
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 114 handle_osd_map epochs [114,115], i have 114, src has [1,115]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 115 handle_osd_map epochs [115,115], i have 115, src has [1,115]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 115 handle_osd_map epochs [115,115], i have 115, src has [1,115]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 115 heartbeat osd_stat(store_statfs(0x1b2eb4000/0x0/0x1bfc00000, data 0x87220c6/0x87d8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134234112 unmapped: 42803200 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:24.302432+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134258688 unmapped: 42778624 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:25.302601+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 115 handle_osd_map epochs [115,116], i have 115, src has [1,116]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 116 handle_osd_map epochs [116,116], i have 116, src has [1,116]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 116 ms_handle_reset con 0x558a63048400 session 0x558a602f81e0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a62557400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134266880 unmapped: 42770432 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:26.302816+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 116 handle_osd_map epochs [116,117], i have 116, src has [1,117]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 117 handle_osd_map epochs [117,117], i have 117, src has [1,117]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 117 handle_osd_map epochs [117,117], i have 117, src has [1,117]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 117 handle_osd_map epochs [116,117], i have 117, src has [1,117]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 117 handle_osd_map epochs [116,117], i have 117, src has [1,117]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 117 ms_handle_reset con 0x558a62557400 session 0x558a64161860
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.373908043s of 10.079854012s, submitted: 137
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134422528 unmapped: 42614784 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:27.302978+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 117 handle_osd_map epochs [118,118], i have 117, src has [1,118]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 118 handle_osd_map epochs [118,119], i have 118, src has [1,119]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 119 handle_osd_map epochs [119,119], i have 119, src has [1,119]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 119 handle_osd_map epochs [118,119], i have 119, src has [1,119]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 119 ms_handle_reset con 0x558a5f807800 session 0x558a64160000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134537216 unmapped: 42500096 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:28.303124+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1579516 data_alloc: 301989888 data_used: 20815872
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 119 heartbeat osd_stat(store_statfs(0x1b5ea6000/0x0/0x1bfc00000, data 0x572ab95/0x57e8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134537216 unmapped: 42500096 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:29.303286+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134537216 unmapped: 42500096 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:30.303453+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134545408 unmapped: 42491904 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:31.303623+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134545408 unmapped: 42491904 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:32.303810+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134545408 unmapped: 42491904 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:33.303941+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1579669 data_alloc: 301989888 data_used: 20815872
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60c44000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134569984 unmapped: 42467328 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:34.304099+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 119 heartbeat osd_stat(store_statfs(0x1b5ea6000/0x0/0x1bfc00000, data 0x572ab95/0x57e8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [0,0,0,0,0,0,0,0,0,1])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134586368 unmapped: 42450944 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:35.304233+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134586368 unmapped: 42450944 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:36.304393+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a627b7c00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a63048400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 119 ms_handle_reset con 0x558a627b7c00 session 0x558a62127680
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 119 ms_handle_reset con 0x558a63048400 session 0x558a627b8b40
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.575805664s of 10.058761597s, submitted: 137
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134643712 unmapped: 42393600 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:37.304506+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 119 handle_osd_map epochs [119,120], i have 119, src has [1,120]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6466c000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 120 ms_handle_reset con 0x558a60c44000 session 0x558a63f4a960
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 120 ms_handle_reset con 0x558a6466c000 session 0x558a611c4780
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 120 handle_osd_map epochs [120,121], i have 120, src has [1,121]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6466c000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 121 handle_osd_map epochs [121,121], i have 121, src has [1,121]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134758400 unmapped: 42278912 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:38.304683+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1590006 data_alloc: 301989888 data_used: 20828160
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 121 ms_handle_reset con 0x558a5f807800 session 0x558a63ddaf00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 121 handle_osd_map epochs [121,122], i have 121, src has [1,122]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 122 handle_osd_map epochs [122,122], i have 122, src has [1,122]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 122 handle_osd_map epochs [122,122], i have 122, src has [1,122]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 122 ms_handle_reset con 0x558a6466c000 session 0x558a6235da40
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134946816 unmapped: 42090496 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:39.304866+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 122 heartbeat osd_stat(store_statfs(0x1b5e9a000/0x0/0x1bfc00000, data 0x573138b/0x57ef000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134946816 unmapped: 42090496 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:40.305150+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:41.305338+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134946816 unmapped: 42090496 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 122 heartbeat osd_stat(store_statfs(0x1b5e9a000/0x0/0x1bfc00000, data 0x573138b/0x57ef000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:42.305483+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134955008 unmapped: 42082304 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 122 handle_osd_map epochs [122,123], i have 122, src has [1,123]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:43.305616+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134807552 unmapped: 42229760 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1590980 data_alloc: 301989888 data_used: 20828160
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:44.305805+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134807552 unmapped: 42229760 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:45.305993+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134807552 unmapped: 42229760 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:46.306141+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134807552 unmapped: 42229760 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 123 heartbeat osd_stat(store_statfs(0x1b5e9a000/0x0/0x1bfc00000, data 0x57335d9/0x57f3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:47.306334+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134807552 unmapped: 42229760 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60c44000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.884309769s of 10.400353432s, submitted: 165
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 123 ms_handle_reset con 0x558a60c44000 session 0x558a629c5c20
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:48.306507+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134840320 unmapped: 42196992 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1592808 data_alloc: 301989888 data_used: 20828160
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:49.306697+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134848512 unmapped: 42188800 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a627b7c00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 123 ms_handle_reset con 0x558a627b7c00 session 0x558a65862000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:50.306860+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134873088 unmapped: 42164224 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:51.307053+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134873088 unmapped: 42164224 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 123 heartbeat osd_stat(store_statfs(0x1b5e9b000/0x0/0x1bfc00000, data 0x57335d9/0x57f3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:52.307219+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134873088 unmapped: 42164224 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:53.307360+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134873088 unmapped: 42164224 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1590100 data_alloc: 301989888 data_used: 20828160
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:54.307526+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134897664 unmapped: 42139648 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a63048400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 123 heartbeat osd_stat(store_statfs(0x1b5e9b000/0x0/0x1bfc00000, data 0x57335d9/0x57f3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:55.307715+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134963200 unmapped: 42074112 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 123 handle_osd_map epochs [123,124], i have 123, src has [1,124]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 124 ms_handle_reset con 0x558a63048400 session 0x558a658623c0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:56.307853+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 124 handle_osd_map epochs [124,124], i have 124, src has [1,124]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134987776 unmapped: 42049536 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 124 handle_osd_map epochs [124,125], i have 124, src has [1,125]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 124 handle_osd_map epochs [125,125], i have 125, src has [1,125]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 125 ms_handle_reset con 0x558a5f807800 session 0x558a65863c20
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60c44000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:57.308070+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 134995968 unmapped: 42041344 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 125 handle_osd_map epochs [126,126], i have 125, src has [1,126]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 10.051059723s of 10.288072586s, submitted: 58
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 126 handle_osd_map epochs [126,126], i have 126, src has [1,126]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 126 ms_handle_reset con 0x558a60c44000 session 0x558a65862000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:58.308228+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 135061504 unmapped: 41975808 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1622237 data_alloc: 301989888 data_used: 20840448
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a627b7c00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 126 handle_osd_map epochs [126,126], i have 126, src has [1,126]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 127 ms_handle_reset con 0x558a627b7c00 session 0x558a629c5c20
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:59.308446+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a63048400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 135110656 unmapped: 41926656 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 127 handle_osd_map epochs [127,128], i have 127, src has [1,128]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 128 handle_osd_map epochs [128,128], i have 128, src has [1,128]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 128 handle_osd_map epochs [128,128], i have 128, src has [1,128]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6466c000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 128 ms_handle_reset con 0x558a63048400 session 0x558a64161c20
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 128 ms_handle_reset con 0x558a6466c000 session 0x558a65863e00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:00.308627+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 135208960 unmapped: 41828352 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6466c000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 128 heartbeat osd_stat(store_statfs(0x1b5e73000/0x0/0x1bfc00000, data 0x5741846/0x5819000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 128 handle_osd_map epochs [128,129], i have 128, src has [1,129]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 129 handle_osd_map epochs [129,129], i have 129, src has [1,129]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 129 ms_handle_reset con 0x558a6466c000 session 0x558a63d7c1e0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 129 handle_osd_map epochs [129,129], i have 129, src has [1,129]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:01.308762+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 129 heartbeat osd_stat(store_statfs(0x1b5e73000/0x0/0x1bfc00000, data 0x5741846/0x5819000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 135233536 unmapped: 41803776 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 129 handle_osd_map epochs [129,130], i have 129, src has [1,130]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 130 ms_handle_reset con 0x558a5f807800 session 0x558a63d7de00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60c44000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:02.308943+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 130 handle_osd_map epochs [130,130], i have 130, src has [1,130]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 135249920 unmapped: 41787392 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 130 handle_osd_map epochs [130,130], i have 130, src has [1,130]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 131 handle_osd_map epochs [130,131], i have 131, src has [1,131]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a627b7c00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:03.309098+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 135274496 unmapped: 41762816 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 131 handle_osd_map epochs [130,131], i have 131, src has [1,131]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 131 ms_handle_reset con 0x558a60c44000 session 0x558a62416780
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1655358 data_alloc: 301989888 data_used: 20856832
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a63048400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 131 handle_osd_map epochs [131,132], i have 131, src has [1,132]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:04.309233+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 132 handle_osd_map epochs [132,132], i have 132, src has [1,132]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 135299072 unmapped: 41738240 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 132 ms_handle_reset con 0x558a63048400 session 0x558a62417e00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 132 ms_handle_reset con 0x558a627b7c00 session 0x558a6195cb40
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a627b7c00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 132 handle_osd_map epochs [132,133], i have 132, src has [1,133]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 133 handle_osd_map epochs [133,133], i have 133, src has [1,133]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:05.309392+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 133 handle_osd_map epochs [133,133], i have 133, src has [1,133]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 136454144 unmapped: 40583168 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 133 heartbeat osd_stat(store_statfs(0x1b5a62000/0x0/0x1bfc00000, data 0x574bf11/0x582a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 133 ms_handle_reset con 0x558a627b7c00 session 0x558a611f7e00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60c44000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 133 handle_osd_map epochs [133,134], i have 133, src has [1,134]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 134 handle_osd_map epochs [134,134], i have 134, src has [1,134]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 134 handle_osd_map epochs [134,134], i have 134, src has [1,134]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 134 handle_osd_map epochs [134,134], i have 134, src has [1,134]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 134 ms_handle_reset con 0x558a5f807800 session 0x558a63dda000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:06.309536+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 137543680 unmapped: 39493632 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 134 handle_osd_map epochs [134,134], i have 134, src has [1,134]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 134 ms_handle_reset con 0x558a60c44000 session 0x558a6249ab40
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a63048400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 134 handle_osd_map epochs [134,135], i have 134, src has [1,135]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 135 handle_osd_map epochs [135,135], i have 135, src has [1,135]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 135 handle_osd_map epochs [135,135], i have 135, src has [1,135]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:07.309669+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 137633792 unmapped: 39403520 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6466c000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 135 ms_handle_reset con 0x558a63048400 session 0x558a6247cd20
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 135 handle_osd_map epochs [135,136], i have 135, src has [1,136]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a67264000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 8.526808739s of 10.002306938s, submitted: 416
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 136 ms_handle_reset con 0x558a6466c000 session 0x558a65a89680
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:08.309826+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 137732096 unmapped: 39305216 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1662666 data_alloc: 301989888 data_used: 20881408
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _renew_subs
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 137 handle_osd_map epochs [137,137], i have 137, src has [1,137]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 137 ms_handle_reset con 0x558a67264000 session 0x558a63f4a000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6466c000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:09.310004+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 137 ms_handle_reset con 0x558a6466c000 session 0x558a611fe1e0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 137846784 unmapped: 39190528 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 137 ms_handle_reset con 0x558a5f807800 session 0x558a6249a780
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:10.310149+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 137863168 unmapped: 39174144 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:11.310294+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 137 heartbeat osd_stat(store_statfs(0x1b5a5e000/0x0/0x1bfc00000, data 0x575284d/0x582b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 137871360 unmapped: 39165952 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:12.310449+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 137871360 unmapped: 39165952 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:13.310601+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60c44000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _renew_subs
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 138 handle_osd_map epochs [138,138], i have 138, src has [1,138]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 138 ms_handle_reset con 0x558a60c44000 session 0x558a658630e0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 137912320 unmapped: 39124992 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1667687 data_alloc: 301989888 data_used: 20881408
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 138 heartbeat osd_stat(store_statfs(0x1b5a5c000/0x0/0x1bfc00000, data 0x5754b2d/0x5831000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:14.310736+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 137920512 unmapped: 39116800 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a627b7c00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:15.310962+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 137920512 unmapped: 39116800 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 138 handle_osd_map epochs [138,139], i have 138, src has [1,139]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 139 handle_osd_map epochs [139,139], i have 139, src has [1,139]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 139 ms_handle_reset con 0x558a627b7c00 session 0x558a6211f680
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:16.311137+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 137928704 unmapped: 39108608 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 139 handle_osd_map epochs [139,139], i have 139, src has [1,139]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60c44000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 139 ms_handle_reset con 0x558a60c44000 session 0x558a611f5e00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 139 ms_handle_reset con 0x558a5f807800 session 0x558a611f7a40
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:17.311482+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6466c000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 139 ms_handle_reset con 0x558a6466c000 session 0x558a66e483c0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 137928704 unmapped: 39108608 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:18.311641+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 137928704 unmapped: 39108608 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1672866 data_alloc: 301989888 data_used: 20885504
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a67264000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 139 ms_handle_reset con 0x558a67264000 session 0x558a62122d20
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 10.458681107s of 10.842226982s, submitted: 120
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a63048400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 139 ms_handle_reset con 0x558a63048400 session 0x558a63f4b0e0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:19.311960+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 137936896 unmapped: 39100416 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 139 heartbeat osd_stat(store_statfs(0x1b5a58000/0x0/0x1bfc00000, data 0x5756e95/0x5835000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:20.312156+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 137936896 unmapped: 39100416 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 139 ms_handle_reset con 0x558a5f807800 session 0x558a62131680
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60c44000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 139 heartbeat osd_stat(store_statfs(0x1b5a58000/0x0/0x1bfc00000, data 0x5756e95/0x5835000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:21.312675+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 137936896 unmapped: 39100416 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 140 ms_handle_reset con 0x558a60c44000 session 0x558a621310e0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:22.312795+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 137977856 unmapped: 39059456 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6466c000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 140 ms_handle_reset con 0x558a6466c000 session 0x558a5fff3a40
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a67264000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 140 ms_handle_reset con 0x558a67264000 session 0x558a627b81e0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:23.313007+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 137986048 unmapped: 39051264 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1673134 data_alloc: 301989888 data_used: 20901888
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a67264400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets getting new tickets!
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:24.313237+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _finish_auth 0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:24.314361+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 139034624 unmapped: 38002688 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc handle_mgr_map Got map version 47
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/1895764224,v1:172.18.0.106:6811/1895764224]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:25.313464+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 139108352 unmapped: 37928960 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a67264800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 140 ms_handle_reset con 0x558a67264800 session 0x558a621230e0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:26.314108+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138059776 unmapped: 38977536 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 140 heartbeat osd_stat(store_statfs(0x1b5a55000/0x0/0x1bfc00000, data 0x57592dc/0x5839000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:27.314388+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138059776 unmapped: 38977536 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:28.314554+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138108928 unmapped: 38928384 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1685956 data_alloc: 301989888 data_used: 20914176
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc handle_mgr_map Got map version 48
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/1895764224,v1:172.18.0.106:6811/1895764224]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 141 heartbeat osd_stat(store_statfs(0x1b5a4e000/0x0/0x1bfc00000, data 0x575b59f/0x583f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:29.315043+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138125312 unmapped: 38912000 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:30.315191+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138125312 unmapped: 38912000 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:31.315403+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138125312 unmapped: 38912000 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 12.969761848s of 13.213902473s, submitted: 80
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:32.315590+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138125312 unmapped: 38912000 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 141 heartbeat osd_stat(store_statfs(0x1b5a50000/0x0/0x1bfc00000, data 0x575b569/0x583e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:33.315767+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138125312 unmapped: 38912000 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1684018 data_alloc: 301989888 data_used: 20914176
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:34.315923+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138125312 unmapped: 38912000 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 141 heartbeat osd_stat(store_statfs(0x1b5a50000/0x0/0x1bfc00000, data 0x575b569/0x583e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 141 handle_osd_map epochs [142,142], i have 141, src has [1,142]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 142 handle_osd_map epochs [142,142], i have 142, src has [1,142]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:35.316054+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138149888 unmapped: 38887424 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _renew_subs
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:36.316289+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 143 ms_handle_reset con 0x558a5f807800 session 0x558a60e1cd20
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138166272 unmapped: 38871040 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60c44000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:37.316457+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 143 ms_handle_reset con 0x558a60c44000 session 0x558a60e1cf00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 143 handle_osd_map epochs [143,143], i have 143, src has [1,143]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138166272 unmapped: 38871040 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 143 handle_osd_map epochs [143,144], i have 143, src has [1,144]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 144 handle_osd_map epochs [144,144], i have 144, src has [1,144]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:38.316593+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6466c000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 144 ms_handle_reset con 0x558a6466c000 session 0x558a60e1d2c0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138182656 unmapped: 38854656 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a67264000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1694142 data_alloc: 301989888 data_used: 20930560
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 144 handle_osd_map epochs [144,145], i have 144, src has [1,145]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 145 ms_handle_reset con 0x558a67264000 session 0x558a60e1d4a0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:39.316744+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 145 heartbeat osd_stat(store_statfs(0x1b5a43000/0x0/0x1bfc00000, data 0x5761ff5/0x584a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138223616 unmapped: 38813696 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:40.316932+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138223616 unmapped: 38813696 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a67264c00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 145 ms_handle_reset con 0x558a67264c00 session 0x558a60e1d860
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:41.317249+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138223616 unmapped: 38813696 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 145 heartbeat osd_stat(store_statfs(0x1b5a3e000/0x0/0x1bfc00000, data 0x57643b1/0x584e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a67264c00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 145 ms_handle_reset con 0x558a67264c00 session 0x558a611f5e00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:42.317496+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138231808 unmapped: 38805504 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 145 handle_osd_map epochs [146,146], i have 145, src has [1,146]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 10.590934753s of 11.259260178s, submitted: 103
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:43.317675+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138248192 unmapped: 38789120 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1698210 data_alloc: 301989888 data_used: 20938752
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:44.317869+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138248192 unmapped: 38789120 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 146 handle_osd_map epochs [146,146], i have 146, src has [1,146]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 146 ms_handle_reset con 0x558a5f807800 session 0x558a5fff4d20
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:45.318174+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138256384 unmapped: 38780928 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 146 heartbeat osd_stat(store_statfs(0x1b5a3d000/0x0/0x1bfc00000, data 0x57665ef/0x5851000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60c44000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 146 heartbeat osd_stat(store_statfs(0x1b5a3d000/0x0/0x1bfc00000, data 0x57665ef/0x5851000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:46.318347+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 146 handle_osd_map epochs [147,147], i have 146, src has [1,147]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 147 ms_handle_reset con 0x558a60c44000 session 0x558a63f13c20
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138272768 unmapped: 38764544 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:47.318557+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6466c000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 147 ms_handle_reset con 0x558a6466c000 session 0x558a63f4a000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138272768 unmapped: 38764544 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a67264000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:48.318850+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 147 handle_osd_map epochs [148,148], i have 147, src has [1,148]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _renew_subs
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 147 handle_osd_map epochs [148,148], i have 148, src has [1,148]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 148 ms_handle_reset con 0x558a67264000 session 0x558a60c943c0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138289152 unmapped: 38748160 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1708136 data_alloc: 301989888 data_used: 20967424
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a67264000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:49.319059+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 148 ms_handle_reset con 0x558a67264000 session 0x558a611f54a0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138297344 unmapped: 38739968 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 148 heartbeat osd_stat(store_statfs(0x1b5a35000/0x0/0x1bfc00000, data 0x576acb1/0x5858000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:50.319368+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138297344 unmapped: 38739968 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:51.319539+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138297344 unmapped: 38739968 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:52.319708+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138297344 unmapped: 38739968 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:53.319941+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 148 handle_osd_map epochs [149,149], i have 148, src has [1,149]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.887326241s of 10.157606125s, submitted: 93
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138346496 unmapped: 38690816 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1710472 data_alloc: 301989888 data_used: 20983808
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:54.320094+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138346496 unmapped: 38690816 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 149 heartbeat osd_stat(store_statfs(0x1b5a31000/0x0/0x1bfc00000, data 0x576ceff/0x585c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:55.320272+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138346496 unmapped: 38690816 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:56.320416+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138346496 unmapped: 38690816 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:57.320601+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138346496 unmapped: 38690816 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:58.320756+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 149 heartbeat osd_stat(store_statfs(0x1b5a31000/0x0/0x1bfc00000, data 0x576cf9a/0x585d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138346496 unmapped: 38690816 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1711360 data_alloc: 301989888 data_used: 20983808
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:59.322297+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138346496 unmapped: 38690816 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 149 heartbeat osd_stat(store_statfs(0x1b5a31000/0x0/0x1bfc00000, data 0x576cf9a/0x585d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:00.322483+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138346496 unmapped: 38690816 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:01.322837+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 149 heartbeat osd_stat(store_statfs(0x1b5a31000/0x0/0x1bfc00000, data 0x576cf9a/0x585d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138346496 unmapped: 38690816 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:02.322947+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138346496 unmapped: 38690816 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:03.323237+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 149 heartbeat osd_stat(store_statfs(0x1b5a31000/0x0/0x1bfc00000, data 0x576cf9a/0x585d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138346496 unmapped: 38690816 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1711360 data_alloc: 301989888 data_used: 20983808
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 149 heartbeat osd_stat(store_statfs(0x1b5a31000/0x0/0x1bfc00000, data 0x576cf9a/0x585d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:04.323385+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138346496 unmapped: 38690816 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:05.323655+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138346496 unmapped: 38690816 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 12.370441437s of 12.407494545s, submitted: 22
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:06.323974+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138354688 unmapped: 38682624 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:07.324118+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 149 ms_handle_reset con 0x558a5f807800 session 0x558a66e49c20
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138387456 unmapped: 38649856 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:08.324275+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138387456 unmapped: 38649856 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1714847 data_alloc: 301989888 data_used: 20983808
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:09.324505+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 149 heartbeat osd_stat(store_statfs(0x1b5a2e000/0x0/0x1bfc00000, data 0x576d035/0x585e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138387456 unmapped: 38649856 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 149 heartbeat osd_stat(store_statfs(0x1b5a31000/0x0/0x1bfc00000, data 0x576d06c/0x585d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:10.324725+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138387456 unmapped: 38649856 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:11.325123+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138387456 unmapped: 38649856 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:12.325262+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138403840 unmapped: 38633472 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 149 handle_osd_map epochs [150,150], i have 149, src has [1,150]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:13.325403+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 150 handle_osd_map epochs [150,151], i have 150, src has [1,151]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138412032 unmapped: 38625280 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 151 handle_osd_map epochs [150,151], i have 151, src has [1,151]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1719791 data_alloc: 301989888 data_used: 21000192
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:14.325586+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138428416 unmapped: 38608896 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 151 heartbeat osd_stat(store_statfs(0x1b5a2a000/0x0/0x1bfc00000, data 0x577185d/0x5864000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:15.325732+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138428416 unmapped: 38608896 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:16.325951+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60c44000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 151 ms_handle_reset con 0x558a60c44000 session 0x558a65d14f00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6466c000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138428416 unmapped: 38608896 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 151 heartbeat osd_stat(store_statfs(0x1b5a2a000/0x0/0x1bfc00000, data 0x5771927/0x5864000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 151 ms_handle_reset con 0x558a6466c000 session 0x558a61f232c0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 10.788802147s of 11.176210403s, submitted: 127
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:17.326118+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138428416 unmapped: 38608896 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 151 handle_osd_map epochs [151,152], i have 151, src has [1,152]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:18.326319+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138428416 unmapped: 38608896 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1724153 data_alloc: 301989888 data_used: 21016576
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:19.326492+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138428416 unmapped: 38608896 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:20.326659+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138436608 unmapped: 38600704 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:21.326818+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138436608 unmapped: 38600704 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:22.326979+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a67264c00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138436608 unmapped: 38600704 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 152 heartbeat osd_stat(store_statfs(0x1b5a25000/0x0/0x1bfc00000, data 0x5773b86/0x5869000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 152 ms_handle_reset con 0x558a67264c00 session 0x558a629c5a40
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a67264c00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:23.327166+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 152 ms_handle_reset con 0x558a67264c00 session 0x558a629c52c0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 139165696 unmapped: 37871616 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1806946 data_alloc: 301989888 data_used: 21016576
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:24.327356+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 152 heartbeat osd_stat(store_statfs(0x1b5077000/0x0/0x1bfc00000, data 0x6121b85/0x6217000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 139165696 unmapped: 37871616 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 152 ms_handle_reset con 0x558a5f807800 session 0x558a6211fc20
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:25.327539+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 139190272 unmapped: 37847040 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:26.327713+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 139190272 unmapped: 37847040 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.638718605s of 10.004400253s, submitted: 99
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:27.327871+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 139190272 unmapped: 37847040 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:28.328055+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138878976 unmapped: 38158336 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1729716 data_alloc: 301989888 data_used: 21016576
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60c44000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:29.328258+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138895360 unmapped: 38141952 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 152 heartbeat osd_stat(store_statfs(0x1b5a23000/0x0/0x1bfc00000, data 0x5773e91/0x586b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc handle_mgr_map Got map version 49
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/1895764224,v1:172.18.0.106:6811/1895764224]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:30.328462+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138829824 unmapped: 38207488 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:31.328691+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138829824 unmapped: 38207488 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:32.328931+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a67265000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 152 ms_handle_reset con 0x558a67265000 session 0x558a62123a40
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138829824 unmapped: 38207488 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:33.329179+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138829824 unmapped: 38207488 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1735294 data_alloc: 301989888 data_used: 21016576
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:34.329312+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a67265400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 152 ms_handle_reset con 0x558a67265400 session 0x558a60e1da40
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138846208 unmapped: 38191104 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:35.329473+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a67265800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 152 ms_handle_reset con 0x558a67265800 session 0x558a6494c000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 152 heartbeat osd_stat(store_statfs(0x1b5a24000/0x0/0x1bfc00000, data 0x5773e88/0x5869000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138895360 unmapped: 38141952 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:36.329682+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138895360 unmapped: 38141952 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 152 ms_handle_reset con 0x558a5f807800 session 0x558a62130000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a67264c00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.736786842s of 10.000470161s, submitted: 55
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 152 ms_handle_reset con 0x558a67264c00 session 0x558a6249be00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:37.329948+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138928128 unmapped: 38109184 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:38.330154+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138944512 unmapped: 38092800 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1731967 data_alloc: 301989888 data_used: 21016576
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:39.330329+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 152 heartbeat osd_stat(store_statfs(0x1b5a26000/0x0/0x1bfc00000, data 0x5773ea5/0x5868000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138952704 unmapped: 38084608 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:40.330480+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138952704 unmapped: 38084608 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:41.330643+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138952704 unmapped: 38084608 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:42.330786+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138952704 unmapped: 38084608 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:43.330960+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138952704 unmapped: 38084608 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1731967 data_alloc: 301989888 data_used: 21016576
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:44.331195+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 152 heartbeat osd_stat(store_statfs(0x1b5a26000/0x0/0x1bfc00000, data 0x5773ea5/0x5868000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138952704 unmapped: 38084608 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:45.331349+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138952704 unmapped: 38084608 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:46.331582+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138952704 unmapped: 38084608 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.907020569s of 10.001611710s, submitted: 22
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:47.331758+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a67265000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 152 ms_handle_reset con 0x558a67265000 session 0x558a60c94000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138960896 unmapped: 38076416 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:48.331960+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 152 heartbeat osd_stat(store_statfs(0x1b5a26000/0x0/0x1bfc00000, data 0x5773ea5/0x5868000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a67265400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 152 ms_handle_reset con 0x558a67265400 session 0x558a60e1cd20
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138960896 unmapped: 38076416 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1737051 data_alloc: 301989888 data_used: 21016576
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:49.332132+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138960896 unmapped: 38076416 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a67265c00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 152 ms_handle_reset con 0x558a67265c00 session 0x558a60e1d4a0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:50.332255+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 152 ms_handle_reset con 0x558a5f807800 session 0x558a6211fc20
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138993664 unmapped: 38043648 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 152 heartbeat osd_stat(store_statfs(0x1b5a25000/0x0/0x1bfc00000, data 0x577400a/0x5869000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:51.332394+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138993664 unmapped: 38043648 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:52.332544+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138993664 unmapped: 38043648 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:53.332699+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138993664 unmapped: 38043648 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1735937 data_alloc: 301989888 data_used: 21016576
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:54.332909+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138608640 unmapped: 38428672 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:55.332997+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138608640 unmapped: 38428672 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 152 heartbeat osd_stat(store_statfs(0x1b5a26000/0x0/0x1bfc00000, data 0x5774039/0x5868000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:56.333115+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138608640 unmapped: 38428672 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.796061516s of 10.000635147s, submitted: 48
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:57.333277+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138608640 unmapped: 38428672 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:58.333438+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138608640 unmapped: 38428672 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1735615 data_alloc: 301989888 data_used: 21016576
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 152 heartbeat osd_stat(store_statfs(0x1b5a26000/0x0/0x1bfc00000, data 0x5774039/0x5868000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:59.333610+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 152 heartbeat osd_stat(store_statfs(0x1b5a26000/0x0/0x1bfc00000, data 0x5774039/0x5868000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138608640 unmapped: 38428672 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a67264c00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:00.335987+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 152 ms_handle_reset con 0x558a67264c00 session 0x558a627b81e0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138608640 unmapped: 38428672 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:01.336175+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138608640 unmapped: 38428672 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:02.336339+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138608640 unmapped: 38428672 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:03.336505+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 152 handle_osd_map epochs [153,153], i have 152, src has [1,153]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138616832 unmapped: 38420480 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1744803 data_alloc: 301989888 data_used: 21028864
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:04.336660+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 153 handle_osd_map epochs [154,154], i have 153, src has [1,154]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.4] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a67265000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 154 heartbeat osd_stat(store_statfs(0x1b5a1e000/0x0/0x1bfc00000, data 0x57764e5/0x586f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138633216 unmapped: 38404096 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 154 ms_handle_reset con 0x558a67265000 session 0x558a627b85a0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a67265400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:05.336846+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _renew_subs
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 154 handle_osd_map epochs [155,155], i have 154, src has [1,155]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 139681792 unmapped: 37355520 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 155 handle_osd_map epochs [155,155], i have 155, src has [1,155]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6467a000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 155 ms_handle_reset con 0x558a6467a000 session 0x558a6235c3c0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6467ac00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:06.336980+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a632d0c00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 155 ms_handle_reset con 0x558a632d0c00 session 0x558a611fc000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 155 handle_osd_map epochs [155,156], i have 155, src has [1,156]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 139722752 unmapped: 37314560 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 156 handle_osd_map epochs [156,156], i have 156, src has [1,156]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 156 handle_osd_map epochs [156,156], i have 156, src has [1,156]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.838083267s of 10.000305176s, submitted: 60
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 156 ms_handle_reset con 0x558a6467ac00 session 0x558a62122960
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 156 ms_handle_reset con 0x558a67265400 session 0x558a61f22780
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:07.337112+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 156 heartbeat osd_stat(store_statfs(0x1b5a14000/0x0/0x1bfc00000, data 0x577acf4/0x5879000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 139747328 unmapped: 37289984 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 156 ms_handle_reset con 0x558a5f807800 session 0x558a62131a40
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:08.337239+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6467a000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138903552 unmapped: 38133760 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 156 handle_osd_map epochs [156,157], i have 156, src has [1,157]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1764651 data_alloc: 301989888 data_used: 21049344
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.4] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:09.337978+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 157 ms_handle_reset con 0x558a6467a000 session 0x558a602f8960
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a67264c00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138928128 unmapped: 38109184 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 157 handle_osd_map epochs [157,158], i have 157, src has [1,158]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 158 ms_handle_reset con 0x558a67264c00 session 0x558a60e1cb40
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 158 handle_osd_map epochs [158,158], i have 158, src has [1,158]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:10.338132+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 158 ms_handle_reset con 0x558a5f807800 session 0x558a627b92c0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 158 heartbeat osd_stat(store_statfs(0x1b5a0e000/0x0/0x1bfc00000, data 0x577f60c/0x587f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138952704 unmapped: 38084608 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:11.338304+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 138969088 unmapped: 38068224 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:12.338445+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 158 handle_osd_map epochs [158,159], i have 158, src has [1,159]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.4] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 140017664 unmapped: 37019648 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 159 heartbeat osd_stat(store_statfs(0x1b5a0b000/0x0/0x1bfc00000, data 0x5781b3d/0x5881000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:13.338594+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 140017664 unmapped: 37019648 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1766335 data_alloc: 301989888 data_used: 21041152
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:14.338735+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6467a000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 159 ms_handle_reset con 0x558a6467a000 session 0x558a611fcb40
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6467ac00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 140017664 unmapped: 37019648 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 159 ms_handle_reset con 0x558a6467ac00 session 0x558a611fc1e0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a67265400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 159 ms_handle_reset con 0x558a67265400 session 0x558a611fd860
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a67265000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 159 heartbeat osd_stat(store_statfs(0x1b5a08000/0x0/0x1bfc00000, data 0x5783e1b/0x5885000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:15.338840+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 159 ms_handle_reset con 0x558a67265000 session 0x558a6247d4a0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 140017664 unmapped: 37019648 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:16.339033+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 159 ms_handle_reset con 0x558a5f807800 session 0x558a621f8b40
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 140042240 unmapped: 36995072 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.553024292s of 10.000454903s, submitted: 161
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6467a000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 159 ms_handle_reset con 0x558a6467a000 session 0x558a621f8d20
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:17.339202+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _renew_subs
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 159 handle_osd_map epochs [160,160], i have 159, src has [1,160]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 140042240 unmapped: 36995072 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:18.339360+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6467ac00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 160 ms_handle_reset con 0x558a6467ac00 session 0x558a5fff21e0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a67265400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 145080320 unmapped: 31956992 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1824270 data_alloc: 301989888 data_used: 21053440
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 160 ms_handle_reset con 0x558a67265400 session 0x558a6249bc20
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a62560800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:19.339508+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 160 ms_handle_reset con 0x558a62560800 session 0x558a60deed20
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 160 ms_handle_reset con 0x558a5f807800 session 0x558a6494c960
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 140951552 unmapped: 36085760 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:20.339676+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 140918784 unmapped: 36118528 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6467a000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 160 ms_handle_reset con 0x558a6467a000 session 0x558a611f72c0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 160 heartbeat osd_stat(store_statfs(0x1b2e11000/0x0/0x1bfc00000, data 0x6dd9168/0x6edd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5f0f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:21.339839+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 160 handle_osd_map epochs [160,161], i have 160, src has [1,161]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 161 handle_osd_map epochs [161,161], i have 161, src has [1,161]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 161 handle_osd_map epochs [161,161], i have 161, src has [1,161]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 140918784 unmapped: 36118528 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6467ac00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 161 ms_handle_reset con 0x558a6467ac00 session 0x558a611fba40
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:22.339992+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 140943360 unmapped: 36093952 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:23.340152+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a67265400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 161 ms_handle_reset con 0x558a67265400 session 0x558a6494c000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a62775400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 161 ms_handle_reset con 0x558a62775400 session 0x558a6494cf00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 140959744 unmapped: 36077568 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1899512 data_alloc: 301989888 data_used: 21065728
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 161 heartbeat osd_stat(store_statfs(0x1b378f000/0x0/0x1bfc00000, data 0x64546d0/0x655e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5f0f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:24.340282+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 161 handle_osd_map epochs [162,162], i have 161, src has [1,162]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 162 handle_osd_map epochs [162,162], i have 162, src has [1,162]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 140992512 unmapped: 36044800 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:25.340430+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 162 handle_osd_map epochs [162,163], i have 162, src has [1,163]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 141000704 unmapped: 36036608 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 163 handle_osd_map epochs [162,163], i have 163, src has [1,163]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 163 handle_osd_map epochs [162,163], i have 163, src has [1,163]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:26.340576+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 163 handle_osd_map epochs [163,163], i have 163, src has [1,163]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 163 ms_handle_reset con 0x558a5f807800 session 0x558a6494d4a0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6467a000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 163 ms_handle_reset con 0x558a6467a000 session 0x558a6494da40
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6467ac00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 163 handle_osd_map epochs [163,164], i have 163, src has [1,164]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 8.812202454s of 10.005859375s, submitted: 318
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 164 handle_osd_map epochs [164,164], i have 164, src has [1,164]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 141082624 unmapped: 35954688 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 164 heartbeat osd_stat(store_statfs(0x1b3788000/0x0/0x1bfc00000, data 0x6458f43/0x6566000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5f0f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 164 ms_handle_reset con 0x558a6467ac00 session 0x558a6494dc20
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:27.340716+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 141131776 unmapped: 35905536 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 164 handle_osd_map epochs [164,165], i have 164, src has [1,165]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 165 handle_osd_map epochs [165,165], i have 165, src has [1,165]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a67265400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:28.340868+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 165 ms_handle_reset con 0x558a67265400 session 0x558a62122d20
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 165 handle_osd_map epochs [165,165], i have 165, src has [1,165]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60d82c00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 165 ms_handle_reset con 0x558a60d82c00 session 0x558a62123680
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 141205504 unmapped: 35831808 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1911672 data_alloc: 301989888 data_used: 21094400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:29.341111+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 165 ms_handle_reset con 0x558a5f807800 session 0x558a62123860
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6467a000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 165 ms_handle_reset con 0x558a6467a000 session 0x558a62123c20
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 141238272 unmapped: 35799040 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:30.341250+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6467ac00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 165 ms_handle_reset con 0x558a6467ac00 session 0x558a6216b0e0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 141246464 unmapped: 35790848 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:31.341433+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 165 heartbeat osd_stat(store_statfs(0x1b3360000/0x0/0x1bfc00000, data 0x645d75f/0x656e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x632f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 141262848 unmapped: 35774464 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:32.341597+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 165 handle_osd_map epochs [165,166], i have 165, src has [1,166]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 141271040 unmapped: 35766272 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:33.341723+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 166 handle_osd_map epochs [166,167], i have 166, src has [1,167]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 141279232 unmapped: 35758080 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1921340 data_alloc: 301989888 data_used: 21110784
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:34.341993+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 141287424 unmapped: 35749888 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 167 handle_osd_map epochs [167,167], i have 167, src has [1,167]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 167 heartbeat osd_stat(store_statfs(0x1b4756000/0x0/0x1bfc00000, data 0x6461ec0/0x6576000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:35.342117+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 141336576 unmapped: 35700736 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:36.342311+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 141344768 unmapped: 35692544 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a67265400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _renew_subs
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 167 handle_osd_map epochs [168,168], i have 167, src has [1,168]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.200352669s of 10.129467964s, submitted: 156
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 168 ms_handle_reset con 0x558a67265400 session 0x558a61f225a0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:37.343101+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 141369344 unmapped: 35667968 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 168 handle_osd_map epochs [168,169], i have 168, src has [1,169]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a62115400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 169 ms_handle_reset con 0x558a62115400 session 0x558a610a6960
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:38.343290+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 169 ms_handle_reset con 0x558a5f807800 session 0x558a620f3c20
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 169 heartbeat osd_stat(store_statfs(0x1b4752000/0x0/0x1bfc00000, data 0x646446e/0x657b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 141410304 unmapped: 35627008 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1931022 data_alloc: 301989888 data_used: 21123072
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:39.343500+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6467a000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 169 ms_handle_reset con 0x558a6467a000 session 0x558a5fff3c20
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 141434880 unmapped: 35602432 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:40.343691+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6467ac00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 169 ms_handle_reset con 0x558a6467ac00 session 0x558a611fc5a0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 141443072 unmapped: 35594240 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:41.343934+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a67265400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 141443072 unmapped: 35594240 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 169 handle_osd_map epochs [169,170], i have 169, src has [1,170]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 170 handle_osd_map epochs [170,170], i have 170, src has [1,170]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:42.344159+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 170 ms_handle_reset con 0x558a67265400 session 0x558a612054a0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 170 heartbeat osd_stat(store_statfs(0x1b4751000/0x0/0x1bfc00000, data 0x646683c/0x657d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 141484032 unmapped: 35553280 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _renew_subs
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 170 handle_osd_map epochs [171,171], i have 170, src has [1,171]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a62556c00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 171 ms_handle_reset con 0x558a62556c00 session 0x558a5fff23c0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:43.344365+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 141484032 unmapped: 35553280 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1950479 data_alloc: 301989888 data_used: 21139456
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:44.344537+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 171 ms_handle_reset con 0x558a5f807800 session 0x558a60c95c20
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 142573568 unmapped: 34463744 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:45.344696+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 171 handle_osd_map epochs [171,172], i have 171, src has [1,172]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 172 handle_osd_map epochs [172,172], i have 172, src has [1,172]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6467a000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 172 ms_handle_reset con 0x558a6467a000 session 0x558a622101e0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6467ac00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 172 ms_handle_reset con 0x558a6467ac00 session 0x558a611f4780
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 142598144 unmapped: 34439168 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:46.344854+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 172 handle_osd_map epochs [172,173], i have 172, src has [1,173]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a67265400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 173 ms_handle_reset con 0x558a67265400 session 0x558a5fff5680
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a65a1ac00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 173 ms_handle_reset con 0x558a65a1ac00 session 0x558a5fff4960
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 173 handle_osd_map epochs [173,173], i have 173, src has [1,173]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 142622720 unmapped: 34414592 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:47.345092+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 173 handle_osd_map epochs [173,174], i have 173, src has [1,174]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.267856598s of 10.292163849s, submitted: 296
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 174 handle_osd_map epochs [174,175], i have 174, src has [1,175]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 143687680 unmapped: 33349632 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 175 heartbeat osd_stat(store_statfs(0x1b4732000/0x0/0x1bfc00000, data 0x6474315/0x659a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 175 ms_handle_reset con 0x558a5f807800 session 0x558a60c8c3c0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:48.345416+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6467a000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 175 ms_handle_reset con 0x558a6467a000 session 0x558a5fff3860
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6467ac00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 175 ms_handle_reset con 0x558a6467ac00 session 0x558a65a88780
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 143704064 unmapped: 33333248 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1961313 data_alloc: 301989888 data_used: 21155840
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:49.345603+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 175 handle_osd_map epochs [176,176], i have 175, src has [1,176]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a67265400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 143712256 unmapped: 33325056 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 176 ms_handle_reset con 0x558a67265400 session 0x558a63ddb680
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:50.345862+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 176 heartbeat osd_stat(store_statfs(0x1b472d000/0x0/0x1bfc00000, data 0x64766e7/0x659e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 143720448 unmapped: 33316864 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:51.346056+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 176 handle_osd_map epochs [176,177], i have 176, src has [1,177]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 177 handle_osd_map epochs [177,177], i have 177, src has [1,177]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 143753216 unmapped: 33284096 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:52.346329+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a62411c00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 177 ms_handle_reset con 0x558a62411c00 session 0x558a65d145a0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6467a000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 177 ms_handle_reset con 0x558a6467a000 session 0x558a620f2f00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 177 handle_osd_map epochs [177,178], i have 177, src has [1,178]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 178 handle_osd_map epochs [178,178], i have 178, src has [1,178]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 178 ms_handle_reset con 0x558a5f807800 session 0x558a63f4b4a0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6467ac00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 143769600 unmapped: 33267712 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:53.346479+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 178 ms_handle_reset con 0x558a60c44000 session 0x558a63ddbe00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 178 ms_handle_reset con 0x558a6467ac00 session 0x558a611f6f00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a67264000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 178 ms_handle_reset con 0x558a67264000 session 0x558a620f3680
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 178 handle_osd_map epochs [178,179], i have 178, src has [1,179]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 144572416 unmapped: 32464896 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2068947 data_alloc: 301989888 data_used: 21180416
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:54.346666+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc handle_mgr_map Got map version 50
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/1895764224,v1:172.18.0.106:6811/1895764224]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 144793600 unmapped: 32243712 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:55.346847+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a67264000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _renew_subs
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 179 handle_osd_map epochs [180,180], i have 179, src has [1,180]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 180 ms_handle_reset con 0x558a67264000 session 0x558a63f134a0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 144809984 unmapped: 32227328 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 180 handle_osd_map epochs [180,180], i have 180, src has [1,180]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 180 heartbeat osd_stat(store_statfs(0x1b3aed000/0x0/0x1bfc00000, data 0x70ad585/0x71df000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,1,2,3,5] op hist [0,0,1])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:56.347006+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 180 ms_handle_reset con 0x558a5f807800 session 0x558a66e48000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60c44000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6467a000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 180 ms_handle_reset con 0x558a60c44000 session 0x558a66e483c0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 180 ms_handle_reset con 0x558a6467a000 session 0x558a620ff0e0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 144809984 unmapped: 32227328 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _renew_subs
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 180 handle_osd_map epochs [181,181], i have 180, src has [1,181]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:57.347155+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6467ac00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 181 ms_handle_reset con 0x558a6467ac00 session 0x558a60e1dc20
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 181 handle_osd_map epochs [181,181], i have 181, src has [1,181]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.317087173s of 10.048410416s, submitted: 428
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6467ac00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 181 ms_handle_reset con 0x558a6467ac00 session 0x558a66882000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 181 handle_osd_map epochs [181,182], i have 181, src has [1,182]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 182 handle_osd_map epochs [182,182], i have 182, src has [1,182]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 145940480 unmapped: 31096832 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:58.347346+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 182 ms_handle_reset con 0x558a5f807800 session 0x558a668823c0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 145940480 unmapped: 31096832 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1996067 data_alloc: 301989888 data_used: 21192704
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:59.347544+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 182 handle_osd_map epochs [182,183], i have 182, src has [1,183]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 145948672 unmapped: 31088640 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:00.347681+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 183 heartbeat osd_stat(store_statfs(0x1b4715000/0x0/0x1bfc00000, data 0x6486028/0x65b8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 183 heartbeat osd_stat(store_statfs(0x1b4715000/0x0/0x1bfc00000, data 0x6486028/0x65b8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 145948672 unmapped: 31088640 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60c44000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 183 ms_handle_reset con 0x558a60c44000 session 0x558a668825a0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:01.347849+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 183 heartbeat osd_stat(store_statfs(0x1b4715000/0x0/0x1bfc00000, data 0x64860f2/0x65b8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6467a000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 183 ms_handle_reset con 0x558a6467a000 session 0x558a66882780
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 183 heartbeat osd_stat(store_statfs(0x1b4715000/0x0/0x1bfc00000, data 0x64860f2/0x65b8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 183 handle_osd_map epochs [183,184], i have 183, src has [1,184]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 145956864 unmapped: 31080448 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:02.348032+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a67264000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 184 handle_osd_map epochs [184,184], i have 184, src has [1,184]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 184 handle_osd_map epochs [185,185], i have 184, src has [1,185]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 185 handle_osd_map epochs [185,185], i have 185, src has [1,185]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6466c000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 185 ms_handle_reset con 0x558a67264000 session 0x558a66882960
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 185 ms_handle_reset con 0x558a6466c000 session 0x558a6235d860
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 145997824 unmapped: 31039488 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:03.348257+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a67264000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 185 ms_handle_reset con 0x558a67264000 session 0x558a66882b40
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 145997824 unmapped: 31039488 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 185 handle_osd_map epochs [186,186], i have 185, src has [1,186]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2010756 data_alloc: 301989888 data_used: 21213184
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 186 ms_handle_reset con 0x558a5f807800 session 0x558a66882f00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:04.348399+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 146014208 unmapped: 31023104 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:05.348586+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60c44000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 186 ms_handle_reset con 0x558a60c44000 session 0x558a668830e0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6467a000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 186 ms_handle_reset con 0x558a6467a000 session 0x558a668834a0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 186 heartbeat osd_stat(store_statfs(0x1b4707000/0x0/0x1bfc00000, data 0x648cc89/0x65c4000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 146022400 unmapped: 31014912 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:06.348717+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 146022400 unmapped: 31014912 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:07.348916+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 186 heartbeat osd_stat(store_statfs(0x1b470b000/0x0/0x1bfc00000, data 0x648cc79/0x65c3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 186 handle_osd_map epochs [186,187], i have 186, src has [1,187]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.846722603s of 10.427934647s, submitted: 177
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 146022400 unmapped: 31014912 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:08.349080+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 146022400 unmapped: 31014912 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2010908 data_alloc: 301989888 data_used: 21225472
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:09.349272+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 146022400 unmapped: 31014912 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:10.349377+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 146022400 unmapped: 31014912 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:11.349614+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 187 heartbeat osd_stat(store_statfs(0x1b4706000/0x0/0x1bfc00000, data 0x648ef07/0x65c7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 146022400 unmapped: 31014912 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:12.349794+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 187 heartbeat osd_stat(store_statfs(0x1b4706000/0x0/0x1bfc00000, data 0x648ef07/0x65c7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 146022400 unmapped: 31014912 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:13.349949+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 146022400 unmapped: 31014912 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2010908 data_alloc: 301989888 data_used: 21225472
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:14.350082+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 146022400 unmapped: 31014912 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:15.350250+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 146022400 unmapped: 31014912 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:16.350393+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 146022400 unmapped: 31014912 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:17.350607+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 146022400 unmapped: 31014912 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:18.350787+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 187 heartbeat osd_stat(store_statfs(0x1b4706000/0x0/0x1bfc00000, data 0x648ef07/0x65c7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 146022400 unmapped: 31014912 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2010908 data_alloc: 301989888 data_used: 21225472
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:19.351025+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 146022400 unmapped: 31014912 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:20.351198+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 13.237484932s of 13.281394958s, submitted: 19
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 146022400 unmapped: 31014912 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:21.351330+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6467a000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 187 ms_handle_reset con 0x558a6467a000 session 0x558a66883860
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 146038784 unmapped: 30998528 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:22.351509+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 187 handle_osd_map epochs [187,188], i have 187, src has [1,188]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 188 handle_osd_map epochs [188,188], i have 188, src has [1,188]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 188 ms_handle_reset con 0x558a5f807800 session 0x558a66883c20
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 146046976 unmapped: 30990336 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:23.351659+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60c44000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 188 ms_handle_reset con 0x558a60c44000 session 0x558a64160b40
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 188 heartbeat osd_stat(store_statfs(0x1b46ff000/0x0/0x1bfc00000, data 0x649137c/0x65ce000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6466c000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 146055168 unmapped: 30982144 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2019400 data_alloc: 301989888 data_used: 21241856
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:24.351800+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _renew_subs
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 188 handle_osd_map epochs [189,189], i have 188, src has [1,189]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 189 ms_handle_reset con 0x558a6466c000 session 0x558a61ed30e0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 146063360 unmapped: 30973952 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:25.352009+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 189 heartbeat osd_stat(store_statfs(0x1b46fb000/0x0/0x1bfc00000, data 0x6493771/0x65d2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 146063360 unmapped: 30973952 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a67264000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 189 ms_handle_reset con 0x558a67264000 session 0x558a6235c960
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 189 heartbeat osd_stat(store_statfs(0x1b46fb000/0x0/0x1bfc00000, data 0x6493771/0x65d2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:26.352130+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a67264000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 189 ms_handle_reset con 0x558a67264000 session 0x558a65a283c0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 146063360 unmapped: 30973952 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:27.352263+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 189 heartbeat osd_stat(store_statfs(0x1b46fc000/0x0/0x1bfc00000, data 0x64937fc/0x65d2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 146063360 unmapped: 30973952 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:28.352423+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 146063360 unmapped: 30973952 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2024371 data_alloc: 301989888 data_used: 21254144
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:29.352589+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 146087936 unmapped: 30949376 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:30.352766+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 146087936 unmapped: 30949376 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:31.352919+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 10.401353836s of 10.729187965s, submitted: 74
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 146087936 unmapped: 30949376 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:32.353058+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 189 heartbeat osd_stat(store_statfs(0x1b46fd000/0x0/0x1bfc00000, data 0x649382b/0x65d1000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 189 handle_osd_map epochs [190,190], i have 189, src has [1,190]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 146104320 unmapped: 30932992 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:33.353769+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60c44000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 190 ms_handle_reset con 0x558a5f807800 session 0x558a65a29860
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 190 ms_handle_reset con 0x558a60c44000 session 0x558a629c50e0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 190 heartbeat osd_stat(store_statfs(0x1b46f6000/0x0/0x1bfc00000, data 0x6495aeb/0x65d7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 146104320 unmapped: 30932992 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:34.353946+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2033550 data_alloc: 301989888 data_used: 21266432
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6466c000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 190 ms_handle_reset con 0x558a6466c000 session 0x558a65a29e00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6467a000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 190 ms_handle_reset con 0x558a6467a000 session 0x558a65cde3c0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 146128896 unmapped: 30908416 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:35.354085+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 146128896 unmapped: 30908416 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:36.354299+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 146128896 unmapped: 30908416 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:37.354441+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6467a000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 190 ms_handle_reset con 0x558a6467a000 session 0x558a65cdfa40
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 146145280 unmapped: 30892032 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:38.354620+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 146145280 unmapped: 30892032 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:39.354800+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2031367 data_alloc: 301989888 data_used: 21266432
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 190 heartbeat osd_stat(store_statfs(0x1b46f9000/0x0/0x1bfc00000, data 0x6495c0d/0x65d5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 146153472 unmapped: 30883840 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:40.354944+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 190 handle_osd_map epochs [190,191], i have 190, src has [1,191]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 191 handle_osd_map epochs [191,191], i have 191, src has [1,191]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 191 handle_osd_map epochs [191,191], i have 191, src has [1,191]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 146161664 unmapped: 30875648 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:41.355113+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.659821510s of 10.081795692s, submitted: 127
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 191 handle_osd_map epochs [191,192], i have 191, src has [1,192]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 192 handle_osd_map epochs [192,192], i have 192, src has [1,192]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 192 ms_handle_reset con 0x558a5f807800 session 0x558a65cdfe00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 146178048 unmapped: 30859264 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:42.355249+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60c44000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 192 ms_handle_reset con 0x558a60c44000 session 0x558a66fd0000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6466c000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 146186240 unmapped: 30851072 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:43.355429+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 192 handle_osd_map epochs [192,193], i have 192, src has [1,193]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 193 handle_osd_map epochs [193,193], i have 193, src has [1,193]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 193 ms_handle_reset con 0x558a6466c000 session 0x558a66fd01e0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a67264000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 193 ms_handle_reset con 0x558a67264000 session 0x558a66fd05a0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 146219008 unmapped: 30818304 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a67264000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:44.355551+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2041480 data_alloc: 301989888 data_used: 21295104
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 193 heartbeat osd_stat(store_statfs(0x1b46ef000/0x0/0x1bfc00000, data 0x649a33f/0x65de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x4f2f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 193 handle_osd_map epochs [194,194], i have 193, src has [1,194]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 194 ms_handle_reset con 0x558a67264000 session 0x558a66fd0960
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 146276352 unmapped: 30760960 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:45.355667+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 194 ms_handle_reset con 0x558a5f807800 session 0x558a66fd0b40
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 146309120 unmapped: 30728192 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:46.355841+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60c44000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 194 ms_handle_reset con 0x558a60c44000 session 0x558a66fd0f00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:47.356020+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 146317312 unmapped: 30720000 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 194 handle_osd_map epochs [194,195], i have 194, src has [1,195]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6466c000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 195 ms_handle_reset con 0x558a6466c000 session 0x558a66fd12c0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 195 handle_osd_map epochs [195,195], i have 195, src has [1,195]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6467a000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:48.356234+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 146350080 unmapped: 30687232 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 195 ms_handle_reset con 0x558a6467a000 session 0x558a66fd14a0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60c44000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 195 ms_handle_reset con 0x558a60c44000 session 0x558a611ffa40
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 195 ms_handle_reset con 0x558a5f807800 session 0x558a66fd1c20
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:49.356598+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 146366464 unmapped: 30670848 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2057293 data_alloc: 301989888 data_used: 21307392
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6466c000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 195 ms_handle_reset con 0x558a6466c000 session 0x558a66fd1e00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a67264000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 195 ms_handle_reset con 0x558a67264000 session 0x558a66fd14a0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 195 heartbeat osd_stat(store_statfs(0x1b42e2000/0x0/0x1bfc00000, data 0x64a0f9f/0x65eb000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:50.356721+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6467ac00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 146366464 unmapped: 30670848 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 195 ms_handle_reset con 0x558a6467ac00 session 0x558a66fd12c0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:51.356909+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 146399232 unmapped: 30638080 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.356864929s of 10.005169868s, submitted: 172
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:52.357085+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 146415616 unmapped: 30621696 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6467ac00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 195 ms_handle_reset con 0x558a6467ac00 session 0x558a66fd0b40
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 195 handle_osd_map epochs [195,196], i have 195, src has [1,196]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 196 heartbeat osd_stat(store_statfs(0x1b42e1000/0x0/0x1bfc00000, data 0x64a10c8/0x65ec000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:53.357316+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 146464768 unmapped: 30572544 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 196 ms_handle_reset con 0x558a5f807800 session 0x558a66fd01e0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:54.357685+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 146472960 unmapped: 30564352 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2068755 data_alloc: 301989888 data_used: 21319680
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60c44000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 196 heartbeat osd_stat(store_statfs(0x1b42db000/0x0/0x1bfc00000, data 0x64a3423/0x65f3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [0,1,2,3,5] op hist [0,0,0,0,1])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 196 ms_handle_reset con 0x558a60c44000 session 0x558a621230e0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:55.357935+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 146505728 unmapped: 30531584 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6466c000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 196 ms_handle_reset con 0x558a6466c000 session 0x558a621270e0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a67264000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 196 handle_osd_map epochs [196,197], i have 196, src has [1,197]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 197 handle_osd_map epochs [197,197], i have 197, src has [1,197]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 197 handle_osd_map epochs [197,197], i have 197, src has [1,197]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 197 ms_handle_reset con 0x558a67264000 session 0x558a65a29e00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:56.358074+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 146571264 unmapped: 30466048 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 197 ms_handle_reset con 0x558a5f807800 session 0x558a629c50e0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:57.358230+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 146636800 unmapped: 30400512 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60c44000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 197 ms_handle_reset con 0x558a60c44000 session 0x558a65a283c0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 197 heartbeat osd_stat(store_statfs(0x1b42da000/0x0/0x1bfc00000, data 0x64a56dc/0x65f4000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6466c000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 197 ms_handle_reset con 0x558a6466c000 session 0x558a658630e0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:58.358422+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 146636800 unmapped: 30400512 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:59.358644+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 147701760 unmapped: 29335552 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2072784 data_alloc: 301989888 data_used: 21331968
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6467ac00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 197 ms_handle_reset con 0x558a6467ac00 session 0x558a6249a780
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:00.358827+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 147726336 unmapped: 29310976 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 197 heartbeat osd_stat(store_statfs(0x1b42db000/0x0/0x1bfc00000, data 0x64a5804/0x65f3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:01.358984+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 147726336 unmapped: 29310976 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:02.359200+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 147726336 unmapped: 29310976 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.654720306s of 10.488033295s, submitted: 197
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a67265400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 197 handle_osd_map epochs [197,198], i have 197, src has [1,198]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 198 handle_osd_map epochs [198,198], i have 198, src has [1,198]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 198 ms_handle_reset con 0x558a67265400 session 0x558a668830e0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:03.359384+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 147775488 unmapped: 29261824 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:04.359572+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 147775488 unmapped: 29261824 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2076757 data_alloc: 301989888 data_used: 21344256
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:05.359793+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 147783680 unmapped: 29253632 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:06.359963+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 147783680 unmapped: 29253632 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 198 heartbeat osd_stat(store_statfs(0x1b42d8000/0x0/0x1bfc00000, data 0x64a7a23/0x65f5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:07.360147+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 147783680 unmapped: 29253632 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:08.360319+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 147783680 unmapped: 29253632 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:09.360553+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 147783680 unmapped: 29253632 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2074321 data_alloc: 301989888 data_used: 21344256
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:10.360730+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 147783680 unmapped: 29253632 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:11.361200+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 147783680 unmapped: 29253632 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:12.361365+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 147800064 unmapped: 29237248 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 198 heartbeat osd_stat(store_statfs(0x1b42d8000/0x0/0x1bfc00000, data 0x64a79f0/0x65f5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a67265400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:13.361812+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 147800064 unmapped: 29237248 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 11.136950493s of 11.397010803s, submitted: 72
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 198 ms_handle_reset con 0x558a67265400 session 0x558a6235d860
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:14.361954+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 147808256 unmapped: 29229056 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2079761 data_alloc: 301989888 data_used: 21344256
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:15.362246+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 198 heartbeat osd_stat(store_statfs(0x1b42d6000/0x0/0x1bfc00000, data 0x64a7afa/0x65f7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 147808256 unmapped: 29229056 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:16.362436+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 147808256 unmapped: 29229056 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 198 ms_handle_reset con 0x558a5f807800 session 0x558a611f7e00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60c44000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:17.362602+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 156262400 unmapped: 20774912 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:18.362758+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 156278784 unmapped: 20758528 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6466c000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 198 ms_handle_reset con 0x558a6466c000 session 0x558a65ebde00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6467ac00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:19.363070+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 198 heartbeat osd_stat(store_statfs(0x1b2ad5000/0x0/0x1bfc00000, data 0x7ca7b61/0x7df9000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 156311552 unmapped: 20725760 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2361141 data_alloc: 301989888 data_used: 21348352
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 198 ms_handle_reset con 0x558a6467ac00 session 0x558a5fff4000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:20.363440+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5fdf2000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 149159936 unmapped: 27877376 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 198 ms_handle_reset con 0x558a5fdf2000 session 0x558a627b8f00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:21.363802+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 157622272 unmapped: 19415040 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:22.364002+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 149397504 unmapped: 27639808 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 198 ms_handle_reset con 0x558a5f807800 session 0x558a629c45a0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5fdf2000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:23.364346+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 149504000 unmapped: 27533312 heap: 177037312 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 198 ms_handle_reset con 0x558a5fdf2000 session 0x558a66e492c0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 198 heartbeat osd_stat(store_statfs(0x1ad2d1000/0x0/0x1bfc00000, data 0xd4a7ca0/0xd5fc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6466c000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.306979179s of 10.048179626s, submitted: 113
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 198 ms_handle_reset con 0x558a6466c000 session 0x558a65862960
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:24.364497+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6467ac00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 198 ms_handle_reset con 0x558a6467ac00 session 0x558a624174a0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 149733376 unmapped: 31506432 heap: 181239808 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3110060 data_alloc: 301989888 data_used: 21348352
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:25.364653+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 149815296 unmapped: 31424512 heap: 181239808 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a67265400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 198 ms_handle_reset con 0x558a67265400 session 0x558a641610e0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:26.364801+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 149979136 unmapped: 31260672 heap: 181239808 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5fdf2000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 198 ms_handle_reset con 0x558a5fdf2000 session 0x558a6247c3c0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 198 handle_osd_map epochs [198,199], i have 198, src has [1,199]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 199 handle_osd_map epochs [199,199], i have 199, src has [1,199]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 199 handle_osd_map epochs [199,199], i have 199, src has [1,199]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 199 heartbeat osd_stat(store_statfs(0x1a9039000/0x0/0x1bfc00000, data 0x1173cd9e/0x11895000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [0,1,2,3,5] op hist [0,0,0,1,0,0,0,1])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6466c000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:27.364975+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6467ac00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a67265400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 199 ms_handle_reset con 0x558a67265400 session 0x558a5fff41e0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 160276480 unmapped: 29360128 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 199 ms_handle_reset con 0x558a6466c000 session 0x558a63f4b680
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:28.365150+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 199 handle_osd_map epochs [200,200], i have 199, src has [1,200]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a66b9c000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 200 ms_handle_reset con 0x558a66b9c000 session 0x558a621310e0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 200 ms_handle_reset con 0x558a6467ac00 session 0x558a65a89e00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 151060480 unmapped: 38576128 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 200 handle_osd_map epochs [200,200], i have 200, src has [1,200]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 200 ms_handle_reset con 0x558a5f807800 session 0x558a65863860
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:29.365411+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 151134208 unmapped: 38502400 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3819694 data_alloc: 301989888 data_used: 21364736
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:30.365562+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 151199744 unmapped: 38436864 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6467ac00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 200 ms_handle_reset con 0x558a6467ac00 session 0x558a627b9680
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:31.365741+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 151265280 unmapped: 38371328 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5fdf2000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6466c000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 200 ms_handle_reset con 0x558a6466c000 session 0x558a611f7a40
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a66b9c000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 200 ms_handle_reset con 0x558a5fdf2000 session 0x558a629c5e00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:32.365896+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a67265400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 200 heartbeat osd_stat(store_statfs(0x1a36da000/0x0/0x1bfc00000, data 0x17090d55/0x171f4000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 151740416 unmapped: 37896192 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a66b9c400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 200 ms_handle_reset con 0x558a67265400 session 0x558a65862780
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5fdf2000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:33.366135+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 200 ms_handle_reset con 0x558a5fdf2000 session 0x558a62416b40
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6466c000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6467ac00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 160219136 unmapped: 29417472 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 200 ms_handle_reset con 0x558a6467ac00 session 0x558a621310e0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 8.667047501s of 10.038184166s, submitted: 199
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:34.366370+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 152969216 unmapped: 36667392 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 4472866 data_alloc: 301989888 data_used: 21377024
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _renew_subs
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 200 handle_osd_map epochs [201,201], i have 200, src has [1,201]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 200 handle_osd_map epochs [201,201], i have 201, src has [1,201]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 201 handle_osd_map epochs [201,201], i have 201, src has [1,201]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:35.366547+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 161431552 unmapped: 28205056 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 201 ms_handle_reset con 0x558a6466c000 session 0x558a624161e0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 201 ms_handle_reset con 0x558a5f807800 session 0x558a620f3a40
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a67265400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 201 ms_handle_reset con 0x558a67265400 session 0x558a65862d20
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 201 ms_handle_reset con 0x558a66b9c400 session 0x558a63f4bc20
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:36.366727+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 201 ms_handle_reset con 0x558a66b9c000 session 0x558a6494d680
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 161554432 unmapped: 28082176 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5fdf2000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:37.366840+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 161701888 unmapped: 27934720 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 201 heartbeat osd_stat(store_statfs(0x19db3c000/0x0/0x1bfc00000, data 0x1cc2940a/0x1cd91000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [0,1,2,3,5] op hist [0,0,0,0,0,0,0,0,0,1])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 201 ms_handle_reset con 0x558a5f807800 session 0x558a65ebc1e0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:38.366986+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 153436160 unmapped: 36200448 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _renew_subs
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 201 handle_osd_map epochs [202,202], i have 201, src has [1,202]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 201 handle_osd_map epochs [202,202], i have 202, src has [1,202]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 202 handle_osd_map epochs [202,202], i have 202, src has [1,202]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 202 ms_handle_reset con 0x558a5fdf2000 session 0x558a668830e0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6466c000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6467ac00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:39.367140+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 202 heartbeat osd_stat(store_statfs(0x19c33a000/0x0/0x1bfc00000, data 0x1e4293b3/0x1e590000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [0,1,2,3,5] op hist [2,0,0,0,0,1,0,1])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 154779648 unmapped: 34856960 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 4973409 data_alloc: 301989888 data_used: 21397504
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a66b9c800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:40.367293+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 154804224 unmapped: 34832384 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 202 handle_osd_map epochs [203,203], i have 202, src has [1,203]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 202 handle_osd_map epochs [203,203], i have 203, src has [1,203]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _renew_subs
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 202 heartbeat osd_stat(store_statfs(0x19ab37000/0x0/0x1bfc00000, data 0x1fc2dab0/0x1fd95000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [0,1,2,3,5] op hist [0,0,0,0,0,0,0,0,2])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 202 handle_osd_map epochs [203,203], i have 203, src has [1,203]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 202 handle_osd_map epochs [203,203], i have 203, src has [1,203]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 202 handle_osd_map epochs [203,203], i have 203, src has [1,203]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.4] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:41.367432+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 154812416 unmapped: 34824192 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:42.367584+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 154943488 unmapped: 34693120 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 203 ms_handle_reset con 0x558a66b9c800 session 0x558a620ffe00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 203 ms_handle_reset con 0x558a6466c000 session 0x558a621f90e0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 203 handle_osd_map epochs [204,204], i have 203, src has [1,204]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:43.367831+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 154984448 unmapped: 34652160 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 204 handle_osd_map epochs [204,204], i have 204, src has [1,204]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 204 handle_osd_map epochs [204,204], i have 204, src has [1,204]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 8.536086082s of 10.021492958s, submitted: 267
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:44.368027+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 154992640 unmapped: 34643968 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 4998090 data_alloc: 301989888 data_used: 21409792
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:45.368258+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 155009024 unmapped: 34627584 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 204 heartbeat osd_stat(store_statfs(0x19a6cb000/0x0/0x1bfc00000, data 0x20099d2c/0x20200000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 204 ms_handle_reset con 0x558a6467ac00 session 0x558a621230e0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:46.368419+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 155074560 unmapped: 34562048 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 204 handle_osd_map epochs [205,205], i have 204, src has [1,205]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 204 handle_osd_map epochs [205,205], i have 205, src has [1,205]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 205 handle_osd_map epochs [205,205], i have 205, src has [1,205]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.4] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:47.368668+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5fdf2000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 155189248 unmapped: 34447360 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 205 handle_osd_map epochs [205,206], i have 205, src has [1,206]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.4] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 206 handle_osd_map epochs [206,206], i have 206, src has [1,206]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:48.368798+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 206 handle_osd_map epochs [206,206], i have 206, src has [1,206]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 206 ms_handle_reset con 0x558a5f807800 session 0x558a60e1da40
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 206 ms_handle_reset con 0x558a5fdf2000 session 0x558a66fd14a0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 206 handle_osd_map epochs [206,206], i have 206, src has [1,206]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 166420480 unmapped: 23216128 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a66b9c000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:49.368960+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 206 handle_osd_map epochs [206,207], i have 206, src has [1,207]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 207 handle_osd_map epochs [207,207], i have 207, src has [1,207]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 207 ms_handle_reset con 0x558a66b9c000 session 0x558a62416000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 155484160 unmapped: 34152448 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 5148721 data_alloc: 301989888 data_used: 21426176
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 207 handle_osd_map epochs [207,207], i have 207, src has [1,207]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 207 ms_handle_reset con 0x558a5f807800 session 0x558a60e1dc20
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:50.369086+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 207 heartbeat osd_stat(store_statfs(0x199017000/0x0/0x1bfc00000, data 0x21751134/0x218b4000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [0,1,2,3,5] op hist [0,0,0,0,0,0,0,0,1])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 155566080 unmapped: 34070528 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5fdf2000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 207 heartbeat osd_stat(store_statfs(0x198aac000/0x0/0x1bfc00000, data 0x21cbc134/0x21e1f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x532f9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 207 ms_handle_reset con 0x558a5fdf2000 session 0x558a620f2960
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:51.369240+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 207 handle_osd_map epochs [207,208], i have 207, src has [1,208]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 208 handle_osd_map epochs [208,208], i have 208, src has [1,208]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 155648000 unmapped: 33988608 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6466c000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 208 ms_handle_reset con 0x558a6466c000 session 0x558a60e1c960
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:52.369386+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 208 handle_osd_map epochs [209,209], i have 208, src has [1,209]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6467ac00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 209 ms_handle_reset con 0x558a6467ac00 session 0x558a621f8960
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a66b9c400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 156803072 unmapped: 32833536 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 209 ms_handle_reset con 0x558a66b9c400 session 0x558a62127860
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 209 handle_osd_map epochs [209,210], i have 209, src has [1,210]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 209 handle_osd_map epochs [210,210], i have 210, src has [1,210]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.4] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:53.369580+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 156884992 unmapped: 32751616 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 8.878002167s of 10.011971474s, submitted: 284
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 210 handle_osd_map epochs [211,211], i have 210, src has [1,211]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.4] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:54.369727+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 211 ms_handle_reset con 0x558a5f807800 session 0x558a641601e0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 211 handle_osd_map epochs [210,211], i have 211, src has [1,211]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5fdf2000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 167575552 unmapped: 22061056 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 5524715 data_alloc: 301989888 data_used: 21442560
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 211 ms_handle_reset con 0x558a5fdf2000 session 0x558a629c4780
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:55.369932+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 211 heartbeat osd_stat(store_statfs(0x193cfd000/0x0/0x1bfc00000, data 0x254c4f1d/0x25630000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 159391744 unmapped: 30244864 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 211 handle_osd_map epochs [211,212], i have 211, src has [1,212]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 212 handle_osd_map epochs [212,212], i have 212, src has [1,212]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:56.370094+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 212 handle_osd_map epochs [212,212], i have 212, src has [1,212]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6466c000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 159465472 unmapped: 30171136 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 212 ms_handle_reset con 0x558a6466c000 session 0x558a629c4960
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:57.370265+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 167829504 unmapped: 21807104 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 212 handle_osd_map epochs [212,213], i have 212, src has [1,213]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.4] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 213 handle_osd_map epochs [213,213], i have 213, src has [1,213]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 213 heartbeat osd_stat(store_statfs(0x190c7f000/0x0/0x1bfc00000, data 0x2853f4d8/0x286af000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,1,2,3,5] op hist [1])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6467ac00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:58.370386+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 213 ms_handle_reset con 0x558a6467ac00 session 0x558a65a28b40
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 159694848 unmapped: 29941760 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a66b9c400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 213 ms_handle_reset con 0x558a66b9c400 session 0x558a60c94b40
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:59.370537+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 159637504 unmapped: 29999104 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 6043857 data_alloc: 301989888 data_used: 21454848
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 213 ms_handle_reset con 0x558a5f807800 session 0x558a64215860
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:00.370657+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5fdf2000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 213 ms_handle_reset con 0x558a5fdf2000 session 0x558a6211f2c0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 160038912 unmapped: 29597696 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:01.370916+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 213 heartbeat osd_stat(store_statfs(0x18f44d000/0x0/0x1bfc00000, data 0x29d6ba09/0x29ee1000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 160120832 unmapped: 29515776 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:02.371191+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 160202752 unmapped: 29433856 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 213 handle_osd_map epochs [214,214], i have 213, src has [1,214]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 214 heartbeat osd_stat(store_statfs(0x18e44b000/0x0/0x1bfc00000, data 0x2ad6bbfe/0x2aee1000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:03.371490+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 160268288 unmapped: 29368320 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 8.745502472s of 10.076965332s, submitted: 268
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:04.371655+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 160358400 unmapped: 29278208 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 6385863 data_alloc: 301989888 data_used: 21471232
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _renew_subs
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 214 handle_osd_map epochs [215,215], i have 214, src has [1,215]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 215 ms_handle_reset con 0x558a60c44000 session 0x558a63ddba40
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:05.371838+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 160423936 unmapped: 29212672 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6466c000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:06.371979+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 215 ms_handle_reset con 0x558a6466c000 session 0x558a641614a0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6467ac00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 160456704 unmapped: 29179904 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _renew_subs
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 215 handle_osd_map epochs [216,216], i have 215, src has [1,216]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 216 handle_osd_map epochs [216,216], i have 216, src has [1,216]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 216 heartbeat osd_stat(store_statfs(0x18cc45000/0x0/0x1bfc00000, data 0x2c570213/0x2c6e8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,1,2,3,5] op hist [0,0,0,1])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:07.372184+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 160522240 unmapped: 29114368 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:08.372338+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 216 ms_handle_reset con 0x558a6467ac00 session 0x558a602f8b40
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 156950528 unmapped: 32686080 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 216 ms_handle_reset con 0x558a5f807800 session 0x558a63f4a000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5fdf2000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 216 ms_handle_reset con 0x558a5fdf2000 session 0x558a5fff2f00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:09.372508+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60c44000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 216 ms_handle_reset con 0x558a60c44000 session 0x558a610a7e00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 157736960 unmapped: 31899648 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2366631 data_alloc: 301989888 data_used: 21479424
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:10.372872+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 157736960 unmapped: 31899648 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:11.373112+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 157736960 unmapped: 31899648 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6466c000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 216 ms_handle_reset con 0x558a6466c000 session 0x558a63f4a000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 216 heartbeat osd_stat(store_statfs(0x1b2446000/0x0/0x1bfc00000, data 0x6d724da/0x6ee7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:12.373269+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a66b9cc00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 216 heartbeat osd_stat(store_statfs(0x1b2446000/0x0/0x1bfc00000, data 0x6d724da/0x6ee7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 159621120 unmapped: 30015488 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 216 handle_osd_map epochs [216,217], i have 216, src has [1,217]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:13.373464+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 217 handle_osd_map epochs [217,217], i have 217, src has [1,217]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 217 heartbeat osd_stat(store_statfs(0x1b1ebe000/0x0/0x1bfc00000, data 0x72f8553/0x746f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a66b9d000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 159621120 unmapped: 30015488 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 217 ms_handle_reset con 0x558a66b9d000 session 0x558a65ebd860
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 8.876409531s of 10.024756432s, submitted: 303
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:14.373588+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 217 ms_handle_reset con 0x558a5f807800 session 0x558a629c45a0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 159186944 unmapped: 30449664 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2433502 data_alloc: 301989888 data_used: 21491712
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5fdf2000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 217 ms_handle_reset con 0x558a5fdf2000 session 0x558a65a894a0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:15.373793+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 158859264 unmapped: 30777344 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:16.373957+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60c44000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 217 ms_handle_reset con 0x558a60c44000 session 0x558a65ebcf00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6466c000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a66b9d400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 217 ms_handle_reset con 0x558a66b9d400 session 0x558a6195cb40
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 158883840 unmapped: 30752768 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 217 ms_handle_reset con 0x558a6466c000 session 0x558a61f234a0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:17.374181+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 158883840 unmapped: 30752768 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5fdf2000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 217 ms_handle_reset con 0x558a5fdf2000 session 0x558a60c8dc20
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 217 ms_handle_reset con 0x558a5f807800 session 0x558a6494c960
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:18.374405+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 158523392 unmapped: 31113216 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:19.374601+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 217 heartbeat osd_stat(store_statfs(0x1b1e82000/0x0/0x1bfc00000, data 0x732fa87/0x74ac000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 158547968 unmapped: 31088640 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2439099 data_alloc: 301989888 data_used: 21495808
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:20.374780+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60c44000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 217 ms_handle_reset con 0x558a60c44000 session 0x558a64160d20
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 158547968 unmapped: 31088640 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a66b9d400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 217 ms_handle_reset con 0x558a66b9d400 session 0x558a641601e0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:21.374980+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 217 heartbeat osd_stat(store_statfs(0x1b1e80000/0x0/0x1bfc00000, data 0x7332a7e/0x74ae000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 158547968 unmapped: 31088640 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 8400.1 total, 600.0 interval
                                                          Cumulative writes: 18K writes, 69K keys, 18K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.01 MB/s
                                                          Cumulative WAL: 18K writes, 6335 syncs, 2.98 writes per sync, written: 0.05 GB, 0.01 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 12K writes, 43K keys, 12K commit groups, 1.0 writes per commit group, ingest: 27.89 MB, 0.05 MB/s
                                                          Interval WAL: 12K writes, 5517 syncs, 2.34 writes per sync, written: 0.03 GB, 0.05 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a66b9d800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 217 ms_handle_reset con 0x558a66b9d800 session 0x558a621f8960
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:22.375157+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 158588928 unmapped: 31047680 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:23.375319+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 217 ms_handle_reset con 0x558a5f807800 session 0x558a602f8960
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 158597120 unmapped: 31039488 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:24.375530+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 158597120 unmapped: 31039488 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2437732 data_alloc: 301989888 data_used: 21495808
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:25.375717+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 11.206179619s of 11.755118370s, submitted: 129
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 158597120 unmapped: 31039488 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:26.375912+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 158597120 unmapped: 31039488 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 217 heartbeat osd_stat(store_statfs(0x1b1e7f000/0x0/0x1bfc00000, data 0x7332ab7/0x74ae000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:27.376114+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 158597120 unmapped: 31039488 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:28.376303+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 158597120 unmapped: 31039488 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:29.376502+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 158597120 unmapped: 31039488 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2437556 data_alloc: 301989888 data_used: 21495808
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:30.376703+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 158597120 unmapped: 31039488 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:31.376928+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 158597120 unmapped: 31039488 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:32.377101+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 217 heartbeat osd_stat(store_statfs(0x1b1e7e000/0x0/0x1bfc00000, data 0x7332bea/0x74af000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 158597120 unmapped: 31039488 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:33.377315+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 158597120 unmapped: 31039488 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:34.377531+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 158597120 unmapped: 31039488 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2438844 data_alloc: 301989888 data_used: 21495808
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:35.377702+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 158605312 unmapped: 31031296 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:36.377908+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 158605312 unmapped: 31031296 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 10.955339432s of 11.100872993s, submitted: 30
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:37.378105+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 158629888 unmapped: 31006720 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 217 heartbeat osd_stat(store_statfs(0x1b1e7f000/0x0/0x1bfc00000, data 0x7332d87/0x74af000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:38.378268+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 217 handle_osd_map epochs [217,218], i have 217, src has [1,218]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 218 handle_osd_map epochs [218,218], i have 218, src has [1,218]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5fdf2000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 218 handle_osd_map epochs [218,218], i have 218, src has [1,218]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 218 ms_handle_reset con 0x558a5fdf2000 session 0x558a62131a40
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 159711232 unmapped: 29925376 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:39.378676+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 218 handle_osd_map epochs [218,219], i have 218, src has [1,219]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 219 handle_osd_map epochs [219,219], i have 219, src has [1,219]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a60c44000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a66b9d400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 219 ms_handle_reset con 0x558a66b9d400 session 0x558a66fd1860
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 159744000 unmapped: 29892608 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2458435 data_alloc: 301989888 data_used: 21520384
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a66b9dc00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 219 ms_handle_reset con 0x558a66b9dc00 session 0x558a621234a0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:40.378820+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 159744000 unmapped: 29892608 heap: 189636608 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _renew_subs
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 219 handle_osd_map epochs [220,220], i have 219, src has [1,220]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 219 handle_osd_map epochs [220,220], i have 220, src has [1,220]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 220 ms_handle_reset con 0x558a60c44000 session 0x558a60e1da40
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 220 handle_osd_map epochs [219,220], i have 220, src has [1,220]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:41.379007+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5fdf2000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176103424 unmapped: 17735680 heap: 193839104 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:42.379144+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 220 heartbeat osd_stat(store_statfs(0x1b1e6c000/0x0/0x1bfc00000, data 0x733a107/0x74c0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,1,2,3,5] op hist [0,0,0,0,0,0,0,0,2,1])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 159408128 unmapped: 42835968 heap: 202244096 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a66b9d400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 220 ms_handle_reset con 0x558a66b9d400 session 0x558a65cded20
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:43.379275+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 168919040 unmapped: 33325056 heap: 202244096 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 220 handle_osd_map epochs [220,221], i have 220, src has [1,221]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 221 handle_osd_map epochs [221,221], i have 221, src has [1,221]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 221 handle_osd_map epochs [221,221], i have 221, src has [1,221]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 221 handle_osd_map epochs [221,221], i have 221, src has [1,221]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:44.379471+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a66b9dc00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6607e000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 221 ms_handle_reset con 0x558a66b9dc00 session 0x558a6494c000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 221 handle_osd_map epochs [221,221], i have 221, src has [1,221]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 221 heartbeat osd_stat(store_statfs(0x1aaa6b000/0x0/0x1bfc00000, data 0xe73a203/0xe8c1000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,1,2,3,5] op hist [0,0,0,0,0,0,0,0,1])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3542863 data_alloc: 301989888 data_used: 21536768
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 164978688 unmapped: 37265408 heap: 202244096 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 221 handle_osd_map epochs [221,222], i have 221, src has [1,222]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 222 handle_osd_map epochs [222,222], i have 222, src has [1,222]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:45.379634+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 222 ms_handle_reset con 0x558a6607e000 session 0x558a66e49c20
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6607e400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 222 ms_handle_reset con 0x558a6607e400 session 0x558a5fff21e0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 160907264 unmapped: 41336832 heap: 202244096 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:46.379831+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6607e800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 8.039125443s of 10.000274658s, submitted: 288
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 173522944 unmapped: 28721152 heap: 202244096 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:47.379969+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _renew_subs
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 222 handle_osd_map epochs [223,223], i have 222, src has [1,223]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 223 handle_osd_map epochs [223,223], i have 223, src has [1,223]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 223 handle_osd_map epochs [223,223], i have 223, src has [1,223]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.4] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 223 ms_handle_reset con 0x558a6607e800 session 0x558a611fda40
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6607e800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 223 ms_handle_reset con 0x558a6607e800 session 0x558a60e1cb40
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 161030144 unmapped: 49618944 heap: 210649088 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 223 handle_osd_map epochs [224,224], i have 223, src has [1,224]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 224 handle_osd_map epochs [224,224], i have 224, src has [1,224]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:48.380182+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 170524672 unmapped: 48529408 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:49.380363+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6607e000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 224 ms_handle_reset con 0x558a6607e000 session 0x558a66e494a0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 5347346 data_alloc: 301989888 data_used: 21561344
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 167395328 unmapped: 51658752 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:50.380533+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 224 heartbeat osd_stat(store_statfs(0x19925b000/0x0/0x1bfc00000, data 0x1ff44043/0x200d1000, compress 0x0/0x0/0x0, omap 0x649, meta 0x68cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 224 handle_osd_map epochs [225,225], i have 224, src has [1,225]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 224 handle_osd_map epochs [225,225], i have 225, src has [1,225]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 225 handle_osd_map epochs [225,225], i have 225, src has [1,225]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 180125696 unmapped: 38928384 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6607e400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a66b9d400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 225 ms_handle_reset con 0x558a66b9d400 session 0x558a611f6f00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:51.380658+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 225 handle_osd_map epochs [225,226], i have 225, src has [1,226]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _renew_subs
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 225 handle_osd_map epochs [226,226], i have 226, src has [1,226]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 179331072 unmapped: 39723008 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:52.380779+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 226 ms_handle_reset con 0x558a6607e400 session 0x558a61f22780
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 172146688 unmapped: 46907392 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 226 handle_osd_map epochs [226,227], i have 226, src has [1,227]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.4] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 227 handle_osd_map epochs [227,227], i have 227, src has [1,227]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 227 handle_osd_map epochs [227,227], i have 227, src has [1,227]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 227 handle_osd_map epochs [227,227], i have 227, src has [1,227]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:53.380942+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 227 handle_osd_map epochs [227,227], i have 227, src has [1,227]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a66b9dc00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 172752896 unmapped: 46301184 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 227 ms_handle_reset con 0x558a5f807800 session 0x558a6235dc20
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:54.381131+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 227 ms_handle_reset con 0x558a5fdf2000 session 0x558a65a28d20
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 227 ms_handle_reset con 0x558a5f807800 session 0x558a60c8c3c0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 227 heartbeat osd_stat(store_statfs(0x189651000/0x0/0x1bfc00000, data 0x2f74a967/0x2f8dd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6607e000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 227 ms_handle_reset con 0x558a6607e000 session 0x558a658623c0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 7009942 data_alloc: 301989888 data_used: 21585920
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 164511744 unmapped: 54542336 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:55.381301+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 227 handle_osd_map epochs [227,228], i have 227, src has [1,228]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _renew_subs
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 227 handle_osd_map epochs [228,228], i have 228, src has [1,228]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6607e400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 228 ms_handle_reset con 0x558a66b9dc00 session 0x558a6211fc20
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 228 handle_osd_map epochs [228,228], i have 228, src has [1,228]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.4] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6607e800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 165273600 unmapped: 53780480 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 228 ms_handle_reset con 0x558a6607e400 session 0x558a6235da40
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:56.381442+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 228 ms_handle_reset con 0x558a5f807800 session 0x558a62123860
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 228 handle_osd_map epochs [228,229], i have 228, src has [1,229]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.4] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 6.720526695s of 10.004890442s, submitted: 616
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 229 handle_osd_map epochs [229,229], i have 229, src has [1,229]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 165437440 unmapped: 53616640 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 229 ms_handle_reset con 0x558a6607e800 session 0x558a61f72d20
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:57.381570+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 165445632 unmapped: 53608448 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:58.381779+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 229 handle_osd_map epochs [230,230], i have 229, src has [1,230]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 230 handle_osd_map epochs [230,230], i have 230, src has [1,230]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5fdf2000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 230 ms_handle_reset con 0x558a5fdf2000 session 0x558a63f13e00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 165445632 unmapped: 53608448 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:59.381950+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 230 ms_handle_reset con 0x558a66b9cc00 session 0x558a6211fe00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6607e000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2676401 data_alloc: 301989888 data_used: 21606400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 230 heartbeat osd_stat(store_statfs(0x1b1247000/0x0/0x1bfc00000, data 0x73510f9/0x74e2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 165445632 unmapped: 53608448 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 230 handle_osd_map epochs [230,231], i have 230, src has [1,231]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 231 handle_osd_map epochs [231,231], i have 231, src has [1,231]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 231 handle_osd_map epochs [231,231], i have 231, src has [1,231]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:00.382094+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 165470208 unmapped: 53583872 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 231 ms_handle_reset con 0x558a6607e000 session 0x558a62122780
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:01.382255+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 231 heartbeat osd_stat(store_statfs(0x1b28aa000/0x0/0x1bfc00000, data 0x64f25dd/0x6682000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 165470208 unmapped: 53583872 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 231 ms_handle_reset con 0x558a5f807800 session 0x558a611fb860
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:02.382440+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5fdf2000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 165470208 unmapped: 53583872 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:03.382636+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 231 handle_osd_map epochs [231,232], i have 231, src has [1,232]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.4] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 232 handle_osd_map epochs [232,232], i have 232, src has [1,232]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.5] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 165502976 unmapped: 53551104 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[5.c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 232 handle_osd_map epochs [232,232], i have 232, src has [1,232]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:04.382821+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 232 handle_osd_map epochs [232,233], i have 232, src has [1,233]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc handle_mgr_map Got map version 51
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/1895764224,v1:172.18.0.106:6811/1895764224]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 233 handle_osd_map epochs [233,233], i have 233, src has [1,233]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 233 ms_handle_reset con 0x558a5fdf2000 session 0x558a6247d680
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2576467 data_alloc: 301989888 data_used: 21614592
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 165543936 unmapped: 53510144 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:05.382957+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 233 handle_osd_map epochs [233,234], i have 233, src has [1,234]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6607e800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 234 handle_osd_map epochs [234,234], i have 234, src has [1,234]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 234 handle_osd_map epochs [234,234], i have 234, src has [1,234]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a66b9cc00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 234 ms_handle_reset con 0x558a66b9cc00 session 0x558a6211e780
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 234 ms_handle_reset con 0x558a6607e800 session 0x558a620fe1e0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 165560320 unmapped: 53493760 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6607e400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 234 ms_handle_reset con 0x558a6607e400 session 0x558a61204000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:06.383123+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 234 heartbeat osd_stat(store_statfs(0x1b28a2000/0x0/0x1bfc00000, data 0x64f6d31/0x668b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 234 handle_osd_map epochs [235,235], i have 234, src has [1,235]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6607e400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 235 handle_osd_map epochs [235,235], i have 235, src has [1,235]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.111031532s of 10.010788918s, submitted: 251
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 235 handle_osd_map epochs [235,235], i have 235, src has [1,235]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 235 ms_handle_reset con 0x558a6607e400 session 0x558a620fef00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 165609472 unmapped: 53444608 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5fdf2000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:07.383325+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 235 ms_handle_reset con 0x558a5f807800 session 0x558a602f9c20
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 235 ms_handle_reset con 0x558a5fdf2000 session 0x558a627b8b40
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6607e800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 235 handle_osd_map epochs [235,236], i have 235, src has [1,236]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 165650432 unmapped: 53403648 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 236 ms_handle_reset con 0x558a6607e800 session 0x558a65a283c0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a66b9cc00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:08.383500+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 236 handle_osd_map epochs [236,237], i have 236, src has [1,237]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 165666816 unmapped: 53387264 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 237 ms_handle_reset con 0x558a66b9cc00 session 0x558a65d143c0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:09.383685+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2586333 data_alloc: 301989888 data_used: 21643264
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 166723584 unmapped: 52330496 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a66b9cc00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 237 ms_handle_reset con 0x558a66b9cc00 session 0x558a6235c3c0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:10.383850+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 237 ms_handle_reset con 0x558a5f807800 session 0x558a63d7c1e0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 166780928 unmapped: 52273152 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 237 heartbeat osd_stat(store_statfs(0x1b2894000/0x0/0x1bfc00000, data 0x64ffbce/0x6697000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:11.384050+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 166780928 unmapped: 52273152 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:12.384257+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 237 handle_osd_map epochs [238,238], i have 237, src has [1,238]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 237 handle_osd_map epochs [238,238], i have 238, src has [1,238]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 166805504 unmapped: 52248576 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:13.384464+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 238 handle_osd_map epochs [238,238], i have 238, src has [1,238]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 166805504 unmapped: 52248576 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5fdf2000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 238 ms_handle_reset con 0x558a5fdf2000 session 0x558a6494c5a0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:14.384637+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 238 heartbeat osd_stat(store_statfs(0x1b288f000/0x0/0x1bfc00000, data 0x65020e7/0x669d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2594186 data_alloc: 301989888 data_used: 21651456
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 166813696 unmapped: 52240384 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6607e400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 238 ms_handle_reset con 0x558a6607e400 session 0x558a65a29860
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6607e800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:15.384795+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 238 ms_handle_reset con 0x558a6607e800 session 0x558a5fff3e00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 166813696 unmapped: 52240384 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:16.385067+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6607e800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 238 ms_handle_reset con 0x558a6607e800 session 0x558a60c8c3c0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 238 heartbeat osd_stat(store_statfs(0x1b288e000/0x0/0x1bfc00000, data 0x650211d/0x669c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.208695412s of 10.024385452s, submitted: 205
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 166813696 unmapped: 52240384 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 238 handle_osd_map epochs [239,239], i have 238, src has [1,239]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:17.385221+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 239 ms_handle_reset con 0x558a5f807800 session 0x558a65a28d20
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 166846464 unmapped: 52207616 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:18.385349+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5fdf2000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 166854656 unmapped: 52199424 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 239 ms_handle_reset con 0x558a5fdf2000 session 0x558a63ddb680
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:19.385521+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2592836 data_alloc: 301989888 data_used: 21663744
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 166862848 unmapped: 52191232 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:20.386579+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 166862848 unmapped: 52191232 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:21.388432+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 166871040 unmapped: 52183040 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:22.389374+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 239 heartbeat osd_stat(store_statfs(0x1b288f000/0x0/0x1bfc00000, data 0x6504415/0x669e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 239 handle_osd_map epochs [239,240], i have 239, src has [1,240]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 166649856 unmapped: 52404224 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 240 heartbeat osd_stat(store_statfs(0x1b288f000/0x0/0x1bfc00000, data 0x6504415/0x669e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:23.389602+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 166649856 unmapped: 52404224 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:24.390055+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2595150 data_alloc: 301989888 data_used: 21676032
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 166649856 unmapped: 52404224 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:25.390288+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 166649856 unmapped: 52404224 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:26.390705+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6607e400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 240 ms_handle_reset con 0x558a6607e400 session 0x558a66e494a0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.714587212s of 10.002608299s, submitted: 82
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 166658048 unmapped: 52396032 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:27.390951+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a66b9cc00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 240 heartbeat osd_stat(store_statfs(0x1b288a000/0x0/0x1bfc00000, data 0x650676c/0x66a4000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 240 handle_osd_map epochs [241,241], i have 240, src has [1,241]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 241 ms_handle_reset con 0x558a66b9cc00 session 0x558a611fda40
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 166666240 unmapped: 52387840 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:28.391107+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 241 handle_osd_map epochs [241,241], i have 241, src has [1,241]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a66b9cc00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 166674432 unmapped: 52379648 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:29.391972+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 241 handle_osd_map epochs [241,242], i have 241, src has [1,242]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 241 handle_osd_map epochs [242,242], i have 242, src has [1,242]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 242 ms_handle_reset con 0x558a66b9cc00 session 0x558a611f4780
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2608396 data_alloc: 301989888 data_used: 21692416
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 166723584 unmapped: 52330496 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:30.392684+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 242 handle_osd_map epochs [242,243], i have 242, src has [1,243]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 243 handle_osd_map epochs [243,243], i have 243, src has [1,243]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 243 ms_handle_reset con 0x558a5f807800 session 0x558a60c95c20
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 243 heartbeat osd_stat(store_statfs(0x1b287a000/0x0/0x1bfc00000, data 0x650d338/0x66b2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 166748160 unmapped: 52305920 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:31.392812+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5fdf2000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 243 ms_handle_reset con 0x558a5fdf2000 session 0x558a622103c0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6607e400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 166764544 unmapped: 52289536 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:32.394357+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 243 handle_osd_map epochs [244,244], i have 243, src has [1,244]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 244 ms_handle_reset con 0x558a6607e400 session 0x558a60e1da40
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 166772736 unmapped: 52281344 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:33.394859+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 244 heartbeat osd_stat(store_statfs(0x1b2878000/0x0/0x1bfc00000, data 0x650f660/0x66b5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6607e800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 244 ms_handle_reset con 0x558a6607e800 session 0x558a611fc5a0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 166772736 unmapped: 52281344 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:34.395460+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2614871 data_alloc: 301989888 data_used: 21692416
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 244 handle_osd_map epochs [245,245], i have 244, src has [1,245]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 245 handle_osd_map epochs [245,245], i have 245, src has [1,245]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 245 ms_handle_reset con 0x558a5f807800 session 0x558a61f225a0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 166797312 unmapped: 52256768 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:35.395643+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5fdf2000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 245 ms_handle_reset con 0x558a5fdf2000 session 0x558a621f8960
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6607e400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 166821888 unmapped: 52232192 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:36.395940+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 245 handle_osd_map epochs [245,245], i have 245, src has [1,245]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.468883514s of 10.003805161s, submitted: 125
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 166830080 unmapped: 52224000 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 245 handle_osd_map epochs [246,246], i have 245, src has [1,246]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:37.396077+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 246 ms_handle_reset con 0x558a6607e400 session 0x558a641601e0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 166854656 unmapped: 52199424 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc handle_mgr_map Got map version 52
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/1895764224,v1:172.18.0.106:6811/1895764224]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 246 handle_osd_map epochs [247,247], i have 246, src has [1,247]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a66b9cc00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:38.396199+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 247 handle_osd_map epochs [247,247], i have 247, src has [1,247]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 247 ms_handle_reset con 0x558a66b9cc00 session 0x558a6494c960
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a66b9dc00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 247 ms_handle_reset con 0x558a66b9dc00 session 0x558a60c8dc20
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 166879232 unmapped: 52174848 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:39.396378+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 247 heartbeat osd_stat(store_statfs(0x1b286c000/0x0/0x1bfc00000, data 0x65161fd/0x66c1000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2626758 data_alloc: 301989888 data_used: 21700608
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 166862848 unmapped: 52191232 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:40.396528+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 166871040 unmapped: 52183040 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:41.396660+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 166871040 unmapped: 52183040 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:42.396800+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 247 heartbeat osd_stat(store_statfs(0x1b286c000/0x0/0x1bfc00000, data 0x6516399/0x66c1000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 247 handle_osd_map epochs [248,248], i have 247, src has [1,248]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 247 handle_osd_map epochs [248,248], i have 248, src has [1,248]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 247 handle_osd_map epochs [248,248], i have 248, src has [1,248]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 166903808 unmapped: 52150272 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:43.397011+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 248 handle_osd_map epochs [248,248], i have 248, src has [1,248]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 166903808 unmapped: 52150272 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 248 heartbeat osd_stat(store_statfs(0x1b2866000/0x0/0x1bfc00000, data 0x651874b/0x66c7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:44.397218+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 248 handle_osd_map epochs [248,249], i have 248, src has [1,249]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 249 handle_osd_map epochs [249,249], i have 249, src has [1,249]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 249 handle_osd_map epochs [249,249], i have 249, src has [1,249]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5f807800
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 249 ms_handle_reset con 0x558a5f807800 session 0x558a65ebcf00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2638400 data_alloc: 301989888 data_used: 21712896
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 166928384 unmapped: 52125696 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:45.397503+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 249 handle_osd_map epochs [249,250], i have 249, src has [1,250]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 250 handle_osd_map epochs [250,250], i have 250, src has [1,250]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a5fdf2000
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 166961152 unmapped: 52092928 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:46.397832+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 250 ms_handle_reset con 0x558a5fdf2000 session 0x558a629c45a0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.372380257s of 10.005574226s, submitted: 184
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 166985728 unmapped: 52068352 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:47.397951+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 166985728 unmapped: 52068352 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:48.398101+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 250 heartbeat osd_stat(store_statfs(0x1b285c000/0x0/0x1bfc00000, data 0x651cda0/0x66ce000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 166985728 unmapped: 52068352 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:49.398404+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2638176 data_alloc: 301989888 data_used: 21712896
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 166985728 unmapped: 52068352 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:50.398723+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 250 heartbeat osd_stat(store_statfs(0x1b285f000/0x0/0x1bfc00000, data 0x651cd73/0x66ce000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 250 heartbeat osd_stat(store_statfs(0x1b285f000/0x0/0x1bfc00000, data 0x651cd73/0x66ce000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 166985728 unmapped: 52068352 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:51.399008+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:52.399302+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 166985728 unmapped: 52068352 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 250 handle_osd_map epochs [250,251], i have 250, src has [1,251]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1a] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.17] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:53.400045+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 167002112 unmapped: 52051968 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 251 heartbeat osd_stat(store_statfs(0x1b285b000/0x0/0x1bfc00000, data 0x651f026/0x66d2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.b] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[3.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:54.400540+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 167002112 unmapped: 52051968 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2640822 data_alloc: 301989888 data_used: 21741568
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:55.401317+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 167002112 unmapped: 52051968 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:56.401658+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 167002112 unmapped: 52051968 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:57.402321+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 167002112 unmapped: 52051968 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 10.904427528s of 11.025944710s, submitted: 32
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:58.402581+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 167051264 unmapped: 52002816 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 251 heartbeat osd_stat(store_statfs(0x1b2860000/0x0/0x1bfc00000, data 0x651f024/0x66ce000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:59.402850+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 167051264 unmapped: 52002816 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2641034 data_alloc: 301989888 data_used: 21741568
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:00.403377+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 167051264 unmapped: 52002816 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:01.403648+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 168198144 unmapped: 50855936 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 251 heartbeat osd_stat(store_statfs(0x1b285f000/0x0/0x1bfc00000, data 0x651f189/0x66cf000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 251 handle_osd_map epochs [251,252], i have 251, src has [1,252]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:02.403781+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 168370176 unmapped: 50683904 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:03.403972+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 168648704 unmapped: 50405376 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:04.404304+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 168648704 unmapped: 50405376 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2656540 data_alloc: 301989888 data_used: 21757952
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:05.404449+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 168984576 unmapped: 50069504 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:06.404786+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 169189376 unmapped: 49864704 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:07.405291+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 169205760 unmapped: 49848320 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 252 heartbeat osd_stat(store_statfs(0x1b27ba000/0x0/0x1bfc00000, data 0x65bd3b9/0x6774000, compress 0x0/0x0/0x0, omap 0x649, meta 0x6ccf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:08.405682+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 169246720 unmapped: 49807360 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 252 heartbeat osd_stat(store_statfs(0x1b23a1000/0x0/0x1bfc00000, data 0x65d4def/0x678d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x70cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 252 handle_osd_map epochs [253,253], i have 252, src has [1,253]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 252 handle_osd_map epochs [253,253], i have 253, src has [1,253]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 252 handle_osd_map epochs [253,253], i have 253, src has [1,253]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 10.268705368s of 10.732069016s, submitted: 104
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:09.405921+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 253 handle_osd_map epochs [254,254], i have 253, src has [1,254]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 169566208 unmapped: 49487872 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2684094 data_alloc: 301989888 data_used: 21770240
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:10.406119+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 170065920 unmapped: 48988160 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:11.406280+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 170311680 unmapped: 48742400 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:12.406480+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 170909696 unmapped: 48144384 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 254 heartbeat osd_stat(store_statfs(0x1b2301000/0x0/0x1bfc00000, data 0x6674f6c/0x682c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x70cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 254 handle_osd_map epochs [254,255], i have 254, src has [1,255]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:13.406670+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 170909696 unmapped: 48144384 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 255 handle_osd_map epochs [255,255], i have 255, src has [1,255]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:14.406870+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 170909696 unmapped: 48144384 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2689950 data_alloc: 301989888 data_used: 21786624
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:15.407146+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 172048384 unmapped: 47005696 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:16.407369+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 172072960 unmapped: 46981120 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:17.407579+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 172154880 unmapped: 46899200 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 255 heartbeat osd_stat(store_statfs(0x1b227d000/0x0/0x1bfc00000, data 0x66f5bd0/0x68b0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x70cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:18.407870+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 171614208 unmapped: 47439872 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.620461464s of 10.286121368s, submitted: 171
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:19.408206+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 171614208 unmapped: 47439872 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2692184 data_alloc: 301989888 data_used: 21786624
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:20.408447+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 171614208 unmapped: 47439872 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 255 heartbeat osd_stat(store_statfs(0x1b2267000/0x0/0x1bfc00000, data 0x670cc28/0x68c7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x70cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:21.408646+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 171614208 unmapped: 47439872 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:22.408918+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 171720704 unmapped: 47333376 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:23.409220+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 171720704 unmapped: 47333376 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:24.409375+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 172883968 unmapped: 46170112 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 255 heartbeat osd_stat(store_statfs(0x1b21e2000/0x0/0x1bfc00000, data 0x67922e4/0x694c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x70cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc handle_mgr_map Got map version 53
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/1895764224,v1:172.18.0.106:6811/1895764224]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2707724 data_alloc: 301989888 data_used: 21786624
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:25.409819+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 173056000 unmapped: 45998080 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:26.410430+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 173072384 unmapped: 45981696 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:27.410937+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 173187072 unmapped: 45867008 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:28.411345+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 173359104 unmapped: 45694976 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.592577934s of 10.085275650s, submitted: 109
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 255 heartbeat osd_stat(store_statfs(0x1b214f000/0x0/0x1bfc00000, data 0x6822647/0x69dc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x70cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:29.411679+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 173531136 unmapped: 45522944 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:30.411961+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2707440 data_alloc: 301989888 data_used: 21786624
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 173449216 unmapped: 45604864 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:31.412110+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 174604288 unmapped: 44449792 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:32.412296+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 174612480 unmapped: 44441600 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:33.412522+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 174612480 unmapped: 44441600 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 255 heartbeat osd_stat(store_statfs(0x1b30cd000/0x0/0x1bfc00000, data 0x68a5068/0x6a60000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:34.412672+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 255 heartbeat osd_stat(store_statfs(0x1b3094000/0x0/0x1bfc00000, data 0x68de154/0x6a99000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 174686208 unmapped: 44367872 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:35.412827+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2725776 data_alloc: 301989888 data_used: 21786624
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 174694400 unmapped: 44359680 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:36.412975+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 173547520 unmapped: 45506560 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:37.413124+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 173563904 unmapped: 45490176 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:38.413289+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 255 heartbeat osd_stat(store_statfs(0x1b304e000/0x0/0x1bfc00000, data 0x6928e61/0x6ae0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 173572096 unmapped: 45481984 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:39.413486+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 173572096 unmapped: 45481984 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 255 heartbeat osd_stat(store_statfs(0x1b304e000/0x0/0x1bfc00000, data 0x6928e61/0x6ae0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:40.413922+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2725174 data_alloc: 301989888 data_used: 21786624
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 173572096 unmapped: 45481984 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 11.518424988s of 11.934701920s, submitted: 89
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:41.414249+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 173572096 unmapped: 45481984 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:42.414416+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 173572096 unmapped: 45481984 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:43.414569+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 173572096 unmapped: 45481984 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:44.414793+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 173580288 unmapped: 45473792 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 255 heartbeat osd_stat(store_statfs(0x1b304c000/0x0/0x1bfc00000, data 0x692915a/0x6ae1000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:45.414962+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2722692 data_alloc: 301989888 data_used: 21786624
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 173580288 unmapped: 45473792 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:46.415103+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 173580288 unmapped: 45473792 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 255 handle_osd_map epochs [256,256], i have 255, src has [1,256]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:47.415303+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 173588480 unmapped: 45465600 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:48.415500+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 173588480 unmapped: 45465600 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 256 heartbeat osd_stat(store_statfs(0x1b304c000/0x0/0x1bfc00000, data 0x692b583/0x6ae2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:49.415796+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 173588480 unmapped: 45465600 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:50.416045+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2723494 data_alloc: 301989888 data_used: 21798912
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 173588480 unmapped: 45465600 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:51.416203+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.960661888s of 10.331076622s, submitted: 84
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 173588480 unmapped: 45465600 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:52.416426+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 173588480 unmapped: 45465600 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 256 handle_osd_map epochs [256,257], i have 256, src has [1,257]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:53.416565+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 173604864 unmapped: 45449216 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 257 heartbeat osd_stat(store_statfs(0x1b3045000/0x0/0x1bfc00000, data 0x692d984/0x6ae7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:54.416739+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 173604864 unmapped: 45449216 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:55.416894+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 257 heartbeat osd_stat(store_statfs(0x1b3047000/0x0/0x1bfc00000, data 0x692d982/0x6ae7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2728216 data_alloc: 301989888 data_used: 21811200
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 173613056 unmapped: 45441024 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:56.417080+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 174661632 unmapped: 44392448 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:57.417225+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 174710784 unmapped: 44343296 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:58.417384+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 174710784 unmapped: 44343296 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:59.417586+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 174710784 unmapped: 44343296 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:00.417720+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2729310 data_alloc: 301989888 data_used: 21811200
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 174710784 unmapped: 44343296 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 257 heartbeat osd_stat(store_statfs(0x1b3047000/0x0/0x1bfc00000, data 0x692da85/0x6ae7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:01.417922+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 174710784 unmapped: 44343296 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 10.483118057s of 10.598681450s, submitted: 32
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:02.418075+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 174710784 unmapped: 44343296 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:03.418238+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 174718976 unmapped: 44335104 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:04.418400+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 174718976 unmapped: 44335104 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:05.418585+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2729550 data_alloc: 301989888 data_used: 21811200
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 174718976 unmapped: 44335104 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:06.418770+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 257 heartbeat osd_stat(store_statfs(0x1b3046000/0x0/0x1bfc00000, data 0x692dbb4/0x6ae8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 174718976 unmapped: 44335104 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:07.418969+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 174718976 unmapped: 44335104 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:08.419218+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 174718976 unmapped: 44335104 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:09.419429+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 174718976 unmapped: 44335104 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:10.419605+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2731026 data_alloc: 301989888 data_used: 21811200
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 174718976 unmapped: 44335104 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 257 heartbeat osd_stat(store_statfs(0x1b3047000/0x0/0x1bfc00000, data 0x692dcb5/0x6ae7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:11.419736+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 174727168 unmapped: 44326912 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.913599968s of 10.000055313s, submitted: 18
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:12.419869+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 174727168 unmapped: 44326912 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:13.420042+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 174727168 unmapped: 44326912 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 257 heartbeat osd_stat(store_statfs(0x1b3046000/0x0/0x1bfc00000, data 0x692de1a/0x6ae8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:14.420198+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 174735360 unmapped: 44318720 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:15.420390+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2734370 data_alloc: 301989888 data_used: 21811200
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 174735360 unmapped: 44318720 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:16.420657+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 174759936 unmapped: 44294144 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:17.420809+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 257 handle_osd_map epochs [257,258], i have 257, src has [1,258]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 174792704 unmapped: 44261376 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:18.420979+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 258 handle_osd_map epochs [258,259], i have 258, src has [1,259]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 175849472 unmapped: 43204608 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:19.421164+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 259 heartbeat osd_stat(store_statfs(0x1b303c000/0x0/0x1bfc00000, data 0x693294a/0x6af1000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 175849472 unmapped: 43204608 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:20.421303+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2743698 data_alloc: 301989888 data_used: 21839872
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 175849472 unmapped: 43204608 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:21.421430+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 175849472 unmapped: 43204608 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.510382652s of 10.003357887s, submitted: 170
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:22.421642+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 175857664 unmapped: 43196416 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 259 handle_osd_map epochs [259,260], i have 259, src has [1,260]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:23.421783+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 175865856 unmapped: 43188224 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:24.421961+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 260 handle_osd_map epochs [260,261], i have 260, src has [1,261]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 261 handle_osd_map epochs [261,261], i have 261, src has [1,261]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 261 handle_osd_map epochs [261,261], i have 261, src has [1,261]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 175874048 unmapped: 43180032 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc handle_mgr_map Got map version 54
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/1895764224,v1:172.18.0.106:6811/1895764224]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:25.422207+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 261 heartbeat osd_stat(store_statfs(0x1b3033000/0x0/0x1bfc00000, data 0x69372b2/0x6afa000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [0,0,0,0,0,0,0,0,0,2])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2753770 data_alloc: 301989888 data_used: 21864448
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 175882240 unmapped: 43171840 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _renew_subs
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 261 handle_osd_map epochs [262,262], i have 261, src has [1,262]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:26.422366+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 175898624 unmapped: 43155456 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 262 handle_osd_map epochs [262,263], i have 262, src has [1,263]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 263 handle_osd_map epochs [263,263], i have 263, src has [1,263]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:27.422545+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 175931392 unmapped: 43122688 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _renew_subs
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 263 handle_osd_map epochs [264,264], i have 263, src has [1,264]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 264 heartbeat osd_stat(store_statfs(0x1b302d000/0x0/0x1bfc00000, data 0x693ba9d/0x6b00000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:28.422681+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 175955968 unmapped: 43098112 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:29.422932+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 175964160 unmapped: 43089920 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:30.423073+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2762076 data_alloc: 301989888 data_used: 21864448
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 175972352 unmapped: 43081728 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 264 heartbeat osd_stat(store_statfs(0x1b302b000/0x0/0x1bfc00000, data 0x693de95/0x6b03000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:31.423233+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 175972352 unmapped: 43081728 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.490006447s of 10.004901886s, submitted: 158
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 264 handle_osd_map epochs [264,265], i have 264, src has [1,265]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:32.423689+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 175980544 unmapped: 43073536 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _renew_subs
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 265 handle_osd_map epochs [266,266], i have 265, src has [1,266]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:33.424027+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 175988736 unmapped: 43065344 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:34.424192+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 266 heartbeat osd_stat(store_statfs(0x1b3022000/0x0/0x1bfc00000, data 0x6942500/0x6b0b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 175988736 unmapped: 43065344 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:35.424328+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2767532 data_alloc: 301989888 data_used: 21876736
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 175988736 unmapped: 43065344 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:36.424463+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 175996928 unmapped: 43057152 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:37.424592+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 175996928 unmapped: 43057152 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 266 handle_osd_map epochs [266,267], i have 266, src has [1,267]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 267 heartbeat osd_stat(store_statfs(0x1b3022000/0x0/0x1bfc00000, data 0x6942500/0x6b0b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:38.424738+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176005120 unmapped: 43048960 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:39.424961+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176005120 unmapped: 43048960 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:40.425115+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2771934 data_alloc: 301989888 data_used: 21889024
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176005120 unmapped: 43048960 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:41.425304+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 267 heartbeat osd_stat(store_statfs(0x1b301e000/0x0/0x1bfc00000, data 0x69448b3/0x6b10000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176005120 unmapped: 43048960 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.791350365s of 10.000575066s, submitted: 69
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 267 handle_osd_map epochs [268,268], i have 267, src has [1,268]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:42.425475+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176013312 unmapped: 43040768 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:43.425651+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176021504 unmapped: 43032576 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:44.425839+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 268 heartbeat osd_stat(store_statfs(0x1b3017000/0x0/0x1bfc00000, data 0x6946eb4/0x6b16000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176021504 unmapped: 43032576 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:45.425973+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2777060 data_alloc: 301989888 data_used: 21901312
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176037888 unmapped: 43016192 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:46.426112+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176037888 unmapped: 43016192 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 268 heartbeat osd_stat(store_statfs(0x1b301a000/0x0/0x1bfc00000, data 0x6946e48/0x6b14000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 268 handle_osd_map epochs [268,269], i have 268, src has [1,269]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:47.426283+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 269 handle_osd_map epochs [269,269], i have 269, src has [1,269]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 269 heartbeat osd_stat(store_statfs(0x1b3016000/0x0/0x1bfc00000, data 0x69491be/0x6b17000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176070656 unmapped: 42983424 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:48.426427+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _renew_subs
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 269 handle_osd_map epochs [270,270], i have 269, src has [1,270]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176078848 unmapped: 42975232 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:49.426662+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176078848 unmapped: 42975232 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:50.426865+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2782418 data_alloc: 301989888 data_used: 21913600
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176078848 unmapped: 42975232 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:51.427037+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 270 heartbeat osd_stat(store_statfs(0x1b3014000/0x0/0x1bfc00000, data 0x694b551/0x6b1a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176078848 unmapped: 42975232 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:52.427197+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176078848 unmapped: 42975232 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 270 handle_osd_map epochs [270,271], i have 270, src has [1,271]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 10.907083511s of 11.197980881s, submitted: 101
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:53.427384+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 271 handle_osd_map epochs [271,271], i have 271, src has [1,271]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 271 handle_osd_map epochs [271,271], i have 271, src has [1,271]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 271 handle_osd_map epochs [271,271], i have 271, src has [1,271]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176603136 unmapped: 42450944 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:54.427570+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc handle_mgr_map Got map version 55
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/1895764224,v1:172.18.0.106:6811/1895764224]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176373760 unmapped: 42680320 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 271 heartbeat osd_stat(store_statfs(0x1b300f000/0x0/0x1bfc00000, data 0x694d7bf/0x6b1e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:55.427727+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 271 heartbeat osd_stat(store_statfs(0x1b300f000/0x0/0x1bfc00000, data 0x694d7bf/0x6b1e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2785740 data_alloc: 301989888 data_used: 21925888
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176373760 unmapped: 42680320 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 271 heartbeat osd_stat(store_statfs(0x1b300f000/0x0/0x1bfc00000, data 0x694d7bf/0x6b1e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:56.427861+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176373760 unmapped: 42680320 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:57.428077+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 271 heartbeat osd_stat(store_statfs(0x1b300f000/0x0/0x1bfc00000, data 0x694d889/0x6b1f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176373760 unmapped: 42680320 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:58.428253+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 271 heartbeat osd_stat(store_statfs(0x1b300f000/0x0/0x1bfc00000, data 0x694d889/0x6b1f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176373760 unmapped: 42680320 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:59.428470+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176373760 unmapped: 42680320 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:00.428658+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2786836 data_alloc: 301989888 data_used: 21925888
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176373760 unmapped: 42680320 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:01.428821+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176373760 unmapped: 42680320 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 271 heartbeat osd_stat(store_statfs(0x1b300e000/0x0/0x1bfc00000, data 0x694d924/0x6b20000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:02.428985+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176381952 unmapped: 42672128 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:03.429157+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 10.588075638s of 10.693936348s, submitted: 238
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176381952 unmapped: 42672128 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:04.429368+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176381952 unmapped: 42672128 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:05.429573+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2791098 data_alloc: 301989888 data_used: 21925888
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176381952 unmapped: 42672128 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:06.429823+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176381952 unmapped: 42672128 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 271 handle_osd_map epochs [271,272], i have 271, src has [1,272]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:07.430021+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 272 heartbeat osd_stat(store_statfs(0x1b3008000/0x0/0x1bfc00000, data 0x694ff24/0x6b25000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176406528 unmapped: 42647552 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:08.430219+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176406528 unmapped: 42647552 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:09.430434+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176406528 unmapped: 42647552 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:10.430656+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2792512 data_alloc: 301989888 data_used: 21938176
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176406528 unmapped: 42647552 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:11.430815+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176406528 unmapped: 42647552 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:12.431047+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 272 heartbeat osd_stat(store_statfs(0x1b300b000/0x0/0x1bfc00000, data 0x694ff82/0x6b23000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176406528 unmapped: 42647552 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _renew_subs
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 272 handle_osd_map epochs [273,273], i have 272, src has [1,273]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:13.431247+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176414720 unmapped: 42639360 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 9.972866058s of 10.247456551s, submitted: 108
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:14.431419+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 273 heartbeat osd_stat(store_statfs(0x1b3005000/0x0/0x1bfc00000, data 0x6952335/0x6b28000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176422912 unmapped: 42631168 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:15.431629+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2797792 data_alloc: 301989888 data_used: 21950464
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176422912 unmapped: 42631168 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:16.431825+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176422912 unmapped: 42631168 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 273 handle_osd_map epochs [273,274], i have 273, src has [1,274]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:17.431981+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176431104 unmapped: 42622976 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 274 handle_osd_map epochs [274,274], i have 274, src has [1,274]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:18.432176+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176431104 unmapped: 42622976 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:19.432368+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176431104 unmapped: 42622976 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:20.432612+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 274 heartbeat osd_stat(store_statfs(0x1b3003000/0x0/0x1bfc00000, data 0x6954765/0x6b2b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2799048 data_alloc: 301989888 data_used: 21950464
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176431104 unmapped: 42622976 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:21.432825+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176431104 unmapped: 42622976 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:22.433042+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176439296 unmapped: 42614784 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 274 handle_osd_map epochs [274,275], i have 274, src has [1,275]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.9] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[6.14] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:23.433932+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176447488 unmapped: 42606592 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:24.434183+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176447488 unmapped: 42606592 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 275 heartbeat osd_stat(store_statfs(0x1b2ffe000/0x0/0x1bfc00000, data 0x69569d3/0x6b2f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:25.434361+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2802882 data_alloc: 301989888 data_used: 21962752
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176463872 unmapped: 42590208 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:26.434522+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176463872 unmapped: 42590208 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:27.434681+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176463872 unmapped: 42590208 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:28.434835+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176463872 unmapped: 42590208 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:29.435119+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 275 heartbeat osd_stat(store_statfs(0x1b2ffe000/0x0/0x1bfc00000, data 0x69569d3/0x6b2f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176488448 unmapped: 42565632 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:30.435277+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2802882 data_alloc: 301989888 data_used: 21962752
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176488448 unmapped: 42565632 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:31.435688+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176488448 unmapped: 42565632 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:32.435845+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176496640 unmapped: 42557440 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:33.435988+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 275 heartbeat osd_stat(store_statfs(0x1b2ffe000/0x0/0x1bfc00000, data 0x69569d3/0x6b2f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176513024 unmapped: 42541056 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:34.436162+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176513024 unmapped: 42541056 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:35.436421+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2802882 data_alloc: 301989888 data_used: 21962752
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176513024 unmapped: 42541056 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:36.436651+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176513024 unmapped: 42541056 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:37.436811+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176513024 unmapped: 42541056 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:38.437022+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 275 heartbeat osd_stat(store_statfs(0x1b2ffe000/0x0/0x1bfc00000, data 0x69569d3/0x6b2f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176521216 unmapped: 42532864 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:39.437296+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176521216 unmapped: 42532864 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:40.437573+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2802882 data_alloc: 301989888 data_used: 21962752
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176521216 unmapped: 42532864 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:41.437813+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176521216 unmapped: 42532864 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 275 heartbeat osd_stat(store_statfs(0x1b2ffe000/0x0/0x1bfc00000, data 0x69569d3/0x6b2f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:42.438032+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176521216 unmapped: 42532864 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:43.438288+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176521216 unmapped: 42532864 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:44.438526+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176521216 unmapped: 42532864 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:45.438745+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2802882 data_alloc: 301989888 data_used: 21962752
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176521216 unmapped: 42532864 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:46.439035+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176529408 unmapped: 42524672 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:47.439382+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 275 heartbeat osd_stat(store_statfs(0x1b2ffe000/0x0/0x1bfc00000, data 0x69569d3/0x6b2f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176529408 unmapped: 42524672 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:48.439680+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176529408 unmapped: 42524672 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:49.439920+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176529408 unmapped: 42524672 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:50.440118+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2802882 data_alloc: 301989888 data_used: 21962752
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176529408 unmapped: 42524672 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:51.440351+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 275 heartbeat osd_stat(store_statfs(0x1b2ffe000/0x0/0x1bfc00000, data 0x69569d3/0x6b2f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176545792 unmapped: 42508288 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:52.440519+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176545792 unmapped: 42508288 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:53.440708+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176545792 unmapped: 42508288 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:54.440997+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176553984 unmapped: 42500096 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:55.441206+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2802882 data_alloc: 301989888 data_used: 21962752
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176553984 unmapped: 42500096 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:56.441424+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176553984 unmapped: 42500096 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:57.441604+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 275 heartbeat osd_stat(store_statfs(0x1b2ffe000/0x0/0x1bfc00000, data 0x69569d3/0x6b2f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176562176 unmapped: 42491904 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:58.441836+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176562176 unmapped: 42491904 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:59.442121+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176562176 unmapped: 42491904 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:00.442339+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2802882 data_alloc: 301989888 data_used: 21962752
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176562176 unmapped: 42491904 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:01.442556+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176562176 unmapped: 42491904 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:02.442737+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176562176 unmapped: 42491904 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 275 heartbeat osd_stat(store_statfs(0x1b2ffe000/0x0/0x1bfc00000, data 0x69569d3/0x6b2f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:03.442950+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 275 heartbeat osd_stat(store_statfs(0x1b2ffe000/0x0/0x1bfc00000, data 0x69569d3/0x6b2f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a6607e400
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 49.760707855s of 49.960681915s, submitted: 56
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176570368 unmapped: 42483712 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:04.443113+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177119232 unmapped: 41934848 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:05.443335+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2889839 data_alloc: 301989888 data_used: 21962752
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _renew_subs
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 275 handle_osd_map epochs [276,276], i have 275, src has [1,276]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 276 handle_osd_map epochs [276,276], i have 276, src has [1,276]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177192960 unmapped: 41861120 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 276 ms_handle_reset con 0x558a6607e400 session 0x558a66e49e00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:06.443493+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: handle_auth_request added challenge on 0x558a66b9cc00
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 276 handle_osd_map epochs [276,277], i have 276, src has [1,277]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177192960 unmapped: 41861120 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:07.443628+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 277 handle_osd_map epochs [277,277], i have 277, src has [1,277]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 277 ms_handle_reset con 0x558a66b9cc00 session 0x558a66e481e0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177192960 unmapped: 41861120 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:08.443802+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 277 heartbeat osd_stat(store_statfs(0x1b2ff5000/0x0/0x1bfc00000, data 0x695b0f7/0x6b37000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177192960 unmapped: 41861120 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:09.443957+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177192960 unmapped: 41861120 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:10.444092+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2814065 data_alloc: 301989888 data_used: 21987328
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177201152 unmapped: 41852928 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:11.444249+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177201152 unmapped: 41852928 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:12.444461+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 277 handle_osd_map epochs [278,278], i have 277, src has [1,278]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.3] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.8] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.6] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.11] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.10] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.0] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.f] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.12] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.16] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 scrub-queue::remove_from_osd_queue removing pg[4.7] failed. State was: not registered w/ OSD
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177217536 unmapped: 41836544 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:13.444628+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177217536 unmapped: 41836544 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:14.444984+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 278 heartbeat osd_stat(store_statfs(0x1b2ff2000/0x0/0x1bfc00000, data 0x695d345/0x6b3b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177217536 unmapped: 41836544 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:15.445187+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2816875 data_alloc: 301989888 data_used: 21987328
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177217536 unmapped: 41836544 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:16.445428+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 278 heartbeat osd_stat(store_statfs(0x1b2ff2000/0x0/0x1bfc00000, data 0x695d345/0x6b3b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177217536 unmapped: 41836544 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:17.445634+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177217536 unmapped: 41836544 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:18.445847+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 278 heartbeat osd_stat(store_statfs(0x1b2ff2000/0x0/0x1bfc00000, data 0x695d345/0x6b3b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177217536 unmapped: 41836544 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:19.446132+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177217536 unmapped: 41836544 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:20.446359+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2816875 data_alloc: 301989888 data_used: 21987328
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177217536 unmapped: 41836544 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:21.446592+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177217536 unmapped: 41836544 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:22.447027+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177217536 unmapped: 41836544 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:23.447199+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore(/var/lib/ceph/osd/ceph-4) _kv_sync_thread utilization: idle 19.090415955s of 19.356349945s, submitted: 80
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 278 ms_handle_reset con 0x558a67264400 session 0x558a63f123c0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 278 heartbeat osd_stat(store_statfs(0x1b2ff2000/0x0/0x1bfc00000, data 0x695d345/0x6b3b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177504256 unmapped: 41549824 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:24.447357+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc handle_mgr_map Got map version 56
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/1895764224,v1:172.18.0.106:6811/1895764224]
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177504256 unmapped: 41549824 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:25.447523+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2815995 data_alloc: 301989888 data_used: 21987328
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177504256 unmapped: 41549824 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:26.447680+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 278 heartbeat osd_stat(store_statfs(0x1b2ff3000/0x0/0x1bfc00000, data 0x695d345/0x6b3b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177504256 unmapped: 41549824 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:27.447824+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177504256 unmapped: 41549824 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:28.448024+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 278 heartbeat osd_stat(store_statfs(0x1b2ff3000/0x0/0x1bfc00000, data 0x695d345/0x6b3b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177504256 unmapped: 41549824 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:29.448392+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177504256 unmapped: 41549824 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:30.448603+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 278 heartbeat osd_stat(store_statfs(0x1b2ff3000/0x0/0x1bfc00000, data 0x695d345/0x6b3b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2815995 data_alloc: 301989888 data_used: 21987328
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 278 heartbeat osd_stat(store_statfs(0x1b2ff3000/0x0/0x1bfc00000, data 0x695d345/0x6b3b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177504256 unmapped: 41549824 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:31.448777+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 278 heartbeat osd_stat(store_statfs(0x1b2ff3000/0x0/0x1bfc00000, data 0x695d345/0x6b3b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177504256 unmapped: 41549824 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:32.449029+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177504256 unmapped: 41549824 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 278 heartbeat osd_stat(store_statfs(0x1b2ff3000/0x0/0x1bfc00000, data 0x695d345/0x6b3b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:33.449157+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177504256 unmapped: 41549824 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:34.449345+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177512448 unmapped: 41541632 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:35.449509+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2815995 data_alloc: 301989888 data_used: 21987328
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177512448 unmapped: 41541632 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:36.449726+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177512448 unmapped: 41541632 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:37.449966+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 278 heartbeat osd_stat(store_statfs(0x1b2ff3000/0x0/0x1bfc00000, data 0x695d345/0x6b3b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177520640 unmapped: 41533440 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:38.450189+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177520640 unmapped: 41533440 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:39.450368+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177520640 unmapped: 41533440 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:40.450522+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2815995 data_alloc: 301989888 data_used: 21987328
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177520640 unmapped: 41533440 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:41.450685+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177520640 unmapped: 41533440 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:42.450869+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177520640 unmapped: 41533440 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:43.451123+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 278 heartbeat osd_stat(store_statfs(0x1b2ff3000/0x0/0x1bfc00000, data 0x695d345/0x6b3b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177520640 unmapped: 41533440 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:44.451346+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177528832 unmapped: 41525248 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:45.451495+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2815995 data_alloc: 301989888 data_used: 21987328
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177528832 unmapped: 41525248 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:46.451629+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 278 heartbeat osd_stat(store_statfs(0x1b2ff3000/0x0/0x1bfc00000, data 0x695d345/0x6b3b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177537024 unmapped: 41517056 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:47.451784+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177537024 unmapped: 41517056 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:48.452006+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 278 heartbeat osd_stat(store_statfs(0x1b2ff3000/0x0/0x1bfc00000, data 0x695d345/0x6b3b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177537024 unmapped: 41517056 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:49.452326+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177537024 unmapped: 41517056 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:50.452553+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2815995 data_alloc: 301989888 data_used: 21987328
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177545216 unmapped: 41508864 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:51.452746+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 278 heartbeat osd_stat(store_statfs(0x1b2ff3000/0x0/0x1bfc00000, data 0x695d345/0x6b3b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177545216 unmapped: 41508864 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:52.452918+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177545216 unmapped: 41508864 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:53.453134+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177545216 unmapped: 41508864 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:54.453340+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177545216 unmapped: 41508864 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:55.453489+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2816155 data_alloc: 301989888 data_used: 21991424
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 278 heartbeat osd_stat(store_statfs(0x1b2ff3000/0x0/0x1bfc00000, data 0x695d345/0x6b3b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177545216 unmapped: 41508864 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:56.453628+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:57.453831+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177545216 unmapped: 41508864 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:58.454011+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177545216 unmapped: 41508864 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 278 heartbeat osd_stat(store_statfs(0x1b2ff3000/0x0/0x1bfc00000, data 0x695d345/0x6b3b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:59.454184+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177553408 unmapped: 41500672 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:00.454375+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177553408 unmapped: 41500672 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2816155 data_alloc: 301989888 data_used: 21991424
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:01.454552+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177553408 unmapped: 41500672 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:02.454726+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177553408 unmapped: 41500672 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 278 heartbeat osd_stat(store_statfs(0x1b2ff3000/0x0/0x1bfc00000, data 0x695d345/0x6b3b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:03.454897+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177553408 unmapped: 41500672 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:04.455088+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177553408 unmapped: 41500672 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:05.455245+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177553408 unmapped: 41500672 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2816155 data_alloc: 301989888 data_used: 21991424
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:06.455420+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177553408 unmapped: 41500672 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 278 heartbeat osd_stat(store_statfs(0x1b2ff3000/0x0/0x1bfc00000, data 0x695d345/0x6b3b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:07.455623+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177561600 unmapped: 41492480 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:08.455839+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177561600 unmapped: 41492480 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:09.456159+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177561600 unmapped: 41492480 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:10.456369+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177561600 unmapped: 41492480 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2816155 data_alloc: 301989888 data_used: 21991424
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 278 heartbeat osd_stat(store_statfs(0x1b2ff3000/0x0/0x1bfc00000, data 0x695d345/0x6b3b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:11.456522+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177561600 unmapped: 41492480 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 278 heartbeat osd_stat(store_statfs(0x1b2ff3000/0x0/0x1bfc00000, data 0x695d345/0x6b3b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:12.456712+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177569792 unmapped: 41484288 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:13.456928+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177569792 unmapped: 41484288 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 278 heartbeat osd_stat(store_statfs(0x1b2ff3000/0x0/0x1bfc00000, data 0x695d345/0x6b3b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:14.457128+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177569792 unmapped: 41484288 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:15.457325+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177577984 unmapped: 41476096 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2816155 data_alloc: 301989888 data_used: 21991424
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:16.457516+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177577984 unmapped: 41476096 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:17.457695+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177586176 unmapped: 41467904 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:18.457849+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177586176 unmapped: 41467904 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 278 heartbeat osd_stat(store_statfs(0x1b2ff3000/0x0/0x1bfc00000, data 0x695d345/0x6b3b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:19.458092+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177586176 unmapped: 41467904 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:20.458248+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177586176 unmapped: 41467904 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2816155 data_alloc: 301989888 data_used: 21991424
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:21.458414+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177586176 unmapped: 41467904 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:22.458615+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177586176 unmapped: 41467904 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 278 heartbeat osd_stat(store_statfs(0x1b2ff3000/0x0/0x1bfc00000, data 0x695d345/0x6b3b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:23.458739+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177594368 unmapped: 41459712 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:24.458910+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177594368 unmapped: 41459712 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 278 heartbeat osd_stat(store_statfs(0x1b2ff3000/0x0/0x1bfc00000, data 0x695d345/0x6b3b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:25.459075+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177594368 unmapped: 41459712 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2816155 data_alloc: 301989888 data_used: 21991424
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:26.459206+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177594368 unmapped: 41459712 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:27.459367+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177594368 unmapped: 41459712 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:28.459562+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177594368 unmapped: 41459712 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:29.459743+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177594368 unmapped: 41459712 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 278 heartbeat osd_stat(store_statfs(0x1b2ff3000/0x0/0x1bfc00000, data 0x695d345/0x6b3b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:30.459972+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177594368 unmapped: 41459712 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2816155 data_alloc: 301989888 data_used: 21991424
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:31.460182+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177602560 unmapped: 41451520 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:32.460394+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177602560 unmapped: 41451520 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:33.460596+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 278 heartbeat osd_stat(store_statfs(0x1b2ff3000/0x0/0x1bfc00000, data 0x695d345/0x6b3b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177602560 unmapped: 41451520 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 278 heartbeat osd_stat(store_statfs(0x1b2ff3000/0x0/0x1bfc00000, data 0x695d345/0x6b3b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:34.460754+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177602560 unmapped: 41451520 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:35.460958+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177602560 unmapped: 41451520 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2816155 data_alloc: 301989888 data_used: 21991424
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:36.461191+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177602560 unmapped: 41451520 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:37.461422+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177602560 unmapped: 41451520 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 278 heartbeat osd_stat(store_statfs(0x1b2ff3000/0x0/0x1bfc00000, data 0x695d345/0x6b3b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:38.461568+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177602560 unmapped: 41451520 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:39.461818+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177610752 unmapped: 41443328 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:40.462054+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177610752 unmapped: 41443328 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 278 heartbeat osd_stat(store_statfs(0x1b2ff3000/0x0/0x1bfc00000, data 0x695d345/0x6b3b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2816155 data_alloc: 301989888 data_used: 21991424
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:41.462215+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177618944 unmapped: 41435136 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:42.462348+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177627136 unmapped: 41426944 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:43.462472+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177627136 unmapped: 41426944 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:44.462641+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177627136 unmapped: 41426944 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:45.462867+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 278 heartbeat osd_stat(store_statfs(0x1b2ff3000/0x0/0x1bfc00000, data 0x695d345/0x6b3b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177627136 unmapped: 41426944 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2816155 data_alloc: 301989888 data_used: 21991424
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:46.463161+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177627136 unmapped: 41426944 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:47.463397+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177627136 unmapped: 41426944 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:48.463650+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177635328 unmapped: 41418752 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:49.463978+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177635328 unmapped: 41418752 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 278 heartbeat osd_stat(store_statfs(0x1b2ff3000/0x0/0x1bfc00000, data 0x695d345/0x6b3b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:50.464118+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177635328 unmapped: 41418752 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2816155 data_alloc: 301989888 data_used: 21991424
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:51.464269+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177635328 unmapped: 41418752 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:52.464421+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 278 heartbeat osd_stat(store_statfs(0x1b2ff3000/0x0/0x1bfc00000, data 0x695d345/0x6b3b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177635328 unmapped: 41418752 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:53.464572+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177635328 unmapped: 41418752 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:54.464738+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177635328 unmapped: 41418752 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:55.464981+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177643520 unmapped: 41410560 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2816155 data_alloc: 301989888 data_used: 21991424
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:56.465201+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177643520 unmapped: 41410560 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 278 heartbeat osd_stat(store_statfs(0x1b2ff3000/0x0/0x1bfc00000, data 0x695d345/0x6b3b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:57.465397+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177643520 unmapped: 41410560 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 278 heartbeat osd_stat(store_statfs(0x1b2ff3000/0x0/0x1bfc00000, data 0x695d345/0x6b3b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:58.465540+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177643520 unmapped: 41410560 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:59.465738+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177643520 unmapped: 41410560 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:06:00.465933+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177643520 unmapped: 41410560 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2816155 data_alloc: 301989888 data_used: 21991424
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:06:01.466089+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177643520 unmapped: 41410560 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 278 heartbeat osd_stat(store_statfs(0x1b2ff3000/0x0/0x1bfc00000, data 0x695d345/0x6b3b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:06:02.466249+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177643520 unmapped: 41410560 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:06:03.466404+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 278 heartbeat osd_stat(store_statfs(0x1b2ff3000/0x0/0x1bfc00000, data 0x695d345/0x6b3b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177651712 unmapped: 41402368 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 278 heartbeat osd_stat(store_statfs(0x1b2ff3000/0x0/0x1bfc00000, data 0x695d345/0x6b3b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:06:04.466601+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177659904 unmapped: 41394176 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:06:05.466771+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177659904 unmapped: 41394176 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 278 heartbeat osd_stat(store_statfs(0x1b2ff3000/0x0/0x1bfc00000, data 0x695d345/0x6b3b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2816155 data_alloc: 301989888 data_used: 21991424
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:06:06.466975+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177659904 unmapped: 41394176 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:06:07.467175+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177659904 unmapped: 41394176 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:06:08.467357+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177659904 unmapped: 41394176 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:06:09.467572+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177659904 unmapped: 41394176 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:06:10.467774+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177659904 unmapped: 41394176 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2816155 data_alloc: 301989888 data_used: 21991424
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:06:11.467909+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 278 heartbeat osd_stat(store_statfs(0x1b2ff3000/0x0/0x1bfc00000, data 0x695d345/0x6b3b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177668096 unmapped: 41385984 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:06:12.468057+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177668096 unmapped: 41385984 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:06:13.468199+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177668096 unmapped: 41385984 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:06:14.468308+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177668096 unmapped: 41385984 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:06:15.468434+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 278 heartbeat osd_stat(store_statfs(0x1b2ff3000/0x0/0x1bfc00000, data 0x695d345/0x6b3b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177668096 unmapped: 41385984 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: bluestore.MempoolThread(0x558a5ea93b60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2816155 data_alloc: 301989888 data_used: 21991424
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:06:16.468533+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177668096 unmapped: 41385984 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:06:17.468634+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: do_command 'config diff' '{prefix=config diff}'
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177709056 unmapped: 41345024 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: do_command 'config show' '{prefix=config show}'
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: do_command 'counter dump' '{prefix=counter dump}'
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: do_command 'counter schema' '{prefix=counter schema}'
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:06:18.468744+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 176979968 unmapped: 42074112 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: osd.4 278 heartbeat osd_stat(store_statfs(0x1b2ff3000/0x0/0x1bfc00000, data 0x695d345/0x6b3b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,1,2,3,5] op hist [])
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: tick
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_tickets
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:06:19.468862+0000)
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: prioritycache tune_memory target: 5709084876 mapped: 177209344 unmapped: 41844736 heap: 219054080 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:50 np0005625203.localdomain ceph-osd[32924]: do_command 'log dump' '{prefix=log dump}'
Feb 20 10:06:50 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Feb 20 10:06:50 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1742345117' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Feb 20 10:06:50 np0005625203.localdomain ceph-mon[296066]: from='client.49932 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 10:06:50 np0005625203.localdomain ceph-mon[296066]: pgmap v743: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:06:50 np0005625203.localdomain ceph-mon[296066]: from='client.59137 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 10:06:50 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/3859343773' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Feb 20 10:06:50 np0005625203.localdomain ceph-mon[296066]: from='client.98777 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 10:06:50 np0005625203.localdomain ceph-mon[296066]: from='client.49938 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 10:06:50 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/2780626791' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Feb 20 10:06:50 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/474225806' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Feb 20 10:06:50 np0005625203.localdomain ceph-mon[296066]: from='client.59152 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 20 10:06:50 np0005625203.localdomain ceph-mon[296066]: from='client.98783 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 10:06:50 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/576497619' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Feb 20 10:06:50 np0005625203.localdomain ceph-mon[296066]: from='client.49950 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 20 10:06:50 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/1742345117' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Feb 20 10:06:50 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/4092547990' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Feb 20 10:06:50 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Feb 20 10:06:50 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3031087465' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Feb 20 10:06:51 np0005625203.localdomain crontab[331609]: (root) LIST (root)
Feb 20 10:06:51 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mgr services"} v 0)
Feb 20 10:06:51 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3249238967' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Feb 20 10:06:51 np0005625203.localdomain ceph-mon[296066]: from='client.98795 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 10:06:51 np0005625203.localdomain ceph-mon[296066]: from='client.59164 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 10:06:51 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/3386122060' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Feb 20 10:06:51 np0005625203.localdomain ceph-mon[296066]: from='client.49962 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 10:06:51 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/3031087465' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Feb 20 10:06:51 np0005625203.localdomain ceph-mon[296066]: from='client.98816 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 20 10:06:51 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/1093232064' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Feb 20 10:06:51 np0005625203.localdomain ceph-mon[296066]: from='client.59179 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 20 10:06:51 np0005625203.localdomain ceph-mon[296066]: from='client.49974 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 20 10:06:51 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/2432434700' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Feb 20 10:06:51 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/3249238967' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Feb 20 10:06:51 np0005625203.localdomain ceph-mon[296066]: from='client.59191 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 10:06:51 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/106052076' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Feb 20 10:06:51 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/154490225' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Feb 20 10:06:51 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mgr versions"} v 0)
Feb 20 10:06:51 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3152040596' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Feb 20 10:06:52 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mon stat"} v 0)
Feb 20 10:06:52 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1141553942' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Feb 20 10:06:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:52.781 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:06:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:52.783 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:06:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:52.783 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 20 10:06:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:52.783 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 10:06:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:52.818 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:06:52 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:52.818 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 10:06:52 np0005625203.localdomain ceph-mon[296066]: from='client.49986 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 10:06:52 np0005625203.localdomain ceph-mon[296066]: from='client.98828 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 10:06:52 np0005625203.localdomain ceph-mon[296066]: pgmap v744: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:06:52 np0005625203.localdomain ceph-mon[296066]: from='client.59200 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 20 10:06:52 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/3152040596' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Feb 20 10:06:52 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/1001056526' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Feb 20 10:06:52 np0005625203.localdomain ceph-mon[296066]: from='client.49998 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 20 10:06:52 np0005625203.localdomain ceph-mon[296066]: from='client.98840 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Feb 20 10:06:52 np0005625203.localdomain ceph-mon[296066]: from='client.59218 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 20 10:06:52 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/2761767677' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Feb 20 10:06:52 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/1141553942' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Feb 20 10:06:52 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/2885617671' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Feb 20 10:06:52 np0005625203.localdomain ceph-mon[296066]: from='client.50019 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 20 10:06:52 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/3920137714' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Feb 20 10:06:53 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "node ls"} v 0)
Feb 20 10:06:53 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2258020461' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Feb 20 10:06:53 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Feb 20 10:06:53 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2415945841' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Feb 20 10:06:53 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Feb 20 10:06:53 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1921623941' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Feb 20 10:06:53 np0005625203.localdomain ceph-mon[296066]: from='client.98861 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 10:06:53 np0005625203.localdomain ceph-mon[296066]: from='client.98873 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 20 10:06:53 np0005625203.localdomain ceph-mon[296066]: from='client.59251 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 20 10:06:53 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/3803234465' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Feb 20 10:06:53 np0005625203.localdomain ceph-mon[296066]: from='client.50040 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 20 10:06:53 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/2258020461' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Feb 20 10:06:53 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/3675822213' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Feb 20 10:06:53 np0005625203.localdomain ceph-mon[296066]: from='client.98888 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 20 10:06:53 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/2415945841' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Feb 20 10:06:53 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/2062908097' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Feb 20 10:06:53 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/1921623941' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Feb 20 10:06:53 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/2840627815' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Feb 20 10:06:54 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Feb 20 10:06:54 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/14354088' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Feb 20 10:06:54 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Feb 20 10:06:54 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3660405424' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Feb 20 10:06:54 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Feb 20 10:06:54 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1041369153' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:03.703902+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89014272 unmapped: 811008 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:04.704112+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89014272 unmapped: 811008 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:05.704316+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89014272 unmapped: 811008 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:06.704448+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 787198 data_alloc: 301989888 data_used: 6524928
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89014272 unmapped: 811008 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:07.704633+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 88 heartbeat osd_stat(store_statfs(0x1ba24b000/0x0/0x1bfc00000, data 0x17c195c/0x1843000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89014272 unmapped: 811008 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:08.704785+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89014272 unmapped: 811008 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:09.704943+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89014272 unmapped: 811008 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:10.705083+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89014272 unmapped: 811008 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:11.705200+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 88 heartbeat osd_stat(store_statfs(0x1ba24b000/0x0/0x1bfc00000, data 0x17c195c/0x1843000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 787198 data_alloc: 301989888 data_used: 6524928
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89014272 unmapped: 811008 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:12.705347+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_monmap mon_map magic: 0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient:  got monmap 11 from mon.np0005625201 (according to old e11)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: dump:
                                                          epoch 11
                                                          fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
                                                          last_changed 2026-02-20T09:44:43.337910+0000
                                                          created 2026-02-20T07:36:51.191305+0000
                                                          min_mon_release 18 (reef)
                                                          election_strategy: 1
                                                          0: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005625201
                                                          1: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005625204
                                                          2: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005625202
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89014272 unmapped: 811008 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:13.705496+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89014272 unmapped: 811008 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:14.705638+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89014272 unmapped: 811008 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:15.707059+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 88 heartbeat osd_stat(store_statfs(0x1ba24b000/0x0/0x1bfc00000, data 0x17c195c/0x1843000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89014272 unmapped: 811008 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:16.707271+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 787198 data_alloc: 301989888 data_used: 6524928
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89014272 unmapped: 811008 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:17.707367+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89014272 unmapped: 811008 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:18.707522+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89014272 unmapped: 811008 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:19.707670+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89014272 unmapped: 811008 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 88 heartbeat osd_stat(store_statfs(0x1ba24b000/0x0/0x1bfc00000, data 0x17c195c/0x1843000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:20.708064+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 88 heartbeat osd_stat(store_statfs(0x1ba24b000/0x0/0x1bfc00000, data 0x17c195c/0x1843000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89014272 unmapped: 811008 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:21.708462+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 787198 data_alloc: 301989888 data_used: 6524928
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89014272 unmapped: 811008 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:22.708641+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 88 heartbeat osd_stat(store_statfs(0x1ba24b000/0x0/0x1bfc00000, data 0x17c195c/0x1843000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89014272 unmapped: 811008 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:23.708859+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 88 heartbeat osd_stat(store_statfs(0x1ba24b000/0x0/0x1bfc00000, data 0x17c195c/0x1843000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89014272 unmapped: 811008 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:24.709114+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89014272 unmapped: 811008 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:25.709305+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89014272 unmapped: 811008 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:26.709525+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 88 heartbeat osd_stat(store_statfs(0x1ba24b000/0x0/0x1bfc00000, data 0x17c195c/0x1843000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 787198 data_alloc: 301989888 data_used: 6524928
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89014272 unmapped: 811008 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:27.709700+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89014272 unmapped: 811008 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:28.709836+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89014272 unmapped: 811008 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:29.709983+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89014272 unmapped: 811008 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:30.710098+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89014272 unmapped: 811008 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:31.710230+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 787198 data_alloc: 301989888 data_used: 6524928
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89014272 unmapped: 811008 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:32.710344+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 88 heartbeat osd_stat(store_statfs(0x1ba24b000/0x0/0x1bfc00000, data 0x17c195c/0x1843000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89014272 unmapped: 811008 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:33.710505+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89014272 unmapped: 811008 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:34.710735+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89014272 unmapped: 811008 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:35.710913+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89014272 unmapped: 811008 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:36.711057+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 787198 data_alloc: 301989888 data_used: 6524928
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89014272 unmapped: 811008 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:37.711773+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 88 heartbeat osd_stat(store_statfs(0x1ba24b000/0x0/0x1bfc00000, data 0x17c195c/0x1843000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89014272 unmapped: 811008 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:38.711968+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89014272 unmapped: 811008 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:39.712098+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 88 heartbeat osd_stat(store_statfs(0x1ba24b000/0x0/0x1bfc00000, data 0x17c195c/0x1843000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89014272 unmapped: 811008 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:40.712267+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89014272 unmapped: 811008 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:41.712450+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 787198 data_alloc: 301989888 data_used: 6524928
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89014272 unmapped: 811008 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:42.712630+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89014272 unmapped: 811008 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:43.713333+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 88 heartbeat osd_stat(store_statfs(0x1ba24b000/0x0/0x1bfc00000, data 0x17c195c/0x1843000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 88 heartbeat osd_stat(store_statfs(0x1ba24b000/0x0/0x1bfc00000, data 0x17c195c/0x1843000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89014272 unmapped: 811008 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:44.713506+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89014272 unmapped: 811008 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:45.713645+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc handle_mgr_map Got map version 31
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc handle_mgr_map Active mgr is now 
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc reconnect Terminating session with v2:172.18.0.106:6810/1027089384
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc reconnect No active mgr available yet
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 88 handle_osd_map epochs [89,89], i have 88, src has [1,89]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 82.428604126s of 82.502632141s, submitted: 16
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 89 ms_handle_reset con 0x55f8ea0ea800 session 0x55f8eb1d1680
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 89 ms_handle_reset con 0x55f8ea0b7800 session 0x55f8eb63c960
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 89 ms_handle_reset con 0x55f8e8f26400 session 0x55f8eb1c0960
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 88981504 unmapped: 843776 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:46.713789+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8eb658800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 790734 data_alloc: 301989888 data_used: 6533120
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc handle_mgr_map Got map version 32
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.107:6810/2084071713,v1:172.18.0.107:6811/2084071713]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc reconnect Starting new session with [v2:172.18.0.107:6810/2084071713,v1:172.18.0.107:6811/2084071713]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: get_auth_request con 0x55f8ecb41c00 auth_method 0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc handle_mgr_configure stats_period=5
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89169920 unmapped: 655360 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:47.713933+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 89 heartbeat osd_stat(store_statfs(0x1ba247000/0x0/0x1bfc00000, data 0x17c3ed4/0x1847000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc handle_mgr_map Got map version 33
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.107:6810/2084071713,v1:172.18.0.107:6811/2084071713]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:48.714072+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89178112 unmapped: 647168 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea76b000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ecb41400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 89 heartbeat osd_stat(store_statfs(0x1ba247000/0x0/0x1bfc00000, data 0x17c3ed4/0x1847000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:49.714250+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89415680 unmapped: 409600 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc handle_mgr_map Got map version 34
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.107:6810/2084071713,v1:172.18.0.107:6811/2084071713]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:50.714405+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89137152 unmapped: 688128 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc handle_mgr_map Got map version 35
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.107:6810/2084071713,v1:172.18.0.107:6811/2084071713]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 89 heartbeat osd_stat(store_statfs(0x1ba247000/0x0/0x1bfc00000, data 0x17c3ed4/0x1847000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:51.714633+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89284608 unmapped: 540672 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 790734 data_alloc: 301989888 data_used: 6533120
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc handle_mgr_map Got map version 36
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.107:6810/2084071713,v1:172.18.0.107:6811/2084071713]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:52.714795+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89284608 unmapped: 540672 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:53.714936+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89284608 unmapped: 540672 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:54.715156+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89284608 unmapped: 540672 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:55.715345+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89284608 unmapped: 540672 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:56.715480+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89284608 unmapped: 540672 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 89 heartbeat osd_stat(store_statfs(0x1ba247000/0x0/0x1bfc00000, data 0x17c3ed4/0x1847000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 790734 data_alloc: 301989888 data_used: 6533120
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:57.716357+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89284608 unmapped: 540672 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:58.716502+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89284608 unmapped: 540672 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:44:59.716786+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89284608 unmapped: 540672 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:00.717017+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89284608 unmapped: 540672 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:01.717216+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89284608 unmapped: 540672 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 790734 data_alloc: 301989888 data_used: 6533120
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:02.717379+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89284608 unmapped: 540672 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 89 heartbeat osd_stat(store_statfs(0x1ba247000/0x0/0x1bfc00000, data 0x17c3ed4/0x1847000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:03.717628+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89284608 unmapped: 540672 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:04.717952+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89284608 unmapped: 540672 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 89 heartbeat osd_stat(store_statfs(0x1ba247000/0x0/0x1bfc00000, data 0x17c3ed4/0x1847000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:05.718138+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89284608 unmapped: 540672 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 89 heartbeat osd_stat(store_statfs(0x1ba247000/0x0/0x1bfc00000, data 0x17c3ed4/0x1847000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:06.718335+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89284608 unmapped: 540672 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 790734 data_alloc: 301989888 data_used: 6533120
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 89 heartbeat osd_stat(store_statfs(0x1ba247000/0x0/0x1bfc00000, data 0x17c3ed4/0x1847000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:07.718506+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89284608 unmapped: 540672 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 89 heartbeat osd_stat(store_statfs(0x1ba247000/0x0/0x1bfc00000, data 0x17c3ed4/0x1847000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:08.718693+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89284608 unmapped: 540672 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_monmap mon_map magic: 0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient:  got monmap 12 from mon.np0005625201 (according to old e12)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: dump:
                                                          epoch 12
                                                          fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
                                                          last_changed 2026-02-20T09:45:39.346453+0000
                                                          created 2026-02-20T07:36:51.191305+0000
                                                          min_mon_release 18 (reef)
                                                          election_strategy: 1
                                                          0: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005625201
                                                          1: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005625204
                                                          2: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005625202
                                                          3: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005625203
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:09.719015+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89284608 unmapped: 540672 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:10.719242+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89284608 unmapped: 540672 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:11.719445+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89284608 unmapped: 540672 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 790734 data_alloc: 301989888 data_used: 6533120
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:12.719608+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89284608 unmapped: 540672 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:13.719780+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89284608 unmapped: 540672 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 89 heartbeat osd_stat(store_statfs(0x1ba247000/0x0/0x1bfc00000, data 0x17c3ed4/0x1847000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:14.719979+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89284608 unmapped: 540672 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:15.720137+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89284608 unmapped: 540672 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_monmap mon_map magic: 0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient:  got monmap 13 from mon.np0005625201 (according to old e13)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: dump:
                                                          epoch 13
                                                          fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
                                                          last_changed 2026-02-20T09:45:46.327222+0000
                                                          created 2026-02-20T07:36:51.191305+0000
                                                          min_mon_release 18 (reef)
                                                          election_strategy: 1
                                                          0: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005625204
                                                          1: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005625202
                                                          2: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005625203
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: mon.np0005625201 at [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] went away
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _reopen_session rank -1
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _add_conns ranks=[1,0,2]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient(hunting): picked mon.np0005625202 con 0x55f8eb659000 addr [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient(hunting): picked mon.np0005625204 con 0x55f8eb659800 addr [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient(hunting): picked mon.np0005625203 con 0x55f8ebd4a400 addr [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient(hunting): start opening mon connection
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient(hunting): start opening mon connection
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient(hunting): start opening mon connection
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient(hunting): _renew_subs
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient(hunting): _finish_auth 0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient(hunting): get_auth_request con 0x55f8ebd4a400 auth_method 0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient(hunting): get_auth_request method 2 preferred_modes [2,1]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient(hunting): _init_auth method 2
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient(hunting): _init_auth already have auth, reseting
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient(hunting): handle_auth_reply_more payload 9
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient(hunting): handle_auth_reply_more payload_len 9
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient(hunting): handle_auth_reply_more responding with 132 bytes
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient(hunting): handle_auth_done global_id 24226 payload 293
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _finish_hunting 0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: found mon.np0005625203
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _finish_auth 0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:16.346219+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: ms_handle_reset current mon [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _reopen_session rank -1
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _add_conns ranks=[1,0,2]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient(hunting): picked mon.np0005625202 con 0x55f8eb659000 addr [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient(hunting): picked mon.np0005625204 con 0x55f8eb659800 addr [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient(hunting): picked mon.np0005625203 con 0x55f8ec9cb400 addr [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient(hunting): start opening mon connection
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient(hunting): start opening mon connection
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient(hunting): start opening mon connection
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient(hunting): _renew_subs
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 89 ms_handle_reset con 0x55f8ebd4a400 session 0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient(hunting): get_auth_request con 0x55f8ec9cb400 auth_method 0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient(hunting): get_auth_request method 2 preferred_modes [2,1]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient(hunting): _init_auth method 2
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient(hunting): _init_auth already have auth, reseting
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient(hunting): handle_auth_reply_more payload 9
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient(hunting): handle_auth_reply_more payload_len 9
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient(hunting): handle_auth_reply_more responding with 132 bytes
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient(hunting): get_auth_request con 0x55f8eb659000 auth_method 0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient(hunting): get_auth_request method 2 preferred_modes [2,1]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient(hunting): _init_auth method 2
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient(hunting): _init_auth already have auth, reseting
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient(hunting): handle_auth_done global_id 24226 payload 293
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _finish_hunting 0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: found mon.np0005625203
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _finish_auth 0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:16.357412+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_monmap mon_map magic: 0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient:  got monmap 13 from mon.np0005625203 (according to old e13)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: dump:
                                                          epoch 13
                                                          fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
                                                          last_changed 2026-02-20T09:45:46.327222+0000
                                                          created 2026-02-20T07:36:51.191305+0000
                                                          min_mon_release 18 (reef)
                                                          election_strategy: 1
                                                          0: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005625204
                                                          1: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005625202
                                                          2: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005625203
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_config config(7 keys)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: set_mon_vals no callback set
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc handle_mgr_map Got map version 36
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.107:6810/2084071713,v1:172.18.0.107:6811/2084071713]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:16.720700+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89489408 unmapped: 335872 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 790734 data_alloc: 301989888 data_used: 6533120
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 89 heartbeat osd_stat(store_statfs(0x1ba247000/0x0/0x1bfc00000, data 0x17c3ed4/0x1847000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:17.720896+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89489408 unmapped: 335872 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:18.721559+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89489408 unmapped: 335872 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:19.721751+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89489408 unmapped: 335872 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:20.722163+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89489408 unmapped: 335872 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:21.722302+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89489408 unmapped: 335872 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 89 heartbeat osd_stat(store_statfs(0x1ba247000/0x0/0x1bfc00000, data 0x17c3ed4/0x1847000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Feb 20 10:06:54 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3147845967' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 790734 data_alloc: 301989888 data_used: 6533120
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:22.722523+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89489408 unmapped: 335872 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:23.722741+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89489408 unmapped: 335872 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:24.723261+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89489408 unmapped: 335872 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:25.723735+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89489408 unmapped: 335872 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:26.723967+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89489408 unmapped: 335872 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 790734 data_alloc: 301989888 data_used: 6533120
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_monmap mon_map magic: 0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient:  got monmap 14 from mon.np0005625203 (according to old e14)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: dump:
                                                          epoch 14
                                                          fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
                                                          last_changed 2026-02-20T09:45:57.556107+0000
                                                          created 2026-02-20T07:36:51.191305+0000
                                                          min_mon_release 18 (reef)
                                                          election_strategy: 1
                                                          0: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005625204
                                                          1: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005625202
                                                          2: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005625203
                                                          3: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005625201
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:27.724208+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89489408 unmapped: 335872 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 89 heartbeat osd_stat(store_statfs(0x1ba247000/0x0/0x1bfc00000, data 0x17c3ed4/0x1847000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:28.725535+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89489408 unmapped: 335872 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:29.725960+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 89 heartbeat osd_stat(store_statfs(0x1ba247000/0x0/0x1bfc00000, data 0x17c3ed4/0x1847000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89489408 unmapped: 335872 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:30.726631+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89489408 unmapped: 335872 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:31.726992+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89489408 unmapped: 335872 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 790734 data_alloc: 301989888 data_used: 6533120
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:32.727490+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89489408 unmapped: 335872 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 89 heartbeat osd_stat(store_statfs(0x1ba247000/0x0/0x1bfc00000, data 0x17c3ed4/0x1847000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:33.727642+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89489408 unmapped: 335872 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:34.728172+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89489408 unmapped: 335872 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:35.728437+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89489408 unmapped: 335872 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:36.728712+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89489408 unmapped: 335872 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 89 heartbeat osd_stat(store_statfs(0x1ba247000/0x0/0x1bfc00000, data 0x17c3ed4/0x1847000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 790734 data_alloc: 301989888 data_used: 6533120
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:37.728929+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89489408 unmapped: 335872 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_monmap mon_map magic: 0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient:  got monmap 15 from mon.np0005625203 (according to old e15)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: dump:
                                                          epoch 15
                                                          fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
                                                          last_changed 2026-02-20T09:46:08.177805+0000
                                                          created 2026-02-20T07:36:51.191305+0000
                                                          min_mon_release 18 (reef)
                                                          election_strategy: 1
                                                          0: [v2:172.18.0.108:3300/0,v1:172.18.0.108:6789/0] mon.np0005625204
                                                          1: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005625202
                                                          2: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005625203
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:38.729180+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89489408 unmapped: 335872 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:39.729328+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89489408 unmapped: 335872 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:40.729767+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89489408 unmapped: 335872 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:41.729950+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89489408 unmapped: 335872 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 790734 data_alloc: 301989888 data_used: 6533120
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 89 heartbeat osd_stat(store_statfs(0x1ba247000/0x0/0x1bfc00000, data 0x17c3ed4/0x1847000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:42.730090+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89489408 unmapped: 335872 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:43.730229+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89489408 unmapped: 335872 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:44.730427+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89489408 unmapped: 335872 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:45.730816+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89489408 unmapped: 335872 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:46.731056+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 89 heartbeat osd_stat(store_statfs(0x1ba247000/0x0/0x1bfc00000, data 0x17c3ed4/0x1847000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89489408 unmapped: 335872 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 790734 data_alloc: 301989888 data_used: 6533120
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:47.731189+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89489408 unmapped: 335872 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 89 heartbeat osd_stat(store_statfs(0x1ba247000/0x0/0x1bfc00000, data 0x17c3ed4/0x1847000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:48.731469+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89489408 unmapped: 335872 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:49.731655+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89489408 unmapped: 335872 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:50.731781+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 89 heartbeat osd_stat(store_statfs(0x1ba247000/0x0/0x1bfc00000, data 0x17c3ed4/0x1847000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89489408 unmapped: 335872 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 89 heartbeat osd_stat(store_statfs(0x1ba247000/0x0/0x1bfc00000, data 0x17c3ed4/0x1847000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:51.731944+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 89 heartbeat osd_stat(store_statfs(0x1ba247000/0x0/0x1bfc00000, data 0x17c3ed4/0x1847000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89489408 unmapped: 335872 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 790734 data_alloc: 301989888 data_used: 6533120
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:52.732057+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89489408 unmapped: 335872 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:53.732453+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89489408 unmapped: 335872 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_monmap mon_map magic: 0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient:  got monmap 16 from mon.np0005625203 (according to old e16)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: dump:
                                                          epoch 16
                                                          fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
                                                          last_changed 2026-02-20T09:46:24.360760+0000
                                                          created 2026-02-20T07:36:51.191305+0000
                                                          min_mon_release 18 (reef)
                                                          election_strategy: 1
                                                          0: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005625202
                                                          1: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005625203
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: ms_handle_reset current mon [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _reopen_session rank -1
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _add_conns ranks=[1,0]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient(hunting): picked mon.np0005625203 con 0x55f8ebd4a400 addr [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient(hunting): picked mon.np0005625202 con 0x55f8eb659000 addr [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient(hunting): start opening mon connection
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient(hunting): start opening mon connection
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient(hunting): _renew_subs
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 89 ms_handle_reset con 0x55f8ec9cb400 session 0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient(hunting): get_auth_request con 0x55f8ebd4a400 auth_method 0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient(hunting): get_auth_request method 2 preferred_modes [2,1]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient(hunting): _init_auth method 2
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient(hunting): _init_auth already have auth, reseting
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient(hunting): handle_auth_reply_more payload 9
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient(hunting): handle_auth_reply_more payload_len 9
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient(hunting): handle_auth_reply_more responding with 132 bytes
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient(hunting): handle_auth_done global_id 24226 payload 293
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _finish_hunting 0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: found mon.np0005625203
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _finish_auth 0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:54.383632+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_monmap mon_map magic: 0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient:  got monmap 16 from mon.np0005625203 (according to old e16)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: dump:
                                                          epoch 16
                                                          fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
                                                          last_changed 2026-02-20T09:46:24.360760+0000
                                                          created 2026-02-20T07:36:51.191305+0000
                                                          min_mon_release 18 (reef)
                                                          election_strategy: 1
                                                          0: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005625202
                                                          1: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005625203
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_config config(7 keys)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: set_mon_vals no callback set
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc handle_mgr_map Got map version 36
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.107:6810/2084071713,v1:172.18.0.107:6811/2084071713]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:54.732676+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89284608 unmapped: 540672 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:55.732823+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89284608 unmapped: 540672 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 89 heartbeat osd_stat(store_statfs(0x1ba247000/0x0/0x1bfc00000, data 0x17c3ed4/0x1847000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:56.733070+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89284608 unmapped: 540672 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 790734 data_alloc: 301989888 data_used: 6533120
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:57.733218+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89284608 unmapped: 540672 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:58.733399+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 89 heartbeat osd_stat(store_statfs(0x1ba247000/0x0/0x1bfc00000, data 0x17c3ed4/0x1847000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89284608 unmapped: 540672 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:45:59.733539+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89284608 unmapped: 540672 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:00.733736+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89284608 unmapped: 540672 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:01.733904+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89284608 unmapped: 540672 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 790734 data_alloc: 301989888 data_used: 6533120
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:02.734086+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89284608 unmapped: 540672 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:03.734230+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89284608 unmapped: 540672 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 89 heartbeat osd_stat(store_statfs(0x1ba247000/0x0/0x1bfc00000, data 0x17c3ed4/0x1847000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:04.734440+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89284608 unmapped: 540672 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:05.734577+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89284608 unmapped: 540672 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 89 heartbeat osd_stat(store_statfs(0x1ba247000/0x0/0x1bfc00000, data 0x17c3ed4/0x1847000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:06.734782+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89284608 unmapped: 540672 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 790734 data_alloc: 301989888 data_used: 6533120
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:07.734927+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89284608 unmapped: 540672 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:08.735080+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89284608 unmapped: 540672 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:09.735222+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89284608 unmapped: 540672 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:10.735366+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 89 heartbeat osd_stat(store_statfs(0x1ba247000/0x0/0x1bfc00000, data 0x17c3ed4/0x1847000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89284608 unmapped: 540672 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:11.735512+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89284608 unmapped: 540672 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 790734 data_alloc: 301989888 data_used: 6533120
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:12.735729+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89284608 unmapped: 540672 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:13.735947+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89284608 unmapped: 540672 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 89 heartbeat osd_stat(store_statfs(0x1ba247000/0x0/0x1bfc00000, data 0x17c3ed4/0x1847000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:14.736164+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89284608 unmapped: 540672 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:15.736346+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89284608 unmapped: 540672 heap: 89825280 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_monmap mon_map magic: 0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient:  got monmap 17 from mon.np0005625203 (according to old e17)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: dump:
                                                          epoch 17
                                                          fsid a8557ee9-b55d-5519-942c-cf8f6172f1d8
                                                          last_changed 2026-02-20T09:46:46.606881+0000
                                                          created 2026-02-20T07:36:51.191305+0000
                                                          min_mon_release 18 (reef)
                                                          election_strategy: 1
                                                          0: [v2:172.18.0.103:3300/0,v1:172.18.0.103:6789/0] mon.np0005625202
                                                          1: [v2:172.18.0.104:3300/0,v1:172.18.0.104:6789/0] mon.np0005625203
                                                          2: [v2:172.18.0.105:3300/0,v1:172.18.0.105:6789/0] mon.np0005625204
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:16.736524+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 90.150962830s of 90.217193604s, submitted: 17
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 89 heartbeat osd_stat(store_statfs(0x1ba247000/0x0/0x1bfc00000, data 0x17c3ed4/0x1847000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90374144 unmapped: 499712 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 790910 data_alloc: 301989888 data_used: 6533120
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:17.736707+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90374144 unmapped: 499712 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 89 heartbeat osd_stat(store_statfs(0x1ba247000/0x0/0x1bfc00000, data 0x17c3fee/0x1847000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:18.736832+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90374144 unmapped: 499712 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:19.737024+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90374144 unmapped: 499712 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:20.737168+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90374144 unmapped: 499712 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:21.737317+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc handle_mgr_map Got map version 37
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.107:6810/2084071713,v1:172.18.0.107:6811/2084071713]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90374144 unmapped: 499712 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 790910 data_alloc: 301989888 data_used: 6533120
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:22.737506+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90374144 unmapped: 499712 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 89 heartbeat osd_stat(store_statfs(0x1ba247000/0x0/0x1bfc00000, data 0x17c3fee/0x1847000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:23.737629+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90374144 unmapped: 499712 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:24.737807+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90374144 unmapped: 499712 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:25.737974+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 89 heartbeat osd_stat(store_statfs(0x1ba247000/0x0/0x1bfc00000, data 0x17c3fee/0x1847000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90374144 unmapped: 499712 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 89 heartbeat osd_stat(store_statfs(0x1ba247000/0x0/0x1bfc00000, data 0x17c3fee/0x1847000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:26.738159+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 89 heartbeat osd_stat(store_statfs(0x1ba247000/0x0/0x1bfc00000, data 0x17c3fee/0x1847000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90374144 unmapped: 499712 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 790910 data_alloc: 301989888 data_used: 6533120
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:27.738349+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 89 handle_osd_map epochs [89,90], i have 89, src has [1,90]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.405926704s of 11.415510178s, submitted: 3
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc handle_mgr_map Got map version 38
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc handle_mgr_map Active mgr is now 
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc reconnect Terminating session with v2:172.18.0.107:6810/2084071713
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc reconnect No active mgr available yet
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90423296 unmapped: 450560 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 90 ms_handle_reset con 0x55f8ea76b000 session 0x55f8eb1be960
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 90 ms_handle_reset con 0x55f8ecb41400 session 0x55f8eb63d2c0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 90 ms_handle_reset con 0x55f8eb658800 session 0x55f8eb55e5a0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ecb41400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:28.738482+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc handle_mgr_map Got map version 39
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/689946273,v1:172.18.0.108:6811/689946273]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc reconnect Starting new session with [v2:172.18.0.108:6810/689946273,v1:172.18.0.108:6811/689946273]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: get_auth_request con 0x55f8eb655c00 auth_method 0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89989120 unmapped: 884736 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc handle_mgr_configure stats_period=5
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:29.738605+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 89989120 unmapped: 884736 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8eb659000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8eb659800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:30.738728+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90046464 unmapped: 827392 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc handle_mgr_map Got map version 40
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/689946273,v1:172.18.0.108:6811/689946273]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:31.738854+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90193920 unmapped: 679936 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 793834 data_alloc: 301989888 data_used: 6541312
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:32.738949+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 90 heartbeat osd_stat(store_statfs(0x1ba244000/0x0/0x1bfc00000, data 0x17c64f8/0x184a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90193920 unmapped: 679936 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 90 heartbeat osd_stat(store_statfs(0x1ba244000/0x0/0x1bfc00000, data 0x17c64f8/0x184a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:33.739064+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90193920 unmapped: 679936 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc handle_mgr_map Got map version 41
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/689946273,v1:172.18.0.108:6811/689946273]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:34.739239+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90382336 unmapped: 491520 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:35.739406+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90382336 unmapped: 491520 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 90 heartbeat osd_stat(store_statfs(0x1ba244000/0x0/0x1bfc00000, data 0x17c64f8/0x184a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:36.739559+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90382336 unmapped: 491520 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 793834 data_alloc: 301989888 data_used: 6541312
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:37.739708+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90382336 unmapped: 491520 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:38.739834+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90382336 unmapped: 491520 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:39.739997+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90382336 unmapped: 491520 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:40.740255+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90382336 unmapped: 491520 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:41.740438+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90382336 unmapped: 491520 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 90 heartbeat osd_stat(store_statfs(0x1ba244000/0x0/0x1bfc00000, data 0x17c64f8/0x184a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:42.740645+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 793834 data_alloc: 301989888 data_used: 6541312
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90382336 unmapped: 491520 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:43.740832+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 90 heartbeat osd_stat(store_statfs(0x1ba244000/0x0/0x1bfc00000, data 0x17c64f8/0x184a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90382336 unmapped: 491520 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:44.741039+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90382336 unmapped: 491520 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:45.741263+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90382336 unmapped: 491520 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:46.741560+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90382336 unmapped: 491520 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:47.741742+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 793834 data_alloc: 301989888 data_used: 6541312
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90382336 unmapped: 491520 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:48.741939+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90382336 unmapped: 491520 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 90 heartbeat osd_stat(store_statfs(0x1ba244000/0x0/0x1bfc00000, data 0x17c64f8/0x184a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:49.742308+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90382336 unmapped: 491520 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:50.742543+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:51.742780+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc handle_mgr_map Got map version 42
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.108:6810/689946273,v1:172.18.0.108:6811/689946273]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:52.742943+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 793834 data_alloc: 301989888 data_used: 6541312
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:53.743048+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 90 heartbeat osd_stat(store_statfs(0x1ba244000/0x0/0x1bfc00000, data 0x17c64f8/0x184a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:54.743274+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:55.743463+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:56.743657+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:57.743765+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 793834 data_alloc: 301989888 data_used: 6541312
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:58.743866+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 90 heartbeat osd_stat(store_statfs(0x1ba244000/0x0/0x1bfc00000, data 0x17c64f8/0x184a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:46:59.744009+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:00.744169+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 90 heartbeat osd_stat(store_statfs(0x1ba244000/0x0/0x1bfc00000, data 0x17c64f8/0x184a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:01.744356+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 90 heartbeat osd_stat(store_statfs(0x1ba244000/0x0/0x1bfc00000, data 0x17c64f8/0x184a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:02.744497+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 793834 data_alloc: 301989888 data_used: 6541312
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:03.744664+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 90 heartbeat osd_stat(store_statfs(0x1ba244000/0x0/0x1bfc00000, data 0x17c64f8/0x184a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:04.744852+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:05.745098+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:06.745291+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:07.745465+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 793834 data_alloc: 301989888 data_used: 6541312
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:08.745618+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 90 heartbeat osd_stat(store_statfs(0x1ba244000/0x0/0x1bfc00000, data 0x17c64f8/0x184a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 90 heartbeat osd_stat(store_statfs(0x1ba244000/0x0/0x1bfc00000, data 0x17c64f8/0x184a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:09.745773+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:10.746148+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:11.746344+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:12.746529+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 793834 data_alloc: 301989888 data_used: 6541312
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:13.746709+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:14.746928+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 90 heartbeat osd_stat(store_statfs(0x1ba244000/0x0/0x1bfc00000, data 0x17c64f8/0x184a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:15.747110+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:16.747289+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:17.747473+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 793834 data_alloc: 301989888 data_used: 6541312
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 90 heartbeat osd_stat(store_statfs(0x1ba244000/0x0/0x1bfc00000, data 0x17c64f8/0x184a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:18.747654+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 90 heartbeat osd_stat(store_statfs(0x1ba244000/0x0/0x1bfc00000, data 0x17c64f8/0x184a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:19.747800+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:20.748010+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:21.748201+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:22.748387+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 793834 data_alloc: 301989888 data_used: 6541312
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 90 heartbeat osd_stat(store_statfs(0x1ba244000/0x0/0x1bfc00000, data 0x17c64f8/0x184a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:23.748568+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:24.748941+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:25.749254+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:26.749505+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 90 heartbeat osd_stat(store_statfs(0x1ba244000/0x0/0x1bfc00000, data 0x17c64f8/0x184a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:27.749666+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 793834 data_alloc: 301989888 data_used: 6541312
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:28.750829+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:29.750976+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:30.751749+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:31.752048+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 90 heartbeat osd_stat(store_statfs(0x1ba244000/0x0/0x1bfc00000, data 0x17c64f8/0x184a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:32.752283+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 793834 data_alloc: 301989888 data_used: 6541312
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:33.752519+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:34.752767+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:35.752927+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 90 heartbeat osd_stat(store_statfs(0x1ba244000/0x0/0x1bfc00000, data 0x17c64f8/0x184a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:36.753299+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:37.753534+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 793834 data_alloc: 301989888 data_used: 6541312
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:38.753767+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:39.753961+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 90 heartbeat osd_stat(store_statfs(0x1ba244000/0x0/0x1bfc00000, data 0x17c64f8/0x184a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:40.754119+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:41.754270+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:42.754486+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 793834 data_alloc: 301989888 data_used: 6541312
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:43.754721+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 90 heartbeat osd_stat(store_statfs(0x1ba244000/0x0/0x1bfc00000, data 0x17c64f8/0x184a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:44.754938+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:45.755080+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:46.755338+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:47.755553+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 793834 data_alloc: 301989888 data_used: 6541312
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:48.755756+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 90 heartbeat osd_stat(store_statfs(0x1ba244000/0x0/0x1bfc00000, data 0x17c64f8/0x184a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:49.755938+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 90 heartbeat osd_stat(store_statfs(0x1ba244000/0x0/0x1bfc00000, data 0x17c64f8/0x184a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:50.756222+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 90 heartbeat osd_stat(store_statfs(0x1ba244000/0x0/0x1bfc00000, data 0x17c64f8/0x184a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:51.756400+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90177536 unmapped: 696320 heap: 90873856 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:52.756557+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 90 heartbeat osd_stat(store_statfs(0x1ba244000/0x0/0x1bfc00000, data 0x17c64f8/0x184a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 793834 data_alloc: 301989888 data_used: 6541312
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc handle_mgr_map Got map version 43
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc handle_mgr_map Active mgr is now 
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc reconnect Terminating session with v2:172.18.0.108:6810/689946273
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc reconnect No active mgr available yet
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 90 handle_osd_map epochs [91,91], i have 90, src has [1,91]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 85.018394470s of 85.092674255s, submitted: 15
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 91 ms_handle_reset con 0x55f8eb659800 session 0x55f8eb141680
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 91 ms_handle_reset con 0x55f8eb659000 session 0x55f8ecb3d680
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 91 ms_handle_reset con 0x55f8ecb41400 session 0x55f8e9c69680
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8eb655c00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90316800 unmapped: 1605632 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:53.756688+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc handle_mgr_map Got map version 44
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/1895764224,v1:172.18.0.106:6811/1895764224]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc reconnect Starting new session with [v2:172.18.0.106:6810/1895764224,v1:172.18.0.106:6811/1895764224]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: get_auth_request con 0x55f8ec9cbc00 auth_method 0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc handle_mgr_configure stats_period=5
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90415104 unmapped: 1507328 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:54.756900+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90415104 unmapped: 1507328 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:55.757073+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8eb658800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8eb659000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 91 heartbeat osd_stat(store_statfs(0x1ba23f000/0x0/0x1bfc00000, data 0x17c8cca/0x184e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:56.757211+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:57.757347+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 91 heartbeat osd_stat(store_statfs(0x1ba240000/0x0/0x1bfc00000, data 0x17c8cca/0x184e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 797154 data_alloc: 301989888 data_used: 6549504
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 91 heartbeat osd_stat(store_statfs(0x1ba240000/0x0/0x1bfc00000, data 0x17c8cca/0x184e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc handle_mgr_map Got map version 45
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/1895764224,v1:172.18.0.106:6811/1895764224]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:58.757475+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:47:59.757650+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:00.757822+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:01.758012+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:02.758203+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 797154 data_alloc: 301989888 data_used: 6549504
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:03.758394+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 91 heartbeat osd_stat(store_statfs(0x1ba240000/0x0/0x1bfc00000, data 0x17c8cca/0x184e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:04.758607+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:05.758778+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:06.759014+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:07.759247+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 797154 data_alloc: 301989888 data_used: 6549504
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 91 heartbeat osd_stat(store_statfs(0x1ba240000/0x0/0x1bfc00000, data 0x17c8cca/0x184e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:08.759501+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:09.759724+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:10.759991+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 91 heartbeat osd_stat(store_statfs(0x1ba240000/0x0/0x1bfc00000, data 0x17c8cca/0x184e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:11.760243+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:12.760383+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 797154 data_alloc: 301989888 data_used: 6549504
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:13.760594+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:14.760854+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 91 heartbeat osd_stat(store_statfs(0x1ba240000/0x0/0x1bfc00000, data 0x17c8cca/0x184e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:15.761087+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:16.761291+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:17.761510+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 797154 data_alloc: 301989888 data_used: 6549504
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:18.761715+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:19.761928+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:20.762104+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 91 heartbeat osd_stat(store_statfs(0x1ba240000/0x0/0x1bfc00000, data 0x17c8cca/0x184e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:21.762303+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:22.762506+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 797154 data_alloc: 301989888 data_used: 6549504
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:23.762708+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:24.762994+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:25.763241+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:26.763467+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 91 heartbeat osd_stat(store_statfs(0x1ba240000/0x0/0x1bfc00000, data 0x17c8cca/0x184e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:27.763681+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 797154 data_alloc: 301989888 data_used: 6549504
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 91 heartbeat osd_stat(store_statfs(0x1ba240000/0x0/0x1bfc00000, data 0x17c8cca/0x184e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:28.763901+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:29.764107+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:30.764282+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:31.764500+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:32.764725+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 797154 data_alloc: 301989888 data_used: 6549504
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:33.764850+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 91 heartbeat osd_stat(store_statfs(0x1ba240000/0x0/0x1bfc00000, data 0x17c8cca/0x184e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:34.765127+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:35.765394+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:36.765588+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:37.765809+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 797154 data_alloc: 285212672 data_used: 6549504
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:38.766007+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:39.766190+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 91 heartbeat osd_stat(store_statfs(0x1ba240000/0x0/0x1bfc00000, data 0x17c8cca/0x184e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:40.766374+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:41.766524+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:42.766755+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 797154 data_alloc: 285212672 data_used: 6549504
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:43.767000+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:44.767253+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 91 heartbeat osd_stat(store_statfs(0x1ba240000/0x0/0x1bfc00000, data 0x17c8cca/0x184e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:45.767413+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:46.767608+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:47.767868+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 91 heartbeat osd_stat(store_statfs(0x1ba240000/0x0/0x1bfc00000, data 0x17c8cca/0x184e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 797154 data_alloc: 285212672 data_used: 6549504
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:48.768139+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:49.768338+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 91 heartbeat osd_stat(store_statfs(0x1ba240000/0x0/0x1bfc00000, data 0x17c8cca/0x184e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:50.768543+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 91 heartbeat osd_stat(store_statfs(0x1ba240000/0x0/0x1bfc00000, data 0x17c8cca/0x184e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:51.768729+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:52.768927+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 797154 data_alloc: 285212672 data_used: 6549504
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:53.769113+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:54.769463+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:55.769636+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:56.769824+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 91 heartbeat osd_stat(store_statfs(0x1ba240000/0x0/0x1bfc00000, data 0x17c8cca/0x184e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:57.770002+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 797154 data_alloc: 285212672 data_used: 6549504
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:58.770150+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:48:59.770337+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:00.770463+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:01.770673+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 91 heartbeat osd_stat(store_statfs(0x1ba240000/0x0/0x1bfc00000, data 0x17c8cca/0x184e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 91 heartbeat osd_stat(store_statfs(0x1ba240000/0x0/0x1bfc00000, data 0x17c8cca/0x184e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:02.771700+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 797154 data_alloc: 285212672 data_used: 6549504
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:03.771942+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:04.772132+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:05.772459+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:06.773211+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 91 heartbeat osd_stat(store_statfs(0x1ba240000/0x0/0x1bfc00000, data 0x17c8cca/0x184e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 91 heartbeat osd_stat(store_statfs(0x1ba240000/0x0/0x1bfc00000, data 0x17c8cca/0x184e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:07.775722+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 797154 data_alloc: 285212672 data_used: 6549504
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:08.776360+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:09.777393+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 91 heartbeat osd_stat(store_statfs(0x1ba240000/0x0/0x1bfc00000, data 0x17c8cca/0x184e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:10.777845+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:11.778443+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:12.778589+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 797154 data_alloc: 285212672 data_used: 6549504
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:13.779726+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:14.780494+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:15.780748+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 91 heartbeat osd_stat(store_statfs(0x1ba240000/0x0/0x1bfc00000, data 0x17c8cca/0x184e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:16.781132+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 7800.1 total, 600.0 interval
                                                          Cumulative writes: 5116 writes, 22K keys, 5116 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.00 MB/s
                                                          Cumulative WAL: 5116 writes, 788 syncs, 6.49 writes per sync, written: 0.02 GB, 0.00 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 212 writes, 497 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.47 MB, 0.00 MB/s
                                                          Interval WAL: 212 writes, 99 syncs, 2.14 writes per sync, written: 0.00 GB, 0.00 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:17.781394+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 797154 data_alloc: 285212672 data_used: 6549504
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:18.781913+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:19.782065+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 91 heartbeat osd_stat(store_statfs(0x1ba240000/0x0/0x1bfc00000, data 0x17c8cca/0x184e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:20.782391+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:21.782549+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:22.782729+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 797154 data_alloc: 285212672 data_used: 6549504
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 90.067489624s of 90.143165588s, submitted: 19
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc handle_mgr_map Got map version 46
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/1895764224,v1:172.18.0.106:6811/1895764224]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:23.782929+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 91 heartbeat osd_stat(store_statfs(0x1ba240000/0x0/0x1bfc00000, data 0x17c8de4/0x184e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:24.783216+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:25.783413+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:26.783570+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90505216 unmapped: 1417216 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:27.783703+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 91 heartbeat osd_stat(store_statfs(0x1ba240000/0x0/0x1bfc00000, data 0x17c8de4/0x184e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ecb41c00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90537984 unmapped: 1384448 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 800792 data_alloc: 285212672 data_used: 6549504
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:28.783900+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90562560 unmapped: 1359872 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 91 heartbeat osd_stat(store_statfs(0x1ba23f000/0x0/0x1bfc00000, data 0x17c8e07/0x184f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:29.784215+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 91 handle_osd_map epochs [91,92], i have 91, src has [1,92]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 92 handle_osd_map epochs [92,92], i have 92, src has [1,92]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 92 ms_handle_reset con 0x55f8ecb41c00 session 0x55f8eb550000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90595328 unmapped: 1327104 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 92 handle_osd_map epochs [92,92], i have 92, src has [1,92]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:30.784401+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90595328 unmapped: 1327104 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:31.784560+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 92 handle_osd_map epochs [93,93], i have 92, src has [1,93]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 93 handle_osd_map epochs [93,93], i have 93, src has [1,93]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90660864 unmapped: 1261568 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 93 heartbeat osd_stat(store_statfs(0x1ba239000/0x0/0x1bfc00000, data 0x17cb1a2/0x1855000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:32.784711+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 93 heartbeat osd_stat(store_statfs(0x1ba234000/0x0/0x1bfc00000, data 0x17cd50a/0x1859000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90710016 unmapped: 1212416 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 813060 data_alloc: 285212672 data_used: 6574080
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:33.784995+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90710016 unmapped: 1212416 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:34.786148+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90710016 unmapped: 1212416 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:35.788377+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 93 heartbeat osd_stat(store_statfs(0x1ba234000/0x0/0x1bfc00000, data 0x17cd50a/0x1859000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90726400 unmapped: 1196032 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:36.789422+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90726400 unmapped: 1196032 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:37.791291+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90726400 unmapped: 1196032 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 813060 data_alloc: 285212672 data_used: 6574080
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:38.792848+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90726400 unmapped: 1196032 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:39.793304+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90726400 unmapped: 1196032 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:40.793533+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90726400 unmapped: 1196032 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:41.793954+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 93 heartbeat osd_stat(store_statfs(0x1ba234000/0x0/0x1bfc00000, data 0x17cd50a/0x1859000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90726400 unmapped: 1196032 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:42.794241+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90726400 unmapped: 1196032 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 813060 data_alloc: 285212672 data_used: 6574080
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:43.794433+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 93 heartbeat osd_stat(store_statfs(0x1ba234000/0x0/0x1bfc00000, data 0x17cd50a/0x1859000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90726400 unmapped: 1196032 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:44.795020+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90726400 unmapped: 1196032 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:45.795506+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90726400 unmapped: 1196032 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:46.795968+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90726400 unmapped: 1196032 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:47.796269+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90726400 unmapped: 1196032 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 813060 data_alloc: 285212672 data_used: 6574080
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:48.796699+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 93 heartbeat osd_stat(store_statfs(0x1ba234000/0x0/0x1bfc00000, data 0x17cd50a/0x1859000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90726400 unmapped: 1196032 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:49.796940+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90726400 unmapped: 1196032 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:50.797527+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90726400 unmapped: 1196032 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:51.797798+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90726400 unmapped: 1196032 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:52.798089+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90726400 unmapped: 1196032 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 813060 data_alloc: 285212672 data_used: 6574080
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 93 heartbeat osd_stat(store_statfs(0x1ba234000/0x0/0x1bfc00000, data 0x17cd50a/0x1859000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:53.798246+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 93 heartbeat osd_stat(store_statfs(0x1ba234000/0x0/0x1bfc00000, data 0x17cd50a/0x1859000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90726400 unmapped: 1196032 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:54.798452+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90726400 unmapped: 1196032 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:55.798710+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 93 heartbeat osd_stat(store_statfs(0x1ba234000/0x0/0x1bfc00000, data 0x17cd50a/0x1859000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90726400 unmapped: 1196032 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:56.798868+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90726400 unmapped: 1196032 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:57.799134+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90726400 unmapped: 1196032 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 813060 data_alloc: 285212672 data_used: 6574080
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:58.799446+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90726400 unmapped: 1196032 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:49:59.799630+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90726400 unmapped: 1196032 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:00.799948+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 93 heartbeat osd_stat(store_statfs(0x1ba234000/0x0/0x1bfc00000, data 0x17cd50a/0x1859000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90726400 unmapped: 1196032 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:01.800126+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90726400 unmapped: 1196032 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:02.800348+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 93 heartbeat osd_stat(store_statfs(0x1ba234000/0x0/0x1bfc00000, data 0x17cd50a/0x1859000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90726400 unmapped: 1196032 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 813060 data_alloc: 285212672 data_used: 6574080
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:03.800527+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90726400 unmapped: 1196032 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:04.800689+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90726400 unmapped: 1196032 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 93 heartbeat osd_stat(store_statfs(0x1ba234000/0x0/0x1bfc00000, data 0x17cd50a/0x1859000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:05.800843+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90726400 unmapped: 1196032 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:06.801766+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90726400 unmapped: 1196032 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:07.801966+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90726400 unmapped: 1196032 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 813060 data_alloc: 285212672 data_used: 6574080
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:08.802704+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90726400 unmapped: 1196032 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:09.804011+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 93 heartbeat osd_stat(store_statfs(0x1ba234000/0x0/0x1bfc00000, data 0x17cd50a/0x1859000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90726400 unmapped: 1196032 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:10.804956+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90726400 unmapped: 1196032 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:11.805156+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 93 heartbeat osd_stat(store_statfs(0x1ba234000/0x0/0x1bfc00000, data 0x17cd50a/0x1859000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90726400 unmapped: 1196032 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:12.805631+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90726400 unmapped: 1196032 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 813060 data_alloc: 285212672 data_used: 6574080
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:13.806166+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90726400 unmapped: 1196032 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:14.806404+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90726400 unmapped: 1196032 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:15.806684+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90726400 unmapped: 1196032 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:16.807903+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90726400 unmapped: 1196032 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:17.808988+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 93 heartbeat osd_stat(store_statfs(0x1ba234000/0x0/0x1bfc00000, data 0x17cd50a/0x1859000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 93 heartbeat osd_stat(store_statfs(0x1ba234000/0x0/0x1bfc00000, data 0x17cd50a/0x1859000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90726400 unmapped: 1196032 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 813060 data_alloc: 285212672 data_used: 6574080
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:18.809459+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90726400 unmapped: 1196032 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:19.809637+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90726400 unmapped: 1196032 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:20.809845+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90726400 unmapped: 1196032 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:21.810326+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90726400 unmapped: 1196032 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 93 heartbeat osd_stat(store_statfs(0x1ba234000/0x0/0x1bfc00000, data 0x17cd50a/0x1859000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:22.810687+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90726400 unmapped: 1196032 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 813060 data_alloc: 285212672 data_used: 6574080
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:23.810860+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90726400 unmapped: 1196032 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:24.811107+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90726400 unmapped: 1196032 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:25.811320+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90726400 unmapped: 1196032 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:26.811536+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90726400 unmapped: 1196032 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:27.811691+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 93 heartbeat osd_stat(store_statfs(0x1ba234000/0x0/0x1bfc00000, data 0x17cd50a/0x1859000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec9ca800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 93 ms_handle_reset con 0x55f8ec9ca800 session 0x55f8eb1bf860
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec9cac00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 93 ms_handle_reset con 0x55f8ec9cac00 session 0x55f8eb1bf680
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0ea800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 93 ms_handle_reset con 0x55f8ea0ea800 session 0x55f8eb202000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90742784 unmapped: 1179648 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 813060 data_alloc: 285212672 data_used: 6574080
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:28.811871+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 65.108665466s of 65.377113342s, submitted: 54
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 90726400 unmapped: 1196032 heap: 91922432 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:29.812116+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec982400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 93 ms_handle_reset con 0x55f8ec982400 session 0x55f8eb1d05a0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0ea800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 93 ms_handle_reset con 0x55f8ea0ea800 session 0x55f8ea246960
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec9ca800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 93 ms_handle_reset con 0x55f8ec9ca800 session 0x55f8eb30f2c0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec9cac00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 93 ms_handle_reset con 0x55f8ec9cac00 session 0x55f8eb30e3c0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 92053504 unmapped: 14565376 heap: 106618880 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ecb41c00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 93 ms_handle_reset con 0x55f8ecb41c00 session 0x55f8e9c68960
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebbdac00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 93 ms_handle_reset con 0x55f8ebbdac00 session 0x55f8e9c68d20
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:30.812253+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 92045312 unmapped: 14573568 heap: 106618880 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:31.812423+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0ea800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 92069888 unmapped: 14548992 heap: 106618880 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 93 ms_handle_reset con 0x55f8ea0ea800 session 0x55f8eb63d0e0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:32.812611+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec9ca800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec9cac00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 93 heartbeat osd_stat(store_statfs(0x1b95ca000/0x0/0x1bfc00000, data 0x243653d/0x24c4000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 92102656 unmapped: 14516224 heap: 106618880 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 913412 data_alloc: 285212672 data_used: 6750208
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:34.294274+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 98263040 unmapped: 8355840 heap: 106618880 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:35.294573+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 98263040 unmapped: 8355840 heap: 106618880 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:36.294919+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 98263040 unmapped: 8355840 heap: 106618880 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:37.295243+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 98263040 unmapped: 8355840 heap: 106618880 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:38.295386+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 98263040 unmapped: 8355840 heap: 106618880 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 960292 data_alloc: 301989888 data_used: 13389824
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 93 heartbeat osd_stat(store_statfs(0x1b95ca000/0x0/0x1bfc00000, data 0x243653d/0x24c4000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:39.295533+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 98344960 unmapped: 8273920 heap: 106618880 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:40.295710+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 98344960 unmapped: 8273920 heap: 106618880 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:41.295963+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 98344960 unmapped: 8273920 heap: 106618880 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:42.296928+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 98344960 unmapped: 8273920 heap: 106618880 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.930835724s of 14.136489868s, submitted: 39
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:43.297102+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 103071744 unmapped: 10362880 heap: 113434624 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 93 heartbeat osd_stat(store_statfs(0x1b95ca000/0x0/0x1bfc00000, data 0x243653d/0x24c4000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1057618 data_alloc: 301989888 data_used: 13422592
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:44.297303+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 103505920 unmapped: 9928704 heap: 113434624 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:45.297474+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 93 handle_osd_map epochs [93,94], i have 93, src has [1,94]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 94 handle_osd_map epochs [94,94], i have 94, src has [1,94]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 103260160 unmapped: 10174464 heap: 113434624 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:46.297651+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 100966400 unmapped: 12468224 heap: 113434624 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:47.297790+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 94 ms_handle_reset con 0x55f8ec9ca800 session 0x55f8eb52e960
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 94 ms_handle_reset con 0x55f8ec9cac00 session 0x55f8eb523860
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 100966400 unmapped: 12468224 heap: 113434624 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 94 heartbeat osd_stat(store_statfs(0x1b88b1000/0x0/0x1bfc00000, data 0x314bcb5/0x31dd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ecb41c00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 94 ms_handle_reset con 0x55f8ecb41c00 session 0x55f8eb63d2c0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0b6800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:48.297938+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 94 heartbeat osd_stat(store_statfs(0x1b88b1000/0x0/0x1bfc00000, data 0x314bcb5/0x31dd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 94 handle_osd_map epochs [94,95], i have 94, src has [1,95]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 95 handle_osd_map epochs [95,95], i have 95, src has [1,95]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 95 ms_handle_reset con 0x55f8ea0b6800 session 0x55f8eb63c960
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 100966400 unmapped: 12468224 heap: 113434624 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0b6800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 95 handle_osd_map epochs [95,95], i have 95, src has [1,95]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0ea800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 95 ms_handle_reset con 0x55f8ea0ea800 session 0x55f8e9364b40
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1079533 data_alloc: 301989888 data_used: 13447168
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 95 ms_handle_reset con 0x55f8ea0b6800 session 0x55f8eb52ef00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec9ca800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:49.298077+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 116375552 unmapped: 13492224 heap: 129867776 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 95 ms_handle_reset con 0x55f8ec9ca800 session 0x55f8eb523c20
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec9cac00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ecb41c00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:50.298291+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 95 handle_osd_map epochs [96,96], i have 95, src has [1,96]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 95 handle_osd_map epochs [96,96], i have 96, src has [1,96]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 117170176 unmapped: 12697600 heap: 129867776 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 96 ms_handle_reset con 0x55f8ec9cac00 session 0x55f8eb780000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.a] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 96 handle_osd_map epochs [96,96], i have 96, src has [1,96]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.5] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0b7c00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.12] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 96 ms_handle_reset con 0x55f8ecb41c00 session 0x55f8eb30f0e0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:51.298500+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 117219328 unmapped: 12648448 heap: 129867776 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 96 handle_osd_map epochs [97,97], i have 96, src has [1,97]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _renew_subs
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 96 handle_osd_map epochs [97,97], i have 97, src has [1,97]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 97 ms_handle_reset con 0x55f8ea0b7c00 session 0x55f8ed33f860
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:52.298645+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 117301248 unmapped: 12566528 heap: 129867776 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0b6800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.906193733s of 10.000569344s, submitted: 265
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 97 heartbeat osd_stat(store_statfs(0x1b75b9000/0x0/0x1bfc00000, data 0x443f571/0x44d5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [0,0,1])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:53.298794+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 97 ms_handle_reset con 0x55f8ea0b6800 session 0x55f8e9540b40
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 99328000 unmapped: 30539776 heap: 129867776 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1000672 data_alloc: 285212672 data_used: 6606848
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:54.299009+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0ea800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 106749952 unmapped: 26796032 heap: 133545984 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 97 ms_handle_reset con 0x55f8ea0ea800 session 0x55f8e9541e00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec9ca800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 97 ms_handle_reset con 0x55f8ec9ca800 session 0x55f8eb140960
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec9cac00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 97 ms_handle_reset con 0x55f8ec9cac00 session 0x55f8eb1403c0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0b6800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 97 ms_handle_reset con 0x55f8ea0b6800 session 0x55f8eb140f00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0b7c00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:55.299225+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0ea800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 97 ms_handle_reset con 0x55f8ea0ea800 session 0x55f8ed33e960
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 97058816 unmapped: 36487168 heap: 133545984 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 97 handle_osd_map epochs [97,98], i have 97, src has [1,98]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 98 handle_osd_map epochs [98,98], i have 98, src has [1,98]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 98 ms_handle_reset con 0x55f8ea0b7c00 session 0x55f8ed33f2c0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 98 heartbeat osd_stat(store_statfs(0x1b7fbd000/0x0/0x1bfc00000, data 0x3a38963/0x3ad0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec9ca800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:56.299403+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 98 ms_handle_reset con 0x55f8ec9ca800 session 0x55f8eb141680
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e8f27400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 98 ms_handle_reset con 0x55f8e8f27400 session 0x55f8ecb3cd20
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 96198656 unmapped: 37347328 heap: 133545984 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e8f27400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 98 ms_handle_reset con 0x55f8e8f27400 session 0x55f8ecb3c960
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0b6800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 98 ms_handle_reset con 0x55f8ea0b6800 session 0x55f8ecb3cb40
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0b7c00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 98 ms_handle_reset con 0x55f8ea0b7c00 session 0x55f8ecb3c3c0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:57.299576+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 96206848 unmapped: 37339136 heap: 133545984 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0ea800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _renew_subs
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 98 handle_osd_map epochs [99,99], i have 98, src has [1,99]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.a] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.5] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.12] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 99 ms_handle_reset con 0x55f8ea0ea800 session 0x55f8ecb3d4a0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec9ca800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 99 ms_handle_reset con 0x55f8ec9ca800 session 0x55f8ecb3d680
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e8f27400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 99 ms_handle_reset con 0x55f8e8f27400 session 0x55f8eb1bf860
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0b6800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 99 ms_handle_reset con 0x55f8ea0b6800 session 0x55f8eb1bf680
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0b7c00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:58.299751+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 99 handle_osd_map epochs [99,99], i have 99, src has [1,99]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 99 handle_osd_map epochs [99,99], i have 99, src has [1,99]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 99 ms_handle_reset con 0x55f8ea0b7c00 session 0x55f8eb1bf4a0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0ea800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 99 ms_handle_reset con 0x55f8ea0ea800 session 0x55f8ed34c1e0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec9ca800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec9ca400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 99 ms_handle_reset con 0x55f8ec9ca400 session 0x55f8eb30fe00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e8f27400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 99 ms_handle_reset con 0x55f8e8f27400 session 0x55f8e9c69c20
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0b6800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 99 ms_handle_reset con 0x55f8ea0b6800 session 0x55f8eb5501e0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 96239616 unmapped: 37306368 heap: 133545984 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0b7c00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 99 handle_osd_map epochs [100,100], i have 99, src has [1,100]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1193639 data_alloc: 285212672 data_used: 6914048
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:50:59.299941+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 100 handle_osd_map epochs [99,100], i have 100, src has [1,100]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 100 heartbeat osd_stat(store_statfs(0x1b75a1000/0x0/0x1bfc00000, data 0x444d1dc/0x44ec000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [0,0,0,0,0,0,4,3])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0ea800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 100 ms_handle_reset con 0x55f8ea0ea800 session 0x55f8eb781c20
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 100 ms_handle_reset con 0x55f8ea0b7c00 session 0x55f8eb55f860
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113524736 unmapped: 27860992 heap: 141385728 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 100 ms_handle_reset con 0x55f8ec9ca800 session 0x55f8ed34c3c0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e8f27400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 100 ms_handle_reset con 0x55f8e8f27400 session 0x55f8eb7c85a0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 100 heartbeat osd_stat(store_statfs(0x1b75a1000/0x0/0x1bfc00000, data 0x444d1dc/0x44ec000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0b6800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0b7c00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0ea800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 100 ms_handle_reset con 0x55f8ea0b6800 session 0x55f8ed34c780
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:00.300381+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 100 ms_handle_reset con 0x55f8ea0ea800 session 0x55f8ecb3c780
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec9ca400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec9cb400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec9cb000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113704960 unmapped: 27680768 heap: 141385728 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0d2000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 100 handle_osd_map epochs [100,101], i have 100, src has [1,101]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 101 handle_osd_map epochs [101,101], i have 101, src has [1,101]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ecb41800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 101 handle_osd_map epochs [101,101], i have 101, src has [1,101]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:01.300521+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 101 ms_handle_reset con 0x55f8ecb41800 session 0x55f8eb3010e0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 101 ms_handle_reset con 0x55f8ea0b7c00 session 0x55f8eb7c9680
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e8f27400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0b6800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113942528 unmapped: 27443200 heap: 141385728 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0ea800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:02.300718+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 101 handle_osd_map epochs [102,102], i have 101, src has [1,102]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 122462208 unmapped: 18923520 heap: 141385728 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.665853500s of 10.035615921s, submitted: 337
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 102 ms_handle_reset con 0x55f8ea0ea800 session 0x55f8ed34d0e0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:03.300925+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 102 heartbeat osd_stat(store_statfs(0x1b5cb2000/0x0/0x1bfc00000, data 0x5d379bf/0x5ddc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 119070720 unmapped: 22315008 heap: 141385728 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1454838 data_alloc: 318767104 data_used: 26312704
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:04.302688+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 102 heartbeat osd_stat(store_statfs(0x1b6f5a000/0x0/0x1bfc00000, data 0x4a4a95d/0x4aee000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 120324096 unmapped: 21061632 heap: 141385728 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:05.303360+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 102 heartbeat osd_stat(store_statfs(0x1b6f5a000/0x0/0x1bfc00000, data 0x4a4a95d/0x4aee000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 120406016 unmapped: 20979712 heap: 141385728 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:06.303525+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 120406016 unmapped: 20979712 heap: 141385728 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:07.303711+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 120471552 unmapped: 20914176 heap: 141385728 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 102 handle_osd_map epochs [103,103], i have 102, src has [1,103]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:08.303841+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 120471552 unmapped: 20914176 heap: 141385728 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1470224 data_alloc: 318767104 data_used: 27398144
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 103 heartbeat osd_stat(store_statfs(0x1b6f9b000/0x0/0x1bfc00000, data 0x4a4cbab/0x4af2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:09.304002+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 120504320 unmapped: 20881408 heap: 141385728 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:10.304382+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 120504320 unmapped: 20881408 heap: 141385728 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:11.304554+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ecb41800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8eb658400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 120995840 unmapped: 20389888 heap: 141385728 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 103 heartbeat osd_stat(store_statfs(0x1b6f9b000/0x0/0x1bfc00000, data 0x4a4cbab/0x4af2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [0,0,3,1,1,2])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8eb658000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:12.304681+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebbdd400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 122724352 unmapped: 18661376 heap: 141385728 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.627667427s of 10.015462875s, submitted: 137
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 103 ms_handle_reset con 0x55f8ec9ca400 session 0x55f8ed34cd20
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 103 ms_handle_reset con 0x55f8ec9cb400 session 0x55f8eb3005a0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:13.304831+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 103 ms_handle_reset con 0x55f8ec9cb000 session 0x55f8ed34cf00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 103 ms_handle_reset con 0x55f8ea0d2000 session 0x55f8eb140f00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 129343488 unmapped: 12042240 heap: 141385728 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1673000 data_alloc: 318767104 data_used: 27803648
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:14.304985+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 103 heartbeat osd_stat(store_statfs(0x1b559d000/0x0/0x1bfc00000, data 0x644bbab/0x64f1000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 132145152 unmapped: 9240576 heap: 141385728 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:15.305199+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 129564672 unmapped: 11821056 heap: 141385728 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:16.305502+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 129867776 unmapped: 11517952 heap: 141385728 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:17.306008+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 103 ms_handle_reset con 0x55f8e8f27400 session 0x55f8eb242000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 103 ms_handle_reset con 0x55f8ea0b6800 session 0x55f8eb1be960
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 129867776 unmapped: 11517952 heap: 141385728 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:18.306310+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0d2000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 103 ms_handle_reset con 0x55f8ea0d2000 session 0x55f8eb140780
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 130220032 unmapped: 11165696 heap: 141385728 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0ea800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1698576 data_alloc: 318767104 data_used: 28221440
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:19.306599+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 103 ms_handle_reset con 0x55f8ecb41800 session 0x55f8eb1c0000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 103 ms_handle_reset con 0x55f8eb658400 session 0x55f8eb203680
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 103 handle_osd_map epochs [103,104], i have 103, src has [1,104]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 104 handle_osd_map epochs [104,104], i have 104, src has [1,104]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 128811008 unmapped: 12574720 heap: 141385728 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 104 ms_handle_reset con 0x55f8ea0ea800 session 0x55f8eb63cd20
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0b6800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0d2000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ecb41800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 104 ms_handle_reset con 0x55f8ecb41800 session 0x55f8ed34d4a0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 104 ms_handle_reset con 0x55f8ea0b6800 session 0x55f8eb242780
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec9ca400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:20.306756+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 104 heartbeat osd_stat(store_statfs(0x1b54e3000/0x0/0x1bfc00000, data 0x6501f75/0x65aa000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [0,0,0,0,1])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 134332416 unmapped: 21626880 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 104 ms_handle_reset con 0x55f8eb658000 session 0x55f8eb1bf2c0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 104 ms_handle_reset con 0x55f8ebbdd400 session 0x55f8e9365e00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 104 ms_handle_reset con 0x55f8ec9ca400 session 0x55f8eb1c0960
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0b6800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 104 ms_handle_reset con 0x55f8ea0d2000 session 0x55f8eb300d20
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:21.306937+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0ea800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 104 heartbeat osd_stat(store_statfs(0x1b5c17000/0x0/0x1bfc00000, data 0x5dd1ea1/0x5e77000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 104 handle_osd_map epochs [104,105], i have 104, src has [1,105]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 131334144 unmapped: 24625152 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 105 ms_handle_reset con 0x55f8ea0b6800 session 0x55f8eb140000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.12] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.a] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 105 ms_handle_reset con 0x55f8ea0ea800 session 0x55f8eb5423c0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.5] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:22.307084+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 105 handle_osd_map epochs [105,106], i have 105, src has [1,106]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 131325952 unmapped: 24633344 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 106 handle_osd_map epochs [106,106], i have 106, src has [1,106]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.117623329s of 10.230771065s, submitted: 276
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:23.307517+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 131325952 unmapped: 24633344 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1481716 data_alloc: 301989888 data_used: 23502848
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:24.307690+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8eb658000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 106 ms_handle_reset con 0x55f8eb658000 session 0x55f8eb63c960
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0b6800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 106 ms_handle_reset con 0x55f8ea0b6800 session 0x55f8ed34d860
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0d2000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 106 ms_handle_reset con 0x55f8ea0d2000 session 0x55f8eb55f860
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0ea800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 106 ms_handle_reset con 0x55f8ea0ea800 session 0x55f8eb781c20
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec9ca400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ecb41800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 127705088 unmapped: 28254208 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 106 ms_handle_reset con 0x55f8ec9ca400 session 0x55f8eb5510e0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:25.307966+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec9cb000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 106 ms_handle_reset con 0x55f8ec9cb000 session 0x55f8ebd825a0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0b6800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 106 ms_handle_reset con 0x55f8ea0b6800 session 0x55f8ebd82b40
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0d2000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 106 ms_handle_reset con 0x55f8ea0d2000 session 0x55f8eb781860
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0ea800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 106 ms_handle_reset con 0x55f8ea0ea800 session 0x55f8eb242b40
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 106 ms_handle_reset con 0x55f8ecb41800 session 0x55f8eb52f680
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112230400 unmapped: 43728896 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 106 heartbeat osd_stat(store_statfs(0x1b770b000/0x0/0x1bfc00000, data 0x3a4e528/0x3af3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x416f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:26.308147+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112230400 unmapped: 43728896 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:27.308369+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 106 handle_osd_map epochs [107,107], i have 106, src has [1,107]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.12] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.a] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.5] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.1c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[2.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112254976 unmapped: 43704320 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:28.308535+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec9ca400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 107 ms_handle_reset con 0x55f8ec9ca400 session 0x55f8eb1bf680
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0b6800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 107 ms_handle_reset con 0x55f8ea0b6800 session 0x55f8eb1be000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0d2000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 107 ms_handle_reset con 0x55f8ea0d2000 session 0x55f8eb1be960
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112263168 unmapped: 43696128 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1215115 data_alloc: 285212672 data_used: 2859008
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0ea800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 107 handle_osd_map epochs [107,107], i have 107, src has [1,107]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 107 ms_handle_reset con 0x55f8ea0ea800 session 0x55f8eb1bf860
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ecb41800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:29.308684+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 107 ms_handle_reset con 0x55f8ecb41800 session 0x55f8eb202000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec9cb000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 107 ms_handle_reset con 0x55f8ec9cb000 session 0x55f8ea218960
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0b6800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 107 ms_handle_reset con 0x55f8ea0b6800 session 0x55f8ea246960
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0d2000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 107 ms_handle_reset con 0x55f8ea0d2000 session 0x55f8e9364000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0ea800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 116326400 unmapped: 39632896 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 107 heartbeat osd_stat(store_statfs(0x1b7265000/0x0/0x1bfc00000, data 0x4381786/0x4429000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [0,0,0,0,0,0,0,0,0,2])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:30.308946+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 107 ms_handle_reset con 0x55f8ea0ea800 session 0x55f8eb5325a0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 116531200 unmapped: 39428096 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:31.309155+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 116531200 unmapped: 39428096 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:32.309336+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ecb41800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 107 heartbeat osd_stat(store_statfs(0x1b6eb6000/0x0/0x1bfc00000, data 0x4730786/0x47d8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 107 ms_handle_reset con 0x55f8ecb41800 session 0x55f8eb55fe00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec9cb400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec014400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 116539392 unmapped: 39419904 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec014800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:33.309539+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 107 handle_osd_map epochs [107,108], i have 107, src has [1,108]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.961277008s of 10.367638588s, submitted: 96
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 108 handle_osd_map epochs [108,108], i have 108, src has [1,108]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8eda54000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113672192 unmapped: 42287104 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1333296 data_alloc: 285212672 data_used: 5009408
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:34.309708+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 108 ms_handle_reset con 0x55f8eda54000 session 0x55f8eb1be960
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113885184 unmapped: 42074112 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0b6800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 108 handle_osd_map epochs [108,108], i have 108, src has [1,108]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0d2000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:35.310091+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 108 ms_handle_reset con 0x55f8ec014800 session 0x55f8eb1bef00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 114311168 unmapped: 41648128 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:36.310246+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 108 heartbeat osd_stat(store_statfs(0x1b86b1000/0x0/0x1bfc00000, data 0x2f32b03/0x2fdc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 116244480 unmapped: 39714816 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:37.310389+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 108 handle_osd_map epochs [108,109], i have 108, src has [1,109]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 118284288 unmapped: 37675008 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:38.310526+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 109 ms_handle_reset con 0x55f8ec9cb400 session 0x55f8ed33e5a0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 109 handle_osd_map epochs [109,109], i have 109, src has [1,109]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 109 ms_handle_reset con 0x55f8ec014400 session 0x55f8eb523a40
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 109 heartbeat osd_stat(store_statfs(0x1b86ad000/0x0/0x1bfc00000, data 0x2f34d51/0x2fe0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0ea800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 118292480 unmapped: 37666816 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1237666 data_alloc: 301989888 data_used: 16379904
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:39.310673+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 109 ms_handle_reset con 0x55f8ea0ea800 session 0x55f8eb550780
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 115752960 unmapped: 40206336 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:40.310842+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 115752960 unmapped: 40206336 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:41.311024+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 109 heartbeat osd_stat(store_statfs(0x1b938f000/0x0/0x1bfc00000, data 0x2254d41/0x22ff000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 115752960 unmapped: 40206336 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:42.311231+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 115752960 unmapped: 40206336 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:43.311387+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 115752960 unmapped: 40206336 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1101962 data_alloc: 301989888 data_used: 10002432
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:44.311519+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 115752960 unmapped: 40206336 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:45.311730+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.983040810s of 12.153132439s, submitted: 101
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 116473856 unmapped: 39485440 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:46.311925+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 116539392 unmapped: 39419904 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:47.312168+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 109 heartbeat osd_stat(store_statfs(0x1b88d9000/0x0/0x1bfc00000, data 0x2d0ad41/0x2db5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 116539392 unmapped: 39419904 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:48.312388+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 116875264 unmapped: 39084032 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1185500 data_alloc: 301989888 data_used: 10002432
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:49.312644+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 116875264 unmapped: 39084032 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:50.312825+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 116875264 unmapped: 39084032 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:51.313028+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 116875264 unmapped: 39084032 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:52.313284+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 109 heartbeat osd_stat(store_statfs(0x1b88d1000/0x0/0x1bfc00000, data 0x2d12d41/0x2dbd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 116875264 unmapped: 39084032 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:53.313441+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 116875264 unmapped: 39084032 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1185500 data_alloc: 301989888 data_used: 10002432
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:54.313669+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 116883456 unmapped: 39075840 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:55.314042+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 109 heartbeat osd_stat(store_statfs(0x1b88d1000/0x0/0x1bfc00000, data 0x2d12d41/0x2dbd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 116883456 unmapped: 39075840 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:56.314222+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 109 heartbeat osd_stat(store_statfs(0x1b88d1000/0x0/0x1bfc00000, data 0x2d12d41/0x2dbd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.936312675s of 11.132122993s, submitted: 57
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 109 ms_handle_reset con 0x55f8ea0b6800 session 0x55f8e9540b40
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 109 ms_handle_reset con 0x55f8ea0d2000 session 0x55f8e9364b40
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 116883456 unmapped: 39075840 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0ea800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:57.314392+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 109 ms_handle_reset con 0x55f8ea0ea800 session 0x55f8eb1bf860
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112771072 unmapped: 43188224 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:58.314631+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112771072 unmapped: 43188224 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 983264 data_alloc: 285212672 data_used: 2875392
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:51:59.314794+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112771072 unmapped: 43188224 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:00.315043+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 109 heartbeat osd_stat(store_statfs(0x1b9df4000/0x0/0x1bfc00000, data 0x17f0d0e/0x1899000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112771072 unmapped: 43188224 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:01.315380+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 109 heartbeat osd_stat(store_statfs(0x1b9df4000/0x0/0x1bfc00000, data 0x17f0d0e/0x1899000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112771072 unmapped: 43188224 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:02.315587+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112771072 unmapped: 43188224 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:03.315918+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112771072 unmapped: 43188224 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 983264 data_alloc: 285212672 data_used: 2875392
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:04.316116+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112771072 unmapped: 43188224 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:05.316337+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112771072 unmapped: 43188224 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:06.316664+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 109 heartbeat osd_stat(store_statfs(0x1b9df4000/0x0/0x1bfc00000, data 0x17f0d0e/0x1899000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112771072 unmapped: 43188224 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:07.317000+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112771072 unmapped: 43188224 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:08.317276+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112771072 unmapped: 43188224 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 983264 data_alloc: 285212672 data_used: 2875392
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:09.317722+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112771072 unmapped: 43188224 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:10.318147+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112771072 unmapped: 43188224 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:11.318559+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112771072 unmapped: 43188224 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 109 heartbeat osd_stat(store_statfs(0x1b9df4000/0x0/0x1bfc00000, data 0x17f0d0e/0x1899000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:12.318805+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112771072 unmapped: 43188224 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:13.319031+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112771072 unmapped: 43188224 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 983264 data_alloc: 285212672 data_used: 2875392
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:14.319247+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.606176376s of 17.764093399s, submitted: 37
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 109 heartbeat osd_stat(store_statfs(0x1b9df4000/0x0/0x1bfc00000, data 0x17f0d0e/0x1899000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 109 handle_osd_map epochs [110,110], i have 109, src has [1,110]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec014400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 109 ms_handle_reset con 0x55f8ec014400 session 0x55f8eb30f2c0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 109 handle_osd_map epochs [110,110], i have 110, src has [1,110]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112844800 unmapped: 43114496 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 110 handle_osd_map epochs [110,110], i have 110, src has [1,110]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:15.319541+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112869376 unmapped: 43089920 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec014800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:16.319720+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112869376 unmapped: 43089920 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 110 handle_osd_map epochs [111,111], i have 110, src has [1,111]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:17.319889+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 111 ms_handle_reset con 0x55f8ec014800 session 0x55f8eb112b40
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112877568 unmapped: 43081728 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:18.320310+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 111 heartbeat osd_stat(store_statfs(0x1b9dea000/0x0/0x1bfc00000, data 0x17f5432/0x18a1000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 111 heartbeat osd_stat(store_statfs(0x1b9dea000/0x0/0x1bfc00000, data 0x17f5432/0x18a1000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112877568 unmapped: 43081728 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 992082 data_alloc: 285212672 data_used: 2887680
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:19.320594+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 111 heartbeat osd_stat(store_statfs(0x1b9dea000/0x0/0x1bfc00000, data 0x17f5432/0x18a1000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112877568 unmapped: 43081728 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:20.320960+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112877568 unmapped: 43081728 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:21.321450+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112877568 unmapped: 43081728 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:22.321664+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112877568 unmapped: 43081728 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:23.321836+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _renew_subs
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 111 handle_osd_map epochs [112,112], i have 111, src has [1,112]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 112 heartbeat osd_stat(store_statfs(0x1b9dea000/0x0/0x1bfc00000, data 0x17f5432/0x18a1000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112902144 unmapped: 43057152 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:24.322035+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 993294 data_alloc: 285212672 data_used: 2887680
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112902144 unmapped: 43057152 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:25.322224+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112902144 unmapped: 43057152 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:26.322365+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112902144 unmapped: 43057152 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 112 heartbeat osd_stat(store_statfs(0x1b9de8000/0x0/0x1bfc00000, data 0x17f7680/0x18a5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:27.322551+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112902144 unmapped: 43057152 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:28.323407+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112918528 unmapped: 43040768 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:29.323577+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 993294 data_alloc: 285212672 data_used: 2887680
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 112 heartbeat osd_stat(store_statfs(0x1b9de8000/0x0/0x1bfc00000, data 0x17f7680/0x18a5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112918528 unmapped: 43040768 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:30.323713+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112918528 unmapped: 43040768 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:31.323852+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112918528 unmapped: 43040768 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:32.324165+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 112 heartbeat osd_stat(store_statfs(0x1b9de8000/0x0/0x1bfc00000, data 0x17f7680/0x18a5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112918528 unmapped: 43040768 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:33.324438+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112918528 unmapped: 43040768 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:34.324585+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 993294 data_alloc: 285212672 data_used: 2887680
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 112 heartbeat osd_stat(store_statfs(0x1b9de8000/0x0/0x1bfc00000, data 0x17f7680/0x18a5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112918528 unmapped: 43040768 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 112 heartbeat osd_stat(store_statfs(0x1b9de8000/0x0/0x1bfc00000, data 0x17f7680/0x18a5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:35.324837+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112918528 unmapped: 43040768 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:36.325088+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112918528 unmapped: 43040768 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:37.325302+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 112 heartbeat osd_stat(store_statfs(0x1b9de8000/0x0/0x1bfc00000, data 0x17f7680/0x18a5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112918528 unmapped: 43040768 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:38.325438+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112918528 unmapped: 43040768 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:39.325577+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 993294 data_alloc: 285212672 data_used: 2887680
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112918528 unmapped: 43040768 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:40.325721+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112918528 unmapped: 43040768 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:41.325905+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112918528 unmapped: 43040768 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:42.326118+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112918528 unmapped: 43040768 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:43.326345+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 112 heartbeat osd_stat(store_statfs(0x1b9de8000/0x0/0x1bfc00000, data 0x17f7680/0x18a5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112918528 unmapped: 43040768 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:44.326573+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 993294 data_alloc: 285212672 data_used: 2887680
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 112 heartbeat osd_stat(store_statfs(0x1b9de8000/0x0/0x1bfc00000, data 0x17f7680/0x18a5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112918528 unmapped: 43040768 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:45.326812+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112918528 unmapped: 43040768 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:46.327036+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112918528 unmapped: 43040768 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:47.327219+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 112 heartbeat osd_stat(store_statfs(0x1b9de8000/0x0/0x1bfc00000, data 0x17f7680/0x18a5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112918528 unmapped: 43040768 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:48.327443+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112918528 unmapped: 43040768 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:49.327644+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 993294 data_alloc: 285212672 data_used: 2887680
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112918528 unmapped: 43040768 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:50.327821+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 112 heartbeat osd_stat(store_statfs(0x1b9de8000/0x0/0x1bfc00000, data 0x17f7680/0x18a5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112926720 unmapped: 43032576 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:51.328070+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112934912 unmapped: 43024384 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:52.328341+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112943104 unmapped: 43016192 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:53.328513+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112943104 unmapped: 43016192 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:54.328827+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 993294 data_alloc: 285212672 data_used: 2887680
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112943104 unmapped: 43016192 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:55.329110+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 112 heartbeat osd_stat(store_statfs(0x1b9de8000/0x0/0x1bfc00000, data 0x17f7680/0x18a5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112943104 unmapped: 43016192 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:56.329386+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112943104 unmapped: 43016192 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:57.329613+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112943104 unmapped: 43016192 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:58.329862+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 112 heartbeat osd_stat(store_statfs(0x1b9de8000/0x0/0x1bfc00000, data 0x17f7680/0x18a5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112951296 unmapped: 43008000 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:52:59.330076+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 993294 data_alloc: 285212672 data_used: 2887680
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112951296 unmapped: 43008000 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:00.330266+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:01.330491+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112951296 unmapped: 43008000 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:02.330726+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112951296 unmapped: 43008000 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:03.331021+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112951296 unmapped: 43008000 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 112 heartbeat osd_stat(store_statfs(0x1b9de8000/0x0/0x1bfc00000, data 0x17f7680/0x18a5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:04.331273+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112951296 unmapped: 43008000 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 993294 data_alloc: 285212672 data_used: 2887680
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:05.331597+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112951296 unmapped: 43008000 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:06.331745+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112951296 unmapped: 43008000 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:07.332065+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 112 heartbeat osd_stat(store_statfs(0x1b9de8000/0x0/0x1bfc00000, data 0x17f7680/0x18a5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112959488 unmapped: 42999808 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 112 heartbeat osd_stat(store_statfs(0x1b9de8000/0x0/0x1bfc00000, data 0x17f7680/0x18a5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:08.332226+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112967680 unmapped: 42991616 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:09.332443+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112967680 unmapped: 42991616 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 993294 data_alloc: 285212672 data_used: 2887680
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 112 heartbeat osd_stat(store_statfs(0x1b9de8000/0x0/0x1bfc00000, data 0x17f7680/0x18a5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:10.333801+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112967680 unmapped: 42991616 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:11.333941+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112967680 unmapped: 42991616 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:12.334158+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112967680 unmapped: 42991616 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 112 heartbeat osd_stat(store_statfs(0x1b9de8000/0x0/0x1bfc00000, data 0x17f7680/0x18a5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:13.334341+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112967680 unmapped: 42991616 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:14.334480+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112967680 unmapped: 42991616 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 993294 data_alloc: 285212672 data_used: 2887680
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:15.334720+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112975872 unmapped: 42983424 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:16.334942+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112975872 unmapped: 42983424 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 112 heartbeat osd_stat(store_statfs(0x1b9de8000/0x0/0x1bfc00000, data 0x17f7680/0x18a5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:17.335069+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112975872 unmapped: 42983424 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:18.335232+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 112 handle_osd_map epochs [113,113], i have 112, src has [1,113]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 63.526645660s of 63.674800873s, submitted: 50
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113025024 unmapped: 42934272 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:19.335356+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113025024 unmapped: 42934272 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 996968 data_alloc: 285212672 data_used: 2887680
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 113 heartbeat osd_stat(store_statfs(0x1b9de3000/0x0/0x1bfc00000, data 0x17f99e8/0x18a9000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:20.335482+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113025024 unmapped: 42934272 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:21.335620+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 112836608 unmapped: 43122688 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec9cb400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 113 handle_osd_map epochs [113,114], i have 113, src has [1,114]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:22.335704+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113909760 unmapped: 42049536 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:23.335858+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _renew_subs
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 114 handle_osd_map epochs [115,115], i have 114, src has [1,115]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 114016256 unmapped: 41943040 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 115 ms_handle_reset con 0x55f8ec9cb400 session 0x55f8eb55f2c0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:24.336103+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113999872 unmapped: 41959424 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1228219 data_alloc: 285212672 data_used: 2899968
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 115 heartbeat osd_stat(store_statfs(0x1b7dd9000/0x0/0x1bfc00000, data 0x37fe142/0x38b3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ecb41800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 115 ms_handle_reset con 0x55f8ecb41800 session 0x55f8eb5505a0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:25.336307+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 115 handle_osd_map epochs [116,116], i have 115, src has [1,116]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113983488 unmapped: 41975808 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 115 handle_osd_map epochs [116,116], i have 116, src has [1,116]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ecb41800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:26.336449+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113770496 unmapped: 42188800 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 116 handle_osd_map epochs [117,117], i have 116, src has [1,117]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0d2000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 117 handle_osd_map epochs [117,117], i have 117, src has [1,117]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 117 ms_handle_reset con 0x55f8ea0d2000 session 0x55f8e9364780
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 117 ms_handle_reset con 0x55f8ecb41800 session 0x55f8eb5501e0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0ea800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:27.336584+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113852416 unmapped: 42106880 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 117 handle_osd_map epochs [117,118], i have 117, src has [1,118]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 118 ms_handle_reset con 0x55f8ea0ea800 session 0x55f8eb550f00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _renew_subs
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 118 handle_osd_map epochs [119,119], i have 118, src has [1,119]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:28.336753+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec014800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.536237717s of 10.001704216s, submitted: 113
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 119 ms_handle_reset con 0x55f8ec014800 session 0x55f8eb243c20
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113901568 unmapped: 42057728 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 119 heartbeat osd_stat(store_statfs(0x1b9dce000/0x0/0x1bfc00000, data 0x1804c60/0x18bf000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:29.336872+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113901568 unmapped: 42057728 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1030242 data_alloc: 285212672 data_used: 2912256
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 119 heartbeat osd_stat(store_statfs(0x1b9dc8000/0x0/0x1bfc00000, data 0x1806edb/0x18c4000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:30.337105+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113901568 unmapped: 42057728 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec9cb400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8eda54400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:31.337244+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113901568 unmapped: 42057728 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:32.337412+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113901568 unmapped: 42057728 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 119 heartbeat osd_stat(store_statfs(0x1b9dca000/0x0/0x1bfc00000, data 0x1806edb/0x18c4000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:33.337565+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113901568 unmapped: 42057728 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:34.337720+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113909760 unmapped: 42049536 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1030574 data_alloc: 285212672 data_used: 2912256
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:35.337959+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113917952 unmapped: 42041344 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:36.338129+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113934336 unmapped: 42024960 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 119 ms_handle_reset con 0x55f8ec9cb400 session 0x55f8eb2434a0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 119 ms_handle_reset con 0x55f8eda54400 session 0x55f8e93641e0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec9cb400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 119 ms_handle_reset con 0x55f8ec9cb400 session 0x55f8eb242960
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0d2000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:37.338282+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _renew_subs
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 119 handle_osd_map epochs [120,120], i have 119, src has [1,120]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113401856 unmapped: 42557440 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 120 ms_handle_reset con 0x55f8ea0d2000 session 0x55f8eb7c8780
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0ea800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _renew_subs
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 120 handle_osd_map epochs [121,121], i have 120, src has [1,121]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 121 heartbeat osd_stat(store_statfs(0x1b9dc9000/0x0/0x1bfc00000, data 0x18076da/0x18c5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:38.338611+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec014800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113410048 unmapped: 42549248 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.974642754s of 10.159323692s, submitted: 55
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 121 ms_handle_reset con 0x55f8ec014800 session 0x55f8eb52f2c0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 121 handle_osd_map epochs [122,122], i have 121, src has [1,122]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 122 handle_osd_map epochs [122,122], i have 122, src has [1,122]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 122 ms_handle_reset con 0x55f8ea0ea800 session 0x55f8ebd832c0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:39.338792+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113442816 unmapped: 42516480 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1038611 data_alloc: 285212672 data_used: 2924544
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:40.338986+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113442816 unmapped: 42516480 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:41.339156+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113442816 unmapped: 42516480 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:42.339346+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113442816 unmapped: 42516480 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 122 heartbeat osd_stat(store_statfs(0x1b9dbe000/0x0/0x1bfc00000, data 0x180d838/0x18cd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 122 handle_osd_map epochs [123,123], i have 122, src has [1,123]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 122 handle_osd_map epochs [123,123], i have 123, src has [1,123]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:43.339468+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113557504 unmapped: 42401792 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:44.339644+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113557504 unmapped: 42401792 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1040269 data_alloc: 285212672 data_used: 2924544
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 123 heartbeat osd_stat(store_statfs(0x1b9dbc000/0x0/0x1bfc00000, data 0x180fa86/0x18d1000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:45.340082+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113557504 unmapped: 42401792 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:46.340265+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113557504 unmapped: 42401792 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:47.340441+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113557504 unmapped: 42401792 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0ea800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 123 ms_handle_reset con 0x55f8ea0ea800 session 0x55f8e9c685a0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:48.340616+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113303552 unmapped: 42655744 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0d2000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.462577820s of 10.544756889s, submitted: 43
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 123 ms_handle_reset con 0x55f8ea0d2000 session 0x55f8e9c683c0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec014800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 123 heartbeat osd_stat(store_statfs(0x1b9dbc000/0x0/0x1bfc00000, data 0x180fb28/0x18d2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [1])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:49.340754+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113360896 unmapped: 42598400 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 123 ms_handle_reset con 0x55f8ec014800 session 0x55f8eb1401e0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1041203 data_alloc: 285212672 data_used: 2924544
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:50.340942+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113369088 unmapped: 42590208 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:51.341052+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113369088 unmapped: 42590208 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:52.341207+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113369088 unmapped: 42590208 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:53.341411+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113369088 unmapped: 42590208 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:54.341575+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113369088 unmapped: 42590208 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1041203 data_alloc: 285212672 data_used: 2924544
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 123 heartbeat osd_stat(store_statfs(0x1b9dbc000/0x0/0x1bfc00000, data 0x180faa9/0x18d2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:55.341777+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113385472 unmapped: 42573824 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 123 handle_osd_map epochs [124,124], i have 123, src has [1,124]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:56.341944+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec9cb400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113393664 unmapped: 42565632 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 124 heartbeat osd_stat(store_statfs(0x1b9db7000/0x0/0x1bfc00000, data 0x1811e11/0x18d6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 124 ms_handle_reset con 0x55f8ec9cb400 session 0x55f8eb7805a0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:57.342101+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113393664 unmapped: 42565632 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 124 handle_osd_map epochs [125,126], i have 124, src has [1,126]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 126 handle_osd_map epochs [125,126], i have 126, src has [1,126]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:58.342301+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 126 handle_osd_map epochs [125,126], i have 126, src has [1,126]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113467392 unmapped: 42491904 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 126 handle_osd_map epochs [126,127], i have 126, src has [1,127]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 127 handle_osd_map epochs [127,127], i have 127, src has [1,127]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:53:59.342459+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113491968 unmapped: 42467328 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1065872 data_alloc: 285212672 data_used: 2936832
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 127 heartbeat osd_stat(store_statfs(0x1b9da6000/0x0/0x1bfc00000, data 0x18195e5/0x18e6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8eda54400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _renew_subs
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 127 handle_osd_map epochs [128,128], i have 127, src has [1,128]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.757010460s of 10.989528656s, submitted: 58
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 128 ms_handle_reset con 0x55f8eda54400 session 0x55f8eb780000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:00.342662+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0d2000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113491968 unmapped: 42467328 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 128 handle_osd_map epochs [129,129], i have 129, src has [1,129]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 129 ms_handle_reset con 0x55f8ea0d2000 session 0x55f8eb243c20
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:01.342870+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 129 handle_osd_map epochs [129,129], i have 129, src has [1,129]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113451008 unmapped: 42508288 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 130 handle_osd_map epochs [130,130], i have 130, src has [1,130]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0ea800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:02.343029+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113467392 unmapped: 42491904 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 130 heartbeat osd_stat(store_statfs(0x1b9d95000/0x0/0x1bfc00000, data 0x1820ee3/0x18f7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 130 handle_osd_map epochs [130,131], i have 130, src has [1,131]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 131 handle_osd_map epochs [131,131], i have 131, src has [1,131]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec014800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:03.343201+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 131 ms_handle_reset con 0x55f8ea0ea800 session 0x55f8eb2423c0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113508352 unmapped: 42450944 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 131 heartbeat osd_stat(store_statfs(0x1b9d90000/0x0/0x1bfc00000, data 0x182329f/0x18fb000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec9cb400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _renew_subs
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 132 handle_osd_map epochs [132,132], i have 132, src has [1,132]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:04.343349+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113606656 unmapped: 42352640 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 132 handle_osd_map epochs [132,132], i have 132, src has [1,132]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1086408 data_alloc: 285212672 data_used: 2949120
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 132 ms_handle_reset con 0x55f8ec9cb400 session 0x55f8e9364000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 132 ms_handle_reset con 0x55f8ec014800 session 0x55f8eb52e960
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 132 heartbeat osd_stat(store_statfs(0x1b9d92000/0x0/0x1bfc00000, data 0x18227fc/0x18f8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ecb41800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 133 handle_osd_map epochs [133,133], i have 133, src has [1,133]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:05.343511+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 133 ms_handle_reset con 0x55f8ecb41800 session 0x55f8e9364780
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113713152 unmapped: 42246144 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0d2000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 133 handle_osd_map epochs [133,133], i have 133, src has [1,133]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:06.343676+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113713152 unmapped: 42246144 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 134 ms_handle_reset con 0x55f8ea0d2000 session 0x55f8eb550780
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0ea800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:07.343835+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec014800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113762304 unmapped: 42196992 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 135 ms_handle_reset con 0x55f8ea0ea800 session 0x55f8eb5505a0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec9cb400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 136 handle_osd_map epochs [136,136], i have 136, src has [1,136]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 136 ms_handle_reset con 0x55f8ec014800 session 0x55f8eb52e1e0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:08.343970+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113803264 unmapped: 42156032 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 136 handle_osd_map epochs [136,137], i have 136, src has [1,137]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 137 handle_osd_map epochs [137,137], i have 137, src has [1,137]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 137 ms_handle_reset con 0x55f8ec9cb400 session 0x55f8eb1be960
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ee53c000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:09.347321+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113860608 unmapped: 42098688 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 137 ms_handle_reset con 0x55f8ee53c000 session 0x55f8eb5325a0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1098176 data_alloc: 285212672 data_used: 2949120
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:10.347458+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113860608 unmapped: 42098688 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 137 heartbeat osd_stat(store_statfs(0x1b9d81000/0x0/0x1bfc00000, data 0x182ecfa/0x1909000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 137 heartbeat osd_stat(store_statfs(0x1b9d81000/0x0/0x1bfc00000, data 0x182ecfa/0x1909000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:11.347609+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113860608 unmapped: 42098688 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:12.347840+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113860608 unmapped: 42098688 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 137 handle_osd_map epochs [137,138], i have 137, src has [1,138]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.659995079s of 12.635141373s, submitted: 302
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ee53c000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:13.347997+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 138 ms_handle_reset con 0x55f8ee53c000 session 0x55f8ea218960
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 138 heartbeat osd_stat(store_statfs(0x1b9d80000/0x0/0x1bfc00000, data 0x1830f68/0x190d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113893376 unmapped: 42065920 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:14.348930+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113893376 unmapped: 42065920 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1101362 data_alloc: 285212672 data_used: 2949120
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:15.349137+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113893376 unmapped: 42065920 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0d2000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 139 ms_handle_reset con 0x55f8ea0d2000 session 0x55f8ec744f00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:16.349305+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 139 heartbeat osd_stat(store_statfs(0x1b9d7b000/0x0/0x1bfc00000, data 0x1833332/0x1912000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113950720 unmapped: 42008576 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0ea800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec014800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 139 ms_handle_reset con 0x55f8ec014800 session 0x55f8eb1d10e0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 139 ms_handle_reset con 0x55f8ea0ea800 session 0x55f8ea246960
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:17.349675+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113958912 unmapped: 42000384 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:18.349838+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113958912 unmapped: 42000384 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec9cb400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 139 ms_handle_reset con 0x55f8ec9cb400 session 0x55f8ec7452c0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec9cb400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 139 ms_handle_reset con 0x55f8ec9cb400 session 0x55f8ec7441e0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets getting new tickets!
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:19.350085+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _finish_auth 0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:19.351998+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113999872 unmapped: 41959424 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1106738 data_alloc: 285212672 data_used: 2949120
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:20.350251+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 113999872 unmapped: 41959424 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0d2000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 139 ms_handle_reset con 0x55f8ea0d2000 session 0x55f8eb63d4a0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0ea800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:21.350473+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 114008064 unmapped: 41951232 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 139 handle_osd_map epochs [139,140], i have 139, src has [1,140]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 140 handle_osd_map epochs [140,140], i have 140, src has [1,140]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 140 ms_handle_reset con 0x55f8ea0ea800 session 0x55f8e9c69860
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:22.350626+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 140 heartbeat osd_stat(store_statfs(0x1b9d78000/0x0/0x1bfc00000, data 0x183568c/0x1915000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 114032640 unmapped: 41926656 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.913292885s of 10.211082458s, submitted: 75
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec014800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 140 ms_handle_reset con 0x55f8ec014800 session 0x55f8eb141c20
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:23.350783+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 114098176 unmapped: 41861120 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:24.350959+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 114098176 unmapped: 41861120 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc handle_mgr_map Got map version 47
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/1895764224,v1:172.18.0.106:6811/1895764224]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1111218 data_alloc: 285212672 data_used: 2961408
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:25.351136+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 114319360 unmapped: 41639936 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ee53c000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 140 ms_handle_reset con 0x55f8ee53c000 session 0x55f8eb2bef00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:26.351284+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 115433472 unmapped: 40525824 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:27.351489+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0d2000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 140 ms_handle_reset con 0x55f8ea0d2000 session 0x55f8eb202b40
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 140 heartbeat osd_stat(store_statfs(0x1b9d6c000/0x0/0x1bfc00000, data 0x1840ce5/0x1922000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 115433472 unmapped: 40525824 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 140 handle_osd_map epochs [140,141], i have 140, src has [1,141]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:28.351813+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 115482624 unmapped: 40476672 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc handle_mgr_map Got map version 48
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/1895764224,v1:172.18.0.106:6811/1895764224]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:29.351992+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 115400704 unmapped: 40558592 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1118296 data_alloc: 285212672 data_used: 2973696
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 141 heartbeat osd_stat(store_statfs(0x1b9d66000/0x0/0x1bfc00000, data 0x1842f95/0x1927000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:30.352151+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 114655232 unmapped: 41304064 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:31.352314+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 114655232 unmapped: 41304064 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:32.352451+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 114696192 unmapped: 41263104 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:33.352587+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 114696192 unmapped: 41263104 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0ea800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.030183792s of 11.247812271s, submitted: 62
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:34.352752+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 114696192 unmapped: 41263104 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1118917 data_alloc: 285212672 data_used: 2977792
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 141 handle_osd_map epochs [141,142], i have 141, src has [1,142]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 142 heartbeat osd_stat(store_statfs(0x1b9d64000/0x0/0x1bfc00000, data 0x1845f92/0x192a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 142 ms_handle_reset con 0x55f8ea0ea800 session 0x55f8eb2030e0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:35.352975+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 114704384 unmapped: 41254912 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec9cb400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 142 handle_osd_map epochs [142,143], i have 142, src has [1,143]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:36.353166+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 143 ms_handle_reset con 0x55f8ec9cb400 session 0x55f8eb2bf4a0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 114720768 unmapped: 41238528 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 143 heartbeat osd_stat(store_statfs(0x1b9d5f000/0x0/0x1bfc00000, data 0x18482fa/0x192e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ee53c400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 143 ms_handle_reset con 0x55f8ee53c400 session 0x55f8e95412c0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:37.353319+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ee53c800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 114720768 unmapped: 41238528 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _renew_subs
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 143 handle_osd_map epochs [144,144], i have 143, src has [1,144]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 144 ms_handle_reset con 0x55f8ee53c800 session 0x55f8eb7c9c20
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:38.353471+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ee53c800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 144 ms_handle_reset con 0x55f8ee53c800 session 0x55f8eb7c8b40
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0d2000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 114753536 unmapped: 41205760 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _renew_subs
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 145 ms_handle_reset con 0x55f8ea0d2000 session 0x55f8eb7c83c0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:39.353657+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 114761728 unmapped: 41197568 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1133178 data_alloc: 285212672 data_used: 3002368
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:40.354033+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 145 heartbeat osd_stat(store_statfs(0x1b9d52000/0x0/0x1bfc00000, data 0x184edda/0x193a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 114761728 unmapped: 41197568 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0ea800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 145 ms_handle_reset con 0x55f8ea0ea800 session 0x55f8eb5330e0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec9cb400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 145 ms_handle_reset con 0x55f8ec9cb400 session 0x55f8ebd82000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:41.354359+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ee53c400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 114786304 unmapped: 41172992 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 145 ms_handle_reset con 0x55f8ee53c400 session 0x55f8ee602b40
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 145 heartbeat osd_stat(store_statfs(0x1b9d56000/0x0/0x1bfc00000, data 0x184f3dc/0x1938000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:42.354497+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 114802688 unmapped: 41156608 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 145 handle_osd_map epochs [145,146], i have 145, src has [1,146]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 146 heartbeat osd_stat(store_statfs(0x1b9d51000/0x0/0x1bfc00000, data 0x185162a/0x193c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:43.354653+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 114810880 unmapped: 41148416 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 146 heartbeat osd_stat(store_statfs(0x1b9d51000/0x0/0x1bfc00000, data 0x185162a/0x193c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:44.354868+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 114810880 unmapped: 41148416 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ee53c400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.357652664s of 10.696464539s, submitted: 104
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1135472 data_alloc: 285212672 data_used: 3010560
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 146 ms_handle_reset con 0x55f8ee53c400 session 0x55f8ee6030e0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:45.355122+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 114810880 unmapped: 41148416 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 146 heartbeat osd_stat(store_statfs(0x1b9d50000/0x0/0x1bfc00000, data 0x185163b/0x193d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 146 handle_osd_map epochs [146,147], i have 146, src has [1,147]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:46.355297+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 147 handle_osd_map epochs [147,147], i have 147, src has [1,147]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 114810880 unmapped: 41148416 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:47.355427+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0ea800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 147 ms_handle_reset con 0x55f8ea0ea800 session 0x55f8ee6032c0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 114810880 unmapped: 41148416 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:48.355549+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 147 handle_osd_map epochs [147,148], i have 147, src has [1,148]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 114819072 unmapped: 41140224 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec9cb400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:49.355641+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 148 handle_osd_map epochs [147,148], i have 148, src has [1,148]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 148 ms_handle_reset con 0x55f8ec9cb400 session 0x55f8ee6034a0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 114819072 unmapped: 41140224 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1139700 data_alloc: 285212672 data_used: 3010560
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:50.355814+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 114819072 unmapped: 41140224 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 148 heartbeat osd_stat(store_statfs(0x1b9d4a000/0x0/0x1bfc00000, data 0x1855d4e/0x1944000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:51.355954+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 114819072 unmapped: 41140224 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:52.356116+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 114819072 unmapped: 41140224 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 148 handle_osd_map epochs [148,149], i have 148, src has [1,149]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:53.356270+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 114827264 unmapped: 41132032 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:54.356441+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 114827264 unmapped: 41132032 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1143902 data_alloc: 285212672 data_used: 3022848
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:55.356613+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 114827264 unmapped: 41132032 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 149 heartbeat osd_stat(store_statfs(0x1b9d45000/0x0/0x1bfc00000, data 0x1857f9c/0x1948000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.313089371s of 11.439864159s, submitted: 38
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:56.356761+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 114835456 unmapped: 41123840 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 149 heartbeat osd_stat(store_statfs(0x1b9d45000/0x0/0x1bfc00000, data 0x1857f9c/0x1948000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:57.356946+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 149 heartbeat osd_stat(store_statfs(0x1b9d41000/0x0/0x1bfc00000, data 0x185d2ca/0x194d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 114835456 unmapped: 41123840 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 149 heartbeat osd_stat(store_statfs(0x1b9d41000/0x0/0x1bfc00000, data 0x185d2ca/0x194d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:58.357094+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 114835456 unmapped: 41123840 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:54:59.357333+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 114843648 unmapped: 41115648 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1143698 data_alloc: 285212672 data_used: 3022848
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:00.358333+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 114843648 unmapped: 41115648 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:01.358814+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 114843648 unmapped: 41115648 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 149 heartbeat osd_stat(store_statfs(0x1b9d3d000/0x0/0x1bfc00000, data 0x1860a58/0x1951000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:02.358978+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 114843648 unmapped: 41115648 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:03.359198+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 114843648 unmapped: 41115648 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:04.359412+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 149 heartbeat osd_stat(store_statfs(0x1b9d3c000/0x0/0x1bfc00000, data 0x1862099/0x1952000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 114843648 unmapped: 41115648 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1144562 data_alloc: 285212672 data_used: 3022848
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:05.359622+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ee53c800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 114843648 unmapped: 41115648 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 149 ms_handle_reset con 0x55f8ee53c800 session 0x55f8ee603860
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:06.359907+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.312617302s of 10.401549339s, submitted: 17
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea76a400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 114909184 unmapped: 41050112 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea73c000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:07.360051+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 149 ms_handle_reset con 0x55f8ea73c000 session 0x55f8ee602d20
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 114909184 unmapped: 41050112 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:08.360191+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 149 heartbeat osd_stat(store_statfs(0x1b9d31000/0x0/0x1bfc00000, data 0x186c4d4/0x195d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 114909184 unmapped: 41050112 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:09.360337+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 114909184 unmapped: 41050112 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1152340 data_alloc: 285212672 data_used: 3022848
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:10.360507+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 114941952 unmapped: 41017344 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:11.360652+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 114941952 unmapped: 41017344 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:12.360813+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 149 handle_osd_map epochs [149,150], i have 149, src has [1,150]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 150 heartbeat osd_stat(store_statfs(0x1b9d1d000/0x0/0x1bfc00000, data 0x187ec1a/0x1970000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 114966528 unmapped: 40992768 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 150 heartbeat osd_stat(store_statfs(0x1b9d18000/0x0/0x1bfc00000, data 0x1880feb/0x1974000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:13.360999+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 150 heartbeat osd_stat(store_statfs(0x1b9d18000/0x0/0x1bfc00000, data 0x1880feb/0x1974000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 150 handle_osd_map epochs [151,151], i have 150, src has [1,151]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 150 handle_osd_map epochs [151,151], i have 151, src has [1,151]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 150 handle_osd_map epochs [151,151], i have 151, src has [1,151]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 115007488 unmapped: 40951808 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:14.361197+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 151 handle_osd_map epochs [151,151], i have 151, src has [1,151]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 115105792 unmapped: 40853504 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1157062 data_alloc: 285212672 data_used: 3047424
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:15.361510+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 115105792 unmapped: 40853504 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:16.361650+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 115105792 unmapped: 40853504 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.896094322s of 10.240992546s, submitted: 117
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:17.361799+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 115105792 unmapped: 40853504 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 151 heartbeat osd_stat(store_statfs(0x1b9d0e000/0x0/0x1bfc00000, data 0x188c5de/0x1980000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 151 handle_osd_map epochs [152,152], i have 151, src has [1,152]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 151 handle_osd_map epochs [152,152], i have 152, src has [1,152]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:18.361984+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 115113984 unmapped: 40845312 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:19.362118+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 115113984 unmapped: 40845312 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1159296 data_alloc: 285212672 data_used: 3059712
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:20.362275+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0ea800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 152 ms_handle_reset con 0x55f8ea0ea800 session 0x55f8eb1bf2c0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 115113984 unmapped: 40845312 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:21.362435+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 115113984 unmapped: 40845312 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:22.362617+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea73c000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 115122176 unmapped: 40837120 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 152 ms_handle_reset con 0x55f8ea73c000 session 0x55f8ed34d0e0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec9cb400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 152 heartbeat osd_stat(store_statfs(0x1b9cfb000/0x0/0x1bfc00000, data 0x189b146/0x1993000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:23.362785+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 152 ms_handle_reset con 0x55f8ec9cb400 session 0x55f8ed34cf00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ee53c400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 152 ms_handle_reset con 0x55f8ee53c400 session 0x55f8ec745a40
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 115630080 unmapped: 40329216 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:24.362965+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ee53c800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 152 heartbeat osd_stat(store_statfs(0x1b916b000/0x0/0x1bfc00000, data 0x242b17f/0x2523000, compress 0x0/0x0/0x0, omap 0x649, meta 0x456f9b7), peers [0,2,3,4,5] op hist [0,0,0,0,1])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 115580928 unmapped: 40378368 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1257259 data_alloc: 285212672 data_used: 3063808
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 152 ms_handle_reset con 0x55f8ee53c800 session 0x55f8ebd82d20
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:25.363140+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 115613696 unmapped: 40345600 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:26.363296+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 116662272 unmapped: 39297024 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.662670135s of 10.065321922s, submitted: 107
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:27.363469+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 116670464 unmapped: 39288832 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:28.363620+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 117727232 unmapped: 38232064 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:29.363783+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 117727232 unmapped: 38232064 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc handle_mgr_map Got map version 49
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/1895764224,v1:172.18.0.106:6811/1895764224]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1176881 data_alloc: 285212672 data_used: 3059712
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 152 heartbeat osd_stat(store_statfs(0x1b98dd000/0x0/0x1bfc00000, data 0x18b9800/0x19b1000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:30.363960+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 117874688 unmapped: 38084608 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:31.365115+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 117874688 unmapped: 38084608 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0ea800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:32.365241+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea73c000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 152 ms_handle_reset con 0x55f8ea73c000 session 0x55f8eb63d2c0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 117882880 unmapped: 38076416 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 152 heartbeat osd_stat(store_statfs(0x1b98c3000/0x0/0x1bfc00000, data 0x18d1525/0x19cb000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:33.365423+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 117940224 unmapped: 38019072 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:34.365621+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec9cb400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 152 ms_handle_reset con 0x55f8ec9cb400 session 0x55f8eb2be960
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 117940224 unmapped: 38019072 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1178721 data_alloc: 285212672 data_used: 3059712
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:35.366022+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ee53c400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 152 ms_handle_reset con 0x55f8ee53c400 session 0x55f8eb2bf4a0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 117948416 unmapped: 38010880 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:36.366154+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebfc5c00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 117948416 unmapped: 38010880 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.810359955s of 10.000989914s, submitted: 45
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 152 ms_handle_reset con 0x55f8ebfc5c00 session 0x55f8e9c69860
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:37.366406+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 152 heartbeat osd_stat(store_statfs(0x1b98aa000/0x0/0x1bfc00000, data 0x18e869a/0x19e4000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 117964800 unmapped: 37994496 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:38.366592+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 117964800 unmapped: 37994496 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:39.366723+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 117964800 unmapped: 37994496 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1183668 data_alloc: 285212672 data_used: 3059712
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:40.366858+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 117972992 unmapped: 37986304 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:41.367022+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 117972992 unmapped: 37986304 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:42.367166+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 117981184 unmapped: 37978112 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:43.367329+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 152 heartbeat osd_stat(store_statfs(0x1b989e000/0x0/0x1bfc00000, data 0x18f5f67/0x19f0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 117981184 unmapped: 37978112 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:44.367515+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 117981184 unmapped: 37978112 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1185864 data_alloc: 285212672 data_used: 3059712
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:45.367683+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebfc5800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 152 ms_handle_reset con 0x55f8ebfc5800 session 0x55f8e9c69680
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 117989376 unmapped: 37969920 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:46.367838+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 118054912 unmapped: 37904384 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.865970612s of 10.000689507s, submitted: 34
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:47.367982+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebfc5800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 152 ms_handle_reset con 0x55f8ebfc5800 session 0x55f8ec744f00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea73c000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 152 ms_handle_reset con 0x55f8ea73c000 session 0x55f8ec7441e0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 118112256 unmapped: 37847040 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:48.368160+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebfc5c00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 152 ms_handle_reset con 0x55f8ebfc5c00 session 0x55f8ed3b2000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 118013952 unmapped: 37945344 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:49.368379+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 152 heartbeat osd_stat(store_statfs(0x1b9890000/0x0/0x1bfc00000, data 0x190489e/0x19fe000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 118013952 unmapped: 37945344 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec9cb400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1187098 data_alloc: 285212672 data_used: 3059712
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:50.368524+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 152 ms_handle_reset con 0x55f8ec9cb400 session 0x55f8ed3b23c0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 118013952 unmapped: 37945344 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:51.368637+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 118013952 unmapped: 37945344 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:52.368828+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 152 heartbeat osd_stat(store_statfs(0x1b9882000/0x0/0x1bfc00000, data 0x1912da7/0x1a0c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 118013952 unmapped: 37945344 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:53.368954+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 118013952 unmapped: 37945344 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:54.369101+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 118013952 unmapped: 37945344 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1188306 data_alloc: 285212672 data_used: 3059712
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:55.369299+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 152 heartbeat osd_stat(store_statfs(0x1b9878000/0x0/0x1bfc00000, data 0x191cedf/0x1a16000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 118013952 unmapped: 37945344 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:56.369461+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.776439667s of 10.000485420s, submitted: 52
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 118095872 unmapped: 37863424 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:57.369597+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 118095872 unmapped: 37863424 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:58.369733+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 152 heartbeat osd_stat(store_statfs(0x1b986d000/0x0/0x1bfc00000, data 0x19270eb/0x1a21000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 118095872 unmapped: 37863424 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:55:59.369859+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 118095872 unmapped: 37863424 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ee53c400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1192184 data_alloc: 285212672 data_used: 3059712
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:00.370055+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 152 ms_handle_reset con 0x55f8ee53c400 session 0x55f8ed3b2780
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 118079488 unmapped: 37879808 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:01.370245+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 152 heartbeat osd_stat(store_statfs(0x1b9864000/0x0/0x1bfc00000, data 0x192fa03/0x1a2a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 118079488 unmapped: 37879808 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:02.370384+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ee53c400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea73c000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 119136256 unmapped: 36823040 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:03.370549+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 152 handle_osd_map epochs [152,153], i have 152, src has [1,153]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebfc5c00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 153 ms_handle_reset con 0x55f8ebfc5c00 session 0x55f8ec745e00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 119152640 unmapped: 36806656 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:04.370684+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 153 handle_osd_map epochs [153,154], i have 153, src has [1,154]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.12] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.8] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 154 handle_osd_map epochs [154,154], i have 154, src has [1,154]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 154 ms_handle_reset con 0x55f8ea73c000 session 0x55f8ecb3cd20
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 119169024 unmapped: 36790272 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1206298 data_alloc: 285212672 data_used: 3072000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec9cb400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 154 heartbeat osd_stat(store_statfs(0x1b984f000/0x0/0x1bfc00000, data 0x193e702/0x1a3e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:05.370926+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 154 handle_osd_map epochs [154,155], i have 154, src has [1,155]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _renew_subs
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 154 handle_osd_map epochs [155,155], i have 155, src has [1,155]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebfc5400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 155 ms_handle_reset con 0x55f8ec9cb400 session 0x55f8eb543a40
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 119209984 unmapped: 36749312 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 155 ms_handle_reset con 0x55f8ebfc5400 session 0x55f8ed33f2c0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:06.371014+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebfc5000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 155 ms_handle_reset con 0x55f8ebfc5000 session 0x55f8eb543c20
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea73c000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _renew_subs
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 155 handle_osd_map epochs [156,156], i have 155, src has [1,156]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 156 handle_osd_map epochs [156,156], i have 156, src has [1,156]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.675148964s of 10.001482964s, submitted: 90
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 156 ms_handle_reset con 0x55f8ea73c000 session 0x55f8eb63cd20
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 119259136 unmapped: 36700160 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 156 ms_handle_reset con 0x55f8ee53c400 session 0x55f8ed3b2960
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:07.371114+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 156 heartbeat osd_stat(store_statfs(0x1b9836000/0x0/0x1bfc00000, data 0x194f015/0x1a54000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebfc5000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 156 ms_handle_reset con 0x55f8ebfc5000 session 0x55f8ed3b34a0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebfc5400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 119283712 unmapped: 36675584 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 156 ms_handle_reset con 0x55f8ebfc5400 session 0x55f8ed3b3a40
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:08.371238+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 156 handle_osd_map epochs [157,157], i have 156, src has [1,157]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.12] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.8] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 119324672 unmapped: 36634624 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 157 heartbeat osd_stat(store_statfs(0x1b9839000/0x0/0x1bfc00000, data 0x1951255/0x1a54000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [0,0,1])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:09.371373+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebfc5c00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _renew_subs
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 157 handle_osd_map epochs [158,158], i have 157, src has [1,158]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 158 ms_handle_reset con 0x55f8ebfc5c00 session 0x55f8ec745e00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea73c000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 119357440 unmapped: 36601856 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1218333 data_alloc: 285212672 data_used: 3084288
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:10.371520+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 158 ms_handle_reset con 0x55f8ea73c000 session 0x55f8ed3b23c0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 119373824 unmapped: 36585472 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:11.371675+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 119398400 unmapped: 36560896 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:12.371829+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 158 handle_osd_map epochs [159,159], i have 158, src has [1,159]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.8] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.12] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 119431168 unmapped: 36528128 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:13.371960+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 119431168 unmapped: 36528128 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:14.372112+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 159 handle_osd_map epochs [159,159], i have 159, src has [1,159]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 159 heartbeat osd_stat(store_statfs(0x1b9822000/0x0/0x1bfc00000, data 0x196567d/0x1a6b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 119431168 unmapped: 36528128 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1221948 data_alloc: 285212672 data_used: 3104768
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:15.372323+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebfc5000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 159 ms_handle_reset con 0x55f8ebfc5000 session 0x55f8ecb3dc20
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebfc5400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 119431168 unmapped: 36528128 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 159 ms_handle_reset con 0x55f8ebfc5400 session 0x55f8ec7454a0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:16.372475+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ee53c400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 159 ms_handle_reset con 0x55f8ee53c400 session 0x55f8ee6023c0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec9cb400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.357500076s of 10.005616188s, submitted: 200
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 119447552 unmapped: 36511744 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 159 ms_handle_reset con 0x55f8ec9cb400 session 0x55f8ee602f00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:17.372618+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 159 heartbeat osd_stat(store_statfs(0x1b9813000/0x0/0x1bfc00000, data 0x1971ca3/0x1a7b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 159 handle_osd_map epochs [159,160], i have 159, src has [1,160]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec9cb400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 119447552 unmapped: 36511744 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 160 handle_osd_map epochs [160,160], i have 160, src has [1,160]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 160 ms_handle_reset con 0x55f8ec9cb400 session 0x55f8eb55eb40
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea73c000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 160 handle_osd_map epochs [160,160], i have 160, src has [1,160]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:18.372751+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 160 ms_handle_reset con 0x55f8ea73c000 session 0x55f8eb1bf2c0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebfc5000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebfc5400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 160 ms_handle_reset con 0x55f8ebfc5400 session 0x55f8ed34cd20
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ee53c400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 160 ms_handle_reset con 0x55f8ebfc5000 session 0x55f8eb7c8d20
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 119595008 unmapped: 36364288 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebfc4c00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 160 heartbeat osd_stat(store_statfs(0x1b8c98000/0x0/0x1bfc00000, data 0x24e9df7/0x25f3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [0,0,0,1])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 160 ms_handle_reset con 0x55f8ebfc4c00 session 0x55f8ebd82000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:19.372901+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 160 ms_handle_reset con 0x55f8ee53c400 session 0x55f8ebd82d20
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea73c000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebfc5000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 160 ms_handle_reset con 0x55f8ebfc5000 session 0x55f8eb52ef00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 160 heartbeat osd_stat(store_statfs(0x1b8c98000/0x0/0x1bfc00000, data 0x24e9df7/0x25f3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 160 ms_handle_reset con 0x55f8ea73c000 session 0x55f8eb5430e0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebfc5400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 160 ms_handle_reset con 0x55f8ebfc5400 session 0x55f8eb242000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 120004608 unmapped: 35954688 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 160 heartbeat osd_stat(store_statfs(0x1b83d3000/0x0/0x1bfc00000, data 0x2daf535/0x2ebb000, compress 0x0/0x0/0x0, omap 0x649, meta 0x496f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1406105 data_alloc: 285212672 data_used: 3117056
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:20.373089+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec9cb400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 160 ms_handle_reset con 0x55f8ec9cb400 session 0x55f8eb203a40
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec9cb400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea73c000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 160 ms_handle_reset con 0x55f8ea73c000 session 0x55f8eb52e960
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 120078336 unmapped: 35880960 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 160 ms_handle_reset con 0x55f8ec9cb400 session 0x55f8eb30f0e0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebfc5000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:21.373226+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _renew_subs
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 160 handle_osd_map epochs [161,161], i have 160, src has [1,161]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 161 handle_osd_map epochs [161,161], i have 161, src has [1,161]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 161 ms_handle_reset con 0x55f8ebfc5000 session 0x55f8ec7452c0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 121200640 unmapped: 34758656 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebfc5400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 161 ms_handle_reset con 0x55f8ebfc5400 session 0x55f8ecb3d860
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:22.373353+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ee53c400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 121208832 unmapped: 34750464 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 161 ms_handle_reset con 0x55f8ee53c400 session 0x55f8eb55f2c0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:23.373503+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ee53c400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 161 ms_handle_reset con 0x55f8ee53c400 session 0x55f8e9c69680
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea73c000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 161 heartbeat osd_stat(store_statfs(0x1b7dd9000/0x0/0x1bfc00000, data 0x2209871/0x2315000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 161 ms_handle_reset con 0x55f8ea73c000 session 0x55f8ec747e00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 121282560 unmapped: 34676736 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebfc5000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:24.373660+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebfc5400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 161 ms_handle_reset con 0x55f8ebfc5400 session 0x55f8eb543680
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec9cb400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 161 ms_handle_reset con 0x55f8ec9cb400 session 0x55f8eb1d0d20
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 161 handle_osd_map epochs [161,162], i have 161, src has [1,162]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 162 ms_handle_reset con 0x55f8ebfc5000 session 0x55f8eb780960
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 121315328 unmapped: 34643968 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1329659 data_alloc: 285212672 data_used: 3137536
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:25.373915+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 162 heartbeat osd_stat(store_statfs(0x1b7dc6000/0x0/0x1bfc00000, data 0x221916a/0x2327000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea73c000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 162 ms_handle_reset con 0x55f8ea73c000 session 0x55f8eb55fe00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebfc5400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _renew_subs
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 162 handle_osd_map epochs [163,163], i have 162, src has [1,163]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 163 handle_osd_map epochs [163,163], i have 163, src has [1,163]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 163 handle_osd_map epochs [163,163], i have 163, src has [1,163]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec9cb400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 163 ms_handle_reset con 0x55f8ebfc5400 session 0x55f8eb55eb40
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 163 ms_handle_reset con 0x55f8ec9cb400 session 0x55f8e95405a0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ee53c400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 121167872 unmapped: 34791424 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:26.374039+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebfc4800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 163 ms_handle_reset con 0x55f8ee53c400 session 0x55f8eb55e960
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 163 ms_handle_reset con 0x55f8ebfc4800 session 0x55f8eb52e5a0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea73c000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebfc4800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.936347961s of 10.002873421s, submitted: 286
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _renew_subs
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 163 handle_osd_map epochs [164,164], i have 163, src has [1,164]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 163 handle_osd_map epochs [164,164], i have 164, src has [1,164]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 164 ms_handle_reset con 0x55f8ea73c000 session 0x55f8eb63cd20
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 164 ms_handle_reset con 0x55f8ebfc4800 session 0x55f8eb1c1680
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 121233408 unmapped: 34725888 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:27.374757+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 164 handle_osd_map epochs [165,165], i have 164, src has [1,165]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebfc5400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 121307136 unmapped: 34652160 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec9cb400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:28.374935+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ee53c400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 165 ms_handle_reset con 0x55f8ec9cb400 session 0x55f8eb140780
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebfc4400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 165 ms_handle_reset con 0x55f8ee53c400 session 0x55f8ec744000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 165 ms_handle_reset con 0x55f8ebfc5400 session 0x55f8eb141680
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 165 ms_handle_reset con 0x55f8ebfc4400 session 0x55f8ed33e000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 121315328 unmapped: 34643968 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:29.375098+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebfc5400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 165 ms_handle_reset con 0x55f8ebfc5400 session 0x55f8eb543a40
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea73c000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 165 ms_handle_reset con 0x55f8ea73c000 session 0x55f8eb63d0e0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b7da0000/0x0/0x1bfc00000, data 0x223cbbf/0x234d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 121438208 unmapped: 34521088 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1343209 data_alloc: 285212672 data_used: 3141632
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:30.375269+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebfc4800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 165 ms_handle_reset con 0x55f8ebfc4800 session 0x55f8eb140f00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b7da0000/0x0/0x1bfc00000, data 0x223cbbf/0x234d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec9cb400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 165 ms_handle_reset con 0x55f8ec9cb400 session 0x55f8e9364000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 121438208 unmapped: 34521088 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:31.375522+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 120922112 unmapped: 35037184 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 165 heartbeat osd_stat(store_statfs(0x1b7d8c000/0x0/0x1bfc00000, data 0x2251d73/0x2362000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:32.375675+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _renew_subs
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 165 handle_osd_map epochs [166,166], i have 165, src has [1,166]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 120938496 unmapped: 35020800 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:33.375849+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 166 heartbeat osd_stat(store_statfs(0x1b7d86000/0x0/0x1bfc00000, data 0x22571b9/0x2368000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _renew_subs
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 166 handle_osd_map epochs [167,167], i have 166, src has [1,167]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 120979456 unmapped: 34979840 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:34.375985+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _renew_subs
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 120995840 unmapped: 34963456 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1354257 data_alloc: 285212672 data_used: 3153920
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:35.376365+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec9cb400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 167 ms_handle_reset con 0x55f8ec9cb400 session 0x55f8eb5510e0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 167 handle_osd_map epochs [167,167], i have 167, src has [1,167]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 121036800 unmapped: 34922496 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:36.376550+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.421113014s of 10.001377106s, submitted: 184
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 167 handle_osd_map epochs [168,168], i have 167, src has [1,168]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 167 handle_osd_map epochs [167,168], i have 168, src has [1,168]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea73c000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 168 ms_handle_reset con 0x55f8ea73c000 session 0x55f8eb550780
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 121036800 unmapped: 34922496 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:37.376737+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 168 heartbeat osd_stat(store_statfs(0x1b7d44000/0x0/0x1bfc00000, data 0x22912f6/0x23a9000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 168 handle_osd_map epochs [169,169], i have 168, src has [1,169]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 169 handle_osd_map epochs [169,169], i have 169, src has [1,169]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 121618432 unmapped: 34340864 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebfc4800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:38.376912+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 169 ms_handle_reset con 0x55f8ebfc4800 session 0x55f8ecb3c000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 121618432 unmapped: 34340864 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:39.377287+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebfc5400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 169 ms_handle_reset con 0x55f8ebfc5400 session 0x55f8ecb3d860
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 121618432 unmapped: 34340864 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1376645 data_alloc: 285212672 data_used: 3166208
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:40.377463+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ee53c400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 169 ms_handle_reset con 0x55f8ee53c400 session 0x55f8ed33f2c0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 121659392 unmapped: 34299904 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:41.378215+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ee53c400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 169 handle_osd_map epochs [170,170], i have 169, src has [1,170]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 169 heartbeat osd_stat(store_statfs(0x1b7cff000/0x0/0x1bfc00000, data 0x22d1270/0x23ee000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,2,3,4,5] op hist [1])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 169 handle_osd_map epochs [170,170], i have 170, src has [1,170]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 169 handle_osd_map epochs [170,170], i have 170, src has [1,170]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 121659392 unmapped: 34299904 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:42.378371+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 170 ms_handle_reset con 0x55f8ee53c400 session 0x55f8ed33f0e0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 170 handle_osd_map epochs [170,171], i have 170, src has [1,171]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 171 handle_osd_map epochs [171,171], i have 171, src has [1,171]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 171 handle_osd_map epochs [171,171], i have 171, src has [1,171]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea73c000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 171 ms_handle_reset con 0x55f8ea73c000 session 0x55f8ed33f680
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebfc4800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 171 ms_handle_reset con 0x55f8ebfc4800 session 0x55f8ed33e780
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 122953728 unmapped: 33005568 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:43.378592+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebfc5400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 122970112 unmapped: 32989184 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:44.378732+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec9cb400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 171 ms_handle_reset con 0x55f8ec9cb400 session 0x55f8ed33fa40
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _renew_subs
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 171 handle_osd_map epochs [172,172], i have 171, src has [1,172]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 122986496 unmapped: 32972800 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1407049 data_alloc: 285212672 data_used: 3198976
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:45.378918+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebe69c00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 172 ms_handle_reset con 0x55f8ebfc5400 session 0x55f8eb523a40
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 172 ms_handle_reset con 0x55f8ebe69c00 session 0x55f8eb781860
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea73c000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 172 ms_handle_reset con 0x55f8ea73c000 session 0x55f8ebd83a40
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 172 handle_osd_map epochs [172,172], i have 172, src has [1,172]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 124100608 unmapped: 31858688 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:46.379047+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 172 handle_osd_map epochs [173,173], i have 172, src has [1,173]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebfc4800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 173 ms_handle_reset con 0x55f8ebfc4800 session 0x55f8eb242b40
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec9cb400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ee53c400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 173 ms_handle_reset con 0x55f8ec9cb400 session 0x55f8eb5421e0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.900584221s of 10.002549171s, submitted: 347
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 125214720 unmapped: 30744576 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:47.379163+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _renew_subs
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 173 handle_osd_map epochs [174,174], i have 173, src has [1,174]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 174 handle_osd_map epochs [174,174], i have 174, src has [1,174]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 174 heartbeat osd_stat(store_statfs(0x1b7c79000/0x0/0x1bfc00000, data 0x234f796/0x2474000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,2,3,4,5] op hist [0,0,0,0,0,0,1])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 174 ms_handle_reset con 0x55f8ee53c400 session 0x55f8ecb3d2c0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 174 handle_osd_map epochs [175,175], i have 174, src has [1,175]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea73c000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 175 ms_handle_reset con 0x55f8ea73c000 session 0x55f8eb141c20
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebe69c00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 125239296 unmapped: 30720000 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:48.379368+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 175 handle_osd_map epochs [175,175], i have 175, src has [1,175]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:49.379761+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 125239296 unmapped: 30720000 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 175 handle_osd_map epochs [175,176], i have 175, src has [1,176]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 176 ms_handle_reset con 0x55f8ebe69c00 session 0x55f8eb7c9a40
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 176 heartbeat osd_stat(store_statfs(0x1b7c4b000/0x0/0x1bfc00000, data 0x2376b54/0x249e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x5b0f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebfc4800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 176 ms_handle_reset con 0x55f8ebfc4800 session 0x55f8eb7c94a0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:50.379952+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 125149184 unmapped: 30810112 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1417160 data_alloc: 285212672 data_used: 3198976
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec9cb400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:51.380123+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 125198336 unmapped: 30760960 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _renew_subs
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 176 handle_osd_map epochs [177,177], i have 176, src has [1,177]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 177 ms_handle_reset con 0x55f8ec9cb400 session 0x55f8eb7c8000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 177 handle_osd_map epochs [177,177], i have 177, src has [1,177]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:52.380595+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 125255680 unmapped: 30703616 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ee53c400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 177 handle_osd_map epochs [177,177], i have 177, src has [1,177]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e9d8e000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 177 ms_handle_reset con 0x55f8e9d8e000 session 0x55f8ec7472c0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea73c000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 177 handle_osd_map epochs [178,178], i have 177, src has [1,178]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _renew_subs
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 177 handle_osd_map epochs [178,178], i have 178, src has [1,178]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 178 ms_handle_reset con 0x55f8ee53c400 session 0x55f8eb781c20
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebe69c00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:53.380765+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 128679936 unmapped: 27279360 heap: 155959296 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 178 ms_handle_reset con 0x55f8ea0ea800 session 0x55f8ebd82960
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 178 ms_handle_reset con 0x55f8ebe69c00 session 0x55f8eb7810e0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebfc4800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 178 ms_handle_reset con 0x55f8ebfc4800 session 0x55f8ed33f2c0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 178 heartbeat osd_stat(store_statfs(0x1b5d5a000/0x0/0x1bfc00000, data 0x2cc5b89/0x2df3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x70af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _renew_subs
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 178 handle_osd_map epochs [179,179], i have 178, src has [1,179]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 179 ms_handle_reset con 0x55f8ea73c000 session 0x55f8eb542d20
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:54.381051+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 179 heartbeat osd_stat(store_statfs(0x1b5d5a000/0x0/0x1bfc00000, data 0x2cc5b89/0x2df3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x70af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 128442368 unmapped: 31195136 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc handle_mgr_map Got map version 50
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/1895764224,v1:172.18.0.106:6811/1895764224]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0ea800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea73c000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:55.381301+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 128729088 unmapped: 30908416 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1520410 data_alloc: 285212672 data_used: 3211264
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 179 ms_handle_reset con 0x55f8ea0ea800 session 0x55f8eb7c8780
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 179 handle_osd_map epochs [179,180], i have 179, src has [1,180]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _renew_subs
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 179 handle_osd_map epochs [180,180], i have 180, src has [1,180]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 180 ms_handle_reset con 0x55f8ea73c000 session 0x55f8eb523860
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebe69c00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebfc4800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:56.381445+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 128761856 unmapped: 30875648 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 180 ms_handle_reset con 0x55f8ebfc4800 session 0x55f8ed3b3860
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ee53c400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 180 ms_handle_reset con 0x55f8ebe69c00 session 0x55f8ecb3c000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 180 handle_osd_map epochs [180,181], i have 180, src has [1,181]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.004740715s of 10.002447128s, submitted: 548
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 181 ms_handle_reset con 0x55f8ee53c400 session 0x55f8eb2434a0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ee53c400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:57.381579+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 181 ms_handle_reset con 0x55f8ee53c400 session 0x55f8ec746000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 128770048 unmapped: 30867456 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0ea800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea73c000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 181 ms_handle_reset con 0x55f8ea0ea800 session 0x55f8eb141680
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _renew_subs
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 181 handle_osd_map epochs [182,182], i have 181, src has [1,182]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 182 ms_handle_reset con 0x55f8ea73c000 session 0x55f8eb1be1e0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:58.381845+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 128843776 unmapped: 30793728 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebe69c00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 182 ms_handle_reset con 0x55f8ebe69c00 session 0x55f8ed396780
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebfc4800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:56:59.381985+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 128770048 unmapped: 30867456 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _renew_subs
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 182 handle_osd_map epochs [183,183], i have 182, src has [1,183]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 183 ms_handle_reset con 0x55f8ebfc4800 session 0x55f8ed3970e0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b660a000/0x0/0x1bfc00000, data 0x240ec5b/0x2542000, compress 0x0/0x0/0x0, omap 0x649, meta 0x70af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:00.382133+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 128794624 unmapped: 30842880 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1460429 data_alloc: 285212672 data_used: 3223552
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0ea800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 183 ms_handle_reset con 0x55f8ea0ea800 session 0x55f8ed397c20
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:01.382303+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 183 heartbeat osd_stat(store_statfs(0x1b6601000/0x0/0x1bfc00000, data 0x24145d7/0x254a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x70af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 128802816 unmapped: 30834688 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea73c000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 183 ms_handle_reset con 0x55f8ea73c000 session 0x55f8ed397e00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebe69c00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _renew_subs
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 183 handle_osd_map epochs [184,184], i have 183, src has [1,184]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 184 ms_handle_reset con 0x55f8ebe69c00 session 0x55f8ebd83e00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:02.382502+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 128901120 unmapped: 30736384 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebfc4800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 184 ms_handle_reset con 0x55f8ebfc4800 session 0x55f8ed34da40
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ee53c400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 184 handle_osd_map epochs [184,185], i have 184, src has [1,185]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 185 handle_osd_map epochs [185,185], i have 185, src has [1,185]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 185 ms_handle_reset con 0x55f8ee53c400 session 0x55f8eb1c0960
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:03.382664+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 128983040 unmapped: 30654464 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0ea800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 185 ms_handle_reset con 0x55f8ea0ea800 session 0x55f8ec744b40
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 185 handle_osd_map epochs [185,186], i have 185, src has [1,186]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 186 handle_osd_map epochs [186,186], i have 186, src has [1,186]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:04.382809+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 129024000 unmapped: 30613504 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:05.383044+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 129024000 unmapped: 30613504 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1463720 data_alloc: 285212672 data_used: 3227648
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea73c000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 186 ms_handle_reset con 0x55f8ea73c000 session 0x55f8eb301c20
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 186 heartbeat osd_stat(store_statfs(0x1b75da000/0x0/0x1bfc00000, data 0x241da9e/0x2554000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:06.383229+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 129024000 unmapped: 30613504 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.215642929s of 10.002377510s, submitted: 222
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:07.383414+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 129024000 unmapped: 30613504 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 186 handle_osd_map epochs [187,187], i have 186, src has [1,187]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:08.383574+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 129024000 unmapped: 30613504 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:09.383745+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 129024000 unmapped: 30613504 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:10.383942+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 129024000 unmapped: 30613504 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1464770 data_alloc: 285212672 data_used: 3239936
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:11.384146+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 129024000 unmapped: 30613504 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 187 heartbeat osd_stat(store_statfs(0x1b75c9000/0x0/0x1bfc00000, data 0x242cd73/0x2564000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:12.384327+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 129024000 unmapped: 30613504 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:13.384524+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 129024000 unmapped: 30613504 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:14.384726+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 129024000 unmapped: 30613504 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:15.384940+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 129024000 unmapped: 30613504 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1464770 data_alloc: 285212672 data_used: 3239936
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:16.385074+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 129024000 unmapped: 30613504 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 187 heartbeat osd_stat(store_statfs(0x1b75c9000/0x0/0x1bfc00000, data 0x242cd73/0x2564000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:17.385200+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 129024000 unmapped: 30613504 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:18.385406+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 129024000 unmapped: 30613504 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:19.385594+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 129024000 unmapped: 30613504 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:20.385780+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 129024000 unmapped: 30613504 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1464770 data_alloc: 285212672 data_used: 3239936
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 187 heartbeat osd_stat(store_statfs(0x1b75c9000/0x0/0x1bfc00000, data 0x242cd73/0x2564000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.108931541s of 14.132076263s, submitted: 16
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 187 heartbeat osd_stat(store_statfs(0x1b75c9000/0x0/0x1bfc00000, data 0x242cd73/0x2564000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:21.385981+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebe69c00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 129040384 unmapped: 30597120 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 187 ms_handle_reset con 0x55f8ebe69c00 session 0x55f8eb55f4a0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:22.386147+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 129040384 unmapped: 30597120 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebfc4800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _renew_subs
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 187 handle_osd_map epochs [188,188], i have 187, src has [1,188]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 188 ms_handle_reset con 0x55f8ebfc4800 session 0x55f8eb2034a0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:23.386327+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 129048576 unmapped: 30588928 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 188 heartbeat osd_stat(store_statfs(0x1b75b4000/0x0/0x1bfc00000, data 0x243eca2/0x2579000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec9cb400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 188 ms_handle_reset con 0x55f8ec9cb400 session 0x55f8eb52e960
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0ea800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:24.386462+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 129073152 unmapped: 30564352 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 188 handle_osd_map epochs [188,189], i have 188, src has [1,189]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 189 ms_handle_reset con 0x55f8ea0ea800 session 0x55f8ed3b3c20
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 189 heartbeat osd_stat(store_statfs(0x1b75a7000/0x0/0x1bfc00000, data 0x244b645/0x2586000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:25.386650+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 128425984 unmapped: 31211520 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1478827 data_alloc: 285212672 data_used: 3256320
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 189 heartbeat osd_stat(store_statfs(0x1b7595000/0x0/0x1bfc00000, data 0x245d566/0x2599000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea73c000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:26.389021+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 128737280 unmapped: 30900224 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 189 ms_handle_reset con 0x55f8ea73c000 session 0x55f8eb55fc20
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:27.389165+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 128745472 unmapped: 30892032 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:28.389313+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 128950272 unmapped: 30687232 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:29.389501+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 129089536 unmapped: 30547968 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:30.389684+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 189 heartbeat osd_stat(store_statfs(0x1b756a000/0x0/0x1bfc00000, data 0x2488c70/0x25c4000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 129097728 unmapped: 30539776 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1480243 data_alloc: 285212672 data_used: 3252224
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:31.389849+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 129097728 unmapped: 30539776 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.179577827s of 10.439832687s, submitted: 59
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebe69c00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 189 ms_handle_reset con 0x55f8ebe69c00 session 0x55f8ee6021e0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:32.389945+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 129449984 unmapped: 30187520 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 189 handle_osd_map epochs [189,190], i have 189, src has [1,190]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebfc4800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 190 ms_handle_reset con 0x55f8ebfc4800 session 0x55f8ec7443c0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec9cb400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 190 handle_osd_map epochs [190,190], i have 190, src has [1,190]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e9d8e800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 190 ms_handle_reset con 0x55f8ec9cb400 session 0x55f8ebd823c0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:33.393767+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 190 heartbeat osd_stat(store_statfs(0x1b7541000/0x0/0x1bfc00000, data 0x24aeacb/0x25ed000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 190 ms_handle_reset con 0x55f8e9d8e800 session 0x55f8eb1beb40
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 129490944 unmapped: 30146560 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:34.393924+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0ea800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 129490944 unmapped: 30146560 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 190 ms_handle_reset con 0x55f8ea0ea800 session 0x55f8eb1d05a0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:35.394107+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1485157 data_alloc: 285212672 data_used: 3264512
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 129556480 unmapped: 30081024 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 190 heartbeat osd_stat(store_statfs(0x1b752b000/0x0/0x1bfc00000, data 0x24c550a/0x2603000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea73c000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 190 ms_handle_reset con 0x55f8ea73c000 session 0x55f8eb1d1680
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:36.394288+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 129581056 unmapped: 30056448 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:37.394497+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebe69c00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 190 ms_handle_reset con 0x55f8ebe69c00 session 0x55f8eb5423c0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebfc4800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 129613824 unmapped: 30023680 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 190 ms_handle_reset con 0x55f8ebfc4800 session 0x55f8ed34d860
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:38.394676+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 128983040 unmapped: 30654464 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e9d8e800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 190 ms_handle_reset con 0x55f8e9d8e800 session 0x55f8eb1c1e00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:39.394850+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 129040384 unmapped: 30597120 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:40.395066+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1490852 data_alloc: 285212672 data_used: 3264512
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0ea800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 129040384 unmapped: 30597120 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 190 handle_osd_map epochs [191,191], i have 190, src has [1,191]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 191 ms_handle_reset con 0x55f8ea0ea800 session 0x55f8ed3b23c0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:41.395255+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 191 heartbeat osd_stat(store_statfs(0x1b7500000/0x0/0x1bfc00000, data 0x24ec424/0x262d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 129040384 unmapped: 30597120 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea73c000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.925024986s of 10.247732162s, submitted: 98
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 191 handle_osd_map epochs [192,192], i have 191, src has [1,192]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 192 ms_handle_reset con 0x55f8ea73c000 session 0x55f8ed3b3680
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 192 handle_osd_map epochs [192,192], i have 192, src has [1,192]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:42.395401+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 129187840 unmapped: 30449664 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebe69c00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 192 ms_handle_reset con 0x55f8ebe69c00 session 0x55f8e9c683c0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec9cb400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:43.395549+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 129196032 unmapped: 30441472 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 192 handle_osd_map epochs [193,193], i have 192, src has [1,193]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 193 ms_handle_reset con 0x55f8ec9cb400 session 0x55f8eb5501e0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e9d8e800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 193 ms_handle_reset con 0x55f8e9d8e800 session 0x55f8eb52ef00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0ea800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:44.395740+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 129212416 unmapped: 30425088 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 193 handle_osd_map epochs [193,194], i have 193, src has [1,194]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 194 handle_osd_map epochs [194,194], i have 194, src has [1,194]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 194 ms_handle_reset con 0x55f8ea0ea800 session 0x55f8ea219860
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea73c000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 194 handle_osd_map epochs [194,194], i have 194, src has [1,194]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 194 ms_handle_reset con 0x55f8ea73c000 session 0x55f8eb55eb40
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 194 handle_osd_map epochs [193,194], i have 194, src has [1,194]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebe69c00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:45.395950+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1507115 data_alloc: 285212672 data_used: 3276800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 130301952 unmapped: 29335552 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 194 heartbeat osd_stat(store_statfs(0x1b74c9000/0x0/0x1bfc00000, data 0x251d943/0x2663000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 194 ms_handle_reset con 0x55f8ebe69c00 session 0x55f8eb243c20
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:46.396105+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e8f27400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 130301952 unmapped: 29335552 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 194 ms_handle_reset con 0x55f8e8f27400 session 0x55f8eb2423c0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e8f27400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 194 ms_handle_reset con 0x55f8e8f27400 session 0x55f8eb242780
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:47.396239+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 130310144 unmapped: 29327360 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e9d8e800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 194 handle_osd_map epochs [195,195], i have 194, src has [1,195]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 195 ms_handle_reset con 0x55f8e9d8e800 session 0x55f8ed34d0e0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0ea800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:48.396396+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 195 ms_handle_reset con 0x55f8ea0ea800 session 0x55f8ecb3d4a0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 130359296 unmapped: 29278208 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea73c000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebe69c00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 195 heartbeat osd_stat(store_statfs(0x1b74b5000/0x0/0x1bfc00000, data 0x252fe65/0x2678000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,2,3,4,5] op hist [0,0,0,0,1])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 195 ms_handle_reset con 0x55f8ebe69c00 session 0x55f8eb1c10e0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 195 ms_handle_reset con 0x55f8ea73c000 session 0x55f8ed3b2960
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:49.396550+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 130408448 unmapped: 29229056 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e8f27400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 195 ms_handle_reset con 0x55f8e8f27400 session 0x55f8ed3b3e00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e9d8e800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 195 ms_handle_reset con 0x55f8e9d8e800 session 0x55f8ec49f2c0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0ea800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:50.396729+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 195 ms_handle_reset con 0x55f8ea0ea800 session 0x55f8ec49f680
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1517796 data_alloc: 285212672 data_used: 3297280
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 130441216 unmapped: 29196288 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebe69c00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 195 ms_handle_reset con 0x55f8ebe69c00 session 0x55f8ec49f860
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:51.396918+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 195 heartbeat osd_stat(store_statfs(0x1b7498000/0x0/0x1bfc00000, data 0x254c7b4/0x2696000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 130441216 unmapped: 29196288 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.612747192s of 10.047723770s, submitted: 113
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e8f26000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:52.397094+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 195 ms_handle_reset con 0x55f8e8f26000 session 0x55f8ec49fa40
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 130482176 unmapped: 29155328 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 195 handle_osd_map epochs [196,196], i have 195, src has [1,196]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:53.397325+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 196 handle_osd_map epochs [196,196], i have 196, src has [1,196]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e8f27400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 196 ms_handle_reset con 0x55f8e8f27400 session 0x55f8ec49fe00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 130498560 unmapped: 29138944 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:54.397520+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 130498560 unmapped: 29138944 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e9d8e800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 196 heartbeat osd_stat(store_statfs(0x1b746f000/0x0/0x1bfc00000, data 0x2572619/0x26be000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 196 ms_handle_reset con 0x55f8e9d8e800 session 0x55f8ec49f2c0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:55.397764+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0ea800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 196 ms_handle_reset con 0x55f8ea0ea800 session 0x55f8ed3b2960
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1526544 data_alloc: 285212672 data_used: 3309568
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebe69c00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 130514944 unmapped: 29122560 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 196 handle_osd_map epochs [197,197], i have 196, src has [1,197]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ece51c00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 197 ms_handle_reset con 0x55f8ebe69c00 session 0x55f8ecb3d4a0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 197 ms_handle_reset con 0x55f8ece51c00 session 0x55f8ed34c1e0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ece51c00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 197 ms_handle_reset con 0x55f8ece51c00 session 0x55f8ed34d0e0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e8f27400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:56.398003+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 197 ms_handle_reset con 0x55f8e8f27400 session 0x55f8eb2423c0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 130564096 unmapped: 29073408 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e9d8e800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 197 heartbeat osd_stat(store_statfs(0x1b744f000/0x0/0x1bfc00000, data 0x2591fe6/0x26de000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:57.398156+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0ea800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 197 ms_handle_reset con 0x55f8e9d8e800 session 0x55f8eb55eb40
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 197 ms_handle_reset con 0x55f8ea0ea800 session 0x55f8eb55e1e0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebe69c00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 130588672 unmapped: 29048832 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 197 ms_handle_reset con 0x55f8ebe69c00 session 0x55f8e95412c0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:58.398382+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e8f27400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 197 ms_handle_reset con 0x55f8e8f27400 session 0x55f8e9c683c0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 130670592 unmapped: 28966912 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:57:59.398571+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 131866624 unmapped: 27770880 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e9d8e800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 197 ms_handle_reset con 0x55f8e9d8e800 session 0x55f8ed3b23c0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0ea800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 197 ms_handle_reset con 0x55f8ea0ea800 session 0x55f8eb5423c0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:00.398790+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1533759 data_alloc: 285212672 data_used: 3313664
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 131883008 unmapped: 27754496 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ece51c00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 197 ms_handle_reset con 0x55f8ece51c00 session 0x55f8eb5325a0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:01.398950+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 197 heartbeat osd_stat(store_statfs(0x1b7412000/0x0/0x1bfc00000, data 0x25d311c/0x271c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 131883008 unmapped: 27754496 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.510819435s of 10.116579056s, submitted: 157
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:02.399143+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ece51400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 197 ms_handle_reset con 0x55f8ece51400 session 0x55f8eb1c0000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e8f27400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 131907584 unmapped: 27729920 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 197 ms_handle_reset con 0x55f8e8f27400 session 0x55f8ee41c000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 197 handle_osd_map epochs [198,198], i have 197, src has [1,198]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:03.399383+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 131907584 unmapped: 27729920 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:04.399584+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 131907584 unmapped: 27729920 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:05.399843+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1541173 data_alloc: 285212672 data_used: 3325952
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 131907584 unmapped: 27729920 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:06.400080+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 198 heartbeat osd_stat(store_statfs(0x1b73c3000/0x0/0x1bfc00000, data 0x261d5ca/0x276a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 131915776 unmapped: 27721728 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:07.400226+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 131923968 unmapped: 27713536 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:08.400424+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 132005888 unmapped: 27631616 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:09.400665+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 132005888 unmapped: 27631616 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:10.400997+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1544325 data_alloc: 285212672 data_used: 3325952
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 132005888 unmapped: 27631616 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:11.402279+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 198 heartbeat osd_stat(store_statfs(0x1b739b000/0x0/0x1bfc00000, data 0x264709f/0x2793000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 132005888 unmapped: 27631616 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.769298553s of 10.004710197s, submitted: 63
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:12.402813+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 132014080 unmapped: 27623424 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e9d8e800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:13.404375+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 198 heartbeat osd_stat(store_statfs(0x1b7374000/0x0/0x1bfc00000, data 0x266b729/0x27b9000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 132014080 unmapped: 27623424 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 198 ms_handle_reset con 0x55f8e9d8e800 session 0x55f8ee41c3c0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:14.404509+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 198 heartbeat osd_stat(store_statfs(0x1b7376000/0x0/0x1bfc00000, data 0x266b662/0x27b8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 132104192 unmapped: 27533312 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:15.404657+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1543339 data_alloc: 285212672 data_used: 3325952
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 132112384 unmapped: 27525120 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0ea800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:16.404989+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 198 heartbeat osd_stat(store_statfs(0x1b7376000/0x0/0x1bfc00000, data 0x266b662/0x27b8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 198 ms_handle_reset con 0x55f8ea0ea800 session 0x55f8ee41c5a0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 134225920 unmapped: 25411584 heap: 159637504 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ece51c00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:17.405449+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 141508608 unmapped: 26525696 heap: 168034304 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:18.406297+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 133447680 unmapped: 42983424 heap: 176431104 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:19.406506+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 133464064 unmapped: 42967040 heap: 176431104 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:20.406986+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ee53d800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 198 ms_handle_reset con 0x55f8ee53d800 session 0x55f8eb523680
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1661305 data_alloc: 285212672 data_used: 3325952
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 133627904 unmapped: 42803200 heap: 176431104 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:21.407149+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 198 heartbeat osd_stat(store_statfs(0x1b6338000/0x0/0x1bfc00000, data 0x36aa6a0/0x37f6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 142024704 unmapped: 34406400 heap: 176431104 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 198 heartbeat osd_stat(store_statfs(0x1b6338000/0x0/0x1bfc00000, data 0x36aa6a0/0x37f6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:22.407377+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.290822983s of 10.585278511s, submitted: 54
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 133636096 unmapped: 42795008 heap: 176431104 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ee53dc00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:23.407609+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 198 ms_handle_reset con 0x55f8ee53dc00 session 0x55f8ebd82000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ee53dc00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 133775360 unmapped: 42655744 heap: 176431104 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e8f27400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 198 ms_handle_reset con 0x55f8e8f27400 session 0x55f8ed34d860
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 198 ms_handle_reset con 0x55f8ee53dc00 session 0x55f8e9541a40
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e9d8e800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:24.407743+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 198 ms_handle_reset con 0x55f8e9d8e800 session 0x55f8f004cf00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 144154624 unmapped: 40673280 heap: 184827904 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:25.407988+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2137427 data_alloc: 285212672 data_used: 3325952
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 135790592 unmapped: 49037312 heap: 184827904 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:26.408156+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 198 heartbeat osd_stat(store_statfs(0x1b2868000/0x0/0x1bfc00000, data 0x7178d49/0x72c6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0ea800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 136962048 unmapped: 47865856 heap: 184827904 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _renew_subs
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 198 handle_osd_map epochs [199,199], i have 198, src has [1,199]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ee53d800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebfc4000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:27.408306+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 199 ms_handle_reset con 0x55f8ee53d800 session 0x55f8ed3b2b40
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 199 handle_osd_map epochs [199,199], i have 199, src has [1,199]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebfc4400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 137936896 unmapped: 46891008 heap: 184827904 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 199 ms_handle_reset con 0x55f8ebfc4000 session 0x55f8eb542960
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:28.408468+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 199 handle_osd_map epochs [199,200], i have 199, src has [1,200]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 200 handle_osd_map epochs [200,200], i have 200, src has [1,200]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 200 handle_osd_map epochs [200,200], i have 200, src has [1,200]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 200 ms_handle_reset con 0x55f8ebfc4400 session 0x55f8eb1c0b40
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 200 handle_osd_map epochs [200,200], i have 200, src has [1,200]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 200 ms_handle_reset con 0x55f8ea0ea800 session 0x55f8f004d680
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 137879552 unmapped: 46948352 heap: 184827904 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:29.408679+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 146464768 unmapped: 38363136 heap: 184827904 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e8f27400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 200 ms_handle_reset con 0x55f8e8f27400 session 0x55f8ee6883c0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:30.408962+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 200 heartbeat osd_stat(store_statfs(0x1b0425000/0x0/0x1bfc00000, data 0x95b52b4/0x9709000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2524413 data_alloc: 285212672 data_used: 3338240
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 146808832 unmapped: 38019072 heap: 184827904 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 200 heartbeat osd_stat(store_statfs(0x1aec0f000/0x0/0x1bfc00000, data 0xadca81c/0xaf1f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e9d8e800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 200 ms_handle_reset con 0x55f8e9d8e800 session 0x55f8ee6885a0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:31.409241+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 138412032 unmapped: 46415872 heap: 184827904 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebfc4000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:32.409432+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ee53d800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ee53dc00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.189112663s of 10.241912842s, submitted: 187
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 140132352 unmapped: 44695552 heap: 184827904 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 200 ms_handle_reset con 0x55f8ee53d800 session 0x55f8e9c69c20
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e8f27400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:33.409561+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e9d8e800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 200 ms_handle_reset con 0x55f8e9d8e800 session 0x55f8ee689e00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 200 heartbeat osd_stat(store_statfs(0x1ad22e000/0x0/0x1bfc00000, data 0xc7aca88/0xc900000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 140992512 unmapped: 43835392 heap: 184827904 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:34.409851+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 200 handle_osd_map epochs [200,201], i have 200, src has [1,201]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 141000704 unmapped: 43827200 heap: 184827904 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 201 handle_osd_map epochs [201,201], i have 201, src has [1,201]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:35.410071+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 201 ms_handle_reset con 0x55f8ee53dc00 session 0x55f8e9467860
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2718731 data_alloc: 285212672 data_used: 3350528
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 141205504 unmapped: 43622400 heap: 184827904 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 201 ms_handle_reset con 0x55f8e8f27400 session 0x55f8ed3b2b40
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 201 handle_osd_map epochs [201,201], i have 201, src has [1,201]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:36.410227+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 201 ms_handle_reset con 0x55f8ebfc4000 session 0x55f8ee688780
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 141131776 unmapped: 43696128 heap: 184827904 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:37.410414+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 141230080 unmapped: 43597824 heap: 184827904 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 201 heartbeat osd_stat(store_statfs(0x1ac9f2000/0x0/0x1bfc00000, data 0xcfe7850/0xd13c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x60cf9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:38.410596+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 141467648 unmapped: 43360256 heap: 184827904 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 201 handle_osd_map epochs [201,202], i have 201, src has [1,202]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0ea800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebfc4400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:39.410950+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 202 handle_osd_map epochs [202,202], i have 202, src has [1,202]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 149913600 unmapped: 34914304 heap: 184827904 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 202 handle_osd_map epochs [202,203], i have 202, src has [1,203]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:40.411112+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2837641 data_alloc: 285212672 data_used: 3362816
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 203 handle_osd_map epochs [203,203], i have 203, src has [1,203]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.12] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 149921792 unmapped: 34906112 heap: 184827904 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.8] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:41.411254+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 141647872 unmapped: 43180032 heap: 184827904 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:42.411445+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 203 ms_handle_reset con 0x55f8ebfc4400 session 0x55f8eb5423c0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 203 ms_handle_reset con 0x55f8ea0ea800 session 0x55f8eb30e5a0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.847164154s of 10.057739258s, submitted: 136
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 141705216 unmapped: 43122688 heap: 184827904 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 203 handle_osd_map epochs [203,204], i have 203, src has [1,204]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 204 handle_osd_map epochs [204,204], i have 204, src has [1,204]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:43.411632+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 204 heartbeat osd_stat(store_statfs(0x1ab738000/0x0/0x1bfc00000, data 0xde9b29d/0xdff5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 141729792 unmapped: 43098112 heap: 184827904 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:44.412011+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 141803520 unmapped: 43024384 heap: 184827904 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:45.412182+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2877134 data_alloc: 285212672 data_used: 3375104
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 141926400 unmapped: 42901504 heap: 184827904 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:46.412308+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 204 handle_osd_map epochs [204,205], i have 204, src has [1,205]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 205 handle_osd_map epochs [205,205], i have 205, src has [1,205]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.12] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 143204352 unmapped: 41623552 heap: 184827904 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 205 handle_osd_map epochs [205,205], i have 205, src has [1,205]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 205 handle_osd_map epochs [205,205], i have 205, src has [1,205]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.8] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e8f27400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:47.413002+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 205 handle_osd_map epochs [205,205], i have 205, src has [1,205]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e9d8e800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 151535616 unmapped: 33292288 heap: 184827904 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 205 ms_handle_reset con 0x55f8e9d8e800 session 0x55f8e9c683c0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:48.413133+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 205 handle_osd_map epochs [206,206], i have 205, src has [1,206]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.8] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.12] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 206 ms_handle_reset con 0x55f8e8f27400 session 0x55f8eb1beb40
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebfc4000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 143400960 unmapped: 41426944 heap: 184827904 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 206 handle_osd_map epochs [206,206], i have 206, src has [1,206]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebfc4400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 206 ms_handle_reset con 0x55f8ebfc4400 session 0x55f8eb55e1e0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _renew_subs
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 206 handle_osd_map epochs [207,207], i have 206, src has [1,207]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 206 handle_osd_map epochs [207,207], i have 207, src has [1,207]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:49.413249+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 207 ms_handle_reset con 0x55f8ebfc4000 session 0x55f8eb243e00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebfc4000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 207 handle_osd_map epochs [207,207], i have 207, src has [1,207]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 207 heartbeat osd_stat(store_statfs(0x1a8a24000/0x0/0x1bfc00000, data 0x10b224fb/0x10c83000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [0,0,0,1])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 207 ms_handle_reset con 0x55f8ebfc4000 session 0x55f8ee602780
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 143654912 unmapped: 41172992 heap: 184827904 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:50.413485+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e8f27400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e9d8e800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3224709 data_alloc: 285212672 data_used: 3407872
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 143548416 unmapped: 41279488 heap: 184827904 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 207 ms_handle_reset con 0x55f8e9d8e800 session 0x55f8ee41c5a0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0ea800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:51.413635+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 207 ms_handle_reset con 0x55f8ea0ea800 session 0x55f8ed34d0e0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _renew_subs
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 207 handle_osd_map epochs [208,208], i have 207, src has [1,208]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 208 handle_osd_map epochs [208,208], i have 208, src has [1,208]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 208 handle_osd_map epochs [208,208], i have 208, src has [1,208]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebfc4400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ee53dc00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 208 ms_handle_reset con 0x55f8ee53dc00 session 0x55f8e9467c20
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 152141824 unmapped: 32686080 heap: 184827904 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:52.413781+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 208 handle_osd_map epochs [208,209], i have 208, src has [1,209]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 209 handle_osd_map epochs [209,209], i have 209, src has [1,209]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 209 ms_handle_reset con 0x55f8ebfc4400 session 0x55f8ed34c1e0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 209 ms_handle_reset con 0x55f8e8f27400 session 0x55f8ecb3da40
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.900153160s of 10.061613083s, submitted: 266
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 143876096 unmapped: 40951808 heap: 184827904 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 209 handle_osd_map epochs [210,210], i have 209, src has [1,210]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.8] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.12] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 210 handle_osd_map epochs [210,210], i have 210, src has [1,210]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e9d8e800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:53.414020+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 210 handle_osd_map epochs [210,210], i have 210, src has [1,210]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 210 handle_osd_map epochs [210,211], i have 210, src has [1,211]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 210 handle_osd_map epochs [211,211], i have 211, src has [1,211]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 143302656 unmapped: 41525248 heap: 184827904 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.8] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.12] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 211 handle_osd_map epochs [211,211], i have 211, src has [1,211]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:54.414143+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 211 ms_handle_reset con 0x55f8e9d8e800 session 0x55f8ed3963c0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 211 handle_osd_map epochs [211,211], i have 211, src has [1,211]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 144367616 unmapped: 40460288 heap: 184827904 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 211 heartbeat osd_stat(store_statfs(0x1a54f8000/0x0/0x1bfc00000, data 0x140c9ca9/0x14234000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:55.414302+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0ea800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3640693 data_alloc: 285212672 data_used: 3420160
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 152993792 unmapped: 31834112 heap: 184827904 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _renew_subs
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 211 handle_osd_map epochs [212,212], i have 211, src has [1,212]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 212 handle_osd_map epochs [212,212], i have 212, src has [1,212]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 212 ms_handle_reset con 0x55f8ea0ea800 session 0x55f8eb1c10e0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebfc4000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ee53dc00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 212 handle_osd_map epochs [212,212], i have 212, src has [1,212]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 212 ms_handle_reset con 0x55f8ebfc4000 session 0x55f8ee689a40
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:56.414413+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 212 ms_handle_reset con 0x55f8ee53dc00 session 0x55f8ed3b2960
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 144736256 unmapped: 40091648 heap: 184827904 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e8f27400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:57.414553+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 212 ms_handle_reset con 0x55f8e8f27400 session 0x55f8ee689860
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e9d8e800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 212 ms_handle_reset con 0x55f8e9d8e800 session 0x55f8ee41d0e0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0ea800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 212 ms_handle_reset con 0x55f8ea0ea800 session 0x55f8ee688d20
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebfc4000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 212 ms_handle_reset con 0x55f8ebfc4000 session 0x55f8ee688b40
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ee53d800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 212 ms_handle_reset con 0x55f8ee53d800 session 0x55f8ee688960
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 212 handle_osd_map epochs [213,213], i have 212, src has [1,213]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.12] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.8] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 145457152 unmapped: 43573248 heap: 189030400 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e8f27400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:58.414698+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 213 ms_handle_reset con 0x55f8e8f27400 session 0x55f8e9467680
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 213 handle_osd_map epochs [213,213], i have 213, src has [1,213]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 145506304 unmapped: 43524096 heap: 189030400 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e9d8e800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 213 ms_handle_reset con 0x55f8e9d8e800 session 0x55f8eb112b40
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0ea800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:58:59.414853+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 213 ms_handle_reset con 0x55f8ea0ea800 session 0x55f8e94663c0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 154091520 unmapped: 34938880 heap: 189030400 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:00.415019+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebfc4000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 213 ms_handle_reset con 0x55f8ebfc4000 session 0x55f8e94665a0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec9ca800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 213 ms_handle_reset con 0x55f8ec9ca800 session 0x55f8e94661e0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec9ca800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 4254470 data_alloc: 285212672 data_used: 3416064
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 213 heartbeat osd_stat(store_statfs(0x19f7ca000/0x0/0x1bfc00000, data 0x19df9c2f/0x19f64000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e8f27400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 145850368 unmapped: 43180032 heap: 189030400 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:01.415190+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 147120128 unmapped: 41910272 heap: 189030400 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:02.415354+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 213 heartbeat osd_stat(store_statfs(0x19e7cb000/0x0/0x1bfc00000, data 0x1adf9b94/0x1af63000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.545690536s of 10.013888359s, submitted: 296
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 213 handle_osd_map epochs [213,214], i have 213, src has [1,214]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 156508160 unmapped: 32522240 heap: 189030400 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:03.415481+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 214 handle_osd_map epochs [214,214], i have 214, src has [1,214]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 148119552 unmapped: 40910848 heap: 189030400 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e9d8e800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:04.415629+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 214 ms_handle_reset con 0x55f8e9d8e800 session 0x55f8eb55e5a0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 214 handle_osd_map epochs [214,214], i have 214, src has [1,214]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 148316160 unmapped: 40714240 heap: 189030400 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 214 handle_osd_map epochs [214,215], i have 214, src has [1,215]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 215 ms_handle_reset con 0x55f8ece51c00 session 0x55f8ee41da40
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:05.415821+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 215 heartbeat osd_stat(store_statfs(0x19b782000/0x0/0x1bfc00000, data 0x1de4091f/0x1dfab000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 4741106 data_alloc: 301989888 data_used: 8536064
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebfc4000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 148381696 unmapped: 40648704 heap: 189030400 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 215 ms_handle_reset con 0x55f8ebfc4000 session 0x55f8ec49e000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:06.415977+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ef05c400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ef05c000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 215 ms_handle_reset con 0x55f8ef05c000 session 0x55f8e9c68b40
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 148422656 unmapped: 40607744 heap: 189030400 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 215 handle_osd_map epochs [215,216], i have 215, src has [1,216]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:07.416142+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 216 handle_osd_map epochs [216,216], i have 216, src has [1,216]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 216 handle_osd_map epochs [216,216], i have 216, src has [1,216]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 148553728 unmapped: 40476672 heap: 189030400 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:08.416421+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 216 ms_handle_reset con 0x55f8ef05c400 session 0x55f8eb7c85a0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e9d8e800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 216 heartbeat osd_stat(store_statfs(0x1b5f5a000/0x0/0x1bfc00000, data 0x36650e3/0x37d4000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 147169280 unmapped: 41861120 heap: 189030400 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebfc4000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 216 ms_handle_reset con 0x55f8e9d8e800 session 0x55f8eb52f2c0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 216 ms_handle_reset con 0x55f8ebfc4000 session 0x55f8e9364780
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ece51c00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:09.416588+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 216 ms_handle_reset con 0x55f8ece51c00 session 0x55f8ec7441e0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 147210240 unmapped: 41820160 heap: 189030400 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:10.416742+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 1920231 data_alloc: 301989888 data_used: 8536064
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 147210240 unmapped: 41820160 heap: 189030400 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:11.416929+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ef05c000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 216 ms_handle_reset con 0x55f8ef05c000 session 0x55f8ed34c3c0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 148889600 unmapped: 40140800 heap: 189030400 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:12.417101+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _renew_subs
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 216 handle_osd_map epochs [217,217], i have 216, src has [1,217]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.916218758s of 10.086256981s, submitted: 312
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 153206784 unmapped: 35823616 heap: 189030400 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 217 heartbeat osd_stat(store_statfs(0x1b4c34000/0x0/0x1bfc00000, data 0x498994b/0x4afa000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:13.417277+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0d2000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 153485312 unmapped: 35545088 heap: 189030400 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:14.417436+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 217 ms_handle_reset con 0x55f8ea0d2000 session 0x55f8ec747c20
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e9d8e800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 217 ms_handle_reset con 0x55f8e9d8e800 session 0x55f8ecb3de00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 153976832 unmapped: 35053568 heap: 189030400 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0d2000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:15.417741+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 217 ms_handle_reset con 0x55f8ea0d2000 session 0x55f8ebd834a0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2088106 data_alloc: 301989888 data_used: 9723904
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 154042368 unmapped: 34988032 heap: 189030400 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:16.418058+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebfc4000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ece51c00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ef05c000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 217 ms_handle_reset con 0x55f8ece51c00 session 0x55f8ed396d20
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 217 ms_handle_reset con 0x55f8ef05c000 session 0x55f8f004c3c0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 217 ms_handle_reset con 0x55f8ebfc4000 session 0x55f8eb1d0960
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 153903104 unmapped: 35127296 heap: 189030400 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:17.418269+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                                          ** DB Stats **
                                                          Uptime(secs): 8400.1 total, 600.0 interval
                                                          Cumulative writes: 16K writes, 61K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.01 MB/s
                                                          Cumulative WAL: 16K writes, 5542 syncs, 2.94 writes per sync, written: 0.06 GB, 0.01 MB/s
                                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                                          Interval writes: 11K writes, 39K keys, 11K commit groups, 1.0 writes per commit group, ingest: 36.01 MB, 0.06 MB/s
                                                          Interval WAL: 11K writes, 4754 syncs, 2.35 writes per sync, written: 0.04 GB, 0.06 MB/s
                                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e9d8e800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 217 heartbeat osd_stat(store_statfs(0x1b4b58000/0x0/0x1bfc00000, data 0x4a5ff24/0x4bd5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0d2000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 217 ms_handle_reset con 0x55f8ea0d2000 session 0x55f8ee603860
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 217 ms_handle_reset con 0x55f8e9d8e800 session 0x55f8eb2bef00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 153919488 unmapped: 35110912 heap: 189030400 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:18.418501+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 217 heartbeat osd_stat(store_statfs(0x1b4b4f000/0x0/0x1bfc00000, data 0x4a69177/0x4bdd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 153927680 unmapped: 35102720 heap: 189030400 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:19.418669+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 217 heartbeat osd_stat(store_statfs(0x1b4b4f000/0x0/0x1bfc00000, data 0x4a69177/0x4bdd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 154419200 unmapped: 34611200 heap: 189030400 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:20.418966+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebfc4000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 217 ms_handle_reset con 0x55f8ebfc4000 session 0x55f8e94674a0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2089151 data_alloc: 301989888 data_used: 9728000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 154443776 unmapped: 34586624 heap: 189030400 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 217 heartbeat osd_stat(store_statfs(0x1b4b09000/0x0/0x1bfc00000, data 0x4ab0966/0x4c25000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:21.419209+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ece51c00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 217 ms_handle_reset con 0x55f8ece51c00 session 0x55f8ed3b3c20
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ef05c000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 154468352 unmapped: 34562048 heap: 189030400 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 217 ms_handle_reset con 0x55f8ef05c000 session 0x55f8eb300d20
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 217 heartbeat osd_stat(store_statfs(0x1b4b0a000/0x0/0x1bfc00000, data 0x4ab0904/0x4c24000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:22.419374+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e9d8e800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.282476425s of 10.095924377s, submitted: 214
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 217 ms_handle_reset con 0x55f8e9d8e800 session 0x55f8ed3b3680
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0d2000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 155533312 unmapped: 33497088 heap: 189030400 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:23.419518+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 217 ms_handle_reset con 0x55f8ea0d2000 session 0x55f8e95405a0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 155541504 unmapped: 33488896 heap: 189030400 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:24.419708+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 217 heartbeat osd_stat(store_statfs(0x1b4acf000/0x0/0x1bfc00000, data 0x4aed396/0x4c5e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 155541504 unmapped: 33488896 heap: 189030400 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:25.419949+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2099570 data_alloc: 301989888 data_used: 9728000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 156131328 unmapped: 32899072 heap: 189030400 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:26.420140+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 156139520 unmapped: 32890880 heap: 189030400 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:27.420301+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 156139520 unmapped: 32890880 heap: 189030400 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:28.420437+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 156147712 unmapped: 32882688 heap: 189030400 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:29.420643+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 156147712 unmapped: 32882688 heap: 189030400 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:30.420810+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 217 heartbeat osd_stat(store_statfs(0x1b4a5d000/0x0/0x1bfc00000, data 0x4b5c7d6/0x4cd0000, compress 0x0/0x0/0x0, omap 0x649, meta 0x64cf9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2103528 data_alloc: 301989888 data_used: 9728000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 156147712 unmapped: 32882688 heap: 189030400 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:31.420963+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 156164096 unmapped: 32866304 heap: 189030400 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:32.421131+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 156205056 unmapped: 32825344 heap: 189030400 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:33.421344+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.363632202s of 10.829681396s, submitted: 115
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 156205056 unmapped: 32825344 heap: 189030400 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:34.421541+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 217 heartbeat osd_stat(store_statfs(0x1b3867000/0x0/0x1bfc00000, data 0x4bb24fd/0x4d25000, compress 0x0/0x0/0x0, omap 0x649, meta 0x766f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 156205056 unmapped: 32825344 heap: 189030400 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:35.421758+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2120866 data_alloc: 301989888 data_used: 9728000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 157286400 unmapped: 31744000 heap: 189030400 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:36.421975+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebfc4000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 217 ms_handle_reset con 0x55f8ebfc4000 session 0x55f8ee41d0e0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 157294592 unmapped: 31735808 heap: 189030400 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:37.422177+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ece51c00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec014800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 157532160 unmapped: 31498240 heap: 189030400 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:38.422369+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8eb67a400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _renew_subs
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 217 handle_osd_map epochs [218,218], i have 217, src has [1,218]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 218 handle_osd_map epochs [218,218], i have 218, src has [1,218]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8eb67ac00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 218 ms_handle_reset con 0x55f8eb67ac00 session 0x55f8ee688960
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 218 ms_handle_reset con 0x55f8ec014800 session 0x55f8eb1c10e0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 157564928 unmapped: 31465472 heap: 189030400 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 218 heartbeat osd_stat(store_statfs(0x1b33bf000/0x0/0x1bfc00000, data 0x4c574f6/0x4dce000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7a6f9b7), peers [0,2,3,4,5] op hist [0,0,1])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:39.422549+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _renew_subs
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 218 handle_osd_map epochs [219,219], i have 218, src has [1,219]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e9d8e800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 219 ms_handle_reset con 0x55f8e9d8e800 session 0x55f8e9c68d20
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea0d2000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8eb67ac00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 219 handle_osd_map epochs [219,219], i have 219, src has [1,219]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 219 ms_handle_reset con 0x55f8eb67ac00 session 0x55f8eb52e5a0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 219 ms_handle_reset con 0x55f8ea0d2000 session 0x55f8efa4e000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 157581312 unmapped: 31449088 heap: 189030400 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:40.422750+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 219 handle_osd_map epochs [219,220], i have 219, src has [1,220]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ebfc4000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8eea8e000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 220 ms_handle_reset con 0x55f8ebfc4000 session 0x55f8eb141a40
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2142466 data_alloc: 301989888 data_used: 9752576
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 157810688 unmapped: 31219712 heap: 189030400 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 220 ms_handle_reset con 0x55f8eb67a400 session 0x55f8ee41d860
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:41.422904+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8eb67a400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 183713792 unmapped: 30515200 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:42.423094+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e9d8e800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 163717120 unmapped: 50511872 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:43.423251+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.950443268s of 10.014329910s, submitted: 194
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _renew_subs
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 220 handle_osd_map epochs [221,221], i have 220, src has [1,221]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 163160064 unmapped: 51068928 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:44.423434+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8eb67ac00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 221 ms_handle_reset con 0x55f8eb67ac00 session 0x55f8eb523680
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 221 heartbeat osd_stat(store_statfs(0x1aaf27000/0x0/0x1bfc00000, data 0xd0e5e47/0xd266000, compress 0x0/0x0/0x0, omap 0x649, meta 0x7a6f9b7), peers [0,2,3,4,5] op hist [0,0,0,0,0,0,0,1])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 221 handle_osd_map epochs [222,222], i have 221, src has [1,222]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 163176448 unmapped: 51052544 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 222 handle_osd_map epochs [222,222], i have 222, src has [1,222]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:45.423636+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 222 ms_handle_reset con 0x55f8e9d8e800 session 0x55f8ebd82780
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 3381353 data_alloc: 301989888 data_used: 9760768
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 159105024 unmapped: 55123968 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:46.423801+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 165748736 unmapped: 48480256 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:47.423938+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 222 handle_osd_map epochs [222,223], i have 222, src has [1,223]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.12] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 223 handle_osd_map epochs [223,223], i have 223, src has [1,223]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.8] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 223 handle_osd_map epochs [223,224], i have 223, src has [1,224]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 224 handle_osd_map epochs [223,223], i have 224, src has [1,223]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 224 handle_osd_map epochs [223,223], i have 224, src has [1,223]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 224 handle_osd_map epochs [224,224], i have 224, src has [1,224]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 170024960 unmapped: 44204032 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:48.424074+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 170041344 unmapped: 44187648 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:49.424211+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8eb67bc00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 224 heartbeat osd_stat(store_statfs(0x1a3518000/0x0/0x1bfc00000, data 0x1394c3f9/0x13ad3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 161718272 unmapped: 52510720 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:50.424361+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 224 handle_osd_map epochs [224,225], i have 224, src has [1,225]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 225 handle_osd_map epochs [225,225], i have 225, src has [1,225]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 225 heartbeat osd_stat(store_statfs(0x1a1d12000/0x0/0x1bfc00000, data 0x1515508b/0x152dc000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8eb67b400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 225 ms_handle_reset con 0x55f8eb67b400 session 0x55f8ee603a40
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 4198001 data_alloc: 301989888 data_used: 9785344
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 161693696 unmapped: 52535296 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:51.425998+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 225 handle_osd_map epochs [226,226], i have 225, src has [1,226]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 226 handle_osd_map epochs [226,226], i have 226, src has [1,226]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 226 handle_osd_map epochs [226,226], i have 226, src has [1,226]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 178667520 unmapped: 35561472 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:52.427039+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea76bc00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 226 heartbeat osd_stat(store_statfs(0x19e8f1000/0x0/0x1bfc00000, data 0x1856eea2/0x186fb000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,2,3,4,5] op hist [0,0,0,0,0,1,0,0,1])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 226 ms_handle_reset con 0x55f8ea76bc00 session 0x55f8ee603e00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 226 ms_handle_reset con 0x55f8eb67bc00 session 0x55f8ed302d20
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 226 handle_osd_map epochs [227,227], i have 226, src has [1,227]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 226 handle_osd_map epochs [227,227], i have 227, src has [1,227]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.8] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.12] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 162004992 unmapped: 52224000 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:53.427194+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.373954773s of 10.013221741s, submitted: 197
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 161996800 unmapped: 52232192 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 227 ms_handle_reset con 0x55f8eb67a400 session 0x55f8ed3023c0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:54.427410+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 227 ms_handle_reset con 0x55f8eea8e000 session 0x55f8ec49ed20
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e9d8e800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 227 ms_handle_reset con 0x55f8e9d8e800 session 0x55f8eb2434a0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea76bc00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 227 ms_handle_reset con 0x55f8ea76bc00 session 0x55f8eb242780
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 162086912 unmapped: 52142080 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:55.427606+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 227 handle_osd_map epochs [228,228], i have 227, src has [1,228]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.12] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 228 handle_osd_map epochs [228,228], i have 228, src has [1,228]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8eb67ac00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 228 ms_handle_reset con 0x55f8eb67ac00 session 0x55f8eb2be780
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.8] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e9d8e800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea76bc00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2311748 data_alloc: 301989888 data_used: 9809920
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 228 ms_handle_reset con 0x55f8e9d8e800 session 0x55f8eb2be1e0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 163225600 unmapped: 51003392 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:56.427804+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8eb67a400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 228 ms_handle_reset con 0x55f8eb67a400 session 0x55f8f004da40
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8eea8e000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _renew_subs
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 228 handle_osd_map epochs [229,229], i have 228, src has [1,229]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.12] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 229 handle_osd_map epochs [229,229], i have 229, src has [1,229]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 229 ms_handle_reset con 0x55f8eea8e000 session 0x55f8eb140d20
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.8] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 229 ms_handle_reset con 0x55f8ea76bc00 session 0x55f8ee6034a0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 163266560 unmapped: 50962432 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:57.428028+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8eb67b400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 163282944 unmapped: 50946048 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:58.428229+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 229 handle_osd_map epochs [229,230], i have 229, src has [1,230]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _renew_subs
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 229 handle_osd_map epochs [230,230], i have 230, src has [1,230]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 230 handle_osd_map epochs [230,230], i have 230, src has [1,230]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 230 ms_handle_reset con 0x55f8eb67b400 session 0x55f8eb52f680
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 230 heartbeat osd_stat(store_statfs(0x1b2077000/0x0/0x1bfc00000, data 0x4de5ade/0x4f74000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 230 ms_handle_reset con 0x55f8ece51c00 session 0x55f8ed3b2960
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e9d8e800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 230 ms_handle_reset con 0x55f8e9d8e800 session 0x55f8eb30e3c0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea76bc00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 163356672 unmapped: 50872320 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T09:59:59.428383+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 230 heartbeat osd_stat(store_statfs(0x1b2056000/0x0/0x1bfc00000, data 0x4e08768/0x4f97000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 230 ms_handle_reset con 0x55f8ec9ca800 session 0x55f8e9466000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 230 ms_handle_reset con 0x55f8e8f27400 session 0x55f8ed302b40
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8eb67a400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _renew_subs
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 230 handle_osd_map epochs [231,231], i have 230, src has [1,231]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 231 handle_osd_map epochs [231,231], i have 231, src has [1,231]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 231 handle_osd_map epochs [231,231], i have 231, src has [1,231]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 231 ms_handle_reset con 0x55f8ea76bc00 session 0x55f8ebd83a40
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 160604160 unmapped: 53624832 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:00.428509+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 231 heartbeat osd_stat(store_statfs(0x1b203b000/0x0/0x1bfc00000, data 0x4e21ce7/0x4fb2000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 231 ms_handle_reset con 0x55f8eb67a400 session 0x55f8e9466f00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2041724 data_alloc: 285212672 data_used: 3526656
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 160636928 unmapped: 53592064 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e8f27400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:01.428701+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 231 heartbeat osd_stat(store_statfs(0x1b4081000/0x0/0x1bfc00000, data 0x2dd907d/0x2f6a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,2,3,4,5] op hist [0,0,0,0,0,0,0,0,0,0,3])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 231 ms_handle_reset con 0x55f8e8f27400 session 0x55f8ed397e00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 160219136 unmapped: 54009856 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:02.429001+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e9d8e800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 160219136 unmapped: 54009856 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:03.429275+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 231 handle_osd_map epochs [232,232], i have 231, src has [1,232]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.3] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.12] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.677630424s of 10.058815956s, submitted: 406
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.8] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 232 heartbeat osd_stat(store_statfs(0x1b402a000/0x0/0x1bfc00000, data 0x2e311bc/0x2fc3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.6] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[5.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea76bc00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 161341440 unmapped: 52887552 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:04.429476+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc handle_mgr_map Got map version 51
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/1895764224,v1:172.18.0.106:6811/1895764224]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _renew_subs
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 232 handle_osd_map epochs [233,233], i have 232, src has [1,233]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 233 ms_handle_reset con 0x55f8e9d8e800 session 0x55f8eb203c20
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 233 handle_osd_map epochs [233,233], i have 233, src has [1,233]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 233 heartbeat osd_stat(store_statfs(0x1b4026000/0x0/0x1bfc00000, data 0x2e33588/0x2fc7000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 161480704 unmapped: 52748288 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:05.429673+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _renew_subs
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 233 handle_osd_map epochs [234,234], i have 233, src has [1,234]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ec9ca800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ece51c00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 234 handle_osd_map epochs [234,234], i have 234, src has [1,234]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 234 ms_handle_reset con 0x55f8ece51c00 session 0x55f8eb5501e0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 234 ms_handle_reset con 0x55f8ec9ca800 session 0x55f8ec7461e0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e8f27400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2054837 data_alloc: 285212672 data_used: 3555328
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 234 ms_handle_reset con 0x55f8e8f27400 session 0x55f8ec746780
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 161497088 unmapped: 52731904 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:06.429864+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 234 handle_osd_map epochs [234,235], i have 234, src has [1,235]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e9d8e800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 235 ms_handle_reset con 0x55f8ea76bc00 session 0x55f8ee6890e0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8eb67a400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 235 handle_osd_map epochs [235,235], i have 235, src has [1,235]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 235 heartbeat osd_stat(store_statfs(0x1b3ff3000/0x0/0x1bfc00000, data 0x2e63cfd/0x2ff8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,2,3,4,5] op hist [0,0,2])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 235 ms_handle_reset con 0x55f8e9d8e800 session 0x55f8ec746960
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ece51c00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 161603584 unmapped: 52625408 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 235 ms_handle_reset con 0x55f8eb67a400 session 0x55f8eb7c9c20
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:07.430051+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 235 ms_handle_reset con 0x55f8ece51c00 session 0x55f8ec746000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e8f27400
Feb 20 10:06:54 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.
Feb 20 10:06:54 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.
Feb 20 10:06:54 np0005625203.localdomain systemd[1]: Started /usr/bin/podman healthcheck run efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _renew_subs
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 235 handle_osd_map epochs [236,236], i have 235, src has [1,236]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 236 ms_handle_reset con 0x55f8e8f27400 session 0x55f8eb523a40
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e9d8e800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 236 handle_osd_map epochs [236,236], i have 236, src has [1,236]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 161873920 unmapped: 52355072 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:08.430307+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 236 handle_osd_map epochs [236,236], i have 236, src has [1,236]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 236 ms_handle_reset con 0x55f8e9d8e800 session 0x55f8eb542d20
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 161882112 unmapped: 52346880 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 236 handle_osd_map epochs [237,237], i have 236, src has [1,237]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:09.430447+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 237 handle_osd_map epochs [237,237], i have 237, src has [1,237]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea76bc00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 237 ms_handle_reset con 0x55f8ea76bc00 session 0x55f8eb781860
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8eb67a400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 162111488 unmapped: 52117504 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:10.430596+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 237 heartbeat osd_stat(store_statfs(0x1b3fb1000/0x0/0x1bfc00000, data 0x2ea5c2e/0x303b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 237 ms_handle_reset con 0x55f8eb67a400 session 0x55f8eb781c20
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2058087 data_alloc: 285212672 data_used: 3551232
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 162258944 unmapped: 51970048 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:11.430787+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8eea8e000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 237 ms_handle_reset con 0x55f8eea8e000 session 0x55f8eb7c9a40
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 162349056 unmapped: 51879936 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:12.430945+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e8f27400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 237 handle_osd_map epochs [237,238], i have 237, src has [1,238]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _renew_subs
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 237 handle_osd_map epochs [238,238], i have 238, src has [1,238]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 238 ms_handle_reset con 0x55f8e8f27400 session 0x55f8eb1c05a0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 162349056 unmapped: 51879936 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:13.431156+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 238 heartbeat osd_stat(store_statfs(0x1b3f7f000/0x0/0x1bfc00000, data 0x2ed8528/0x306e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e9d8e800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea76bc00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.310037613s of 10.130145073s, submitted: 194
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 238 ms_handle_reset con 0x55f8ea76bc00 session 0x55f8e9364b40
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 238 ms_handle_reset con 0x55f8e9d8e800 session 0x55f8ed8021e0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 162553856 unmapped: 51675136 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:14.431338+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 238 heartbeat osd_stat(store_statfs(0x1b3f60000/0x0/0x1bfc00000, data 0x2ef84b2/0x308e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8eb67a400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 238 ms_handle_reset con 0x55f8eb67a400 session 0x55f8f004d0e0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea76a000
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 163610624 unmapped: 50618368 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:15.431509+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 238 ms_handle_reset con 0x55f8ea76a000 session 0x55f8e9467680
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2073895 data_alloc: 285212672 data_used: 3563520
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 163610624 unmapped: 50618368 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e8f27400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:16.431711+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 238 ms_handle_reset con 0x55f8e8f27400 session 0x55f8eb7c9680
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e9d8e800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 238 heartbeat osd_stat(store_statfs(0x1b3f5e000/0x0/0x1bfc00000, data 0x2efaf84/0x3090000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 238 handle_osd_map epochs [238,239], i have 238, src has [1,239]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 239 ms_handle_reset con 0x55f8e9d8e800 session 0x55f8ecb3c1e0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 163913728 unmapped: 50315264 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:17.431907+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8ea76bc00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 239 ms_handle_reset con 0x55f8ea76bc00 session 0x55f8eb140780
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8eb67a400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 239 ms_handle_reset con 0x55f8eb67a400 session 0x55f8eb52fa40
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 239 handle_osd_map epochs [239,239], i have 239, src has [1,239]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 163962880 unmapped: 50266112 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:18.432074+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8eb954c00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 239 ms_handle_reset con 0x55f8eb954c00 session 0x55f8ec7461e0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e8f27400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 239 ms_handle_reset con 0x55f8e8f27400 session 0x55f8ec744d20
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 163962880 unmapped: 50266112 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:19.432279+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 163315712 unmapped: 50913280 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:20.434624+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2079204 data_alloc: 285212672 data_used: 3575808
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:21.437466+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 163315712 unmapped: 50913280 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 239 heartbeat osd_stat(store_statfs(0x1b3f30000/0x0/0x1bfc00000, data 0x2f28451/0x30bd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:22.437756+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 163414016 unmapped: 50814976 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 239 handle_osd_map epochs [240,240], i have 239, src has [1,240]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:23.437929+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 163414016 unmapped: 50814976 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 240 handle_osd_map epochs [240,240], i have 240, src has [1,240]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:24.438236+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 163602432 unmapped: 50626560 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.723815918s of 10.373005867s, submitted: 150
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 240 handle_osd_map epochs [240,240], i have 240, src has [1,240]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:25.439205+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 163610624 unmapped: 50618368 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 240 heartbeat osd_stat(store_statfs(0x1b3ef2000/0x0/0x1bfc00000, data 0x2f62f51/0x30fb000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2087536 data_alloc: 285212672 data_used: 3588096
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:26.439999+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 163708928 unmapped: 50520064 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e9d8e800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 240 ms_handle_reset con 0x55f8e9d8e800 session 0x55f8eb2be1e0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:27.440634+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 162488320 unmapped: 51740672 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 240 handle_osd_map epochs [240,241], i have 240, src has [1,241]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 241 handle_osd_map epochs [241,241], i have 241, src has [1,241]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 241 handle_osd_map epochs [241,241], i have 241, src has [1,241]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 241 handle_osd_map epochs [241,241], i have 241, src has [1,241]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:28.440800+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 163545088 unmapped: 50683904 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:29.441135+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 241 handle_osd_map epochs [242,242], i have 241, src has [1,242]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 163717120 unmapped: 50511872 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 242 handle_osd_map epochs [242,242], i have 242, src has [1,242]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:30.441304+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 164765696 unmapped: 49463296 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 242 handle_osd_map epochs [243,243], i have 242, src has [1,243]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 243 heartbeat osd_stat(store_statfs(0x1b3e82000/0x0/0x1bfc00000, data 0x2fcbc46/0x316a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2102584 data_alloc: 285212672 data_used: 3600384
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:31.441516+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 164765696 unmapped: 49463296 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8eb67a400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 243 ms_handle_reset con 0x55f8eb67a400 session 0x55f8eb2be780
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:32.441944+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 164765696 unmapped: 49463296 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 243 handle_osd_map epochs [243,244], i have 243, src has [1,244]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:33.442329+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 164765696 unmapped: 49463296 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8eb955800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 244 ms_handle_reset con 0x55f8eb955800 session 0x55f8eb242780
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:34.442566+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 164765696 unmapped: 49463296 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.059086800s of 10.410729408s, submitted: 82
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 244 handle_osd_map epochs [244,245], i have 244, src has [1,245]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 245 handle_osd_map epochs [245,245], i have 245, src has [1,245]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 245 handle_osd_map epochs [245,245], i have 245, src has [1,245]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:35.443003+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 164798464 unmapped: 49430528 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8eb955c00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 245 heartbeat osd_stat(store_statfs(0x1b3e26000/0x0/0x1bfc00000, data 0x3025649/0x31c6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 245 ms_handle_reset con 0x55f8eb955c00 session 0x55f8eb2434a0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2116142 data_alloc: 285212672 data_used: 3624960
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:36.443526+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 164798464 unmapped: 49430528 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e8f27400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 245 handle_osd_map epochs [245,246], i have 245, src has [1,246]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:37.443721+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 165961728 unmapped: 48267264 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 246 handle_osd_map epochs [246,247], i have 246, src has [1,247]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc handle_mgr_map Got map version 52
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/1895764224,v1:172.18.0.106:6811/1895764224]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e9d8e800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:38.443939+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 166199296 unmapped: 48029696 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 247 ms_handle_reset con 0x55f8e9d8e800 session 0x55f8ee688960
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 247 handle_osd_map epochs [247,247], i have 247, src has [1,247]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:39.444194+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 166199296 unmapped: 48029696 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:40.444374+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 166199296 unmapped: 48029696 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2126364 data_alloc: 285212672 data_used: 3637248
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:41.444567+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 166199296 unmapped: 48029696 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 247 heartbeat osd_stat(store_statfs(0x1b3dce000/0x0/0x1bfc00000, data 0x307a287/0x3220000, compress 0x0/0x0/0x0, omap 0x649, meta 0x8c0f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:42.444829+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8eb67a400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 166199296 unmapped: 48029696 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 247 ms_handle_reset con 0x55f8eb67a400 session 0x55f8e95405a0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 247 handle_osd_map epochs [247,248], i have 247, src has [1,248]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:43.444989+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 166232064 unmapped: 47996928 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 248 handle_osd_map epochs [248,248], i have 248, src has [1,248]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:44.445152+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 166240256 unmapped: 47988736 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8eb955800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _renew_subs
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 248 handle_osd_map epochs [249,249], i have 248, src has [1,249]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 249 ms_handle_reset con 0x55f8eb955800 session 0x55f8ed3b3680
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:45.445349+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 166240256 unmapped: 47988736 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 249 handle_osd_map epochs [249,250], i have 249, src has [1,250]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.064092636s of 10.817487717s, submitted: 135
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 250 handle_osd_map epochs [249,250], i have 250, src has [1,250]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8eb955c00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:46.445507+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2146676 data_alloc: 285212672 data_used: 3649536
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 250 ms_handle_reset con 0x55f8eb955c00 session 0x55f8ed3b3c20
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 166305792 unmapped: 47923200 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 250 heartbeat osd_stat(store_statfs(0x1b3974000/0x0/0x1bfc00000, data 0x30cd1c4/0x3277000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:47.446901+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 166412288 unmapped: 47816704 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:48.447116+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 166412288 unmapped: 47816704 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:49.447315+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 167469056 unmapped: 46759936 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 250 heartbeat osd_stat(store_statfs(0x1b3946000/0x0/0x1bfc00000, data 0x30fe5ec/0x32a8000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:50.447498+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 167682048 unmapped: 46546944 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:51.447714+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2148224 data_alloc: 285212672 data_used: 3649536
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 167788544 unmapped: 46440448 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:52.447961+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 167788544 unmapped: 46440448 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:53.448374+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 250 handle_osd_map epochs [251,251], i have 250, src has [1,251]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[3.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 168001536 unmapped: 46227456 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 251 heartbeat osd_stat(store_statfs(0x1b3919000/0x0/0x1bfc00000, data 0x3129fe8/0x32d5000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:54.448693+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 168017920 unmapped: 46211072 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:55.448998+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 168026112 unmapped: 46202880 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.950773239s of 10.206655502s, submitted: 79
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:56.449531+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2150856 data_alloc: 285212672 data_used: 3661824
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 168108032 unmapped: 46120960 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:57.449810+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 168108032 unmapped: 46120960 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:58.450365+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 168116224 unmapped: 46112768 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:00:59.450682+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 168124416 unmapped: 46104576 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 251 heartbeat osd_stat(store_statfs(0x1b38c6000/0x0/0x1bfc00000, data 0x317ad0f/0x3328000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:00.450972+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 168239104 unmapped: 45989888 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:01.451346+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2159284 data_alloc: 285212672 data_used: 3661824
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 168337408 unmapped: 45891584 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:02.451541+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 168337408 unmapped: 45891584 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _renew_subs
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 251 handle_osd_map epochs [252,252], i have 251, src has [1,252]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 252 handle_osd_map epochs [252,252], i have 252, src has [1,252]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 252 heartbeat osd_stat(store_statfs(0x1b38a6000/0x0/0x1bfc00000, data 0x319bd18/0x3348000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:03.451721+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 168345600 unmapped: 45883392 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:04.452023+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 168345600 unmapped: 45883392 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:05.452267+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 168345600 unmapped: 45883392 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:06.452468+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2161004 data_alloc: 285212672 data_used: 3674112
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 168345600 unmapped: 45883392 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.878138542s of 11.087326050s, submitted: 66
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:07.452676+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 168353792 unmapped: 45875200 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 252 handle_osd_map epochs [252,253], i have 252, src has [1,253]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:08.452901+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 168361984 unmapped: 45867008 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 253 heartbeat osd_stat(store_statfs(0x1b389c000/0x0/0x1bfc00000, data 0x31a03d2/0x3351000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 253 handle_osd_map epochs [253,254], i have 253, src has [1,254]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:09.453036+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 168370176 unmapped: 45858816 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:10.453189+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 168370176 unmapped: 45858816 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 254 heartbeat osd_stat(store_statfs(0x1b3899000/0x0/0x1bfc00000, data 0x31a266d/0x3353000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:11.453381+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2171572 data_alloc: 285212672 data_used: 3686400
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 168378368 unmapped: 45850624 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:12.453571+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 168378368 unmapped: 45850624 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 254 heartbeat osd_stat(store_statfs(0x1b389a000/0x0/0x1bfc00000, data 0x31a2708/0x3354000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 254 handle_osd_map epochs [255,255], i have 254, src has [1,255]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:13.453764+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 169451520 unmapped: 44777472 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:14.453962+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 169451520 unmapped: 44777472 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:15.454228+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 168402944 unmapped: 45826048 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:16.454455+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2177244 data_alloc: 285212672 data_used: 3698688
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 255 heartbeat osd_stat(store_statfs(0x1b3894000/0x0/0x1bfc00000, data 0x31a49ca/0x3359000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 168402944 unmapped: 45826048 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:17.454676+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 168402944 unmapped: 45826048 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.719186783s of 10.995144844s, submitted: 85
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:18.454995+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 168411136 unmapped: 45817856 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:19.455181+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 168411136 unmapped: 45817856 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:20.455380+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 168411136 unmapped: 45817856 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:21.455567+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2177604 data_alloc: 285212672 data_used: 3698688
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 168411136 unmapped: 45817856 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:22.455744+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 255 heartbeat osd_stat(store_statfs(0x1b3893000/0x0/0x1bfc00000, data 0x31a4c4b/0x335b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 168411136 unmapped: 45817856 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:23.455931+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 168411136 unmapped: 45817856 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:24.456095+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 168419328 unmapped: 45809664 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc handle_mgr_map Got map version 53
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/1895764224,v1:172.18.0.106:6811/1895764224]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:25.456615+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 168443904 unmapped: 45785088 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:26.457125+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2185692 data_alloc: 285212672 data_used: 3698688
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 255 heartbeat osd_stat(store_statfs(0x1b3892000/0x0/0x1bfc00000, data 0x31a4bbe/0x335c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 168460288 unmapped: 45768704 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:27.457537+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 168460288 unmapped: 45768704 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:28.457955+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 168460288 unmapped: 45768704 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.948318481s of 11.106399536s, submitted: 23
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:29.458353+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 168460288 unmapped: 45768704 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:30.458715+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 168460288 unmapped: 45768704 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 255 heartbeat osd_stat(store_statfs(0x1b3891000/0x0/0x1bfc00000, data 0x31a4bbe/0x335c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:31.459018+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2187496 data_alloc: 285212672 data_used: 3698688
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 168460288 unmapped: 45768704 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:32.459328+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 168460288 unmapped: 45768704 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:33.459544+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 168460288 unmapped: 45768704 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:34.459752+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 168476672 unmapped: 45752320 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:35.459932+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 168476672 unmapped: 45752320 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:36.460143+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 255 heartbeat osd_stat(store_statfs(0x1b3892000/0x0/0x1bfc00000, data 0x31a4bbe/0x335c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2187496 data_alloc: 285212672 data_used: 3698688
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 168476672 unmapped: 45752320 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:37.460313+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 169541632 unmapped: 44687360 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:38.460551+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 169525248 unmapped: 44703744 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:39.460727+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 169525248 unmapped: 44703744 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.780779839s of 10.902603149s, submitted: 30
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:40.460971+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 169533440 unmapped: 44695552 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 255 heartbeat osd_stat(store_statfs(0x1b3848000/0x0/0x1bfc00000, data 0x31eefcd/0x33a6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:41.461130+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2202416 data_alloc: 285212672 data_used: 3698688
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 169656320 unmapped: 44572672 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:42.461290+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 169672704 unmapped: 44556288 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:43.461520+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 169672704 unmapped: 44556288 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:44.461738+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 169689088 unmapped: 44539904 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:45.461969+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 170868736 unmapped: 43360256 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:46.462180+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2211952 data_alloc: 285212672 data_used: 3698688
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 170868736 unmapped: 43360256 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 255 handle_osd_map epochs [255,256], i have 255, src has [1,256]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 255 heartbeat osd_stat(store_statfs(0x1b37a1000/0x0/0x1bfc00000, data 0x32946aa/0x344a000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:47.462327+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 171188224 unmapped: 43040768 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 256 handle_osd_map epochs [256,256], i have 256, src has [1,256]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:48.462549+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 171245568 unmapped: 42983424 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:49.462709+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 171245568 unmapped: 42983424 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 256 heartbeat osd_stat(store_statfs(0x1b375e000/0x0/0x1bfc00000, data 0x32d64c7/0x348e000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:50.462953+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 171376640 unmapped: 42852352 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.687679291s of 11.304363251s, submitted: 156
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:51.463138+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2221930 data_alloc: 285212672 data_used: 3710976
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 171384832 unmapped: 42844160 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:52.463303+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 171384832 unmapped: 42844160 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _renew_subs
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 256 handle_osd_map epochs [257,257], i have 256, src has [1,257]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:53.463529+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 170508288 unmapped: 43720704 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:54.463718+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 257 heartbeat osd_stat(store_statfs(0x1b370a000/0x0/0x1bfc00000, data 0x3329276/0x34e3000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [0,0,1])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 171565056 unmapped: 42663936 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:55.463894+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 171573248 unmapped: 42655744 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:56.464082+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2223586 data_alloc: 285212672 data_used: 3723264
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 171573248 unmapped: 42655744 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:57.464282+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 171573248 unmapped: 42655744 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:58.464449+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 171597824 unmapped: 42631168 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:01:59.464614+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 172048384 unmapped: 42180608 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 257 heartbeat osd_stat(store_statfs(0x1b3664000/0x0/0x1bfc00000, data 0x33ccd73/0x3587000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:00.464810+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 172048384 unmapped: 42180608 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:01.464958+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.712913513s of 10.110676765s, submitted: 100
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2236640 data_alloc: 285212672 data_used: 3723264
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 172154880 unmapped: 42074112 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:02.465139+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 173318144 unmapped: 40910848 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:03.465304+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 173326336 unmapped: 40902656 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:04.465501+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 173441024 unmapped: 40787968 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:05.465738+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 172646400 unmapped: 41582592 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 257 heartbeat osd_stat(store_statfs(0x1b35f7000/0x0/0x1bfc00000, data 0x343cc33/0x35f6000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:06.465931+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2237992 data_alloc: 285212672 data_used: 3723264
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 172662784 unmapped: 41566208 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:07.466101+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 172744704 unmapped: 41484288 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:08.466279+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 172744704 unmapped: 41484288 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:09.466474+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 172744704 unmapped: 41484288 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 257 heartbeat osd_stat(store_statfs(0x1b35ae000/0x0/0x1bfc00000, data 0x3486dcf/0x3640000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:10.466664+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 257 heartbeat osd_stat(store_statfs(0x1b35ae000/0x0/0x1bfc00000, data 0x3486dcf/0x3640000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 172744704 unmapped: 41484288 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:11.466852+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2244094 data_alloc: 285212672 data_used: 3723264
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 172744704 unmapped: 41484288 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.286856651s of 10.505909920s, submitted: 47
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:12.467037+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 172769280 unmapped: 41459712 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:13.467194+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 172892160 unmapped: 41336832 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:14.467344+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 173342720 unmapped: 40886272 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 257 heartbeat osd_stat(store_statfs(0x1b352a000/0x0/0x1bfc00000, data 0x350bcab/0x36c4000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:15.467549+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 174563328 unmapped: 39665664 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:16.467705+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2255296 data_alloc: 285212672 data_used: 3723264
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 173498368 unmapped: 40730624 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:17.467933+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 173842432 unmapped: 40386560 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 257 handle_osd_map epochs [258,258], i have 257, src has [1,258]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 258 handle_osd_map epochs [258,258], i have 258, src has [1,258]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:18.468058+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 258 handle_osd_map epochs [259,259], i have 258, src has [1,259]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 173916160 unmapped: 40312832 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:19.468216+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 174063616 unmapped: 40165376 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 259 heartbeat osd_stat(store_statfs(0x1b34bf000/0x0/0x1bfc00000, data 0x3572b47/0x372d000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:20.468393+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 172900352 unmapped: 41328640 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:21.468546+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 259 heartbeat osd_stat(store_statfs(0x1b3493000/0x0/0x1bfc00000, data 0x35a07eb/0x375b000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2260952 data_alloc: 285212672 data_used: 3735552
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 173031424 unmapped: 41197568 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.494126320s of 10.002210617s, submitted: 223
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:22.469082+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 259 heartbeat osd_stat(store_statfs(0x1b3458000/0x0/0x1bfc00000, data 0x35dbd0b/0x3796000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 173129728 unmapped: 41099264 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _renew_subs
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 259 handle_osd_map epochs [260,260], i have 259, src has [1,260]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 260 handle_osd_map epochs [260,260], i have 260, src has [1,260]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:23.469235+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 174366720 unmapped: 39862272 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _renew_subs
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:24.470169+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 260 handle_osd_map epochs [261,261], i have 260, src has [1,261]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 174383104 unmapped: 39845888 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc handle_mgr_map Got map version 54
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/1895764224,v1:172.18.0.106:6811/1895764224]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:25.470377+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 261 handle_osd_map epochs [261,262], i have 261, src has [1,262]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 174399488 unmapped: 39829504 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 262 heartbeat osd_stat(store_statfs(0x1b33ff000/0x0/0x1bfc00000, data 0x362ab52/0x37ec000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [0,0,0,0,1])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:26.470545+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2283430 data_alloc: 285212672 data_used: 3747840
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 174563328 unmapped: 39665664 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _renew_subs
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 262 handle_osd_map epochs [263,263], i have 262, src has [1,263]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 263 handle_osd_map epochs [263,263], i have 263, src has [1,263]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:27.470727+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 174833664 unmapped: 39395328 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 263 handle_osd_map epochs [263,264], i have 263, src has [1,264]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:28.470910+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 175939584 unmapped: 38289408 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 264 handle_osd_map epochs [264,264], i have 264, src has [1,264]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:29.471053+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 264 heartbeat osd_stat(store_statfs(0x1b339b000/0x0/0x1bfc00000, data 0x368e941/0x3851000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 176029696 unmapped: 38199296 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:30.471265+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 176037888 unmapped: 38191104 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:31.471453+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2300028 data_alloc: 285212672 data_used: 3760128
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.423227310s of 10.004524231s, submitted: 195
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 176324608 unmapped: 37904384 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:32.471694+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 264 handle_osd_map epochs [265,265], i have 264, src has [1,265]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 176332800 unmapped: 37896192 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 265 handle_osd_map epochs [265,266], i have 265, src has [1,266]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:33.471854+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 266 heartbeat osd_stat(store_statfs(0x1b3333000/0x0/0x1bfc00000, data 0x36f4d17/0x38ba000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 176513024 unmapped: 37715968 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 266 heartbeat osd_stat(store_statfs(0x1b331f000/0x0/0x1bfc00000, data 0x3706bde/0x38cd000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:34.472012+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 176521216 unmapped: 37707776 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:35.472255+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:36.472406+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 176521216 unmapped: 37707776 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2308192 data_alloc: 285212672 data_used: 3772416
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:37.472582+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 176529408 unmapped: 37699584 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 266 heartbeat osd_stat(store_statfs(0x1b32fd000/0x0/0x1bfc00000, data 0x372a95e/0x38f1000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _renew_subs
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 266 handle_osd_map epochs [267,267], i have 266, src has [1,267]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:38.472749+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 177627136 unmapped: 36601856 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:39.472973+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 177635328 unmapped: 36593664 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 267 heartbeat osd_stat(store_statfs(0x1b32ca000/0x0/0x1bfc00000, data 0x375bff7/0x3924000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:40.473130+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 177635328 unmapped: 36593664 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 267 heartbeat osd_stat(store_statfs(0x1b32ca000/0x0/0x1bfc00000, data 0x375bff7/0x3924000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:41.473336+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 177635328 unmapped: 36593664 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2315510 data_alloc: 285212672 data_used: 3784704
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.711993217s of 10.004926682s, submitted: 98
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 267 handle_osd_map epochs [267,268], i have 267, src has [1,268]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:42.473473+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 177750016 unmapped: 36478976 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:43.473612+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 176971776 unmapped: 37257216 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:44.473811+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 176971776 unmapped: 37257216 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:45.474066+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 177217536 unmapped: 37011456 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:46.474249+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 177225728 unmapped: 37003264 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 268 heartbeat osd_stat(store_statfs(0x1b3243000/0x0/0x1bfc00000, data 0x37e0551/0x39ab000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2328078 data_alloc: 285212672 data_used: 3796992
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:47.474391+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 268 handle_osd_map epochs [269,269], i have 268, src has [1,269]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 177758208 unmapped: 36470784 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 269 handle_osd_map epochs [269,270], i have 269, src has [1,270]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:48.474587+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 177946624 unmapped: 36282368 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:49.474780+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 178216960 unmapped: 36012032 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:50.475013+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 178216960 unmapped: 36012032 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 270 heartbeat osd_stat(store_statfs(0x1b31be000/0x0/0x1bfc00000, data 0x3860dad/0x3a2f000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:51.475199+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 178233344 unmapped: 35995648 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2337600 data_alloc: 285212672 data_used: 3825664
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.660231590s of 10.004627228s, submitted: 120
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:52.475380+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 177078272 unmapped: 37150720 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _renew_subs
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 270 handle_osd_map epochs [271,271], i have 270, src has [1,271]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:53.475519+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 177086464 unmapped: 37142528 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 271 heartbeat osd_stat(store_statfs(0x1b31a2000/0x0/0x1bfc00000, data 0x387e42b/0x3a4c000, compress 0x0/0x0/0x0, omap 0x649, meta 0x900f9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 271 ms_handle_reset con 0x55f8e8f27400 session 0x55f8ed3023c0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:54.475750+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc handle_mgr_map Got map version 55
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/1895764224,v1:172.18.0.106:6811/1895764224]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 177364992 unmapped: 36864000 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:55.475948+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 177627136 unmapped: 36601856 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 271 ms_handle_reset con 0x55f8eb659000 session 0x55f8eb780f00
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e8f27800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:56.476114+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 177627136 unmapped: 36601856 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2348010 data_alloc: 285212672 data_used: 3842048
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:57.476272+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 178675712 unmapped: 35553280 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 271 heartbeat osd_stat(store_statfs(0x1b1f9c000/0x0/0x1bfc00000, data 0x38e2b5e/0x3ab2000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:58.476437+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 178675712 unmapped: 35553280 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:02:59.476583+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 178896896 unmapped: 35332096 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:00.476732+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 179118080 unmapped: 35110912 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 271 heartbeat osd_stat(store_statfs(0x1b1f64000/0x0/0x1bfc00000, data 0x391a2f5/0x3aea000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:01.476972+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 179118080 unmapped: 35110912 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2352860 data_alloc: 285212672 data_used: 3842048
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.791482925s of 10.019005775s, submitted: 336
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:02.477174+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 179216384 unmapped: 35012608 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:03.477348+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 271 heartbeat osd_stat(store_statfs(0x1b1f2c000/0x0/0x1bfc00000, data 0x395275c/0x3b22000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 179216384 unmapped: 35012608 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:04.477605+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 179380224 unmapped: 34848768 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:05.477822+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 179453952 unmapped: 34775040 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:06.477986+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 179453952 unmapped: 34775040 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2357894 data_alloc: 285212672 data_used: 3842048
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 271 heartbeat osd_stat(store_statfs(0x1b1eed000/0x0/0x1bfc00000, data 0x3992109/0x3b61000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _renew_subs
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 271 handle_osd_map epochs [272,272], i have 271, src has [1,272]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:07.478171+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 179568640 unmapped: 34660352 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:08.478365+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 179568640 unmapped: 34660352 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:09.478512+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 179707904 unmapped: 34521088 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 272 heartbeat osd_stat(store_statfs(0x1b1e77000/0x0/0x1bfc00000, data 0x3a06a10/0x3bd7000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:10.478699+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 179707904 unmapped: 34521088 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:11.478837+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 179707904 unmapped: 34521088 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2368756 data_alloc: 285212672 data_used: 3854336
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.000362396s of 10.326949120s, submitted: 78
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:12.478999+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 180740096 unmapped: 33488896 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 272 handle_osd_map epochs [272,273], i have 272, src has [1,273]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:13.479196+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 180748288 unmapped: 33480704 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 273 handle_osd_map epochs [273,273], i have 273, src has [1,273]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:14.479400+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 180748288 unmapped: 33480704 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:15.479634+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 180879360 unmapped: 33349632 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 273 heartbeat osd_stat(store_statfs(0x1b1e33000/0x0/0x1bfc00000, data 0x3a47842/0x3c19000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [0,0,0,0,0,0,0,0,1])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:16.479828+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 180879360 unmapped: 33349632 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2374364 data_alloc: 285212672 data_used: 3866624
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:17.479983+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 180879360 unmapped: 33349632 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 273 handle_osd_map epochs [274,274], i have 273, src has [1,274]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 274 handle_osd_map epochs [274,274], i have 274, src has [1,274]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:18.480195+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181002240 unmapped: 33226752 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:19.480347+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181002240 unmapped: 33226752 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:20.480496+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181002240 unmapped: 33226752 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:21.480648+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181002240 unmapped: 33226752 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 274 heartbeat osd_stat(store_statfs(0x1b1df1000/0x0/0x1bfc00000, data 0x3a889ad/0x3c5c000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2378250 data_alloc: 285212672 data_used: 3878912
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:22.480805+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181051392 unmapped: 33177600 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 274 handle_osd_map epochs [275,275], i have 274, src has [1,275]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.742454529s of 10.916521072s, submitted: 90
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.18] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1e] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.16] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.13] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1f] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.c] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[6.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:23.480990+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181059584 unmapped: 33169408 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:24.481183+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181059584 unmapped: 33169408 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:25.481346+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181059584 unmapped: 33169408 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 275 heartbeat osd_stat(store_statfs(0x1b1dcd000/0x0/0x1bfc00000, data 0x3aab6fc/0x3c80000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:26.481532+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181059584 unmapped: 33169408 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2382452 data_alloc: 285212672 data_used: 3891200
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:27.481721+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181059584 unmapped: 33169408 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:28.481945+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181059584 unmapped: 33169408 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:29.482118+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181059584 unmapped: 33169408 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:30.482274+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181059584 unmapped: 33169408 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:31.482452+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181075968 unmapped: 33153024 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2381088 data_alloc: 285212672 data_used: 3891200
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 275 heartbeat osd_stat(store_statfs(0x1b1dcd000/0x0/0x1bfc00000, data 0x3aab6fc/0x3c80000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:32.482691+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181075968 unmapped: 33153024 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 275 heartbeat osd_stat(store_statfs(0x1b1dc8000/0x0/0x1bfc00000, data 0x3ab1070/0x3c86000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:33.482823+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181075968 unmapped: 33153024 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:34.483014+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181092352 unmapped: 33136640 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 275 heartbeat osd_stat(store_statfs(0x1b1dc8000/0x0/0x1bfc00000, data 0x3ab1070/0x3c86000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:35.483264+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181092352 unmapped: 33136640 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:36.483424+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181092352 unmapped: 33136640 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2381088 data_alloc: 285212672 data_used: 3891200
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:37.483593+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181092352 unmapped: 33136640 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:38.483768+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 275 heartbeat osd_stat(store_statfs(0x1b1dc8000/0x0/0x1bfc00000, data 0x3ab1070/0x3c86000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181092352 unmapped: 33136640 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:39.483962+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181100544 unmapped: 33128448 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:40.484149+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181100544 unmapped: 33128448 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:41.484337+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181100544 unmapped: 33128448 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2381088 data_alloc: 285212672 data_used: 3891200
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:42.484496+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181108736 unmapped: 33120256 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 275 heartbeat osd_stat(store_statfs(0x1b1dc8000/0x0/0x1bfc00000, data 0x3ab1070/0x3c86000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:43.484638+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181108736 unmapped: 33120256 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:44.484805+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181108736 unmapped: 33120256 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:45.485002+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181108736 unmapped: 33120256 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 275 heartbeat osd_stat(store_statfs(0x1b1dc8000/0x0/0x1bfc00000, data 0x3ab1070/0x3c86000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:46.485162+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181108736 unmapped: 33120256 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2381088 data_alloc: 285212672 data_used: 3891200
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:47.485346+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181116928 unmapped: 33112064 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:48.485533+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181116928 unmapped: 33112064 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:49.485726+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181116928 unmapped: 33112064 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:50.485957+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181125120 unmapped: 33103872 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:51.486133+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 275 heartbeat osd_stat(store_statfs(0x1b1dc8000/0x0/0x1bfc00000, data 0x3ab1070/0x3c86000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181125120 unmapped: 33103872 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2381088 data_alloc: 285212672 data_used: 3891200
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:52.486280+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181125120 unmapped: 33103872 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:53.486442+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181125120 unmapped: 33103872 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:54.486641+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181125120 unmapped: 33103872 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:55.486906+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181149696 unmapped: 33079296 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:56.487111+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181149696 unmapped: 33079296 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 275 heartbeat osd_stat(store_statfs(0x1b1dc8000/0x0/0x1bfc00000, data 0x3ab1070/0x3c86000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2381088 data_alloc: 285212672 data_used: 3891200
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:57.487278+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181149696 unmapped: 33079296 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:58.487492+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181157888 unmapped: 33071104 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:03:59.487618+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181157888 unmapped: 33071104 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:00.487752+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181157888 unmapped: 33071104 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:01.487950+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181157888 unmapped: 33071104 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2381088 data_alloc: 285212672 data_used: 3891200
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:02.488092+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 275 heartbeat osd_stat(store_statfs(0x1b1dc8000/0x0/0x1bfc00000, data 0x3ab1070/0x3c86000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181157888 unmapped: 33071104 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:03.488314+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181166080 unmapped: 33062912 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8e9d8e800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 40.825126648s of 40.859657288s, submitted: 20
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:04.488528+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181166080 unmapped: 33062912 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:05.488776+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 275 heartbeat osd_stat(store_statfs(0x1b15c7000/0x0/0x1bfc00000, data 0x42b1080/0x4487000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181559296 unmapped: 32669696 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 275 handle_osd_map epochs [275,276], i have 275, src has [1,276]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 276 handle_osd_map epochs [276,276], i have 276, src has [1,276]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 276 ms_handle_reset con 0x55f8e9d8e800 session 0x55f8eb242b40
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:06.488951+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: handle_auth_request added challenge on 0x55f8eb955800
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181575680 unmapped: 32653312 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 276 handle_osd_map epochs [277,277], i have 276, src has [1,277]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2446467 data_alloc: 285212672 data_used: 3903488
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 277 handle_osd_map epochs [276,277], i have 277, src has [1,277]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:07.489096+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 277 ms_handle_reset con 0x55f8eb955800 session 0x55f8ed3021e0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181592064 unmapped: 32636928 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:08.489232+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181592064 unmapped: 32636928 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:09.935027+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181592064 unmapped: 32636928 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:10.935246+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181592064 unmapped: 32636928 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 277 heartbeat osd_stat(store_statfs(0x1b1dbe000/0x0/0x1bfc00000, data 0x3ab5794/0x3c8e000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:11.935477+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181608448 unmapped: 32620544 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2391362 data_alloc: 285212672 data_used: 3903488
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:12.935686+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181616640 unmapped: 32612352 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 277 handle_osd_map epochs [277,278], i have 277, src has [1,278]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 277 handle_osd_map epochs [278,278], i have 278, src has [1,278]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1d] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.1] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.2] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.19] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.15] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.14] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.9] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.17] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.b] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 scrub-queue::remove_from_osd_queue removing pg[4.4] failed. State was: not registered w/ OSD
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:13.936063+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181624832 unmapped: 32604160 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:14.936219+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181624832 unmapped: 32604160 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 278 heartbeat osd_stat(store_statfs(0x1b1dbb000/0x0/0x1bfc00000, data 0x3ab79e2/0x3c92000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:15.936481+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181624832 unmapped: 32604160 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:16.936676+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181624832 unmapped: 32604160 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2393692 data_alloc: 285212672 data_used: 3903488
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:17.936919+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181624832 unmapped: 32604160 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:18.937171+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181633024 unmapped: 32595968 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 278 heartbeat osd_stat(store_statfs(0x1b1dbb000/0x0/0x1bfc00000, data 0x3ab79e2/0x3c92000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:19.937361+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181633024 unmapped: 32595968 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:20.937593+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181633024 unmapped: 32595968 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:21.937744+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181633024 unmapped: 32595968 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 278 heartbeat osd_stat(store_statfs(0x1b1dbb000/0x0/0x1bfc00000, data 0x3ab79e2/0x3c92000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2393692 data_alloc: 285212672 data_used: 3903488
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:22.937952+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181633024 unmapped: 32595968 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.085924149s of 19.315191269s, submitted: 63
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 278 ms_handle_reset con 0x55f8ea76a400 session 0x55f8eb5330e0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:23.938111+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 182009856 unmapped: 32219136 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc handle_mgr_map Got map version 56
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: mgrc handle_mgr_map Active mgr is now [v2:172.18.0.106:6810/1895764224,v1:172.18.0.106:6811/1895764224]
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:24.938264+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181755904 unmapped: 32473088 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:25.938470+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181755904 unmapped: 32473088 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:26.938631+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181764096 unmapped: 32464896 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2393212 data_alloc: 285212672 data_used: 3903488
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:27.939046+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 278 heartbeat osd_stat(store_statfs(0x1b1dbc000/0x0/0x1bfc00000, data 0x3ab7bf5/0x3c92000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181764096 unmapped: 32464896 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:28.939255+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181764096 unmapped: 32464896 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:29.939459+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181764096 unmapped: 32464896 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 278 heartbeat osd_stat(store_statfs(0x1b1dbc000/0x0/0x1bfc00000, data 0x3ab7bf5/0x3c92000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:30.939675+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181764096 unmapped: 32464896 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:31.939900+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 278 heartbeat osd_stat(store_statfs(0x1b1dbc000/0x0/0x1bfc00000, data 0x3ab7bf5/0x3c92000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181764096 unmapped: 32464896 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2393212 data_alloc: 285212672 data_used: 3903488
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:32.940064+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181764096 unmapped: 32464896 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:33.940219+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181764096 unmapped: 32464896 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 278 heartbeat osd_stat(store_statfs(0x1b1dbc000/0x0/0x1bfc00000, data 0x3ab7bf5/0x3c92000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:34.940382+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181764096 unmapped: 32464896 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:35.940558+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181764096 unmapped: 32464896 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:36.940756+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181764096 unmapped: 32464896 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2393212 data_alloc: 285212672 data_used: 3903488
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:37.940940+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181764096 unmapped: 32464896 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 278 heartbeat osd_stat(store_statfs(0x1b1dbc000/0x0/0x1bfc00000, data 0x3ab7bf5/0x3c92000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:38.941091+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181764096 unmapped: 32464896 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:39.941261+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181764096 unmapped: 32464896 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:40.941447+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181764096 unmapped: 32464896 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:41.941623+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181772288 unmapped: 32456704 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2393212 data_alloc: 285212672 data_used: 3903488
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:42.941829+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181772288 unmapped: 32456704 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 278 heartbeat osd_stat(store_statfs(0x1b1dbc000/0x0/0x1bfc00000, data 0x3ab7bf5/0x3c92000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:43.941977+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181772288 unmapped: 32456704 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:44.942116+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181772288 unmapped: 32456704 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:45.942251+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181772288 unmapped: 32456704 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:46.942426+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181772288 unmapped: 32456704 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2393212 data_alloc: 285212672 data_used: 3903488
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:47.942533+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181772288 unmapped: 32456704 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:48.942680+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 278 heartbeat osd_stat(store_statfs(0x1b1dbc000/0x0/0x1bfc00000, data 0x3ab7bf5/0x3c92000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181772288 unmapped: 32456704 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:49.942855+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181772288 unmapped: 32456704 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:50.943026+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181788672 unmapped: 32440320 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 278 heartbeat osd_stat(store_statfs(0x1b1dbc000/0x0/0x1bfc00000, data 0x3ab7bf5/0x3c92000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:51.943174+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181788672 unmapped: 32440320 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 278 heartbeat osd_stat(store_statfs(0x1b1dbc000/0x0/0x1bfc00000, data 0x3ab7bf5/0x3c92000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2393212 data_alloc: 285212672 data_used: 3903488
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:52.943321+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181788672 unmapped: 32440320 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:53.943471+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181788672 unmapped: 32440320 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:54.943652+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181796864 unmapped: 32432128 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 278 heartbeat osd_stat(store_statfs(0x1b1dbc000/0x0/0x1bfc00000, data 0x3ab7bf5/0x3c92000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:55.943939+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181796864 unmapped: 32432128 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:56.944096+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181796864 unmapped: 32432128 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2393212 data_alloc: 285212672 data_used: 3903488
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:57.944277+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181796864 unmapped: 32432128 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:58.944458+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181796864 unmapped: 32432128 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:04:59.944600+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181805056 unmapped: 32423936 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:00.944798+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181805056 unmapped: 32423936 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 278 heartbeat osd_stat(store_statfs(0x1b1dbc000/0x0/0x1bfc00000, data 0x3ab7bf5/0x3c92000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:01.944977+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181813248 unmapped: 32415744 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 278 heartbeat osd_stat(store_statfs(0x1b1dbc000/0x0/0x1bfc00000, data 0x3ab7bf5/0x3c92000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2393212 data_alloc: 285212672 data_used: 3903488
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:02.945100+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181813248 unmapped: 32415744 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:03.945308+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181813248 unmapped: 32415744 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:04.945453+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181813248 unmapped: 32415744 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:05.945628+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181813248 unmapped: 32415744 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:06.945789+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181846016 unmapped: 32382976 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2393212 data_alloc: 285212672 data_used: 3903488
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 278 heartbeat osd_stat(store_statfs(0x1b1dbc000/0x0/0x1bfc00000, data 0x3ab7bf5/0x3c92000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:07.946039+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181846016 unmapped: 32382976 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:08.946212+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181846016 unmapped: 32382976 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 278 heartbeat osd_stat(store_statfs(0x1b1dbc000/0x0/0x1bfc00000, data 0x3ab7bf5/0x3c92000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:09.946349+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181846016 unmapped: 32382976 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:10.946538+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 278 heartbeat osd_stat(store_statfs(0x1b1dbc000/0x0/0x1bfc00000, data 0x3ab7bf5/0x3c92000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181846016 unmapped: 32382976 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:11.946679+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181846016 unmapped: 32382976 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2393212 data_alloc: 285212672 data_used: 3903488
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:12.946947+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181846016 unmapped: 32382976 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:13.947114+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181846016 unmapped: 32382976 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 278 heartbeat osd_stat(store_statfs(0x1b1dbc000/0x0/0x1bfc00000, data 0x3ab7bf5/0x3c92000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:14.947301+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181862400 unmapped: 32366592 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:15.947505+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181862400 unmapped: 32366592 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:16.947685+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181862400 unmapped: 32366592 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2393212 data_alloc: 285212672 data_used: 3903488
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:17.947906+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181870592 unmapped: 32358400 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:18.948069+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 278 heartbeat osd_stat(store_statfs(0x1b1dbc000/0x0/0x1bfc00000, data 0x3ab7bf5/0x3c92000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181878784 unmapped: 32350208 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:19.948209+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181878784 unmapped: 32350208 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:20.948364+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181878784 unmapped: 32350208 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 278 heartbeat osd_stat(store_statfs(0x1b1dbc000/0x0/0x1bfc00000, data 0x3ab7bf5/0x3c92000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:21.948516+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181878784 unmapped: 32350208 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2393212 data_alloc: 285212672 data_used: 3903488
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:22.948963+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181886976 unmapped: 32342016 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:23.949151+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181886976 unmapped: 32342016 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 278 heartbeat osd_stat(store_statfs(0x1b1dbc000/0x0/0x1bfc00000, data 0x3ab7bf5/0x3c92000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 278 heartbeat osd_stat(store_statfs(0x1b1dbc000/0x0/0x1bfc00000, data 0x3ab7bf5/0x3c92000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:24.949302+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181895168 unmapped: 32333824 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:25.949503+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181895168 unmapped: 32333824 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:26.949661+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181895168 unmapped: 32333824 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2393212 data_alloc: 285212672 data_used: 3903488
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 278 heartbeat osd_stat(store_statfs(0x1b1dbc000/0x0/0x1bfc00000, data 0x3ab7bf5/0x3c92000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:27.949854+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181895168 unmapped: 32333824 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:28.950049+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181895168 unmapped: 32333824 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:29.950219+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181895168 unmapped: 32333824 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:30.950377+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181911552 unmapped: 32317440 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 278 heartbeat osd_stat(store_statfs(0x1b1dbc000/0x0/0x1bfc00000, data 0x3ab7bf5/0x3c92000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:31.950566+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181911552 unmapped: 32317440 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2393212 data_alloc: 285212672 data_used: 3903488
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:32.950747+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181919744 unmapped: 32309248 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:33.950945+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181919744 unmapped: 32309248 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:34.951087+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181919744 unmapped: 32309248 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:35.951281+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181919744 unmapped: 32309248 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:36.951404+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181919744 unmapped: 32309248 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2393212 data_alloc: 285212672 data_used: 3903488
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 278 heartbeat osd_stat(store_statfs(0x1b1dbc000/0x0/0x1bfc00000, data 0x3ab7bf5/0x3c92000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:37.951577+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181919744 unmapped: 32309248 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:38.951775+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181927936 unmapped: 32301056 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 278 heartbeat osd_stat(store_statfs(0x1b1dbc000/0x0/0x1bfc00000, data 0x3ab7bf5/0x3c92000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:39.951919+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181927936 unmapped: 32301056 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:40.952069+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181927936 unmapped: 32301056 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:41.952219+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181944320 unmapped: 32284672 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2393212 data_alloc: 285212672 data_used: 3903488
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:42.952422+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 278 heartbeat osd_stat(store_statfs(0x1b1dbc000/0x0/0x1bfc00000, data 0x3ab7bf5/0x3c92000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181952512 unmapped: 32276480 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:43.952618+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181952512 unmapped: 32276480 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:44.952775+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181952512 unmapped: 32276480 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:45.953062+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181952512 unmapped: 32276480 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:46.953266+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181968896 unmapped: 32260096 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2393212 data_alloc: 285212672 data_used: 3903488
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:47.953475+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181968896 unmapped: 32260096 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 278 heartbeat osd_stat(store_statfs(0x1b1dbc000/0x0/0x1bfc00000, data 0x3ab7bf5/0x3c92000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:48.953648+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181968896 unmapped: 32260096 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:49.953800+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181968896 unmapped: 32260096 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:50.953973+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181968896 unmapped: 32260096 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:51.954132+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181968896 unmapped: 32260096 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2393212 data_alloc: 285212672 data_used: 3903488
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:52.954290+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181968896 unmapped: 32260096 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:53.954440+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 278 heartbeat osd_stat(store_statfs(0x1b1dbc000/0x0/0x1bfc00000, data 0x3ab7bf5/0x3c92000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181968896 unmapped: 32260096 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:54.954598+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 278 heartbeat osd_stat(store_statfs(0x1b1dbc000/0x0/0x1bfc00000, data 0x3ab7bf5/0x3c92000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181977088 unmapped: 32251904 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:55.954815+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181977088 unmapped: 32251904 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:56.955028+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181977088 unmapped: 32251904 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2393212 data_alloc: 285212672 data_used: 3903488
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:57.955218+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 278 heartbeat osd_stat(store_statfs(0x1b1dbc000/0x0/0x1bfc00000, data 0x3ab7bf5/0x3c92000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181985280 unmapped: 32243712 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:58.955407+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181985280 unmapped: 32243712 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:05:59.955575+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181985280 unmapped: 32243712 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _send_mon_message to mon.np0005625203 at v2:172.18.0.104:3300/0
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:06:00.955762+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181985280 unmapped: 32243712 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 278 heartbeat osd_stat(store_statfs(0x1b1dbc000/0x0/0x1bfc00000, data 0x3ab7bf5/0x3c92000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:06:01.955942+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181985280 unmapped: 32243712 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2393212 data_alloc: 285212672 data_used: 3903488
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:06:02.956114+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181993472 unmapped: 32235520 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 278 heartbeat osd_stat(store_statfs(0x1b1dbc000/0x0/0x1bfc00000, data 0x3ab7bf5/0x3c92000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:06:03.956303+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181993472 unmapped: 32235520 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:06:04.956486+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181993472 unmapped: 32235520 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:06:05.956697+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181993472 unmapped: 32235520 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:06:06.956850+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181993472 unmapped: 32235520 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2393212 data_alloc: 285212672 data_used: 3903488
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:06:07.956995+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 278 heartbeat osd_stat(store_statfs(0x1b1dbc000/0x0/0x1bfc00000, data 0x3ab7bf5/0x3c92000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181993472 unmapped: 32235520 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:06:08.957135+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181993472 unmapped: 32235520 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:06:09.957294+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181993472 unmapped: 32235520 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:06:10.957462+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 182001664 unmapped: 32227328 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:06:11.957784+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 182001664 unmapped: 32227328 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2393212 data_alloc: 285212672 data_used: 3903488
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:06:12.957900+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 182001664 unmapped: 32227328 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 278 heartbeat osd_stat(store_statfs(0x1b1dbc000/0x0/0x1bfc00000, data 0x3ab7bf5/0x3c92000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:06:13.960756+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 182009856 unmapped: 32219136 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:06:14.960867+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 182009856 unmapped: 32219136 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:06:15.961003+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 182009856 unmapped: 32219136 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:06:16.961127+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 182009856 unmapped: 32219136 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 278 heartbeat osd_stat(store_statfs(0x1b1dbc000/0x0/0x1bfc00000, data 0x3ab7bf5/0x3c92000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2393212 data_alloc: 285212672 data_used: 3903488
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:06:17.961295+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 182018048 unmapped: 32210944 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:06:18.961423+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 182026240 unmapped: 32202752 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:06:19.961589+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 182026240 unmapped: 32202752 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 278 heartbeat osd_stat(store_statfs(0x1b1dbc000/0x0/0x1bfc00000, data 0x3ab7bf5/0x3c92000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:06:20.961729+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 182026240 unmapped: 32202752 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:06:21.961868+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: do_command 'config diff' '{prefix=config diff}'
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: do_command 'config show' '{prefix=config show}'
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181747712 unmapped: 32481280 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: do_command 'counter dump' '{prefix=counter dump}'
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: do_command 'counter schema' '{prefix=counter schema}'
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 278 heartbeat osd_stat(store_statfs(0x1b1dbc000/0x0/0x1bfc00000, data 0x3ab7bf5/0x3c92000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.235294
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0384615
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: bluestore.MempoolThread(0x55f8e7acbb60) _resize_shards cache_size: 4047415775 kv_alloc: 1744830464 kv_used: 2144 kv_onode_alloc: 285212672 kv_onode_used: 464 meta_alloc: 1677721600 meta_used: 2393212 data_alloc: 285212672 data_used: 3903488
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:06:22.962008+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181624832 unmapped: 32604160 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: tick
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_tickets
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-02-20T10:06:23.962133+0000)
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: prioritycache tune_memory target: 5709084876 mapped: 181821440 unmapped: 32407552 heap: 214228992 old mem: 4047415775 new mem: 4047415775
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: osd.1 278 heartbeat osd_stat(store_statfs(0x1b1dbc000/0x0/0x1bfc00000, data 0x3ab7bf5/0x3c92000, compress 0x0/0x0/0x0, omap 0x649, meta 0xa1af9b7), peers [0,2,3,4,5] op hist [])
Feb 20 10:06:54 np0005625203.localdomain ceph-osd[31970]: do_command 'log dump' '{prefix=log dump}'
Feb 20 10:06:54 np0005625203.localdomain systemd[1]: tmp-crun.NQ1Oa1.mount: Deactivated successfully.
Feb 20 10:06:54 np0005625203.localdomain podman[332062]: 2026-02-20 10:06:54.781971072 +0000 UTC m=+0.095816238 container health_status dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, health_status=healthy, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, vcs-type=git, release=1770267347, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., cpe=cpe:/a:redhat:enterprise_linux:9::appstream, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, build-date=2026-02-05T04:57:10Z, version=9.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, org.opencontainers.image.created=2026-02-05T04:57:10Z, name=ubi9/ubi-minimal)
Feb 20 10:06:54 np0005625203.localdomain podman[332062]: 2026-02-20 10:06:54.821285169 +0000 UTC m=+0.135130325 container exec_died dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d (image=quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c, name=openstack_network_exporter, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1770267347, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=21849199b7179dc3074812b8e24698ec609d6a5c, io.openshift.tags=minimal rhel9, architecture=x86_64, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, vcs-type=git, org.opencontainers.image.created=2026-02-05T04:57:10Z, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:156f95f51d0a91422548c574e96ee37f07a200c948e173b22523982f24f1e79c', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, cpe=cpe:/a:redhat:enterprise_linux:9::appstream, version=9.7, build-date=2026-02-05T04:57:10Z, container_name=openstack_network_exporter, config_id=openstack_network_exporter, org.opencontainers.image.revision=21849199b7179dc3074812b8e24698ec609d6a5c, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9/ubi-minimal, vendor=Red Hat, Inc.)
Feb 20 10:06:54 np0005625203.localdomain systemd[1]: dedb00cacfc58cb989114c84473a1f5c8309780b646812d33c25af11400bdd0d.service: Deactivated successfully.
Feb 20 10:06:54 np0005625203.localdomain podman[332061]: 2026-02-20 10:06:54.835898741 +0000 UTC m=+0.148728395 container health_status aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, health_status=healthy, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260127, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, io.buildah.version=1.41.3)
Feb 20 10:06:54 np0005625203.localdomain podman[332061]: 2026-02-20 10:06:54.853098543 +0000 UTC m=+0.165928217 container exec_died aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified, name=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c-823a4f209b8040ee83325e09365a0da410c9a317c7ec0bbf15986ee7232fb683'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, config_id=ceilometer_agent_compute, org.label-schema.build-date=20260127, org.label-schema.name=CentOS Stream 9 Base Image)
Feb 20 10:06:54 np0005625203.localdomain systemd[1]: aeca155ea97c0eb9dec942dd0448085b99c9236a20538d76d2c429314ccf1812.service: Deactivated successfully.
Feb 20 10:06:54 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Feb 20 10:06:54 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/3537827992' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Feb 20 10:06:54 np0005625203.localdomain podman[332063]: 2026-02-20 10:06:54.88434204 +0000 UTC m=+0.193587434 container health_status efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Feb 20 10:06:54 np0005625203.localdomain ceph-mon[296066]: pgmap v745: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:06:54 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/14354088' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Feb 20 10:06:54 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/3660405424' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Feb 20 10:06:54 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/3733402714' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Feb 20 10:06:54 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/700815640' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Feb 20 10:06:54 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/4189794073' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Feb 20 10:06:54 np0005625203.localdomain ceph-mon[296066]: from='client.98918 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 20 10:06:54 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/1041369153' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Feb 20 10:06:54 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/3147845967' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Feb 20 10:06:54 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/1942123242' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Feb 20 10:06:54 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/494065428' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Feb 20 10:06:54 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/3962166500' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Feb 20 10:06:54 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/2467835095' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Feb 20 10:06:54 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/638926905' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Feb 20 10:06:54 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/3537827992' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Feb 20 10:06:54 np0005625203.localdomain podman[332063]: 2026-02-20 10:06:54.977360389 +0000 UTC m=+0.286605783 container exec_died efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '6077946a4180028b328f149742d76066d281f4235c0c64d898675f3233fbaa4c'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260127, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b85d0548925081ae8c6bdd697658cec4, org.label-schema.license=GPLv2)
Feb 20 10:06:55 np0005625203.localdomain systemd[1]: efe12132548855625c7cf043478ba54573b3ffcce86747551874f3be492dcd41.service: Deactivated successfully.
Feb 20 10:06:55 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:06:55 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Feb 20 10:06:55 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1895372588' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Feb 20 10:06:55 np0005625203.localdomain rsyslogd[758]: imjournal from <localhost:ceph-osd>: begin to drop messages due to rate-limiting
Feb 20 10:06:55 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Feb 20 10:06:55 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/20682300' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Feb 20 10:06:55 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Feb 20 10:06:55 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/4203312444' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Feb 20 10:06:55 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Feb 20 10:06:55 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/110035165' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Feb 20 10:06:55 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/3699056303' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Feb 20 10:06:55 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/1631189302' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Feb 20 10:06:55 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/4128328217' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Feb 20 10:06:55 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/4095727606' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Feb 20 10:06:55 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/1895372588' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Feb 20 10:06:55 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/2292524633' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Feb 20 10:06:55 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/2854468367' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Feb 20 10:06:55 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/3786798133' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Feb 20 10:06:55 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/20682300' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Feb 20 10:06:55 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/4203312444' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Feb 20 10:06:55 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/4213092245' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Feb 20 10:06:55 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/1926924911' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Feb 20 10:06:55 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/3748941254' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Feb 20 10:06:55 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/110035165' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Feb 20 10:06:55 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/401735792' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Feb 20 10:06:55 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "osd metadata"} v 0)
Feb 20 10:06:55 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2340209825' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Feb 20 10:06:56 np0005625203.localdomain sudo[332299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Feb 20 10:06:56 np0005625203.localdomain sudo[332299]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 10:06:56 np0005625203.localdomain sudo[332299]: pam_unix(sudo:session): session closed for user root
Feb 20 10:06:56 np0005625203.localdomain sudo[332327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/a8557ee9-b55d-5519-942c-cf8f6172f1d8/cephadm.d4329ff0b58389a1c874427e6fa8cdadc2545079117c7744dd9edf4a3e4fc83f --timeout 895 gather-facts
Feb 20 10:06:56 np0005625203.localdomain sudo[332327]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 10:06:56 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Feb 20 10:06:56 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2751144286' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Feb 20 10:06:56 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "osd utilization"} v 0)
Feb 20 10:06:56 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/980354851' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Feb 20 10:06:56 np0005625203.localdomain sudo[332327]: pam_unix(sudo:session): session closed for user root
Feb 20 10:06:56 np0005625203.localdomain ceph-mon[296066]: pgmap v746: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:06:56 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/2340209825' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Feb 20 10:06:56 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/3944110738' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Feb 20 10:06:56 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/3797019353' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Feb 20 10:06:56 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/504695397' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Feb 20 10:06:56 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/2751144286' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Feb 20 10:06:56 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/57478616' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Feb 20 10:06:56 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/980354851' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Feb 20 10:06:56 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/3624829222' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Feb 20 10:06:56 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/781692820' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Feb 20 10:06:56 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/2276765394' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Feb 20 10:06:56 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/2137022502' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Feb 20 10:06:56 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Feb 20 10:06:56 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Feb 20 10:06:56 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 10:06:56 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Feb 20 10:06:56 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/855344926' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Feb 20 10:06:56 np0005625203.localdomain sudo[332488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Feb 20 10:06:56 np0005625203.localdomain sudo[332488]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1002)
Feb 20 10:06:56 np0005625203.localdomain sudo[332488]: pam_unix(sudo:session): session closed for user root
Feb 20 10:06:57 np0005625203.localdomain systemd[1]: Starting Hostname Service...
Feb 20 10:06:57 np0005625203.localdomain systemd[1]: Started Hostname Service.
Feb 20 10:06:57 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:06:57.818 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:06:57 np0005625203.localdomain ceph-mon[296066]: from='client.59371 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 20 10:06:57 np0005625203.localdomain ceph-mon[296066]: from='client.50148 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 20 10:06:57 np0005625203.localdomain ceph-mon[296066]: from='client.59377 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 10:06:57 np0005625203.localdomain ceph-mon[296066]: from='client.50154 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 10:06:57 np0005625203.localdomain ceph-mon[296066]: from='client.59383 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 20 10:06:57 np0005625203.localdomain ceph-mon[296066]: from='client.50163 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 20 10:06:57 np0005625203.localdomain ceph-mon[296066]: from='client.59389 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 10:06:57 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/1775975022' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Feb 20 10:06:57 np0005625203.localdomain ceph-mon[296066]: from='client.50169 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 10:06:57 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/2991465511' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Feb 20 10:06:57 np0005625203.localdomain ceph-mon[296066]: from='client.59395 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 20 10:06:57 np0005625203.localdomain ceph-mon[296066]: from='client.50175 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 20 10:06:58 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "quorum_status"} v 0)
Feb 20 10:06:58 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2425338279' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Feb 20 10:06:58 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "versions"} v 0)
Feb 20 10:06:58 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/732875958' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Feb 20 10:06:58 np0005625203.localdomain ceph-mon[296066]: from='client.99044 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 20 10:06:58 np0005625203.localdomain ceph-mon[296066]: from='client.99041 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 10:06:58 np0005625203.localdomain ceph-mon[296066]: pgmap v747: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:06:58 np0005625203.localdomain ceph-mon[296066]: from='client.59407 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 20 10:06:58 np0005625203.localdomain ceph-mon[296066]: from='client.50187 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 20 10:06:58 np0005625203.localdomain ceph-mon[296066]: from='client.99050 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 20 10:06:58 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/2945838039' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Feb 20 10:06:58 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/2425338279' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Feb 20 10:06:58 np0005625203.localdomain ceph-mon[296066]: from='client.99065 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 20 10:06:58 np0005625203.localdomain ceph-mon[296066]: from='client.50202 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 20 10:06:58 np0005625203.localdomain ceph-mon[296066]: from='client.99059 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 10:06:58 np0005625203.localdomain ceph-mon[296066]: from='client.99071 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 20 10:06:58 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/732875958' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Feb 20 10:06:58 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/799342705' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Feb 20 10:06:58 np0005625203.localdomain podman[240359]: time="2026-02-20T10:06:58Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Feb 20 10:06:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:10:06:58 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 155894 "" "Go-http-client/1.1"
Feb 20 10:06:59 np0005625203.localdomain podman[240359]: @ - - [20/Feb/2026:10:06:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 18325 "" "Go-http-client/1.1"
Feb 20 10:06:59 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Feb 20 10:06:59 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/334263105' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Feb 20 10:06:59 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Feb 20 10:06:59 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/2523254873' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Feb 20 10:06:59 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Feb 20 10:06:59 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Feb 20 10:06:59 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Feb 20 10:06:59 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Feb 20 10:07:00 np0005625203.localdomain ceph-mon[296066]: from='client.50214 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 20 10:07:00 np0005625203.localdomain ceph-mon[296066]: from='client.59428 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 20 10:07:00 np0005625203.localdomain ceph-mon[296066]: from='client.99083 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 20 10:07:00 np0005625203.localdomain ceph-mon[296066]: from='mgr.44375 172.18.0.106:0/1082098019' entity='mgr.np0005625202.arwxwo' 
Feb 20 10:07:00 np0005625203.localdomain ceph-mon[296066]: from='client.50220 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 20 10:07:00 np0005625203.localdomain ceph-mon[296066]: from='client.59437 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 20 10:07:00 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/1756905580' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Feb 20 10:07:00 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/334263105' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Feb 20 10:07:00 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/3518226692' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Feb 20 10:07:00 np0005625203.localdomain ceph-mon[296066]: from='client.99101 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 20 10:07:00 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/2523254873' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Feb 20 10:07:00 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/3900495989' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Feb 20 10:07:00 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/254070468' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Feb 20 10:07:00 np0005625203.localdomain ceph-mon[296066]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Feb 20 10:07:00 np0005625203.localdomain ceph-mon[296066]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Feb 20 10:07:00 np0005625203.localdomain ceph-mon[296066]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Feb 20 10:07:00 np0005625203.localdomain ceph-mon[296066]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Feb 20 10:07:00 np0005625203.localdomain ceph-mon[296066]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Feb 20 10:07:00 np0005625203.localdomain ceph-mon[296066]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Feb 20 10:07:00 np0005625203.localdomain ceph-mon[296066]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Feb 20 10:07:00 np0005625203.localdomain ceph-mon[296066]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Feb 20 10:07:00 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/601954486' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Feb 20 10:07:00 np0005625203.localdomain ceph-mon[296066]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Feb 20 10:07:00 np0005625203.localdomain ceph-mon[296066]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Feb 20 10:07:00 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Feb 20 10:07:00 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:07:00.357 279640 DEBUG oslo_service.periodic_task [None req-9a521a38-299f-4079-8b66-9d81ac795fb4 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Feb 20 10:07:00 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "config dump"} v 0)
Feb 20 10:07:00 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1396299726' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Feb 20 10:07:00 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Feb 20 10:07:00 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Feb 20 10:07:01 np0005625203.localdomain ceph-mon[296066]: from='client.99113 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 20 10:07:01 np0005625203.localdomain ceph-mon[296066]: pgmap v748: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:07:01 np0005625203.localdomain ceph-mon[296066]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Feb 20 10:07:01 np0005625203.localdomain ceph-mon[296066]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Feb 20 10:07:01 np0005625203.localdomain ceph-mon[296066]: from='client.99125 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Feb 20 10:07:01 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/2523455324' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Feb 20 10:07:01 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/1396299726' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Feb 20 10:07:01 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/870806350' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Feb 20 10:07:01 np0005625203.localdomain ceph-mon[296066]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Feb 20 10:07:01 np0005625203.localdomain ceph-mon[296066]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Feb 20 10:07:01 np0005625203.localdomain ceph-mon[296066]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Feb 20 10:07:01 np0005625203.localdomain ceph-mon[296066]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Feb 20 10:07:01 np0005625203.localdomain ceph-mon[296066]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Feb 20 10:07:01 np0005625203.localdomain ceph-mon[296066]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Feb 20 10:07:01 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Feb 20 10:07:01 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/931757883' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Feb 20 10:07:01 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "df"} v 0)
Feb 20 10:07:01 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/4234861803' entity='client.admin' cmd={"prefix": "df"} : dispatch
Feb 20 10:07:02 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "fs dump"} v 0)
Feb 20 10:07:02 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/1165278192' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Feb 20 10:07:02 np0005625203.localdomain ceph-mon[296066]: from='client.59494 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 10:07:02 np0005625203.localdomain ceph-mon[296066]: from='client.50283 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 10:07:02 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/4258071969' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Feb 20 10:07:02 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/931757883' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Feb 20 10:07:02 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/3176478051' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Feb 20 10:07:02 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/4234861803' entity='client.admin' cmd={"prefix": "df"} : dispatch
Feb 20 10:07:02 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/1539358497' entity='client.admin' cmd={"prefix": "df"} : dispatch
Feb 20 10:07:02 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/550215609' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Feb 20 10:07:02 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/2330083557' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Feb 20 10:07:02 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.32:0/2330083557' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Feb 20 10:07:02 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/1165278192' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Feb 20 10:07:02 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "fs dump"} v 0)
Feb 20 10:07:02 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.106:0/3168464906' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Feb 20 10:07:02 np0005625203.localdomain kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Feb 20 10:07:02 np0005625203.localdomain kernel: cfg80211: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Feb 20 10:07:02 np0005625203.localdomain kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Feb 20 10:07:02 np0005625203.localdomain kernel: cfg80211: failed to load regulatory.db
Feb 20 10:07:02 np0005625203.localdomain ceph-mon[296066]: mon.np0005625203@1(peon) e17 handle_command mon_command({"prefix": "fs ls"} v 0)
Feb 20 10:07:02 np0005625203.localdomain ceph-mon[296066]: log_channel(audit) log [DBG] : from='client.? 172.18.0.107:0/40887704' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Feb 20 10:07:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:07:02.820 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Feb 20 10:07:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:07:02.824 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:07:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:07:02.824 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Feb 20 10:07:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:07:02.824 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 10:07:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:07:02.825 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Feb 20 10:07:02 np0005625203.localdomain nova_compute[279636]: 2026-02-20 10:07:02.826 279640 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Feb 20 10:07:03 np0005625203.localdomain ceph-mon[296066]: from='client.99182 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Feb 20 10:07:03 np0005625203.localdomain ceph-mon[296066]: pgmap v749: 177 pgs: 177 active+clean; 221 MiB data, 1.3 GiB used, 41 GiB / 42 GiB avail
Feb 20 10:07:03 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/3168464906' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Feb 20 10:07:03 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/3286986597' entity='client.admin' cmd={"prefix": "df"} : dispatch
Feb 20 10:07:03 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.107:0/40887704' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Feb 20 10:07:03 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.106:0/3478369784' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Feb 20 10:07:03 np0005625203.localdomain ceph-mon[296066]: from='client.? 172.18.0.108:0/1711977834' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
